1 Introduction

We start by considering the simplest non-reversible interacting stochastic particle system, namely the totally asymmetric simple exclusion process (TASEP) on \(\mathbb {Z}\). Despite its simplicity, this model is full of interesting features. In TASEP, each particle independently tries to jump to its right neighboring site at a constant rate, and a jump occurs only if the exclusion constraint is satisfied: no site may be occupied by more than one particle. Under hydrodynamic scaling, the particle density solves the deterministic Burgers equation (see e.g. [1, 32]). This model belongs to the Kardar–Parisi–Zhang (KPZ) universality class [30] (see [15] for a recent review).

We are interested in the fluctuations around the macroscopic behavior given by the solution of the Burgers equation, and we focus on the fluctuations of particle positions. Depending on the initial condition, the deterministic solution may have regions of constant and of decreasing density, as well as discontinuities, also referred to as shocks. The fluctuations of the shock location have attracted a lot of attention.

For TASEP, product Bernoulli measures are the only translation invariant stationary measures [31]. The first works considered initial configurations with a shock at the origin, given by Bernoulli measures with density \(\rho \) (resp. \(\lambda \)) to its left (resp. right), with \(\rho <\lambda \). The shock location is often identified with the position of a second class particle. In this case, the shock fluctuations are Gaussian in the scale \(t^{1/2}\) [19, 20, 26]. Microscopic information on the shock is available too [8, 17, 18, 21]. The origin of the \(t^{1/2}\) fluctuations lies in the randomness of the initial conditions, since fluctuations coming from the dynamics grow only as \(t^{1/3}\). If the initial randomness is present only on one side of the shock, a similar picture still holds. For example, in [14] one considers an initial condition that is Bernoulli-\(\rho \) to the right of the origin and periodic with density \(1/2\) to its left. When \(\rho >1/2\) there is a shock with Gaussian fluctuations in the scale \(t^{1/2}\). In that work, the fluctuations of the shock position are derived from those of the particle positions. The result fits with the heuristic argument in [35] (Section 5). The Gaussian form of the distribution function is not robust (see for instance Remark 17 in [14]).

This paper is the first in which the fluctuation laws around a shock occurring without initial randomness are analyzed. In that case, one heuristically expects that the shock fluctuations, as well as the tagged-particle fluctuations, live only on a scale of order \(t^{1/3}\); see [7] for a physical argument. We find that the distribution function of a particle position (and also of tagged particles) is a product of two other distribution functions. The reason for the product form is that (1) at the shock two characteristics merge and (2) along the characteristics decorrelation is slow [16, 22].

More precisely, if we look at the history of a particle close to the shock at time \(t\), it has non-trivial correlations with a region of width \(\mathcal {O}(t^{2/3})\) around the characteristics, see Fig. 1. At the shock the two characteristics come together at a positive angle, so that at time \(t-t^{\nu }\), \(2/3<\nu <1\), their distance is larger than \(\mathcal {O}(t^{2/3})\). This implies that the fluctuations built up along the two characteristics before time \(t-t^\nu \) are (asymptotically) independent. If instead we stay on one characteristic, the dynamical fluctuations created between time \(t-t^\nu \) and time \(t\) are only \(o(t^{1/3})\), hence irrelevant with respect to the total fluctuations present at time \(t-t^\nu \), which are of order \(t^{1/3}\) (this is the slow-decorrelation phenomenon [16, 22]).

Fig. 1

Illustration of the characteristics for TASEP. \(E\) is the shock location, where two characteristics merge (the thick lines). The gray region is of order \(t^\nu \) for some \(2/3<\nu <1\). Due to the slow decorrelation along characteristics, at large time \(t\) the fluctuations at \(E\) originate from those at \(E_\ell \) and \(E_r\)

To generate a shock between two regions of constant density, we consider the initial condition where \(2\mathbb {Z}\) is fully occupied and where the jump rate of particles starting to the left (resp. right) of the origin equals \(1\) (resp. \(\alpha <1\)).

In Corollary 2.5 we determine the distribution function of the fluctuations of TASEP particles in the \(t^{1/3}\) scale. It is a product of two GOE Tracy-Widom distributions, \(F_1\), and the transition from a single \(F_1\) to the second \(F_1\) distribution occurs over a distance of order \(t^{1/3}\). In Corollary 2.6 (resp. 2.7) we study another shock situation, where the distribution function goes from \(F_2\) to \(F_1\) (resp. from \(F_2\) to \(F_2\)).

For completeness, let us briefly discuss what happens when there is no shock. From the KPZ theory, the fluctuations of particle positions generated by the dynamics grow as \(t^{1/3}\), where \(t\) is the time. When the density is constant and the initial condition is non-random, the fluctuations are governed by \(F_1\), as shown in a few cases in [6] (see [11, 12, 34] for joint distributions). However, when the density is decreasing, the fluctuations are governed by \(F_2\), the GUE Tracy-Widom distribution, as shown in [27] (see also [9, 33] for random initial conditions and [10, 11, 29] for joint distributions). The transition process between these two cases has been determined in [13]. The stationary initial condition [31] also has constant density but with different fluctuation laws, see [3, 5, 25]. For further details see e.g. the review [23].

It is well known that TASEP is linked to last passage percolation (LPP). As a consequence, slow decorrelation holds also for generic LPP models [16]. For this reason our first main result, Theorem 2.1, is a generic statement proven under some assumptions that need to be verified model by model. This theorem states that the distribution function of a generic last passage time is the product of two distribution functions corresponding to two simpler last passage problems. The verification of the assumptions can be quite involved. Here we prove that they hold for three different LPP models and obtain Corollaries 2.2–2.4.

Using the connection to TASEP we then restate the results in terms of TASEP particles (see Corollaries 2.5–2.7). Corollary 2.5 is the result which motivated our work.

1.1 Outline

In Sect. 2 we define precisely the models and state our main results. In Sect. 3 we prove the generic theorem. In order to apply it to specific models we have to verify the assumptions, which is the content of Sect. 4. Finally, in Sect. 5 we derive a kernel used in Sect. 4.

2 Models and main results

2.1 Last passage percolation—general statement

We consider last passage percolation (LPP) models on \(\mathbb {Z}^2\) with independent random variables \(\{\omega _{i,j},i,j\in \mathbb {Z}\}\). An up-right path \(\pi =(\pi (0),\pi (1),\ldots ,\pi (n))\) on \(\mathbb {Z}^2\) from a point \(A\) to a point \(E\) is a sequence of points in \(\mathbb {Z}^2\) with \(\pi (k+1)-\pi (k)\in \{(0,1),(1,0)\}\), with \(\pi (0)=A\) and \(\pi (n)=E\), where \(n\) is called the length \(\ell (\pi )\) of \(\pi \). Now, given two sets of points \(S_A\) and \(S_E\), one defines the last passage time \(L_{S_A\rightarrow S_E}\) as

$$\begin{aligned} L_{S_A\rightarrow S_E}=\max _{\begin{array}{c}\pi :A\rightarrow E\\ A\in S_A,E\in S_E\end{array}} \sum _{1\le k\le \ell (\pi )} \omega _{\pi (k)}. \end{aligned}$$
(2.1)

The purpose of this paper is to determine the law of last passage times for various models. Further, we denote by \(\pi ^\mathrm{max}_{S_A\rightarrow S_E}\) any maximizer of the last passage time \(L_{S_A\rightarrow S_E}\). For continuous random variables, the maximizer is a.s. unique.
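As a concrete illustration (our addition, not part of the original analysis), the last passage time (2.1) with the starting set reduced to a single point can be computed by the standard dynamic programming recursion \(L(i,j)=\max \{L(i-1,j),L(i,j-1)\}+\omega _{i,j}\). The sketch below also exhibits the classical law of large numbers \(L_{(1,1)\rightarrow (N,N)}/N\rightarrow 4\) for i.i.d. \(\exp (1)\) weights; the grid size \(N=256\) is an arbitrary choice.

```python
import random

def last_passage_time(omega):
    """Last passage time L_{(1,1)->(m,n)}: maximal sum of the weights
    omega[i][j] over up-right paths, via the dynamic programming
    recursion L(i,j) = max(L(i-1,j), L(i,j-1)) + omega(i,j)."""
    m, n = len(omega), len(omega[0])
    L = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            prev = max(L[i - 1][j] if i > 0 else 0.0,
                       L[i][j - 1] if j > 0 else 0.0)
            L[i][j] = prev + omega[i][j]
    return L[m - 1][n - 1]

# with i.i.d. exp(1) weights, L_{(1,1)->(N,N)}/N is close to 4,
# up to a negative O(N^{-2/3}) Tracy-Widom correction
rng = random.Random(0)
N = 256
L = last_passage_time([[rng.expovariate(1.0) for _ in range(N)]
                       for _ in range(N)])
assert abs(L / N - 4.0) < 0.5
```

The quadratic-time recursion is exact; only the limiting constant \(4\) is probabilistic, which is why the final check uses a generous tolerance.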

In this paper we consider situations where the end set is a single point and the starting set is the union of two sets, namely

$$\begin{aligned} S_A=\mathcal {L}^+\cup \mathcal {L}^-,\quad S_E=E=(\lfloor \eta t\rfloor ,\lfloor t\rfloor ), \end{aligned}$$
(2.2)

where \(\mathcal {L}^+ \subseteq \{(v,n)\in \mathbb {Z}^{2}:v\le 0, n \ge 0\}\), \(\mathcal {L}^- \subseteq \{(v,n)\in \mathbb {Z}^{2}:n\le 0, v\ge 0\}\). Note that, by setting some of the \(\omega _{i,j}\) to zero, it is always possible to choose \(\mathcal {L}^+=(\mathbb {Z}_-,0)\) and \(\mathcal {L}^-=(0,\mathbb {Z}_-)\).

With this choice it follows from the definition of the last passage time (2.1) that

$$\begin{aligned} L=L_{S_A\rightarrow S_E} = \max \left\{ L_{\mathcal {L}^+\rightarrow (\eta t, t)},L_{\mathcal {L}^-\rightarrow (\eta t, t)}\right\} . \end{aligned}$$
(2.3)

The two random variables \(L_1=L_{\mathcal {L}^+\rightarrow (\eta t, t)}\) and \(L_2=L_{\mathcal {L}^-\rightarrow (\eta t, t)}\) are not independent. However, under some assumptions they are asymptotically independent as \(t\rightarrow \infty \), in the sense that the distribution function of the properly rescaled last passage time \(L=\max \{L_1,L_2\}\) is asymptotically the product of the distribution functions of the two rescaled random variables. This is due to the fact that the fluctuations in the region where the maximizers of the two LPP problems tend to come together live on a smaller scale than the typical fluctuations, by virtue of the slow-decorrelation phenomenon [16, 22].
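The mechanism behind this product form is elementary once independence is available: for independent random variables, \(\mathbb {P}(\max \{X,Y\}\le s)=\mathbb {P}(X\le s)\,\mathbb {P}(Y\le s)\). A quick Monte Carlo illustration (our addition, with Gaussians as stand-ins for the two rescaled last passage times):

```python
import random

random.seed(1)
N = 100_000
s = 0.3
# independent samples playing the role of the two rescaled LPP times
X = [random.gauss(0.0, 1.0) for _ in range(N)]
Y = [random.gauss(0.5, 2.0) for _ in range(N)]

# empirical distribution functions
FX = sum(x <= s for x in X) / N
FY = sum(y <= s for y in Y) / N
Fmax = sum(max(x, y) <= s for x, y in zip(X, Y)) / N

# P(max{X,Y} <= s) = P(X <= s) * P(Y <= s) for independent X, Y
assert abs(Fmax - FX * FY) < 0.01
```

The whole difficulty in the theorem is of course to justify the asymptotic independence; the product formula itself is immediate.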

Typically one has in mind a law of large numbers \(L_i/ t \rightarrow \mu _i\) as \(t\rightarrow \infty \) and a fluctuation result \(L_i-\mu _i t=\mathcal {O}( t^{\chi _i})\) with \(\chi _i=1/3\) or \(\chi _i=1/2\). If \(L_1\) and \(L_2\) have different leading orders \(\mu _1,\mu _2\), the result is quite easy, since only the larger of the two random variables is relevant in the \( t\rightarrow \infty \) limit. This situation can be treated directly with coupling arguments as in [9]. If \(\mu _1=\mu _2=\mu \) but, for instance, \(\chi _1<\chi _2\), then the natural scaling is \((L-\mu t)/ t^{\chi _2}\), under which \((L_1-\mu t)/ t^{\chi _2}\) degenerates to the trivial random variable \(0\) and acts as a cut-off. This situation occurred for instance in [14] (Proposition 1).

In this paper we consider the case where \(L_1\) and \(L_2\) have the same leading order \(\mu \) and both fluctuations live on the scale \( t^{1/3}\). This is our first assumption.

Assumption 1

Assume that there exists some \(\mu \) such that

$$\begin{aligned} \lim _{ t\rightarrow \infty } \mathbb {P}\left( \frac{L_{\mathcal {L}^+\rightarrow (\eta t, t)}-\mu t}{ t^{1/3}}\le s\right) = G_1(s), \end{aligned}$$
(2.4)

and

$$\begin{aligned} \lim _{ t\rightarrow \infty } \mathbb {P}\left( \frac{L_{\mathcal {L}^-\rightarrow (\eta t, t)}-\mu t}{ t^{1/3}}\le s\right) = G_2(s), \end{aligned}$$
(2.5)

where \(G_1\) and \(G_2\) are some distribution functions.

Secondly, we assume that there is a point \(E^+\) at a distance of order \( t^\nu \), for some \(1/3<\nu <1\), which lies on the characteristic from \(\mathcal {L}^+\) to \(E\), and that slow decorrelation holds as in Theorem 2.1 of [16].

Assumption 2

Assume that there is a point \(E^+=(\eta t-\kappa t^\nu , t- t^\nu )\) such that, for some \(\mu _0\) and \(\nu \in (1/3,1)\), it holds that

$$\begin{aligned} \begin{aligned} \lim _{ t\rightarrow \infty } \mathbb {P}\left( \frac{L_{E^+\rightarrow (\eta t, t)}-\mu _0 t^\nu }{ t^{\nu /3}}\le s\right)&= G_0(s),\\ \lim _{ t\rightarrow \infty } \mathbb {P}\left( \frac{L_{\mathcal {L}^+\rightarrow E^+}-\mu t+\mu _0 t^\nu }{ t^{1/3}}\le s\right)&= G_1(s), \end{aligned} \end{aligned}$$
(2.6)

where \(G_0\) and \(G_1\) are distribution functions.

Then, provided (2.4) and (2.6) hold, Theorem 2.1 of [16] implies that for any \(M>0\),

$$\begin{aligned} \lim _{ t\rightarrow \infty }\mathbb {P}\left( |L_{\mathcal {L}^+\rightarrow (\eta t, t)}-L_{\mathcal {L}^+\rightarrow E^+}-\mu _0 t^\nu |\ge M t^{1/3}\right) =0. \end{aligned}$$
(2.7)

This means that the fluctuations of \(L_{\mathcal {L}^+\rightarrow (\eta t, t)}\) are the same as the ones of \(L_{\mathcal {L}^+\rightarrow E^+}\) up to \(o( t^{1/3})\). Thus, we have to determine the maximum of \(L_{\mathcal {L}^+\rightarrow E^+}\) and \(L_{\mathcal {L}^-\rightarrow E}\). The final assumption ensures that these two random variables are asymptotically independent.

Assumption 3

Let \(\nu \) be as in Assumption 2. Assume that there exists a \(\beta \in (0,\nu )\) such that, with \(D_{\gamma }=(\lfloor \gamma \eta t \rfloor ,\lfloor \gamma t\rfloor )\) for \(\gamma \in [0,1- t^{\beta -1}]\),

$$\begin{aligned} \begin{aligned} \lim _{ t\rightarrow \infty }\mathbb {P}\bigg ({\mathop {\mathop {\bigcup }\limits _{D_{\gamma }}}\limits _{\gamma \in [0,1- t^{\beta -1}]}}\left\{ D_\gamma \in \pi ^\mathrm{max}_{L_{\mathcal {L}^+\rightarrow E^+}}\right\} \bigg )&=0,\\ \lim _{ t\rightarrow \infty }\mathbb {P}\bigg ({\mathop {\mathop {\bigcup }\limits _{D_{\gamma }}}\limits _{\gamma \in [0,1- t^{\beta -1}]}}\left\{ D_\gamma \in \pi ^\mathrm{max}_{L_{\mathcal {L}^-\rightarrow (\eta t, t)}}\right\} \bigg )&=0. \end{aligned} \end{aligned}$$
(2.8)

Under these assumptions, which will be verified in special cases, we have the first result of this paper, proven in Sect. 3.

Theorem 2.1

Under Assumptions 1–3 we have

$$\begin{aligned} \lim _{ t\rightarrow \infty } \mathbb {P}\left( \frac{\max \left\{ L_{\mathcal {L}^+\rightarrow (\eta t, t)},L_{\mathcal {L}^-\rightarrow (\eta t, t)}\right\} -\mu t}{ t^{1/3}}\le s\right) = G_1(s) G_2(s). \end{aligned}$$
(2.9)

2.2 Application to specific LPP models

Let us now take the \(\omega _{i,j}\) to be exponentially distributed random variables, which will become the waiting times of TASEP particles. Let the waiting times be given by

$$\begin{aligned} \begin{aligned} \omega _{i,j}\sim \exp (1),\quad&j\ge 1,\\ \omega _{i,j}\sim \exp (\alpha ),\quad&j\le 0, \end{aligned} \end{aligned}$$
(2.10)

for some \(\alpha >0\). We are going to consider the scaling

$$\begin{aligned} \eta =\eta _0+u t^{-2/3}. \end{aligned}$$
(2.11)

Then the following results hold true and will be proven in Sect. 4.3; see Fig. 2 for an illustration of the geometry in the following three corollaries.

Fig. 2

Illustration of the geometry considered in a Corollary 2.2, b Corollary 2.3, and c Corollary 2.4, for \(u=b=0\) and \(\alpha =1/2\). The random variables in the gray (resp. white) regions are \(\exp (\alpha )\) (resp. \(\exp (1)\)) distributed. The dashed lines represent the typical trajectories of the maximizers for the two LPP problems

Corollary 2.2

(Two point-to-line problems) Let

$$\begin{aligned} \mathcal {L}^+=\{(-v,v),v\in \mathbb {Z}_+\},\quad \mathcal {L}^-=\{(-v,v),v\in \mathbb {Z}_-\}, \end{aligned}$$
(2.12)

with \(\eta _0=\frac{\alpha }{2-\alpha }\) and \(\alpha <1\). Then, Theorem 2.1 holds with \(\mu =4/(2-\alpha )\) and

$$\begin{aligned} G_1(s)=F_1\left( \frac{s-2u}{\sigma _1}\right) ,\quad G_2(s)=F_1\left( \frac{s-2u/\alpha }{\sigma _2}\right) , \end{aligned}$$
(2.13)

where \(F_1\) is the GOE Tracy-Widom distribution function of random matrices [37], \(\sigma _1=\frac{2^{2/3}}{(2-\alpha )^{1/3}}\) and \(\sigma _2=\frac{2^{2/3}(2-2\alpha +\alpha ^2)^{1/3}}{\alpha ^{2/3}(2-\alpha )}\).

Corollary 2.3

(One point-to-point and one point-to-line problem) Let

$$\begin{aligned} \mathcal {L}^+=([-\lfloor \beta t\rfloor ,0],0)\cup (-\lfloor \beta t\rfloor ,\mathbb {Z}_+),\quad \mathcal {L}^-=\{(-v,v),v\in \mathbb {Z}_-\}, \end{aligned}$$
(2.14)

with \(\beta =\beta _0+b t^{-2/3}\), \(\beta _0=1-\eta _0\), \(\eta _0=\frac{\alpha (3-2 \alpha )}{2-\alpha }\) and \(\alpha \in (0,1)\). Then, Theorem 2.1 holds with \(\mu =4\) and

$$\begin{aligned} G_1(s)=F_2\left( \frac{s-2(u+b)}{\sigma _1}\right) ,\quad G_2(s)=F_1\left( \frac{s-2u/\alpha }{\sigma _2}\right) , \end{aligned}$$
(2.15)

where \(F_2\) is the GUE Tracy-Widom distribution function of random matrices [36], \(\sigma _1=2^{4/3}\), and \(\sigma _2=\frac{2^{2/3}(6-10\alpha +6\alpha ^2-\alpha ^3)^{1/3}}{\alpha ^{2/3}(2-\alpha )}\).

Corollary 2.4

(Two point-to-point problems) Let us fix a \(\beta >0\) and consider

$$\begin{aligned} \mathcal {L}^+=(-\lfloor \beta t\rfloor ,\mathbb {Z}_+)\cup ([-\lfloor \beta t\rfloor ,0],0),\quad \mathcal {L}^-=(0,[0,-\lfloor \beta t\rfloor ])\cup (\mathbb {Z}_+,-\lfloor \beta t\rfloor ),\nonumber \\ \end{aligned}$$
(2.16)

with \(\eta _0=1\) and \(\alpha =1\). Then, Theorem 2.1 holds with \(\mu = \left( 1+\sqrt{1+\beta }\right) ^2\) and

$$\begin{aligned} G_1(s)=F_2\left( \frac{s-u\left( 1+1/\sqrt{1+\beta }\right) }{\sigma }\right) ,\quad G_2(s)=F_2\left( \frac{s-u\left( 1+\sqrt{1+\beta }\right) }{\sigma }\right) ,\nonumber \\ \end{aligned}$$
(2.17)

where \(\sigma =\left( 1+\sqrt{1+\beta }\right) ^{4/3}/(1+\beta )^{1/6}\).

2.3 Application to the totally asymmetric simple exclusion process

It is well known that the choice of exponentially distributed random variables \(\omega _{i,j}\) directly links LPP with the totally asymmetric simple exclusion process (TASEP), which we recall here. TASEP is an interacting particle system on \(\mathbb {Z}\) where two particles cannot occupy the same site at the same time. Particles (independently) try to jump to their right neighboring site after an exponentially distributed waiting time, and a jump occurs only if the destination site is empty. As a consequence, the order of particles is preserved. Thus, we can assign to each particle a number, which we do from right to left, i.e.

$$\begin{aligned} \cdots < x_2(0) < x_1(0) < 0 \le x_0(0)< x_{-1}(0)< \cdots . \end{aligned}$$

Then, for all times \(t\ge 0\), \(x_{n+1}(t)<x_n(t)\), \(n\in \mathbb {Z}\). The precise link between LPP and TASEP is the following. Let \(\omega _{i,j}\) be the exponential waiting times of particle \(j\), and consider the last passage time from the starting set \(S_A=\{(u,k)\in \mathbb {Z}^2: u=k+x_k(0), k\in \mathbb {Z}\}\). Then

$$\begin{aligned} \mathbb {P}\left( L_{{S_A}\rightarrow (m,n)}\le t\right) =\mathbb {P}\left( x_n(t)+n\ge m\right) . \end{aligned}$$
(2.18)

This connection will be used several times to verify that Assumptions 1–3 hold in special cases.
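As an illustration of (2.18) (our addition; the step initial condition \(x_k(0)=-k\), \(k\ge 1\), and the small sizes are chosen for simplicity), one can compare a direct continuous-time simulation of TASEP against the corresponding LPP problem, here \(L_{(1,1)\rightarrow (m,n)}\) with i.i.d. \(\exp (1)\) weights; the two probabilities in (2.18) should then agree up to Monte Carlo error.

```python
import random

def sim_tasep_step(n, t, rng):
    """Positions at time t of particles 1..n of TASEP started from
    x_k(0) = -k, simulated as a continuous-time Markov chain:
    every particle with an empty right neighbor jumps at rate 1."""
    x = [-k for k in range(1, n + 1)]  # x[0] is the rightmost particle
    now = 0.0
    while True:
        # particle 0 is never blocked; particle k is blocked iff x[k-1] == x[k]+1
        free = [k for k in range(n) if k == 0 or x[k - 1] > x[k] + 1]
        now += rng.expovariate(len(free))
        if now > t:
            return x
        x[rng.choice(free)] += 1

def last_passage(m, n, rng):
    """L_{(1,1)->(m,n)} with i.i.d. exp(1) weights via the DP recursion."""
    L = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            prev = max(L[i - 1][j] if i > 0 else 0.0,
                       L[i][j - 1] if j > 0 else 0.0)
            L[i][j] = prev + rng.expovariate(1.0)
    return L[m - 1][n - 1]

rng = random.Random(42)
m, n, t, N = 3, 2, 2.5, 20000
p_tasep = sum(sim_tasep_step(n, t, rng)[n - 1] + n >= m for _ in range(N)) / N
p_lpp = sum(last_passage(m, n, rng) <= t for _ in range(N)) / N
# P(L_{(1,1)->(m,n)} <= t) = P(x_n(t) + n >= m), Eq. (2.18)
assert abs(p_tasep - p_lpp) < 0.03
```

Both estimators are consistent for the same probability; with \(2\times 10^4\) samples each, the statistical error is well below the tolerance used in the assertion.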

The particular choice of the \(\omega _{i,j}\) in (2.10) means that particles with label \(n\ge 1\) have jump rate \(1\), while particles with label \(n\le 0\) have jump rate \(\alpha \). The choice (2.11) implies that we look at particle number \(t\) at different times. Indeed, if

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathbb {P}\left( L_{S_A\rightarrow \left( \eta _0 t+ut^{1/3},t\right) }\le \mu t + s t^{1/3}\right) =F(u,s), \end{aligned}$$
(2.19)

then by (2.18) we have that

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathbb {P}\left( x_t\big (\mu t+\tau t^{1/3}\big )\ge (\eta _0-1)t - s t^{1/3}\right) =F(-s,\tau ). \end{aligned}$$
(2.20)

Since this relation is straightforward, we do not restate the three corollaries for the tagged particle problem. Instead, we restate them so that they give the distribution function, at a fixed time \(t\), of particles around the shock.

In the cases of Corollaries 2.3 and 2.4, the boundaries of the LPP problem to \((\eta t,t)\) also depend on the variable \(t\). This has to be taken into account here too. Therefore, let us write this dependence explicitly in the measure and simply write \(L_{m,n}\) for the last passage time. In the case of Corollary 2.2, the boundary condition does not depend on the observation time parameter \(t\); for this case, one can just set \(\beta =0\) in the computations below. Assume that, as in the previous section, we have

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathbb {P}_{\beta t}(L_{\eta _0 t+u t^{1/3},t}\le \mu (\beta ) t+s t^{1/3})= F(\beta ,u,s). \end{aligned}$$
(2.21)

By (2.18) we have

$$\begin{aligned} \mathbb {P}_{\beta t}(x_{\nu t+\xi t^{1/3}}(t)\ge v t - s t^{1/3})=\mathbb {P}_{\beta t}(L_{(\nu +v)t+(\xi -s)t^{1/3},\nu t + \xi t^{1/3}}\le t) \end{aligned}$$
(2.22)

Let us define \(\tilde{t}\), \(\eta \), and \(\tilde{\beta }\) by the equations

$$\begin{aligned} \tilde{t}=\nu t+\xi t^{1/3},\quad \eta \tilde{t}= (\nu +v)t+(\xi -s) t^{1/3},\quad \tilde{\beta }\tilde{t} =\beta t. \end{aligned}$$
(2.23)

This gives \(t=\tilde{t}/\nu -\xi \nu ^{-4/3}\tilde{t}^{1/3}+\mathcal {O}(\tilde{t}^{-1/3})\), from which

$$\begin{aligned} \begin{aligned} \eta&=(1+v/\nu )-(s+\xi v/\nu ) \nu ^{-1/3} \tilde{t}^{-2/3},\\ \tilde{\beta }&=\beta /\nu -\xi \beta \nu ^{-4/3} \tilde{t}^{-2/3}, \end{aligned} \end{aligned}$$
(2.24)

up to \(\mathcal {O}(\tilde{t}^{-4/3})\). By plugging this into (2.22) one readily obtains

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathbb {P}_{\beta t}(x_{\nu t+\xi t^{1/3}}(t)\ge v t - s t^{1/3})&= \lim _{\tilde{t}\rightarrow \infty }\mathbb {P}_{\tilde{\beta }\tilde{t}}(L_{\eta _0\tilde{t}+u \tilde{t}^{1/3},\tilde{t}}\le \tilde{t}/\nu -\xi \nu ^{-4/3}\tilde{t}^{1/3})\nonumber \\&= \lim _{\tilde{t}\rightarrow \infty }\mathbb {P}_{\tilde{\beta }\tilde{t}}(L_{\eta _0\tilde{t}+u \tilde{t}^{1/3},\tilde{t}}\le \mu (\tilde{\beta })\tilde{t} +\tilde{s}\tilde{t}^{1/3})\nonumber \\ \end{aligned}$$
(2.25)

with

$$\begin{aligned} \eta _0=1+v/\nu ,\quad u=-(s+\xi v/\nu ) \nu ^{-1/3}, \quad \tilde{s}=\xi (\beta \mu '(\beta /\nu )-1)\nu ^{-4/3},\qquad \end{aligned}$$
(2.26)

provided that \(\mu (\beta /\nu )=1/\nu \) holds. This condition determines which particles are around the shock position at time \(t\). Then, by (2.21) and (2.25) we have

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathbb {P}_{\beta t}(x_{\nu t+\xi t^{1/3}}(t)\ge v t - s t^{1/3}) = F\left( \frac{\beta }{\nu },-\frac{s+\xi v/\nu }{\nu ^{1/3}},\frac{\xi (\beta \mu '(\beta /\nu )-1)}{\nu ^{4/3}}\right) .\nonumber \\ \end{aligned}$$
(2.27)
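The inversion of (2.23) can be sanity-checked numerically. The sketch below (our illustration; the concrete values \(\nu =1/4\), \(\xi =2\) are arbitrary) verifies that, with \(t=\tilde{t}/\nu -\xi \nu ^{-4/3}\tilde{t}^{1/3}\), the residual \(\nu t+\xi t^{1/3}-\tilde{t}\) decays like \(\tilde{t}^{-1/3}\) with prefactor \(-\xi ^2\nu ^{-2/3}/3\), consistent with the \(\mathcal {O}(\tilde{t}^{-1/3})\) error term stated above.

```python
nu, xi = 0.25, 2.0

def residual(tt):
    """nu*t + xi*t**(1/3) - tt for t = tt/nu - xi*nu**(-4/3)*tt**(1/3).
    The O(tt) parts cancel exactly, so the residual is written as the
    difference of the two t^{1/3}-order terms to avoid cancellation error."""
    t = tt / nu - xi * nu ** (-4.0 / 3.0) * tt ** (1.0 / 3.0)
    return xi * (t ** (1.0 / 3.0) - nu ** (-1.0 / 3.0) * tt ** (1.0 / 3.0))

C = -xi ** 2 * nu ** (-2.0 / 3.0) / 3.0  # predicted coefficient of tt**(-1/3)
for tt in (1e6, 1e9, 1e12):
    assert abs(residual(tt) * tt ** (1.0 / 3.0) - C) < 1e-2
```

The rescaled residual stabilizes at the constant \(C\) as \(\tilde{t}\) grows, confirming the claimed order of the error term.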

The LPP situations considered above correspond to cases where, in terms of TASEP, there is a macroscopic discontinuity in the particle density, i.e., there is a shock. Using (2.27) we can restate Corollaries 2.2–2.4 in terms of TASEP as follows; see Fig. 3 for an illustration of the density profiles.

Fig. 3

The thick lines are the density profiles \(\rho \) at time \(t\) (resp. \(t=\ell \)) of a Corollary 2.5, b Corollary 2.6, and c Corollary 2.7, for \(u=b=0\) and \(\alpha =1/2\). The thin lines are the initial conditions. The dotted vertical lines indicate the macroscopic position of the particle that started from the origin

Corollary 2.5

(At the \(F_1\)\(F_1\) shock) Let \(x_n(0)=-2n\) for \(n\in \mathbb {Z}\). For \(\alpha <1\) let \(\nu =\frac{2-\alpha }{4}\) and \(v=-\frac{1-\alpha }{2}\). Then it holds

$$\begin{aligned} \lim _{t\rightarrow \infty }\mathbb {P}\left( x_{\nu t+\xi t^{1/3}}(t)\ge v t - s t^{1/3}\right) =F_1\left( \frac{s-\xi /\rho _1}{\sigma _1}\right) F_1\left( \frac{s-\xi /\rho _2}{\sigma _2}\right) ,\nonumber \\ \end{aligned}$$
(2.28)

with \(\rho _1=\frac{1}{2}\), \(\rho _2=\frac{2-\alpha }{2}\), \(\sigma _1=\frac{1}{2}\), and \(\sigma _2=\frac{\alpha ^{1/3}(2-2\alpha +\alpha ^2)^{1/3}}{2(2-\alpha )^{2/3}}\).
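As a consistency check (our addition), the parameters of Corollary 2.5 can be matched symbolically against Corollary 2.2 and against the macroscopic Burgers picture from the introduction: with \(\nu =\frac{2-\alpha }{4}\) and \(v=-\frac{1-\alpha }{2}\) one recovers \(\eta _0=1+v/\nu =\frac{\alpha }{2-\alpha }\) and \(\mu =1/\nu =\frac{4}{2-\alpha }\) as in Corollary 2.2, and \(v\) satisfies the Rankine–Hugoniot condition for the flux \(j(\rho )=\rho (1-\rho )\) with the densities \(\rho _1,\rho _2\) of (2.28):

```python
from fractions import Fraction as F

def flux(r):
    # flux of the Burgers equation for rate-one TASEP, j(rho) = rho(1 - rho)
    return r * (1 - r)

for a in (F(1, 2), F(1, 3), F(3, 4)):   # jump rate alpha of the slow particles
    nu = (2 - a) / 4                     # particle-label scale of Corollary 2.5
    v = -(1 - a) / 2                     # shock speed of Corollary 2.5
    rho1, rho2 = F(1, 2), (2 - a) / 2    # densities on the two sides of the shock

    # consistency with Corollary 2.2 (endpoint slope and leading order)
    assert 1 + v / nu == a / (2 - a)
    assert 1 / nu == 4 / (2 - a)

    # Rankine-Hugoniot: v = (j(rho1) - j(rho2)) / (rho1 - rho2)
    assert (flux(rho1) - flux(rho2)) / (rho1 - rho2) == v
```

All identities hold exactly in rational arithmetic, which is why `Fraction` is used instead of floats.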

As one can see from (2.28), the shock moves with speed \(v\). When \(\xi \) is very large, we are in the region before the shock, where the density of particles is \(1/2\). Indeed, replacing \(s\rightarrow s+2\xi \) and taking the \(\xi \rightarrow \infty \) limit, (2.28) converges to \(F_1(s/\sigma _1)\). Similarly, when \(-\xi \) is very large, we are on the other side of the shock, where the density of particles is \((2-\alpha )/2\). Indeed, replacing \(s\rightarrow s+2\xi /(2-\alpha )\) and taking \(\xi \rightarrow -\infty \), (2.28) converges to \(F_1(s/\sigma _2)\). This is the reason why we call this situation an \(F_1\)–\(F_1\) shock.
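For instance, the first of these two limits can be made explicit: after the shift \(s\rightarrow s+2\xi \), the arguments of the two factors in (2.28) become

$$\begin{aligned} \frac{s+2\xi -\xi /\rho _1}{\sigma _1}=\frac{s}{\sigma _1},\qquad \frac{s+2\xi -\xi /\rho _2}{\sigma _2}=\frac{s+\frac{2(1-\alpha )}{2-\alpha }\xi }{\sigma _2}\xrightarrow {\,\xi \rightarrow \infty \,}+\infty , \end{aligned}$$

so that the second factor tends to \(1\) and (2.28) converges to \(F_1(s/\sigma _1)\); the \(\xi \rightarrow -\infty \) limit is analogous, with the roles of the two factors exchanged.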

Corollary 2.6

(At the \(F_2\)\(F_1\) shock) For \(\alpha <1\) let \(\nu =1/4\) and \(v=-\frac{(1-\alpha )^2}{2(2-\alpha )}\). Let \(x_n(0)=v \ell -n\) for \(n\ge 1\) and \(x_n(0)=-2n\) for \(n\le 0\). Then it holds

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\mathbb {P}\left( x_{\nu \ell +\xi \ell ^{1/3}}(t=\ell )\ge v \ell - s \ell ^{1/3}\right) =F_2\left( \frac{s-\xi /\rho _1}{\sigma _1}\right) F_1\left( \frac{s-\xi /\rho _2}{\sigma _2}\right) ,\nonumber \\ \end{aligned}$$
(2.29)

with \(\rho _1=\frac{1}{2}\), \(\rho _2=\frac{2-\alpha }{2}\), \(\sigma _1=2^{-1/3}\), and \(\sigma _2=\frac{\alpha ^{1/3}(6-10\alpha +6\alpha ^2-\alpha ^3)^{1/3}}{2(2-\alpha )}\).

Corollary 2.7

(At the \(F_2\)\(F_2\) shock) For a fixed \(\beta \in (0,1)\), consider the initial condition given by \(x_n(0)=-n-\lfloor \beta \ell \rfloor \) for \(n\ge 1\) and \(x_n(0)=-n\) for \(-\lfloor \beta \ell \rfloor \le n\le 0\). Then, for \(\alpha =1\), \(\nu =\frac{(1-\beta )^2}{4}\) it holds

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\mathbb {P}\left( x_{\nu \ell +\xi \ell ^{1/3}}(t=\ell )\ge -s\ell ^{1/3}\right) =F_2\left( \frac{s-\xi /\rho _1}{\sigma _1}\right) F_2\left( \frac{s-\xi /\rho _2}{\sigma _2}\right) \nonumber \\ \end{aligned}$$
(2.30)

with \(\rho _1=\frac{1-\beta }{2}\), \(\rho _2=\frac{1+\beta }{2}\), \(\sigma _1=\frac{(1+\beta )^{2/3}}{2^{1/3}(1-\beta )^{1/3}}\), and \(\sigma _2=\frac{(1-\beta )^{2/3}}{2^{1/3}(1+\beta )^{1/3}}\).

As expected from KPZ universality, if we move away from the shock, the distribution function considered above becomes a single GOE or GUE Tracy-Widom distribution: GOE whenever the particle density is constant, and GUE whenever the particle density is decreasing. For example, in the \(F_2\)–\(F_2\) shock the particle density is decreasing both to the left and to the right of the shock.

3 Proof of Theorem 2.1

We will repeatedly use the following two lemmas from [9]. By “\(\Rightarrow \)” we denote convergence in distribution.

Lemma 3.1

(Lemma 4.1 in [9]) Let \(D\) be a probability distribution and \((X_n)_{n\in \mathbb {N}}\) be a sequence of random variables. If \(X_n\ge \tilde{X}_n\) and \(X_n \Rightarrow D\) and \(X_n-\tilde{X}_n\) converges to zero in probability, then \(\tilde{X}_n \Rightarrow D\) as well.

Lemma 3.2

(Lemma 4.2 in [9]) Let \((X_n)_{n\in \mathbb {N}}\), \((Y_n)_{n\in \mathbb {N}}\), \((\tilde{X}_n)_{n\in \mathbb {N}}\), \((\tilde{Y}_n)_{n\in \mathbb {N}}\) be sequences of random variables and \(D_1,D_2,D_3\) be probability distributions. Assume \(X_n\ge \tilde{X}_n\) and \(X_n \Rightarrow D_1\) as well as \(\tilde{X}_n \Rightarrow D_1\); and similarly \(Y_n\ge \tilde{Y}_n\) and \(Y_n \Rightarrow D_2\) as well as \(\tilde{Y}_n \Rightarrow D_2\). Let \(Z_n=\max \{X_n,Y_n\}\) and \(\tilde{Z}_n=\max \{\tilde{X}_n, \tilde{Y}_n\}\). Then if \(\tilde{Z}_n\Rightarrow D_3\), we also have \(Z_n \Rightarrow D_3\).

We denote

$$\begin{aligned} L_{\mathcal {L}^{+}\rightarrow E}^{\mathrm{resc}}=\frac{L_{\mathcal {L}^+\rightarrow (\eta t, t)}-\mu t}{ t^{1/3}}, \end{aligned}$$
(3.1)

i.e., the last passage time \(L_{\mathcal {L}^{+}\rightarrow E}\) rescaled as required by Assumption 1. We define analogously \(L_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}, L_{E^{+}\rightarrow (\eta t,t)}^{\mathrm{resc}}\) and \(L_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}\) as the last passage times rescaled as required by Assumptions 1 and 2, respectively. We first note the following.

Proposition 3.3

If \( \max \{L_{\mathcal {L}^{+}\rightarrow E}^{\mathrm{resc}},L_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\}\Rightarrow D\) as \(t\rightarrow \infty \), then

$$\begin{aligned} \frac{L_{\mathcal {L}\rightarrow E}-\mu t}{ t^{1/3}}\Rightarrow D. \end{aligned}$$
(3.2)

Proof

Simply note that \(L_{\mathcal {L}\rightarrow E}=\max \{L_{\mathcal {L}^{+}\rightarrow E},L_{\mathcal {L}^{-}\rightarrow E}\}\). \(\square \)

Thus it suffices to determine the limiting distribution of \(\max \{L_{\mathcal {L}^{+}\rightarrow E}^{\mathrm{resc}},L_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\}\). We can actually reduce our problem a bit more.

Proposition 3.4

Under Assumptions 1 and 2,

$$\begin{aligned} \max \left\{ \frac{L_{\mathcal {L}^{+}\rightarrow E^{+}}+L_{E^{+}\rightarrow E}-\mu t}{ t^{1/3}},L_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\right\} \Rightarrow D \end{aligned}$$
(3.3)

implies

$$\begin{aligned} \frac{L_{\mathcal {L}\rightarrow E}-\mu t}{t^{1/3}}\Rightarrow D. \end{aligned}$$
(3.4)

Proof

We have

$$\begin{aligned} L_{\mathcal {L}^{+}\rightarrow E}^{\mathrm{resc}}\ge \frac{L_{\mathcal {L}^{+}\rightarrow E^{+}}-\mu t+\mu _0 t^{\nu }}{t^{1/3}}+\frac{L_{E^{+}\rightarrow E}-\mu _0 t^{\nu }}{ t^{1/3}}=L_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}+L_{ E^{+}\rightarrow (\eta t, t)}^{\mathrm{resc}}.\nonumber \\ \end{aligned}$$
(3.5)

By Assumption 2, \(L_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}\) converges to \(G_1\). Also by Assumption 2, \(L_{E^{+}\rightarrow E}\) has fluctuations of order \(t^{\nu /3}\), thus one can write

$$\begin{aligned} L_{E^{+}\rightarrow E}^{\mathrm{resc}}=\frac{1}{t^{(1-\nu )/3}} X_t, \end{aligned}$$
(3.6)

where \(X_t\) is a random variable converging to \(G_0\). In particular, (3.6) vanishes as \(t\rightarrow \infty \). Applying Lemma 3.2 to \(X_n=L_{\mathcal {L}^{+}\rightarrow E}^{\mathrm{resc}}\), \(\tilde{X}_n=\left( L_{\mathcal {L}^{+}\rightarrow E^{+}}+L_{E^{+}\rightarrow E}-\mu t\right) / t^{1/3}\) and \(Y_n=\tilde{Y}_n=L_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\) finishes the proof. \(\square \)

Using the preceding propositions we can now prove Theorem 2.1.

Proof of Theorem 2.1

Define, for some set \(B\) and point \(C\), \(\tilde{L}_{B\rightarrow C}\) to be the last passage time over all paths from \(B\) to \(C\) conditioned not to contain any of the points \(D_{\gamma }\), \(\gamma \in [0,1-t^{\beta -1}]\), with \(D_\gamma \) as in Assumption 3. Then,

$$\begin{aligned} \mathbb {P}\bigg (\bigg |\frac{L_{\mathcal {L}^{+}\rightarrow E^{+}}-\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}}{ t^{1/3}}\bigg |>\varepsilon \bigg )\le \mathbb {P}\bigg (\bigcup _{\gamma \in [0,1-t^{\beta -1}]}\{D_\gamma \in \pi ^\mathrm{max}_{\mathcal {L}^{+}\rightarrow E^{+}}\}\bigg )\rightarrow 0 \end{aligned}$$
(3.7)

as \(t\rightarrow \infty \), so that

$$\begin{aligned} \mathbb {P}\bigg (\frac{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}+\tilde{L}_{E^{+}\rightarrow E}-\mu t}{ t^{1/3}}\le s\bigg )\rightarrow G_1(s) \end{aligned}$$
(3.8)

by the vanishing of (3.6) and Lemma 3.1. Using Assumptions 1 and 3, an analogous argument shows

$$\begin{aligned} \mathbb {P}\big (\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\le s\big )\rightarrow G_2(s). \end{aligned}$$
(3.9)

Let \(\varepsilon >0\), recall \(X_t\) from (3.6), and let \(\tilde{X}_t\) denote its analogue with \(\tilde{L}_{E^{+}\rightarrow E}\) in place of \(L_{E^{+}\rightarrow E}\). We take \(R>0\) such that, with \(A_R=\{|\tilde{X}_t|<R\}\), we have \(\mathbb {P}(A_{R}^{c})\le \varepsilon \) for all \(t\) large enough. This implies that

$$\begin{aligned}&\big |\mathbb {P}(\{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}+t^{(\nu -1)/3}\tilde{X}_t\le s\}\cap A_R \cap \{\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\le s\})\nonumber \\&\quad -\mathbb {P}(\{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}+t^{(\nu -1)/3}\tilde{X}_t\le s\} \cap \{\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\le s\})\big |\le \varepsilon \end{aligned}$$
(3.10)

Then,

$$\begin{aligned}&\,\,\mathbb {P}(\{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}+t^{(\nu -1)/3}R\le s\}\cap \{\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\le s\})-\varepsilon \end{aligned}$$
(3.11)
$$\begin{aligned}&\le \,\mathbb {P}(\{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}+t^{(\nu -1)/3}\tilde{X}_t\le s\}\cap A_R \cap \{\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\le s\})\end{aligned}$$
(3.12)
$$\begin{aligned}&\le \,\mathbb {P}(\{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}-t^{(\nu -1)/3}R\le s\}\cap A_R \cap \{\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\le s\})\end{aligned}$$
(3.13)
$$\begin{aligned}&\le \,\mathbb {P}(\{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}-t^{(\nu -1)/3}R\le s\}\cap \{\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\le s\}) \end{aligned}$$
(3.14)

Finally, by construction, \(\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}\) and \(\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\) are independent random variables, since \(\beta < \nu \) and \(\pi _{\mathcal {L}^{-}\rightarrow E}^{\mathrm{max}}\) has to pass to the right of \(D_{1-t^{\beta -1}}\) by conditioning. Due to this independence, the fact that \(\nu <1\) and the convergence in (3.8), (3.9), there is a \(t_0\) such that for \(t>t_0\)

$$\begin{aligned} G_1(s) G_2(s) -2\varepsilon&\le (3.11) \le (3.12) \le (3.14)\le G_1(s)G_2(s) +\varepsilon . \end{aligned}$$
(3.15)

Thus applying (3.10) to (3.12) yields

$$\begin{aligned}&\bigg |\mathbb {P}\left( \{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}^{\mathrm{resc}}{+}t^{(\nu -1)/3}\tilde{X}_t\le s\} \cap \{\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\le s\}\right) - G_1(s)G_2(s) \bigg |\le 3\varepsilon ,\qquad \qquad \end{aligned}$$
(3.16)

for all \(t\) large enough. Therefore

$$\begin{aligned}&\lim \limits _{t\rightarrow \infty }\mathbb {P}\left( \max \bigg \{\frac{\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}{+}\tilde{L}_{E^{+}\rightarrow E}-\mu t}{ t^{1/3}},\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\bigg \}\le s\right) =\, G_1(s) G_2(s).\qquad \qquad \end{aligned}$$
(3.17)

Applying Lemma 3.2 to \(X_n=\left( L_{\mathcal {L}^{+}\rightarrow E^{+}}+L_{E^{+}\rightarrow E}-\mu t\right) /t^{1/3}\), \(Y_n=L_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\), \(\tilde{X}_n=\big (\tilde{L}_{\mathcal {L}^{+}\rightarrow E^{+}}+\tilde{L}_{E^{+}\rightarrow E}-\mu t\big )/t^{1/3}\), \(\tilde{Y}_n=\tilde{L}_{\mathcal {L}^{-}\rightarrow E}^{\mathrm{resc}}\), and using Proposition 3.4 finishes the proof.

4 Results on specific LPP

In this section we derive some results on the LPP model with

$$\begin{aligned} \begin{aligned} \omega _{i,j}\sim \exp (1),\quad&j\ge 1,\\ \omega _{i,j}\sim \exp (\alpha ),\quad&j\le 0, \end{aligned} \end{aligned}$$
(4.1)

and with two half-lines given by

$$\begin{aligned} \mathcal {L}^{+}=\{(-v,v)|v \in \mathbb {Z}_+\}\quad \hbox {and}\quad \mathcal {L}^{-}=\{(-v,v)|v \in \mathbb {Z}_-\}. \end{aligned}$$
(4.2)

Assumptions 1–2 will be verified by using the results of Sect. 4.1. After that, in Sect. 4.2 we establish the no-crossing results corresponding to Assumption 3.

4.1 Deviation results for LPP

4.1.1 Point-to-point LPP results

First we recall a result of Johansson (Theorem 1.6 of [27], originally stated for \(\eta \ge 1\); by the symmetry of the LPP one easily extends the statement to any \(\eta >0\)).

Proposition 4.1

(Point-to-point LPP: convergence to \(F_2\)) Let \(0<\eta <\infty \). Then,

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\mathbb {P}\left( L_{0\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}\le \mu _\mathrm{pp}\ell +s \sigma _\eta \ell ^{1/3}\right) = F_2(s) \end{aligned}$$
(4.3)

where \(\mu _\mathrm{pp}=(1+\sqrt{\eta })^2\), and \(\sigma _\eta =\eta ^{-1/6}(1+\sqrt{\eta })^{4/3}\), and \(F_2\) is the GUE Tracy-Widom distribution function.
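As a quick numerical illustration (our own sketch, not part of the argument), the leading-order behavior in Proposition 4.1 can be observed by evaluating the LPP recursion \(L(i,j)=\omega _{i,j}+\max \{L(i-1,j),L(i,j-1)\}\) directly; for \(\eta =1\) the ratio \(L_{0\rightarrow (\ell ,\ell )}/\ell \) is close to \(\mu _\mathrm{pp}=4\), up to corrections of order \(\ell ^{-2/3}\).

```python
import random

# Monte Carlo sketch: dynamic-programming evaluation of the point-to-point
# last passage time with i.i.d. Exp(1) weights. For eta = 1 the ratio
# L_{0->(l,l)}/l should be close to mu_pp = (1+sqrt(1))^2 = 4.
random.seed(0)

def lpp_diagonal(l):
    prev = [0.0] * (l + 1)
    for i in range(l + 1):
        row = [0.0] * (l + 1)
        for j in range(l + 1):
            w = random.expovariate(1.0)
            best = 0.0
            if i > 0:
                best = prev[j]
            if j > 0 and row[j - 1] > best:
                best = row[j - 1]
            row[j] = best + w
        prev = row
    return prev[l]

l = 300
ratio = lpp_diagonal(l) / l
# The GUE Tracy-Widom fluctuations and the negative O(l^{-2/3}) mean shift
# keep the ratio somewhat below 4 at this system size.
assert 3.4 < ratio < 4.2
```

The window in the final assertion is deliberately wide: it accounts for both the \(\mathcal {O}(\ell ^{-2/3})\) shift of the mean and the \(\mathcal {O}(\ell ^{-2/3})\) Tracy–Widom fluctuations.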

The distribution function of \(L_{0\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}\) has the following known decay.

Proposition 4.2

(Point-to-point LPP: upper tail) Let \(0<\eta <\infty \). Then for given \(\ell _0>0\) and \(s_0 \in \mathbb {R}\), there exist constants \(C,c>0\) depending only on \(\ell _0,s_0\) such that for all \(\ell \ge \ell _0\) and \(s\ge s_0\) we have

$$\begin{aligned} \mathbb {P}\left( L_{0\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}> \mu _\mathrm{pp}\ell +\ell ^{1/3}s\right) \le C \exp (-c s), \end{aligned}$$
(4.4)

where \(\mu _\mathrm{pp}=(1+\sqrt{\eta })^{2}\).

Proof

By symmetry, it is enough to consider \(\eta \in (0,1]\). Also, we will (re)derive the statement for the complementary event. As stated in Proposition 6.1 of [2], we have

$$\begin{aligned} \mathbb {P}(\lambda _{1}(m-d,m+d) \le u)=\mathbb {P}\left( L_{0\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}\le u \right) \!, \end{aligned}$$
(4.5)

where \(\lambda _1\) is the largest eigenvalue of an \((m-d)\times (m+d)\) Laguerre Unitary Ensemble (LUE), i.e., the largest eigenvalue of \(\frac{1}{m-d}XX^{*}\), where \(X\) is a \((m-d)\times (m+d)\) matrix with i.i.d. standard complex Gaussian entries; the parameters are chosen so that \(m+d=\lfloor \ell /\mu _\mathrm{pp}\rfloor \) and \(m-d=\lfloor \eta \ell /\mu _\mathrm{pp}\rfloor \) (explicitly, one might take \(m=\lfloor \frac{\ell (\eta +1)}{2\mu _\mathrm{pp}}\rfloor \) and \(d=\lfloor \frac{\ell (1-\eta )}{2\mu _\mathrm{pp}}\rfloor \), but then these identities might only hold with an error \(\pm 1\)). Take \(K_{m,d}\) to be the kernel (3.13) of [25] (with \(w=0\)), which, according to Proposition C.1 of [25], is a conjugated correlation kernel for the LUE. Then, with \(\chi _{u}={1\!\!1}_{(u,+\infty )}\)

$$\begin{aligned} F(u):= \det (1-\chi _{u}K_{m,d}\chi _u)=\mathbb {P}(\lambda _{1}(m-d,m+d) \le u) \end{aligned}$$
(4.6)

Define the function \(u(s,\ell )=\ell -s\ell ^{1/3}\). The decay of \(F(u)\) is known, see (37) in [4]; more precisely we have with \(C,d>0\) dependent on \(s_0 \in \mathbb {R}\) and \(\ell _0>0\)

$$\begin{aligned} 1-Ce^{-ds}\le F(u(s,\ell )) \end{aligned}$$
(4.7)

for \(\ell >\ell _0\) and \(s>s_0\). Making the change of variable \(\ell \rightarrow \mu _\mathrm{pp}\ell \), (4.4) follows with \(c=d/\mu _\mathrm{pp}^{1/3}\).\(\square \)

Proposition 4.3

(Point-to-point LPP: lower tail) Let \(0\!<\!\eta \!<\!\infty \) and \(\mu _\mathrm{pp}\!=\!(1+\sqrt{\eta })^2\). There exist positive constants \(s_0,\ell _0,C,c\) such that for \(s\le -s_0,\) \(\ell \ge \ell _0\),

$$\begin{aligned} \mathbb {P}\left( L_{0\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}\le \mu _\mathrm{pp} \ell +s\ell ^{1/3}\right) \le C\exp (-c|s|^{3/2}).\end{aligned}$$
(4.8)

Proof

Take the functions \(F,u(s,t)\) and the parameters \(w,m,d\) as in the proof of Proposition 4.2. Proposition 3 of [4] (to be found in the proof of Proposition 2 of [4]) and the inequality (56) of the same paper imply that there exist positive constants \(s_0,t_0,C,c\) such that

$$\begin{aligned} F(u(s,t))\le C \exp (-c |s|^{3/2}), \end{aligned}$$
(4.9)

for all \(s\le -s_0\) and \(t\ge t_0\).\(\square \)

4.1.2 Half-line \(\mathcal {L}^+\)-to-point LPP results

To obtain the results for the LPP from the half-line \(\mathcal {L}^+\) to a point \((\eta \ell ,\ell )\), we use the correspondence of LPP and TASEP, namely

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{+}\rightarrow (m,n)}\le t\right) =\mathbb {P}\left( x_n(t)+n\ge m\right) , \end{aligned}$$
(4.10)

where \(x_n(t)\) is the position at time \(t\) of the TASEP particle that started from \(x_n(0)=-2n\) in the initial configuration where particles occupy \(-2\mathbb {N}_0\). All TASEP particles have jump rate \(1\). The latter distribution function is expressed as a Fredholm determinant of a kernel \(\hat{K}_{n,t}\), as is shown in [13].

Proposition 4.4

(Proposition 3 in [13]) Let particle number \(n \in \mathbb {N}_0\) start at \(-2n\) at time \(t=0\). Denote by \(x_n (t)\) the position of particle number \(n\) at time \(t\). We then have

$$\begin{aligned} \mathbb {P}(x_n(t) > s)=\det (1-\chi _s \hat{K}_{n,t} \chi _s)_{\ell ^2(\mathbb {Z})} \end{aligned}$$
(4.11)

where \(\chi _s ={1\!\!1}_{(-\infty ,s]}\) and \(\hat{K}_{n,t}\) is given by

$$\begin{aligned} \hat{K}_{n,t}(x_1,x_2)&= \frac{1}{(2\pi i)^{2}} \oint \limits _{\Gamma _1}\mathrm{d}v\oint \limits _{\Gamma _{0,1-v}}\frac{\mathrm{d}w}{w}\frac{e^{tw}(w-1)^{n }}{w^{x_1 + n}}\frac{v^{x_2 + n }}{e^{tv}(v-1)^{n}}\nonumber \\&\times \frac{2v-1}{(w+v-1)(w-v)}. \end{aligned}$$
(4.12)

To get a bound for the upper tail we need to have the following estimate of the decay of the kernel.

Proposition 4.5

(Exponential decay \(\hat{K}_{n,t}\)) Consider the scaling

$$\begin{aligned} n(t)=\left[ \frac{r}{4}t\right] \quad x_{i}=\left[ \frac{1-r}{2}t-s_{i}t^{1/3}\right] , \end{aligned}$$
(4.13)

for some \(r>1\). With this choice, there exist a constant \(C\) and a \(t_0\) such that for \(t>t_0\) and \(s_{1},s_{2} \ge 0\)

$$\begin{aligned} |\hat{K}_{n,t} (x_1,x_2)t^{1/3}2^{x_2-x_1} e^{-(s_2-s_1)/2}|\le C\, e^{-(s_1+s_2)/2}. \end{aligned}$$
(4.14)

Proof of Proposition 4.5

Below we will show that for \(t\) large enough, there are constants \(C,\mu (r)>0\) such that we have uniformly in \(s_1,s_2\ge 0\)

$$\begin{aligned} |\hat{K}_{n,t} (x_1,x_2)t^{1/3}2^{x_2-x_1} |\le C e^{-(s_1+s_2)}+C t^{1/3} e^{-\mu (r) t} e^{s_1 t^{1/3}\ln (2-r)}. \end{aligned}$$
(4.15)

From this it then follows that

$$\begin{aligned} |\hat{K}_{n,t} (x_1,x_2)t^{1/3}2^{x_2-x_1} e^{-(s_2-s_1)/2}|\le 2C e^{-(s_1+s_2)/2} \end{aligned}$$
(4.16)

since \(t^{1/3} e^{-\mu (r) t}\le 1\) and \(e^{s_1 (t^{1/3}\ln (2-r)+1)}\le 1\) for \(t\) large enough (because \(\ln (2-r)<0\)).

Therefore, below we need to bound \(\hat{K}_{n,t} (x_1,x_2)t^{1/3}2^{x_2-x_1}\). We can divide the kernel \(\hat{K}_{n,t}\) into the contribution coming from the residue at \(w=-v+1\) and the rest. The contribution of this residue is

$$\begin{aligned} (-1)^{x_1+1}2^{x_2-x_1} \frac{t^{1/3}}{2\pi i} \oint \limits _{\Gamma _{1}}\text {d}v\frac{v^{x_2+2n}}{(1-v)^{x_1+2n+1}}e^{(1-2v)t} \end{aligned}$$
(4.17)

This kernel was already analyzed in [10]. Indeed, (4.17) is the kernel from Proposition 5.3 in [10] for the special choice of parameters \(t_1=t_2=T=t\), \(L=0\), and \(R=1\). Our scaling also fits the one from (2.9) in [10]; take \(\pi (\theta )=r/4+\theta \) and \(\theta \) to be the solution of \(r/4+2\theta =1\), i.e., \(\theta =1/2-r/8\). That proposition now yields that for any \((s_1,s_2) \in [-l,\infty )^{2}\) we have

$$\begin{aligned} |(4.17)| \le \text {const}\, e^{-(s_1+s_2)}. \end{aligned}$$
(4.18)

Let us deal now with the remaining part. Taking \(\tilde{s}_{i}=s_i t^{-2/3}\), we have to bound the kernel

$$\begin{aligned}&2^{x_2-x_1}\dfrac{t^{1/3}}{(2\pi i)^{2}}\oint \limits _{\Gamma _1}\text {d}v\oint \limits _{\Gamma _{0}}\dfrac{\text {d}w}{w}\dfrac{e^{tw}(w-1)^{n}}{w^{x_1 + n}} \dfrac{v^{x_2 + n}}{e^{tv}(v-1)^{n}}\frac{2v-1}{(w+v-1)(w-v)}\nonumber \\&\quad =\dfrac{t^{1/3}}{(2\pi i)^{2}}\oint \limits _{\Gamma _1}\text {d}v\oint \limits _{\Gamma _{0}}\text {d}w \dfrac{e^{tf_{0}(w,\tilde{s}_1)}}{e^{tf_{0}(v,\tilde{s}_2)}} \dfrac{2v-1}{(w+v-1)(w-v)} \end{aligned}$$
(4.19)

with

$$\begin{aligned} f_{0}(w,s)=\frac{r}{4}\ln (w-1)+w-\frac{2-r}{4}\ln (w)+s\ln (2w). \end{aligned}$$
(4.20)

We first note that for \(r\ge 2\) the pole at \(w=0\) disappears and thus (4.19) vanishes. We therefore assume \(1<r<2\) in the following. We now claim that

$$\begin{aligned} \Gamma _{0}(t)=\lambda e^{it},\quad t \in [0,2\pi ) \end{aligned}$$
(4.21)

is a steep descent path of \(f_{0}\) for \(\lambda =1-r/2\). To check the steep descent condition, note

$$\begin{aligned}&\mathrm{Re }(f_{0}(\Gamma _0 (t),\tilde{s}_1)) =\tilde{s}_1\ln (2\lambda )+\lambda \cos (t)-\frac{2-r}{4}\ln (\lambda ) +\frac{r}{4}\ln (|\lambda e^{it}-1|)\nonumber \\&\quad =\tilde{s}_1\ln (2\lambda )+\lambda \cos (t)-\frac{2-r}{4}\ln (\lambda ) +\frac{r}{8}\ln \left( \lambda ^{2}+1-2\lambda \cos (t)\right) . \end{aligned}$$
(4.22)

Thus we have

$$\begin{aligned} \frac{\partial }{\partial t} \mathrm{Re }\left( f_{0}(\Gamma _0 (t),\tilde{s}_1)\right) =-\lambda \sin (t)\left( 1 -\frac{r/4}{|\lambda e^{it}-1|^2}\right) , \end{aligned}$$
(4.23)

which is strictly negative for all \(t\in (0,\pi )\) (and strictly positive for \(t\in (\pi ,2\pi )\)). Indeed, \(|\lambda e^{it}-1|\ge r/2\), from which \(1 -\frac{r/4}{|\lambda e^{it}-1|^2}\ge 1-1/r>0\). Thus \(\Gamma _0\) as chosen above is a steep descent path for \(f_0\) with maximum at \(t=0\).

For \(\Gamma _{1}\), we choose

$$\begin{aligned} \Gamma _{1}(t)=1-\frac{1}{2} e^{it}, \quad t \in [0,2\pi ) \end{aligned}$$
(4.24)

and we want to show that it is a steep descent path for \(-f_{0}\). We have

$$\begin{aligned} \mathrm{Re }(-f_{0}(\Gamma _{1}(t),\tilde{s}_2))=&-\frac{r}{4}\ln (1/2)-1+\frac{2-r}{8}\ln \left( 5/4-\cos (t)\right) +\frac{1}{2}\cos (t)\\&-\tilde{s}_2\ln (|2-e^{it}|). \end{aligned}$$

The term \(-\tilde{s}_2\ln (|2-e^{it}|)\) clearly reaches its maximum at \(t=0\) for any \(\tilde{s}_2\ge 0\). Thus we can focus on the \(\tilde{s}_2=0\) case. We have

$$\begin{aligned} \frac{\partial }{\partial t} \mathrm{Re }\left( -f_{0}(\Gamma _1(t),0)\right) =-\frac{\sin (t)}{2} \left( 1-\frac{2-r}{4}\frac{1}{|1-\frac{1}{2} e^{it}|^2}\right) , \end{aligned}$$
(4.25)

which is strictly negative for \(t\in (0,\pi )\) and strictly positive for \(t\in (\pi ,2\pi )\). This follows from \(|1-\frac{1}{2} e^{it}|\ge 1/2\), so that \(1-\frac{2-r}{4}|1-\frac{1}{2} e^{it}|^{-2}\ge 1-(2-r)=r-1>0\). Thus \(\Gamma _1\) is a steep descent path for \(-f_0\) attaining its maximum at \(t=0\).
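Both steep descent properties can also be checked numerically. The following sketch (an illustration only; the sample value \(r=1.5\) is our choice) evaluates \(\mathrm{Re}\,f_0\) from (4.20) along the upper halves of the two contours and verifies the claimed monotonicity.

```python
import cmath, math

# Illustration: check numerically, for the sample value r = 1.5, that
# Re f_0 decreases along Gamma_0(t) = lam*e^{it} (lam = 1 - r/2) and that
# Re(-f_0) decreases along Gamma_1(t) = 1 - e^{it}/2, for t in (0, pi).
r = 1.5
lam = 1 - r / 2

def f0(w, s=0.0):
    return (r / 4) * cmath.log(w - 1) + w \
        - ((2 - r) / 4) * cmath.log(w) + s * cmath.log(2 * w)

def re_f0_gamma0(t):
    return f0(lam * cmath.exp(1j * t)).real

def re_mf0_gamma1(t):
    return -f0(1 - 0.5 * cmath.exp(1j * t)).real

ts = [k * math.pi / 200 for k in range(201)]
# steep descent of f_0 along Gamma_0, and of -f_0 along Gamma_1:
assert all(re_f0_gamma0(a) > re_f0_gamma0(b) for a, b in zip(ts, ts[1:]))
assert all(re_mf0_gamma1(a) > re_mf0_gamma1(b) for a, b in zip(ts, ts[1:]))
```

Only the real parts enter, so the branch of the complex logarithm is immaterial here.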

The paths \(\Gamma _0\) and \(\Gamma _1\) are such that the factor \(\frac{2v-1}{(w+v-1)(w-v)}\) in (4.19) is uniformly bounded and the length of the paths is also bounded. Therefore, since \(\Gamma _0\) and \(\Gamma _1\) are steep descent paths, we get the easy bound

$$\begin{aligned} |(4.19)|\le t^{1/3}e^{t(f_{0}(1-r/2,\tilde{s}_1)-f_{0}(1/2,\tilde{s}_2))}=t^{1/3}e^{-\mu (r) t}e^{t^{1/3}\ln (2-r)s_1}, \end{aligned}$$
(4.26)

with \(\mu (r)=-\frac{r}{4}\ln (r)-\frac{1-r}{2}+\frac{2-r}{4}\ln (2-r)>0\) for all \(1<r<2\). \(\square \)
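The positivity of \(\mu (r)\) on \((1,2)\) can be checked numerically (an illustration only; \(\mu (r)\) vanishes at the boundary point \(r=1\)).

```python
import math

# Check numerically that mu(r) = -(r/4)ln(r) - (1-r)/2 + ((2-r)/4)ln(2-r)
# is strictly positive on a grid in (1,2) and vanishes at r = 1.
def mu(r):
    return -(r / 4) * math.log(r) - (1 - r) / 2 \
        + ((2 - r) / 4) * math.log(2 - r)

assert abs(mu(1.0)) < 1e-12
assert all(mu(1 + k / 100) > 0 for k in range(1, 100))
```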

Proposition 4.6

Fix \(0< \eta < 1\) and let \(\mu =2(1+\eta )\). Then, for any \(\varepsilon \in [0,2(1-\eta ))\), there exist constants \(C,\tilde{c}>0\) and \(\ell _0 >0\) such that for all \(\ell >\ell _0\)

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{+}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}> (\mu +\varepsilon /2) \ell \right) \le C\exp \left( -\tilde{c} \,\varepsilon \ell ^{2/3}\right) .\end{aligned}$$
(4.27)

Proof of Proposition 4.6

We follow along the lines of the proof of Theorem 2.5 in Section 5 of [11]. We use the relation (4.10) between LPP and TASEP, in which we set \(t:=(\mu +\varepsilon /2) \ell \) and denote by \(\ell (t)=t/(\mu +\varepsilon /2)\) its inverse function. Then, using this relation and Proposition 4.4, we see that

$$\begin{aligned} \, (4.27)=1-\mathbb {P}\left( x_{\ell (t)}(t)\ge (\eta -1)\ell (t)\right) . \end{aligned}$$
(4.28)

Let us denote

$$\begin{aligned} X^\mathrm{resc}_t=\frac{x_{\ell (t)}(t)-(-2 \ell (t)+t/2)}{-t^{1/3}}. \end{aligned}$$
(4.29)

Then,

$$\begin{aligned} \,(4.28)&= 1-\mathbb {P}\left( X^\mathrm{resc}_t \le \frac{(\eta +1)\ell (t)-t/2}{-t^{1/3}} \right) \nonumber \\&= -\sum _{m=1}^{\infty }\dfrac{(-1)^{m}}{m!}\int \limits \text {d}s_1\cdots \int \limits \text {d}s_m \det [t^{1/3}\hat{K}_{\ell (t),t}([x(s_i)],[x(s_j)])]_{1\le i,j\le m}\nonumber \\ \end{aligned}$$
(4.30)

where \(x(s)=(-2\ell (t)+t/2)-s t^{1/3}\) and the integration domain of the \(s_i\)’s is \((\varepsilon t^{2/3}/4(\mu +\varepsilon /2),\infty )\). To (4.30) we apply Proposition 4.5 with \(r=4/(\mu +\varepsilon /2)\).

We can thus pull a factor \(\prod _{i=1}^{m}e^{-s_i}\) out of the determinant, so that the absolute value of each entry of the remaining matrix is bounded by a constant \(C\). Using Hadamard’s bound, we get

$$\begin{aligned} |(4.30)|&\le \sum _{m=1}^{\infty }\frac{C^{m}m^{m/2}}{m!}\int \limits _{\varepsilon t^{2/3}/4(\mu +\varepsilon /2)}\text {d}s_1\cdots \int \limits _{\varepsilon t^{2/3}/4(\mu +\varepsilon /2)} \text {d}s_{m}\prod _{i=1}^{m}e^{-s_i}\nonumber \\&\quad =\sum _{m=1}^{\infty }\dfrac{(2C)^{m}m^{m/2}\exp \left( -m \varepsilon t^{2/3}/4(\mu +\varepsilon /2)\right) }{m!}\nonumber \\&\quad \le \tilde{C}\exp \left( -\varepsilon t^{2/3}/4(\mu +\varepsilon /2)\right) \le \tilde{C}\exp \left( -\tilde{c} \varepsilon \ell ^{2/3}\right) \end{aligned}$$
(4.31)

for some constants \(\tilde{C},\tilde{c}\) (uniform in \(\ell \)). \(\square \)
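The final summation step can be illustrated numerically: with an assumed constant \(C=1\) (hypothetical, for illustration only), the series \(\sum _{m\ge 1}(2C)^m m^{m/2}x^m/m!\) is dominated by its first term for small \(x\), so it is \(\mathcal {O}(x)\) with \(x=\exp (-\varepsilon t^{2/3}/4(\mu +\varepsilon /2))\).

```python
import math

# Illustration (with the hypothetical choice C = 1): the series
# sum_{m>=1} (2C)^m m^(m/2) x^m / m! behaves like its first term 2Cx
# for small x, which gives the O(exp(-c eps t^{2/3})) bound in (4.31).
def tail_series(x, C=1.0, M=60):
    return sum((2 * C) ** m * m ** (m / 2) * x ** m / math.factorial(m)
               for m in range(1, M))

for x in [1e-2, 1e-4, 1e-6]:
    # bounded below by the first term and above by 3x:
    assert 2 * x <= tail_series(x) <= 3 * x
```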

Proposition 4.7

(Half-line \(\mathcal {L}^+\)-to-point LPP: convergence to \(F_1\)) For any fixed \(0< \eta < 1\), we have

$$\begin{aligned} \lim _{\ell \rightarrow \infty }\mathbb {P}\left( L_{\mathcal {L}^{+}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}\le \mu \ell +s \tilde{\sigma }_\eta \ell ^{1/3}\right) = F_1(2s) \end{aligned}$$
(4.32)

where \(\mu =2(1+\eta )\), \(\tilde{\sigma }_\eta =2^{4/3}(1+\eta )^{1/3}\), and \(F_1\) is the GOE Tracy-Widom distribution function.

Proof of Proposition 4.7

As in the proof of Proposition 4.6 we use the relation (4.10) between LPP and TASEP, in which we set \(t:=\mu \ell +s\tilde{\sigma }_\eta \ell ^{1/3}\) and denote by

$$\begin{aligned} \ell (t)=\frac{t}{\mu }-2s \frac{t^{1/3}}{\mu }+o(1) \end{aligned}$$
(4.33)

its inverse function. Thus,

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{+}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}\le \mu \ell +s \tilde{\sigma }_\eta \ell ^{1/3}\right) =\mathbb {P}\left( x_{\ell (t)}(t)\ge (\eta -1)\ell (t) \right) . \end{aligned}$$
(4.34)

Let us denote

$$\begin{aligned} X^\mathrm{resc}_t=\frac{x_{\ell (t)}(t)-(-2 \ell (t)+t/2)}{-t^{1/3}}. \end{aligned}$$
(4.35)

Then,

$$\begin{aligned} (4.34)&= \mathbb {P}\left( X^\mathrm{resc}_t \le \dfrac{(\eta +1)\ell (t)-t/2}{-t^{1/3}} \right) \nonumber \\&= \sum _{m=0}^{\infty }\frac{(-1)^{m}}{m!}\int \limits _s^\infty \text {d}s_1\cdots \int \limits _s^\infty \text {d}s_m \det [t^{1/3}\hat{K}_{\ell (t),t}([x(s_i)],[x(s_j)])]_{1\le i,j\le m}\nonumber \\ \end{aligned}$$
(4.36)

where \(x(s)=(-2\ell (t)+t/2)-s t^{1/3}\). The bound of Proposition 4.5 allows us to apply dominated convergence and take the limit \(t\rightarrow \infty \) (i.e., \(\ell \rightarrow \infty \)) inside the Fredholm series. Thus it remains to show that the rescaled kernel \(t^{1/3}\hat{K}_{\ell (t),t}([x(s_i)],[x(s_j)])\), or a conjugation of it, converges pointwise to the Airy\(_1\) kernel \(\mathcal {A}_1(s_i,s_j)=\mathrm{Ai}(s_i+s_j)\).

As in Proposition 4.5, we consider the kernel conjugated by the factor \(2^{x(s_j)-x(s_i)}\). We can divide the kernel \(\hat{K}_{n,t}\) into the contribution coming from (a) the residue at \(w=-v+1\) and (b) the rest. The contribution coming from the residue is (4.17), that is, the kernel for the flat initial configuration (all even sites are initially occupied by a particle). It was shown in Theorem 2.3 of [12] (see also Proposition 5.1 of [10]) that this kernel converges pointwise to the Airy\(_1\) kernel. The control of the contribution of (b) was already carried out in the proof of Proposition 4.5. Indeed, the estimate (4.26) implies that this contribution goes to \(0\) as \(t\rightarrow \infty \) for all fixed \(s\in \mathbb {R}\). This ends the proof of Proposition 4.7, since \(\det (1\!\!1-\mathcal {A}_1)_{L^2(s,\infty )}=F_1(2s)\) by [24]. \(\square \)

A simple corollary of Proposition 4.6 adapted to the problem we are looking at is the following.

Corollary 4.8

Fix \(0< \eta < 1\) and \(\beta \in (1/3,1]\), and define

$$\begin{aligned} \gamma \in [0,1-t^{\beta -1}],\quad \varepsilon = t^{-\chi }\, \hbox { with }\, \chi \in (0,2/3). \end{aligned}$$
(4.37)

Then there exist constants \(C,\tilde{c}>0\) and \(t_0 >0\) such that for all \(t >t_0\)

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{+}\rightarrow D_\gamma }> \left( \mu _{\gamma }+\frac{\varepsilon }{2}\right) t\right) \le C\exp \left( -\tilde{c} \,t^{2/3-\chi }\right) . \end{aligned}$$
(4.38)

Proof

It is a straightforward consequence of Proposition 4.6. Indeed, setting \(\ell =\gamma t\),

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{+}\rightarrow D_\gamma }> \left( \mu _{\gamma }+\varepsilon /2\right) t\right)&= \mathbb {P}\left( L_{\mathcal {L}^{+}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}> (\mu +\varepsilon /(2\gamma )) \ell \right) \nonumber \\&\le \mathbb {P}\left( L_{\mathcal {L}^{+}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}> (\mu +\varepsilon /2) \ell \right) \end{aligned}$$
(4.39)

since \(\gamma \in [0,1]\). The claimed bound then follows from (4.27). \(\square \)

4.1.3 Half-line \(\mathcal {L}^-\)-to-point LPP results

To obtain the results for the LPP from the half-line \(\mathcal {L}^-\) to a point \((\eta \ell ,\ell )\), we use the correspondence of LPP and TASEP, namely

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{-}\rightarrow (m,n)}\le t\right) =\mathbb {P}\left( x_n(t)+n\ge m\right) , \end{aligned}$$
(4.40)

where \(x_n(t)\) is the position at time \(t\) of the TASEP particle with label \(n\). The initial condition is

$$\begin{aligned} x_n(0)=-n, n\ge 1,\quad x_n(0)=-2n, n\le 0, \end{aligned}$$
(4.41)

and the jump rates \(v_n\) of particles are given by

$$\begin{aligned} v_n=1,n\ge 1,\quad v_n=\alpha , n\le 0. \end{aligned}$$
(4.42)

Proposition 4.9

Let us consider TASEP with jump rates (4.42) and initial condition (4.41). Denote by \(x_n(t)\) the position of particle number \(n\) at time \(t\). We then have

$$\begin{aligned} \mathbb {P}(x_n(t) > s)=\det (1-\chi _s \tilde{K}_{n,t} \chi _s)_{\ell ^2(\mathbb {Z})} \end{aligned}$$
(4.43)

where \(\chi _s ={1\!\!1}_{(-\infty ,s]}\) and \(\tilde{K}_{n,t}=K_{n,t}^{(1)}+ K_{n,t}^{(2)}\) with

$$\begin{aligned} K_{n,t}^{(1)}(x_1,x_2)&= \dfrac{1}{(2\pi \mathrm{i})^{2}} \oint \limits _{\Gamma _{-1}}\dfrac{\mathrm{d}w}{w+1}\oint \limits _{\Gamma _{0,\alpha -2-w}}\mathrm{d}z\dfrac{e^{t(w+1)}w^{n}}{(w+1)^{x_1+n}}\nonumber \\&\times \dfrac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n}}\dfrac{1}{z-(\alpha -2-w)},\nonumber \\ K_{n,t}^{(2)}(x_1,x_2)&= \dfrac{1}{(2\pi \mathrm{i})^{2}}\oint \limits _{\Gamma _{0}}\mathrm{d}z \oint \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1}\dfrac{e^{t(w+1)}w^{n}}{(w+1)^{x_1+n}}\dfrac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n}}\dfrac{1}{w-z}.\nonumber \\ \end{aligned}$$
(4.44)

The proof of this proposition is somewhat lengthy and is given in Sect. 5 below.

Next we show the pointwise convergence of the properly rescaled kernel and obtain bounds on it. Consider the scaling

$$\begin{aligned} n=\left[ \frac{\kappa (2-\alpha )}{4}t\right] \quad x_{i}=\left[ \frac{\alpha -\kappa }{2}t-s_{i}t^{1/3}\right] , \end{aligned}$$
(4.45)

for \(\alpha \in [0,1)\) and \(\kappa \in [0,1)\). Then, we define the rescaled and conjugated kernels by

$$\begin{aligned} K^{(i)}_{t,\mathrm{resc}}(s_1,s_2)=t^{1/3}(\alpha /2)^{x_1-x_2}K^{(i)}_{n,t}(x_1,x_2),\quad i=1,2, \end{aligned}$$
(4.46)

with \(x_i\) and \(n\) as in (4.45). Before stating the results, let us manipulate the kernel slightly. Denote by \(\tilde{s}_i=s_i t^{-2/3}\). In particular, we can assume \(0\le \tilde{s}_1\le \alpha (2-\kappa )/4\), since otherwise the kernel is identically equal to zero. Because of that, the Fredholm determinant in (4.43) is identically equal to zero for \(s>\alpha (2-\kappa )t^{2/3}/4\). Therefore, below we can restrict our estimates to \(s_1,s_2\le \alpha (2-\kappa )t^{2/3}/4\) only.

Let us introduce the function

$$\begin{aligned} f_0(w,\tilde{s})=w+1+\frac{\kappa (2-\alpha )}{4}\ln (w)- \left( \frac{\alpha (2-\kappa )}{4}-\tilde{s}\right) \ln (2(w+1)/\alpha ).\nonumber \\ \end{aligned}$$
(4.47)

Then we have

$$\begin{aligned} K^{(2)}_{t,\mathrm{resc}}(s_1,s_2) = \frac{t^{1/3}}{(2\pi \mathrm{i})^{2}} \oint \limits _{\Gamma _{0}}\mathrm{d}z \oint \limits _{\Gamma _{-1}}\dfrac{\mathrm{d}w}{w+1} \dfrac{e^{t f_0(w,\tilde{s}_1)}}{e^{t f_0(z,\tilde{s}_2)}} \dfrac{1}{w-z} \end{aligned}$$
(4.48)

and, separating the contribution of the simple pole at \(z=\alpha -2-w\) in \(K^{(1)}_{n,t}\),

$$\begin{aligned} K^{(1)}_{t,\mathrm{resc}}(s_1,s_2) = K^{(1,a)}_{t,\mathrm{resc}}(s_1,s_2)+K^{(1,b)}_{t,\mathrm{resc}}(s_1,s_2) \end{aligned}$$
(4.49)

where

$$\begin{aligned} K^{(1,a)}_{t,\mathrm{resc}}(s_1,s_2)&= \dfrac{t^{1/3}}{(2\pi \mathrm{i})^{2}} \displaystyle \oint \limits _{\Gamma _{-1,\alpha -2}}\dfrac{\mathrm{d}w}{w+1}\displaystyle \oint \limits _{\Gamma _{0}}\mathrm{d}z \dfrac{e^{t f_0(w,\tilde{s}_1)}}{e^{t f_0(z,\tilde{s}_2)}} \frac{1}{z-(\alpha -2-w)},\nonumber \\ K^{(1,b)}_{t,\mathrm{resc}}(s_1,s_2)&= \dfrac{t^{1/3}}{2\pi \mathrm{i}} \displaystyle \oint \limits _{\Gamma _{-1,\alpha -2}}\dfrac{\mathrm{d}w}{w+1} e^{t [f_0(w,\tilde{s}_1)-f_0(\alpha -2-w,\tilde{s}_2)]}. \end{aligned}$$
(4.50)

Remark 4.10

The point \(\alpha -2\) is not a pole of the double integral, but we have chosen the path for \(w\) to encircle \(\alpha -2\) as well, for the following reason. The function \(-f_0(\alpha -2-w,\tilde{s}_2)\) has a pole at \(w=\alpha -2\). Therefore, if, before computing the residue at \(z=\alpha -2-w\), we choose the path for \(w\) so that it goes around \(\alpha -2\) too, then its image under \(w\mapsto \alpha -2-w\) goes around the origin too, see Fig. 4. This means that the path for \(z\) in the first term of (4.50) has to be chosen to stay inside the image of \(\alpha -2-w\). We could also have chosen to leave \(\alpha -2\) outside the path for \(w\), but that choice is not adequate for obtaining the bounds on the kernel.

Fig. 4

Illustration of the paths used in the kernel \(K_{t,\mathrm{resc}}^{(1,a)}\). The dashed line is the image of \(\alpha -2-w\)

Remark 4.11

For large \(|w|\), the leading term in \(f_0(w,\tilde{s})\) is given simply by the linear term \(w\). So we can as well consider (open) contours \(\Gamma _{-1,\alpha -2}\) such that the real part of \(w\) goes to \(-\infty \), and similarly \(\Gamma _0\) such that the real part of \(z\) goes to \(\infty \), see Fig. 5.

Fig. 5

Paths used for the asymptotic analysis in Proposition 4.12 and Proposition 4.13. The dashed line is the image of \(\alpha -2-w\)

Proposition 4.12

(Bounds for \(K^{(1,a)}_{t,\mathrm{resc}}\) and \(K^{(2)}_{t,\mathrm{resc}}\)) For any \(\ell _0>0\), there exists a \(t_0\) such that for \(t>t_0\) and \(s_{1},s_{2} \in [-\ell _0,\frac{\alpha (2-\kappa )}{4} t^{2/3}]\),

$$\begin{aligned} \begin{aligned} |K^{(1,a)}_{t,\mathrm{resc}}(s_1,s_2)|&\le e^{-t F(\alpha ,\kappa )/2},\\ |K^{(2)}_{t,\mathrm{resc}}(s_1,s_2)|&\le e^{-t F(\alpha ,\kappa )/2}, \end{aligned} \end{aligned}$$
(4.51)

where

$$\begin{aligned} F(\alpha ,\kappa )=-\dfrac{\alpha +\kappa -2}{2}-\dfrac{\kappa (2-\alpha )}{4}\ln \left( \dfrac{2-\alpha }{\kappa }\right) +\dfrac{\alpha (2-\kappa )}{4}\ln \left( \frac{2-\kappa }{\alpha }\right) >0\nonumber \\ \end{aligned}$$
(4.52)

for all \(\alpha \in [0,2)\) and \(\kappa \in [0,2-\alpha )\).

Proof

To get the result we need to choose the paths for \(z,w\) so that they will be steep descent. Let us consider the following paths:

$$\begin{aligned} \begin{aligned} \Gamma _{-1,\alpha -2}&=\left\{ w=-1+\frac{\alpha }{2}+\mathrm{i}y-|y|,y\in \mathbb {R}\right\} ,\\ \Gamma _0&=\left\{ z=-\frac{\kappa }{2}+\mathrm{i}y+|y|,y\in \mathbb {R}\right\} . \end{aligned} \end{aligned}$$
(4.53)

With this choice, \(\Gamma _0\) stays on the right of \(\alpha -2-\Gamma _{-1,\alpha -2}\) since we assumed \(\kappa <2-\alpha \), see Fig. 5. Now we verify the steep descent property of the paths. By symmetry it is enough to consider the portion of the paths in the upper-half plane.

\({\textit{Path}}\) \(\Gamma _{-1,\alpha -2}\): Consider \(w=-1+\frac{\alpha }{2}+\mathrm{i}y-y\) for \(y\ge 0\), \(\tilde{s}\in [0,\alpha (2-\kappa )/4]\). Then,

$$\begin{aligned} \mathrm{Re }(f_0(w,\tilde{s}))=\mathrm{const} -y{+}\frac{\kappa (2-\alpha )}{8}\ln (|w|^2){-} \frac{1}{2}\left( \frac{\alpha (2-\kappa )}{4}-\tilde{s}\right) \ln (|w+1|^2),\nonumber \\ \end{aligned}$$
(4.54)

with \(|w|^2=\frac{(2-\alpha )^2}{4}+(2-\alpha )y+2y^2\) and \(|w+1|^2=\frac{\alpha ^2}{4}-\alpha y+2y^2\). Thus,

$$\begin{aligned} \frac{\partial \mathrm{Re }(f_0(w,\tilde{s}))}{\partial y} = -1+ \frac{\kappa (2-\alpha )}{8|w|^2}\left( 4y+2-\alpha \right) - \left( \frac{\alpha (2-\kappa )}{4}-\tilde{s}\right) \frac{4y-\alpha }{2|w+1|^2}.\nonumber \\ \end{aligned}$$
(4.55)

Now we consider two cases:

Case a: \(0< y\le \alpha /4\). In this case,

$$\begin{aligned} (4.55)&\le -1+ \dfrac{\kappa (2-\alpha )}{8|w|^2}\left( 4y+2-\alpha \right) - \dfrac{\alpha (2-\kappa )}{8}\dfrac{4y-\alpha }{|w+1|^2}\nonumber \\&= -y^2\frac{8 y^2+(4y+2-\alpha ) (2-\alpha -\kappa )+\alpha \kappa }{2|w|^2|w+1|^2}<0 \end{aligned}$$
(4.56)

for all \(0<\alpha <2\) and \(0\le \kappa < 2-\alpha \).

Case b: \(y\ge \alpha /4\). In this case,

$$\begin{aligned} (4.55)&\le -1+ \dfrac{\kappa (2-\alpha )}{8|w|^2}\left( 4y+2-\alpha \right) \nonumber \\&= -\dfrac{(2-\kappa )\left( \dfrac{(2-\alpha )^2}{4}+(2-\alpha )y\right) +4y^2}{2|w|^2}<0 \end{aligned}$$
(4.57)

for all \(\kappa <2\).
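Both case bounds can be checked numerically for sample parameters (an illustration of our own, not part of the proof; the formulas below are the first lines of the two displayed estimates).

```python
# Illustration: for sample (alpha, kappa) with 0 < alpha < 2 and
# 0 <= kappa < 2 - alpha, the Case a upper bound
#   -1 + k(2-a)(4y+2-a)/(8|w|^2) - a(2-k)(4y-a)/(8|w+1|^2)
# is negative for 0 < y <= a/4, and the Case b upper bound
#   -1 + k(2-a)(4y+2-a)/(8|w|^2)
# is negative for y >= a/4.
def w2(a, y):
    return (2 - a) ** 2 / 4 + (2 - a) * y + 2 * y ** 2   # |w|^2

def wp2(a, y):
    return a ** 2 / 4 - a * y + 2 * y ** 2               # |w+1|^2

def case_a(a, k, y):
    return (-1 + k * (2 - a) * (4 * y + 2 - a) / (8 * w2(a, y))
            - a * (2 - k) * (4 * y - a) / (8 * wp2(a, y)))

def case_b(a, k, y):
    return -1 + k * (2 - a) * (4 * y + 2 - a) / (8 * w2(a, y))

for a, k in [(0.5, 0.5), (1.0, 0.9), (1.5, 0.25), (1.9, 0.05)]:
    assert all(case_a(a, k, j * a / 200) < 0 for j in range(1, 51))
    assert all(case_b(a, k, a / 4 + j / 10) < 0 for j in range(100))
```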

Further, as \(y\rightarrow \infty \), \(\frac{\partial \mathrm{Re }(f_0(w,\tilde{s}))}{\partial y}\rightarrow -1\), i.e., \(\mathrm{Re }(f_0(w,\tilde{s}))\simeq -y\). This implies that the estimates of the integrand in \(w\) will have an exponential decay as \(e^{-yt}\). Thus our chosen path \(\Gamma _{-1,\alpha -2}\) is steep descent.

\({\textit{Path}}\) \(\Gamma _0\): Consider \(z=-\frac{\kappa }{2}+\mathrm{i}y + y\) for \(y\ge 0\). Then

$$\begin{aligned} \mathrm{Re }(-f_0(z,\tilde{s}))=\mathrm{const} -y-\frac{\kappa (2-\alpha )}{8}\ln (|z|^2){+} \frac{1}{2}\left( \frac{\alpha (2-\kappa )}{4} {-}\tilde{s}\right) \ln (|z+1|^2),\nonumber \\ \end{aligned}$$
(4.58)

with \(|z|^2=\frac{\kappa ^2}{4}-\kappa y+2y^2\) and \(|z+1|^2=\frac{(2-\kappa )^2}{4}+(2-\kappa ) y+2y^2\). Thus, using \(\tilde{s}\ge 0\),

$$\begin{aligned} \frac{\partial \mathrm{Re }(-f_0(z,\tilde{s}))}{\partial y}&= -1- \frac{\kappa (2-\alpha )}{8|z|^2}\left( 4y-\kappa \right) + \left( \frac{\alpha (2-\kappa )}{4}-\tilde{s}\right) \frac{4y+2-\kappa }{2(|z+1|^2)}\nonumber \\&\le -1- \frac{\kappa (2-\alpha )}{8|z|^2}\left( 4y-\kappa \right) + \frac{\alpha (2-\kappa )}{8}\frac{4y+2-\kappa }{(|z+1|^2)}\nonumber \\&= -y^2\frac{8 y^2+(4 y+2-\kappa ) (2-\alpha -\kappa )+\alpha \kappa }{2 |z|^2 |z+1|^2}<0 \end{aligned}$$
(4.59)

for all \(\kappa >0\), \(y>0\), since we assumed \(0<\alpha <2\) and \(0\le \kappa <2-\alpha <2\).
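The algebraic identity behind (4.59) can be verified exactly at rational sample points (an illustration only; exact rational arithmetic avoids any floating-point ambiguity).

```python
from fractions import Fraction as Fr

# Exact check of the identity in (4.59) at rational (alpha, kappa, y):
#   -1 - k(2-a)(4y-k)/(8|z|^2) + a(2-k)(4y+2-k)/(8|z+1|^2)
#     = -y^2 (8y^2 + (4y+2-k)(2-a-k) + a*k) / (2|z|^2 |z+1|^2),
# with |z|^2 = k^2/4 - k*y + 2y^2 and |z+1|^2 = (2-k)^2/4 + (2-k)y + 2y^2.
def sides(a, k, y):
    z2 = k * k / 4 - k * y + 2 * y * y
    zp2 = (2 - k) ** 2 / 4 + (2 - k) * y + 2 * y * y
    lhs = -1 - k * (2 - a) * (4 * y - k) / (8 * z2) \
        + a * (2 - k) * (4 * y + 2 - k) / (8 * zp2)
    rhs = -y * y * (8 * y * y + (4 * y + 2 - k) * (2 - a - k) + a * k) \
        / (2 * z2 * zp2)
    return lhs, rhs

for a, k, y in [(Fr(1), Fr(1, 2), Fr(1, 5)), (Fr(3, 2), Fr(1, 4), Fr(1, 10)),
                (Fr(1, 2), Fr(3, 4), Fr(2, 5))]:
    lhs, rhs = sides(a, k, y)
    assert lhs == rhs
```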

By these two results on the steep descent property, the exponential decay for large \(y\), and the fact that \(|z-w|\) remains bounded away from \(0\), we get the bound

$$\begin{aligned}&\left| K^{(2)}_{t,\mathrm{resc}}(s_1,s_2)\right| \le \mathrm{const}\, t^{1/3} e^{t \mathrm{Re }(f_0((\alpha -2)/2,\tilde{s}_1))-t \mathrm{Re }(f_0(-\kappa /2,\tilde{s}_2))}\nonumber \\&\quad = \mathrm{const}\, t^{1/3} e^{t [\frac{\alpha +\kappa -2}{2}+\dfrac{\kappa (2-\alpha )}{4}\ln (\dfrac{2-\alpha }{\kappa })-\dfrac{\alpha (2-\kappa )}{4}\ln (\dfrac{2-\kappa }{\alpha })]}e^{-s_2 \ln ((2-\kappa )/\alpha ) t^{1/3}}.\nonumber \\ \end{aligned}$$
(4.60)

Since \((2-\kappa )/\alpha >1\) and \(s_2\ge -\ell _0\), the last factor is at worst \(e^{c \ell _0 t^{1/3}}\) with \(c=\ln ((2-\kappa )/\alpha )>0\). Further, one can verify that \(F(\alpha ,\kappa )>0\) for all \(\alpha \in [0,2)\) and \(\kappa \in [0,2-\alpha )\). Thus \(\mathrm{const}\, t^{1/3} e^{-t F(\alpha ,\kappa )}e^{c \ell _0 t^{1/3}}\le e^{-t F(\alpha ,\kappa )/2}\) for \(t\) large enough. We have obtained that

$$\begin{aligned} \left| K^{(2)}_{t,\mathrm{resc}}(s_1,s_2)\right| \le e^{-t F(\alpha ,\kappa )/2} \end{aligned}$$
(4.61)

for \(t\) large enough.

By exactly the same argument, but using that \(|z-(\alpha -2-w)|\) remains bounded away from zero, we can bound \(K^{(1,a)}_{t,\mathrm{resc}}\), namely

$$\begin{aligned} \left| K^{(1,a)}_{t,\mathrm{resc}}(s_1,s_2)\right| \le e^{-t F(\alpha ,\kappa )/2}. \end{aligned}$$
(4.62)

\(\square \)
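The positivity of \(F(\alpha ,\kappa )\) from (4.52) can be checked numerically on a grid of admissible parameters (an illustration, not a proof).

```python
import math

# Numerical sketch: check F(alpha, kappa) > 0 on a grid of admissible
# parameters alpha in (0,2), kappa in (0, 2-alpha); F tends to 0 only at
# the boundary kappa = 2 - alpha.
def F(a, k):
    return (-(a + k - 2) / 2
            - k * (2 - a) / 4 * math.log((2 - a) / k)
            + a * (2 - k) / 4 * math.log((2 - k) / a))

pts = [(a / 10, k / 10) for a in range(1, 20) for k in range(1, 20 - a)]
assert all(F(a, k) > 0 for a, k in pts)
```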

Proposition 4.13

(Convergence for \(K^{(1,b)}_{t,\mathrm{resc}}\)) For any \(s_1,s_2\) in a bounded set,

$$\begin{aligned} \lim _{t\rightarrow \infty } K^{(1,b)}_{t,\mathrm{resc}}(s_1,s_2) = \sigma \mathrm{Ai}(\sigma (s_1+s_2)) \end{aligned}$$
(4.63)

with \(\sigma =\frac{(2-\alpha )^{2/3}}{(\alpha \left( (2-\alpha )^2-2 (1-\alpha )\kappa \right) )^{1/3}}\).

Proof

We have

$$\begin{aligned} K^{(1,b)}_{t,\mathrm{resc}}(s_1,s_2) = \frac{t^{1/3}}{2\pi \mathrm{i}} \oint \limits _{\Gamma _{-1,\alpha -2}}\frac{\mathrm{d}w}{w+1} e^{t [f_0(w,0)-f_0(\alpha -2-w,0)]} e^{t^{1/3} [s_1 f_2(w)-s_2 f_2(\alpha -2-w)]}\nonumber \\ \end{aligned}$$
(4.64)

with \(f_2(w)=\ln (2(w+1)/\alpha )\).

First we show that \(\Gamma _{-1,\alpha -2}\) as in (4.53) is steep descent for

$$\begin{aligned} g_0(w,\tilde{s}_1,\tilde{s}_2):=f_0(w,\tilde{s}_1)-f_0(\alpha -2-w,\tilde{s}_2), \end{aligned}$$
(4.65)

for \(\tilde{s}_1,\tilde{s}_2\in [0,\alpha (2-\kappa )/4]\). This is a little more than what we need for this proposition, but we will use it again in Proposition 4.14. From the proof of Proposition 4.12 we already know that the path is steep descent for \(f_0(w,\tilde{s}_1)\). Now consider \(z=\alpha -2-w=-1+\frac{\alpha }{2}+(1+\mathrm{i})y\), \(y\ge 0\). Then, \(|z|^2=\frac{(2-\alpha )^2}{4}-(2-\alpha )y+2y^2\) and \(|z+1|^2=\frac{\alpha ^2}{4}+\alpha y+2 y^2\). The same computation as in (4.59) gives, for \(\tilde{s}\ge 0\),

$$\begin{aligned} \frac{\partial \mathrm{Re }(-f_0(z,\tilde{s}))}{\partial y}&\le -1- \frac{\kappa (2-\alpha )}{8|z|^2}\left( 4y-1+\alpha /2\right) + \frac{\alpha (2-\kappa )}{8}\frac{4y+1+\alpha /2}{(|z+1|^2)}\nonumber \\&= -y^2\frac{8 y^2+(4y+1-\alpha ) (2-\alpha -\kappa )+2-\alpha }{2 |z|^2 |z+1|^2}<0 \end{aligned}$$
(4.66)

for all \(y>0\) under our assumptions \(0<\alpha <2\) and \(0\le \kappa < 2-\alpha \). Moreover, as \(y\rightarrow \infty \), \(\mathrm{Re }(-f_0(z,\tilde{s}))\simeq -y\). Putting together the two results, we have that the chosen path \(\Gamma _{-1,\alpha -2}\) is steep descent for \(g_0(w,\tilde{s}_1,\tilde{s}_2)\) and for \(y\rightarrow \infty \) we have \(\mathrm{Re }(g_0(w,\tilde{s}_1,\tilde{s}_2))\lesssim -2y\).

Therefore, the contribution to \(K^{(1,b)}_{t,\mathrm{resc}}(s_1,s_2)\) coming from \(|y|\ge \delta \) is of order \(\mathcal {O}(t^{1/3} e^{-c(\delta ) t})\) for some \(c(\delta )>0\). It remains to control the contribution for \(|y|\le \delta \). By Taylor series we have

$$\begin{aligned} g_0(w,0,0)=-Q(\alpha ,\kappa ) \frac{(2(\mathrm{i}-1)y/\alpha )^3}{3}+\mathcal {O}(y^4), \end{aligned}$$
(4.67)

with

$$\begin{aligned} Q(\alpha ,\kappa )=\frac{\alpha \left( (2-\alpha )^2-2 (1-\alpha )\kappa \right) }{(2-\alpha )^2} \end{aligned}$$
(4.68)

and

$$\begin{aligned} s_1 f_2(w)-s_2 f_2(2-\alpha -w) = (s_1+s_2) 2(\mathrm{i}-1)y/\alpha +\mathcal {O}(y^2). \end{aligned}$$
(4.69)

So, the contribution from \(0\le y\le \delta \) is given by

$$\begin{aligned} \frac{t^{1/3}}{2\pi \mathrm{i}}\frac{2(\mathrm{i}-1)}{\alpha } \int \limits _{0}^\delta \mathrm{d}y\, e^{-t Q(\alpha ,\kappa ) (2(\mathrm{i}-1)y/\alpha )^3/3+t^{1/3} (s_1+s_2) 2(\mathrm{i}-1)y/\alpha } e^{\mathcal {O}(t y^4,t^{1/3} y^2)}.\nonumber \\ \end{aligned}$$
(4.70)

The cubic term has a prefactor with negative real part, so that it dominates all the error terms. Consider first (4.70) without the error terms. Then, by the change of variables \(W:=-t^{1/3} Q(\alpha ,\kappa )^{1/3} 2 (\mathrm{i}-1) y/\alpha \), we get

$$\begin{aligned} \frac{Q(\alpha ,\kappa )^{-1/3}}{2\pi \mathrm{i}} \int \limits _{-t^{1/3} Q(\alpha ,\kappa )^{1/3} 2 (1-\mathrm{i}) \delta /\alpha }^0 \mathrm{d}W\, e^{W^3/3-(s_1+s_2)Q(\alpha ,\kappa )^{-1/3} W}.\qquad \quad \end{aligned}$$
(4.71)

Extending the contour to \((\mathrm{i}-1)\infty \) produces an error of order \(\mathcal {O}(e^{-c(\delta )t})\) only, and adding the contribution of \(y\le 0\) we finally get that the main contribution is given by

$$\begin{aligned} \frac{Q(\alpha ,\kappa )^{-1/3}}{2\pi \mathrm{i}} \int \limits _{-(1-\mathrm{i})\infty }^{-(1+\mathrm{i})\infty } \mathrm{d}W\, e^{W^3/3-(s_1+s_2)Q(\alpha ,\kappa )^{-1/3} W} = \sigma \mathrm{Ai}(\sigma (s_1+s_2))\qquad \end{aligned}$$
(4.72)

where we set \(\sigma =Q(\alpha ,\kappa )^{-1/3}\). Finally, to control the error terms in (4.70), one uses as usual the inequality \(|e^{x}-1|\le |x|e^{|x|}\) with \(x\) replaced by the error terms, and obtains a contribution of order \(\mathcal {O}(t^{-1/3})\). \(\square \)
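The right-hand side of (4.72) is a contour-integral representation of the Airy function; the contour there can be deformed onto the standard rays \(e^{\pm \mathrm{i}\pi /3}\). A quick quadrature sanity check of that standard representation against scipy's Airy function (a sketch, assuming scipy is available):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

# Airy function via (1/2 pi i) \int e^{W^3/3 - x W} dW along the standard
# rays W = t e^{+-i pi/3} (the contour in (4.72) can be deformed onto
# these rays). Pairing the two rays reduces the integral to
#   Ai(x) = (1/pi) Im \int_0^inf e^{i pi/3} exp(-t^3/3 - x t e^{i pi/3}) dt.
def airy_contour(x, T=20.0):
    w = np.exp(1j * np.pi / 3)
    f = lambda t: (w * np.exp(-t**3 / 3 - x * t * w)).imag
    val, _ = quad(f, 0.0, T)  # integrand decays like e^{-t^3/3}
    return val / np.pi

for x in [-1.0, 0.0, 0.5, 2.0]:
    assert abs(airy_contour(x) - airy(x)[0]) < 1e-6
print("contour representation matches scipy's Ai")
```

The truncation at \(T=20\) is harmless because of the superexponential decay of the integrand.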

Proposition 4.14

(Bounds for \(K^{(1,b)}_{t,\mathrm{resc}}\)) For any \(\ell _0>0\), there exists a \(t_0\) such that for \(t>t_0\) and \(s_{1},s_{2} \in [-\ell _0,\frac{\alpha (2-\kappa )}{4} t^{2/3}]\)

$$\begin{aligned} |K^{(1,b)}_{t,\mathrm{resc}}(s_1,s_2)| \le C e^{-(s_1+s_2)/2}, \end{aligned}$$
(4.73)

for some finite constant \(C\).

Proof

The proof is very similar to the one in previous papers, see e.g. Proposition 5.3 of [10]. We will skip some algebraic details and focus on the strategy and the key points. First, for any \(t\)-independent \(\tilde{\ell }\), the result for \((s_1,s_2)\in [-\ell _0,\tilde{\ell }]^2\) follows from the proof of Proposition 4.13. The constant \(\tilde{\ell }\) can be chosen later; for instance, if \((s_1,s_2)\in [-\ell _0,\infty )^2{\setminus } [-\ell _0,\tilde{\ell }]^2\), it can be chosen such that \(s_1+s_2\) is large enough.

As before, we denote \(\tilde{s}_i=s_i t^{-2/3}\). The integral we have to estimate is then

$$\begin{aligned} \frac{t^{1/3}}{2\pi \mathrm{i}} \oint \limits _{\Gamma _{-1,\alpha -2}}\frac{\mathrm{d}w}{w+1} e^{t g_0(w,\tilde{s}_1,\tilde{s}_2)} \end{aligned}$$
(4.74)

with \(g_0\) given in (4.65). We have seen in the first part of the proof of Proposition 4.13 that the path \(\Gamma _{-1,\alpha -2}\) as in (4.53) is steep descent for general values of \(s_1,s_2\) in our domain. The idea is now to consider a minor modification of this path around \(w_c=-1+\alpha /2\) as follows, see Fig. 6.

Consider

$$\begin{aligned} w=w_c-\rho (1-\mathrm{i}y),\quad |y|\le 1, \end{aligned}$$
(4.75)

where \(\rho \) is chosen as follows:

$$\begin{aligned} \begin{aligned} \rho =\left\{ \begin{array}{ll} \frac{\alpha }{2\sqrt{Q(\alpha ,\kappa )}}\sqrt{\tilde{s}_1+\tilde{s}_2}, &{} \, \hbox { for }\,0\le \tilde{s}_1+\tilde{s}_2\le \varepsilon , \\ \frac{\alpha }{2\sqrt{Q(\alpha ,\kappa )}}\sqrt{\varepsilon }, &{} \, \hbox { for }\,\tilde{s}_1+\tilde{s}_2 \ge \varepsilon , \end{array} \right. \end{aligned} \end{aligned}$$
(4.76)

with \(Q=Q(\alpha ,\kappa )\) given in (4.68). For the asymptotic analysis, \(\varepsilon >0\) can be chosen as small as needed (but independent of \(t\)). This piece of contour joins the original path (4.53). Now one has to control the real part of \(g_0\) only in a neighborhood of \(-1+\alpha /2\) (at a distance \(\mathcal {O}(\sqrt{\varepsilon })\) only). Taylor series at \(w_c\) gives

$$\begin{aligned} g_0(w,\tilde{s}_1,\tilde{s}_2)&= -Q\frac{2^3}{\alpha ^3}\frac{(w-w_c)^3}{3}+(\tilde{s}_1+\tilde{s}_2)\frac{2}{\alpha }(w-w_c)\nonumber \\&+\, \mathcal {O}\left( (w-w_c)^4,\tilde{s}_i (w-w_c)^2\right) . \end{aligned}$$
(4.77)

For the choice in (4.75)–(4.76), one minimizes (4.77) without the error terms over \(w\), which yields the first choice of \(\rho \). However, in order to have enough control through the Taylor approximation, we have to stay in a small neighborhood of \(w_c\). This is the reason for the \(\varepsilon \) cut-off in (4.76).

Inserting (4.75) into the main part of (4.77) one gets, for \(0\le \tilde{s}_1+\tilde{s}_2\le \varepsilon \),

$$\begin{aligned} \mathrm{Re }\left( -Q\frac{2^3}{\alpha ^3}\frac{(w-w_c)^3}{3}+(\tilde{s}_1{+}\tilde{s}_2)\frac{2}{\alpha }(w-w_c)\right) =-\frac{(\tilde{s}_1+\tilde{s}_2)^{3/2} (2+3 y^2)}{3\sqrt{Q}},\nonumber \\ \end{aligned}$$
(4.78)

while for \(\tilde{s}_1+\tilde{s}_2 \ge \varepsilon \),

$$\begin{aligned} \mathrm{Re }\left( -Q\frac{2^3}{\alpha ^3}\frac{(w-w_c)^3}{3}+(\tilde{s}_1{+}\tilde{s}_2)\frac{2}{\alpha }(w-w_c)\right)&= {-}\frac{3(\tilde{s}_1{+}\tilde{s}_2)\sqrt{\varepsilon }{+}(3 y^2{-}1)\varepsilon ^{3/2}}{3\sqrt{Q}}\nonumber \\&\le -\frac{2(\tilde{s}_1+\tilde{s}_2)\sqrt{\varepsilon }+3 y^2\varepsilon ^{3/2}}{3\sqrt{Q}}.\nonumber \\ \end{aligned}$$
(4.79)

The two key properties in (4.78) and (4.79) are: (1) the quadratic decay of \(e^{t g_0(w,\tilde{s}_1,\tilde{s}_2)}\) due to the \(y^2\) term, and (2) at \(y=0\) one would have the bound

$$\begin{aligned} \begin{aligned} e^{t\mathrm{Re }(g_0(w,\tilde{s}_1,\tilde{s}_2))}&\lesssim \left\{ \begin{array}{ll} e^{-\frac{2}{3} (s_1+s_2)^{3/2} Q^{-1/2}}, &{} \, \hbox { for }\,0\le \tilde{s}_1+\tilde{s}_2\le \varepsilon , \\ e^{-\frac{2}{3} (s_1+s_2)\sqrt{\varepsilon }t^{1/3} Q^{-1/2}}, &{} \, \hbox { for }\,\tilde{s}_1+\tilde{s}_2 \ge \varepsilon , \end{array} \right. \end{aligned} \end{aligned}$$
(4.80)

by ignoring the error terms in (4.77). For \(s_1+s_2\) large enough and \(t\) large enough, in both cases (4.80) is bounded by \(e^{-c (s_1+s_2)}\) for any choice of \(c>0\). By choosing \(\varepsilon \) small enough, it is not so difficult (but a bit lengthy) to control the error terms in (4.77) too. This can be done in exactly the same way as in the proof of Proposition 5.3 of [10] (see the argument between equations (5.40) and (5.47) in [10]). As a result, one obtains for instance a bound for the rescaled kernel (4.74) like (4.80) with the prefactor \(\frac{2}{3}\) replaced by \(\frac{1}{3}\). This estimate is good enough and leads to the bound (4.73).\(\square \)
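The substitution leading to (4.78) can be verified symbolically. A sketch with sympy, writing \(S=\tilde{s}_1+\tilde{s}_2\):

```python
import sympy as sp

# Symbolic check of (4.78): substitute w - w_c = -rho(1 - i y) with
# rho = alpha*sqrt(S)/(2*sqrt(Q)) into the main part of (4.77),
#   -Q (2/alpha)^3 (w - w_c)^3 / 3 + S (2/alpha)(w - w_c),
# and verify that its real part equals -S^{3/2}(2 + 3y^2)/(3 sqrt(Q)).
a, Q, S, y = sp.symbols('alpha Q S y', positive=True)
rho = a * sp.sqrt(S) / (2 * sp.sqrt(Q))
dw = -rho * (1 - sp.I * y)
expr = -Q * (2 / a)**3 * dw**3 / 3 + S * (2 / a) * dw
claimed = -S**sp.Rational(3, 2) * (2 + 3 * y**2) / (3 * sp.sqrt(Q))
assert sp.simplify(sp.re(sp.expand(expr)) - claimed) == 0
print("(4.78) verified")
```

(Declaring the symbols positive makes sympy treat them as real, so `sp.re` extracts the real part correctly.)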

Fig. 6
figure 6

Paths used for the asymptotic analysis in Proposition 4.14

Proposition 4.15

Let \(\eta >\frac{\alpha ^2}{(2-\alpha )^2}\) and \(\tilde{\mu }=2\big (\frac{\eta }{\alpha }+\frac{1}{2-\alpha }\big )\). Then, for any \(\epsilon \ge 0\), there exist constants \(C,\tilde{c}\) such that

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{-}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}> (\tilde{\mu }+\epsilon /2)\ell \right) \le C \exp (-\tilde{c}\epsilon \ell ^{2/3}). \end{aligned}$$
(4.81)

Proof

The proof is quite similar to that of Proposition 4.6. We again use the correspondence (4.10) between TASEP and LPP. Setting \(t:=(\tilde{\mu }+\epsilon /2)\ell \), i.e., \(\ell (t)=t/(\tilde{\mu }+\epsilon /2)\), Proposition 4.9 tells us that

$$\begin{aligned} (4.81) =1-\mathbb {P}(x_{\ell (t)}(t)\ge (\eta -1)\ell (t)). \end{aligned}$$
(4.82)

We denote

$$\begin{aligned} X_{t}^{\mathrm{resc}}=\frac{x_{\ell (t)}(t)-\frac{(\alpha -\kappa )t}{2}}{-t^{1/3}} \end{aligned}$$
(4.83)

with \(\kappa =\frac{4}{2-\alpha }\big (\tilde{\mu }+\frac{\epsilon }{2}\big )^{-1}\) so that \(\ell (t)=\kappa \frac{2-\alpha }{4}t\). Then,

$$\begin{aligned} (4.81)&= 1-\mathbb {P}(X_{t}^{\mathrm{resc}}\le \frac{(\eta -1)\ell (t)-\frac{\alpha -\kappa }{2}t}{-t^{1/3}})\nonumber \\&= -\sum _{m=1}^{\infty }\frac{(-1)^{m}}{m!}\int \limits \mathrm{d}s_{1}\cdots \int \limits \mathrm{d}s_{m}\det [t^{1/3}\tilde{K}_{\ell (t),t}(x(s_i),x(s_j))]_{1\le i,j\le m},\nonumber \\ \end{aligned}$$
(4.84)

where \(x(s)=\frac{\alpha -\kappa }{2}t-st^{1/3}\) and the integration domain of the \(s_i\) is \((\alpha \epsilon t^{2/3}/(4(\tilde{\mu }+\epsilon /2)),\alpha (2-\kappa ) t^{2/3}/4]\). This comes from the fact that with \(x=(\eta -1)\ell (t)\) we have

$$\begin{aligned} s =\frac{x-\frac{\alpha -\kappa }{2}t}{-t^{1/3}}=\frac{\alpha \epsilon t^{2/3}}{4(\tilde{\mu }+\epsilon /2)} \end{aligned}$$
(4.85)

together with the fact that the original kernel \(K_{n,t}\) is identically equal to zero for \(x(s)+n<0\).

An easy consequence of Proposition 4.12 is that, for \(s_1,s_2\in [-\ell _0,\alpha (2-\kappa )t^{2/3}/4]\), it holds

$$\begin{aligned} |K^{(1,a)}_{t,\mathrm{resc}}(s_1,s_2)|+|K^{(2)}_{t,\mathrm{resc}}(s_1,s_2)| \le e^{-F(\alpha ,\kappa )t/4} e^{-(s_1+s_2)/2} \end{aligned}$$
(4.86)

for \(t\) large enough. Together with the exponential bound of Proposition 4.14, this implies that we can single out a factor \(C^{m}\prod _{i=1}^{m}e^{-s_i}\), so that using Hadamard's bound we get

$$\begin{aligned} |(4.81)|&\le \sum _{m=1}^{\infty }\frac{C^{m}m^{m/2}}{m!}\int \limits _{\epsilon \alpha t^{2/3}/(4(\tilde{\mu }+\epsilon /2))}^{\alpha (2-\kappa ) t^{2/3}/4}\text {d}s_1\cdots \int \limits _{\epsilon \alpha t^{2/3}/(4(\tilde{\mu }+\epsilon /2))}^{\alpha (2-\kappa ) t^{2/3}/4} \text {d}s_{m}\prod _{i=1}^{m}e^{-s_i}\nonumber \\&\le \tilde{C}\exp \left( -\tilde{c} \epsilon \ell ^{2/3}\right) \end{aligned}$$
(4.87)

for some constants \(\tilde{C},\tilde{c}\) (uniform in \(\ell \)), where the last steps are identical to the ones of Proposition 4.6.\(\square \)
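The convergence of the series in (4.87) is driven by the factor \(m^{m/2}/m!\) coming from Hadamard's bound. A small numerical illustration (the constant `CI` below stands for \(C\) times the value of the \(s\)-integral of \(e^{-s}\); it is a placeholder, not a value from the text):

```python
import math

# After Hadamard's bound, the Fredholm series is dominated by
#   sum_m (C*I)^m * m^{m/2} / m!,   I = integral of e^{-s} over the range.
# The m^{m/2} cost of Hadamard's inequality is beaten by the 1/m!.
def dominating_series(CI, M=80):
    return sum(CI**m * m**(m / 2) / math.factorial(m) for m in range(1, M))

# converges even for a sizable constant C*I (terms decay superexponentially)
print(dominating_series(2.0))
```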

Proposition 4.16

(Half-line \(\mathcal {L}^-\)-to-point LPP: convergence to \(F_1\)) For any fixed \(\eta >\frac{\alpha ^2}{(2-\alpha )^2}\), it holds

$$\begin{aligned} \begin{aligned} \lim _{\ell \rightarrow \infty }\mathbb {P}\left( L_{\mathcal {L}^{-}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}\le \tilde{\mu }\ell +s \hat{ \sigma }_\eta \ell ^{1/3}\right) = F_1(2s) \end{aligned} \end{aligned}$$
(4.88)

where \(\tilde{\mu }=2\big (\frac{\eta }{\alpha }+\frac{1}{2-\alpha }\big )\), \(\hat{\sigma }_\eta =\frac{2^{4/3}}{\alpha }\left( \eta +\frac{\alpha ^3}{(2-\alpha )^3}\right) ^{1/3}\), and \(F_1\) is the GOE Tracy–Widom distribution function.

Proof

First, with \(\sigma \) as in Proposition 4.13, it holds

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{-}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}\le \tilde{\mu }\ell +s \hat{ \sigma }_\eta \ell ^{1/3}\right)&= \mathbb {P}\left( x_{\ell }(\tilde{\mu }\ell + s \hat{ \sigma }_\eta \ell ^{1/3})\ge (\eta -1)\ell \right) \nonumber \\&= \mathbb {P}\left( x_{[\kappa (2-\alpha )t/4]}(t)\ge \frac{\alpha -\kappa }{2}t-\sigma ^{-1} s t^{1/3}\right) \nonumber \\ \end{aligned}$$
(4.89)

if we choose

$$\begin{aligned} \begin{aligned} t = \tilde{\mu }\ell +s \hat{ \sigma }_\eta \ell ^{1/3}&\Leftrightarrow \ell =\frac{t}{\tilde{\mu }}-\frac{s\hat{\sigma }_\eta t^{1/3}}{\tilde{\mu }^{4/3}}+o(1),\\ \frac{\kappa (2-\alpha )}{4}t=\ell&\Leftrightarrow \kappa =\frac{4}{2-\alpha }\left( \frac{1}{\tilde{\mu }}-\frac{s\hat{\sigma }_\eta t^{-2/3}}{\tilde{\mu }^{4/3}}\right) , \end{aligned} \end{aligned}$$
(4.90)

and finally \(\frac{\alpha -\kappa }{2}t-\sigma ^{-1} s t^{1/3} =(\eta -1)\ell \), which fixes the values of \(\tilde{\mu }\) and \(\hat{\sigma }_\eta \) as given in the statement. Now, the r.h.s. of (4.89) is given by a Fredholm determinant as in (4.84), with the minor differences that the lower integration bound is now simply \(s\) and that the scaling of the kernel carries the extra factor \(\sigma ^{-1}\). From Propositions 4.12 and 4.14 we know that the kernel is uniformly bounded (in \(t\)) by an integrable function, so that its Fredholm series is bounded. Thus we can apply dominated convergence to take the limit inside the Fredholm series. Finally, Proposition 4.13 tells us that the rescaled kernel (including the extra \(\sigma ^{-1}\) factor in the spatial scaling) converges pointwise to \(\mathrm{Ai}(s_1+s_2)\). Thus,

$$\begin{aligned} \lim _{t\rightarrow \infty } \mathbb {P}\left( x_{[\kappa (2-\alpha )t/4]}(t)\ge \frac{\alpha -\kappa }{2}t-\sigma ^{-1} s t^{1/3}\right) =F_1(2s), \end{aligned}$$
(4.91)

which ends the proof.\(\square \)
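The identity behind (4.91), namely that the Fredholm determinant of the kernel \(\mathrm{Ai}(s_1+s_2)\) on \((s,\infty )\) equals \(F_1(2s)\), can be probed numerically by a Nystrom-type discretization (a sketch, assuming scipy is available; the quadrature parameters `n` and `L` are ad hoc, and the check only verifies that the determinant behaves like a distribution function in \(s\)):

```python
import numpy as np
from scipy.special import airy

# Numerical probe of the Fredholm determinant in (4.91):
# det(I - K) on L^2(s, infinity) with kernel K(x, y) = Ai(x + y),
# which the proof identifies with F_1(2s). Nystrom discretization with
# Gauss-Legendre quadrature on (s, s + L); the kernel decays fast in x + y.
def fredholm_det(s, n=60, L=12.0):
    x, w = np.polynomial.legendre.leggauss(n)
    x = s + (x + 1.0) * L / 2.0
    w = w * L / 2.0
    K = airy(np.add.outer(x, x))[0]  # Ai(x_i + x_j)
    sw = np.sqrt(w)
    return np.linalg.det(np.eye(n) - sw[:, None] * K * sw[None, :])

# as a distribution function of s: values in (0, 1), increasing
vals = [fredholm_det(s) for s in (-2.0, -1.0, 0.0, 1.0)]
print(vals)
```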

Corollary 4.17

Fix an \(\eta >\alpha ^2/(2-\alpha )^2\), a \(\beta \in (1/3,1]\) and define

$$\begin{aligned} \gamma \in [0,1-t^{\beta -1}],\quad \epsilon = t^{-\chi }\hbox { with }\, \chi \in (0,2/3). \end{aligned}$$
(4.92)

Then there exist constants \(C,\tilde{c}>0\) and \(t_0 >0\) such that for all \(t >t_0\)

$$\begin{aligned} \begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{-}\rightarrow D_\gamma }> \left( \tilde{\mu }_{\gamma }+\frac{\epsilon }{2}\right) t\right) \le C\exp \left( -\tilde{c} \,t^{2/3-\chi }\right) , \end{aligned} \end{aligned}$$
(4.93)

where \(\tilde{\mu }_{\gamma }= 2 \gamma \left( \frac{\eta }{\alpha }+\frac{1}{2-\alpha }\right) \).

Proof

It is a straightforward consequence of Proposition 4.15. Indeed, with \(\ell =\gamma t\), we have

$$\begin{aligned} \mathbb {P}\left( L_{\mathcal {L}^{-}\rightarrow D_\gamma }> \left( \tilde{\mu }_{\gamma }+\epsilon /2\right) t\right)&= \mathbb {P}\left( L_{\mathcal {L}^{-}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}> \left( \tilde{\mu }+\epsilon /(2\gamma )\right) \ell \right) \nonumber \\&\le \mathbb {P}\left( L_{\mathcal {L}^{-}\rightarrow (\lfloor \eta \ell \rfloor ,\lfloor \ell \rfloor )}> \left( \tilde{\mu }+\epsilon /2\right) \ell \right) \nonumber \\&\le C\exp (-\tilde{c}t^{2/3-\chi }), \end{aligned}$$
(4.94)

where the second inequality holds since \(\gamma \le 1\).\(\square \)

4.2 No-crossing results

In this section we collect the no-crossing results, which are proven below.

Proposition 4.18

Consider the point \(E=(\lfloor \eta t \rfloor ,\lfloor t\rfloor )\) for \(0<\eta < 1\) (see Fig. 7). For some fixed \(\beta \in (1/3,1]\), consider the points \(D_{\gamma }=(\lfloor \gamma \eta t \rfloor ,\lfloor \gamma t\rfloor )\) with \(\gamma \in [0,1-t^{\beta -1}]\). Then, for all \(t\) large enough

$$\begin{aligned} \begin{aligned} \mathbb {P}\Bigg ({\mathop {\mathop {\bigcup }\limits _{D_{\gamma }}}\limits _{\gamma \in [0,1-t^{\beta -1}]}}\{D_\gamma \in \pi ^\mathrm{max}_{\mathcal {L}^{+}\rightarrow E}\}\Bigg )\le C \exp (-c t^{\beta -1/3}), \end{aligned} \end{aligned}$$
(4.95)

for some \(t\)-independent constants \(C,c>0\).

Fig. 7
figure 7

Illustration of the geometry for the LPP of Propositions 4.18–4.21. The half-line \(\mathcal {L}^-\) is the solid one, while the half-line \(\mathcal {L}^+\) is the dashed one. Further, \(E^{+}=(\eta t -t^{\nu },t-t^{\nu })\), \(B=(\eta -\alpha ^2/(2-\alpha )^2)(t,0)\), \(Z^+=(1-\eta )(-t/2,t/2)\), and \(Z^-=(\eta -\alpha ^2/(2-\alpha )^2)(t/2,-t/2)\). In the grey regions, the exponential random variables have parameter \(\alpha \in (0,2)\), while in the white regions, they have parameter \(1\)

Proposition 4.19

Consider the point \(E^{+}=(\lfloor \eta t -t^{\nu }\rfloor ,\lfloor t-t^{\nu }\rfloor )\) for \(0<\eta <1\) and \(1/3<\nu <1\) (see Fig. 7). For some fixed \(\beta \in (1/3,1]\), consider the points \(D_{\gamma }=(\lfloor \gamma \eta t \rfloor ,\lfloor \gamma t\rfloor )\) with \(\gamma \in [0,1-t^{\beta -1}]\). Then, for all \(t\) large enough

$$\begin{aligned} \begin{aligned} \mathbb {P}\Bigg ({\mathop {\mathop {\bigcup }\limits _{D_{\gamma }}}\limits _{\gamma \in [0,1-t^{\beta -1}]}}\{D_\gamma \in \pi ^\mathrm{max}_{\mathcal {L}^{+}\rightarrow E^{+}}\}\Bigg )\le C \exp (-c t^{\beta -1/3}), \end{aligned} \end{aligned}$$
(4.96)

for some \(t\)-independent constants \(C,c>0\).

Proposition 4.20

Let \(\alpha \in (0,2)\). Consider, for some \(\eta >\alpha ^2/(2-\alpha )^2\), the point \(E=(\lfloor \eta t \rfloor ,\lfloor t\rfloor )\) as in Fig. 7. For some fixed \(\beta \in (1/3,1]\), consider the points \(D_{\gamma }=(\lfloor \gamma \eta t \rfloor ,\lfloor \gamma t\rfloor )\) with \(\gamma \in [0,1-t^{\beta -1}]\). Then, for all \(t\) large enough

$$\begin{aligned} \begin{aligned} \mathbb {P}\Bigg ({\mathop {\mathop {\bigcup }\limits _{D_{\gamma }}}\limits _{\gamma \in [0,1-t^{\beta -1}]}} \{D_\gamma \in \pi ^\mathrm{max}_{\mathcal {L}^{-}\rightarrow E}\}\Bigg )\le C \exp (-c t^{\beta -1/3}), \end{aligned} \end{aligned}$$
(4.97)

for some \(t\)-independent constants \(C,c>0\).

Similarly, for the point-to-point geometry we have:

Proposition 4.21

Consider the point \(E=(\lfloor \eta t \rfloor ,\lfloor t\rfloor )\) for \(0<\eta < 1\). For some fixed \(\beta \in (1/3,1]\), consider the points \(D_{\gamma }=(\lfloor \gamma \eta t \rfloor ,\lfloor \gamma t\rfloor )\) with \(\gamma \in [0,1-t^{\beta -1}]\). Then, for all \(t\) large enough

$$\begin{aligned} \begin{aligned} \mathbb {P}\Bigg ({\mathop {\mathop {\bigcup }\limits _{D_{\gamma }}}\limits _{\gamma \in [0,1-t^{\beta -1}]}}\{D_\gamma \in \pi ^\mathrm{max}_{(\lfloor (\eta -1)t\rfloor ,0)\rightarrow E}\}\Bigg )\le C \exp (-c t^{\beta -1/3}), \end{aligned} \end{aligned}$$
(4.98)

for some \(t\)-independent constants \(C,c>0\).

Proposition 4.22

For some fixed \(\beta \in (1/3,1]\), consider the points \(D_{\gamma }=(\lfloor \gamma t \rfloor ,\lfloor \gamma t\rfloor )\) with \(\gamma \in [0,1-t^{\beta -1}]\). Then, for all \(t\) large enough

$$\begin{aligned} \begin{aligned} \mathbb {P}\Bigg ({\mathop {\mathop {\bigcup }\limits _{D_{\gamma }}}\limits _{\gamma \in [0,1-t^{\beta -1}]}}\{D_\gamma \in \pi ^\mathrm{max}_{(-t,0)\rightarrow (t,t)}\}\Bigg )\le C \exp (-c t^{\beta -1/3}), \end{aligned} \end{aligned}$$
(4.99)

for some \(t\)-independent constants \(C,c>0\).

4.2.1 Proof of Propositions 4.18, 4.19, 4.21, 4.22

In order to prove Proposition 4.18, we adopt the notation and line of argument of a proof due to Johansson, namely Lemmas 3.1, 3.2 and 3.3 in [28]. He used them to prove that the maximizing path of an LPP model on Poisson points does not leave a cylinder of width \(N^{2/3}\) as \(N\rightarrow \infty \).

Using the deviation results from the previous section, we first show that the probability that for some \(\gamma \) the LPP times \(L_{\mathcal {L}^{+} \rightarrow D_\gamma }\) and \(L_{D_\gamma \rightarrow E}\) exceed their leading orders by \(\varepsilon t/2\) converges to zero.

Proposition 4.23

Fix an \(0< \eta < 1\), a \(\beta \in (1/3,1]\), a \(\chi \in (0,2/3)\). Let us set \(\varepsilon = t^{-\chi }\). We define a finite family of events \(\left\{ E_{D_{\gamma }}\right\} _{\gamma \in [0,1-t^{\beta -1}]}\) via

$$\begin{aligned} E_{D_\gamma }:=&\{\omega : L_{\mathcal {L}^{+}\rightarrow D_\gamma }(\omega )\le (\mu _\gamma +\varepsilon /2)t\} \cap \{L_{ D_\gamma \rightarrow E}(\omega )\le (\mu _{\mathrm{pp},\gamma } +\varepsilon /2)t\},\nonumber \\ \end{aligned}$$
(4.100)

where

$$\begin{aligned} \mu _{\gamma }=2(1+\eta )\gamma ,\quad \mu _{\mathrm{pp},\gamma }=(1-\gamma )(1+\sqrt{\eta })^{2}. \end{aligned}$$
(4.101)

Then

$$\begin{aligned} \mathbb {P}\left( \bigcup _{D_\gamma } \Omega {\setminus } E_{D_\gamma }\right) \le C' \exp (-c' t^{2/3-\chi }) \end{aligned}$$
(4.102)

for some constants \(C',c'>0\).

Proof

To get the result, notice that there are \(\mathcal {O}(t)\) many points \(D_\gamma \), \(\gamma \in [0,1-t^{\beta -1}]\), so that it is enough to get a good bound (uniform in \(\gamma \)) on \(\mathbb {P}\left( \Omega {\setminus } E_{D_\gamma }\right) \). We have

$$\begin{aligned} \mathbb {P}\left( \Omega {\setminus } E_{D_\gamma }\right) \le \mathbb {P}(L_{\mathcal {L}^{+} \rightarrow D_\gamma }\ge (\mu _\gamma +\varepsilon /2)t) + \mathbb {P}( L_{D_\gamma \rightarrow E}\ge (\mu _{\mathrm{pp},\gamma } +\varepsilon /2)t).\nonumber \\ \end{aligned}$$
(4.103)

According to Corollary 4.8 there is a \(t_0\) such that for \(t>t_0\) we get

$$\begin{aligned} \mathbb {P}(L_{\mathcal {L}^{+} \rightarrow D_\gamma }\ge (\mu _\gamma +\varepsilon /2)t)\le C \exp (-\tilde{c} t^{2/3-\chi }). \end{aligned}$$
(4.104)

Remark that (with \(\overset{d}{=}\) designating equality in distribution)

$$\begin{aligned} L_{D_\gamma \rightarrow E} \overset{d}{=} L_{0\rightarrow (\lfloor (1-\gamma ) \eta t\rfloor ,\lfloor (1-\gamma ) t\rfloor )}. \end{aligned}$$
(4.105)

Furthermore, Proposition 4.2 with \(\ell =(1-\gamma ) t\) and \(s=\frac{\varepsilon t^{2/3}}{(1- \gamma )^{1/3}}\) gives

$$\begin{aligned} \mathbb {P}( L_{D_\gamma \rightarrow E}\ge (\mu _{\mathrm{pp},\gamma } +\varepsilon /2)t)\le C \exp \left( -\varepsilon t^{2/3}\frac{c}{(1- \gamma )^{1/3}}\right) \le C \exp (-c t^{2/3-\chi }).\nonumber \\ \end{aligned}$$
(4.106)

The bounds (4.104) and (4.106) imply that, for some constants \(C',c'\),

$$\begin{aligned} \mathbb {P}\left( \Omega {\setminus } E_{D_\gamma }\right) \le C' \exp (- c' t^{2/3-\chi }). \end{aligned}$$
(4.107)

Since the number of points \(D_\gamma \) is only of order \(t\), the claimed bound follows.\(\square \)

Now we know that if a path goes through a point \(D_\gamma \), then its typical last passage time is smaller than \((\mu _\gamma +\mu _{\mathrm{pp},\gamma }+\varepsilon )t\). However, the typical last passage time of the maximizing paths is \(\mu t\), which is much larger.

Proposition 4.24

Fix an \(0< \eta < 1\), a \(\beta \in (1/3,1]\), and \(\gamma \in [0,1-t^{\beta -1}]\). Let us set \(\varepsilon = C t^{\beta -1}\). Then for all \(t>0\) it holds

$$\begin{aligned} \frac{(\mu _{\gamma }+\mu _{\mathrm{pp},\gamma }+\varepsilon -\mu )t}{t^{1/3}}\le -Ct^{\beta -1/3}, \end{aligned}$$
(4.108)

with \(C= (1-\sqrt{\eta })^2/2\), and

$$\begin{aligned} \mu =2(1+\eta ),\quad \mu _{\gamma }=2(1+\eta )\gamma ,\quad \mu _{\mathrm{pp},\gamma }=(1-\gamma )(1+\sqrt{\eta })^{2}. \end{aligned}$$
(4.109)

Proof

A simple computation gives, for \(0<\eta <1\),

$$\begin{aligned} \begin{aligned} \frac{(\mu _{\gamma }+\mu _{\mathrm{pp},\gamma }+\varepsilon -\mu )t}{t^{1/3}}&=t^{\beta -1/3} (1-\sqrt{\eta })^2/2- t^{2/3}(1-\gamma )(1-\sqrt{\eta })^2\\&\le -t^{\beta -1/3} (1-\sqrt{\eta })^2/2, \end{aligned} \end{aligned}$$
(4.110)

where we used \(1-\gamma \ge t^{\beta -1}\).\(\square \)
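The algebra behind (4.110) reduces to the identity \(\mu _\gamma +\mu _{\mathrm{pp},\gamma }-\mu =-(1-\gamma )(1-\sqrt{\eta })^2\), which can be confirmed with sympy:

```python
import sympy as sp

# Check the identity behind (4.110): with mu = 2(1+eta),
# mu_gamma = 2(1+eta)*gamma and mu_pp,gamma = (1-gamma)(1+sqrt(eta))^2,
# one has  mu_gamma + mu_pp,gamma - mu = -(1-gamma)(1-sqrt(eta))^2.
eta, g = sp.symbols('eta gamma', positive=True)
mu = 2 * (1 + eta)
mu_g = 2 * (1 + eta) * g
mu_pp = (1 - g) * (1 + sp.sqrt(eta))**2
assert sp.simplify(mu_g + mu_pp - mu + (1 - g) * (1 - sp.sqrt(eta))**2) == 0
print("identity verified")
```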

We can now proceed to the final Proposition.

Proposition 4.25

Fix an \(0< \eta < 1\), a \(\beta \in (1/3,1]\) and \(\gamma \in [0,1-t^{\beta -1}]\). Then, there exists a \(t_0>0\) such that for all \(t\ge t_0\) it holds

$$\begin{aligned} \mathbb {P}(\{\omega :D_\gamma \in \pi ^\mathrm{max}_{\mathcal {L}^{+}\rightarrow E}(\omega )\}) \le C\exp (-c\, t^{\beta -1/3}), \end{aligned}$$
(4.111)

for some \(t\)-independent constants \(C,c>0\).

Proof of Proposition 4.25

Denote by \(I_{D_\gamma }\) the event that the maximizer from \(\mathcal {L}^+\) to \(E\) passes through the point \(D_\gamma \), namely

$$\begin{aligned} I_{D_\gamma }=\{\omega :D_\gamma \in \pi ^\mathrm{max}_{\mathcal {L}^{+}\rightarrow E}(\omega )\}. \end{aligned}$$
(4.112)

Let us choose \(\varepsilon = t^{\beta -1} (1-\sqrt{\eta })^2/2\). Then,

$$\begin{aligned} \mathbb {P}(I_{D_\gamma })\le \mathbb {P}\left( I_{D_\gamma }\cap \Big (\bigcap _{D_\gamma } E_{D_\gamma }\Big )\right) +\mathbb {P}\left( \Big (\bigcap _{D_\gamma } E_{D_\gamma }\Big )^c\right) . \end{aligned}$$
(4.113)

The second term is exactly (4.102) with \(\chi =1-\beta \) (the extra coefficient in the definition of \(\varepsilon \) is irrelevant, since it just modifies the value of the constant \(c'\)). Thus the second term decays as \(\exp (-c' t^{\beta -1/3})\).

To bound the first term, notice that if \(\omega \in I_{D_\gamma }\) and, at the same time, \(\omega \) lies in each of the \(E_{D_\gamma }\)'s, then by Propositions 4.23 and 4.24,

$$\begin{aligned} \begin{aligned} L_{\mathcal {L}^+\rightarrow E}(\omega )&\le (\mu _\gamma +\mu _{\mathrm{pp},\gamma }+\varepsilon )t=\mu t + (\mu _\gamma +\mu _{\mathrm{pp},\gamma }+\varepsilon -\mu )t\\&\le \mu t - (C t^{\beta -1/3}) t^{1/3}. \end{aligned} \end{aligned}$$
(4.114)

Therefore,

$$\begin{aligned} \mathbb {P}\left( I_{D_\gamma }\cap \Big (\bigcap _{D_\gamma } E_{D_\gamma }\Big )\right) \le \mathbb {P}(L_{\mathcal {L}^+\rightarrow E}\le \mu t - (C t^{\beta -1/3}) t^{1/3}). \end{aligned}$$
(4.115)

Further, denote by \(Z^+\) the orthogonal projection of \(E\) on \(\mathcal {L}^{+}\), i.e., \(Z^+=\lfloor \frac{1-\eta }{2}t \rfloor (-1,1)\). Then, since \(L_{\mathcal {L}^{+}\rightarrow E}\ge L_{Z^+ \rightarrow E}\), it follows that

$$\begin{aligned} (4.115)\le \mathbb {P}(L_{Z^+\rightarrow E}\le \mu t - (C t^{\beta -1/3}) t^{1/3}). \end{aligned}$$
(4.116)

Moreover, since \(L_{Z^+\rightarrow E}\overset{d}{=}L_{0\rightarrow \left( \lfloor \frac{1+\eta }{2}t\rfloor , \lfloor \frac{1+\eta }{2}t\rfloor \right) }\) we can apply the bound of Proposition 4.3 (with \(\ell \rightarrow (1+\eta ) t/2\), \(\eta \rightarrow 1\), and \(s\ell ^{1/3}\rightarrow C t^\beta \)) to obtain

$$\begin{aligned} (4.116)\le \tilde{C} \exp (-\tilde{c} t^{3\beta /2-1/2}) \end{aligned}$$
(4.117)

for some constants \(\tilde{C},\tilde{c}>0\).

Since \(\beta -1/3 \le 3\beta /2-1/2\) for \(\beta \in (1/3,1]\), we conclude that for all \(t\) large enough

$$\begin{aligned} \mathbb {P}(I_{D_\gamma })\le C \exp (-c t^{\beta -1/3}), \end{aligned}$$
(4.118)

for some \(t\)-independent constants \(C,c>0\), which is the claimed result. \(\square \)

Proof of Proposition 4.18

The proof is a straightforward consequence of Proposition 4.25, since the cardinality of the family of points \(\{D_\gamma \}_{\gamma \in [0,1-t^{\beta -1}]}\) is only of order \(t\). \(\square \)

Proof of Proposition 4.19

The proof is very similar to the one of Proposition 4.18. Note first that for \(\gamma > 1-t^{\nu -1}\) we have \(\mathbb {P}(D_\gamma \in \pi _{\mathcal {L}^{+}\rightarrow E^{+}}^{\max })=0\). For the analogue of Proposition 4.23, one only has to replace \(E\) by \(E^{+}\) in (4.100), which amounts to replacing \(\eta \) by \(\tilde{\eta }=\frac{(1-\gamma )\eta t-t^{\nu }}{(1-\gamma )t-t^{\nu }}\rightarrow _{t\rightarrow \infty } \eta \) in (4.105) and \(\mu _{\mathrm{pp},\gamma }\) by \(\mu _{\mathrm{pp},\gamma }^{+}=(1-\gamma -t^{\nu -1})\left( 1+\sqrt{\frac{\eta -t^{\nu -1}}{1-t^{\nu -1}}}\right) ^{2}\rightarrow _{t\rightarrow \infty } \mu _{\mathrm{pp},\gamma }\), and then to apply Proposition 4.2 to this new point-to-point LPP. The following analogue of Proposition 4.24 is a bit different.

Proposition 4.26

Fix an \(0< \eta < 1\), \(\nu ,\beta \in (1/3,1)\), and \(\gamma \in [0,1-t^{\beta -1}]\). Let us set \(\varepsilon = C t^{\beta -1}\). Then for all \(t\) large enough it holds

$$\begin{aligned} \frac{(\mu _{\gamma }^{+}+\mu _{\mathrm{pp},\gamma }^{+}+\varepsilon -\mu ^{+})t}{t^{1/3}}\le -C t^{\beta -1/3}, \end{aligned}$$
(4.119)

with \(C= (1-\sqrt{\eta })^2/4\), and

$$\begin{aligned} \mu ^{+}{=}2(1{+}\eta )-4t^{\nu -1},\quad \mu _{\gamma }^{+}{=}\gamma \mu ^+,\quad \mu _{\mathrm{pp},\gamma }^{+}{=}(1-\gamma -t^{\nu -1})\left( 1+\sqrt{\frac{\eta -t^{\nu -1}}{1-t^{\nu -1}}}\right) ^{2}.\nonumber \\ \end{aligned}$$
(4.120)

Proof

Using \(\sqrt{\frac{\eta -t^{\nu -1}}{1-t^{\nu -1}}}<\sqrt{\eta }\) for \(\eta <1\), we have \(\mu _{\mathrm{pp},\gamma }^{+}\le (1-\gamma )(1+\sqrt{\eta })^2\) so that

$$\begin{aligned} \frac{(\mu _{\gamma }^{+}+\mu _{\mathrm{pp},\gamma }^{+}+\varepsilon -\mu ^{+})t}{t^{1/3}}\le C t^{\beta -1/3}-(1-\gamma )\left( t^{2/3}(1-\sqrt{\eta })^2-4t^{\nu -1/3}\right) .\nonumber \\ \end{aligned}$$
(4.121)

Then, using \(\nu <1\) and \(1-\gamma \ge t^{\beta -1}\) we have, for \(t\) large enough,

$$\begin{aligned} (4.121) \le C t^{\beta -1/3}-t^{\beta -1/3}(1-\sqrt{\eta })^2/2=-C t^{\beta -1/3}. \end{aligned}$$
(4.122)

\(\square \)

With these two analogous statements at hand, we can adapt the proof of Proposition 4.25, simply replacing again \(E\) by \(E^{+}\) in (4.116), and then again applying Proposition 4.3 with \(\ell \rightarrow \frac{1+\eta }{2}t-t^{\nu }\) to obtain a bound analogous to (4.117), which finishes the proof. \(\square \)

Proof of Proposition 4.21

The proof of Proposition 4.21 is almost identical, so let us indicate just the minor modifications. We replace \(\mathcal {L}^+\) with the point \((\lfloor (\eta -1)t\rfloor ,0)\); now \(\mu =4\) and \(\mu _\gamma =4\gamma \). Further, there is one simplification: the step (4.116) is not needed (we would have equality there). \(\square \)

Proof of Proposition 4.22

The analogue of Proposition 4.23 can be proven almost identically: one has \(\mu _\gamma =\left( 1+\sqrt{\frac{1+\gamma }{\gamma }}\right) ^{2}\gamma \), \(\mu _{\mathrm{pp},\gamma }=4(1-\gamma )\), and uses Proposition 4.2 twice.

The analogue of Proposition 4.24 is again a bit different.

Proposition 4.27

Fix a \(\beta \in (1/3,1]\) , and \(\gamma \in [0,1-t^{\beta -1}]\). Let us set \(\varepsilon = C t^{\beta -1}\). Then for all \(t\) large it holds

$$\begin{aligned} \frac{(\mu _{\gamma }+\mu _{\mathrm{pp},\gamma }+\varepsilon -\mu )t}{t^{1/3}}\le -C t^{\beta -1/3}, \end{aligned}$$
(4.123)

with \(C= (3-2\sqrt{2})/4\), and

$$\begin{aligned} \mu _\gamma =\left( 1+\sqrt{\frac{1+\gamma }{\gamma }}\right) ^{2}\gamma ,\quad \mu =(1+\sqrt{2})^{2},\quad \mu _{\mathrm{pp},\gamma }=4(1-\gamma ). \end{aligned}$$
(4.124)

Proof of Proposition 4.27

We have

$$\begin{aligned} \mu _{\gamma }+\mu _{\mathrm{pp},\gamma }-\mu =2 \left( \sqrt{\frac{1}{\gamma }+1}-1\right) \gamma -2 \sqrt{2}+2 \end{aligned}$$
(4.125)

which is increasing in \(\gamma \). Further, it holds

$$\begin{aligned} \mu _{\gamma }+\mu _{\mathrm{pp},\gamma }-\mu =(\gamma -1)\frac{3-2\sqrt{2}}{\sqrt{2}}+\mathcal {O}((\gamma -1)^2). \end{aligned}$$
(4.126)

Thus by choosing \(\gamma =1-t^{\beta -1}\) we get

$$\begin{aligned} \frac{(\mu _{\gamma }+\mu _{\mathrm{pp},\gamma }+\varepsilon -\mu )t}{t^{1/3}}&\le -t^{\beta -1/3} \left( \frac{3-2\sqrt{2}}{\sqrt{2}}-C\right) + \mathcal {O}(t^{2\beta -4/3})\le -C t^{\beta -1/3}\nonumber \\ \end{aligned}$$
(4.127)

for \(t\) large enough.
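Both the monotonicity of (4.125) in \(\gamma \) and the expansion (4.126) can be double-checked numerically; the following sketch (grid points and tolerances are illustrative, not part of the proof) evaluates \(\mu _\gamma +\mu _{\mathrm{pp},\gamma }-\mu \) from (4.124).

```python
import math

def gap(g):
    # mu_gamma + mu_pp,gamma - mu from (4.124); vanishes at gamma = 1
    mu_g = (1 + math.sqrt((1 + g) / g))**2 * g
    mu_pp = 4 * (1 - g)
    mu = (1 + math.sqrt(2))**2
    return mu_g + mu_pp - mu

# monotonicity in gamma on a sample grid in (0, 1)
grid = [0.05 * k for k in range(1, 20)]
vals = [gap(g) for g in grid]
assert all(a < b for a, b in zip(vals, vals[1:]))

# first-order expansion (4.126) near gamma = 1
slope = (3 - 2 * math.sqrt(2)) / math.sqrt(2)
for eps in (1e-3, 1e-4):
    assert abs(gap(1 - eps) + slope * eps) < 10 * eps**2
```

The assertions confirm that the gap is strictly increasing on the sampled grid and matches the linearization \((\gamma -1)(3-2\sqrt{2})/\sqrt{2}\) up to second order.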

The analogue of Proposition 4.25 can be proven almost identically, the only difference being that the step (4.116) is not needed. \(\square \)

4.2.2 Proof of Proposition 4.20

The proof is very close to the one of Proposition 4.18, therefore we will skip some of the details, focusing more on the differences.

Proposition 4.28

Fix an \(\eta >\alpha ^2/(2-\alpha )^2\), a \(\beta \in (1/3,1]\) and a \(\chi \in (0,2/3)\). Let us set \(\varepsilon = t^{-\chi }\). We define a finite family of events \(\big \{\tilde{E}_{D_{\gamma }}\big \}_{\gamma \in [0,1-t^{\beta -1}]}\) via

$$\begin{aligned} \tilde{E}_{D_\gamma }:=&\{\omega : L_{\mathcal {L}^{-}\rightarrow D_\gamma }(\omega )\le (\tilde{\mu }_\gamma +\varepsilon /2)t\} \cap \{L_{ D_\gamma \rightarrow E}(\omega )\le (\mu _{\mathrm{pp},\gamma } +\varepsilon /2)t\},\nonumber \\ \end{aligned}$$
(4.128)

where

$$\begin{aligned} \tilde{\mu }_{\gamma }= 2 \gamma \left( \frac{\eta }{\alpha }+\frac{1}{2-\alpha }\right) ,\quad \mu _{\mathrm{pp},\gamma }=(1-\gamma )(1+\sqrt{\eta })^{2}. \end{aligned}$$
(4.129)

Then

$$\begin{aligned} \mathbb {P}\bigg (\bigcup _{D_\gamma } \Omega {\setminus } \tilde{E}_{D_\gamma }\bigg )\le C' \exp (-c' t^{2/3-\chi }) \end{aligned}$$
(4.130)

for some constants \(C',c'>0\).

Proof

The proof is like the one of Proposition 4.23, with the only difference that we employ Corollary 4.17 instead of Corollary 4.8 to control the decay of \(\mathbb {P}(L_{\mathcal {L}^{-} \rightarrow D_\gamma }\ge (\tilde{\mu }_\gamma +\varepsilon /2)t)\). \(\square \)

Now we know that if a path goes through the point \(D_\gamma \), then its typical last passage time is at most \((\tilde{\mu }_\gamma +\mu _{\mathrm{pp},\gamma }+2\varepsilon )t\). However, the typical last passage time of the maximizing path is \(\tilde{\mu } t\), which is much larger.

Proposition 4.29

Fix \(\eta >\alpha ^2/(2-\alpha )^2\), \(\beta \in (1/3,1]\), and \(\gamma \in [0,1-t^{\beta -1}]\). Let us set \(\varepsilon = C t^{\beta -1}\). Then, for all \(t>0\), it holds

$$\begin{aligned} \frac{(\tilde{\mu }_{\gamma }+\mu _{\mathrm{pp},\gamma }+\varepsilon -\tilde{\mu })t}{t^{1/3}}\le -\tilde{C}t^{\beta -1/3}, \end{aligned}$$
(4.131)

with \(\tilde{C}=C=\frac{(\alpha -(2-\alpha )\sqrt{\eta })^2}{2\alpha (2-\alpha )}\), and

$$\begin{aligned} \tilde{\mu }= 2 \left( \frac{\eta }{\alpha }+\frac{1}{2-\alpha }\right) ,\quad \tilde{\mu }_{\gamma }= \gamma \tilde{\mu }, \quad \mu _{\mathrm{pp},\gamma }=(1-\gamma )(1+\sqrt{\eta })^{2}. \end{aligned}$$
(4.132)

Proof

A simple computation gives

$$\begin{aligned} \frac{(\tilde{\mu }_{\gamma }+\mu _{\mathrm{pp},\gamma }+\varepsilon -\tilde{\mu })t}{t^{1/3}}&= (\gamma -1) \frac{(\alpha -(2-\alpha )\sqrt{\eta })^2}{\alpha (2-\alpha )}t^{2/3}+C t^{\beta -1/3}\nonumber \\&\le -t^{\beta -1/3} \left( \frac{(\alpha -(2-\alpha )\sqrt{\eta })^2}{\alpha (2-\alpha )}-C\right) \le -C t^{\beta -1/3}\qquad \qquad \quad \end{aligned}$$
(4.133)

where we used \(\alpha <1\) and \(\gamma -1\le -t^{\beta -1}\) and the fact that \(\eta >\alpha ^2/(2-\alpha )^2\). \(\square \)
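The first line of (4.133) rests on the algebraic identity \(\tilde{\mu }_{\gamma }+\mu _{\mathrm{pp},\gamma }-\tilde{\mu }=(\gamma -1)\frac{(\alpha -(2-\alpha )\sqrt{\eta })^2}{\alpha (2-\alpha )}\), which a quick numerical sanity check confirms (the sampled parameter values are arbitrary; this is an illustration, not part of the proof).

```python
import math

def lhs(alpha, eta, g):
    mu_t = 2 * (eta / alpha + 1 / (2 - alpha))   # tilde-mu from (4.132)
    mu_pp = (1 - g) * (1 + math.sqrt(eta))**2    # mu_pp,gamma from (4.132)
    return g * mu_t + mu_pp - mu_t               # tilde-mu_gamma + mu_pp,gamma - tilde-mu

def rhs(alpha, eta, g):
    coeff = (alpha - (2 - alpha) * math.sqrt(eta))**2 / (alpha * (2 - alpha))
    return (g - 1) * coeff

for alpha in (0.3, 0.7, 0.95):
    for eta in (0.5, 1.0, 2.0):
        for g in (0.0, 0.4, 0.99):
            assert abs(lhs(alpha, eta, g) - rhs(alpha, eta, g)) < 1e-12
```

Note the identity holds for all parameter values; the assumption \(\eta >\alpha ^2/(2-\alpha )^2\) is only needed to make the coefficient strictly positive.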

We can now proceed to the final proposition.

Proposition 4.30

Fix an \(\eta >\alpha ^2/(2-\alpha )^2\), a \(\beta \in (1/3,1]\), and let \(\gamma \in [0,1-t^{\beta -1}]\). Then there exists a \(t_0>0\) such that, for all \(t\ge t_0\), it holds

$$\begin{aligned} \mathbb {P}(\{\omega :D_\gamma \in \pi ^\mathrm{max}_{\mathcal {L}^{-}\rightarrow E}(\omega )\}) \le \tilde{C}\exp (-c\, t^{\beta -1/3}), \end{aligned}$$
(4.134)

for some \(t\)-independent constants \(\tilde{C},c>0\).

Proof of Proposition 4.30

This proof is very close to the one of Proposition 4.25. This time we choose \(\varepsilon = \frac{C}{2} t^{\beta -1}\) with \(C=\frac{(\alpha -(2-\alpha )\sqrt{\eta })^2}{2\alpha (2-\alpha )}\), and we denote by \(\tilde{I}_{D_\gamma }\) the event that the maximizer from \(\mathcal {L}^-\) to \(E\) passes through the point \(D_\gamma \). Then,

$$\begin{aligned} \mathbb {P}(\tilde{I}_{D_\gamma })\le \mathbb {P}\left( \tilde{I}_{D_\gamma }\cap \left( \bigcap _{D_\gamma } \tilde{E}_{D_\gamma }\right) \right) +\mathbb {P}\left( \left( \bigcap _{D_\gamma } \tilde{E}_{D_\gamma }\right) ^c\right) . \end{aligned}$$
(4.135)

Using Corollary 4.17 we can bound the second term as \(\exp (-c' t^{\beta -1/3})\). By Propositions 4.28 and 4.29 we obtain

$$\begin{aligned} \begin{aligned} L_{\mathcal {L}^-\rightarrow E}(\omega )&\le (\tilde{\mu }_\gamma +\mu _{\mathrm{pp},\gamma }+\varepsilon )t=\tilde{\mu }t + (\tilde{\mu }_\gamma +\mu _{\mathrm{pp},\gamma }+\varepsilon -\tilde{\mu })t \\&\le \tilde{\mu }t - (\tilde{C} t^{\beta -1/3}) t^{1/3} \end{aligned} \end{aligned}$$
(4.136)

for \(\omega \in \tilde{I}_{D_\gamma }\) and at the same time in each of the \(\tilde{E}_{D_\gamma }\)’s. Therefore,

$$\begin{aligned} \mathbb {P}\left( \tilde{I}_{D_\gamma }\cap \left( \bigcap _{D_\gamma } \tilde{E}_{D_\gamma }\right) \right) \le \mathbb {P}\left( L_{\mathcal {L}^-\rightarrow E}\le \tilde{\mu } t - (\tilde{C} t^{\beta -1/3}) t^{1/3}\right) . \end{aligned}$$
(4.137)

The following step is slightly different from the previous proof. Define

$$\begin{aligned} Z^-= (\kappa t/2,-\kappa t/2),\quad B=(\kappa t,0), \end{aligned}$$
(4.138)

where \(\kappa =\eta -\alpha ^2/(2-\alpha )^2\). Then, since \(L_{\mathcal {L}^{-}\rightarrow E}\ge L_{Z^- \rightarrow B}+ L_{B\rightarrow E}\), it follows that

$$\begin{aligned} (4.137)&\le \mathbb {P}\left( L_{Z^-\rightarrow B}\!+\!L_{B\rightarrow E}\!\le \! \tilde{\mu }t - (\tilde{C} t^{\beta -1/3}) t^{1/3}\right) \nonumber \\&\le \mathbb {P}\bigg (L_{Z^-\rightarrow B}\!\le \! \tilde{\mu }_1 t \!-\! \frac{\tilde{C} t^{\beta -1/3}}{2} t^{1/3}\bigg )\!+\! \mathbb {P}\bigg (L_{B\rightarrow E}\!\le \! \tilde{\mu }_2 t \!-\! \frac{\tilde{C} t^{\beta -1/3}}{2} t^{1/3}\bigg ),\nonumber \\ \end{aligned}$$
(4.139)

where \(\tilde{\mu }_1=2\kappa /\alpha \) and \(\tilde{\mu }_2=\tilde{\mu }-\tilde{\mu }_1=4/(2-\alpha )^2\). We can finally apply the bound of Proposition 4.3 to the two point-to-point problems and finish the proof as in Proposition 4.25. \(\square \)
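The splitting \(\tilde{\mu }=\tilde{\mu }_1+\tilde{\mu }_2\) used in (4.139) can be verified directly from the definitions of \(\kappa \), \(\tilde{\mu }_1\), and \(\tilde{\mu }_2\); a small numerical sketch (sample parameters arbitrary, not part of the argument):

```python
def mu_split(alpha, eta):
    # kappa, tilde-mu_1, tilde-mu_2 as defined after (4.138)-(4.139)
    kappa = eta - alpha**2 / (2 - alpha)**2
    mu1 = 2 * kappa / alpha              # leading order of L_{Z^- -> B}
    mu2 = 4 / (2 - alpha)**2             # leading order of L_{B -> E}
    mu_t = 2 * (eta / alpha + 1 / (2 - alpha))  # tilde-mu
    return mu1, mu2, mu_t

for alpha in (0.2, 0.5, 0.9):
    for eta in (0.5, 1.0, 3.0):
        m1, m2, mt = mu_split(alpha, eta)
        assert abs(m1 + m2 - mt) < 1e-12
```

Indeed, \(\tilde{\mu }_1+\tilde{\mu }_2=2\eta /\alpha -2\alpha /(2-\alpha )^2+4/(2-\alpha )^2=2\eta /\alpha +2/(2-\alpha )=\tilde{\mu }\).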

Proof of Proposition 4.20

The proof is a straightforward consequence of Proposition 4.30, since the cardinality of the family of points \(\{D_\gamma \}_{\gamma \in [0,1-t^{\beta -1}]}\) is only of order \(t\). \(\square \)

4.3 Verification of Assumptions 1–3

Proof of Corollary 2.2

Assumption 1 is fulfilled through Propositions 4.7 and 4.16. Note that taking \(\hat{\sigma }_\eta ,\tilde{\sigma }_\eta \) or \(\hat{\sigma }_{\eta _{0}},\tilde{\sigma }_{\eta _{0}}\) yields the same limits. Let \(\tilde{\mu }_\eta =2(\eta /\alpha +1/(2-\alpha ))\) and \(\mu _\eta =2(1+\eta )\) be the leading order terms of the two LPP problems for \(\eta \). The shift in \(G_2\) comes from the fact that \(\frac{(\mu -\tilde{\mu }_\eta )t}{t^{1/3}}=-\frac{2u}{\alpha }\) and \(\frac{(\mu -\mu _\eta )t}{t^{1/3}}=-2u\). Assumption 2 is directly satisfied via Propositions 4.1 and 4.7 with \(E^{+}=(\eta t-t^{\nu },t-t^{\nu })\). Finally, Assumption 3 is precisely the content of Propositions 4.18 and 4.20.

Proof of Corollary 2.3

Clearly, any maximizing path \(\pi _{\mathcal {L}^{+}\rightarrow (\eta t,t)}^{\max }\) starts off at \((-\lfloor \beta _0 t +b t^{1/3}\rfloor ,0)\). Let \(\tilde{\mu }_\eta =2(\eta /\alpha +1/(2-\alpha ))\) and \(\mu _{\mathrm{pp},\eta }=4+2(u+b)t^{-2/3}\) be the leading order terms of the two LPP problems. Then we have \(\frac{(4-\tilde{\mu }_\eta )t}{t^{1/3}}=-\frac{2u}{\alpha }\), \(\frac{(4-\mu _{\mathrm{pp},\eta })t}{t^{1/3}}=-2(u+b)\). Assumption 1 is fulfilled through Propositions 4.1 and 4.16. The requirement \(\alpha <1\) comes from the condition \(\eta _0>\alpha ^2/(2-\alpha )^2\) in Proposition 4.16. Assumption 2 is directly satisfied via Propositions 4.1 and 4.7. Finally, Assumption 3 is precisely the content of Propositions 4.21 and 4.20.

Proof of Corollary 2.4

Any maximizing path \(\pi _{\mathcal {L}^{+}\rightarrow (\eta t,t)}^{\max }\) starts off from \((-\lfloor \beta t \rfloor ,0)\). Let \(\mu _{\mathrm{pp},\eta }=\big (1+\sqrt{1+\beta }\big )^{2}+\big (1+\frac{1}{\sqrt{1+\beta }}\big )ut^{-2/3}\) be the leading order of \(L_{\mathcal {L}^{+}\rightarrow (\eta t,t)}\), i.e. \(\frac{(\mu -\mu _{\mathrm{pp},\eta })t}{t^{1/3}}=-\big (1+\frac{1}{\sqrt{1+\beta }}\big )u\), so Assumption 1 is fulfilled through Proposition 4.1 with \(G_1(s)= F_2\big (s/\sigma -u\big (1+1/\sqrt{1+\beta }\big )/\sigma \big )\). Note now \(L_{\mathcal {L}^{-}\rightarrow (\eta t, t)}\overset{d}{=}L_{0\rightarrow (\eta t, (1+\beta )t)}\), implying that the leading order of this LPP problem is \(\mu _{\mathrm{pp},\gamma }=\big (1+\sqrt{1+\beta }\big )^{2}+\big (1+\sqrt{1+\beta }\big )ut^{-2/3}\), so that \(\frac{(\mu -\mu _{\mathrm{pp},\gamma })t}{t^{1/3}}=-u\big (1+\sqrt{1+\beta }\big )\), which shows \(G_2(s)=F_2\big (s/\sigma -u\big (1+\sqrt{1+\beta }\big )/\sigma \big )\). Assumption 2 is directly satisfied via Proposition 4.1. Finally, Assumption 3 holds by Proposition 4.22.

5 Derivation of the kernel for TASEP with \(\alpha \)-particles

In order to prove Proposition 4.9, we first study the system with only \(M\) \(\alpha \)-particles. We denote by \(\mathbb {P}^{(M)}\) the probability measure of this system. The system we are actually considering is then recovered by taking \(M\rightarrow \infty \). We first recall the generic theorem for joint distributions in TASEP, specialized to our jump rates and initial configuration.

Proposition 5.1

(Proposition 4 in [14]) Let us consider particles starting from

$$\begin{aligned} x_j (0)= 2(M-j),\quad 1 \le j \le M, \qquad x_j(0)= -j+M,\quad j> M \end{aligned}$$
(5.1)

and having jump rates \(v_j\) given by

$$\begin{aligned} v_j=\alpha ,\quad 1\le j\le M,\qquad v_j=1,\quad j> M. \end{aligned}$$
(5.2)

Denote by \(x_{j}(t)\) the position of particle \(j\) at time \(t\). Then

$$\begin{aligned} \mathbb {P}^{(M)}(x_{n}(t) > s) =\det ({1\!\!1}-\chi _{s} K_{n,t} \chi _{s})_{\ell ^{2}(\mathbb {Z})}, \end{aligned}$$
(5.3)

where \(\chi _{s}={1\!\!1}(x <s)\). The kernel \(K_{n,t}\) is given by

$$\begin{aligned} K_{n,t}(x_1,x_2)= \sum _{k=1}^{n}\Psi _{n-k}^{n,t}(x_1)\Phi _{n-k}^{n,t}(x_2). \end{aligned}$$
(5.4)

The functions \(\Psi _{n-j}^{n,t}\) are given by

$$\begin{aligned} \Psi _{n-j}^{n,t}(x)=\frac{1}{2\pi \mathrm{i}}\oint \limits \limits _{\Gamma _{0}}\frac{\mathrm{d}w}{w}\frac{e^{tw}}{w^{x-x_{j}(0)+n-j}}\prod _{k=j+1}^{n}(w-v_k). \end{aligned}$$
(5.5)

The functions \(\{\Phi _{n-j}^{n,t}\}_{1\le j \le n}\) are characterized by the two conditions:

$$\begin{aligned} \langle \Psi _{n-j}^{n,t},\Phi _{n-k}^{n,t}\rangle :=\sum _{x\in \mathbb {Z}}\Psi _{n-j}^{n,t}(x)\Phi _{n-k}^{n,t}(x)=\delta _{j,k},\quad 1\le j,k\le n, \end{aligned}$$
(5.6)

and

$$\begin{aligned} \mathrm{span}\{ \Phi _{n-j}^{n,t}(x), 1\le j\le n \}=\mathrm{span}\{1,x,\ldots ,x^{n-M-1},\alpha ^{x},x\alpha ^{x},\ldots ,x^{M-1}\alpha ^{x}\}.\nonumber \\ \end{aligned}$$
(5.7)

The following lemma gives explicit formulas for the biorthogonal functions \(\Phi ,\Psi \) defined in the preceding proposition. We only give them for \(n\ge M+1\), since these are the ones we need.

Lemma 5.2

Let \(n\ge M+1\). We then have two cases:

  1. (a)

    for \(j=M+1,\ldots ,n,\)

    $$\begin{aligned} \begin{aligned}&\Psi _{n-j}^{n,t}(x)=\frac{1}{2\pi \mathrm{i}}\oint \limits \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1}\frac{e^{t(w+1)}}{(w+1)^{x-M+n}} w^{n-j}\\&\Phi _{n-j}^{n,t}(x)=\frac{1}{2 \pi \mathrm{i}}\oint \limits \limits _{\Gamma _{0}}\mathrm{d}z \frac{(z+1)^{x-M+n}}{e^{t(z+1)}z^{n-j+1}} \end{aligned} \end{aligned}$$
    (5.8)
  2. (b)

    for \(j=1,\ldots ,M,\)

    $$\begin{aligned} \Psi _{n-j}^{n,t}(x)&= \dfrac{1}{2\pi \mathrm{i}}\oint \limits \limits _{\Gamma _{-1}}\dfrac{\mathrm{d}w}{w+1}\dfrac{w^{n-M}(w+1-\alpha )^{M-j}}{(w+1)^{x-2M+n+j}}e^{t(w+1)}\nonumber \\ \Phi _{n-j}^{n,t}(x)&= \dfrac{1}{(2 \pi \mathrm{i})^{2}}\oint \limits \limits _{\Gamma _{\alpha -1}}\mathrm{d}v \oint \limits _{\Gamma _{0,v}}\mathrm{d}z\dfrac{(z+1)^{x-M+n}}{e^{t(z+1)}z^{n-M}}\nonumber \\&\times \dfrac{2v+2-\alpha }{((v+1)(v+1-\alpha ))^{M-j+1}}\frac{1}{z-v} \end{aligned}$$
    (5.9)

Proof

The formulas for \(\Psi _{n-j}^{n,t}\) are easily obtained by plugging (5.1) and (5.2) into (5.5).

In case (a), using the derivative formula for the residue, one sees that \(\Phi _{n-j}^{n,t}\) is a polynomial of degree \(n-j\) and thus

$$\begin{aligned} \text {span}\{\Phi _{n-j}^{n,t}(x),j=M+1,\ldots ,n\}=\text {span}\{1,x,\ldots ,x^{n-M-1}\}. \end{aligned}$$
(5.10)

In case (b), taking the residue at \(z=v\), one gets

$$\begin{aligned} \Phi _{n-j}^{n,t}(x)&=\frac{1}{2 \pi \mathrm{i}}\oint \limits \limits _{\Gamma _{\alpha -1}}\text {d}v \frac{(2v+2-\alpha )(v+1)^{x-2M+j-1}}{e^{t(v+1)}v^{n-M}(v+1-\alpha )^{M-j+1}}\end{aligned}$$
(5.11)
$$\begin{aligned}&+\frac{1}{(2\pi \mathrm{i})^{2}}\oint \limits \limits _{\Gamma _{\alpha -1}}\text {d}v\oint \limits \limits _{\Gamma _{0}} \text {d}z\frac{(z+1)^{x-M+n}}{e^{t(z+1)}z^{n-M}}\frac{2v+2-\alpha }{((v+1) (v+1-\alpha ))^{M-j+1}}\frac{1}{z-v}. \end{aligned}$$
(5.12)

Now, \((5.11)=\alpha ^{x}p_{M-j}(x),\) where \(p_{M-j}\) is a polynomial of degree \(M-j\). For (5.12), we choose the integration paths such that \(|v|>|z|\), apply the identity \((z-v)^{-1}=-v^{-1}\sum _{\ell \ge 0} (z/v)^\ell \), and obtain

$$\begin{aligned} \, (5.12)=\sum _{\ell \ge 0}\frac{-1}{(2\pi \mathrm{i})^{2}} \oint \limits _{\Gamma _{\alpha -1}}\text {d}v\frac{(2v+2-\alpha )v^{-(\ell +1)}}{((v+1)(v+1-\alpha ))^{M-j+1}}\oint \limits _{\Gamma _{0}}\text {d}z\frac{(z+1)^{x-M+n}}{e^{t(z+1)} z^{n-M-\ell }},\nonumber \\ \end{aligned}$$
(5.13)

which for \(\ell =0,\ldots ,n-M-1\) is a polynomial of degree \(n-M-1-\ell \), and vanishes for larger \(\ell \). Therefore (5.7) holds. Next we check the biorthogonality relations (5.6). We shall repeatedly use

$$\begin{aligned} \sum _{x\ge M-n}\left( \frac{z+1}{w+1}\right) ^{x-M+n} = \frac{w+1}{w-z}, \end{aligned}$$
(5.14)

which holds if \(|w+1|>|z+1|\).
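Identity (5.14) is just a geometric series: with \(r=(z+1)/(w+1)\) and \(|r|<1\), one has \(\sum _{k\ge 0}r^{k}=1/(1-r)=(w+1)/(w-z)\). A quick numerical sanity check with arbitrary sample points satisfying \(|w+1|>|z+1|\):

```python
# sample complex points with |w+1| > |z+1| (values are illustrative)
w = 1.0 + 0.5j    # so w + 1 = 2 + 0.5j
z = -0.2 - 0.3j   # so z + 1 = 0.8 - 0.3j

r = (z + 1) / (w + 1)
assert abs(r) < 1  # required for convergence of the series

# partial sum of the left-hand side of (5.14)
partial = sum(r**k for k in range(200))
assert abs(partial - (w + 1) / (w - z)) < 1e-10
```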

Case \(M+1\le j,k\le n\):

$$\begin{aligned} \langle \Psi _{n-j}^{n,t},\Phi _{n-k}^{n,t}\rangle&=\sum \limits _{x \in \mathbb {Z}}\dfrac{1}{(2\pi \mathrm{i})^{2}} \displaystyle \oint \limits _{\Gamma _{-1}}\dfrac{\mathrm{d}w}{w+1}\dfrac{e^{t(w+1)}w^{n-j}}{(w+1)^{x-M+n}} \displaystyle \oint \limits _{\Gamma _{0}}\mathrm{d}z \dfrac{(z+1)^{x-M+n}}{e^{t(z+1)}z^{n-k+1}}\nonumber \\&=\sum \limits _{x \ge M-n}\dfrac{1}{(2\pi \mathrm{i})^{2}} \displaystyle \oint \limits _{\Gamma _{-1}}\dfrac{\mathrm{d}w}{w+1}\dfrac{e^{t(w+1)}w^{n-j}}{(w+1)^{x-M+n}} \displaystyle \oint \limits _{\Gamma _{0}}\mathrm{d}z \dfrac{(z+1)^{x-M+n}}{e^{t(z+1)}z^{n-k+1}}\nonumber \\ \end{aligned}$$
(5.15)

since \(\Psi ^{n,t}_{n-j}(x)=0\) for \(x<M-n\). We can now choose the integration paths such that \(|w+1|>|z+1|\). Applying (5.14), the pole at \(w=-1\) disappears and instead there is a simple pole at \(w=z\),

$$\begin{aligned} \begin{aligned} (5.15)&=\frac{1}{(2\pi \mathrm{i})^{2}}\oint \limits _{\Gamma _{0}}\mathrm{d}z \frac{1}{e^{t(z+1)}z^{n-k+1}} \oint \limits _{\Gamma _{z}}\mathrm{d}w\frac{e^{t(w+1)}w^{n-j}}{w-z}\\&=\frac{1}{2\pi \mathrm{i}}\oint \limits _{\Gamma _{0}}\mathrm{d}z\frac{1}{z^{j-k+1}}=\delta _{j,k}. \end{aligned} \end{aligned}$$
(5.16)

Case \(M+1\le j\le n\) and \(1\le k\le M\): Also in this case we first restrict the sum over \(x\ge M-n\), use (5.14), and integrate out the remaining simple pole at \(w=z\), with the result

$$\begin{aligned}&\langle \Psi _{n-j}^{n,t},\Phi _{n-k}^{n,t}\rangle =\sum _{x \in \mathbb {Z}}\frac{1}{(2\pi \mathrm{i})^{3}} \oint \limits \limits _{\Gamma _{-1}}\frac{\text {d}w}{w+1}\frac{e^{t(w+1)}w^{n-j}}{(w+1)^{x-M+n}}\nonumber \\&\qquad \times \oint \limits \limits _{\Gamma _{\alpha -1}}\text {d}v \oint \limits \limits _{\Gamma _{0,v}}\text {d}z\frac{(z+1)^{x-M+n}}{e^{t(z+1)}z^{n-M}}\frac{2v+2-\alpha }{((v+1)(v+1-\alpha ))^{M-k+1}}\frac{1}{z-v}\nonumber \\&\quad =\frac{1}{(2\pi \mathrm{i})^{2}}\oint \limits \limits _{\Gamma _{\alpha -1}}\text {d}v\oint \limits \limits _{\Gamma _{0,v}}\text {d}z \frac{1}{z^{j-M}}\frac{(2v+2-\alpha )}{(z-v)((v+1)(v+1-\alpha ))^{M-k+1}}.\qquad \end{aligned}$$
(5.17)

Since \(j>M\), the integrand in \(z\) decays at least as fast as \(1/z^2\) as \(|z|\rightarrow \infty \) and has no poles other than \(z=0\) and \(z=v\). Therefore the integrand in \(z\) has vanishing residue at infinity, and consequently \((5.17) = 0\).

Case \(1\le j,k\le M\): Also in this case we first restrict the sum over \(x\ge M-n\), use (5.14), and integrate out the remaining simple pole at \(w=z\). This gives

$$\begin{aligned} \langle \Psi _{n-j}^{n,t},\Phi _{n-k}^{n,t}\rangle =\frac{1}{(2\pi \mathrm{i})^{2}}\oint \limits \limits _{\Gamma _{\alpha -1}}\text {d}v \oint \limits \limits _{\Gamma _{0,v}}\text {d}z \frac{(2v+2-\alpha )((z+1)(z+1-\alpha ))^{M-j}}{((v+1)(v+1-\alpha ))^{M-k+1}(z-v)}.\nonumber \\ \end{aligned}$$
(5.18)

Now, the pole at \(z=0\) disappeared and the only contribution comes from the simple pole \(z=v\), i.e.,

$$\begin{aligned} (5.18)=\frac{1}{2\pi \mathrm{i}}\oint \limits \limits _{\Gamma _{\alpha -1}}\text {d}v\frac{2v+2-\alpha }{((v+1)(v+1-\alpha ))^{j-k+1}} =\frac{1}{2\pi \mathrm{i}}\oint \limits \limits _{\Gamma _0}\text {d}u\frac{1}{u^{j-k+1}}=\delta _{j,k},\nonumber \\ \end{aligned}$$
(5.19)

where we used the change of variables \(u=(v+1)(v+1-\alpha )\).
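The change of variables \(u=(v+1)(v+1-\alpha )\), \(\mathrm{d}u=(2v+2-\alpha )\,\mathrm{d}v\), maps a small loop around \(v=\alpha -1\) to a loop winding once around \(u=0\), which produces the \(\delta _{j,k}\) in (5.19). This can be sanity-checked by numerical contour integration; the value \(\alpha =0.6\) and the radius \(0.3\) below are illustrative choices (the radius keeps the other pole \(v=-1\) outside the contour).

```python
import cmath

def contour_integral(f, center, radius, n=4000):
    """(1/(2*pi*i)) * integral of f over a circle, via the periodic
    trapezoid rule (exponentially accurate for analytic integrands)."""
    total = 0j
    for m in range(n):
        v = center + radius * cmath.exp(2j * cmath.pi * m / n)
        dv_dtheta = 1j * (v - center)  # derivative of the parametrization
        total += f(v) * dv_dtheta
    return total * (2 * cmath.pi / n) / (2j * cmath.pi)

alpha = 0.6  # illustrative rate value
for j in range(1, 4):
    for k in range(1, 4):
        integrand = lambda v: (2 * v + 2 - alpha) / (
            (v + 1) * (v + 1 - alpha)) ** (j - k + 1)
        val = contour_integral(integrand, alpha - 1, 0.3)
        assert abs(val - (1.0 if j == k else 0.0)) < 1e-8
```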

Case \(1\le j\le M\) and \(M+1\le k\le n\): Doing the first steps as in the three other cases above, we get

$$\begin{aligned} \langle \Psi _{n-j}^{n,t},\Phi _{n-k}^{n,t}\rangle&= \sum _{x \in \mathbb {Z}}\frac{1}{(2\pi \mathrm{i})^{2}} \oint \limits \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1}\frac{w^{n-M}(w+1-\alpha )^{M-j}}{(w+1)^{x+n-2M+j}}e^{t(w+1)}\nonumber \\&\times \oint \limits \limits _{\Gamma _{0}}\hbox {d} z \frac{(z+1)^{x-M+n}}{e^{t(z+1)}z^{n-k+1}}\nonumber \\&= \frac{1}{2 \pi \mathrm{i}}\oint \limits \limits _{\Gamma _0}\text {d}z\frac{((z+1)(z+1-\alpha ))^{M-j}}{z^{M-k+1}}=0,\qquad \end{aligned}$$
(5.20)

since for \(k>M\) the pole at \(z=0\) disappears. \(\square \)

Later, we will take the \(M\rightarrow \infty \) limit with \(n\) fixed. To this end we give a compact form of \(K_{n+M,t}\).

Corollary 5.3

Let \(K_{n,t}\) be the kernel defined in (5.4). Then

$$\begin{aligned} K_{n+M,t}=K_{n,M,t}^{(0)}+K_{n,t}^{(1)}+K_{n,t}^{(2)}, \end{aligned}$$
(5.21)

where \(K_{n,t}^{(1)}\) and \(K_{n,t}^{(2)}\) are given in (4.44) and

$$\begin{aligned} K_{n,M,t}^{(0)}(x_1,x_2)&= \frac{-1}{(2\pi \mathrm{i})^{3}}\oint \limits \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1}\oint \limits \limits _{\Gamma _{\alpha -1}} \mathrm{d}v\oint \limits \limits _{\Gamma _{0,v}}\mathrm{d}z\frac{e^{t(w+1)}w^{n}}{(w+1)^{x_1+n}}\frac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n}}\nonumber \\&\times \frac{1}{z-v}\frac{2v+2-\alpha }{(v-w)(v+w+2-\alpha )}\bigg (\frac{(w+1)(w+1-\alpha )}{(v+1)(v+1-\alpha )}\bigg )^{M}.\nonumber \\ \end{aligned}$$
(5.22)

Proof

We first show that

$$\begin{aligned} K_{n,M,t}^{(0)}(x_1,x_2)+K_{n,t}^{(1)}(x_1,x_2)=\sum _{k=1}^{M}\Psi _{n+M-k}^{n+M,t}(x_1)\Phi _{n+M-k}^{n+M,t}(x_2). \end{aligned}$$
(5.23)

We have

$$\begin{aligned}&\sum _{k=1}^{M}\Psi _{n+M-k}^{n+M,t}(x_1)\Phi _{n+M-k}^{n+M,t}(x_2)\nonumber \\&\quad =\sum _{k=1}^{M} \frac{1}{(2\pi \mathrm{i})^{3}}\oint \limits \limits _{\Gamma _{\alpha -1}}\mathrm{d}v \oint \limits \limits _{\Gamma _{0,v}} \mathrm{d}z\oint \limits \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1} \frac{e^{t(w+1)}w^{n}}{(w+1)^{x_1+n}}\frac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n}}\nonumber \\&\qquad \times \frac{2v+2-\alpha }{(v+1)(v+1-\alpha )} \bigg (\frac{(w+1-\alpha )(w+1)}{ (v+1)(v+1-\alpha )}\bigg )^{M-k}\frac{1}{z-v}.\qquad \qquad \end{aligned}$$
(5.24)

We apply a finite geometric sum formula to \(q=\frac{(w+1-\alpha )(w+1)}{ (v+1)(v+1-\alpha )}\). For this the contours need to satisfy \(q\ne 1\). We take the contours such that

$$\begin{aligned} -\Gamma _{-1}-2+\alpha \subset \Gamma _{\alpha -1}, \Gamma _{-1}\not \subset \Gamma _{\alpha -1},\Gamma _{\alpha -1} \subset \Gamma _{0,v} ,\,\quad \text {and}\quad q\ne 1. \end{aligned}$$
(5.25)

Note that none of these conditions alter (5.24). An explicit choice of paths satisfying (5.25) is later given in (5.32). Using the linearity of the integral, we get

$$\begin{aligned} (5.24)&= \frac{1}{(2\pi \mathrm{i})^{3}} \oint \limits \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1} \oint \limits \limits _{\Gamma _{\alpha -1,-w-2+\alpha }}\mathrm{d}v \oint \limits \limits _{\Gamma _{0,v}} \mathrm{d}z \frac{e^{t(w+1)}w^{n}}{(w+1)^{x_1+n}}\frac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n}}\nonumber \\&\times \frac{2v+2-\alpha }{(v-w)(v+w+2-\alpha )} \bigg (1-\bigg (\frac{(w+1-\alpha )(w+1)}{(v+1)(v+1-\alpha )}\bigg )^{M}\bigg )\frac{1}{z-v}\nonumber \\&= \frac{1}{(2\pi \mathrm{i})^{3}} \oint \limits \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1} \oint \limits \limits _{\Gamma _{-w-2+\alpha }}\mathrm{d}v \oint \limits \limits _{\Gamma _{0,v}} \mathrm{d}z \frac{e^{t(w+1)}w^{n}}{(w+1)^{x_1+n}}\frac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n}}\nonumber \\&\times \frac{2v+2-\alpha }{(v-w)(v+w+2-\alpha )}\frac{1}{z-v}\nonumber \\&-\frac{1}{(2\pi \mathrm{i})^{3}} \oint \limits \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1} \oint \limits \limits _{\Gamma _{\alpha -1}}\mathrm{d}v \oint \limits \limits _{\Gamma _{0,v}} \mathrm{d}z \frac{e^{t(w+1)}w^{n}}{(w+1)^{x_1+n}}\frac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n}}\nonumber \\&\times \frac{2v+2-\alpha }{(v-w)(v+w+2-\alpha )} \bigg (\frac{(w+1-\alpha )(w+1)}{(v+1)(v+1-\alpha )}\bigg )^{M}\frac{1}{z-v}. \end{aligned}$$
(5.26)

Here we used that in the first triple integral the pole at \(v=\alpha -1\) is no longer present. Computing the remaining residue at \(v=-w-2+\alpha \) then yields

$$\begin{aligned} (5.26)=K_{n,M,t}^{(0)}(x_1,x_2)+K_{n,t}^{(1)}(x_1,x_2). \end{aligned}$$
(5.27)

Next we define

$$\begin{aligned} K_{n,t}^{(2)}(x_1,x_2):=\sum _{k=M+1}^{n+M}\Psi _{n+M-k}^{n+M,t}(x_1)\Phi _{n+M-k}^{n+M,t}(x_2). \end{aligned}$$
(5.28)

Note that \(\Phi _{n+M-k}^{n+M,t}\) is zero for \(k\ge n+M+1\), thus

$$\begin{aligned} K_{n,t}^{(2)}(x_1,x_2)=\sum _{k=M+1}^{\infty }\frac{1}{(2\pi \mathrm{i})^{2}}\oint \limits \limits _{\Gamma _{0}}\mathrm{d}z \oint \limits \limits _{\Gamma _{-1}}\frac{\mathrm{d}w}{w+1}\frac{e^{t(w+1)}w^{n+M-k}}{(w+1)^{x_1+n}}\frac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n+M-k+1}}.\nonumber \\ \end{aligned}$$
(5.29)

Assuming the contours are such that \(|w|>|z|\), summing the geometric series yields

$$\begin{aligned} K_{n,t}^{(2)}(x_1,x_2)=\frac{1}{(2\pi \mathrm{i})^{2}}\oint \limits \limits _{\Gamma _{0}}\mathrm{d}z \oint \limits \limits _{\Gamma _{-1,z}}\frac{\mathrm{d}w}{w+1}\frac{e^{t(w+1)}w^{n}}{(w+1)^{x_1+n}}\frac{(z+1)^{x_2+n}}{e^{t(z+1)}z^{n}}\frac{1}{w-z}.\nonumber \\ \end{aligned}$$
(5.30)

Finally, it is straightforward to check that the contribution of the simple pole at \(w=z\) is zero, so that we can drop it in the final expression of \(K_{n,t}^{(2)}\). \(\square \)

Proposition 5.4

Let \(K_{n,M,t}^{(0)},K_{n,t}^{(1)}, K_{n,t}^{(2)}\) be as in (4.44) and (5.22). Then, for \(x_1,x_2 \le \ell \), we have the following bounds:

$$\begin{aligned} \begin{aligned}&|K_{n,M,t}^{(0)}(x_1,x_2)|\le C\,e^{c x_2}q^{M}\\&|K_{n,t}^{(1)}(x_1,x_2)|\le C\, e^{c x_2}\\&|K_{n,t}^{(2)}(x_1,x_2)|\le C\, e^{c x_2} \end{aligned}, \end{aligned}$$
(5.31)

where \(q \in [0,1)\), \(c>0\) is a constant, and \(C\) depends only on \(\ell ,n,t\).

Proof

To bound \(|K_{n,M,t}^{(0)}(x_1,x_2)|,\) we set

$$\begin{aligned} \Gamma _{-1}=-1+r_{1}e^{\mathrm{i}s_1},\quad \Gamma _{\alpha -1}=\alpha -1+r_{2}e^{\mathrm{i}s_2},\quad \Gamma _{0,v}=r_{3}e^{\mathrm{i}s_3}, \end{aligned}$$
(5.32)

with \(r_1=\frac{\alpha ^{2}}{10}\), \(r_2 =\frac{\alpha }{\sqrt{1.5}}\), \(r_3 =1-\alpha +r_2+\frac{|r_1+r_2-\alpha |}{2}\). It is straightforward to check that (5.32) satisfies (5.25). We will bound the different parts of \(K_{n,M,t}^{(0)}\). First we note

$$\begin{aligned}&q:=\frac{\max _{\Gamma _{-1}}|(w+1)(w+1-\alpha )|}{\min _{\Gamma _{\alpha -1}}|(v+1)(v+1-\alpha )|}<\frac{\sqrt{1.5}|\alpha (-1-\alpha /10)|}{10(1-1/\sqrt{1.5})}<1,\nonumber \\&\qquad \frac{\max _{\Gamma _{0,v}}|(z+1)^{x_2+n}|}{\min _{\Gamma _{0,v}}|e^{t(z+1)}z^{n}|}\le C \,(1+r_{3})^{x_2}\le C e^{c x_2}. \end{aligned}$$
(5.33)
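With the radii of (5.32), one checks that \(q=r_1(r_1+\alpha )/(r_2(\alpha -r_2))\) exactly (on \(\Gamma _{-1}\) the factor \(|w+1|=r_1\) is constant, and on \(\Gamma _{\alpha -1}\) the factor \(|v+1-\alpha |=r_2\) is constant), and that this stays strictly below \(1\) for all \(\alpha \in (0,1]\). A numerical sketch sampling the two contours (the values of \(\alpha \) are arbitrary):

```python
import cmath

def q_value(alpha, n=1000):
    """Sampled version of q from (5.33) with the radii r1, r2 of (5.32)."""
    r1 = alpha ** 2 / 10
    r2 = alpha / 1.5 ** 0.5
    ws = (-1 + r1 * cmath.exp(2j * cmath.pi * m / n) for m in range(n))
    vs = (alpha - 1 + r2 * cmath.exp(2j * cmath.pi * m / n) for m in range(n))
    num = max(abs((w + 1) * (w + 1 - alpha)) for w in ws)
    den = min(abs((v + 1) * (v + 1 - alpha)) for v in vs)
    return num / den

for alpha in (0.1, 0.5, 0.9, 1.0):
    assert q_value(alpha) < 1
```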

The remaining parts can now be bounded by a constant:

$$\begin{aligned}&\dfrac{\max _{\Gamma _{\alpha -1}}|2v+2-\alpha |}{\min _{\Gamma _{-1},\Gamma _{\alpha -1}}|(v-w)(v+w+2-\alpha )|}\le \frac{\alpha +2r_2}{(r_2-r_1)(\alpha -\alpha /\sqrt{1.5}-\alpha ^{2}/10)} < C\nonumber \\&\quad \dfrac{1}{\min _{\Gamma _{-1},\Gamma _{0,v}}|z-v|} < C,\nonumber \\&\quad \dfrac{\max _{\Gamma _{-1}}|e^{t(w+1)}w^{n}|}{\min _{\Gamma _{-1}}|(w+1)^{x_1+n}|}\le \tilde{C}\, r_{1}^{-x_1}\le C, \end{aligned}$$
(5.34)

where the last estimate in (5.34) holds since \(0<r_1<1\) and \(x_1 \le \ell \). Putting these bounds together gives the estimate for \(K_{n,M,t}^{(0)}\). Note that the contour for \(z\) contains \(\alpha -2-w\). Therefore, in \(K_{n,t}^{(1)}\) we can choose the same contours for \(z,w\) as before and use the estimates from (5.33), (5.34). Noting

$$\begin{aligned} \max _{\Gamma _{-1},\Gamma _{0,\alpha -2-w}}|z-(\alpha -2-w)|^{-1}\le C, \end{aligned}$$
(5.35)

one gets the same bound as for \(K_{n,M,t}^{(0)}\), only without the \(q^{M}\).

As for \(K_{n,t}^{(2)}\), we can again choose the same contours for \(z,w\) as before. Since \(|w-z|\) is bounded from below, we get the same estimate as for \(K_{n,t}^{(1)}\).\(\square \)

Now we are ready to prove Proposition 4.9.

Proof of Proposition 4.9

Denote for clarity by \(x_{n+M}^{M}(t)\) the position of particle number \(n+M\) at time \(t\) in the system with \(M\) slow particles (defined via (5.1) and (5.2)), and by \(x_{n}(t)\) the position of particle \(n\) at time \(t\) in the system with infinitely many slow particles (defined via (4.41) and (4.42)). First we note that

$$\begin{aligned} \lim _{M\rightarrow \infty }\mathbb {P}^{(M)}\big (x_{n+M}^{M}(t) > s\big )=\mathbb {P}\big (x_{n}(t) > s\big ). \end{aligned}$$
(5.36)

This follows since \(x_{n+M}^{M}(0)=x_{n}(0)\) and since in TASEP the position of a particle up to a fixed time \(t\) depends, with probability one, only on finitely many other particles, as can be seen from the graphical construction. Therefore, by Corollary 5.3, it remains to prove

$$\begin{aligned} \lim _{M\rightarrow \infty } \det ({1\!\!1}-\chi _s K_{n+M,t} \chi _s)_{\ell ^2(\mathbb {Z})}=\det ({1\!\!1}-\chi _s \tilde{K}_{n,t} \chi _s)_{\ell ^2(\mathbb {Z})}, \end{aligned}$$
(5.37)

where we used the notation \(K_{n+M,t}=K_{n,M,t}^{(0)}+K_{n,t}^{(1)}+ K_{n,t}^{(2)}\).

By the bounds in (5.31), \(K_{n,M,t}^{(0)}\) converges pointwise to \(0\) as \(M\rightarrow \infty \). Thus it remains to show that the Fredholm determinant converges as well. Consider the Fredholm series expansion

$$\begin{aligned} \det ({1\!\!1}-\chi _s K_{n+M,t} \chi _s)_{\ell ^2(\mathbb {Z})} {=} \sum _{m\ge 0}\frac{(-1)^m}{m!} \sum _{x_1\le s}\ldots \sum _{x_m\le s} \det [K_{n+M,t}(x_i,x_j)]_{1\le i,j\le m}.\nonumber \\ \end{aligned}$$
(5.38)

By (5.31), we have

$$\begin{aligned} \left| \frac{(-1)^{m}}{m!}\det \big (K_{n+M,t}(x_k,x_l)\big )_{1\le k,l\le m}\right| \le \frac{1}{m!}e^{c(x_1+\cdots +x_m)}C^{m}(2+q^{M})^{m}m^{m/2},\nonumber \\ \end{aligned}$$
(5.39)

where \(m^{m/2}\) is the Hadamard bound for matrices with entries of absolute value less than or equal to \(1\). Since \(q<1\), we may replace \(2+q^{M}\) by \(3\) to get a summable uniform bound. Thus we may apply dominated convergence to (5.38) to take the limit \(M\rightarrow \infty \) inside the sum, which proves the result. \(\square \)
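The Hadamard bound invoked in (5.39) states that an \(m\times m\) matrix with entries of absolute value at most \(1\) has determinant of absolute value at most \(m^{m/2}\) (each row has Euclidean norm at most \(\sqrt{m}\)). A toy numerical illustration with random matrices (not part of the argument):

```python
import itertools
import random

def det(a):
    """Determinant via the Leibniz formula; adequate for tiny matrices."""
    n = len(a)
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # signature of the permutation via its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = 1.0
        for i in range(n):
            prod *= a[i][perm[i]]
        total += (-1.0 if inv % 2 else 1.0) * prod
    return total

random.seed(0)
for n in (2, 3, 4):
    for _ in range(50):
        a = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
        assert abs(det(a)) <= n ** (n / 2) + 1e-12  # Hadamard bound
```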