1 Introduction

In this paper we will consider the so-called two-time distribution in directed last-passage percolation with geometric weights. This last-passage percolation model has several interpretations. It can be related to the Totally Asymmetric Simple Exclusion Process (TASEP) and to local random growth models. It is a basic example of a solvable model in the KPZ universality class. It has been less clear to what extent the two-time problem is also solvable, but recently there have been some developments in this direction [1, 5, 9, 13, 17, 18]. The approach in this paper is different in many ways from that in our previous work [17]. It is closer to standard computations for determinantal processes, more straightforward and simpler.

To define the model, let \(\left( w(i,j)\right) _{i,j\ge 1}\) be independent geometric random variables with parameter q,

$$\begin{aligned} \mathbb {P}[w(i,j)=k]=(1-q)q^k,\quad k\ge 0. \end{aligned}$$

Consider the last-passage times

$$\begin{aligned} G(m,n)=\max _{\pi :(1,1)\nearrow (m,n)} \sum _{(i,j)\in \pi } w(i,j), \end{aligned}$$
(1.1)

where the maximum is over all up/right paths from (1, 1) to (m, n), see [14]. We are interested in the correlation between \(G(m_1,n_1)\) and \(G(m_2,n_2)\), when \((m_1,n_1)\) and \((m_2,n_2)\) are ordered in the time-like direction, i.e. \(m_1<m_2\) and \(n_1<n_2\). To see why this is called a time-like direction, and to give one reason why we are interested in the two-time problem, let us reinterpret the model as a discrete polynuclear growth model. It is clear from (1.1) that

$$\begin{aligned} G(m,n)=\max (G(m-1,n), G(m,n-1))+w(m,n). \end{aligned}$$
(1.2)
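The recursion (1.2) makes the model easy to simulate. The following is a minimal numerical sketch (assuming numpy; the lattice size, the parameter q and the number of samples are arbitrary choices) that generates G(n, n) directly from (1.1)–(1.2); for moderate n the sample mean of G(n, n)/n is already close to the constant \(c_2=2\sqrt{q}/(1-\sqrt{q})\) of (2.1).

```python
import numpy as np

def last_passage_time(m, n, q, rng):
    # G(i, j) from the recursion (1.2): G(i, j) = max(G(i-1, j), G(i, j-1)) + w(i, j),
    # with i.i.d. geometric weights P[w = k] = (1 - q) q^k, k >= 0.
    w = rng.geometric(1.0 - q, size=(m, n)) - 1   # numpy's geometric is supported on {1, 2, ...}
    G = np.zeros((m + 1, n + 1), dtype=np.int64)  # G(., .) = 0 outside Z_+^2
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            G[i, j] = max(G[i - 1, j], G[i, j - 1]) + w[i - 1, j - 1]
    return G[m, n]

rng = np.random.default_rng(0)
q, n = 0.5, 100
samples = [last_passage_time(n, n, q, rng) for _ in range(50)]
print(np.mean(samples) / n, "vs c_2 =", 2 * np.sqrt(q) / (1 - np.sqrt(q)))
```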

Let \(G(m,n)=0\) if \((m,n)\notin \mathbb {Z}_+^2\), and define the height function h(x, t) by

$$\begin{aligned} h(x,t)=G\left( \frac{t+x+1}{2},\frac{t-x+1}{2}\right) \end{aligned}$$
(1.3)

for \(x+t\) odd, and extend it to all \(x\in \mathbb {R}\) by linear interpolation. Then (1.2) leads to a growth rule for h(x, t), and this is the discrete time and space polynuclear growth model. We think of \(x\mapsto h(x,t)\) as the height above x at time t, and we get a random one-dimensional interface. Let the constants \(c_i\) be given by (2.1). It is known, see [15], that the rescaled process

$$\begin{aligned} \mathcal {H}_T(\eta ,t)=\frac{h(2c_1\eta (tT)^{2/3},2tT)-c_2tT}{c_3(tT)^{1/3}}, \end{aligned}$$
(1.4)

as a process in \(\eta \in \mathbb {R}\) for a fixed \(t>0\), converges as \(T\rightarrow \infty \) to \(\mathcal {A}_2(\eta )-\eta ^2\), where \(\mathcal {A}_2(\eta )\) is the Airy-2-process [21]. In particular, for any fixed \(\eta ,t\),

$$\begin{aligned} \lim _{T\rightarrow \infty }\mathbb {P}[\mathcal {H}_T(\eta ,t)\le \xi -\eta ^2]= F_2(\xi )=\det (I-K_{\text {Ai}\,})_{L^2(\xi ,\infty )}, \end{aligned}$$

where \(F_2\) is the Tracy–Widom distribution, and

$$\begin{aligned} K_{\text {Ai}}(x,y)=\int _0^\infty \text {Ai}\,(x+s)\text {Ai}\,(y+s)\,ds, \end{aligned}$$

is the Airy kernel. The two-time problem is concerned with the question of the correlation between heights at different times. What is the limiting joint distribution of \(\mathcal {H}_T(\eta _1,t_1)\) and \(\mathcal {H}_T(\eta _2,t_2)\) for \(t_1<t_2\), as \(T\rightarrow \infty \)? From (1.3), we see that this is related to understanding the correlation between last-passage times in the time-like direction. That a time separation of order T is the correct order to get non-trivial correlations is quite clear if we think about how much random environment e.g. G(n, n) and G(N, N), \(n<N\), share. It can also be seen from the slow de-correlation phenomenon, see [4, 12]. Looking at (1.4) we see that we have the fluctuation exponent 1/3 (fluctuations have order \(T^{1/3}\)), the spatial correlation exponent 2/3, and we also have the time correlation exponent 1 = 3/3 as explained. This is the KPZ 1:2:3 scaling. For further references and more on random growth models in the KPZ-universality class and related interacting particle systems, we refer to the survey papers [2, 3, 22].
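The Tracy–Widom distribution \(F_2\) above is a Fredholm determinant of the Airy kernel and can be evaluated numerically by a standard Nyström-type quadrature discretization. The sketch below (assuming numpy/scipy; the grid size and the truncation of the half-line are ad hoc choices) uses the equivalent closed form \(K_{\text {Ai}}(x,y)=(\text {Ai}(x)\text {Ai}'(y)-\text {Ai}'(x)\text {Ai}(y))/(x-y)\) and checks it against the integral formula above.

```python
import numpy as np
from scipy.special import airy
from scipy.integrate import quad
from numpy.polynomial.legendre import leggauss

def airy_kernel(x, y, eps=1e-9):
    # closed form of K_Ai; on the diagonal it reduces to Ai'(x)^2 - x Ai(x)^2
    Aix, Aipx, _, _ = airy(x)
    Aiy, Aipy, _, _ = airy(y)
    d = np.asarray(x - y, dtype=float)
    diag = np.abs(d) < eps
    return np.where(diag, Aipx**2 - x * Aix**2,
                    (Aix * Aipy - Aipx * Aiy) / np.where(diag, 1.0, d))

def tracy_widom_F2(xi, n=60, cutoff=12.0):
    # F_2(xi) = det(I - K_Ai) on L^2(xi, infinity), half-line truncated at xi + cutoff
    nodes, weights = leggauss(n)
    x = 0.5 * cutoff * (nodes + 1.0) + xi
    w = 0.5 * cutoff * weights
    X, Y = np.meshgrid(x, x, indexing="ij")
    M = np.sqrt(np.outer(w, w)) * airy_kernel(X, Y)
    return np.linalg.det(np.eye(n) - M)

# quick check of the closed form against the integral formula for K_Ai
x0, y0 = -0.3, 0.7
val, _ = quad(lambda s: airy(x0 + s)[0] * airy(y0 + s)[0], 0, np.inf)
print(val, float(airy_kernel(x0, y0)))

print(tracy_widom_F2(0.0))   # P[TW_2 <= 0]; close to 1 since that distribution is centred near -1.77
```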

The main result of the present paper is a limit theorem for the following two-time probability. Fix m, M, n, N with \(1\le m<M\) and \(1\le n<N\). For \(a,A\in \mathbb {Z}\), we will consider the probability

$$\begin{aligned} P(a,A)=\mathbb {P}[G(m,n)< a,\,G(M,N)<A], \end{aligned}$$
(1.5)

in the appropriate scaling limit. The result is formulated in Theorem 2.1 below.

The first studies of the two-time problem, using a non-rigorous approach based on the replica method, were given by Dotsenko in [9, 10]; see also [11]. However, the formulas are believed not to be correct [5]. The replica method has also been used by De Nardis and Le Doussal [5] to derive very interesting results in the limit \(t_1/t_2\rightarrow 1\) and, for arbitrary \(t_1/t_2\), in the partial tail of the joint law of \(\mathcal {H}_T(\eta _1,t_1)\) and \(\mathcal {H}_T(\eta _2,t_2)\) when \(\mathcal {H}_T(\eta _1,t_1)\) is large and positive. In [18], Le Doussal gives a conjecturally exact formula for the limit \(t_1/t_2\rightarrow 0\). See also [13] for some rigorous work with quantitative results for the height correlation in the stationary case, which is not investigated here. We will not discuss these limits, although it would be interesting to do so. There are very interesting experimental and numerical results on the two-time problem by K. A. Takeuchi and collaborators, see [6, 23, 24].

Recently there has been a striking new development on the two-time problem, and more generally the multi-time problem, by Baik and Liu [1]. They consider the totally asymmetric simple exclusion process (TASEP) in a circular geometry, the periodic TASEP. Baik and Liu are able to give formulas for the multi-time distribution as contour integrals of Fredholm determinants, and take the scaling limit in the so-called relaxation time scale, \(T=O(L^{3/2})\), where L is the period. In principle their formulas include the problem studied here, but they are not able to take the scaling limit that we study in this paper. It would be interesting to understand the relation between the two approaches. For some comments on the multi-time problem in the setting used here see Remark 2.2. A related problem is to understand the Markovian time evolution of the whole limiting process with some fixed initial condition, the so called KPZ-fixed point. There has recently been very interesting progress on this problem by Matetski, Quastel and Remenik, see [19, 20].

An outline of the paper is as follows. In Sect. 2 we give the formula for the two-time distribution using an integral of a Fredholm determinant and state the main theorem. The main theorem is proved in Sect. 3 using a sequence of lemmas proved in Sects. 4 and 5. In Sect. 7, we briefly discuss the relation to the result in our previous work [17].

Notation Throughout the paper \(1(\cdot )\) denotes an indicator function, \(\gamma _r(a)\) is a positively oriented circle of radius r around the point a, and \(\gamma _r=\gamma _r(0)\). Also, \(\Gamma _c\) is the upward oriented straight line through the point c, \(t\mapsto c+it\), \(t\in \mathbb {R}\).

2 Results

Let \(0<t_1<t_2\), \(\eta _1,\eta _2\in \mathbb {R}\) and \(\xi _1,\xi _2\in \mathbb {R}\) be given. Furthermore, T is a parameter that will tend to infinity. To formulate the scaling limit we need the constants,

$$\begin{aligned} c_0= & {} q^{-1/3}(1+\sqrt{q})^{1/3},\quad c_1=q^{-1/6}(1+\sqrt{q})^{2/3},\nonumber \\ c_2= & {} \frac{2\sqrt{q}}{1-\sqrt{q}},\quad c_3=\frac{q^{1/6}(1+\sqrt{q})^{1/3}}{1-\sqrt{q}} \end{aligned}$$
(2.1)

We will investigate the asymptotics of the probability distribution defined by (1.5). The appropriate scaling is then

$$\begin{aligned} n&=t_1T-c_1\eta _1(t_1T)^{2/3},\quad m=t_1T+c_1\eta _1(t_1T)^{2/3}\nonumber \\ N&=t_2T-c_1\eta _2(t_2T)^{2/3},\quad M=t_2T+c_1\eta _2(t_2T)^{2/3}\nonumber \\ a&=c_2t_1T+c_3\xi _1(t_1T)^{1/3}, \quad A=c_2t_2T+c_3\xi _2(t_2T)^{1/3}. \end{aligned}$$
(2.2)

Let \(\Delta t=t_2-t_1\), and write

$$\begin{aligned} \alpha =\left( \frac{t_1}{\Delta t}\right) ^{1/3}. \end{aligned}$$
(2.3)

Introduce the notation

$$\begin{aligned} \Delta \eta =\eta _2\left( \frac{t_2}{\Delta t}\right) ^{2/3}-\eta _1\left( \frac{t_1}{\Delta t}\right) ^{2/3},\quad \Delta \xi =\xi _2\left( \frac{t_2}{\Delta t}\right) ^{1/3}-\xi _1\left( \frac{t_1}{\Delta t}\right) ^{1/3}.\qquad \end{aligned}$$
(2.4)
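For later reference, and as a small sketch of how the various rescaled quantities fit together, the following helper (a sketch, assuming numpy; the function names are ours) collects the constants (2.1), the scaling (2.2) and the quantities (2.3)–(2.4).

```python
import numpy as np

def kpz_constants(q):
    # the constants c_0, c_1, c_2, c_3 of (2.1)
    sq = np.sqrt(q)
    c0 = q ** (-1 / 3) * (1 + sq) ** (1 / 3)
    c1 = q ** (-1 / 6) * (1 + sq) ** (2 / 3)
    c2 = 2 * sq / (1 - sq)
    c3 = q ** (1 / 6) * (1 + sq) ** (1 / 3) / (1 - sq)
    return c0, c1, c2, c3

def lattice_point(T, t, eta, xi, q):
    # the scaling (2.2): the point (m, n) and the level a associated with (t, eta, xi)
    c0, c1, c2, c3 = kpz_constants(q)
    n = t * T - c1 * eta * (t * T) ** (2 / 3)
    m = t * T + c1 * eta * (t * T) ** (2 / 3)
    a = c2 * t * T + c3 * xi * (t * T) ** (1 / 3)
    return m, n, a

def rescaled_parameters(t1, t2, eta1, eta2, xi1, xi2):
    # alpha of (2.3) and Delta eta, Delta xi of (2.4)
    dt = t2 - t1
    alpha = (t1 / dt) ** (1 / 3)
    deta = eta2 * (t2 / dt) ** (2 / 3) - eta1 * (t1 / dt) ** (2 / 3)
    dxi = xi2 * (t2 / dt) ** (1 / 3) - xi1 * (t1 / dt) ** (1 / 3)
    return alpha, deta, dxi
```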

We will now define the limiting probability function. Before we can do that we need to define some functions. Fix \(\delta \) such that

$$\begin{aligned} \delta >\max (\eta _1,\alpha \Delta \eta ), \end{aligned}$$
(2.5)

and define

$$\begin{aligned} S_1(x,y)&=-\alpha e^{(\eta _1-\delta )x+(\delta -\alpha \Delta \eta )y}\int _0^\infty e^{(\alpha \Delta \eta -\eta _1)s}K_{\text {Ai}}(\xi _1+\eta _1^2-s,\xi _1+\eta _1^2 -x)\nonumber \\&\quad \times \, K_{\text {Ai}}(\Delta \xi +\Delta \eta ^2+\alpha s,\Delta \xi +\Delta \eta ^2+\alpha y)\,ds, \end{aligned}$$
(2.6)
$$\begin{aligned} T_1(x,y)&=\alpha e^{(\eta _1-\delta )x+(\delta -\alpha \Delta \eta )y} \int _{-\infty }^0 e^{(\alpha \Delta \eta -\eta _1)s} K_{\text {Ai}}(\xi _1+\eta _1^2-s,\xi _1+\eta _1^2-x)\nonumber \\&\quad \times K_{\text {Ai}}(\Delta \xi +\Delta \eta ^2+\alpha s, \Delta \xi +\Delta \eta ^2+\alpha y)\,ds, \end{aligned}$$
(2.7)
$$\begin{aligned} S_2(x,y)&=\alpha e^{(\delta -\alpha \Delta \eta )(y-x)}K_{\text {Ai}}(\Delta \xi +\Delta \eta ^2+\alpha x,\Delta \xi +\Delta \eta ^2+\alpha y), \end{aligned}$$
(2.8)

and

$$\begin{aligned} S_3(x,y)=e^{(\delta -\eta _1)(y-x)}K_{\text {Ai}}(\xi _1+\eta _1^2- x,\xi _1+\eta _1^2-y). \end{aligned}$$
(2.9)

Using these, we can define the functions

$$\begin{aligned} S(x,y)= & {} S_1(x,y)+1(x> 0)S_2(x,y)-S_3(x,y)1(y<0), \end{aligned}$$
(2.10)
$$\begin{aligned} T(x,y)= & {} -T_1(x,y)-1(x>0)S_2(x,y)+S_3(x,y)1(y< 0). \end{aligned}$$
(2.11)

Let u be a complex parameter and set

$$\begin{aligned} R(u)(x,y)=S(x,y)+u^{-1}T(x,y). \end{aligned}$$
(2.12)

Consider the space

$$\begin{aligned} X=L^2(\mathbb {R}_-,dx)\oplus L^2(\mathbb {R}_+,dx), \end{aligned}$$
(2.13)

and define the following matrix kernel on X,

$$\begin{aligned} K(u)(x,y)=\begin{pmatrix} R(u)(x,y) &{}\quad R(u)(x,y) \\ uR(u)(x,y) &{}\quad uR(u)(x,y) \end{pmatrix}. \end{aligned}$$
(2.14)

K(u) defines a trace-class operator on X, which we also denote by K(u). Let \(\gamma _r\) denote a circle around the origin of radius r with positive orientation. We define the two-time probability distribution by

$$\begin{aligned} F_{\text {two-time}}(\xi _1,\eta _1;\xi _2,\eta _2;\alpha )=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{1}{u-1}\det (I+K(u))_X\,du, \end{aligned}$$
(2.15)

where \(r>1\).
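The following is a rough numerical sketch (not taken from the paper) of how \(F_{\text {two-time}}\) in (2.15) could be evaluated: the kernels (2.6)–(2.11) are built from the closed form of the Airy kernel, the s-integrals in (2.6)–(2.7) and the two half-lines in (2.13) are truncated, the operator is discretized by Gauss–Legendre quadrature, and the u-integral over \(\gamma _r\) is approximated by the trapezoidal rule. All truncation lengths and grid sizes are ad hoc assumptions, \(\delta =\max (\eta _1,\alpha \Delta \eta )+1\) is one admissible choice in (2.5), and we use \(t_1/\Delta t=\alpha ^3\), \(t_2/\Delta t=1+\alpha ^3\), which follow from (2.3), to express (2.4) through \(\alpha \).

```python
import numpy as np
from scipy.special import airy
from numpy.polynomial.legendre import leggauss

def K_Ai(x, y, eps=1e-9):
    # Airy kernel via the closed form (Ai(x)Ai'(y) - Ai'(x)Ai(y)) / (x - y)
    ax, apx, _, _ = airy(x)
    ay, apy, _, _ = airy(y)
    d = np.asarray(x - y, dtype=float)
    diag = np.abs(d) < eps
    return np.where(diag, apx**2 - x * ax**2,
                    (ax * apy - apx * ay) / np.where(diag, 1.0, d))

def S_and_T(x, y, xi1, eta1, dxi, deta, alpha, delta, s_nodes, s_w):
    # S(x, y) and T(x, y) of (2.10)-(2.11); the s-integrals of (2.6)-(2.7)
    # are approximated by quadrature on the range covered by s_nodes
    pref = alpha * np.exp((eta1 - delta) * x + (delta - alpha * deta) * y)
    def integrand(s):
        return (np.exp((alpha * deta - eta1) * s)
                * K_Ai(xi1 + eta1**2 - s, xi1 + eta1**2 - x)
                * K_Ai(dxi + deta**2 + alpha * s, dxi + deta**2 + alpha * y))
    S1 = -pref * np.sum(s_w * integrand(s_nodes))    # integral over (0, infinity)
    T1 = pref * np.sum(s_w * integrand(-s_nodes))    # integral over (-infinity, 0)
    S2 = alpha * np.exp((delta - alpha * deta) * (y - x)) * \
        K_Ai(dxi + deta**2 + alpha * x, dxi + deta**2 + alpha * y)
    S3 = np.exp((delta - eta1) * (y - x)) * \
        K_Ai(xi1 + eta1**2 - x, xi1 + eta1**2 - y)
    S = S1 + (x > 0) * S2 - S3 * (y < 0)
    T = -T1 - (x > 0) * S2 + S3 * (y < 0)
    return S, T

def F_two_time(xi1, eta1, xi2, eta2, alpha,
               n=24, cutoff=8.0, n_s=40, s_max=12.0, r=2.0, n_u=32):
    # Delta eta, Delta xi of (2.4) expressed through alpha, and a delta satisfying (2.5)
    deta = eta2 * (1 + alpha**3) ** (2 / 3) - eta1 * alpha**2
    dxi = xi2 * (1 + alpha**3) ** (1 / 3) - xi1 * alpha
    delta = max(eta1, alpha * deta) + 1.0
    gx, gw = leggauss(n)
    xs = np.concatenate([-0.5 * cutoff * (gx + 1), 0.5 * cutoff * (gx + 1)])
    ws = np.concatenate([0.5 * cutoff * gw, 0.5 * cutoff * gw])
    sn, sw = leggauss(n_s)
    s_nodes, s_w = 0.5 * s_max * (sn + 1), 0.5 * s_max * sw
    Smat = np.empty((2 * n, 2 * n))
    Tmat = np.empty((2 * n, 2 * n))
    for i, x in enumerate(xs):
        for j, y in enumerate(xs):
            Smat[i, j], Tmat[i, j] = S_and_T(x, y, xi1, eta1, dxi, deta,
                                             alpha, delta, s_nodes, s_w)
    sq = np.sqrt(np.outer(ws, ws))
    total = 0.0
    for k in range(n_u):                      # trapezoidal rule for the u-integral in (2.15)
        u = r * np.exp(2j * np.pi * k / n_u)
        M = (Smat + Tmat / u) * sq            # R(u) of (2.12), with quadrature weights
        M[n:, :] *= u                         # rows with x > 0 carry the factor u, cf. (2.14)
        total += np.linalg.det(np.eye(2 * n) + M) * u / (u - 1)
    return (total / n_u).real

print(F_two_time(0.0, 0.0, 0.0, 0.0, alpha=1.0))   # a (crude) value of F_two-time at the origin
```

A finer grid (larger n, n_s, cutoff, s_max, n_u) improves the approximation; the point here is only to make the structure of (2.6)–(2.15) concrete.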

We can now formulate our main theorem.

Theorem 2.1

Let P(a, A) be defined as in (1.5) and consider the scaling (2.2). Then,

$$\begin{aligned} \lim _{T\rightarrow \infty } P(a,A)=F_{\text {two-time}}(\xi _1,\eta _1;\xi _2,\eta _2;\alpha ). \end{aligned}$$
(2.16)

The theorem will be proved in Sect. 3. The fact that K(u) is a trace-class operator is Lemma 4.1 below.

The formula for the two-time distribution can be written in different ways. In Sect. 6, we will give formulas suitable for studying the limits \(\alpha \rightarrow 0\), \(\alpha \rightarrow \infty \) and expansions in \(\alpha \) and \(1/\alpha \) respectively. We will not discuss these expansions here, but refer to [7] for more on this and comparison with the results in [18].

For comments on the relation between this formula and the formula derived in [17], see the discussion in Sect. 7.

Remark 2.2

It would be interesting to be able to prove the same type of scaling limit for the multi-time case, i.e. to consider the probability function

$$\begin{aligned} P(a_1,\dots ,a_{L})=\mathbb {P}\left[ G(m_1,n_1)<a_1,\dots ,G(m_{L}, n_{L})<a_{L}\right] , \end{aligned}$$

where \(m_1< m_2<\dots < m_L\), and \(n_1< n_2<\dots < n_L\). It is possible to write a formula analogous to (3.17) below but with \(L-1\) contour integrals. This can be proved in a very similar way as the proof of (3.17). We hope to say more on this problem in future work.

3 Proof of the main theorem

In this section we will prove the main theorem. Along the way we will use several lemmas that will be proved in Sects. 4 and 5.

Write

$$\begin{aligned} \mathbf {G}(m)=(G(m,1),\dots ,G(m,N)), \end{aligned}$$
(3.1)

for \(m\ge 0\), and a fixed \(N\ge 1\). Let \(\mathbf {G}(0)=0\). By \(\Delta \) we denote the finite difference operator defined on functions \(f:\mathbb {Z}\mapsto \mathbb {C}\) by \(\Delta f(x)=f(x+1)-f(x)\), which has the inverse

$$\begin{aligned} \Delta ^{-1} f(x)=\sum _{y=-\infty }^{x-1} f(y), \end{aligned}$$

for all functions f for which the series converges. The negative binomial weight is

$$\begin{aligned} w_m(x)=(1-q)^m\left( {\begin{array}{c}x+m-1\\ x\end{array}}\right) q^x 1(x\ge 0), \end{aligned}$$
(3.2)

for \(m\ge 1\), \(x\in \mathbb {Z}\). Write

$$\begin{aligned} W_N=\{\mathbf {x}=(x_1,\dots ,x_N)\in \mathbb {Z}^N\,,\,x_1\le \dots \le x_N\}. \end{aligned}$$
(3.3)

Note that \(\mathbf {G}(m)\in W_N\).

The following proposition is the starting point for the proof. It is proved in [16] following the paper by Warren [25], see also [8] for a more systematic treatment.

Proposition 3.1

The vectors \((\mathbf {G}(m))_{m\ge 0}\) form a Markov chain with transition function

$$\begin{aligned} \mathbb {P}[\mathbf {G}(m)=\mathbf {y}\,|\,\mathbf {G}(\ell )= \mathbf {x}]=\det (\Delta ^{j-i}w_{m-\ell }(y_j-x_i))_{1\le i,j\le N}, \end{aligned}$$
(3.4)

for any \(\mathbf {x},\mathbf {y}\in W_N\), \(m>\ell \ge 0\).
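As a sanity check, the transition function (3.4) can be compared with a direct simulation of the chain for a tiny example. The sketch below (assuming numpy; the vectors \(\mathbf {x}\), \(\mathbf {y}\), the parameters q, \(m-\ell \) and the sample size are arbitrary choices) evaluates the determinant on the right-hand side of (3.4), implementing \(\Delta ^{-1}\) as the sum defined above, and estimates the same probability by Monte Carlo using the recursion (1.2).

```python
import numpy as np
from math import comb

def w(m, t, q):
    # the negative binomial weight (3.2)
    return (1 - q) ** m * comb(t + m - 1, t) * q ** t if t >= 0 else 0.0

def delta_pow_w(p, m, t, q):
    # Delta^p applied to x -> w_m(x), evaluated at t; for p < 0 we use
    # Delta^{-1} f(t) = sum_{s < t} f(s) (a finite sum, since w_m vanishes on negatives)
    if p >= 0:
        return sum((-1) ** (p - s) * comb(p, s) * w(m, t + s, q) for s in range(p + 1))
    return sum(delta_pow_w(p + 1, m, s, q) for s in range(t))

def transition_prob(x, y, steps, q):
    # the determinant in (3.4) with m - l = steps
    N = len(x)
    A = [[delta_pow_w(j - i, steps, y[j] - x[i], q) for j in range(N)] for i in range(N)]
    return float(np.linalg.det(np.array(A)))

def simulate_step(x, steps, q, rng):
    # evolve (G(., 1), ..., G(., N)) by `steps` columns using the recursion (1.2)
    cur = list(x)
    for _ in range(steps):
        nxt = []
        for j in range(len(cur)):
            prev = nxt[j - 1] if j > 0 else 0
            nxt.append(max(prev, cur[j]) + int(rng.geometric(1 - q)) - 1)
        cur = nxt
    return tuple(cur)

rng = np.random.default_rng(1)
x, y, q, steps, trials = (0, 1), (1, 2), 0.3, 1, 200000
emp = np.mean([simulate_step(x, steps, q, rng) == y for _ in range(trials)])
print(transition_prob(x, y, steps, q), emp)   # the two numbers should be close
```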

Write

$$\begin{aligned} \Delta m=M-m,\quad \Delta n=N-n,\quad \Delta a=A-a, \end{aligned}$$
(3.5)

and

$$\begin{aligned} W_{N,n}(a)=\{\mathbf {x}\in W_N;\,x_n< a\}. \end{aligned}$$

We can then write

$$\begin{aligned}&P(a,A)=\sum _{\mathbf {x}\in W_{N,n}(a)}\sum _{\mathbf {y}\in W_{N,N}(A)}\nonumber \\&\quad \det (\Delta ^{j-i}w_m(x_j))_{1\le i,j\le N}\det (\Delta ^{j-i}w_{\Delta m}(y_j-x_i))_{1\le i,j\le N}. \end{aligned}$$
(3.6)

Here we would like to perform the sum over \(\mathbf {y}\), which is straightforward, and then the sum over \(\mathbf {x}\), which is tricky since we cannot use the Cauchy–Binet identity directly. An important step is part a) of the following lemma, which is proved in Sect. 4. The proof of (3.7) uses successive summations by parts and generalizes the proof of Lemma 3.2 in [16].

Lemma 3.2

Let \(f,g:\mathbb {Z}\mapsto \mathbb {R}\) be given functions and assume that there is an \(L\in \mathbb {Z}\) such that \(f(x)=g(x)=0\) if \(x<L\).

  1. (a)

    Let \(a_i,d_i\in \mathbb {Z}\), \(1\le i\le N\) and fix k, \(1\le k\le N\). Then,

    $$\begin{aligned}&\sum _{\mathbf {x}\in W_{N,k}(a)}\det \big (\Delta ^{j-a_i}f(x_j-y_i)\big )_{1\le i,j\le N}\det \big (\Delta ^{d_i-j}g(z_i-x_j)\big )_{1\le i,j\le N}\nonumber \\&\quad =\sum _{\mathbf {x}\in W_{N,k}(a)}\det \big (\Delta ^{k-a_i}f(x_j-y_i)\big )_{1\le i,j\le N}\det \big (\Delta ^{d_i-k}g(z_i-x_j)\big )_{1\le i,j\le N}. \end{aligned}$$
    (3.7)
  2. (b)

    For \(1\le n\le N\), we have the identity

    $$\begin{aligned} \sum _{\mathbf {x}\in W_{N,N}(A)}\det \big (\Delta ^{i-n}w_m(x_i-y_j)\big )_{1\le i,j\le N}=\det \big (\Delta ^{i-n-1}w_m(A-y_j)\big )_{1\le i,j\le N}.\nonumber \\ \end{aligned}$$
    (3.8)

If we use (3.7) and (3.8) in (3.6), we find

$$\begin{aligned} P(a,A)=\sum _{\mathbf {x}\in W_{N,n}(a)}\det \big (\Delta ^{n-i}w_m(x_j)\big )_{1\le i,j\le N}\det \big (\Delta ^{j-n-1}w_{\Delta m}(A-x_i)\big )_{1\le i,j\le N}.\nonumber \\ \end{aligned}$$
(3.9)

Before we show how we can use the Cauchy–Binet identity to do the summation in (3.9), we will modify it somewhat. This modification is a kind of orthogonalization procedure and will be important below for obtaining a Fredholm determinant. Let \(A=(a_{ij})\) and \(B=(b_{ij})\) be two \(N\times N\)-matrices that satisfy \(a_{ij}=0\) if \(j>i\) and \(b_{ij}=0\) if \(j<i\), so that A is lower- and B upper-triangular. Assume that

$$\begin{aligned} \det AB=\prod _{i=1}^Na_{ii}b_{ii}=1. \end{aligned}$$
(3.10)

For \(x\in \mathbb {Z}\), \(1\le i,j\le N\), we define

$$\begin{aligned} f_{0,1}(i,x)=\sum _{k=1}^Na_{ik}(-1)^n\Delta ^{n-k}w_m(x+a), \end{aligned}$$
(3.11)

and

$$\begin{aligned} f_{1,2}(x,j)=\sum _{k=1}^N(-1)^n\Delta ^{k-1-n}w_{\Delta m}(\Delta a-x)b_{kj}, \end{aligned}$$
(3.12)

where \(w_m\) is the negative binomial weight (3.2). If we shift \(x_i\rightarrow x_i+a\), \(1\le i\le N\), in (3.9), and use (3.10), (3.11) and (3.12), we get

$$\begin{aligned} P(a,A)=\sum _{\mathbf {x}\in W_{N,n}(0)}\det \big (f_{0,1}(i,x_j)\big )_{1\le i,j\le N}\det \big (f_{1,2}(x_i,j)\big )_{1\le i,j\le N}. \end{aligned}$$
(3.13)

This formula is the basis for the next lemma, the proof of which is based on the Cauchy–Binet identity. However, because of the restriction \(x_n< 0\) in the summation in (3.13), we cannot apply the identity directly. In order to state the result we need some further notation. Define

$$\begin{aligned} L_1(i,j)= & {} \sum _{x=-\infty }^{-1}f_{0,1}(i,x)f_{1,2}(x,j), \end{aligned}$$
(3.14)
$$\begin{aligned} L_2(i,j)= & {} \sum _{x=0}^{\infty }f_{0,1}(i,x)f_{1,2}(x,j). \end{aligned}$$
(3.15)

Let u be a complex parameter and set

$$\begin{aligned} L(i,j;u)=u^{1(i>n)}L_1(i,j)+u^{-1(i\le n)}L_2(i,j). \end{aligned}$$
(3.16)

Lemma 3.3

We have the formula,

$$\begin{aligned} P(a,A)=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{1}{u-1}\det \big (L(i,j;u)\big )_{1\le i,j\le N}du, \end{aligned}$$
(3.17)

for any \(r>1\).

The lemma is proved in Sect. 4. The contour integral comes from the need to capture the restriction \(x_n<0\) and still use the Cauchy–Binet identity.

We now come to the choice of the matrices A and B. The aim is to get a good formula for \(f_{0,1}\) and \(f_{1,2}\) and make it possible to write the determinant in (3.17) as a Fredholm determinant suitable for asymptotic analysis. Define

$$\begin{aligned} H_{n,m,x}(w)=\frac{w^n(1-w)^{x+m}}{\big (1-\frac{w}{1-q}\big )^m}. \end{aligned}$$
(3.18)

Using a generating function for the negative binomial weight (3.2), it is straightforward to show that for all \(m\ge 1\), \(k,x\in \mathbb {Z}\),

$$\begin{aligned} \Delta ^k w_m(x)=\frac{(-1)^{k-1}}{2\pi \mathrm {i}}\int _{\gamma _r}H_{k,m,x}(z)\frac{dz}{1-z}, \end{aligned}$$
(3.19)

if \(r>1\). For \(k,x\in \mathbb {Z}\), \(m\ge 1\), \(\epsilon \in \{0,1\}\) and \(0<\tau <1\), we define

$$\begin{aligned} \beta _k^{\epsilon }(m,a)=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _\tau }\zeta ^{k-1}\frac{\left( 1-\frac{\zeta }{1-q}\right) ^m}{(1-\zeta )^{a+m-\epsilon }}d\zeta . \end{aligned}$$
(3.20)

Note that \(\beta _0^{\epsilon }=1\) and \(\beta _k^{\epsilon }=0\) if \(k\ge 1\). By expanding \((z-\zeta )^{-1}\) in powers of \(\zeta /z\), we see that

$$\begin{aligned} \sum _{k=1}^N\frac{\beta _{k-i}^{\epsilon }(m,a)}{z^k}=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _\tau }\frac{(1-\zeta )^\epsilon }{H_{i,m,a}(\zeta )(z-\zeta )}\,d\zeta , \end{aligned}$$
(3.21)

provided \(|z|>\tau \).
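As a quick numerical illustration of (3.19) (a sketch assuming numpy; the parameters are arbitrary), one can compare the finite difference \(\Delta ^k w_m(x)\) computed directly from (3.2) with the contour integral, evaluated by the trapezoidal rule on the circle \(\gamma _r\).

```python
import numpy as np
from math import comb

def w(m, x, q):
    # the negative binomial weight (3.2)
    return (1 - q) ** m * comb(x + m - 1, x) * q ** x if x >= 0 else 0.0

def delta_k(f, k, x):
    # k-th forward difference of f at x, for k >= 0
    return sum((-1) ** (k - j) * comb(k, j) * f(x + j) for j in range(k + 1))

def contour_side(k, m, x, q, r=2.0, N=4000):
    # right-hand side of (3.19); the trapezoidal rule on |z| = r > 1 is spectrally accurate here
    z = r * np.exp(2j * np.pi * np.arange(N) / N)
    H = z ** k * (1 - z) ** (x + m) / (1 - z / (1 - q)) ** m
    return ((-1) ** (k - 1) * np.mean(H * z / (1 - z))).real

q, m, k, x = 0.4, 3, 2, 5
print(delta_k(lambda t: w(m, t, q), k, x), contour_side(k, m, x, q))   # the two values should agree
```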

We now define the matrices A and B. Let c(i) be a conjugation factor defined below in (3.25) which we need to make the asymptotic analysis work. Set

$$\begin{aligned} a_{ik}=c(i)(-1)^{-k}\beta _{k-i}^1(m,a),\quad b_{kj}=c(j)^{-1}(-1)^k\beta _{j-k}^0(\Delta m,\Delta a). \end{aligned}$$
(3.22)

From the properties of \(\beta _k^\epsilon \), we see that \((a_{ik})\) is lower- and \((b_{kj})\) upper-triangular, and that the condition (3.10) is satisfied.

Lemma 3.4

If \(f_{0,1}\) and \(f_{1,2}\) are defined by (3.11) and (3.12) respectively, and \(a_{ik}\) and \(b_{kj}\) by (3.22), then

$$\begin{aligned} f_{0,1}(i,x)= & {} -\frac{c(i)}{(2\pi \mathrm {i})^2}\int _{\gamma _r}dz\int _{\gamma _\tau }d\zeta \frac{H_{n,m,a+x}(z)(1-\zeta )}{H_{i,m,a}(\zeta )(z-\zeta )(1-z)}, \end{aligned}$$
(3.23)
$$\begin{aligned} f_{1,2}(x,j)= & {} \frac{c(j)^{-1}}{(2\pi \mathrm {i})^2}\int _{\gamma _r}dw\int _{\gamma _\tau }d\omega \frac{H_{\Delta n,\Delta m,\Delta a-x}(w)}{H_{N+1-j,\Delta m,\Delta a}(\omega )(w-\omega )(1-w)},\qquad \quad \end{aligned}$$
(3.24)

where \(0<\tau<1<r\).

The proof of the lemma, which will be given in Sect. 4, is a straightforward computation using the definitions and (3.21).

We now turn to rewriting the determinant in (3.17) as a Fredholm determinant and performing the asymptotic analysis. The conjugation factor c(i) in (3.22) is given by

$$\begin{aligned} c(i)=(1-\sqrt{q})^i e^{-\delta i/c_0(t_1T)^{1/3}}, \end{aligned}$$
(3.25)

where \(\delta >0\) is fixed, and satisfies (2.5), and \(c_0\) is given by (2.1). Let \(\tau _1, \tau _2,\rho _1,\rho _2\) and \(\rho _3\) be radii such that

$$\begin{aligned} 0<\tau _1,\tau _2<1-\rho _1<1-\rho _2<1-\rho _3<1-q. \end{aligned}$$
(3.26)

We denote by \(\gamma _\rho (1)\) a positively oriented circle around the point 1 with radius \(\rho \). For \(1\le i,j\le N\), we define

$$\begin{aligned} A_1(i,j)= & {} \frac{c(i)}{c(j)(2\pi \mathrm {i})^4}\int _{\gamma _{\rho _1}(1)}dz\int _{\gamma _{\rho _2}(1)}dw\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{\tau _2}}d\omega \nonumber \\&\times \frac{H_{n,m,a}(z)H_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )(1-z)^{-1}}{H_{i,m,a}(\zeta )H_{N+1-j,\Delta m,\Delta a}(\omega )(z-\zeta )(w-\omega )(z-w)}, \end{aligned}$$
(3.27)
$$\begin{aligned} B_1(i,j)= & {} \frac{c(i)}{c(j)(2\pi \mathrm {i})^4}\int _{\gamma _{\rho _3}(1)}dz\int _{\gamma _{\rho _2}(1)}dw\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{\tau _2}}d\omega \nonumber \\&\times \frac{H_{n,m,a}(z)H_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )(1-z)^{-1}}{H_{i,m,a}(\zeta )H_{N+1-j,\Delta m,\Delta a}(\omega )(z-\zeta )(w-\omega )(z-w)}, \end{aligned}$$
(3.28)
$$\begin{aligned} A_2(i,j)= & {} \frac{c(i)}{c(j)(2\pi \mathrm {i})^2}\int _{\gamma _{\rho _2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \frac{H_{N-i,\Delta m,\Delta a}(w)}{H_{N+1-j,\Delta m,\Delta a}(\omega )(w-\omega )},\qquad \quad \end{aligned}$$
(3.29)

and

$$\begin{aligned} A_3(i,j)=\frac{c(i)}{c(j)(2\pi \mathrm {i})^2}\int _{\gamma _{\rho _1}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \frac{H_{j-1,m,a}(z)(1-\zeta )}{H_{i,m,a}(\zeta )(z-\zeta )(1-z)}. \end{aligned}$$
(3.30)

We also define, for \(1\le i,j\le N\),

$$\begin{aligned} C(i,j)= & {} A_1(i,j)-1(i>n)A_2(i,j)+A_3(i,j)1(j\le n), \end{aligned}$$
(3.31)
$$\begin{aligned} D(i,j)= & {} -B_1(i,j)+1(i>n)A_2(i,j)-A_3(i,j)1(j\le n), \end{aligned}$$
(3.32)

compare with (2.10) and (2.11).

We can now express \(L_p\), \(p=1,2\), in terms of these objects.

Lemma 3.5

We have the formulas

$$\begin{aligned} L_1(i,j)=1(i\le n)\delta _{ij}+C(i,j), \end{aligned}$$
(3.33)

and

$$\begin{aligned} L_2(i,j)=1(i>n)\delta _{ij}+D(i,j). \end{aligned}$$
(3.34)

The proof is based on (3.14), (3.15), and Lemma 3.4, and suitable contour deformations in order to get the contours into positions that can be used in the asymptotic analysis, see Sect. 4.

Combining (3.16) with Lemma 3.5 we obtain

$$\begin{aligned} L(i,j;u)=\delta _{ij}+M_u(i,j), \end{aligned}$$
(3.35)

where

$$\begin{aligned} M_u(i,j)=u^{-1(i\le n)}\left( uC(i,j)+D(i,j)\right) , \end{aligned}$$
(3.36)

and we also set \(M_{u}(i,j)=0\) if \(i,j\notin \{1,\dots ,N\}\). Thus we have the formula

$$\begin{aligned} P(a,A)=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{1}{u-1}\det \big (\delta _{ij}+M_u(i,j)\big )_{1\le i,j\le N}du. \end{aligned}$$
(3.37)

Next, we want to rewrite the determinant in (3.37) in a block determinant form, corresponding to \(i\le n\) and \(i>n\), and similarly for j. For \(r,s\in \{1,2\}\), and \(x,y\in \mathbb {R}\), we define

$$\begin{aligned} F_{u}(r,x;s,y)=M_{u}(n+[x]+1,n+[y]+1), \end{aligned}$$
(3.38)

where \([\cdot ]\) denotes the integer part. The right side of (3.38) does not depend on r or s explicitly, but we have \(x<0\) for \(r=1\) and \(x\ge 0\) for \(r=2\), and correspondingly for y depending on s. Let \(\Lambda =\{1,2\}\times \mathbb {R}\) and define the measures

$$\begin{aligned} d\nu _1(x)=1(x<0)dx,\quad d\nu _2(x)=1(x\ge 0)\,dx. \end{aligned}$$

On \(\Lambda \) we define a measure \(\rho \) by

$$\begin{aligned} \int _{\Lambda }f(\lambda )d\rho (\lambda )=\sum _{r=1}^2\int _{\mathbb {R}}f(r,x)\,d\nu _r(x), \end{aligned}$$
(3.39)

for every integrable function \(f:\Lambda \mapsto \mathbb {R}\). \(F_{u}\) defines an integral operator \(F_{u}\) on \(L^2(\Lambda ,\rho )\) with kernel \(F_{u}(r,x;s,y)\). Note that the space \(L^2(\Lambda ,\rho )\) is isomorphic to the space X defined in (2.13), and we can also think of \(F_{u}\) as a matrix operator.

Lemma 3.6

We have the identity,

$$\begin{aligned} \det (\delta _{ij}+M_{u}(i,j))_{1\le i,j\le N}=\det (I+F_{u})_{L^2(\Lambda ,\rho )}. \end{aligned}$$
(3.40)

This is straightforward, using Fredholm expansions, and the lemma will be proved in Sect. 4.

We can now insert the formula (3.40) into (3.37). This leads to a formula that can be used for taking a limit, but before considering the limit, we have to introduce the appropriate scalings. For \(r,s=1,2\), we define

$$\begin{aligned} \tilde{F}_{u,T}(r,x;s,y)= c_0(t_1T)^{1/3}F_{u}(r,c_0(t_1T)^{1/3}x;s,c_0(t_1T)^{1/3}y) \end{aligned}$$
(3.41)

where \(c_0\) is given by (2.1). The next lemma follows from (3.37), Lemma 3.6, and (3.41), see Sect. 4.

Lemma 3.7

We have the formula,

$$\begin{aligned} P(a,A)=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{1}{u-1}\det \big (I+\tilde{F}_{u,T}\big )_{L^2(\Lambda ,\rho )}du. \end{aligned}$$
(3.42)

Theorem 2.1 now follows by combining this lemma with the next lemma which will be proved in Sect. 5.

Lemma 3.8

Consider the scaling (2.2) and let K(u) be the matrix kernel defined by (2.14). Then,

$$\begin{aligned} \lim _{T\rightarrow \infty }\det \big (I+\tilde{F}_{u,T}\big )_{L^2(\Lambda ,\rho )}= \det \big (I+K(u)\big )_X, \end{aligned}$$
(3.43)

uniformly for u in a compact set.

4 Proof of Lemmas

In this section we will prove the lemmas that were used in Sect. 3. Some results related to the asymptotic analysis will be proved in Sect. 5.

Proof of Lemma 3.2

Write

$$\begin{aligned} W^*_{N,k}(a)=\{\mathbf {x}\in W_N\,;\,x_k=a\} \end{aligned}$$

so that

$$\begin{aligned} W_{N,k}(a)=\bigcup _{t=-\infty }^{a-1} W^*_{N,k}(t). \end{aligned}$$

Hence, it is enough to prove the statement with \(W_{N,k}(a)\) replaced by \(W^*_{N,k}(t)\). Let \(a_i, b_i, c_i, d_i\in \mathbb {Z}\), \(1\le i\le N\), and let \(k<\ell \le N\). Assume that \(b_{\ell -1}=b_\ell -1\), and \(c_{\ell }=c_{\ell +1}\) if \(\ell <N\). Set

$$\begin{aligned} b_j'={\left\{ \begin{array}{ll}b_j &{}\quad \text {if }j\ne \ell \\ b_\ell -1 &{}\quad \text {if }j=\ell \end{array}\right. }, \quad c_j'={\left\{ \begin{array}{ll}c_j &{}\quad \text {if }j\ne \ell \\ c_\ell -1 &{}\quad \text {if }j=\ell \end{array}\right. }. \end{aligned}$$

Then,

$$\begin{aligned}&\sum _{\mathbf {x}\in W^*_{N,k}(t)}\det \left( \Delta ^{b_j-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\det \left( \Delta ^{d_i-c_j}g(z_i-x_j)\right) _{1\le i,j\le N}\nonumber \\&\quad =\sum _{\mathbf {x}\in W^*_{N,k}(t)}\det \left( \Delta ^{b'_j-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\det \left( \Delta ^{d_i-c'_j}g(z_i-x_j)\right) _{1\le i,j\le N}. \end{aligned}$$
(4.1)

To prove (4.1), we use the summation by parts identity,

$$\begin{aligned} \sum _{y=a}^b\Delta u(y-x)v(z-y)= & {} \sum _{y=a}^b u(y-x)\Delta v(z-y) + u(b+1-x)v(z-b)\nonumber \\&-u(a-x)v(z+1-a). \end{aligned}$$
(4.2)

Consider the \(x_\ell \)-summation in the left side of (4.1) with all the other variables fixed. Let \(x_{\ell +1}=\infty \) if \(\ell =N\) and let \(\Delta _x\) denote the finite difference with respect to the variable x. Using (4.2) in the second equality we get

$$\begin{aligned}&\sum _{x_\ell =x_{\ell -1}}^{x_{\ell +1}}\det \left( \Delta ^{b_j-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\det \left( \Delta ^{d_i-c_j}g(z_i-x_j)\right) _{1\le i,j\le N}\nonumber \\&\quad =\sum _{x_\ell =x_{\ell -1}}^{x_{\ell +1}}\Delta _{x_{\ell }}\det \left( \Delta ^{b'_j-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\det \left( \Delta ^{d_i-c_j}g(z_i-x_j)\right) _{1\le i,j\le N}\nonumber \\&\quad =\sum _{x_\ell =x_{\ell -1}}^{x_{\ell +1}}\det \left( \Delta ^{b'_j-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\det \left( \Delta ^{d_i-c'_j}g(z_i-x_j)\right) _{1\le i,j\le N}\nonumber \\&\qquad +\det \left( \Delta ^{b_j'-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\Bigg |_{x_\ell \rightarrow x_{\ell +1}+1}\det \left( \Delta ^{d_i-c_j}g(z_i-x_j)\right) _{1\le i,j\le N}\Bigg |_{x_\ell \rightarrow x_{\ell +1}}\nonumber \\&\qquad -\det \left( \Delta ^{b_j'-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\Bigg |_{x_\ell \rightarrow x_{\ell -1}}\det \left( \Delta ^{d_i-c_j}g(z_i-x_j)\right) _{1\le i,j\le N}\Bigg |_{x_\ell \rightarrow x_{\ell -1}-1}. \end{aligned}$$
(4.3)

If \(\ell =N\), then the first boundary term in (4.3) is \(=0\). This follows since \(\Delta ^{d_i-c_\ell }g(z_i-\infty )=0\) (we assume that all series are convergent and all expressions well-defined), so one column in the second determinant in the first boundary term in (4.3) vanishes. If \(\ell <N\), then the first boundary term in (4.3) is \(=0\) because \(c_{\ell }=c_{\ell +1}\), and \(x_\ell \rightarrow x_{\ell +1}\) means that columns \(\ell \) and \(\ell +1\) will be identical in the second determinant. Since \(b'_{\ell }=b_\ell -1=b_{\ell -1}\), we see that columns \(\ell \) and \(\ell -1\) in the first determinant in the second boundary term in (4.3) will be identical.

Similarly, if \(1\le \ell <k\), and \(c_{\ell +1}=c_\ell +1\), \(b_\ell =b_{\ell -1}\), then

$$\begin{aligned}&\sum _{\mathbf {x}\in W^*_{N,k}(t)}\det \left( \Delta ^{b_j-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\det \left( \Delta ^{d_i-c_j}g(z_i-x_j)\right) _{1\le i,j\le N}\nonumber \\&\quad =\sum _{\mathbf {x}\in W^*_{N,k}(t)}\det \left( \Delta ^{b''_j-a_i}f(x_j-y_i)\right) _{1\le i,j\le N}\det \left( \Delta ^{d_i-c''_j}g(z_i-x_j)\right) _{1\le i,j\le N}, \end{aligned}$$
(4.4)

where

$$\begin{aligned} b_j''={\left\{ \begin{array}{ll}b_j &{}\quad \text {if }j\ne \ell \\ b_\ell +1 &{}\quad \text {if }j=\ell \end{array}\right. }, \quad c_j''={\left\{ \begin{array}{ll}c_j &{}\quad \text {if }j\ne \ell \\ c_\ell +1 &{}\quad \text {if }j=\ell \end{array}\right. }.\quad \end{aligned}$$

The proof of (4.4) is analogous to the proof of (4.1).

To prove Lemma 3.2, we apply (4.1) successively to \(x_N,x_{N-1},\dots ,x_{k+1}\), and then to \(x_N,x_{N-1},\dots ,x_{k+2}\) etc., and then finally just to \(x_N\). Similarly, we apply (4.4) to \(x_1,x_2,\dots , x_{k-1}\), then to \(x_1,x_2,\dots , x_{k-2}\), and finally just to \(x_1\). This proves part (a) of the lemma.

Part (b) of the lemma follows from the identity

$$\begin{aligned} \sum _{\mathbf {x}\in W_{N,N}(a)}\det \left( \Delta ^{i-n}f_j(x_i)\right) _{1\le i,j\le N}=\det \left( \Delta ^{i-1-n}f_j(a)\right) _{1\le i,j\le N}. \end{aligned}$$
(4.5)

To prove (4.5), first sum over \(x_N\) from \(x_{N-1}\) to \(a-1\) in the last row. This gives \(\Delta ^{N-1-n}f_j(a)-\Delta ^{N-1-n}f_j(x_{N-1})\). The last term does not contribute since it is the same as in row \(N-1\). We can now sum over \(x_{N-1}\) from \(x_{N-2}\) to \(a-1\) in row \(N-1\) etc. In this way we obtain (4.5). \(\square \)

Proof of Lemma 3.3

We see that

$$\begin{aligned} P(a,A)&=\sum _{\mathbf {x}\in W_N\,;\,x_n<0}\det \big (f_{0,1}(i,x_j)\big )_{1\le i,j\le N}\det \big (f_{1,2}(x_i,j)\big )_{1\le i,j\le N}\nonumber \\&=\sum _{\mathbf {x}\in W_N}\det \big (f_{0,1}(i,x_j)\big )_{1\le i,j\le N}\nonumber \\&\quad \det \big (f_{1,2}(x_i,j)\big )_{1\le i,j\le N} 1\left( \sum _{j=1}^N1(x_j<0)\ge n\right) \end{aligned}$$
(4.6)

Now, for any \(r>0\),

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{u^{\sum _{j=1}^N1(x_j<0)}}{u^{\ell +1}}du=1\left( \sum _{j=1}^N1(x_j<0)=\ell \right) . \end{aligned}$$

Summing over \(\ell \ge n\) and assuming that \(r>1\), we get

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{u^{\sum _{j=1}^N1(x_j<0)}}{u^n(u-1)}du=1\left( \sum _{j=1}^N1(x_j<0)\ge n\right) . \end{aligned}$$
(4.7)
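The contour-integral representation (4.7) of the indicator is easy to check numerically; the following sketch (assuming numpy; the parameters are arbitrary) evaluates the left-hand side by the trapezoidal rule and recovers \(1(S\ge n)\) for integer values of \(S=\sum _{j=1}^N1(x_j<0)\).

```python
import numpy as np

def indicator_via_contour(S, n, r=2.0, N=2048):
    # (1 / 2 pi i) times the integral of u^S / (u^n (u - 1)) du over |u| = r > 1, cf. (4.7)
    u = r * np.exp(2j * np.pi * np.arange(N) / N)
    return float(np.mean(u ** S / (u ** n * (u - 1)) * u).real)

for S in range(6):
    print(S, round(indicator_via_contour(S, n=3), 8))   # equals 1 exactly when S >= 3
```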

Since,

$$\begin{aligned} u^{\sum _{j=1}^N1(x_j<0)}=\prod _{j=1}^N\left( u1(x_j<0)+1(x_j\ge 0)\right) , \end{aligned}$$

it follows from (4.6), (4.7), and the Cauchy–Binet identity that

$$\begin{aligned} P(a,A)&=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{du}{u^n(u-1)}\sum _{\mathbf {x}\in W_N}\det \big (f_{0,1}(i,x_j)\big )_{1\le i,j\le N}\det \big (f_{1,2}(x_i,j)\big )_{1\le i,j\le N}\nonumber \\&\quad \times \prod _{j=1}^N\left( u1(x_j<0)+1(x_j\ge 0)\right) \nonumber \\&=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{du}{u^n(u-1)}\nonumber \\&\quad \det \left( \sum _{x\in \mathbb {Z}}f_{0,1}(i,x)f_{1,2}(x,j)(u1(x<0)+1(x\ge 0))\right) _{1\le i,j\le N}\nonumber \\&=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{du}{u-1}\det \left( u^{-1(i\le n)}(uL_1(i,j)+L_2(i,j))\right) _{1\le i,j\le N}\nonumber \\&=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{du}{u-1}\det \left( L(i,j;u)\right) _{1\le i,j\le N}. \end{aligned}$$

\(\square \)

Proof of Lemma 3.4

It follows from (3.11), (3.21), and (3.22), that

$$\begin{aligned} f_{0,1}(i,x)&=c(i)\sum _{k=1}^N\beta _{k-i}^1(m,a)(-1)^{n-k}\Delta ^{n-k}w_m(a+x)\\&=-\frac{c(i)}{2\pi \mathrm {i}}\int _{\gamma _r}\left( \sum _{k=1}^N\frac{\beta _{k-i}^1(m,a)}{z^k}\right) H_{n,m,a+x}(z)\frac{dz}{1-z}\\&=-\frac{c(i)}{(2\pi \mathrm {i})^2}\int _{\gamma _r}dz\int _{\gamma _\tau }d\zeta \frac{H_{n,m,a+x}(z)(1-\zeta )}{H_{i,m,a}(\zeta )(z-\zeta )(1-z)}. \end{aligned}$$

Similarly, by (3.12), (3.21) and (3.22),

$$\begin{aligned} f_{1,2}(x,j)&=c(j)^{-1}\sum _{k=1}^N(-1)^{k-n}\Delta ^{k-n-1}w_{\Delta m}(\Delta a-x)\beta _{j-k}^0(\Delta m,\Delta a)\\&=\frac{c(j)^{-1}}{2\pi \mathrm {i}}\int _{\gamma _r}\left( \sum _{k=1}^N\frac{\beta _{j-k}^0(\Delta m,\Delta a)}{w^{N+1-k}}\right) H_{\Delta n,\Delta m,\Delta a-x}(w)\frac{dw}{1-w}\\&=\frac{c(j)^{-1}}{2\pi \mathrm {i}}\int _{\gamma _r}\left( \sum _{k=1}^N\frac{\beta _{j-(N+1-k)}^0(\Delta m,\Delta a)}{w^{k}}\right) H_{\Delta n,\Delta m,\Delta a-x}(w)\frac{dw}{1-w}\\&=\frac{c(j)^{-1}}{(2\pi \mathrm {i})^2}\int _{\gamma _r}dw\int _{\gamma _\tau }d\omega \frac{H_{\Delta n,\Delta m,\Delta a-x}(w)}{H_{N+1-j,\Delta m,\Delta a}(\omega )(w-\omega )(1-w)}. \end{aligned}$$

This proves the lemma. \(\square \)

Proof of Lemma 3.5

Recall the condition (3.26) and choose \(r_1, r_2\) so that \(r_1>r_2>1+\max (\rho _1,\rho _2)\), which means that \(\gamma _{r_i}(1)\) surrounds \(\gamma _{\rho _i}\) and \(\gamma _{\tau _i}\), \(i=1,2\). It follows from (3.23) and (3.24), that

$$\begin{aligned} L_1(i,j)&=-\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^4}\sum _{x=-\infty }^{-1}\left( \int _{\gamma _{r_1}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \frac{H_{n,m,a+x}(z)(1-\zeta )}{H_{i,m,a}(\zeta )(z-\zeta )(1-z)}\right) \\&\quad \times \left( \int _{\gamma _{r_2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \frac{H_{\Delta n,\Delta m,\Delta a-x}(w)}{H_{N+1-j,\Delta m,\Delta a}(\omega )(w-\omega )(1-w)}\right) \\&=-\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^4}\int _{\gamma _{r_1}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{r_2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \left( \sum _{x=-\infty }^{-1}\left( \frac{1-z}{1-w}\right) ^x\right) \\&\quad \times \frac{H_{n,m,a}(z)H_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )}{H_{i,m,a}(\zeta )H_{N+1-j,\Delta m,\Delta a}(\omega )(z-\zeta )(w-\omega )(1-z)(1-w)}. \end{aligned}$$

Since \(r_1>r_2\),

$$\begin{aligned} \sum _{x=-\infty }^{-1}\left( \frac{1-z}{1-w}\right) ^x=-\frac{1-w}{z-w}, \end{aligned}$$

and we obtain

$$\begin{aligned} L_1(i,j)&=\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^4}\int _{\gamma _{r_1}(1)}dz \int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{r_2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \\&\quad \times \frac{H_{n,m,a}(z)H_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )}{H_{i,m,a}(\zeta )H_{N+1-j,\Delta m,\Delta a}(\omega )(z-\zeta ) (w-\omega )(z-w)(1-z)}. \end{aligned}$$

We now deform \(\gamma _{r_2}(1)\) to \(\gamma _{\rho _2}(1)\). Doing so, we cross the pole at \(w=\omega \), and hence

$$\begin{aligned} L_1(i,j)&=\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^4}\int _{\gamma _{r_1}(1)}dz\int _{\gamma _{ \tau _1}}d\zeta \int _{\gamma _{\rho _2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \nonumber \\&\quad \times \frac{H_{n,m,a}(z)H_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )}{H_{i,m,a}(\zeta )H_{N+1-j,\Delta m,\Delta a}(\omega )(z-\zeta ) (w-\omega )(z-w)(1-z)}\nonumber \\&\quad +\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^3}\int _{\gamma _{r_1}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{\tau _2}}d\omega \nonumber \\&\quad \times \frac{H_{n,m,a}(z)(1-\zeta )}{H_{i,m,a}(\zeta )\omega ^{n+1-j} (z-\omega )(z-\zeta )(1-z)}:=I_1+I_2. \end{aligned}$$
(4.8)

In \(I_1\) we can shrink \(\gamma _{r_1}(1)\) to \(\gamma _{\rho _1}(1)\). We then cross the pole at \(z=\zeta \) (but not \(z=w\) since \(\rho _2<\rho _1\)). Thus, by (3.27),

$$\begin{aligned} I_1&=A_1(i,j)+\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^3}\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{\rho _2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \nonumber \\&\quad \times \frac{\zeta ^{n-i}H_{\Delta n,\Delta m,\Delta a}(w)}{H_{N+1-j,\Delta m,\Delta a}(\omega )(w-\omega )(\zeta -w)}\nonumber \\ :&=A_1(i,j)+I_3. \end{aligned}$$
(4.9)

We note that

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\gamma _{\tau _1}}\frac{d\zeta }{\zeta ^{i-n}(\zeta -w)}=-\frac{1(i>n)}{w^{i-n}}, \end{aligned}$$
(4.10)

since \(|w|>|\zeta |\), and hence by (3.29),

$$\begin{aligned} I_3=-1(i>n)A_2(i,j). \end{aligned}$$
(4.11)

Also

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\gamma _{\tau _2}}\frac{d\omega }{\omega ^{n+1-j}(z-\omega )}=\frac{1(j\le n)}{z^{n+1-j}}, \end{aligned}$$
(4.12)

and we obtain

$$\begin{aligned} I_2=\frac{1(j\le n)c(i)c(j)^{-1}}{(2\pi \mathrm {i})^2}\int _{\gamma _{r_1}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \frac{H_{j-1,m,a}(z)(1-\zeta )}{H_{i,m,a}(\zeta )(z-\zeta )(1-z)}. \end{aligned}$$

Deform \(\gamma _{r_1}(1)\) to \(\gamma _{\rho _1}(1)\). We then cross the pole at \(z=\zeta \) and we obtain, using (3.30),

$$\begin{aligned} I_2= & {} 1(j\le n)A_3(i,j)+\frac{1(j\le n)c(i)c(j)^{-1}}{2\pi \mathrm {i}}\int _{\gamma _{\tau _1}}\zeta ^{i-j-1}\,d\zeta \nonumber \\= & {} 1(j\le n)A_3(i,j)+1(i\le n)\delta _{ij}. \end{aligned}$$
(4.13)

Combining (4.8), (4.9), (4.11), (4.13) and (3.31), we get (3.33).

Consider next,

$$\begin{aligned} L_2(i,j)&=-\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^4}\int _{\gamma _{r_3}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{r_2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \left( \sum _{x=0}^{\infty } \left( \frac{1-z}{1-w}\right) ^x\right) \\&\quad \times \frac{H_{n,m,a}(z)H_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )}{H_{i,m,a}(\zeta )H_{N+1-j,\Delta m,\Delta a}(\omega )(z-\zeta )(w-\omega ) (1-z)(1-w)}, \end{aligned}$$

where now \(r_2>r_3>1+\max (\rho _1,\rho _2)\). Thus,

$$\begin{aligned} \sum _{x=0}^{\infty }\left( \frac{1-z}{1-w}\right) ^x=\frac{1-w}{z-w}, \end{aligned}$$

and consequently

$$\begin{aligned} L_2(i,j)&=-\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^4}\int _{\gamma _{r_3}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{r_2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \\&\quad \times \frac{H_{n,m,a}(z)H_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )}{H_{i,m,a}(\zeta )H_{N+1-j,\Delta m,\Delta a}(\omega )(z-\zeta ) (w-\omega )(z-w)(1-z)}. \end{aligned}$$

We now deform \(\gamma _{r_3}(1)\) to \(\gamma _{\rho _3}(1)\), and doing so we pass the pole at \(z=\zeta \), and find

$$\begin{aligned} L_2(i,j)&=-\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^4}\int _{\gamma _{\rho _3}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{r_2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \\&\quad \times \frac{H_{n,m,a}(z)H_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )}{H_{i,m,a}(\zeta )H_{N+1-j,\Delta m,\Delta a}(\omega )(z-\zeta ) (w-\omega )(z-w)(1-z)}\\&\quad -\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^3}\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{r_2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \nonumber \\&\quad \times \frac{H_{\Delta n,\Delta m,\Delta a}(w)}{\zeta ^{i-n}H_{N+1-j,\Delta m,\Delta a}(\omega ) (w-\omega )(\zeta -w)}:=J_1+J_2. \end{aligned}$$

In \(J_1\) we deform \(\gamma _{r_2}(1)\) to \(\gamma _{\rho _2}(1)\). Since \(\rho _2>\rho _3\), we only cross the pole at \(w=\omega \), and we get

$$\begin{aligned} J_1&=-B_1(i,j)-\frac{c(i)c(j)^{-1}}{(2\pi \mathrm {i})^3}\int _{\gamma _{\rho _3}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{\tau _2}}d\omega \nonumber \\&\quad \times \frac{H_{n,m,a}(z)(1-\zeta )}{\omega ^{n+1-j}H_{i,m,a}(\zeta )(z-\zeta )(z-\omega )(1-z)}\\ :&=-B_1(i,j)+J_3. \end{aligned}$$

Using (4.10), we find

$$\begin{aligned} J_2&=\frac{1(i>n)c(i)c(j)^{-1}}{(2\pi \mathrm {i})^2}\int _{\gamma _{r_2}(1)}dw\int _{\gamma _{\tau _2}}d\omega \frac{H_{N-i,\Delta m,\Delta a}(w)}{H_{N+1-j,\Delta m,\Delta a}(\omega )(w-\omega )}\\&=1(i>n)A_2(i,j)+\frac{1(i>n)c(i)c(j)^{-1}}{2\pi \mathrm {i}}\nonumber \\&\quad \times \int _{\gamma _{\tau _2}} \omega ^{j-i-1}d\omega =1(i>n)(A_2(i,j)+\delta _{ij}), \end{aligned}$$

Similarly, applying (4.12) to the \(\omega \)-integral in \(J_3\) and deforming \(\gamma _{\rho _3}(1)\) to \(\gamma _{\rho _1}(1)\) (no poles are crossed, since \(|\zeta |=\tau _1<1-\rho _1\) places \(\zeta \) outside both circles), we see from (3.30) that \(J_3=-1(j\le n)A_3(i,j)\). Combining these identities gives (3.34) and the lemma is proved. \(\square \)

Proof of Lemma 3.6

We start with the right side of (3.40),

$$\begin{aligned} \det (I+F_{u})_{L^2(\Lambda ,\rho )}&=\sum _{k=0}^{\infty }\frac{1}{k!}\int _{\Lambda ^k}d\rho ^k(\lambda )\det (F_{u}(\lambda _p,\lambda _q))_{1\le p,q\le k}\\&=\sum _{k=0}^{\infty }\frac{1}{k!}\sum _{r_1,\dots ,r_k=1}^2\int _{\mathbb {R}^k}d\nu _{r_1}(x_1)\dots d\nu _{r_k}(x_k) \\&\quad \times \,\det \left( M_{u}(n+[x_p]+1,n+[x_q]+1)\right) _{1\le p,q\le k}\\&=\sum _{k=0}^{\infty }\frac{1}{k!}\sum _{i_1=-n}^{N-n-1}\dots \sum _{i_k=-n}^{N-n-1}\det \left( M_{u}(n+i_p+1,n+i_q+1)\right) _{1\le p,q\le k}\\&=\det (\delta _{ij}+M_{u}(i,j))_{1\le i,j\le N}, \end{aligned}$$

where we recall that \(M_{u}(i,j)=0\) if \(i,j\notin \{1,\dots ,N\}\). \(\square \)

Proof of Lemma 3.7

By the formula (3.37) for P(aA) and Lemma 3.6, we see that

$$\begin{aligned} P(a,A)=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{du}{u-1}\det (I+F_{u})_{L^2(\Lambda ,\rho )}. \end{aligned}$$
(4.14)

We have the Fredholm expansion,

$$\begin{aligned} \det \left( I+F_{u}\right) _{L^2(\Lambda ,\rho )}= & {} \sum _{k=0}^\infty \frac{1}{k!}\sum _{r_1,\dots ,r_k=1}^2\int _{\mathbb {R}^k}d\nu _{r_1}(x_1)\dots d\nu _{r_k}(x_k)\nonumber \\&\quad \det \left( F_{u}(r_p,x_p;r_q,x_q)\right) _{1\le p,q\le k}. \end{aligned}$$
(4.15)

The change of variables \(x_p\rightarrow c_0(t_1T)^{1/3}x_p\) gives

$$\begin{aligned} d\nu _{r_p}(c_0(t_1T)^{1/3}x_p)=c_0(t_1T)^{1/3}d\nu _{r_p}(x_p). \end{aligned}$$

Take the factor \(c_0(t_1T)^{1/3}\) into row p. We see then that the right side of (4.15) equals,

$$\begin{aligned}&\sum _{k=0}^\infty \frac{1}{k!}\sum _{r_1,\dots ,r_k=1}^2\int _{\mathbb {R}^k}d\nu _{r_1}(x_1)\dots d\nu _{r_k}(x_k)\det \left( \tilde{F}_{u,T}(r_p,x_p;r_q,x_q)\right) _{1\le p,q\le k} \nonumber \\&\quad =\det \left( I+\tilde{F}_{u,T}\right) _{L^2(\Lambda ,\rho )}. \end{aligned}$$

Combining this with (4.14) we have proved the lemma. \(\square \)

We want to prove that the operator K(u) in the definition of the two-time distribution is a trace-class operator.

Lemma 4.1

The operator K(u) defined by (2.14) is a trace-class operator on the space X given by (2.13).

Proof

Write

$$\begin{aligned} S_2^*(x,y)=1(x>0)S_2(x,y),\quad S_3^*(x,y)=S_3(x,y)1(y<0) \end{aligned}$$

so that

$$\begin{aligned} S=S_1-S_2^*+S_3^{*},\quad T=-T_1+S_2^{*}-S_3^*. \end{aligned}$$

By splitting K(u) into several parts and factoring out multiplicative constants, we see that it is enough to prove that

$$\begin{aligned} \begin{pmatrix} A &{}\quad A \\ A &{}\quad A \end{pmatrix} \end{aligned}$$

is a trace-class operator on X for \(A=S_1, T_1, S_2^*,S_3^*\). We can think of A as an operator on \(L^2(\Lambda ,\rho )\) instead, where \(\Lambda =\{1,2\}\times \mathbb {R}\) and \(\rho \) is given by (3.39).

Define the kernels

$$\begin{aligned}&a_1(x,s)=S_3(x,s)e^{-\delta s},\quad a_2(s,y)=e^{\delta s}S_2(s,y),\nonumber \\&b_1(x,s)=\alpha 1(x>0)e^{-(\delta -\alpha \Delta \eta )x}\text {Ai}\,(\Delta \xi +\Delta \eta ^2+\alpha x+s),\nonumber \\&b_2(s,y)=e^{(\delta -\alpha \Delta \eta )y}\text {Ai}\,(\Delta \xi +\Delta \eta ^2+\alpha y+s),\nonumber \\&c_1(x,s)=e^{-(\delta -\eta _1)x}\text {Ai}\,(\xi _1+\eta _1^2-x+s),\nonumber \\&c_2(s,y)=e^{(\delta -\eta _1)y}\text {Ai}\,(\xi _1+\eta _1^2-y+s)1(y<0). \end{aligned}$$
(4.16)

Using the definitions, we see that

$$\begin{aligned} S_1(x,y)&=\int _0^\infty (-a_1(x,s))a_2(s,y)\,ds,\quad T_1(x,y)=\int _{-\infty }^0a_1(x,s)a_2(s,y)\,ds,\nonumber \\ S_2^*(x,y)&=\int _0^\infty b_1(x,s)b_2(s,y)\,ds,\quad S_3^*(x,y)=\int _0^\infty c_1(x,s)c_2(s,y)\,ds. \end{aligned}$$
(4.17)

To get kernels on \(L^2(\Lambda ,\rho )\), we define

$$\begin{aligned} a_1(r_1,x;1,s)&=b_1(r_1,x;1,s)=c_1(r_1,x;1,s)=0\\ a_2(1,s;r_3,y)&=b_2(1,s;r_3,y)=c_2(1,s;r_3,y)=0. \end{aligned}$$

for \(r_1=1,2\), and

$$\begin{aligned} \tilde{a}_1(r_1,x;2,s)=\tilde{a}_2(2,s;r_3,y)=0 \end{aligned}$$

for \(r_1=1,2\). Furthermore, we define

$$\begin{aligned} -a_1(r_1,x;2,s)&=\tilde{a}_1(r_1,x;1,s)=a_1(x,s)\\ a_2(2,s;r_3,y)&=\tilde{a}_2(1,s;r_3,y)=a_2(s,y)\\ b_1(r_1,x;2,s)&=b_1(x,s),\quad b_2(2,s;r_3,y)=b_2(s,y)\\ c_1(r_1,x;2,s)&=c_1(x,s),\quad c_2(2,s;r_3,y)=c_2(s,y). \end{aligned}$$

Then, by (4.17) and (3.39),

$$\begin{aligned} \int _{\Lambda }a_1(r_1,x;r_2,z)a_2(r_2,z;r_3,y) \,d\rho (r_2,z)=S_1(r_1,x;r_3,y), \end{aligned}$$

so \(S_1=a_1a_2\). Similarly, we see that \(T_1=\tilde{a}_1\tilde{a}_2\), \(S_2^*=b_1b_2\) and \(S_3^*=c_1c_2\). Using (2.5) and asymptotic properties of the Airy function, we see that \(a_1,a_2,b_1,b_2, c_1, c_2\) are square integrable over \(\mathbb {R}^2\), and also over \(\mathbb {R}\) if we fix one of the variables to be zero. It follows from this that \(a_1, a_2, \tilde{a}_1,\dots , c_2\) are Hilbert-Schmidt operators on \(L^2(\Lambda ,\rho )\). Since the composition of two Hilbert-Schmidt operators is a trace-class operator, we have that \(S_1, T_1, S_2^*\) and \(S_3^{*}\) are trace-class operators on \(L^2(\Lambda ,\rho )\), and hence K(u) is a trace-class operator also. \(\square \)

5 Asymptotic analysis

In this section we will prove Lemma 3.8. The proof has several steps and we will split it into a sequence of lemmas. The proofs of these lemmas will appear later in the section.

We define the rescaled kernels

$$\begin{aligned} \tilde{A}_{1,T}(x,y)&=c_0(t_1T)^{1/3}A_1(n+[c_0(t_1T)^{1/3}x]+1,n+[c_0(t_1T)^{1/3}y]+1),\nonumber \\ \tilde{A}_{2,T}(x,y)&=1(x\ge 0)c_0(t_1T)^{1/3}A_2(n+[c_0(t_1T)^{1/3}x]+1,n+[c_0(t_1T)^{1/3}y]+1),\nonumber \\ \tilde{A}_{3,T}(x,y)&=1(y<0)c_0(t_1T)^{1/3}A_3(n+[c_0(t_1T)^{1/3}x]+1,n+[c_0(t_1T)^{1/3}y]+1),\nonumber \\ \tilde{B}_{1,T}(x,y)&=c_0(t_1T)^{1/3}B_1(n+[c_0(t_1T)^{1/3}x]+1,n+[c_0(t_1T)^{1/3}y]+1). \end{aligned}$$
(5.1)

Lemma 5.1

Uniformly for x, y in a compact subset of \(\mathbb {R}\), we have the limits

$$\begin{aligned} \lim _{T\rightarrow \infty }\tilde{A}_{1,T}(x,y)&=S_1(x,y),\nonumber \\ \lim _{T\rightarrow \infty }\tilde{A}_{2,T}(x,y)&=1(x\ge 0)S_2(x,y)\nonumber \\ \lim _{T\rightarrow \infty }\tilde{A}_{3,T}(x,y)&=S_3(x,y)1(y<0), \end{aligned}$$
(5.2)

and

$$\begin{aligned} \lim _{T\rightarrow \infty }\tilde{B}_{1,T}(x,y)=T_1(x,y). \end{aligned}$$
(5.3)

The lemma is proved below. In order to prove the convergence of the Fredholm determinant we also need some estimates.

Lemma 5.2

Assume that \(|\xi |,|\eta |\le L\) for some fixed L. If we choose \(\delta \) in (3.25) sufficiently large, depending on q and L, there are positive constants \(C_0, C_1, C_2\) that only depend on q and L, so that for all xy satisfying

$$\begin{aligned} 0\le n+[c_0(t_1T)^{1/3}x]<N,\quad 0\le n+[c_0(t_1T)^{1/3}y]<N, \end{aligned}$$
(5.4)

we have the estimates

$$\begin{aligned} \left| \tilde{A}_{1,T}(x,y)\right|&\le C_0 e^{-C_1(-x)_+^{3/2}-C_2(x)_+-C_1(y)_+^{3/2}-C_2(-y)_+},\nonumber \\ \left| \tilde{B}_{1,T}(x,y)\right|&\le C_0 e^{-C_1(-x)_+^{3/2}-C_2(x)_+-C_1(y)_+^{3/2}-C_2(-y)_+},\nonumber \\ \left| \tilde{A}_{2,T}(x,y)\right|&\le C_0 1(x\ge 0)e^{-C_1(x)_+^{3/2}-C_1(y)_+^{3/2}-C_2(-y)_+},\nonumber \\ \left| \tilde{A}_{3,T}(x,y)\right|&\le C_0 1(y<0)e^{-C_1(-x)_+^{3/2}-C_2(x)_+ -C_1(-y)_+^{3/2}}. \end{aligned}$$
(5.5)

Here \((x)_+=\max (0,x)\).

The proof is given below. We now have the estimates that we need to prove Lemma 3.8.

Proof of Lemma 3.8

Recall from (2.12) and (2.14) that

$$\begin{aligned} K_u(1,x;s,y)=S(x,y)+u^{-1}T(x,y),\quad K_u(2,x;s,y)=uS(x,y)+T(x,y), \end{aligned}$$

\(s=1,2\). It follows from Lemma 5.1 that

$$\begin{aligned} \lim _{T\rightarrow \infty }\tilde{F}_{u,T}(r,x;s,y)=K_{u}(r,x;s,y), \end{aligned}$$
(5.6)

for \(r,s\in \{1,2\}\), uniformly for u, x, y in compact sets. From (5.5) we see that for all \(\xi ,\eta ,u\) in compact sets there are positive constants \(C_0,C_1\) so that

$$\begin{aligned} \left| \tilde{F}_{u,T}(r,x;s,y)\right| \le C_0e^{-C_1(|x|+|y|)}, \end{aligned}$$
(5.7)

for \(r,s\in \{1,2\}\) and all \(x,y\in \mathbb {R}\). Note that, by definition, \(\tilde{F}_{u,T}\) is zero if x, y do not satisfy (5.4). We can expand the Fredholm determinant,

$$\begin{aligned} \det (I+\tilde{F}_{u,T})_{L^2(\Lambda ,\rho )}=\sum _{k=0}^\infty \frac{1}{k!}\int _{\Lambda ^k}\det (\tilde{F}_{u,T}(\lambda _i, \lambda _j))_{1\le i,j\le k}d^k\rho (\lambda ) \end{aligned}$$
(5.8)

in its Fredholm expansion. It follows from (5.6), (5.7) and Hadamard’s inequality that we can take the limit \(T\rightarrow \infty \) in (5.8) and get

$$\begin{aligned} \sum _{k=0}^\infty \frac{1}{k!}\int _{\Lambda ^k}\det (K_u(\lambda _i,\lambda _j))_{1\le i,j\le k}d^k\rho (\lambda )=\det (I+K_u)_X. \end{aligned}$$

This completes the proof. \(\square \)

Consider

$$\begin{aligned} H_{k,\ell ,b}(w)=\frac{w^k(1-w)^{b+\ell }}{\left( 1-\frac{w}{1-q}\right) ^\ell } \end{aligned}$$

with the scalings (\(K\rightarrow \infty \), \(\eta ,\xi ,v\) fixed),

$$\begin{aligned} k&=K-c_1\eta K^{2/3}+c_0vK^{1/3},\nonumber \\ \ell&=K+c_1\eta K^{2/3},\nonumber \\ b&=c_2K+c_3\xi K^{1/3}. \end{aligned}$$
(5.9)

Here the constants \(c_i\) are given by (2.1). Write

$$\begin{aligned} f(w)=\log H_{k,\ell ,b}(w)=k\log w+(b+\ell )\log (1-w)-\ell \log \left( 1-\frac{w}{1-q}\right) .\nonumber \\ \end{aligned}$$
(5.10)

If \(\eta =\xi =v=0\), then f(w) has a double critical point at

$$\begin{aligned} w_c=1-\sqrt{q}. \end{aligned}$$
(5.11)
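A short symbolic check of this claim (a sketch assuming sympy; we write s for \(\sqrt{q}\) so that all expressions are rational in s): with \(\eta =\xi =v=0\), the function \(f(w)/K\) from (5.10) is \(\log w+(c_2+1)\log (1-w)-\log (1-w/(1-q))\), and its first two derivatives vanish at \(w_c=1-\sqrt{q}\) while the third does not.

```python
import sympy as sp

s, w = sp.symbols('s w', positive=True)        # s stands for sqrt(q)
q = s**2
c2 = 2 * s / (1 - s)
phi = sp.log(w) + (c2 + 1) * sp.log(1 - w) - sp.log(1 - w / (1 - q))   # f(w)/K at eta = xi = v = 0
wc = 1 - s
print(sp.simplify(phi.diff(w, 1).subs(w, wc)))   # 0: w_c is a critical point
print(sp.simplify(phi.diff(w, 2).subs(w, wc)))   # 0: the critical point is double
print(sp.simplify(phi.diff(w, 3).subs(w, wc)))   # nonzero; this value reappears in the proof of Lemma 5.3
```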

Define

$$\begin{aligned} H^*_{k,\ell ,b}(w)=\frac{H_{k,\ell ,b}(w)}{H_{k,\ell ,b}(w_c)}. \end{aligned}$$
(5.12)

The local asymptotics around the critical point is given by the next lemma.

Lemma 5.3

Fix \(L>0\) and assume that \(|\xi |,|\eta |, |v|\le L\). Furthermore, assume that we have the scaling (5.9). Then, uniformly for \(w'\) in a compact set in \(\mathbb {C}\)

$$\begin{aligned} \lim _{K\rightarrow \infty } H^*_{k,\ell ,b}\left( w_c+\frac{c_4}{K^{1/3}}w'\right) =\exp \left( \frac{1}{3}w'^3+\eta w'^2-(\xi -v)w'\right) , \end{aligned}$$
(5.13)

where

$$\begin{aligned} c_4=\frac{q^{1/3}(1-\sqrt{q})}{(1+\sqrt{q})^{1/3}}. \end{aligned}$$
(5.14)

Proof

Let

$$\begin{aligned} f_1(w)&=\log w+(c_2+1)\log (1-w)-\log \left( 1-\frac{w}{1-q}\right) ,\\ f_2(w)&=-\log w+\log (1-w)-\log \left( 1-\frac{w}{1-q}\right) ,\\ f_3(w)&=c_0v\log w+c_3\xi \log (1-w), \end{aligned}$$

so that

$$\begin{aligned} f(w)=Kf_1(w)+c_1\eta K^{2/3}f_2(w)+K^{1/3}f_3(w). \end{aligned}$$
(5.15)

Then \(f_1'(w)\) has a double zero at \(w_c\) since \(c_2=2\sqrt{q}/(1-\sqrt{q})\). A computation gives

$$\begin{aligned} f_1^{(3)}(w_c)=\frac{2(1+\sqrt{q})}{q(1-\sqrt{q})^3}, \end{aligned}$$

and we find

$$\begin{aligned} K\left( f_1\left( w_c+\frac{c_4}{K^{1/3}}w'\right) -f_1(w_c)\right) =\frac{1}{3} w'^3+O\left( \frac{|w'|^4}{K^{1/3}}\right) . \end{aligned}$$
(5.16)

Also,

$$\begin{aligned} c_1\eta K^{2/3}\left( f_2\left( w_c+\frac{c_4}{K^{1/3}}w'\right) -f_2(w_c)\right) =\eta w'^2+O\left( \frac{|w'|^3}{K^{1/3}}\right) , \end{aligned}$$
(5.17)

and

$$\begin{aligned} K^{1/3}\left( f_3\left( w_c+\frac{c_4}{K^{1/3}}w'\right) -f_3(w_c)\right) =-(\xi -v)w'+O\left( \frac{|w'|^2}{K^{1/3}}\right) . \end{aligned}$$
(5.18)

Using (5.16), (5.17) and (5.18) in (5.15), we obtain

$$\begin{aligned} H^*_{k,\ell ,b}\left( w_c+\frac{c_4}{K^{1/3}}w'\right) = \exp \left( \frac{1}{3}w'^3+\eta w'^2-(\xi -v)w'+O(|w'|^4/K^{1/3})\right) \end{aligned}$$

as \(K\rightarrow \infty \). \(\square \)
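A direct numerical check of the limit (5.13) (a sketch assuming numpy; the values of q, K, \(\eta \), \(\xi \), v and \(w'\) are arbitrary): for large K, the ratio \(H^*_{k,\ell ,b}\) evaluated at \(w_c+c_4K^{-1/3}w'\) should agree with the cubic exponential on the right-hand side up to an error of order \(K^{-1/3}\).

```python
import numpy as np

q = 0.25
sq = np.sqrt(q)
c0 = q ** (-1 / 3) * (1 + sq) ** (1 / 3)
c1 = q ** (-1 / 6) * (1 + sq) ** (2 / 3)
c2 = 2 * sq / (1 - sq)
c3 = q ** (1 / 6) * (1 + sq) ** (1 / 3) / (1 - sq)
c4 = q ** (1 / 3) * (1 - sq) / (1 + sq) ** (1 / 3)
wc = 1 - sq

def logH(wpt, k, ell, b):
    # log of H_{k, ell, b} in (3.18); near w_c the principal branch is fine
    return k * np.log(wpt) + (b + ell) * np.log(1 - wpt) - ell * np.log(1 - wpt / (1 - q))

K, eta, xi, v = 1.0e7, 0.5, 0.3, -0.2
k = K - c1 * eta * K ** (2 / 3) + c0 * v * K ** (1 / 3)   # the scaling (5.9)
ell = K + c1 * eta * K ** (2 / 3)
b = c2 * K + c3 * xi * K ** (1 / 3)
wp = 0.4 + 0.3j
lhs = np.exp(logH(wc + c4 * K ** (-1 / 3) * wp, k, ell, b) - logH(wc, k, ell, b))
rhs = np.exp(wp ** 3 / 3 + eta * wp ** 2 - (xi - v) * wp)
print(lhs, rhs)   # should agree up to a relative error of order K^{-1/3}
```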

To prove the estimates that we need, we use some explicit contours in (3.27) to (3.30). Let \(d>0\) and define

$$\begin{aligned} w_1(\sigma )=w_1(\sigma ;d)=w_c\left( 1-\frac{d}{K^{1/3}}\right) e^{\mathrm {i}\sigma /K^{1/3}}, \end{aligned}$$
(5.19)

and

$$\begin{aligned} w_2(\sigma )=w_2(\sigma ;d)=1- \sqrt{q}\left( 1-\frac{d}{K^{1/3}}\right) e^{\mathrm {i}\sigma /K^{1/3}}, \end{aligned}$$
(5.20)

for \(|\sigma |\le \pi K^{1/3}\), where K is as in (5.9). Thus, \(w_1\) gives a circle around the origin of radius \(w_c(1-\frac{d}{K^{1/3}})\), and \(w_2\) gives a circle of radius \( \sqrt{q}(1-\frac{d}{K^{1/3}})\) around 1.

Lemma 5.4

Fix \(L>0\). Assume that we have the scaling (5.9) and that \(|\xi |,|\eta |, |v|\le L\). Then, there are positive constants \(C_j\), \(1\le j\le 4\) that only depend on q and L, so that if \(C_1\le d\le C_2\), then

$$\begin{aligned} \left| H^*_{k,\ell ,b}(w_1(\sigma ;d))\right| ^{-1}\le C_3e^{-C_4\sigma ^2}, \end{aligned}$$
(5.21)

and

$$\begin{aligned} \left| H^*_{k,\ell ,b}(w_2(\sigma ;d))\right| \le C_3e^{-C_4\sigma ^2}, \end{aligned}$$
(5.22)

for \(|\sigma |\le \pi K^{1/3}\).

We will also need estimates that work for large v.

Lemma 5.5

Assume that \(|\xi |,|\eta |\le L\) for some fixed \(L>0\), and assume that we have the scaling (5.9) and v is such that \(k\ge 0\). Then, we can choose \(d=d(v)\ge C_0\), so that

$$\begin{aligned} \left| H^*_{k,\ell ,b}(w_1(\sigma ;d(v)))\right| ^{-1}\le C_1e^{-C_2\sigma ^2-\mu _1(-v)_+^{3/2}+\mu _2(v)_+}, \end{aligned}$$
(5.23)

for \(|\sigma |\le \pi K^{1/3}\), where \(C_0,C_1,C_2,\mu _1,\mu _2\) are positive constants that only depend on q and L. Similarly, there is a choice of \(d=d(v)\) so that

$$\begin{aligned} \left| H^*_{k,\ell ,b}(w_2(\sigma ;d(v)))\right| \le C_1e^{-C_2\sigma ^2-\mu _1(-v)_+^{3/2}+\mu _2(v)_+}. \end{aligned}$$
(5.24)

These two lemmas will be proved below. We can use Lemmas 5.3 and 5.4 to prove Lemma 5.1.

Proof of Lemma 5.1

It follows from (3.25), (3.27) and (5.12) that

$$\begin{aligned}&\tilde{A}_{1,T}(x,y)\nonumber \\&\quad =\frac{c_0(t_1T)^{1/3}e^{-\delta (x-y)}}{(2\pi \mathrm {i})^4}\int _{\gamma _{\rho _1}(1)}dz\int _{\gamma _{\rho _2}(1)}dw\int _{\gamma _{\tau _1}}d\zeta \int _{\gamma _{\tau _2}}d\omega \nonumber \\&\quad \quad \times \frac{H^*_{n,m,a}(z)H^*_{\Delta n,\Delta m,\Delta a}(w)(1-\zeta )(1-\sqrt{q})^{-1}}{H^*_{n+[c_0(t_1T)^{1/3}x]+1,m,a}(\zeta )H^*_{\Delta n-[c_0(t_1T)^{1/3}y],\Delta m,\Delta a}(\omega )(z-\zeta )(w-\omega )(z-w)(1-z)}. \end{aligned}$$
(5.25)

Let \(\Gamma _D\) denote the vertical line through D oriented upwards, \(\mathbb {R}\ni t\mapsto D+\mathrm {i}t\). Let \(D_1,D_2>0\), \(d_1,d_2>0\) be such that

$$\begin{aligned} C_1\le \frac{c_4}{\sqrt{q}}D_r\le C_2,\quad C_1\le \frac{c_4}{\sqrt{q}}d_r\le C_2, \end{aligned}$$

\(r=1,2\), where \(C_1,C_2\) are the constants in Lemma 5.4 with some fixed L arbitrarily large. We choose the following parametrizations in (5.25),

$$\begin{aligned} z(\sigma _1)=w_2\left( \frac{c_4\sigma _1}{\sqrt{q}},\frac{c_4D_1}{\sqrt{q}}\right) ,\quad \zeta (\sigma _3)=w_1\left( \frac{c_4\sigma _3}{\sqrt{q}},\frac{c_4d_1}{\sqrt{q}}\right) , \end{aligned}$$
(5.26)

where \(K=K_1=t_1T\) in (5.19), (5.20), and

$$\begin{aligned} w(\sigma _2)=w_2\left( \frac{c_4\sigma _2}{\sqrt{q}},\frac{c_4D_2}{\sqrt{q}}\right) ,\quad \omega (\sigma _4)=w_1\left( \frac{c_4\sigma _4}{\sqrt{q}},\frac{c_4d_2}{\sqrt{q}}\right) , \end{aligned}$$
(5.27)

where \(K=K_2=\Delta tT\),

$$\begin{aligned} |\sigma _i|\le \pi K_1^{1/3}, \text { for }i=1,3, \quad |\sigma _i|\le \pi K_2^{1/3}, \text { for }i=2,4. \end{aligned}$$
(5.28)

Recall the condition (3.26) on the radii. Let

$$\begin{aligned}&h_1(\sigma _1)=H^*_{n,m,a}(z(\sigma _1)), \quad h_2(\sigma _2)=H^*_{\Delta n,\Delta m,\Delta a}(w(\sigma _2)),\nonumber \\&h_3(\sigma _3)=H^*_{n+[c_0(t_1T)^{1/3}x]+1,m,a}(\zeta (\sigma _3)), \quad h_4(\sigma _4)=H^*_{\Delta n-[c_0(t_1T)^{1/3}y],\Delta m,\Delta a}(\omega (\sigma _4)). \end{aligned}$$
(5.29)

Now, a computation shows that, for some constant C,

$$\begin{aligned} \left| \frac{c_0K_1^{1/3}}{(z(\sigma _1)-\zeta (\sigma _3))(w(\sigma _2)-\omega (\sigma _4))(z(\sigma _1)-w(\sigma _2))}\frac{dz}{d\sigma _1}\frac{dw}{d\sigma _2} \frac{d\zeta }{d\sigma _3}\frac{d\omega }{d\sigma _4}\right| \le C\nonumber \\ \end{aligned}$$
(5.30)

for all \(\sigma _i\) satisfying (5.28). Thus, for xy in a compact set, we have the following bound on the integrand in (5.25),

$$\begin{aligned}&\left| \frac{c_0K_1^{1/3}h_1(\sigma _1)h_2(\sigma _2)(1-\zeta (\sigma _3))(1-z(\sigma _1))^{-1}}{h_3(\sigma _3)h_4(\sigma _4) (z(\sigma _1)-\zeta (\sigma _3))(w(\sigma _2)-\omega (\sigma _4))(z(\sigma _1)-w(\sigma _2))}\frac{dz}{d\sigma _1}\frac{dw}{d\sigma _2} \frac{d\zeta }{d\sigma _3}\frac{d\omega }{d\sigma _4}\right| \nonumber \\&\quad \le C\left| \frac{h_1(\sigma _1)h_2(\sigma _2)}{h_3(\sigma _3)h_4(\sigma _4)}\right| \le C_3'e^{-C_4'(\sigma _1^2+\sigma _2^2+\sigma _3^2+\sigma _4^2)}, \end{aligned}$$
(5.31)

where the last inequality follows from Lemma 5.4.

For \(\sigma _i\) in a bounded set, we see that

$$\begin{aligned} z(\sigma _1)&=w_c+\frac{c_4}{K_1^{1/3}}(-\mathrm {i}\sigma _1+D_1)+O(K_1^{-2/3}),\nonumber \\ w(\sigma _2)&=w_c+\frac{c_4}{K_2^{1/3}}(-\mathrm {i}\sigma _2+D_2)+O(K_2^{-2/3}),\nonumber \\ \zeta (\sigma _3)&=w_c+\frac{c_4}{K_1^{1/3}}(\mathrm {i}\sigma _3+d_1)+O(K_1^{-2/3}),\nonumber \\ \omega (\sigma _4)&=w_c+\frac{c_4}{K_2^{1/3}}(\mathrm {i}\sigma _4+d_2)+O(K_2^{-2/3}), \end{aligned}$$
(5.32)

It follows from (2.2) that

$$\begin{aligned} n&=K_1-c_1\eta _1K_1^{2/3},\quad \Delta n=K_2-c_1\Delta \eta K_2^{2/3}\nonumber \\ m&=K_1+c_1\eta _1K_1^{2/3},\quad \Delta m=K_2+c_1\Delta \eta K_2^{2/3}\nonumber \\ a&=c_2K_1+c_3\xi _1K_1^{1/3}, \quad \Delta a=c_2K_2+c_3\Delta \xi K_2^{1/3}, \end{aligned}$$
(5.33)

and hence

$$\begin{aligned} n+c_0x(t_1T)^{1/3}&=K_1-c_1\eta _1 K_1^{2/3}+c_0xK_1^{1/3},\nonumber \\ \Delta n-c_0y(t_1T)^{1/3}&=K_2-c_1\Delta \eta K_2^{2/3}-c_0\alpha yK_2^{1/3}. \end{aligned}$$
(5.34)

Write \(z'=-\mathrm {i}\sigma _1+D_1\), \(w'=-\mathrm {i}\sigma _2+D_2\), \(\zeta '=\mathrm {i}\sigma _3+d_1\), \(\omega '=\mathrm {i}\sigma _4+d_2\). Note that

$$\begin{aligned}&c_0(t_1T)^{1/3}\frac{dzdwd\zeta d\omega }{(z-\zeta )(w-\omega )(z-w)}=\alpha (1-\sqrt{q})\frac{dz'dw'd\zeta ' d\omega '}{(z'-\zeta ')(w'-\omega ')(z'-\alpha w')},\nonumber \\&c_0(t_1T)^{1/3}\frac{dzd\zeta }{z-\zeta }=(1-\sqrt{q})\frac{dz'd\zeta '}{z'-\zeta '},\quad c_0(t_1T)^{1/3}\frac{dwd\omega }{w-\omega }=\alpha (1-\sqrt{q})\frac{dw'd\omega '}{w'-\omega '}. \end{aligned}$$
(5.35)

It follows from Lemma 5.3, (5.25), (5.32), (5.31) and the dominated convergence theorem that

$$\begin{aligned}&\lim _{T\rightarrow \infty }\tilde{A}_{1,T}(x,y)\nonumber \\&\quad =\frac{\alpha e^{\delta (y-x)}}{(2\pi \mathrm {i})^4}\int _{\Gamma _{D_1}}dz'\int _{\Gamma _{D_2}}dw' \int _{\Gamma _{-d_1}}d\zeta '\int _{\Gamma _{-d_2}}d\omega '\nonumber \\&\qquad \times \frac{e^{\frac{1}{3}z'^3+\eta _1z'^2-\xi _1z'+\frac{1}{3}w'^3+\Delta \eta w'^2-\Delta \xi w'}}{e^{\frac{1}{3}\zeta '^3+\eta _1\zeta '^2-(\xi _1-x)\zeta ' +\frac{1}{3}\omega '^3+\Delta \eta \omega '^2-(\Delta \xi +\alpha y)\omega '} (z'-\zeta ')(w'-\omega ')(z'-\alpha w')}, \end{aligned}$$
(5.36)

and we have the condition

$$\begin{aligned} d_1,d_2>0,\quad 0<D_1<\alpha D_2<D_3. \end{aligned}$$
(5.37)

Define

$$\begin{aligned} G_{\xi ,\eta }(z)=e^{\frac{1}{3} z^3+\eta z^2-\xi z}, \end{aligned}$$
(5.38)

and let

$$\begin{aligned} S_1(x,y)= & {} \frac{\alpha e^{\delta (y-x)}}{(2\pi \mathrm {i})^4}\int _{\Gamma _{D_1}}dz\int _{\Gamma _{D_2}}dw \int _{\Gamma _{-d_1}}d\zeta \int _{\Gamma _{-d_2}}d\omega \nonumber \\&\quad \times \frac{G_{\xi _1,\eta _1}(z)G_{\Delta \xi ,\Delta \eta }(w)}{G_{\xi _1-x,\eta _1}(\zeta )G_{\Delta \xi +\alpha y,\Delta \eta }(\omega )(z-\zeta )(w-\omega )(z-\alpha w)}. \end{aligned}$$
(5.39)

If \(d,D>0\), we have the formulas,

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\Gamma _D}G_{\xi ,\eta }(z)\,dz&=\text {Ai}\,(\xi +\eta ^2)e^{\xi \eta +\frac{2}{3}\eta ^3},\nonumber \\ \frac{1}{2\pi \mathrm {i}}\int _{\Gamma _{-d}}\frac{d\zeta }{G_{\xi ,\eta }(\zeta )}&=\text {Ai}\,(\xi +\eta ^2)e^{-\xi \eta -\frac{2}{3}\eta ^3}, \end{aligned}$$
(5.40)

with absolutely convergent integrals. Using (5.37), which guarantees that \(\text {Re}\,(z-\zeta )>0\), \(\text {Re}\,(w-\omega )>0\) and \(\text {Re}\,(z-\alpha w)<0\) on the respective contours, we see that

$$\begin{aligned} \frac{1}{z-\zeta }= & {} \int _0^\infty e^{-s_1(z-\zeta )}ds_1,\quad \frac{1}{w-\omega }=\int _0^\infty e^{-s_2(w-\omega )}ds_2, \nonumber \\ \frac{1}{z-\alpha w}= & {} -\int _0^\infty e^{s_3(z-\alpha w)}ds_3. \end{aligned}$$

It follows from these formulas, (5.39) and (5.40) that \(S_1\) is also given by (2.6).
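For the reader's convenience, the first formula in (5.40) can be checked from the standard contour representation \(\text {Ai}\,(x)=\frac{1}{2\pi \mathrm {i}}\int _{\Gamma _D}e^{\frac{1}{3}u^3-xu}\,du\), \(D>0\): the substitution \(z=u-\eta \) gives

$$\begin{aligned} \frac{1}{3}z^3+\eta z^2-\xi z=\frac{1}{3}u^3-(\xi +\eta ^2)u+\xi \eta +\frac{2}{3}\eta ^3, \end{aligned}$$

and the contour can then be moved back to a vertical line with positive real part. The second formula follows in the same way after replacing \(\zeta \) by \(-\zeta \).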

The proof of (5.3) is identical, with \(D_1\) replaced by \(D_3\) satisfying (5.37). The integral formula for \(T_1\) reads

$$\begin{aligned} T_1(x,y)= & {} \frac{\alpha e^{\delta (y-x)}}{(2\pi \mathrm {i})^4}\int _{\Gamma _{D_3}}dz\int _{\Gamma _{D_2}}dw \int _{\Gamma _{-d_1}}d\zeta \int _{\Gamma _{-d_2}}d\omega \nonumber \\&\quad \times \frac{G_{\xi _1,\eta _1}(z)G_{\Delta \xi ,\Delta \eta }(w)}{G_{\xi _1-x,\eta _1}(\zeta )G_{\Delta \xi +\alpha y,\Delta \eta }(\omega )(z-\zeta )(w-\omega )(z-\alpha w)}.\quad \end{aligned}$$
(5.41)

The other cases are treated similarly. For \(S_2\) and \(S_3\) we get the formulas

$$\begin{aligned} S_2(x,y)=\frac{\alpha e^{\delta (y-x)}}{(2\pi \mathrm {i})^2}\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_2}}d\omega \frac{G_{\Delta \xi +\alpha x,\Delta \eta }(w)}{G_{\Delta \xi +\alpha y,\Delta \eta }(\omega )(w-\omega )}, \end{aligned}$$
(5.42)

and

$$\begin{aligned} S_3(x,y)=\frac{e^{\delta (y-x)}}{(2\pi \mathrm {i})^2}\int _{\Gamma _{D_1}}dz\int _{\Gamma _{-d_1}}d\zeta \frac{G_{\xi _1-y,\eta _1}(z)}{G_{\xi _1-x,\eta _1}(\zeta )(z-\zeta )}. \end{aligned}$$
(5.43)

This proves Lemma 5.1. \(\square \)

Proof of Lemma 5.2

Consider first \(\tilde{A}_{1,T}\). By Lemma 5.4, we can choose \(d_1\) and \(d_2\), with \(d_1<\alpha d_2\), so that

$$\begin{aligned} |H^*_{n,m,a}(w_2(\sigma _1,d_1))|&\le C_3e^{-C_4\sigma _1^2}, \quad |\sigma _1|\le \pi K_1^{1/3},\nonumber \\ |H^*_{\Delta n,\Delta m,\Delta a}(w_2(\sigma _2,d_2))|&\le C_3e^{-C_4\sigma _2^2}, \quad |\sigma _2|\le \pi K_2^{1/3}, \end{aligned}$$
(5.44)

where \(C_3,C_4\) are some positive constants independent of \(\sigma _1\) and \(\sigma _2\). By Lemma 5.5, we can choose \(d=d_3(x)\ge C_0\), and \(d=d_4(y)\ge C_0\), so that

$$\begin{aligned} |H^*_{n+[c_0x(t_1T)^{1/3}]+1,m,a}(w_1(\sigma _3,d_3(x)))|^{-1}&\le C_1e^{-C_2\sigma _3^2-\mu _1(-x)_+^{3/2}+\mu _2(x)_+},\nonumber \\ |H^*_{\Delta n-[c_0y(t_1T)^{1/3}],\Delta m,\Delta a}(w_1(\sigma _4,d_4(y)))|^{-1}&\le C_1e^{-C_2\sigma _4^2-\mu _1(y)_+^{3/2}+\mu _2(-y)_+}. \end{aligned}$$
(5.45)

It is not difficult to check that if \(z=w_2(\sigma _1,d_1)\), \(w=w_2(\sigma _2,d_2)\), \(\zeta =w_1(\sigma _3,d_3(x))\) and \(\omega =w_1(\sigma _4,d_4(y))\), then there is a constant \(C_5\) so that

$$\begin{aligned} |z-\zeta |\ge C_5K_1^{-1/3}, \quad |w-\omega |\ge C_5K_2^{-1/3}, \end{aligned}$$

and

$$\begin{aligned} |z-w|\ge \sqrt{q}|d_1-\alpha d_2|K_1^{-1/3}\ge C_5K_1^{-1/3}. \end{aligned}$$

Introducing these parametrizations into (5.25) and using the estimates above, we find

$$\begin{aligned} |\tilde{A}_{1,T}(x,y)|&\le Ce^{-\delta (x-y)-\mu _1(-x)_+^{3/2}+\mu _2(x)_+-\mu _1(y)_+^{3/2}+\mu _2(-y)_+}\nonumber \\&\quad \times \int _{\mathbb {R}^4} e^{-C_4(\sigma _1^2+\sigma _2^2+\sigma _3^2+\sigma _4^2)}d^4\sigma \nonumber \\&\le Ce^{-\delta (x-y)-\mu _1(-x)_+^{3/2}+\mu _2(x)_+-\mu _1(y)_+^{3/2}+\mu _2(-y)_+}. \end{aligned}$$
(5.46)

We see that for large enough |x|, we can choose \(\delta \) so large that

$$\begin{aligned} -\mu _1(-x)_+^{3/2}+\mu _2(x)_+-\delta x\le -C_1(-x)_+^{3/2}-C_2(x)_+ \end{aligned}$$

for some positive constants \(C_1,C_2\). This proves the estimate for \(\tilde{A}_{1,T}\). The proof for \(\tilde{B}_{1,T}\) is completely analogous.

Consider now \(\tilde{A}_{3,T}\),

$$\begin{aligned} \tilde{A}_{3,T}(x,y)= & {} \frac{c_0(t_1T)^{1/3}e^{-\delta (x-y)}1(y<0)}{(2\pi \mathrm {i})^2}\int _{\gamma _{\rho _1}(1)}dz\int _{\gamma _{\tau _1}}d\zeta \nonumber \\&\times \frac{H^*_{n+[c_0y(t_1T)^{1/3}],m,a}(z)(1-\zeta )}{H^*_{n+[c_0x(t_1T)^{1/3}]+1,m,a}(\zeta )(1-z)(z-\zeta )}. \end{aligned}$$
(5.47)

Using Lemma 5.5, we see that, just as for \(\tilde{A}_{1,T}\), we can choose \(d_1(y)\) and \(d_2(x)\) so that

$$\begin{aligned} |H^*_{n+[c_0y(t_1T)^{1/3}],m,a}(w_2(\sigma _1,d_1(y)))|&\le C_1e^{-C_2\sigma _1^2-\mu _1(-y)_+^{3/2}+\mu _2(y)_+},\\ |H^*_{n+[c_0x(t_1T)^{1/3}]+1,m,a}(w_1(\sigma _2,d_2(x)))^{-1}|&\le C_1e^{-C_2\sigma _2^2-\mu _1(-x)_+^{3/2}+\mu _2(x)_+}, \end{aligned}$$

and we get the estimate

$$\begin{aligned} |\tilde{A}_{3,T}(x,y)|\le Ce^{-\mu _1(-x)_+^{3/2}+\mu _2(x)_+-\delta x-\mu _1(-y)_+^{3/2}+\delta y}1(y<0). \end{aligned}$$

This gives us the estimate we want by choosing \(\delta \) large enough. The proof for \(\tilde{A}_{2,T}\) is analogous. \(\square \)

The statements in Lemma 5.4 and in Lemma 5.5 are consequences of two other lemmas that we will now state and prove. The first lemma is concerned with the decay along the paths given by \(w_1(\sigma )\) and \(w_2(\sigma )\).

Lemma 5.6

Assume that we have the scaling (5.9) and let \(|\xi |,|\eta |\le L\) for some fixed \(L>0\). There are positive constants \(C_1,C_2,C_3,C_4\) that only depend on q and L, so that if

$$\begin{aligned} C_1\le d\le C_2K^{1/3} \end{aligned}$$
(5.48)

then for \(|\sigma |\le \pi K^{1/3}\),

$$\begin{aligned} \left| \frac{H_{k,\ell ,b}(w_1(\sigma ;d))}{H_{k,\ell ,b}(w_1(0;d))}\right| ^{-1}\le C_3 e^{-C_4d\sigma ^2}, \end{aligned}$$
(5.49)

for all \(v\in \mathbb {R}\). Furthermore, for \(|\sigma |\le \pi K^{1/3}\),

$$\begin{aligned} \left| \frac{H_{k,\ell ,b}(w_2(\sigma ;d))}{H_{k,\ell ,b}(w_1(0;d))}\right| \le C_3 e^{-C_4d\sigma ^2}, \end{aligned}$$
(5.50)

for all \(v\le 0\) such that \(k\ge 0\), and all v such that \(|v|\le L\).

Proof

Recall the definition of f(w) in (5.10) and the parametrizations (5.19) and (5.20). Define

$$\begin{aligned} g_r(\sigma )= & {} \text {Re}\,f(w_r(\sigma ))= k\log |w_r(\sigma )|+(b+\ell )\log |1-w_r(\sigma )|\nonumber \\&-\ell \log \left| 1-\frac{w_r(\sigma )}{1-q}\right| , \end{aligned}$$
(5.51)

\(r=1,2\), \(|\sigma |\le \pi K^{1/3}\). Note that for any real numbers \(\alpha ,\beta \),

$$\begin{aligned} \frac{d}{d\sigma }\log |1-\alpha e^{\mathrm {i}\beta \sigma }|=\frac{\alpha \beta \sin {\beta \sigma }}{(1-\alpha )^2+4\alpha \sin ^2(\beta \sigma /2)}. \end{aligned}$$
(5.52)
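This identity is elementary: writing the modulus explicitly,

$$\begin{aligned} \log |1-\alpha e^{\mathrm {i}\beta \sigma }|=\frac{1}{2}\log \left( 1-2\alpha \cos \beta \sigma +\alpha ^2\right) =\frac{1}{2}\log \left( (1-\alpha )^2+4\alpha \sin ^2\frac{\beta \sigma }{2}\right) , \end{aligned}$$

and differentiating with respect to \(\sigma \) gives (5.52).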

Let \(\beta =K^{-1/3}\), \(\alpha _1=w_c(1-dK^{-1/3})\), \(\alpha _2=\alpha _1/(1-q)\). Then a computation using (5.51) and (5.52) gives

$$\begin{aligned} g_1'(\sigma )=\frac{(b+\ell )\alpha _1(1-\alpha _2)^2-\ell \alpha _2(1-\alpha _1)^2+4b\alpha _1\alpha _2\sin ^2\frac{\beta \sigma }{2}}{((1-\alpha _1)^2+4\alpha _1\sin ^2\frac{\beta \sigma }{2})((1-\alpha _2)^2+4\alpha _2\sin ^2\frac{\beta \sigma }{2})}\beta \sin \beta \sigma .\qquad \end{aligned}$$
(5.53)

By symmetry it is enough to consider \(0\le \sigma \le \pi K^{1/3}\). We have to compute

$$\begin{aligned}&(b+\ell )\alpha _1(1-\alpha _2)^2-\ell \alpha _2(1-\alpha _1)^2\nonumber \\&\quad =\frac{\alpha _1}{1-q}[(1-q)(b+\ell )(1-\alpha _2)^2-\ell (1-\alpha _1)^2]. \end{aligned}$$
(5.54)

Now,

$$\begin{aligned} 1-\alpha _1=\sqrt{q}+(1-\sqrt{q})d\beta ,\quad 1-\alpha _2=\frac{1}{1+\sqrt{q}}(\sqrt{q}+d\beta ), \end{aligned}$$

and using (5.9) a computation gives

$$\begin{aligned}&(1-q)(b+\ell )(1-\alpha _2)^2-\ell (1-\alpha _1)^2\\&\quad =\left( 2qd-\frac{2c_1q^{3/2}}{1+\sqrt{q}}\eta \right) K^{2/3}+\left( \sqrt{q}d^2-\frac{2c_1q(1-\sqrt{q})}{1+\sqrt{q}}\eta d+ \frac{c_3(1-\sqrt{q})q}{1+\sqrt{q}}\xi \right) K^{1/3}\\&\qquad -\frac{c_1\sqrt{q}(1-\sqrt{q})}{1+\sqrt{q}}\eta d^2+\frac{2c_3\sqrt{q}(1-\sqrt{q})}{1+\sqrt{q}}\xi d+\frac{c_3(1-\sqrt{q})}{1+\sqrt{q}}\xi d^2K^{-1/3}. \end{aligned}$$

Since \(|\xi |,|\eta |\le L\), we see that

$$\begin{aligned} (1-q)(b+\ell )(1-\alpha _2)^2-\ell (1-\alpha _1)^2\ge qdK^{2/3}+\Delta _1K^{2/3}+\Delta _2K^{1/3}, \end{aligned}$$
(5.55)

where

$$\begin{aligned} \Delta _1&=qd-\frac{2c_1q^{3/2}}{1+\sqrt{q}}L,\\ \Delta _2&=\sqrt{q}d^2-\frac{2c_1q(1-\sqrt{q})}{1+\sqrt{q}}Ld-\frac{c_3(1-\sqrt{q})q}{1+\sqrt{q}}L-\frac{c_1\sqrt{q}(1-\sqrt{q})}{1+\sqrt{q}}Ld^2K^{-1/3}\\&\quad -\frac{2c_3\sqrt{q}(1-\sqrt{q})}{1+\sqrt{q}}LdK^{-1/3}-\frac{c_3(1-\sqrt{q})}{1+\sqrt{q}}L d^2K^{-2/3}. \end{aligned}$$

We note that we can choose \(C_1\) and \(C_2\), depending only on q and L, so that if \(C_1\le d\le C_2K^{1/3}\), then \(\Delta _1\ge 0\) and \(\Delta _2\ge 0\), and also

$$\begin{aligned} \frac{\alpha _1}{1-q}\ge \frac{w_c}{2(1-q)}=\frac{1}{2(1+\sqrt{q})}. \end{aligned}$$

Thus, we see from (5.54) and (5.55) that

$$\begin{aligned} (b+\ell )\alpha _1(1-\alpha _2)^2-\ell \alpha _2(1-\alpha _1)^2\ge \frac{q}{2(1+\sqrt{q})}dK^{2/3} \end{aligned}$$

provided that \(C_1\le d\le C_2K^{1/3}\). Consequently, by (5.53),

$$\begin{aligned} g_1'(\sigma )\ge \frac{qdK^{1/3}\sin (K^{-1/3}\sigma )}{2(1+\sqrt{q})(1+\alpha _1)^2(1+\alpha _2)^2}\ge \frac{q}{8(1+\sqrt{q})}dK^{1/3}\sin (K^{-1/3}\sigma )\qquad \end{aligned}$$
(5.56)

since

$$\begin{aligned} (1+\alpha _1)^2(1+\alpha _2)^2\le 4. \end{aligned}$$

It follows, by integration, that, for \(0\le \sigma \le \pi K^{1/3}\),

$$\begin{aligned} g_1(\sigma )-g_1(0)\ge & {} \frac{q}{4(1+\sqrt{q})}dK^{2/3}\sin ^2\left( \frac{\sigma }{2K^{1/3}}\right) \nonumber \\\ge & {} \frac{q}{4(1+\sqrt{q})}dK^{2/3}\left( \frac{2\sigma }{2\pi K^{1/3}}\right) ^2 =\frac{q}{4\pi ^2(1+\sqrt{q})}d\sigma ^2, \end{aligned}$$

since by convexity \(\sin t\ge 2t/\pi \) for \(0\le t\le \pi /2\). This proves the estimate (5.49).
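Here the integration step uses the elementary identity

$$\begin{aligned} \int _0^\sigma \beta \sin (\beta s)\,ds=1-\cos \beta \sigma =2\sin ^2\frac{\beta \sigma }{2}, \end{aligned}$$

with \(\beta =K^{-1/3}\), together with the convexity bound for the sine function just mentioned.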

Next, we turn to the proof of (5.50) which is similar. In this case we get

$$\begin{aligned} g_2'(\sigma )=\frac{d}{d\sigma }\left( k\log |1-\sqrt{q}(1-d\beta )e^{\mathrm {i}\beta \sigma }|-\ell \log |1-\frac{1}{\sqrt{q}}(1-d\beta )e^{\mathrm {i}\beta \sigma }|\right) , \end{aligned}$$

where \(\beta =K^{-1/3}\). Let \(\alpha _1=\sqrt{q}(1-d\beta )\), \(\alpha _2=\alpha _1/q\). Then, using (5.52), we obtain

$$\begin{aligned} g_2'(\sigma )=\frac{k\alpha _1(1-\alpha _2)^2-\ell \alpha _2(1-\alpha _1)^2+4(k-\ell )\alpha _1\alpha _2\sin ^2\frac{\beta \sigma }{2}}{((1-\alpha _1)^2+4\alpha _1\sin ^2\frac{\beta \sigma }{2})((1-\alpha _2)^2+4\alpha _2\sin ^2\frac{\beta \sigma }{2})}\beta \sin \beta \sigma .\qquad \end{aligned}$$
(5.57)

Now,

$$\begin{aligned} k\alpha _1(1-\alpha _2)^2-\ell \alpha _2(1-\alpha _1)^2=\frac{\alpha _1}{q}\left[ kq(1-\alpha _2)^2-\ell (1-\alpha _1)^2\right] , \end{aligned}$$
(5.58)

and a computation gives

$$\begin{aligned} kq(1-\alpha _2)^2-\ell (1-\alpha _1)^2=-3(1-\sqrt{q})dK^{2/3}-\Delta K^{2/3}, \end{aligned}$$

where

$$\begin{aligned} \Delta&=(1-\sqrt{q})d+2c_1(1-\sqrt{q})^2\eta -(1-q)dK^{-1/3}-c_0(1-\sqrt{q})^2vK^{-1/3}\nonumber \\&\quad +\,2c_1(1+q)\eta dK^{-2/3}+2c_0(1-\sqrt{q})vdK^{-2/3}-c_0vd^2K^{-1}. \end{aligned}$$
(5.59)

If \(|\xi |,|\eta |,|v|\le L\), we see that we can choose \(C_1, C_2\), depending only on q and L, so that if \(C_1\le d\le C_2K^{1/3}\), then \(\Delta \ge 0\), and we obtain

$$\begin{aligned} kq(1-\alpha _2)^2-\ell (1-\alpha _1)^2\le -3(1-\sqrt{q})dK^{2/3}. \end{aligned}$$
(5.60)

If \(|\xi |,|\eta |\le L\) and \(v\le 0\), we can also choose \(C_1, C_2\) so that \(\Delta \ge 0\) if \(C_1\le d\le C_2K^{1/3}\). Also, we see that

$$\begin{aligned} 4(k-\ell )\alpha _1\alpha _2\sin ^2\frac{\beta \sigma }{2}&=4\left( -2c_1\eta K^{2/3}+c_0vK^{1/3}\right) \alpha _1\alpha _2\sin ^2\frac{\sigma }{2K^{1/3}}\nonumber \\&\le 8(c_0+c_1)L\alpha _1\alpha _2K^{2/3}\le 8(c_0+c_1)LK^{2/3}, \end{aligned}$$
(5.61)

if \(v\le 0\) or \(|v|\le L\). Assume that \(C_2\) is such that \(\alpha _1\ge \sqrt{q}/2\). Then (5.58), (5.60) and (5.61) give

$$\begin{aligned}&k\alpha _1(1-\alpha _2)^2-\ell \alpha _2(1-\alpha _1)^2+4(k-\ell )\alpha _1\alpha _2\sin ^2\frac{\beta \sigma }{2}\\&\quad \le -\frac{1}{\sqrt{q}}(1-\sqrt{q})dK^{2/3}+\left( -\frac{1-\sqrt{q}}{2\sqrt{q}}d+8(c_0+c_1)L\right) K^{2/3}\\&\quad \le -\frac{1}{\sqrt{q}}(1-\sqrt{q})dK^{2/3}, \end{aligned}$$

if we choose \(C_1\) so that

$$\begin{aligned} -\frac{1-\sqrt{q}}{2\sqrt{q}}d+8(c_0+c_1)L\le 0 \end{aligned}$$

for \(d\ge C_1\). Since \(\alpha _1\le \sqrt{q}\) and \(\alpha _2\le 1/\sqrt{q}\),

$$\begin{aligned} \frac{1}{(1+\alpha _1)^2(1+\alpha _2)^2}\ge \frac{1}{(2+\sqrt{q}+1/\sqrt{q})^2}, \end{aligned}$$

and (5.57) gives

$$\begin{aligned} g_2'(\sigma )\le -\frac{1-\sqrt{q}}{\sqrt{q}(2+\sqrt{q}+1/\sqrt{q})^2}dK^{1/3}\sin (K^{-1/3}\sigma ). \end{aligned}$$

We can now proceed, as for \(g_1\), to prove that

$$\begin{aligned} g_2(\sigma )-g_2(0)\le -\frac{1-\sqrt{q}}{\pi ^2\sqrt{q}(2+\sqrt{q}+1/\sqrt{q})^2}d\sigma ^2. \end{aligned}$$

This completes the proof of the Lemma. \(\square \)

The next Lemma is concerned with the decay for large |v|.

Lemma 5.7

Assume that we have the scaling (5.9) and that v is such that \(k\ge 0\), which will always be the case. Also, assume that \(|\xi |,|\eta |\le L\) for some \(L>0\). There are positive constants \(\mu _1,\mu _2,\mu _3\) that only depend on q and L, and a choice \(d=d(v)\) satisfying (5.48) so that

$$\begin{aligned} \left| \frac{H_{k,\ell ,b}(w_c)}{H_{k,\ell ,b}(w_1(0;d(v)))}\right| \le \mu _3e^{-\mu _1(-v)_+^{3/2}+\mu _2(v)_+}. \end{aligned}$$
(5.62)

There is also a choice \(d=d(v)\) satisfying (5.48) so that

$$\begin{aligned} \left| \frac{H_{k,\ell ,b}(w_2(0;d(v)))}{H_{k,\ell ,b}(w_c)}\right| \le \mu _3e^{-\mu _1(-v)_+^{3/2}+\mu _2(v)_+}. \end{aligned}$$
(5.63)

If we assume that \(|v|\le L\), we can choose d independent of v in some interval so that (5.62) and (5.63) hold.

Proof

Using (5.51) we see that

$$\begin{aligned} \left| \frac{H_{k,\ell ,b}(w_1(0;d(v)))}{H_{k,\ell ,b}(w_c)}\right| =e^{g_1(0)-\log f(w_c)}, \end{aligned}$$

so we want to estimate \(g_1(0)-\log f(w_c)\) from below, and then make a good choice of d. We see that

$$\begin{aligned} g_1(0)-\log f(w_c)= & {} k\log (1-dK^{-1/3})+(b+\ell )\log \left( 1+\frac{1-\sqrt{q}}{\sqrt{q}}dK^{-1/3}\right) \nonumber \\&-\ell \log \left( 1+\frac{1}{\sqrt{q}}dK^{-1/3}\right) . \end{aligned}$$
(5.64)

To estimate this expression, we will use the inequalities

$$\begin{aligned} -x-\frac{x^2}{2}-\frac{2x^3}{3}\le \log (1-x)\le -x-\frac{x^2}{2}-\frac{x^3}{3}, \end{aligned}$$
(5.65)

for \(0\le x\le 1/2\), and

$$\begin{aligned} x-\frac{x^2}{2}\le \log (1+x)\le x-\frac{x^2}{2}+\frac{x^3}{3}, \end{aligned}$$
(5.66)

for \(x\ge 0\). It follows from (5.64) and these inequalities that

$$\begin{aligned} g_1(0)-\log f(w_c)&\ge k\left( -dK^{-1/3}-\frac{1}{2}d^2K^{-2/3}-\frac{2}{3}d^3K^{-1}\right) \\&\quad +\,(b+\ell )\left( \frac{1-\sqrt{q}}{\sqrt{q}}dK^{-1/3}-\frac{1}{2}\left( \frac{1-\sqrt{q}}{\sqrt{q}}\right) ^2d^2K^{-2/3} \right) \\&\quad +\,\ell \left( -\frac{1}{\sqrt{q}}dK^{-1/3}+\frac{1}{2q}d^2K^{-2/3}-\frac{1}{3q^{3/2}}d^3K^{-1}\right) \end{aligned}$$

Substitute the expressions in (5.9). After some manipulation this gives

$$\begin{aligned}&g_1(0)-\log f(w_c)\nonumber \\&\quad \ge \left( -c_0v+\frac{1-\sqrt{q}}{\sqrt{q}}c_3\xi \right) d \nonumber \\&\qquad + \left( \frac{1}{\sqrt{q}}c_1\eta -\frac{1}{2}c_0vK^{-1/3}-\frac{(1-\sqrt{q})^2}{2q}c_3\xi K^{-1/3}\right) d^2\nonumber \\&\qquad +\left( -\frac{2}{3}-\frac{1}{3q^{3/2}}+\left( \frac{2}{3}-\frac{1}{3q^{3/2}}\right) c_1\eta K^{-1/3}-\frac{2}{3}c_0vK^{-2/3}\right) d^3\nonumber \\&\quad \ge \left( -c_0v-\frac{1-\sqrt{q}}{\sqrt{q}}c_3L\right) d\nonumber \\&\qquad +\left( -\frac{1}{\sqrt{q}}c_1L-\frac{1}{2}c_0vK^{-1/3}-\frac{(1-\sqrt{q})^2}{2q}c_3LK^{-1/3}\right) d^2\nonumber \\&\qquad +\left( -\frac{2}{3}-\frac{1}{3q^{3/2}}-\left| \frac{2}{3}-\frac{1}{3q^{3/2}}\right| c_1LK^{-1/3}-\frac{2}{3}c_0vK^{-2/3}\right) d^3. \end{aligned}$$
(5.67)
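The elementary inequalities (5.65) and (5.66) used above follow by integrating the bounds \(1+t+t^2\le (1-t)^{-1}\le 1+t+2t^2\), valid for \(0\le t\le 1/2\), and \(1-t\le (1+t)^{-1}\le 1-t+t^2\), valid for \(t\ge 0\), over \([0,x]\). For example,

$$\begin{aligned} -\log (1-x)=\int _0^x\frac{dt}{1-t}\le \int _0^x(1+t+2t^2)\,dt=x+\frac{x^2}{2}+\frac{2x^3}{3}, \end{aligned}$$

which is the lower bound in (5.65); the other bounds are obtained in the same way.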

If \(|v|\le L\), we see that if we choose d so that \(C_1'\le d\le C_2'\), then

$$\begin{aligned} g_1(0)-\log f(w_c)\ge -C_3'. \end{aligned}$$

Here \(C_1',C_2', C_3'\) only depend on q and L. If \(v\le 0\), then it follows from (5.67) that

$$\begin{aligned} g_1(0)-\log f(w_c)&\ge \left( -c_0v-\frac{1-\sqrt{q}}{\sqrt{q}}c_3L\right) d\nonumber \\&\quad +\left( -\frac{1}{\sqrt{q}}c_1L-\frac{(1-\sqrt{q})^2}{2q}c_3LK^{-1/3}\right) d^2\nonumber \\&\quad +\left( -\frac{2}{3}-\frac{1}{3q^{3/2}}-\left| \frac{2}{3}-\frac{1}{3q^{3/2}}\right| c_1LK^{-1/3}\right) d^3. \end{aligned}$$
(5.68)

Choose \(d=\epsilon \sqrt{-v}\). Then, by (5.68),

$$\begin{aligned} g_1(0)-\log f(w_c)&\ge c_0\epsilon (-v)^{3/2}\left[ 1-\left( \frac{1-\sqrt{q}}{\sqrt{q}}\right) \frac{c_3L}{-v}\right. \nonumber \\&\quad -\left( \frac{1}{\sqrt{q}}c_1L+\frac{(1-\sqrt{q})^2}{2q}\frac{c_3L}{K^{1/3}}\right) \epsilon ^2\frac{1}{\sqrt{-v}}\nonumber \\&\quad -\left. \left( \frac{2}{3}+\frac{1}{3q^{3/2}}+\left| \frac{2}{3}-\frac{1}{3q^{3/2}}\right| \frac{c_1L}{K^{1/3}}\right) \epsilon ^2\right] \end{aligned}$$
(5.69)

Choose \(D_1\) large, depending only on q and L, so that

$$\begin{aligned} \left( \frac{1-\sqrt{q}}{\sqrt{q}}\right) \frac{c_3L}{-v}\le \frac{1}{4},\quad \left( \frac{1}{\sqrt{q}}c_1L+\frac{(1-\sqrt{q})^2}{2q}\frac{c_3L}{K^{1/3}}\right) \frac{1}{\sqrt{-v}}\le 1, \end{aligned}$$

if \(\sqrt{-v}\ge D_1\). Since \(k\ge 0\), there is a constant \(D_2\) so that \(\sqrt{-v}\le D_2K^{1/3}\). The condition (5.48) becomes

$$\begin{aligned} \frac{C_1}{\sqrt{-v}}\le \epsilon \le \frac{C_2K^{1/3}}{\sqrt{-v}}, \end{aligned}$$

which is satisfied if

$$\begin{aligned} \frac{C_1}{D_1}\le \epsilon \le \frac{C_2}{D_2}. \end{aligned}$$
(5.70)

We can choose \(D_1\) so large that \(C_1/D_1\) is as small as we want, and hence we can choose \(\epsilon \) so small that

$$\begin{aligned} \left( 1+\frac{2}{3}+\frac{1}{3q^{3/2}}+\left| \frac{2}{3}-\frac{1}{3q^{3/2}}\right| \frac{c_1L}{K^{1/3}}\right) \epsilon ^2\le \frac{1}{4}. \end{aligned}$$

It then follows from (5.69) that

$$\begin{aligned} g_1(0)-\log f(w_c)\ge \frac{1}{2}c_0\epsilon (-v)^{3/2} \end{aligned}$$

for \(\sqrt{-v}\ge D_1\). By adjusting \(\mu _3\), we see that (5.62) holds if \(v\le 0\).

If \(v\ge 0\), we choose a d satisfying (5.48), depending on q and L but not on v or K. It follows from (5.67) that there are constants \(\mu _2\) and \(\mu _3'\), so that

$$\begin{aligned} g_1(0)-\log f(w_c)\ge -\mu _2(v)_+-\mu _3'. \end{aligned}$$

Hence (5.62) holds also when \(v\ge 0\).

To prove (5.63) we consider instead

$$\begin{aligned}&g_2(0)-\log f(w_c)\\&\quad =k\log \left( 1+\frac{\sqrt{q}}{1-\sqrt{q}}dK^{-1/3}\right) +(b+\ell )\log (1-dK^{-1/3})\nonumber \\&\qquad -\ell \log \left( 1-\frac{1}{1-\sqrt{q}}dK^{-1/3}\right) \\&\quad \le k\left( \frac{\sqrt{q}}{1-\sqrt{q}}dK^{-1/3}-\frac{q}{2(1-\sqrt{q})^2}d^2K^{-2/3}+\frac{q^{3/2}}{(1-\sqrt{q})^3}d^3K^{-1}\right) \\&\qquad +(b+\ell )\left( -dK^{-1/3}-\frac{1}{2}d^2K^{-2/3}-\frac{1}{3}d^3K^{-1}\right) \\&\qquad +\ell \left( \frac{1}{1-\sqrt{q}}dK^{-1/3}+\frac{1}{2(1-\sqrt{q})^2}d^2K^{-2/3}+\frac{2}{(1-\sqrt{q})^3}d^3K^{-1}\right) , \end{aligned}$$

by (5.65) and (5.66). Into this estimate we insert the expressions in (5.9), and after some computation we get

$$\begin{aligned}&g_2(0)-\log f(w_c)\nonumber \\&\quad \le \left( \frac{\sqrt{q}}{1-\sqrt{q}}c_0v-c_3\xi \right) d\\&\quad \quad +\frac{1}{2(1-\sqrt{q})^2}\left( 2\sqrt{q}c_1\eta -qc_0vK^{-1/3}+c_3(1-\sqrt{q})^2\xi K^{-1/3}\right) d^2\\&\quad \quad +\frac{1}{3(1-\sqrt{q})^3}\left( 1+\sqrt{q}+q+(1+3\sqrt{q}-3q)c_1\eta K^{-1/3}\right. \nonumber \\&\quad \quad \left. +q^{3/2}c_0vK^{-2/3}-c_3(1-\sqrt{q})^3\xi K^{-2/3}\right) d^3. \end{aligned}$$

We can now proceed in analogy with the previous case to show (5.63). \(\square \)

6 More formulas for the two-time distribution

In this section we give an alternative formula for the two-time distribution, see Proposition 6.1 below.

Recall the notation (5.38),

$$\begin{aligned} G_{\xi ,\eta }(z)=e^{\frac{1}{3} z^3+\eta z^2-\xi z}. \end{aligned}$$
(6.1)

Looking at (5.40), we see that it is natural to write

$$\begin{aligned} \text {Ai}\,_{\xi ,\eta }(x,y)=\text {Ai}\,(\xi +\eta ^2+x+y)e^{(\xi +x+y)\eta +\frac{2}{3}\eta ^3}, \end{aligned}$$
(6.2)

since we then get the formulas

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\Gamma _D}G_{\xi +x+y,\eta }(z)\,dz&=\text {Ai}\,_{\xi ,\eta }(x,y), \end{aligned}$$
(6.3)
$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\Gamma _{-d}}\frac{d\zeta }{G_{\xi +x+y,\eta }(\zeta )}&=\text {Ai}\,_{\xi ,-\eta }(x,y), \end{aligned}$$
(6.4)

for any \(d,D>0\). We can think of (6.2) as the kernel of an integral operator on \(L^2(\mathbb {R}_+)\).

In order to give a different formula for the two-time distribution, we need to define several kernels. We will write

$$\begin{aligned} \alpha '=(1+\alpha ^3)^{1/3}=\left( \frac{t_2}{\Delta t}\right) ^{1/3}. \end{aligned}$$
(6.5)

Let

$$\begin{aligned} M_1(v_1,v_2)&=\frac{e^{\delta (v_1-v_2)}}{(2\pi \mathrm {i})^2}\int _{\Gamma _D}dz\int _{\Gamma _{-d}}d\zeta \frac{G_{\xi _1+v_1,\eta _1}(z)}{G_{\xi _1+v_2,\eta _1}(\zeta )(z-\zeta )}\nonumber \\&=e^{\delta (v_1-v_2)}\int _0^\infty \text {Ai}\,_{\xi _1,\eta _1}(v_1,\lambda )\text {Ai}\,_{\xi _1,-\eta _1}(\lambda ,v_2)\,d\lambda , \end{aligned}$$
(6.6)
$$\begin{aligned} M_2(v_1,v_2)&=\frac{1}{(2\pi \mathrm {i})^2\alpha '}\int _{\Gamma _D}dz\int _{\Gamma _{-d}}d\zeta \frac{G_{\xi _2+v_2/\alpha ',\eta _2}(z)}{G_{\xi _2+v_1/\alpha ',\eta _2}(\zeta )(z-\zeta )}\nonumber \\&=\frac{1}{\alpha '}\int _0^\infty \text {Ai}\,_{\xi _2,-\eta _2}(v_1/\alpha ',\lambda )\text {Ai}\,_{\xi _2,\eta _2}(\lambda ,v_2/\alpha ')\,d\lambda , \end{aligned}$$
(6.7)

and

$$\begin{aligned} M_3(v_1,v_2)&=\frac{1}{(2\pi \mathrm {i})^2}\int _{\Gamma _D}dz\int _{\Gamma _{-d}}d\zeta \frac{G_{\Delta \xi +v_2,\Delta \eta }(z)}{G_{\Delta \xi +v_1,\Delta \eta }(\zeta )(z-\zeta )}\nonumber \\&=\int _0^\infty \text {Ai}\,_{\Delta \xi ,-\Delta \eta }(v_1,\lambda )\text {Ai}\,_{\Delta \xi ,\Delta \eta }(\lambda ,v_2)\,d\lambda , \end{aligned}$$
(6.8)

We will also need the following kernels. Let

$$\begin{aligned} 0<d_1<\alpha d_2<d_3,\quad 0<D_1<\alpha D_2<D_3. \end{aligned}$$
(6.9)

Define

$$\begin{aligned}&k_1(v_1,v_2)\nonumber \\&\quad =\frac{\alpha }{(2\pi \mathrm {i})^4}\int _{\Gamma _{D_3}}dz\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_3}}d\zeta \int _{\Gamma _{-d_2}}d\omega \nonumber \\&\qquad \times \frac{G_{\xi _1,\eta _1}(z)G_{\Delta \xi +v_2,\Delta \eta }(w)}{G_{\xi _1,\eta _1}(\zeta )G_{\Delta \xi +v_1,\Delta \eta }(\omega )(z-\zeta )(z-\alpha w)(\alpha \omega -\zeta )}\nonumber \\&\quad =\alpha \int _{\mathbb {R}_+^3}\text {Ai}\,_{\Delta \xi ,-\Delta \eta }(v_1,-\alpha \lambda _1)\text {Ai}\,_{\xi _1,-\eta _1}(\lambda _1,\lambda _2)\text {Ai}\,_{\xi _1,\eta _1}(\lambda _2,\lambda _3)\nonumber \\&\qquad \times \text {Ai}\,_{\Delta \xi ,\Delta \eta }(-\alpha \lambda _3,v_2)\,d^3\lambda , \end{aligned}$$
(6.10)
$$\begin{aligned}&k_2(v_1,v_2)\nonumber \\&\quad =\frac{\alpha }{(2\pi \mathrm {i})^3}\int _{\Gamma _{D_3}}dz\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_2}}d\omega \frac{G_{\xi _1,\eta _1}(z)G_{\Delta \xi +v_2,\Delta \eta }(w)}{G_{\xi _2+v_1/\alpha ',\eta _2}(\omega )(\alpha 'z-\alpha \omega )(z-\alpha w)}\nonumber \\&\quad =\alpha \int _{\mathbb {R}_+^2}\text {Ai}\,_{\xi _2,-\eta _2}(\frac{v_1}{\alpha '},\alpha \lambda _1)\text {Ai}\,_{\xi _1,\eta _1}(\alpha '\lambda _1,\lambda _2) \text {Ai}\,_{\Delta \xi ,\Delta \eta }(-\alpha \lambda _2,v_2)\,d^2\lambda , \end{aligned}$$
(6.11)
$$\begin{aligned} k_3(v_1,v_2)&=\frac{\alpha e^{-\delta v_2}}{(2\pi \mathrm {i})^2}\int _{\Gamma _{-d_3}}d\zeta \int _{\Gamma _{-d_2}}d\omega \frac{1}{G_{\xi _1+v_2,\eta _1}(\zeta )G_{\Delta \xi +v_1,\Delta \eta }(\omega )(\alpha \omega -\zeta )}\nonumber \\&=\alpha e^{-\delta v_2}\int _{\mathbb {R}_+}\text {Ai}\,_{\Delta \xi ,-\Delta \eta }(v_1,-\alpha \lambda )\text {Ai}\,_{\xi _1,-\eta _1}(\lambda ,v_2)\,d\lambda , \end{aligned}$$
(6.12)
$$\begin{aligned}&k_4(v_1,v_2)=\frac{\alpha e^{-\delta v_2}}{\alpha '2\pi \mathrm {i}}\int _{\Gamma _{-d_2}}\frac{d\omega }{G_{\xi _2+(v_1+\alpha v_2)/\alpha ',\eta _2}(\omega )}\nonumber \\&\quad =e^{-\delta v_2}\frac{\alpha }{\alpha '}\text {Ai}\,_{\xi _2,-\eta _2}\left( \frac{v_1}{\alpha '},\frac{\alpha v_2}{\alpha '}\right) , \end{aligned}$$
(6.13)
$$\begin{aligned}&k_5(v_1,v_2)\nonumber \\&\quad =\frac{\alpha }{(2\pi \mathrm {i})^3}\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_3}}d\zeta \int _{\Gamma _{-d_2}}d\omega \nonumber \\&\qquad \times \frac{G_{\xi _2+v_2/\alpha ',\eta _2}(w)}{G_{\xi _1,\eta _1}(\zeta )G_{\Delta \xi +v_1,\Delta \eta }(\omega )(\alpha w-\alpha '\zeta )(\alpha \omega -\zeta )}\nonumber \\&\quad =\alpha \int _{\mathbb {R}_+^2}\text {Ai}\,_{\Delta \xi ,-\Delta \eta }(v_1,-\alpha \lambda _1)\text {Ai}\,_{\xi _1,-\eta _1}(\lambda _1,\alpha '\lambda _2) \nonumber \\&\qquad \times \text {Ai}\,_{\xi _2,\eta _2}\left( \alpha \lambda _2,\frac{v_2}{\alpha '}\right) \,d^2\lambda , \end{aligned}$$
(6.14)
$$\begin{aligned}&k_6(v_1,v_2)\nonumber \\&=\frac{e^{\delta v_1}}{(2\pi \mathrm {i})^4}\int _{\Gamma _{D_3}}dz_1\int _{\Gamma _{D_1}}dz_2\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_1}}d\zeta \nonumber \\&\qquad \times \frac{G_{\xi _1,\eta _1}(z_1)G_{\xi _1+v_1,\eta _1}(z_2)G_{\Delta \xi +v_2,\Delta \eta }(w)}{G_{\xi _1,\eta _1}(\zeta )(z_1-\zeta )(z_2-\zeta )(z_1-\alpha w)}\nonumber \\&=e^{\delta v_1}\int _{\mathbb {R}_+^3}\text {Ai}\,_{\xi _1,\eta _1}(v_1,\lambda _1)\text {Ai}\,_{\xi _1,-\eta _1}(\lambda _1,\lambda _2)\text {Ai}\,_{\xi _1,\eta _1}(\lambda _2,\lambda _3)\nonumber \\&\qquad \times \text {Ai}\,_{\Delta \xi ,\Delta \eta }(-\alpha \lambda _3,v_2)\,d^3\lambda , \end{aligned}$$
(6.15)

and

$$\begin{aligned}&k_7(v_1,v_2)\nonumber \\&\quad =\frac{e^{\delta v_1}}{(2\pi \mathrm {i})^3}\int _{\Gamma _{D_1}}dz\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_1}}d\zeta \frac{G_{\xi _1+v_1,\eta _1}(z)G_{\xi _2+v_2/\alpha ',\eta _2}(w)}{G_{\xi _1,\eta _1}(\zeta )(\alpha w-\alpha '\zeta )(z-\zeta )}\nonumber \\&\quad =e^{\delta v_1}\int _{\mathbb {R}_+^2}\text {Ai}\,_{\xi _1,\eta _1}(v_1,\lambda _1)\text {Ai}\,_{\xi _1,-\eta _1}(\lambda _1,\alpha '\lambda _2) \text {Ai}\,_{\xi _2,\eta _2}(\alpha \lambda _2,\frac{v_2}{\alpha '})\,d^2\lambda . \end{aligned}$$
(6.16)
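As an example of how the Airy formulas for the kernels \(k_i\) are obtained, consider \(k_3\). By (6.9), \(\text {Re}\,(\alpha \omega -\zeta )=d_3-\alpha d_2>0\) on the contours, so \(\frac{1}{\alpha \omega -\zeta }=\int _0^\infty e^{-\lambda (\alpha \omega -\zeta )}\,d\lambda \), and (6.4) gives

$$\begin{aligned} \frac{1}{2\pi \mathrm {i}}\int _{\Gamma _{-d_2}}\frac{e^{-\alpha \lambda \omega }}{G_{\Delta \xi +v_1,\Delta \eta }(\omega )}\,d\omega =\text {Ai}\,_{\Delta \xi ,-\Delta \eta }(v_1,-\alpha \lambda ),\quad \frac{1}{2\pi \mathrm {i}}\int _{\Gamma _{-d_3}}\frac{e^{\lambda \zeta }}{G_{\xi _1+v_2,\eta _1}(\zeta )}\,d\zeta =\text {Ai}\,_{\xi _1,-\eta _1}(\lambda ,v_2), \end{aligned}$$

which leads to the second expression in (6.12). The other kernels are treated in the same way, using (6.9) to justify the corresponding integral representations of the factors \(1/(z-\zeta )\), \(1/(z-\alpha w)\), and so on.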

The kernels \(M_i\) and \(k_i\) depend on the parameters \(\alpha , \xi _1, \Delta \xi , \eta _1, \Delta \eta \) and \(\delta \). When we need to indicate this dependence we write \(M_i(\alpha , \xi _1, \Delta \xi , \eta _1, \Delta \eta , \delta )\) and \(k_i(\alpha , \xi _1, \Delta \xi , \eta _1, \Delta \eta , \delta )\). We then think of \(\xi _2\) and \(\eta _2\) as functions of \(\alpha \), \(\xi _1\) and \(\Delta \xi \), and \(\alpha \), \(\eta _1\) and \(\Delta \eta \) respectively. Explicitly,

$$\begin{aligned} \xi _2= & {} \xi _2(\alpha , \xi _1, \Delta \xi )=\frac{1}{\alpha '}(\alpha \xi _1+\Delta \xi ), \end{aligned}$$
(6.17)
$$\begin{aligned} \eta _2= & {} \eta _2(\alpha , \eta _1, \Delta \eta )=\frac{1}{\alpha '^2}(\alpha ^2\eta _1+\Delta \eta ). \end{aligned}$$
(6.18)

Let

$$\begin{aligned} Y=L^2(\mathbb {R}_+)\oplus L^2(\mathbb {R}_+) \end{aligned}$$
(6.19)

On Y, we define a matrix operator kernel Q(u) by

$$\begin{aligned} Q(u)=\begin{pmatrix} Q_{11}(u) &{}\quad Q_{12}(u)\\ Q_{21}(u) &{}\quad Q_{22}(u) \end{pmatrix}, \end{aligned}$$
(6.20)

where

$$\begin{aligned} Q_{11}(u)&=(2-u-u^{-1})k_1+(u-1)(k_2+k_5)+(u-1)M_3-uM_2\nonumber \\ Q_{12}(u)&=(u+u^{-1}-2)k_3+(1-u)k_4\nonumber \\ Q_{21}(u)&=(1-u^{-1})k_6-k_7\nonumber \\ Q_{22}(u)&=(u^{-1}-1)M_1. \end{aligned}$$
(6.21)

We will write \(Q(u,\alpha , \xi _1, \Delta \xi , \eta _1, \Delta \eta , \delta )\) to indicate the dependence on all parameters.

Proposition 6.1

The two-time distribution (2.15) is given by

$$\begin{aligned} F_{\text {two-time}}(\xi _1,\eta _1;\xi _2,\eta _2;\alpha )=\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{1}{u-1}\det (I+Q(u))_Y\,du, \end{aligned}$$
(6.22)

where \(r>1\).

We will give the proof below. The formula (6.22) is suitable for investigating the limit \(\alpha \rightarrow 0\) (long time separation). For more on this limit see [7]. To study the limit \(\alpha \rightarrow \infty \) (short time separation), we can use (6.22) and the next Proposition, which relates the parameter values \(\alpha \) and \(1/\alpha \). Let

$$\begin{aligned} \beta =\frac{1}{\alpha },\quad \beta '=(1+\beta ^3)^{1/3}=\frac{\alpha '}{\alpha }. \end{aligned}$$
(6.23)

To indicate the dependence of the kernel K(u) on all parameters we write \(K(u,\alpha , \xi _1, \Delta \xi , \eta _1, \Delta \eta , \delta )\).

Proposition 6.2

We have the formula

$$\begin{aligned}&F_{\text {two-time}}(\xi _1,\eta _1;\xi _2,\eta _2;\alpha )\nonumber \\&\qquad =\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{1}{u-1}\det (I+K(u^{-1},\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\delta ))_X\,du, \end{aligned}$$
(6.24)

where \(r>1\).

The proof is given below. Recall that

$$\begin{aligned} \Delta \xi =\alpha '\xi _2-\alpha \xi _1,\quad \Delta \eta =\alpha '^2\eta _2-\alpha ^2\eta _1. \end{aligned}$$
(6.25)

Combining the two Propositions above we see that

$$\begin{aligned}&F_{\text {two-time}}(\xi _1,\eta _1;\xi _2,\eta _2;\alpha )\nonumber \\&\qquad =\frac{1}{2\pi \mathrm {i}}\int _{\gamma _r}\frac{1}{u-1}\det (I+Q(u^{-1},\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\delta ))_Y\,du. \end{aligned}$$
(6.26)

Note that \(\alpha \) is replaced by \(\beta =1/\alpha \), \(\xi _1\) and \(\Delta \xi \), as well as \(\eta _1\) and \(\Delta \eta \), are interchanged, and u is replaced by \(u^{-1}\). This formula is suitable for studying the limit \(\alpha \rightarrow \infty \) since this corresponds to \(\beta \rightarrow 0\), see [7]. Note that combining (6.17), (6.18) and (6.25), we get

$$\begin{aligned} \xi _2=\xi _2(\beta ,\Delta \xi ,\xi _1),\quad \eta _2=\eta _2(\beta ,\Delta \eta ,\eta _1). \end{aligned}$$
(6.27)

We now turn to the proofs of the Propositions.

Proof of Proposition 6.1

Define the kernels

$$\begin{aligned} p_1(x,v)&=-\frac{e^{-\delta x}}{(2\pi \mathrm {i})^3}\int _{\Gamma _{D_3}}dz\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_1}}d\zeta \frac{G_{\xi _1,\eta _1}(z)G_{\Delta \xi +v,\Delta \eta }(w)}{G_{\xi _1-x,\eta _1}(\zeta )(z-\zeta )(z-\alpha w)},\nonumber \\ p_2(x,v)&=-\frac{e^{-\delta x}1(x>0)}{2\pi \mathrm {i}}\int _{\Gamma _{D_2}}G_{\Delta \xi +\alpha x+v,\Delta \eta }(w)\,dw,\nonumber \\ p_3(x,v)&=\frac{e^{-\delta (x+v)}}{2\pi \mathrm {i}}\int _{\Gamma _{-d_1}}\frac{d\zeta }{G_{\xi _1+v-x,\eta _1}(\zeta )},\nonumber \\ p_4(x,v)&=-\frac{e^{-\delta x}}{(2\pi \mathrm {i})^2}\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_1}}d\zeta \frac{G_{\xi _2+v/\alpha ',\eta _2}(\alpha ' w)}{G_{\xi _1-x,\eta _1}(\zeta )(\alpha w-\zeta )},\nonumber \\ q_1(v,y)&=\frac{\alpha e^{\delta y}}{2\pi \mathrm {i}}\int _{\Gamma _{-d_2}}\frac{d\omega }{G_{\Delta \xi +\alpha y+v,\Delta \eta }(\omega )},\nonumber \\ q_2(v,y)&=\frac{e^{\delta (y+v)}1(y<0)}{2\pi \mathrm {i}}\int _{\Gamma _{D_1}}G_{\xi _1+v-y,\eta _1}(z)\,dz. \end{aligned}$$
(6.28)

The factors involving \(\delta v\) have been introduced in order to get well-defined operators. We also define

$$\begin{aligned} S_4(x,y)= & {} -\frac{\alpha e^{\delta (y-x)}}{(2\pi \mathrm {i})^3}\int _{\Gamma _{D_2}}dw\int _{\Gamma _{-d_1}}d\zeta \int _{\Gamma _{-d_2}}d\omega \nonumber \\&\times \frac{G_{\xi _2,\eta _2}(\alpha 'w)}{G_{\xi _1-x,\eta _1}(\zeta )G_{\Delta \xi +\alpha y,\Delta \eta }(\omega )(\alpha w-\zeta )(w-\omega )}. \end{aligned}$$
(6.29)

From (5.39) and (5.41), we see that

$$\begin{aligned} S_4=S_1-T_1, \end{aligned}$$
(6.30)

by moving the z-integration contour. We then pick up a contribution from the pole at \(z=\alpha w\), which gives \(S_4\). It follows from (5.41), (5.42), (5.43), (6.29) and (6.28) that

$$\begin{aligned} T_1(x,y)&=-\int _{\mathbb {R}_+}p_1(x,v)q_1(v,y)\,dv,\nonumber \\ 1(x>0)S_2(x,y)&=-\int _{\mathbb {R}_+}p_2(x,v)q_1(v,y)\,dv,\nonumber \\ S_3(x,y)1(y<0)&=\int _{\mathbb {R}_+}p_3(x,v)q_2(v,y)\,dv,\nonumber \\ S_4(x,y)&=\int _{\mathbb {R}_+}p_4(x,v)q_1(v,y)\,dv. \end{aligned}$$
(6.31)

From the definition of R(u), (6.30) and (6.31), we see that

$$\begin{aligned} R(u)(x,y)= & {} (u^{-1}-1)\int _{\mathbb {R}_+}p_1(x,v)q_1(v,y)+p_2(x,v)q_1(v,y)\nonumber \\&+\,p_3(x,v)q_2(v,y)\,dv+\int _{\mathbb {R}_+}p_4(x,v)q_1(v,y)\,dv. \end{aligned}$$
(6.32)

Let \(p_i^{\pm }\) be the operator from \(L^2(\mathbb {R}_+)\) to \(L^2(\mathbb {R}_\pm )\) with kernel \(p_i(x,v)\), and \(q_i^{\pm }\) be the operator from \(L^2(\mathbb {R}_\pm )\) to \(L^2(\mathbb {R}_+)\) with kernel \(q_i(v,y)\). From the definition of K(u) and (6.32) it follows that

$$\begin{aligned} K(u)=pq, \end{aligned}$$

where

$$\begin{aligned} p= \begin{pmatrix}(u^{-1}-1)p_1^-+p_4^- &{}\quad (u^{-1}-1)p_3^-\\ (1-u)p_1^++(1-u)p_2^++up_4^+ &{}\quad (1-u)p_3^+ \end{pmatrix} \end{aligned}$$

and

$$\begin{aligned} q= \begin{pmatrix} q_1^- &{}\quad q_1^+\\ q_2^- &{}\quad 0 \end{pmatrix}, \end{aligned}$$

are matrix operators \(p:Y\mapsto X\) and \(q:X\mapsto Y\). Note that \(p_2^-=q_2^+=0\). Let

$$\begin{aligned} Q(u)=qp \end{aligned}$$

which gives an operator from Y to itself. A straightforward computation using (6.28), (6.10)–(6.16) and (6.6)–(6.8) shows that

$$\begin{aligned} \begin{array}{llll} &{}q_1^-p_1^-=-k_1, &{}q_1^+p_1^+=k_1-k_2, &{}q_1^+p_2^+=-M_3, \\ &{}q_1^-p_3^-=k_3, &{}q_1^+p_3^+=-k_3+k_4, &{}q_1^-p_4^-=-k_5, \\ &{}q_1^+p_4^+=k_5-M_2, &{}q_2^-p_1^-=-k_6, &{}q_2^-p_4^-=-k_7,\\ &{}q_2^-p_3^-=M_1. &{} &{} \end{array} \end{aligned}$$
(6.33)
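For instance, the (1, 1) entry of \(qp\) is

$$\begin{aligned} Q_{11}(u)&=q_1^-\left( (u^{-1}-1)p_1^-+p_4^-\right) +q_1^+\left( (1-u)p_1^++(1-u)p_2^++up_4^+\right) \\&=(u^{-1}-1)(-k_1)-k_5+(1-u)(k_1-k_2)+(1-u)(-M_3)+u(k_5-M_2)\\&=(2-u-u^{-1})k_1+(u-1)(k_2+k_5)+(u-1)M_3-uM_2, \end{aligned}$$

and the remaining entries are obtained in the same way.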

From this we see that Q(u) is given by (6.21). In these computations we use (6.17) and (6.18) to get \(\xi _2,\eta _2\) from \(\xi _1,\Delta \xi , \eta _1,\Delta \eta \). The Proposition now follows from

$$\begin{aligned} \det (I+K(u))_X= \det (I+pq)_X=\det (I+qp)_Y=\det (I+Q(u))_Y. \end{aligned}$$

\(\square \)

Proof of Proposition 6.2

To indicate the dependence of S, T and R(u) on all parameters we write \(S(\alpha ,\xi _1,\Delta \xi ,\eta _1,\Delta \eta ,\delta )\) etc. It is straightforward to check from the definitions that

$$\begin{aligned} \frac{1}{\alpha }S(\alpha ,\xi _1,\Delta \xi , \eta _1,\Delta \eta ,\delta )\left( \frac{x}{\alpha },\frac{y}{\alpha }\right) =T(\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\beta \delta )(-y,-x), \end{aligned}$$

and

$$\begin{aligned} \frac{1}{\alpha }T(\alpha ,\xi _1,\Delta \xi , \eta _1,\Delta \eta ,\delta )\left( \frac{x}{\alpha },\frac{y}{\alpha }\right) =S(\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\beta \delta )(-y,-x). \end{aligned}$$

It follows that

$$\begin{aligned} \frac{1}{\alpha }R(u,\alpha ,\xi _1,\Delta \xi , \eta _1,\Delta \eta ,\delta )\left( \frac{x}{\alpha },\frac{y}{\alpha }\right) =u^{-1}R(u^{-1},\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\beta \delta )(-y,-x). \end{aligned}$$

If we write

$$\begin{aligned} \tilde{R}(u)(x,y)=R(u,\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\beta \delta )(x,y), \end{aligned}$$

we see that

$$\begin{aligned} \frac{1}{\alpha }R(u)\left( -\frac{y}{\alpha },-\frac{x}{\alpha }\right) =u^{-1}\tilde{R}(u^{-1})(x,y). \end{aligned}$$

Let \(K^*_{\alpha }(u)(x,y)=\alpha ^{-1}K(u)(\alpha ^{-1}y,\alpha ^{-1}x)\), and define \(V:X\mapsto X\) by

$$\begin{aligned} V\begin{pmatrix} f_1(x) \\ f_2(x) \end{pmatrix} =\begin{pmatrix} f_2(-x) \\ f_1(-x) \end{pmatrix}. \end{aligned}$$

Note that \(V^2=I\). Since taking the adjoint and rescaling the kernel does not change the Fredholm determinant, we see that

$$\begin{aligned} \det (I+K(u))_{X}=\det (I+K^*_{\alpha }(u))_X=\det (I+VK^*_{\alpha }(u)V)_X, \end{aligned}$$

Using these definitions a computation shows that

$$\begin{aligned} VK^*_{\alpha }(u)V=\begin{pmatrix} \tilde{R}(u^{-1})(x,y) &{}\quad \tilde{R}(u^{-1})(x,y) \\ \tilde{R}(u^{-1})(x,y) &{}\quad \tilde{R}(u^{-1})(x,y) \end{pmatrix} \begin{pmatrix} I &{}\quad 0\\ 0 &{}\quad u^{-1}I \end{pmatrix} \end{aligned}$$

This operator has the same determinant as

$$\begin{aligned}&\begin{pmatrix} I &{}\quad 0\\ 0 &{}\quad u^{-1}I \end{pmatrix} \begin{pmatrix} \tilde{R}(u^{-1})(x,y) &{}\quad \tilde{R}(u^{-1})(x,y) \\ \tilde{R}(u^{-1})(x,y) &{}\quad \tilde{R}(u^{-1})(x,y) \end{pmatrix}= \begin{pmatrix} \tilde{R}(u^{-1})(x,y) &{}\quad \tilde{R}(u^{-1})(x,y)\\ u^{-1}\tilde{R}(u^{-1})(x,y) &{}\quad u^{-1}\tilde{R}(u^{-1})(x,y) \end{pmatrix}\\&\quad =K(u^{-1},\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\beta \delta )(x,y). \end{aligned}$$

Thus,

$$\begin{aligned} \det (I+K(u,\alpha ,\xi _1,\Delta \xi ,\eta _1,\Delta \eta ,\delta ))_X&=\det (I+K(u^{-1},\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\beta \delta ))_X\\ {}&= \det (I+K(u^{-1},\beta ,\Delta \xi ,\xi _1,\Delta \eta ,\eta _1,\delta ))_X \end{aligned}$$

since the Fredholm determinant is independent of the value of \(\delta \) as long as the condition (2.5) is satisfied. Note that this condition is \(\delta >\max (\eta _1,\alpha \Delta \eta )\) so \(\beta \delta >\max (\Delta \eta ,\beta \eta _1)\) and we can replace \(\beta \delta \) with \(\delta \) as long as \(\delta >\max (\Delta \eta ,\beta \eta _1)\). \(\square \)

7 Relation to the previous two-time formula

The approach in the present paper can be modified to study the probability

$$\begin{aligned} p(a;A)=\mathbb {P}[G(m,n)=a,\,G(M,N)<A], \end{aligned}$$
(7.1)

under the same scaling (2.2).

Let

$$\begin{aligned} X'=L^2(\mathbb {R}_-,dx)\oplus L^2(\mathbb {R}_+,dx)\oplus L^2(\{0\},\delta _0) \end{aligned}$$

and modify the definition of S and T into

$$\begin{aligned} S(x,y)= & {} S_1(x,y)+1(x\ge 0)S_2(x,y)-S_3(x,y)1(y<0), \end{aligned}$$
(7.2)
$$\begin{aligned} T(x,y)= & {} -T_1(x,y)-1(x>0)S_2(x,y)+S_3(x,y)1(y\le 0). \end{aligned}$$
(7.3)

Define the matrix kernel

$$\begin{aligned} K_{uv}(x,y)=\begin{pmatrix} R(u)(x,y) &{}\quad R(u)(x,y) &{}\quad R(u)(x,y) \\ uR(u)(x,y) &{}\quad uR(u)(x,y) &{}\quad uR(u)(x,y)\\ vR(u)(x,y) &{}\quad vR(u)(x,y) &{}\quad vR(u)(x,y) \end{pmatrix}, \end{aligned}$$
(7.4)

where R(u) is defined as in (2.12) but with S and T given by (7.2) and (7.3) instead. Then, under (2.2),

$$\begin{aligned} \lim _{T\rightarrow \infty } c_3(t_1T)^{1/3}p(a;A)=\frac{1}{(2\pi \mathrm {i})^2}\int _{\gamma _r}du\int _{\gamma _r}\frac{dv}{v^2}\det (I+K_{uv})_{X'}, \end{aligned}$$
(7.5)

for any \(r>0\). From this formula, it is possible to derive the formula for the two-time distribution given in [17]. It should be possible to get the formula in [17] also by taking the partial derivative with respect to \(\xi _1\) in (2.15). We have not been able to carry out that computation.