The two-time distribution in geometric last-passage percolation

We study the two-time distribution in directed last passage percolation with geometric weights in the first quadrant. We compute the scaling limit and show that it is given by a contour integral of a Fredholm determinant.


Introduction
In this paper we consider the so-called two-time distribution in directed last-passage percolation with geometric weights. This last-passage percolation model has several interpretations. It can be related to the Totally Asymmetric Simple Exclusion Process (TASEP) and to local random growth models. It is a basic example of a solvable model in the KPZ universality class. It has been less clear to what extent the two-time problem is also solvable, but recently there have been some developments in this direction [1,5,9,13,17,18]. The approach in this paper differs in many ways from that in our previous work [17]. It is closer to standard computations for determinantal processes, more straightforward and simpler.
To define the model, let (w(i, j))_{i,j≥1} be independent geometric random variables with parameter q. Consider the last-passage times

G(m, n) = max_{π : (1,1)↗(m,n)} Σ_{(i,j)∈π} w(i, j),   (1.1)

where the maximum is over all up/right paths from (1, 1) to (m, n), see [14]. We are interested in the correlation between G(m_1, n_1) and G(m_2, n_2) when (m_1, n_1) and (m_2, n_2) are ordered in the time-like direction, i.e. m_1 < m_2 and n_1 < n_2. To see why this is called a time-like direction, and to give one reason why we are interested in the two-time problem, let us reinterpret the model as a discrete polynuclear growth model. It is clear from (1.1) that

G(m, n) = max(G(m − 1, n), G(m, n − 1)) + w(m, n).   (1.2)

Let G(m, n) = 0 if (m, n) ∉ Z²₊, define the height function h(x, t) by (1.3) for x + t odd, and extend it to all x ∈ R by linear interpolation. Then (1.2) leads to a growth rule for h(x, t), and this is the discrete time and space polynuclear growth model. We think of x ↦ h(x, t) as the height above x at time t, and we obtain a random one-dimensional interface. Let the constants c_i be given by (2.1). It is known, see [15], that the rescaled process H_T(η, t) in (1.4), as a process in η ∈ R for a fixed t > 0, converges as T → ∞ to A_2(η) − η², where A_2(η) is the Airy-2 process [21]. In particular, for any fixed η and t, the distribution of H_T(η, t) converges to a shifted Tracy–Widom distribution F_2, which is given by a Fredholm determinant involving the Airy kernel. The two-time problem is concerned with the correlation between heights at different times. What is the limiting joint distribution of H_T(η_1, t_1) and H_T(η_2, t_2) for t_1 < t_2, as T → ∞? From (1.3), we see that this is related to understanding the correlation between last-passage times in the time-like direction. That a time separation of order T is the correct order to get non-trivial correlations is quite clear if we think about how much random environment e.g. G(n, n) and G(N, N), n < N, share.
It can also be seen from the slow de-correlation phenomenon, see [4,12]. Looking at (1.4) we see that we have the fluctuation exponent 1/3 (fluctuations have order T 1/3 ), the spatial correlation exponent 2/3, and we also have the time correlation exponent 1 = 3/3 as explained. This is the KPZ 1:2:3 scaling.
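The model defined by (1.1)–(1.2) is easy to simulate by dynamic programming. The following minimal sketch is illustrative only (function names and parameter choices are ours, not the paper's); it samples weights with P(w(i, j) = k) = (1 − q)q^k, k ≥ 0, and fills in the last-passage table using the recursion (1.2).

```python
import random

def geometric_q(q, rng):
    # Sample w with P(w = k) = (1 - q) * q**k for k = 0, 1, 2, ...
    w = 0
    while rng.random() < q:
        w += 1
    return w

def last_passage_table(M, N, q, seed=0):
    # G[m][n] = max over up/right paths from (1,1) to (m,n) of the summed
    # weights, computed via G(m,n) = max(G(m-1,n), G(m,n-1)) + w(m,n).
    rng = random.Random(seed)
    G = [[0] * (N + 1) for _ in range(M + 1)]  # G = 0 off the quadrant
    for m in range(1, M + 1):
        for n in range(1, N + 1):
            G[m][n] = max(G[m - 1][n], G[m][n - 1]) + geometric_q(q, rng)
    return G
```

A table like this can be used to look empirically at the joint behavior of G(m, n) and G(M, N) along the diagonal, i.e. at the time-like correlations studied in this paper.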
For further references and more on random growth models in the KPZ universality class and related interacting particle systems, we refer to the survey papers [2,3,22]. The main result of the present paper is a limit theorem for the following two-time probability. Fix m, M, n, N with 1 ≤ m < M and 1 ≤ n < N. For a, A ∈ Z, we will consider the probability

P(a, A) = P[G(m, n) < a, G(M, N) < A].   (1.5)

Recall from (2.1) the constants c_0 = q^{−1/3}(1 + √q)^{1/3} and c_1 = q^{−1/6}(1 + √q)^{2/3}. We will investigate the asymptotics of the probability distribution defined by (1.5).
We can now formulate our main theorem. The theorem will be proved in Sect. 3. The fact that K(u) is a trace-class operator is Lemma 4.1 below.
The formula for the two-time distribution can be written in different ways. In Sect. 6, we will give formulas suitable for studying the limits α → 0, α → ∞ and expansions in α and 1/α respectively. We will not discuss these expansions here, but refer to [7] for more on this and comparison with the results in [18].
For comments on the relation between this formula and the formula derived in [17], see the discussion in Sect. 7.

Remark 2.2
It would be interesting to prove the same type of scaling limit in the multi-time case, i.e. to consider the corresponding joint probability, where m_1 < m_2 < ⋯ < m_L and n_1 < n_2 < ⋯ < n_L. It is possible to write a formula analogous to (3.17) below but with L − 1 contour integrals. This can be proved in a way very similar to the proof of (3.17). We hope to say more on this problem in future work.

Proof of the main theorem
In this section we will prove the main theorem. Along the way we will use several lemmas that will be proved in Sects. 4 and 5. For m ≥ 0 and a fixed N ≥ 1, write G(m) for the corresponding vector of last-passage times, and let G(0) = 0. By Δ we denote the finite difference operator, Δf(x) = f(x + 1) − f(x), defined for all functions f for which the series appearing below converge. The negative binomial weight w_m is defined in (3.2). Note that G(m) ∈ W_N. The following proposition is the starting point for the proof. It is proved in [16], following the paper by Warren [25]; see also [8] for a more systematic treatment.
Write (3.5), and we can then write (3.6). Here we would like to perform the sum over y, which is straightforward, and then the sum over x, which is tricky since we cannot use the Cauchy–Binet identity directly. An important step is part (a) of the following lemma, which is proved in Sect. 4. The proof of (3.7) uses successive summations by parts and generalizes the proof of Lemma 3.2 in [16].
Lemma 3.2 Let f, g : Z → R be given functions and assume that all the series appearing below converge. If we use (3.7) and (3.8) in (3.6), we find (3.9). Before we show how we can use the Cauchy–Binet identity to do the summation in (3.9), we will modify it somewhat. Below, this modification will be a kind of orthogonalization procedure, and it will be important for obtaining a Fredholm determinant. Define the quantities in (3.10), (3.11) and (3.12), where w_m is the negative binomial weight (3.2). If we shift (3.9), and use (3.10), (3.11) and (3.12), we get (3.13). This formula is the basis for the next lemma, the proof of which rests on the Cauchy–Binet identity. However, because of the restriction x_n < 0 in the summation in (3.13), we cannot apply the identity directly. In order to state the result we need some further notation. Define (3.14) and (3.15). Let u be a complex parameter and set (3.16).

Lemma 3.3 We have the formula,
The lemma is proved in Sect. 4. The contour integral comes from the need to capture the restriction x_n < 0 and still use the Cauchy–Binet identity.
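For reference, the Cauchy–Binet identity states that for an n × K matrix A and a K × n matrix B, det(AB) = Σ_S det(A_{·,S}) det(B_{S,·}), summed over all n-element subsets S of {1, …, K}. A small self-contained numerical check (the matrices are arbitrary toy data, not from the paper):

```python
from itertools import combinations

def det2(M):
    # Determinant of a 2x2 matrix.
    (a, b), (c, d) = M
    return a * d - b * c

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# A is 2x4, B is 4x2; Cauchy-Binet sums over 2-element subsets of {0,1,2,3}.
A = [[1, 2, 0, 3],
     [4, 1, 2, 1]]
B = [[2, 1],
     [0, 3],
     [1, 1],
     [2, 0]]

lhs = det2(matmul(A, B))
rhs = sum(det2([[A[0][s], A[0][t]], [A[1][s], A[1][t]]]) *
          det2([[B[s][0], B[s][1]], [B[t][0], B[t][1]]])
          for s, t in combinations(range(4), 2))
assert lhs == rhs  # both sides equal det(AB)
```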
We now come to the choice of the matrices A and B. The aim is to get a good formula for f_{0,1} and f_{1,2} and to make it possible to write the determinant in (3.17) as a Fredholm determinant suitable for asymptotic analysis. Define the functions β_k. Using a generating function for the negative binomial weight (3.2), it is straightforward to show that (3.21) holds for all m ≥ 1, k, x ∈ Z, provided |z| > τ. We now define the matrices A and B. Let c(i) be a conjugation factor, defined below in (3.25), which we need to make the asymptotic analysis work; the entries a_{ik} and b_{kj} are then defined in terms of β_k and c(i). From the properties of β_k, we see that (a_{ik}) is lower- and (b_{kj}) upper-triangular, and that the condition (3.10) is satisfied.
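The generating function computation alluded to here can be illustrated with the standard negative binomial weight w_m(x) = C(m + x − 1, x) q^x (1 − q)^m (our normalization; the paper's exact weight (3.2) is not reproduced in this excerpt), for which Σ_{x≥0} w_m(x) z^x = ((1 − q)/(1 − qz))^m whenever |qz| < 1. A quick numerical check:

```python
from math import comb

def nb_weight(m, x, q):
    # Standard negative binomial weight (our normalization, not necessarily (3.2)).
    return comb(m + x - 1, x) * q ** x * (1 - q) ** m

def nb_gf(m, z, q, terms=200):
    # Truncated generating function sum_x w_m(x) z^x; converges for |q*z| < 1.
    return sum(nb_weight(m, x, q) * z ** x for x in range(terms))

m, q, z = 3, 0.4, 1.5          # q*z = 0.6, inside the region of convergence
closed_form = ((1 - q) / (1 - q * z)) ** m
assert abs(nb_gf(m, z, q) - closed_form) < 1e-9
```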
The proof of the lemma, which will be given in Sect. 4, is a straightforward computation using the definitions and (3.21).
We can now express L_p, p = 1, 2, in terms of these objects.

The proof is based on (3.14), (3.15), and Lemma 3.4, together with suitable contour deformations in order to get the contours into positions that can be used in the asymptotic analysis; see Sect. 4. Combining (3.16) with Lemma 3.5, we obtain the formula (3.37) for P(a, A). Next, we want to rewrite the determinant in (3.37) in block determinant form, corresponding to i ≤ n and i > n, and similarly for j. For r, s ∈ {1, 2} and x, y ∈ R, we define (3.38), where [·] denotes the integer part. The right side of (3.38) does not depend on r or s explicitly, but we have x < 0 for r = 1 and x ≥ 0 for r = 2, and correspondingly for y depending on s. On {1, 2} × R we define a measure ρ by (3.39), for every integrable function f : {1, 2} × R → R. The kernel F_u(r, x; s, y) defines an integral operator F_u on L²({1, 2} × R, ρ). Note that the space L²({1, 2} × R, ρ) is isomorphic to the space X defined in (2.13), and we can also think of F_u as a matrix operator.

Lemma 3.6 We have the identity (3.40). This is straightforward, using Fredholm expansions, and the lemma will be proved in Sect. 4.
We can now insert the formula (3.40) into (3.37). This leads to a formula that can be used for taking a limit, but before considering the limit, we have to introduce the appropriate scalings. For s = 1, 2, we define the rescaled quantities in (3.41), where c_0 is given by (2.1). The next lemma follows from (3.37), Lemma 3.6, and (3.41); see Sect. 4.

Lemma 3.7
We have the following formula. Theorem 2.1 now follows by combining this lemma with the next lemma, which will be proved in Sect. 5.

Lemma 3.8 Consider the scaling (2.2) and let K(u) be the matrix kernel defined by (2.14). Then the rescaled kernel converges to K(u) as T → ∞, uniformly for u in a compact set.

Proof of Lemmas
In this section we will prove the lemmas that were used in Sect. 3. Some results related to the asymptotic analysis will be proved in Sect. 5.

Proof of Lemma 3.2 Write
Then (4.1) holds. To prove (4.1), we use the summation by parts identity (4.2). Consider the x_ℓ-summation in the left side of (4.1) with all the other variables fixed. Let x_{ℓ+1} = ∞ if ℓ = N, and let Δ_{x_ℓ} denote the finite difference with respect to the variable x_ℓ. Using (4.2) in the second equality, we get (4.3). If ℓ = N, the first boundary term in (4.3) vanishes by the assumption that all series converge (so that all expressions are well-defined), since one column in the second determinant tends to zero. If ℓ < N, then the first boundary term in (4.3) vanishes because c_ℓ = c_{ℓ+1}, and x_ℓ → x_{ℓ+1} means that columns ℓ and ℓ + 1 will be identical in the second determinant. Similarly, we see that columns ℓ and ℓ − 1 in the first determinant in the second boundary term in (4.3) will be identical.
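The summation by parts identity used repeatedly above is, up to boundary conventions that may differ from (4.2), the discrete Abel formula: with Δf(x) = f(x + 1) − f(x), Σ_{x=a}^{b−1} f(x)Δg(x) = f(b)g(b) − f(a)g(a) − Σ_{x=a}^{b−1} Δf(x)g(x + 1). It is easy to verify exactly for integer-valued test functions (the specific f and g below are our choices):

```python
def delta(f):
    # Forward finite difference operator: (delta f)(x) = f(x+1) - f(x).
    return lambda x: f(x + 1) - f(x)

def check_summation_by_parts(f, g, a, b):
    # Discrete Abel summation:
    #   sum_{x=a}^{b-1} f(x) (delta g)(x)
    #     = f(b)g(b) - f(a)g(a) - sum_{x=a}^{b-1} (delta f)(x) g(x+1)
    lhs = sum(f(x) * delta(g)(x) for x in range(a, b))
    rhs = f(b) * g(b) - f(a) * g(a) - sum(delta(f)(x) * g(x + 1) for x in range(a, b))
    return lhs == rhs

assert check_summation_by_parts(lambda x: x * x, lambda x: 3 ** x, 0, 10)
```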
Part (b) of the lemma follows from the identity (4.5). To prove (4.5), first sum over x_N from x_{N−1} to a in the last row. The last term in the resulting row does not contribute, since it is the same as in row N − 1. We can now sum over x_{N−1} from x_{N−2} to a in row N − 1, etc. In this way we obtain (4.5).
and we obtain the desired formula.
Proof of Lemma 3.6 We start with the right side of (3.40). Proof of Lemma 3.7 By the formula (3.37) for P(a, A) and Lemma 3.6, we see that (4.14) holds. We have the Fredholm expansion (4.15). The change of variables x_p → c_0(t_1T)^{1/3}x_p, taking the factor c_0(t_1T)^{1/3} into row p, shows that the right side of (4.15) equals the rescaled expansion.
Combining this with (4.14) we have proved the lemma.
We want to prove that the operator K(u) in the definition of the two-time distribution is a trace-class operator.

Lemma 4.1
The operator K(u) defined by (2.14) is a trace-class operator on the space X given by (2.13).
By splitting K(u) into several parts and factoring out multiplicative constants, we see that it is enough to prove that the 2 × 2 block operator

( A  A )
( A  A )

is a trace-class operator on X for A = S_1, T_1, S*_2, S*_3. We can think of A as an operator on L²({1, 2} × R, ρ) instead, where ρ is given by (3.39).
Define the kernels a_1, a_2, b_1, b_2, c_1, c_2. Using the definitions, we see that S_1(x, y) = ∫ a_1(x, s)a_2(s, y) ds, and similarly for T_1(x, y). To get kernels on L²({1, 2} × R, ρ), we define ã_1 and ã_2 for r = 1, 2. Then, by (4.17) and (3.39), S_1 factors as a product of these operators. Similarly, we see that T_1 = ã_1ã_2, S*_2 = b_1b_2 and S*_3 = c_1c_2. Using (2.5) and asymptotic properties of the Airy function, we see that a_1, a_2, b_1, b_2, c_1, c_2 are square integrable over R², and also over R if we fix one of the variables to be zero. It follows from this that a_1, a_2, ã_1, …, c_2 are Hilbert–Schmidt operators on L²({1, 2} × R, ρ). Since the composition of two Hilbert–Schmidt operators is a trace-class operator, we have that S_1, T_1, S*_2 and S*_3 are trace-class operators on L²({1, 2} × R, ρ), and hence K(u) is a trace-class operator as well.
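The operator-theoretic fact used at the end is that the product of two Hilbert–Schmidt operators is trace class, with ‖ab‖_1 ≤ ‖a‖_2‖b‖_2. In finite dimensions the Hilbert–Schmidt norm is the Frobenius norm, and the weaker consequence |tr(ab)| ≤ ‖a‖_2‖b‖_2 is just Cauchy–Schwarz; a toy check (the matrices are arbitrary, not the paper's operators):

```python
from math import sqrt

def frobenius(M):
    # Hilbert-Schmidt (Frobenius) norm of a matrix.
    return sqrt(sum(v * v for row in M for v in row))

def trace_of_product(A, B):
    # tr(AB) = sum_{i,j} A[i][j] * B[j][i], without forming AB.
    return sum(A[i][j] * B[j][i] for i in range(len(A)) for j in range(len(B)))

A = [[1.0, 2.0], [0.5, -1.0]]
B = [[0.0, 3.0], [1.0, 2.0]]

# Cauchy-Schwarz instance of ||AB||_1 <= ||A||_2 ||B||_2:
assert abs(trace_of_product(A, B)) <= frobenius(A) * frobenius(B)
```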

Asymptotic analysis
In this section we will prove Lemma 3.8. The proof has several steps and we will split it into a sequence of lemmas. The proofs of these lemmas will appear later in the section.
For k = 1, 2, 3, we define the rescaled kernels. Their limits as T → ∞ are identified in Lemma 5.1, which is proved below. In order to prove the convergence of the Fredholm determinant we also need some estimates.

Lemma 5.2
Assume that |ξ|, |η| ≤ L for some fixed L. If we choose δ in (3.25) sufficiently large, depending on q and L, there are positive constants C_0, C_1, C_2, depending only on q and L, such that the stated bounds hold for all x, y satisfying (5.4).
The proof is given below. We now have the estimates that we need to prove Lemma 3.8 for r, s ∈ {1, 2} and all x, y ∈ R. Note that, by definition, F̃_{u,T} is zero if x, y do not satisfy (5.4). We can expand the Fredholm determinant,
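As a reminder of what such an expansion computes: for a finite-rank kernel the Fredholm series terminates, and for a rank-one kernel K(x, y) = f(x)g(y) on [0, 1] one has det(I + K) = 1 + ∫₀¹ f(x)g(x) dx. The following toy Nyström discretization (our example, unrelated to the kernels in the paper) reproduces this for f(x) = g(x) = e^{−x}, where the exact value is 1 + (1 − e^{−2})/2 ≈ 1.43233:

```python
import math

def det_gauss(M):
    # Determinant via Gaussian elimination with partial pivoting.
    A = [row[:] for row in M]
    n = len(A)
    det = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if p != i:
            A[i], A[p] = A[p], A[i]
            det = -det
        det *= A[i][i]
        for r in range(i + 1, n):
            fac = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= fac * A[i][c]
    return det

def fredholm_det_rank_one(n=100):
    # det(I + K) for K(x, y) = exp(-x)exp(-y) on [0, 1], discretized
    # with the trapezoid rule: det(I + K(x_i, x_j) w_j).
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    ws = [h * (0.5 if i in (0, n) else 1.0) for i in range(n + 1)]
    M = [[(1.0 if i == j else 0.0) + ws[j] * math.exp(-xs[i]) * math.exp(-xs[j])
          for j in range(n + 1)] for i in range(n + 1)]
    return det_gauss(M)
```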

Proof of Lemma 3.8 Recall from (2.12) and (2.14) that
in its Fredholm expansion. It follows from (5.6), (5.7) and Hadamard's inequality that we can take the limit T → ∞ in (4.15) and get the limiting Fredholm determinant. This completes the proof. Consider now the function f(w); here the constants c_i are given by (2.1), and we write (5.10). If η = ξ = v = 0, then f(w) has a double critical point at w = w_c. The local asymptotics around the critical point are given by the next lemma.

Lemma 5.3
Fix L > 0 and assume that |ξ|, |η|, |v| ≤ L. Furthermore, assume that we have the scaling (5.9). Then the expansions stated in the lemma hold uniformly for w in a compact set in C. To prove the estimates that we need, we use some explicit contours in (3.27) to (3.30). Let d > 0 and define the parametrizations w_1(σ) and w_2(σ) for |σ| ≤ πK^{1/3}, where K is as in (5.9). Thus, w_1 gives a circle around the origin of radius w_c(1 − dK^{−1/3}), and w_2 gives a circle whose radius is perturbed from w_c in a similar way.
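At a double critical point the cubic term in the expansion of f produces Airy-type asymptotics. The Airy function itself has the contour representation Ai(z) = (1/2πi) ∫_C e^{w³/3 − zw} dw, with C running from ∞e^{−iπ/3} to ∞e^{iπ/3}. As a sanity check (the discretization choices are ours), deforming C to the vertical line Re w = 1 and integrating numerically at z = 0 recovers Ai(0) = 3^{−2/3}/Γ(2/3) ≈ 0.35503:

```python
import cmath
import math

def airy_at_zero(h=0.01, S=8.0):
    # Ai(0) = (1/2*pi*i) * int_C exp(w^3/3) dw, with C deformed to the
    # vertical line w = 1 + i*s; there |exp(w^3/3)| = e^{1/3} e^{-s^2},
    # so the truncation to |s| <= S and the trapezoid rule both converge fast.
    n = int(round(2 * S / h))
    total = 0.0
    for k in range(n + 1):
        s = -S + k * h
        w = 1 + 1j * s
        val = cmath.exp(w ** 3 / 3).real   # dw = i ds cancels the 1/i
        total += (0.5 if k in (0, n) else 1.0) * val
    return h * total / (2 * math.pi)
```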

Lemma 5.4
Fix L > 0. Assume that we have the scaling (5.9) and that |ξ|, |η|, |v| ≤ L. Then there are positive constants C_j, 1 ≤ j ≤ 4, depending only on q and L, such that the stated estimates hold.
We will also need estimates that work for large v.

Lemma 5.5
Assume that |ξ|, |η| ≤ L for some fixed L > 0, assume that we have the scaling (5.9), and that v is such that k ≥ 0. Then the stated estimates hold. These two lemmas will be proved below. We can use Lemma 5.3 and Lemma 5.4 to prove Lemma 5.1.
Now, a computation shows that, for some constant C, a corresponding bound holds for all σ_i satisfying (5.28). Thus, for x, y in a compact set, we have a bound on the integrand in (5.25), where the last inequality follows from Lemma 5.4. For σ_i in a bounded set, we can identify the pointwise limit. It follows from (2.2) that (5.36) holds; we also have the condition (5.37), and we define the quantity in (5.39). If d, D > 0, we have the formulas (5.40), with absolutely convergent integrals. Using (5.37), it follows from these formulas, (5.39) and (5.40), that S_1 is also given by (2.6). The proof of (5.3) is identical, with D_1 replaced by D_3 satisfying (5.37). The integral formula for T_1 reads (5.41). The other cases are treated similarly. For S_2 and S_3 we get the formulas (5.42) and its analogue.

Proof of Lemma 5.2
Consider first Ã_{1,T}. By Lemma 5.4, we can choose d_1 and d_2, with d_1 < αd_2, so that the corresponding bounds hold, where C_3, C_4 are positive constants independent of σ_1 and σ_2. By Lemma 5.5, we can choose d = d_3(x) ≥ C_0 and d = d_4(y) ≥ C_0 so that, with ω = w_1(σ_4, d_4(y)), there is a constant C_5 giving the required bound. Introducing these parametrizations into (5.25) and using the estimates above, we see that for large enough |x| we can choose δ so large that the bound holds with some positive constants C_1, C_2. This proves the estimate for Ã_{1,T}. The proof for B̃_{1,T} is completely analogous. Consider now Ã_{3,T}.
Using Lemma 5.5, we see that, just as for Ã_{1,T}, we can choose d_1(y) and d_2(x) so that the corresponding bounds hold, and we get the desired estimate by choosing δ large enough. The proof for Ã_{2,T} is analogous.
The statements in Lemma 5.4 and in Lemma 5.5 are consequences of two other lemmas that we will now state and prove. The first lemma is concerned with the decay along the paths given by w_1(σ) and w_2(σ).
The next lemma is concerned with the decay for large |v|.

Lemma 5.7
Assume that we have the scaling (5.9) and that v is such that k ≥ 0, which will always be the case. Also, assume that |ξ|, |η| ≤ L for some L > 0. There are positive constants μ_1, μ_2, μ_3 that only depend on q, L, and a choice d = d(v) satisfying (5.48), so that (5.62) holds; there is also a choice d = d(v) satisfying (5.48) so that (5.63) holds. Proof We want to estimate g_1(0) − log f(w_c) from below, and then make a good choice of d. To estimate this expression, we will use the inequalities (5.65) and (5.66), valid for x ≥ 0. It follows from (5.64) and these inequalities that a lower bound holds. Substitute the expressions in (5.9); after some manipulation this gives (5.67), where C_1, C_2, C_3 only depend on q, L. If v ≤ 0, then it follows from (5.67) that (5.68) holds, and then, by (5.68), we get (5.69). Choose D_1 large, depending only on q, L; we can choose D_1 so large that C_1/D_1 is as small as we want, and hence we can choose d so small that the required inequality is satisfied. It then follows from (5.69) that the desired bound holds for √(−v) ≥ D_1. By adjusting μ_3, we see that (5.62) holds if v ≤ 0. If v ≥ 0, we choose a d satisfying (5.48) depending on q, L, but not on v or K. It follows from (5.68) that there are constants μ_1 and μ_3 so that the bound holds. Hence (5.62) holds also when v ≥ 0.
To prove (5.63), we consider instead the analogous quantity, which we estimate by (5.65) and (5.66). Into this estimate we insert the expressions in (5.9), and after some computation we get a similar bound. We can now proceed in analogy with the previous case to show (5.63).

More formulas for the two-time distribution
In this section we give an alternative formula for the two-time distribution; see Proposition 6.1 below. Recall the notation (5.38). Looking at (5.40), we see that it is natural to write (6.2), since we then get the corresponding formulas for any d, D > 0. We can think of (6.2) as the kernel of an integral operator on L²(R₊).

Proposition 6.1 The two-time distribution (2.15) is given by
We will give the proof below. The formula (6.22) is suitable for investigating the limit α → 0 (long time separation); for more on this limit see [7]. To study the limit α → ∞ (short time separation), we can use (6.22) together with the next proposition, which relates the kernel at α to the kernel at 1/α. To indicate the dependence of the kernel K(u) on all its parameters, we write K(u, α, ξ_1, ξ, η_1, η, δ).
Proof of Proposition 6.2 To indicate the dependence of S, T and R(u) on all parameters, we write S(α, ξ_1, ξ, η_1, η, δ), etc. Let K*_α(u)(x, y) = α^{−1}K(α^{−1}y, α^{−1}x), and define V : X → X as below; note that V² = I. Since taking the adjoint and rescaling the kernel does not change the Fredholm determinant, we see that

det(I + K(u))_X = det(I + K*_α(u))_X = det(I + VK*_α(u)V)_X.

Using these definitions, a computation shows that VK*_α(u)V equals a 2 × 2 block kernel with entries built from R(u^{−1})(x, y), multiplied by the block matrix

( I        0      )
( 0    u^{−1} I ),

for any r > 0. From this formula it is possible to derive the formula for the two-time distribution given in [17]. It should be possible to obtain the formula in [17] also by taking the partial derivative with respect to ξ_1 in (2.15). We have not been able to carry out that computation.
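The determinant manipulation in this proof uses only that similarity transforms (here by V with V² = I) and rescalings leave the Fredholm determinant unchanged; in finite dimensions this is the familiar det(I + K) = det(I + VKV⁻¹). A toy check with V a coordinate flip (the matrices are arbitrary data, not the paper's kernels):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

K = [[2.0, -1.0], [3.0, 0.5]]
V = [[0.0, 1.0], [1.0, 0.0]]     # coordinate flip; V @ V = I, so V = V^{-1}
I2 = [[1.0, 0.0], [0.0, 1.0]]

VKV = matmul(matmul(V, K), V)
lhs = det2([[I2[i][j] + K[i][j] for j in range(2)] for i in range(2)])
rhs = det2([[I2[i][j] + VKV[i][j] for j in range(2)] for i in range(2)])
assert abs(lhs - rhs) < 1e-12    # det(I + K) = det(I + V K V)
```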