Point-to-line last passage percolation and the invariant measure of a system of reflecting Brownian motions

This paper proves an equality in law between the invariant measure of a reflected system of Brownian motions and a vector of point-to-line last passage percolation times in a discrete random environment. A consequence describes the distribution of the all-time supremum of Dyson Brownian motion with drift. A finite temperature version relates the point-to-line partition functions of two directed polymers, with an inverse-gamma and a Brownian environment, and generalises Dufresne’s identity. Our proof introduces an interacting system of Brownian motions with an invariant measure given by a field of point-to-line log partition functions for the log-gamma polymer.


Introduction
In this paper we generalise to a random matrix setting the classical identity

sup_{t≥0} (B(t) − μt) = e(μ) in distribution,   (1)

where B is a Brownian motion, μ > 0 a drift and e(μ) is a random variable which has the exponential distribution with rate 2μ. In our generalisation, the Brownian motion is replaced by the largest eigenvalue process of a Brownian motion with drift on the space of Hermitian matrices (see Sect. 2) and the single exponentially distributed random variable is replaced by a random variable constructed from a field of independent exponentially distributed random variables using the operations of summation and maximum. In fact this latter random variable is well known as a point-to-line last passage percolation time. This result gives a connection between random matrix theory and the Kardar-Parisi-Zhang (KPZ) universality class, a collection of models related to random interface growth including growth models, directed polymers in a random environment and various interacting particle systems. Connections of this form originated in the seminal work of Baik, Deift and Johansson [2] showing that the limiting distribution of the largest increasing subsequence in a random permutation is given by the Tracy-Widom GUE distribution. They have been extensively studied since then: for curved initial data (in our context point-to-point last passage percolation) in [4,27,35,36,39,44], where the Robinson-Schensted-Knuth (RSK) correspondence plays a key role, and for flat initial data (in our context point-to-line last passage percolation) in [3,6,11,21,33,40], where the relationships are more mysterious.
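The classical identity (1) is easy to illustrate numerically. The sketch below is a Monte Carlo sanity check, not part of any proof; the drift, step size, horizon and number of paths are all illustrative choices. An Exp(2μ) random variable has mean 1/(2μ), and the running maximum of a discretised drifted Brownian motion should reproduce this up to a small discretisation bias.

```python
import numpy as np

# Monte Carlo illustration of sup_{t>=0} (B(t) - mu*t) ~ Exp(2*mu).
# All parameters below are illustrative choices.
rng = np.random.default_rng(0)
mu, dt, T, n_paths = 1.0, 0.005, 12.0, 4000
n_steps = int(T / dt)

position = np.zeros(n_paths)
running_max = np.zeros(n_paths)
for _ in range(n_steps):
    position += np.sqrt(dt) * rng.standard_normal(n_paths) - mu * dt
    np.maximum(running_max, position, out=running_max)

# Exp(2) has mean 0.5; the discrete maximum slightly undershoots the
# continuous supremum, so the estimate sits a little below 0.5.
print(running_max.mean())
```

By time T = 12 the paths sit far below their maxima with overwhelming probability, so truncating the horizon contributes essentially no error.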
There are two results which particularly relate to Theorem 1. Baik and Rains [3] used a symmetrised version of the RSK correspondence to prove an equality in law between the point-to-line last passage percolation time and the largest eigenvalue from the Laguerre orthogonal ensemble (LOE), see Sect. 2 for the definition; while a more recent work by Nguyen and Remenik [33] used the approach of multiplicative functionals from [9] to prove an equality in law between the supremum of non-intersecting Brownian bridges and the square root of the largest eigenvalue of LOE. In Sect. 2 we show these two results can be combined to establish Theorem 1 in the case of equal drifts: α_1 = α_2 = · · · = α_n.
One aspect of the links between random matrices and growth models in the KPZ class is a striking variational representation for the largest eigenvalue of Hermitian Brownian motion. Specifically, consider a system of reflected Brownian motions, where each particle is reflected up from the particle below (see Sect. 3). Then the largest particle of this system is equal in distribution, as a process, to the largest eigenvalue of a Hermitian Brownian motion, see [4,25,36,44]. This can be combined with a time reversal, as in [10], to show that the all-time supremum of the largest eigenvalue has the same distribution as the largest particle in a stationary system of reflecting Brownian motions but with an additional reflecting wall at the origin. This is a generalisation of the classical argument that deduces from the identity (1) that the invariant measure of a reflected Brownian motion with negative drift is the exponential distribution. Thus we are motivated to study the invariant measure of this system of reflecting Brownian motions with a wall, and unexpectedly we find that the entire invariant measure, rather than just the marginal distribution of the top particle, can be described by last passage percolation.
Let α_j > 0 for each j = 1, . . . , n and let B_1, . . . , B_n be independent Brownian motions with drifts (−α_1, . . . , −α_n). A system of reflected Brownian motions with a wall at the origin can be defined inductively using the Skorokhod construction, with all particles started at the origin:

Y_1(t) = B_1(t) − inf_{0≤s≤t} B_1(s),   (2)
Y_j(t) = B_j(t) + sup_{0≤s≤t} (Y_{j−1}(s) − B_j(s)),  j = 2, . . . , n.   (3)

We will show in Sect. 3 that the distribution of Y(t) = (Y_1(t), . . . , Y_n(t)) converges to a unique invariant measure and we denote a random variable with this law by (Y*_1, . . . , Y*_n). This is equal in distribution to a vector of point-to-line last passage percolation times in which the point from which the directed paths start is allowed to vary: let Π^flat_n(k, l) denote the set of all directed (up and right) nearest-neighbour paths from the point (k, l) to the line {(i, j) : i + j = n + 1} and let

G(k, l) = max_{π ∈ Π^flat_n(k,l)} Σ_{(i,j)∈π} e_{ij},   (4)

where the e_{ij} are independent exponential random variables indexed by N² ∩ {(i, j) : i + j ≤ n + 1} with rates α_i + α_{n−j+1}.
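The whole field (G(k, l)) in (4) can be computed by a single dynamic-programming sweep over anti-diagonals, since G(k, l) = e_{kl} + max(G(k+1, l), G(k, l+1)) away from the line. The sketch below uses 0-based indices and helper names of our own choosing, and checks the sweep against brute-force path enumeration.

```python
import numpy as np
from itertools import product

def ptl_field(e):
    # Point-to-line last passage times G(i, j) for all i + j <= n - 1
    # (0-based; the line i + j = n + 1 of the text becomes i + j = n - 1).
    n = e.shape[0]
    G = np.full((n, n), -np.inf)
    for s in range(n - 1, -1, -1):      # sweep anti-diagonals towards (0, 0)
        for i in range(s + 1):
            j = s - i
            if s == n - 1:
                G[i, j] = e[i, j]       # paths started on the line
            else:
                G[i, j] = e[i, j] + max(G[i + 1, j], G[i, j + 1])
    return G

def ptl_brute(e, i, j):
    # Enumerate every up/right path from (i, j) to the line i + j = n - 1.
    n = e.shape[0]
    best = -np.inf
    for moves in product((0, 1), repeat=n - 1 - i - j):
        x, y, tot = i, j, e[i, j]
        for m in moves:
            x, y = (x + 1, y) if m else (x, y + 1)
            tot += e[x, y]
        best = max(best, tot)
    return best

rng = np.random.default_rng(5)
n = 5
alpha = rng.uniform(0.5, 1.5, size=n)
# rates alpha_i + alpha_{n-j+1} of the text, written 0-based
rates = alpha[:, None] + alpha[None, ::-1]
e = rng.exponential(1.0 / rates)        # numpy parametrises by scale = 1/rate
G = ptl_field(e)
```

The sweep costs O(n²) while enumeration costs O(2^n), so the comparison is only feasible for small n.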
We will prove Theorem 2 by finding transition densities for both systems of a similar form to those found for TASEP in Schütz [41] and reflected Brownian motions in Warren [44] and use these to calculate explicit densities for both vectors. Then Theorem 1, with general drifts, follows from Theorem 2 by the time reversal argument discussed previously.
Point-to-line last passage percolation is related to the totally asymmetric exclusion process (TASEP) by interpreting last passage times as the times at which particles jump. The point-to-line geometry corresponds to a periodic initial condition for TASEP, where particles are initially located at every even site of the negative integers. The joint distribution of particle positions at a fixed time is given by a Fredholm determinant in [11,40], and under a suitable limit the authors obtain the Airy_1 process. Their techniques also provide Fredholm determinants more generally, for example for the vector (G(1, n), . . . , G(n, n)). In TASEP and in the systems of reflected Brownian motions studied in [45] the role of the flat geometry is played by a periodic initial condition, whereas for the Brownian model (Y(t))_{t≥0} considered above this role is played by a reflecting wall at the origin. This is a substantial difference: a natural path-valued process to consider is the evolution, as n varies, of the path of the top particle; in this setting the techniques used in [11,40,45] are no longer applicable. The path of the top particle is a candidate for a finite-n analogue of the Airy_1 process.
Another motivation for this reflected system is provided by queueing theory: reflected Brownian motions have been considered as a model for tandem queues in heavy traffic and the invariant measures have been studied extensively both analytically and numerically [14,17,20,24,26,35]. It is known from [26] that the invariant measure has an explicit product form when a skew symmetry condition for the angles of reflection holds and it is known from [17] that the invariant measure can be expressed as a sum of exponential random variables if a weaker relation between the angles holds. In our case, the presence of a wall, which has a natural queueing interpretation as a deterministic arrival process, ensures that the skew symmetry condition fails; nonetheless Theorem 2 describes the non-reversible invariant measure and we give an explicit formula for its density in Sect. 3.
A further classical result from probability theory that we consider is Dufresne's identity. Let μ > 0, let B^(−μ) be a Brownian motion with drift −μ and let γ^(−1)(μ) denote an inverse gamma random variable with shape parameter μ and rate 1. Then Dufresne's identity is the equality in law

∫_0^∞ exp(2 B^(−μ)(t)) dt = (1/2) γ^(−1)(μ) in distribution,

which has been studied in mathematical finance and diffusion in a random environment (see [32,46] and the references within). This is a positive temperature version of the fact that the all-time supremum of Brownian motion with negative drift has an exponential distribution, and it suggests the following positive temperature version of Theorem 1.
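Dufresne's identity can be sanity-checked at the level of first moments. For μ > 1, Fubini's theorem gives E[∫_0^∞ e^{2B(t) − 2μt} dt] = ∫_0^∞ e^{(2 − 2μ)t} dt = 1/(2(μ − 1)), which agrees with the mean of (1/2)γ^(−1)(μ). The Monte Carlo sketch below (illustrative parameters, not part of the proof) checks this for μ = 3, where both sides have mean 1/4.

```python
import numpy as np

# Monte Carlo check of Dufresne's identity via first moments.
# For mu = 3 the expected value of the exponential functional is
# 1/(2*(mu - 1)) = 0.25, matching the mean of (1/2)*InverseGamma(3, 1).
rng = np.random.default_rng(1)
mu, dt, T, n_paths = 3.0, 0.004, 8.0, 2000
n_steps = int(T / dt)

increments = np.sqrt(dt) * rng.standard_normal((n_steps, n_paths))
B = np.cumsum(increments, axis=0)                       # Brownian paths
t = dt * np.arange(1, n_steps + 1)[:, None]
A = (np.exp(2.0 * B - 2.0 * mu * t) * dt).sum(axis=0)   # Riemann sum of the functional

print(A.mean())  # approximately 0.25
```

The horizon T = 8 is harmless here because the integrand decays like e^{−4t} in expectation, so the truncated tail is negligible.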
Theorem 3 Let B_1, . . . , B_n be independent Brownian motions with drifts −α_i. Let W_{ij} be a collection of inverse gamma random variables indexed by N² ∩ {(i, j) : i + j ≤ n + 1} with shape parameters α_i + α_{n−j+1} and rate 1, and let Π^flat_n denote the set of all directed paths from (1, 1) to the line {(i, j) : i + j = n + 1}; then the point-to-line partition function constructed from the Brownian environment is equal in law to the point-to-line partition function constructed from the weights (W_{ij}). The left hand side of this identity is the partition function for a point-to-line polymer in a Brownian environment while the right hand side is the partition function for the point-to-line log-gamma polymer. The point-to-point polymers have been studied in a number of recent papers: the Brownian model in [7,35,37] and the log-gamma polymer in [8,16,38,42], with one motivation being their relationship to the KPZ equation (see [15] for a survey). The point-to-line log-gamma polymer, which corresponds to a flat initial condition for the KPZ equation, has been studied recently by [6,34] using a local version of the geometric RSK correspondence, and an expression is given for the Laplace transform of the point-to-line partition function of the log-gamma polymer in terms of Whittaker functions. From Theorem 3 it follows that the Laplace transform of the partition function of the point-to-line Brownian model, which has not been studied previously, is also given by the same expression.
For the proof, we use a time reversal argument to show that Theorem 3 follows from a stronger result on the invariant measure of a system of Brownian motions where the reflection rules of the system in Theorem 2 are replaced by smooth exponential interactions. We find this invariant measure by embedding the Brownian system in a larger system of interacting Brownian motions, indexed by a triangular array, such that the invariant measure of this system is given by a field of point-to-line log partition functions for the log-gamma polymer.

Equal drifts and connections to LOE
This section discusses in more detail the connection between the results of Nguyen and Remenik [33], and Baik and Rains [3].
We first introduce the relevant random matrix ensembles and processes. We consider a Brownian motion on the space of n × n Hermitian matrices, denoted (H(t))_{t≥0}, constructed from independent entries {H_{ij} : i ≤ j} such that along the diagonal the H_{ii} are real standard Brownian motions, the entries above the diagonal {H_{ij} : i < j} are standard complex Brownian motions, and the remaining entries are determined by the Hermitian constraint H_{ij} = conj(H_{ji}). The ordered eigenvalues λ_1, . . . , λ_n form a system of Brownian motions conditioned (in the sense of Doob) not to collide, with a specified entrance law from the origin which can be constructed as a limit from the interior of the Weyl chamber (for example, see [31]). The time-changed matrix-valued process (H^br(t))_{t∈[0,1]} given by H^br(t) = (1 − t)H(t/(1 − t)) is a Brownian bridge in the space of Hermitian matrices, and its eigenvalues are given by applying this time change to the above system of Brownian motions conditioned not to collide. It can be checked, for example by calculating the joint distribution of particles at a sequence of times, that the eigenvalues of a Hermitian Brownian bridge are given by a system of Brownian bridges, which we denote (B^br_1, . . . , B^br_n) with the ordering B^br_1 ≤ · · · ≤ B^br_n, all started at zero at time 0 and ending at zero at time 1, with a specified entrance and exit law constructed as a limit from the interior of the Weyl chamber, and conditioned (in the sense of Doob) not to collide in the time interval t ∈ (0, 1).
Let X be an m × n matrix with entries given by independent standard normal random variables and assume m ≥ n. Then M = X^T X is an n × n matrix from the Laguerre orthogonal ensemble (LOE) and the joint density of eigenvalues is given by c_n ∏_{i<j} |λ_i − λ_j| ∏_i λ_i^a e^{−λ_i/2}, where c_n is a normalisation constant and the parameter a = (m − n − 1)/2. Throughout this paper we will only be interested in the case a = 0, or equivalently m = n + 1. The main result of Nguyen and Remenik [33] is an equality in law between the supremum of n non-intersecting Brownian bridges and the square root of the largest eigenvalue of LOE, up to an explicit constant. We use the time change between Hermitian Brownian motions and bridges to express this in terms of a Hermitian Brownian motion, where the change of variables is given by u = t/(1 − t) and v = ux² and the largest eigenvalue inherits the scaling property of Brownian motion; we refer to the resulting identity as Eq. (5). This is connected to last passage percolation by the results of Baik and Rains [3]. We refer to Sections 10.5 and 10.8.2 of Forrester [22] for the precise statements we use, which are obtained after taking a suitable limit of the geometric data considered in [3] to exponential data. Let Π^flat_n denote the set of all directed nearest-neighbour paths from the point (1, 1) to the line {(i, j) : i + j = n + 1}, where the directed paths consist only of up and right steps: that is to say, paths whose co-ordinates are non-decreasing. We let e_{ij} be independent exponential random variables indexed by N² ∩ {(i, j) : i + j ≤ n + 1} with rates α_i + α_{n−j+1} and define the last passage percolation time G = max_{π∈Π^flat_n} Σ_{(i,j)∈π} e_{ij}. This can be compared with point-to-point last passage percolation in a symmetric random environment. Fix n and define exponential data {ê_{ij} : i, j ≤ n} by ê_{ij} = e_{ij} for i + j < n + 1, ê_{ij} = ê_{n+1−j, n+1−i} for i + j > n + 1, and ê_{ij} = 2e_{ij} for i = n − j + 1. Let Π_n denote the set of all directed (up and right) nearest-neighbour paths from the point (1, 1) to the point (n, n).
Due to the symmetry of the random environment, 2 max_{π∈Π^flat_n} Σ_{(i,j)∈π} e_{ij} = max_{π∈Π_n} Σ_{(i,j)∈π} ê_{ij}: an optimal point-to-point path may be chosen symmetric about the anti-diagonal, and its weight is then twice the weight of its first half. The RSK correspondence can be applied to any rectangular array of data and generates a pair of semi-standard Young tableaux (P, Q) with shape ν such that ν_1 is equal to the point-to-point last passage percolation time. When applied to exponential data with symmetry (see Section 10.5.1 of Forrester [22]), the two tableaux can be constructed from each other and the distribution of ν has an explicit density with respect to Lebesgue measure for distinct α_i. In the case when α_i = 1 for each i = 1, . . . , n this can be evaluated as a limit and gives the eigenvalue density for LOE (scaled by a constant factor of 2). In combination with Eq. (6) this yields Eq. (7). Therefore the combination of Eqs. (5) and (7) proves Theorem 1 in the case when D is a multiple of the identity matrix. We could use this time change argument in the reverse direction to provide an alternative proof of the result of Nguyen and Remenik, starting from Eq. (7) and our proof of Theorem 1.

Time reversal
In the introduction we defined a system of reflected Brownian motions with a wall at the origin, Y = (Y_1, . . . , Y_n), and we now define the system without the wall. Let α_j > 0 for each j = 1, . . . , n and let B_1, . . . , B_n be independent Brownian motions with drifts −α_n, . . . , −α_1, in reversed order. A system of reflected Brownian motions can be defined inductively using the Skorokhod construction: Z_1 = B_1 and

Z_j(t) = B_j(t) + sup_{0≤s≤t} (Z_{j−1}(s) − B_j(s)),  j = 2, . . . , n.

An iterative application of the above gives the n-th particle the representation

Z_n(t) = sup_{0≤t_1≤···≤t_{n−1}≤t} Σ_{i=1}^n (B_i(t_i) − B_i(t_{i−1})), with t_0 = 0 and t_n = t.

This gives an interpretation of the largest particle in a reflected system as a point-to-point last passage percolation time in a Brownian environment. Similarly the n-th particle in the system with a wall defined by (2)–(3) has a representation of the same form, where the only difference is that there is one extra supremum over t_0 ≥ 0 and we have reversed the order of the drifts. These systems are related: in [10] it was proved in the zero drift case that for each fixed t, Y_n(t) is equal in law to sup_{0≤s≤t} Z_n(s), by a time reversal argument which easily extends to the case with drifts. We prove a vectorised version of this time reversal which can also be useful for studying the full vector (Y_1, . . . , Y_n). We first extend the definition of Z to a triangular array Z = (Z^k_j : 1 ≤ j ≤ k ≤ n) with the representation

Z^k_j(t) = sup_{0≤t_1≤···≤t_{j−1}≤t} Σ_{i=1}^j (B_{n−k+i}(t_i) − B_{n−k+i}(t_{i−1})), with t_0 = 0 and t_j = t,

so that Z^n_j = Z_j. We note that the Z process is still constructed from only n independent Brownian motions.

Proposition 1 For each fixed t ≥ 0, (Y_1(t), . . . , Y_n(t)) is equal in law to (sup_{0≤s≤t} Z^1_1(s), . . . , sup_{0≤s≤t} Z^n_n(s)). In particular, the equality in law of the marginal distribution of the last co-ordinate gives the extension of [10] to general drifts.

Proof Fix t and observe that each co-ordinate can be rewritten under the time reversal s → t − s, where the final equality requires changing the index of summation from i to k − i + 1.
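On a discrete grid, the Skorokhod recursion and the iterated-supremum representation of the top particle agree path by path, which makes for a deterministic check. The sketch below (our own helper names; arbitrary illustrative drifts) builds a three-particle Z system recursively and compares Z_3 at the terminal time with a brute-force maximum over ordered times.

```python
import numpy as np

# Discrete-grid check that the Skorokhod recursion
#   Z_1 = B_1,  Z_j(t) = B_j(t) + sup_{s<=t} (Z_{j-1}(s) - B_j(s))
# reproduces the iterated-supremum representation of Z_n.
rng = np.random.default_rng(2)
m, dt, drifts = 200, 0.01, [-0.5, -1.0, -1.5]
B = [np.concatenate(([0.0],
                     np.cumsum(np.sqrt(dt) * rng.standard_normal(m - 1) + a * dt)))
     for a in drifts]

Z = B[0].copy()
for j in range(1, 3):
    Z = B[j] + np.maximum.accumulate(Z - B[j])

# Brute force over ordered grid times 0 <= t1 <= t2 <= t for
# Z_3(t) = sup sum_i (B_i(t_i) - B_i(t_{i-1})), with t_0 = 0 and t_3 = t.
best = -np.inf
for a in range(m):
    for b in range(a, m):
        best = max(best, B[0][a] + B[1][b] - B[1][a] + B[2][-1] - B[2][b])

print(abs(Z[-1] - best))  # essentially zero
```

The agreement is exact up to floating-point rounding, since both sides maximise the same finite set of sums.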
Proposition 2 (i) The vector (sup_{0≤s≤t} Z^1_1(s), . . . , sup_{0≤s≤t} Z^n_n(s)) converges almost surely as t → ∞ to a finite random variable. From this and Proposition 1 we can deduce that (Y_1(t), . . . , Y_n(t)) converges in distribution as t → ∞ to a random variable which we denote (Y*_1, . . . , Y*_n) and which is equal in law to (sup_{0≤s<∞} Z^1_1(s), . . . , sup_{0≤s<∞} Z^n_n(s)). (ii) The top particle satisfies Y*_n = sup_{0≤s<∞} Z^n_n(s) in distribution, and this is equal in distribution to the all-time supremum of the largest eigenvalue of a Hermitian Brownian motion with drift. (iii) Suppose that α_i = 1 for all i = 1, . . . , n; then the top particle is in addition equal in distribution, up to an explicit constant, to the square root of the largest eigenvalue of LOE. The random variable (Y*_1, . . . , Y*_n) is distributed according to the unique invariant measure of the Markov process Y, which will follow from Lemma 3.
Proof We first show the almost sure convergence in part (i).
It is sufficient to show that the suprema sup_{0≤s<∞} Z^1_1(s), . . . , sup_{0≤s<∞} Z^n_n(s) are almost surely finite. We prove a stronger statement that will be useful later, namely that Z^k_j(t)/t → −δ^k_j almost surely as t → ∞, where we denote min(α_k, α_{k−1}, . . . , α_{k−j+1}) by δ^k_j. We proceed, for each k, by induction on j.
For j = 1 we have Z^k_1(t) = B_{n−k+1}(t) and the required statement is a property of Brownian motion with drift. For the inductive step we use the Skorokhod representation Z^k_j(t) = B_{n−k+j}(t) + sup_{0≤s≤t} (Z^k_{j−1}(s) − B_{n−k+j}(s)) and, making use of the inductive hypothesis, deduce that Z^k_j(t)/t tends to −min(α_{k−j+1}, δ^k_{j−1}) = −δ^k_j. For parts (ii) and (iii), the first equality in distribution follows by the time reversal at the start of this section. The second equality in distribution follows from the well known equality in distribution of processes between the largest particle in a reflected system of Brownian motions and the largest eigenvalue of Hermitian Brownian motion. For equal parameters a proof can be found in any of [4,25,36,44] and for general parameters a proof can be found in [1]. The final equality in distribution for part (iii) follows from the results of Nguyen and Remenik and the time change in Sect. 2.
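The law of large numbers used in this induction is easy to observe numerically. For a two-particle system driven by drifts −a_1 and −a_2, the top particle's slope should approach −min(a_1, a_2); the sketch below (illustrative parameters and grid) checks this on a single long path.

```python
import numpy as np

# Numerical illustration of Z(t)/t -> -min(a1, a2) for a two-particle
# reflected system built by the Skorokhod recursion.
rng = np.random.default_rng(3)
T, dt, a1, a2 = 4000.0, 0.05, 0.5, 1.0
n = int(T / dt)

B1 = np.cumsum(np.sqrt(dt) * rng.standard_normal(n) - a1 * dt)
B2 = np.cumsum(np.sqrt(dt) * rng.standard_normal(n) - a2 * dt)
Z2 = B2 + np.maximum.accumulate(B1 - B2)   # top particle of the reflected pair

print(Z2[-1] / T)  # close to -0.5
```

The fluctuations of Z2(T) are of order sqrt(T), so Z2(T)/T concentrates around the limiting slope at rate 1/sqrt(T).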
The fluctuations of the largest eigenvalue of the Laguerre orthogonal ensemble are governed in the large n limit by the Tracy-Widom GOE distribution. This distribution arises as the scaling limit for models in the KPZ universality class with flat initial data and so we now see that (the marginals of) the stationary distribution of reflecting Brownian motions with a wall also lies within this universality class. This is explained by Eq. (9) or the relationship to sup 0≤s≤∞ Z n (s) along with Eq. (8) which both identify Y * n as a point-to-line last passage percolation time in a Brownian environment.

Transition density
The system of reflected Brownian motions with a wall can be defined through a system of SDEs and we use this to define the process with a general initial condition. Let 0 ≤ y_1 ≤ y_2 ≤ · · · ≤ y_n and define the system started from (y_1, . . . , y_n), where L_1 is the local time process at zero of Y_1 and L_j is the local time process at zero of Y_j − Y_{j−1} for each j = 2, . . . , n. This is a Markov process and we give its transition density, which has a form similar to [1,10,41,44,45]. Let W^+_n = {0 ≤ z_1 ≤ · · · ≤ z_n} denote the state space of a system of reflected Brownian motions with a wall. We define differential and integral operators D_α and J_α acting on infinitely differentiable functions f : [0, ∞) → R which have superexponential decay at infinity, where we define the derivative at zero to be the right derivative at zero. The operators satisfy easy-to-verify identities: (i) commutation relations, for any real α, β; (ii) inverse relations, for any real α, where Id denotes the identity map; (iii) relations to ordinary differentiation and integration, for any real α. We use the notation D_{α_1,...,α_n} = D_{α_1} · · · D_{α_n} and J_{α_1,...,α_n} = J_{α_1} · · · J_{α_n} to denote concatenated operations, and D^α_x, J^α_x in order to specify a variable x on which the operators act. We note that when the operators act on different variables they also commute. Let φ^(α)_t (resp. ψ^(α)_t and η^(α)_t) be the transition density of a Brownian motion (resp. a Brownian motion killed at the origin, and a Brownian motion reflected at the origin) with drift α. These have explicit expressions, and the transition density for Brownian motion with drift reflected at the origin can be found in Appendix 1, Section 16 of [13]. When the drift is zero we may omit the superscript. The defining expression for ψ_t, valid for all x, y ≥ 0, extends to all x, y and can be used to specify the right derivative of ψ_t at zero, ensuring the operation D can be applied to ψ_t.
A similar procedure can be used to specify the right derivative at zero of ψ^(α)_t and η^(α)_t, and all of these functions lie in the class of functions specified at the start of this section. We then define the proposed transition density r_t(x, y) as an exponential prefactor multiplied by a determinant, det(r_{ij}(t; x_i, y_j)), whose entries are built from ψ_t by applying the operators D and J.
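The identities (i)–(iii) can be sanity-checked numerically under one concrete choice of the operators. The choice below, D_a f = af − f′ and J_a f(x) = ∫_x^∞ e^{a(x−y)} f(y) dy, is our assumption (the defining display is not reproduced above, and other sign conventions are possible), but on functions with superexponential decay it satisfies D_a J_a = J_a D_a = Id and the commutation relation D_a J_b = J_b D_a.

```python
import numpy as np

# Numerical check of the inverse and commutation relations for one assumed
# consistent choice of the operators D_a and J_a.
x = np.linspace(0.0, 6.0, 3001)
dx = x[1] - x[0]
f = np.exp(-x**2)   # smooth test function with superexponential decay

def J(a, g):
    # int_x^inf e^{a(x - y)} g(y) dy via a reversed cumulative trapezoid rule
    h = np.exp(-a * x) * g
    seg = 0.5 * (h[1:] + h[:-1]) * dx
    tail = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))
    return np.exp(a * x) * tail

def D(a, g):
    return a * g - np.gradient(g, dx)

# inverse relation D_a J_a = Id and commutation D_a J_b = J_b D_a
err1 = np.max(np.abs(D(1.5, J(1.5, f)) - f)[5:-5])
err2 = np.max(np.abs(D(1.5, J(0.7, f)) - J(0.7, D(1.5, f)))[5:-5])
print(err1, err2)  # both small
```

The few grid points nearest the endpoints are excluded because the finite-difference derivative is one-sided there; the truncation of the integral at x = 6 is harmless thanks to the decay of f.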

Proposition 3
The transition probabilities of (Y 1 (t), . . . , Y n (t)) t≥0 have a density with respect to Lebesgue measure given by r t (x, y).
The following calculation shows that the proposition holds in the case n = 1 by using Siegmund duality, which relates the reflected process started from x to a killed Brownian motion started from y; this duality can be stated in an integral form for any fixed t. We differentiate this expression in y, apply Girsanov's theorem and symmetry to the killed Brownian motion, and use the identities in (iii) to obtain an identity valid for all x, y ≥ 0. In the case of equal drifts this identity can be used to give an alternative form r̄_t of the transition density: the transition probabilities of (Y_1(t), . . . , Y_n(t))_{t≥0} with drift vector (−1, . . . , −1) have a density with respect to Lebesgue measure on W^+_n given by r̄_t(x, y).

Lemma 1 For any f : W^+_n → R which is bounded, continuous and zero in a neighbourhood of the boundary of W^+_n, ∫_{W^+_n} r_t(x, y) f(y) dy → f(x) as t → 0, uniformly for all x ∈ W^+_n. This also holds with r replaced by r̄.

Proof (Proposition 3) We show that r satisfies the Kolmogorov backward equation, together with its boundary conditions, for the process Y.
To show that ∂r/∂x_1 = 0 at x_1 = 0 we consider the matrix in the definition of r and bring the prefactor e^{α_1 x_1} in r into the top row of this matrix. Using an identity for ψ_t, we observe that the derivative in x_1 of each resulting entry equals zero when evaluated at x_1 = 0. This shows that the derivative of every term in the top row of this matrix equals zero, because the derivative in x_1 commutes with the operations acting in the y_j. Therefore ∂r/∂x_1 = 0 at x_1 = 0.
To show that the Kolmogorov backward equation is satisfied for x, y in the interior of W^+_n, we differentiate in t and use the fact that ψ_t satisfies the heat equation. It is convenient to express the terms in brackets using the operations D and J. The operations J_x and D_x commute, and therefore, since r_t(x, y) = e^{−Σ_i α_i y_i} det(r_{ij}(t; x_i, y_j)), the proposed transition densities r satisfy the Kolmogorov backward equation for Y = (Y_1, . . . , Y_n), and the arguments in [44] show that r are the transition densities for Y. We sketch this argument but refer to [44] for the details. Let f be a bounded continuous function which is zero in a neighbourhood of the boundary of W^+_n and define F(u, x) = ∫_{W^+_n} r_u(x, y) f(y) dy for u ≥ 0 and x ∈ W^+_n. Fix some T > 0 and ε > 0. By using Itô's formula and the fact that r_t solves the Kolmogorov backward equation we obtain an expression for F(T − ε, ·). The ε is introduced in order to ensure smoothness of F and allow the application of Itô's formula; however, using Lemma 1 we can take the limit as ε tends to zero to conclude that F(T, x) = E_x[f(Y_T)]. This holds for all bounded continuous f which are zero in a neighbourhood of the boundary of W^+_n, which is sufficient to prove that r_T(x, ·) is the density of the distribution of Y_T started from x, since this distribution does not charge the boundary.

Proof (Lemma 1)
The proof follows the argument in [44]. The transition density for killed Brownian motion satisfies a decomposition q = q_1 + q_2, where q_1 collects the leading Gaussian terms and q_2 := q − q_1. We first show that the contribution of q_2 vanishes in the limit, Eq. (15). We observe that q_2 is a sum of products of factors where in each product there is at least one factor of the form (16) for some 1 ≤ i, j ≤ n. On {y_1 ≤ ε} the function f takes the value zero, and on {y_1 > ε} the factor (16) approaches zero exponentially fast as t → 0. As a result (15) holds. We now consider q_1 and observe that the entries in the matrix simplify due to the translation invariance of the function: in particular the operators D_α act simply on any smooth function h of the difference of the arguments. This means that the matrix in q_1 has heat kernels on its diagonal, and therefore the term corresponding to the identity permutation in the determinant of q_1 is a standard n-dimensional heat kernel. The remaining terms are negligible as in [44].
The transition densities must satisfy the semigroup property and this suggests a generalisation of the Andréief (or Cauchy–Binet) identity. This identity states that for any integrable functions (f_i)_{i=1}^n and (g_j)_{j=1}^n,

det( ∫ f_i(x) g_j(x) dx )_{i,j=1}^n = (1/n!) ∫ · · · ∫ det( f_i(x_k) )_{i,k=1}^n det( g_j(x_k) )_{j,k=1}^n dx_1 · · · dx_n.   (17)

We prove a generalisation involving the inhomogeneous derivative and integral operators, J and D.
Lemma 2 (i) Let (f_i)_{i=1}^n and (g_j)_{j=1}^n be collections of infinitely differentiable functions on [0, ∞) such that g_j has superexponential decay at infinity for each j = 1, . . . , n while f_i has at most exponential growth at infinity for each i = 1, . . . , n.
(ii) Let D_α, J_α be defined as in Eq. (13) and assume f_i(0) = 0 for each i = 1, . . . , n. Then the corresponding identity holds for the inhomogeneous operators. We note that (i) is not quite the homogeneous case of (ii), because (ii) involves applying integration by parts to x_1 whereas (i) does not. We also note that g^(−k) = J^(k) g, so that part (i) can be applied to the transition density r̄ from Eq. (14). We have not attempted to make the conditions on g optimal and have simply chosen conditions which are sufficient for our purposes.
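The classical identity (17) itself is easy to verify numerically; the sketch below checks the case n = 2 on [0, 1] with simple test functions, comparing the determinant of the Gram matrix with the symmetrised double integral (a plain trapezoid rule on a tensor grid is accurate enough here).

```python
import numpy as np

# Numerical check of the Andreief (Cauchy-Binet) identity for n = 2 on [0, 1]:
#   det[ int f_i g_j dx ] = (1/2!) * int int det[f_i(x_k)] det[g_j(x_k)] dx1 dx2
x = np.linspace(0.0, 1.0, 1201)
dx = x[1] - x[0]
f = [np.ones_like(x), x]              # f_1 = 1, f_2 = x
g = [np.exp(-x), np.exp(-2 * x)]      # g_1 = e^{-x}, g_2 = e^{-2x}

def trap(v, step):
    # trapezoid rule along the last axis
    return (0.5 * (v[..., :-1] + v[..., 1:])).sum(axis=-1) * step

gram = np.array([[trap(fi * gj, dx) for gj in g] for fi in f])
lhs = np.linalg.det(gram)

# det[f_i(x_k)] and det[g_j(x_k)] on a tensor-product grid
detF = np.outer(f[0], f[1]) - np.outer(f[1], f[0])
detG = np.outer(g[0], g[1]) - np.outer(g[1], g[0])
rhs = trap(trap(detF * detG, dx), dx) / 2.0

print(lhs, rhs)  # the two sides agree
```

The same tensor-grid approach extends to n = 3 at the cost of a three-dimensional grid; for the inhomogeneous version of the lemma one would additionally need a discretisation of the operators J and D.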
Proof We start with the proof of (ii). We observe that for 0 ≤ x < z an integration by parts identity, Eq. (18), holds. We use this formula iteratively to prove Eq. (19). For the first step we use a Laplace expansion of the determinants appearing on the left hand side, apply Eq. (18) with parameter α = α_n, integrating with respect to x_n from x_{n−1} to ∞, and then reconstruct the resulting expressions as determinants. This gives three terms: a main term and two boundary terms, the latter evaluated at x_n = x_{n−1} and x_n = ∞. These boundary terms are both zero: the determinant of A_{ij} vanishes at x_n = x_{n−1}, because two columns are equal, and we obtain zero at infinity by virtue of the growth and decay conditions imposed on f and g.
The general structure becomes clear after the second step. We perform the same procedure, with the integration by parts (18) applied with parameter α = α_{n−1} and integration with respect to the variable x_{n−1} between x_{n−2} and x_n. We again obtain a main term and two boundary terms, the latter evaluated at x_{n−1} = x_{n−2} and x_{n−1} = x_n. The determinant of A_{ij} vanishes at x_{n−1} = x_{n−2} while the determinant of B_{ij} vanishes at x_{n−1} = x_n; therefore both boundary terms vanish. Equation (19) now follows by iterating this procedure. The order of the integrations by parts, and the choice of the parameter α in (18), is important to ensure there are no boundary terms: we integrate by parts in x_n, x_{n−1}, . . . , x_1 with parameters α_n, α_{n−1}, . . . , α_1 respectively. In the integration by parts with respect to (x_1, α_1) there is a boundary term at zero; however, this is also zero due to the constraint that f_i(0) = 0 for each i = 1, . . . , n.
Finally part (ii) of the lemma follows from applying the Andréief identity (17) to the right hand side of Eq. (19). Part (i) of the lemma is proved in the same way, except that there is no integration by parts in x_1, so that the condition f_i(0) = 0 is not required.

Lemma 3 (Dupuis and Williams [20]) Let (Y_1(t), . . . , Y_n(t))_{t≥0} be the system of reflected Brownian motions with a wall given in Eq. (12) and let P_t(x, ·) denote the law of (Y_1(t), . . . , Y_n(t)) when started from an initial state x ∈ W^+_n. Then Y has a unique invariant measure, denoted π, and P_t(x, ·) converges to π as t → ∞ for every x ∈ W^+_n.
There are stronger results in the literature including convergence rates: for example Theorem 4.12 of [14] can be applied to prove V-uniform ergodicity for Y. For the process where all particles are started from the origin, the convergence in distribution is contained in Proposition 2.

Proposition 4 (i) When α_1 = · · · = α_n = 1, (Y*_1, . . . , Y*_n) has a density with respect to Lebesgue measure on W^+_n given by π̄, with the sequence of functions (f_i)_{i≥0} defined inductively. (ii) When the drifts are distinct, (Y*_1, . . . , Y*_n) has a density with respect to Lebesgue measure on W^+_n given by π.

We make two remarks: (i) For equal drifts the initial function f_0 satisfies G* f_0 = 0 and f_0′(0) + 2 f_0(0) = 0. The functions f_i could also have been defined so as to satisfy the boundary condition f_i′(0) + 2 f_i(0) = 0 for i ≥ 1; however, π̄ would be unchanged, as we can use row operations to add on constant multiples of f_0. (ii) When the drifts are distinct, Dieker and Moriarty [17] show the invariant measure is a sum of exponential random variables and this sum can be calculated explicitly for small values of n. However, when the drifts coincide Proposition 4 part (i) shows the invariant measure contains polynomial prefactors, in the style of repeated eigenvalues.

Lemma 4
The functions π̄ and π are positive on W^+_n and satisfy ∫_{W^+_n} π̄ = ∫_{W^+_n} π = 1. We will prove this in Sect. 4 and for the moment prove Theorem 2 assuming this lemma.

Proof (Proposition 4)
In the case of equal rates we apply part (i) of Lemma 2 to calculate the convolution between the proposed invariant measure and the transition densities from Proposition 3. The functions f_i and η satisfy the growth and decay conditions at infinity required for Lemma 2; here η^(−1)_t denotes the transition density of reflected Brownian motion with drift −1. Fixing y, we integrate by parts; the boundary terms, each evaluated at zero and infinity, all equal zero by the boundary conditions on η and f_k. Integrating in t and iterating, the functions f_k are invariant under the action of η^(−1)_t modulo multiples of f_0, . . . , f_{k−1}. Consequently, for any t > 0 we can apply row operations to obtain the invariance of π̄. In the case when the drifts are not equal we apply Lemma 5 to express the convolution of our proposed invariant measure and the transition density from Proposition 3 as a single determinant. The required conditions are satisfied because f_i(0) = 0 for each i = 1, . . . , n and the conditions on the growth and decay of f_i and ψ at infinity hold. Evaluating the resulting determinant shows that the convolution equals π(y).

Transition densities
Last passage percolation times can be interpreted as an interacting particle system with a pushing interaction between the particles. We define a Markov chain (G^pp(k))_{k≥0} with n particles with positions on the real line ordered as G^pp_1 < · · · < G^pp_n. We update the system between time k − 1 and time k by applying the following local update rule sequentially to G^pp_1, . . . , G^pp_n:

G^pp_j(k) = max(G^pp_j(k − 1), G^pp_{j−1}(k)) + e_{jk},

where (e_{jk})_{1≤j≤n, k≥1} is an independent sequence of exponential random variables and G^pp_0 := −∞ by convention. With all particles started at the origin, G^pp_n(n) is the point-to-point last passage percolation time from (1, 1) to (n, n).
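The pushing dynamics can be checked directly against the last passage percolation recursion. The sketch below (our own index conventions, with the boundary particle held at 0, which is equivalent to −∞ here since all positions are nonnegative) runs n sequential update steps and compares the resulting positions with the point-to-point passage times computed by dynamic programming.

```python
import numpy as np

# Pushing particle system vs point-to-point last passage percolation.
rng = np.random.default_rng(4)
n = 6
e = rng.exponential(scale=1.0, size=(n, n))  # e[j, k]: weight for particle j at step k

pos = np.zeros(n)          # all particles started from the origin
for k in range(n):         # n update steps
    prev = 0.0             # boundary particle; 0 acts like -inf for nonneg. positions
    for j in range(n):     # sequential update from the bottom particle upwards
        pos[j] = max(pos[j], prev) + e[j, k]
        prev = pos[j]

# Independent computation: last passage times L(i, j) by dynamic programming.
L = np.zeros((n + 1, n + 1))
for i in range(1, n + 1):
    for j in range(1, n + 1):
        L[i, j] = e[i - 1, j - 1] + max(L[i - 1, j], L[i, j - 1])

print(pos[-1], L[n, n])  # the top particle equals the passage time to (n, n)
```

In fact the full vector agrees: after n steps particle j sits at the passage time to the point (j, n), which is what makes this Markov chain useful for studying the joint law of passage times.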
The advantage of such an interpretation is that there is an explicit transition density for this Markov chain. This was proven in the case of equal parameters (and geometric data) by Johansson [28] and with inhomogeneous parameters (and geometric data) by Dieker and Warren [18]. This Markov chain plays an important role in the recent work, for example [29], on the two-time distribution of last passage percolation. In this section we show how this Markov chain can also be used to study point-to-line last passage percolation.
For α ∈ R, let D_α, I_α be defined by acting on functions f : R → R which are infinitely differentiable for x > 0, are equal to zero on x ≤ 0 and are such that f^(k)(0+) exists for each k ≥ 0. On such a class of functions define D_α and I_α as in Eq. (24). Then D_α, I_α preserve this class of functions and satisfy D_α I_α f = f for functions of this form. We also define homogeneous analogues: for a function g satisfying the above, define g^(r)(x) or D^(r) g to be the r-th iterated derivative of g for x > 0 and equal to zero for x ≤ 0, and similarly g^(−r)(x) or I^(−r) g to be the iterated integral ∫_0^x ((x − y)^{r−1}/(r − 1)!) g(y) dy for x > 0 and equal to zero for x ≤ 0.
Proposition 5 Let (G pp (k)) k≥0 be the Markov chain described above with n particles constructed from independent exponentially distributed random variables (e i j ) 1≤i≤n, j≥1 with e i j having rate α i > 0.
(i) In the case of equal rates α_1 = · · · = α_n = 2, the m-step transition probabilities have a density with respect to Lebesgue measure on W_n^+ given by, for x, y ∈ W_n^+, where g_m(z) = (2^m/Γ(m)) z^{m−1} e^{−2z} 1_{z>0} and the g_m^{(r)} are defined above. (ii) For α_j > 0 for each j = 1, . . . , n, the m-step transition probabilities have a density with respect to Lebesgue measure on W_n^+ given by, for x, y ∈ W_n^+, with D and I defined in Eq. (24).
Our proof is a generalisation of the method in Johansson [28] to the case of inhomogeneous parameters and exponential rather than geometric jump distributions. The exponential case is not an entirely straightforward generalisation of the formulas in the geometric case because of taking derivatives of functions with a discontinuity. In order to obtain m-step transition densities from 1-step transition densities we prove a version of Lemma 2 for our operators D and I. There are two differences: we allow for possible discontinuities in the functions at the origin, and part (ii) of the Lemma allows for new particles to be added at the origin. This will be used in the next subsection to study point-to-line last passage percolation.
Lemma 5 (i) Let f , g be functions satisfying the conditions at the start of this section.
Then for x, z ∈ W_n^+, the stated determinantal identity holds. (ii) Let f_1, . . . , f_{n−1} be a collection of infinitely differentiable functions on R_+ with f_i(0) = 0 for each i = 1, . . . , n − 1. Let g be a function satisfying the conditions at the start of this section. Then for z ∈ W_n^+, the analogous identity holds, using the notation y_1 := 0, with all entries defined analogously to (25).

Proof (Proposition 5)
We first prove that the one-step transition densities are given by Q_1. This is equivalent to showing that (26) holds for all n ≥ 1 and for x, y ∈ W_n^+, where we use the convention y_0 := 0. The right hand side is zero unless x_j < y_j for all j = 1, . . . , n. We check this for the left hand side. If y_k ≤ x_k for some 1 ≤ k ≤ n then the first k columns of the matrix in (26) only have non-zero elements in the first k − 1 rows, since for j ≤ k and i ≥ k the (i, j)-th entry of the matrix in (26) is a function which only takes non-zero values for positive arguments and the argument is non-positive. For the remainder of the proof, we can suppose x_j < y_j for j = 1, . . . , n. We prove (26) by induction on n and observe that the result holds at n = 1. For the inductive step we use a Laplace expansion of the determinant in the last row, as in (27). We prove the terms in the sum for 1 ≤ k ≤ n − 2 are zero by considering separately the cases y_k ≤ x_n and y_k > x_n. If y_k ≤ x_n then the corresponding entry vanishes. Observe that (28) holds for z > 0 and j > 1. If y_k > x_n, then (28) can be used to re-express the columns indexed by j = k + 1, . . . , n of the final determinant in (27), which involve strictly positive arguments. We apply the inductive hypothesis to the determinant of M with the variables x_1, . . . , x_{n−1} and y_1, . . . , y_{k−1}, y_{k+1}, . . . , y_n and parameters α_1, . . . , α_{n−1} to observe that (29) equals (30). We observe that max(y_j, x_j) = y_j for each j = k + 1, . . . , n − 1. Therefore the expression in {·} is differentiable in y_{k+1}, . . . , y_n, and furthermore equals a factor of e^{α_{n−1} y_{n−1}} multiplied by a factor independent of y_{n−1}. Therefore the expression in {·} vanishes once we apply ∂/∂y_{n−1} − α_{n−1}, and (30) equals zero. Therefore the sum in Eq. (27) can be restricted to the sum of the two terms in (31). We consider the two cases y_{n−1} ≤ x_n and y_{n−1} > x_n separately. If y_{n−1} ≤ x_n then the only non-zero contribution comes from the second term in Eq. (31).
In this case by applying the inductive hypothesis and noting that max(y_{n−1}, x_n) = x_n we obtain the required result. Suppose instead y_{n−1} > x_n and consider Eq. (31). Observe that (32) holds. We consider the first determinant in Eq. (31). The argument in the last column is strictly positive and so Eq. (28) can be used to re-express this column as in (33). We apply the inductive hypothesis to the determinant of K with variables x_1, . . . , x_{n−1} and y_1, . . . , y_{n−2}, y_n and parameters α_1, . . . , α_{n−1} to obtain (34). The expression in {·} is independent of y_n. Therefore the term in (34) involving ∂/∂y_n applied to {·} equals zero.
Using (33), (34) and the inductive hypothesis we evaluate (31) multiplied by the prefactor exp(−Σ_{j=1}^n α_j(y_j − x_j)) for y_{n−1} > x_n and obtain (35) and (36). To complete the inductive step of the proof of (26) in the case y_{n−1} > x_n we use (27) and (31) to simplify the left hand side of (26) and observe that (32) and (35) cancel while (36) equals the required expression. This completes the inductive step and we establish that (26) holds. The formula for the m-step transition densities follows from Lemma 5.
In the case when all parameters are equal, say α_1 = · · · = α_n = 2, the transition density is given by the determinant above. We transform this into the expression given in the statement of the proposition by iteratively using the identities e^{−2z} D_2 f_m(z) = D_0(e^{−2z} f_m(z)) and e^{−2z} I_2 f_m(z) = I_0(e^{−2z} f_m(z)) for all z ∈ R.

Proof (Lemma 5)
We first prove part (i) for f and g which satisfy the conditions of the Lemma and furthermore are infinitely differentiable on all of R. We apply Lemma 2 with the functions f_i(·) = (I_{α_1,...,α_i} f)(· − x_i) and g_j(·) = (D_{α_1,...,α_j} g)(z_j − ·). The operators D_α have been defined on a more general class of functions in this section but agree with the definition used in Lemma 2 when the functions are smooth. The condition on the growth of f at infinity in Lemma 2 can be removed because g(z_j − ·) is zero in a neighbourhood of infinity. As a result Lemma 2 proves (37), where we have used the fact that the operators pass through the convolution, because f and g are smooth on all of R, to simplify the right hand side. Therefore the Lemma holds for functions which are infinitely differentiable on all of R in addition to satisfying the stated conditions. We now use approximation to extend the class of functions f and g to those stated in the Lemma. For each ε > 0, let f_ε be an infinitely differentiable function satisfying f_ε(x) = f(x) for x ≥ ε and f_ε(x) = 0 for x ≤ 0, and such that there exists a constant C with |f_ε(x)| < C for all ε and all x ∈ [−1, 1]. For any z ∈ R and j ≥ 1, (38) and (39) hold. We prove (39); Eq. (38) is more straightforward. Observe that if z ≤ 0 then both sides are zero, and for z > 0 the difference tends to zero as ε → 0 because for ε < z/2 we have g_ε^{(j)}(z − y) = g^{(j)}(z − y) for 0 ≤ y ≤ ε, and g and f_ε are bounded.
Equation (37) holds with f and g replaced by f_ε and g_ε because these are smooth. Defining f_ε^{(i,j)} and g_ε^{(i,j)} analogously to (25) we obtain (41). We want to pass to the limit as ε ↓ 0. Equations (38) and (39) show that the right hand side of Eq. (41) converges. Let x_1 < · · · < x_n and z_1 < · · · < z_n and let ε < min(min_{i<j}{z_j − z_i}, min_{i<j}{x_j − x_i}). Consider the Laplace expansions of the determinants on the left hand side of (41). A term in the expansion corresponding to permutations σ and ρ equals a product of factors. If ρ is the identity then each factor g_ε(z_{ρ(i)} − y_i) is bounded uniformly in ε for 0 ≤ y_i ≤ z_{ρ(i)}. If ρ is not the identity then there exists i < j with ρ(i) > i and ρ(j) ≤ i. The (i, ρ(i)) factor is equal to g_ε^{(i,ρ(i))}(z_{ρ(i)} − y_i) and is bounded uniformly in ε on the region 0 ≤ y_i ≤ z_{ρ(i)} − ε. On the region y_i > z_{ρ(i)} − ε this factor may be unbounded; however, the (j, ρ(j)) factor is zero because y_j ≥ y_i > z_{ρ(i)} − ε > z_{ρ(j)} and therefore the argument in the (j, ρ(j)) factor is strictly negative. The same argument applies to σ. This shows that the integrand is bounded uniformly in ε, and since it converges pointwise, the convergence of the left hand side of (41) follows from the dominated convergence theorem. We have established part (i) when x_1 < · · · < x_n and z_1 < · · · < z_n. We will complete the proof of part (i) by showing that both sides are continuous in x and z for x, z ∈ W_n^+. For the right hand side of part (i), we observe that y → (f ∗ g)^{(i,j)}(y) is continuous except if j > i and y = 0. We consider the Laplace expansion of the right hand side with the sum indexed by permutations ρ. If ρ is the identity then each factor is continuous. If ρ is not the identity, then there exists i < j with ρ(i) > i and ρ(j) ≤ i. The argument of the (j, ρ(j)) factor is z_{ρ(j)} − x_j and so the (j, ρ(j)) factor is zero in a neighbourhood of such points. As a result the right hand side of part (i) is continuous in x and z.
The integrand on the left hand side of part (i) is bounded over compact intervals and so the left hand side is continuous in x and z. This completes the proof of part (i).
Formally, part (ii) of the Lemma follows from embedding the matrix of size n − 1 on the left hand side of part (ii) in a matrix of size n with the addition of a delta function, where f_0 := δ_0(·) and f_0^{(1,j)} are interpreted as weak derivatives. Continuing formally, part (ii) is now an application of Lemma 2 where g_j(y_i) = D_{α_2,...,α_j} g(z_j − y_i) and f_0 := δ_0. The top row on the right hand side is equal to (δ_0, g_j) = g^{(1,j)}(z_j).
To give a rigorous proof of part (ii) we use a similar integration by parts argument to Lemma 2 and approximate g by a smooth g as in part (i) of the current Lemma. In the proof, the condition f i (0) = 0 for each i = 1, . . . , n − 1 is needed for the boundary term from the integration by parts with respect to y 2 to be zero.

Proof of Theorem 2
We apply the results of the previous section to study point-to-line last passage percolation. Recall the point-to-line last passage percolation times G(k, l) are defined by (4). It is convenient to view the exponential data and last passage percolation times as set up in the following array, where we can view the vertical direction as time, increasing upwards, and each horizontal layer as describing the positions of a system of particles with an additional particle added after each time step. These last passage percolation times form a Markov chain in k given by the layers (G(n − k + 1, k), . . . , G(n − k + 1, 1)). We use the notation G^{pl}(k) = (G^{pl}_1(k), . . . , G^{pl}_k(k)). The recursive property of last passage percolation implies that G^{pl} satisfies the update rule (42) for all 1 ≤ j ≤ k ≤ n, where we recall that e_{ij} has rate α_i + α_{n−j+1} and we use the notation G^{pl}_0(k) := 0 for all k = 0, . . . , n. Comparing this with the update rule for the point-to-point case given at (23) we see that it is the same up to a shift in the labels of the particles. Thus we can repeatedly apply the 1-step transition densities of Proposition 5 while adding in an extra particle at the origin after each step to compute the joint distribution of the vector (G(1, n), . . . , G(1, 1)). This will show that the distribution of this vector agrees with the invariant measure of the Brownian system considered in Theorem 4. This also proves the positivity and normalisation of π̄ and π stated in Lemma 4, which is required to complete the proof of Theorem 4.
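The layer dynamics can be checked numerically. In the sketch below (the index conventions are our reading of the array and of the update rule (42), not quoted from the displayed equations), the point-to-line times are computed by the backward recursion G(i, j) = e_{ij} + max(G(i + 1, j), G(i, j + 1)) on the triangle, and then compared with the layers generated by a pushing update in which a new particle enters at the origin at each step:

```python
import random

def point_to_line(n, e):
    """Point-to-line times on the triangle i + j <= n + 1:
    G(i, j) = e_{ij} + max(G(i+1, j), G(i, j+1)), with entries beyond
    the line i + j = n + 1 taken as 0."""
    G = {}
    for s in range(n + 1, 1, -1):          # anti-diagonals i + j = s, line first
        for i in range(1, s):
            j = s - i
            G[(i, j)] = e[(i, j)] + max(G.get((i + 1, j), 0.0),
                                        G.get((i, j + 1), 0.0))
    return G

def layer_dynamics(n, e):
    """Rebuild the layers G^pl(k) = (G(n-k+1, k), ..., G(n-k+1, 1)) by a
    pushing recursion with G^pl_0 := 0, i.e. a new particle is added at
    the origin at each time step (our reading of (42))."""
    layer, layers = [], []
    for k in range(1, n + 1):
        new = []
        for j in range(1, k + 1):
            prev = layer[j - 2] if j >= 2 else 0.0   # previous layer, label j-1
            cur = new[j - 2] if j >= 2 else 0.0      # current layer, label j-1
            new.append(max(prev, cur) + e[(n - k + 1, k - j + 1)])
        layer = new
        layers.append(new)
    return layers
```

For a small triangle of exponential weights the layers produced by the pushing dynamics agree exactly with the point-to-line times.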

Proof (Theorem 2)
We prove the result by induction on n and observe that the case n = 1 is true. We first prove the case of equal rates: α_1 = · · · = α_n = 1. Suppose that the distribution of G^{pl}(n − 1) is given by the density where the functions f_0, f_1, . . . , f_{n−1} are specified in Proposition 4. In view of Eq. (42) and Proposition 5 the distribution of G^{pl}(n) has the density given by the corresponding convolution, where we re-label the particle positions at time n − 1 as x_2, . . . , x_n and use the notation x_1 := 0. We use Lemma 5 part (ii) to express this as a single determinant, where D^{(j)} denotes the j-th derivative. The convolutions can be calculated by using the defining property of the f_i, namely the relation holding for each i = 1, . . . , n − 1. From this it follows, for x > 0, by differentiation and using the boundary conditions f_i(0) = 0 for i = 1, . . . , n − 1 and F_i(0) = 0 for i = 0, . . . , n − 2. Finally note that g(x) = f_0(x) for x > 0. Therefore the distribution of G^{pl}(n) has the required density, and this completes the inductive step with equal rates.
In the case of distinct rates we proceed again by induction. The inductive hypothesis allows us to suppose that the distribution of G^{pl}(n − 1) is given by the density above. Then the density of G^{pl}(n) is computed using the one-step transition density for general jump rates in Proposition 5, where f^{(i,j;α_1+α_i)} is defined as in (25) but with parameters α_1 + α_i for i = 1, . . . , n, and once again we have used the notation x_1 := 0. In applying the transition density from Proposition 5 we need to substitute α_1 + α_i for α_i to take account of the fact that the random variable e_{1,n−j+1} which contributes to G^{pl}_j(n) has rate α_1 + α_j.
Therefore the density of G^{pl}(n) is given by an expression which is now in the form to apply Lemma 5 part (ii), with D^∅ = Id in the resulting determinant. For each i = 2, . . . , n the integrals in the first row can be computed explicitly (noting that the α_i are distinct): the result is C e^{−α_1 y_1}, where C is some constant in y_1, and C e^{−α_1 y_1} can be removed from the i-th row by row operations. This shows that the density of G^{pl}(n) has the required form and so completes the inductive step with distinct (α_1, . . . , α_n). For general (α_1, . . . , α_n) such that α_i > 0 for each i = 1, . . . , n we prove the result by a continuity argument in α. By Proposition 2 we have the following representation of the invariant measure, and in the proof we also showed that almost surely there exists some random time v such that all of the suprema on the right hand side have stabilised. Moreover, for any ε > 0 this time can be chosen uniformly over drifts bounded away from the origin: α_1 ≥ ε, . . . , α_n ≥ ε. We can construct a realisation of the Brownian paths so that they are continuous in α_1, . . . , α_n in the supremum norm on compact time intervals. Therefore, since ε is arbitrary, we obtain that the right hand side is almost surely continuous in the variables (α_1, . . . , α_n) on the set (0, ∞)^n. Therefore the distribution of (Y*_1, . . . , Y*_n) is continuous on the same set, and so is the distribution of (G(1, n), . . . , G(1, 1)) (as a finite number of operations of summation and maxima applied to exponential random variables). This continuity completes the proof for any α_i > 0 for i = 1, . . . , n.

Proof (Theorem 1)
The Theorem follows by combining Theorem 2 with Proposition 2 part (ii).

Time reversal
The partition function for a 1 + 1 dimensional directed point-to-point polymer in a Brownian environment (also known as the O'Connell-Yor polymer and studied in [35,37]) is the random variable Z_n(t). We define a second random variable with an extra integral over s_0 and with the drifts reordered; this is the partition function for a 1 + 1 dimensional directed polymer in a Brownian environment with a flat initial condition. A change of variables shows the stated equality in law, where the final equality follows by changing the index of summation from i to n − i + 1.
As t → ∞, the right hand side converges to ∫_0^∞ Z_n(s) ds and we now check that this is an almost surely finite random variable. We consider the drifts and Brownian motions separately and bound the contribution from the Brownian motions. For each j = 1, . . . , n let δ_j > 0 and observe that there exist random constants K_1, . . . , K_n such that B_1(s) ≤ K_1 + δ_1 s for all s > 0 and sup_{0≤s≤t} B_j(t) − B_j(s) ≤ K_j + δ_j t for t ≥ 0 and each j = 2, . . . , n. By choosing δ_1 + · · · + δ_n < min_{1≤j≤n} α_j this shows that the negative drifts dominate and the integral is almost surely finite. As a result the left hand side of (45) converges in distribution to a random variable which we denote Y*_n.

Exponentially reflecting Brownian motions with a wall
We extend (44) to a definition of a vector (Y_1, . . . , Y_n) as a functional of n independent Brownian motions with drifts −α_1, . . . , −α_n. The system (Y_1, . . . , Y_n) can be described by a system of SDEs. Let X_j = log(½ Y_j) and observe that Itô's formula yields the SDEs (47, 48). We will call X a system of exponentially reflecting Brownian motions with a (soft) wall at the origin. We observe that (Y_1, . . . , Y_n) starts with each co-ordinate at zero and that each co-ordinate is strictly positive for all strictly positive times. This constructs an entrance law for the process (X_1, . . . , X_n) from negative infinity. We will be interested in the invariant measure of this system, which is related to log partition functions of the log-gamma polymer (see Theorem 4).
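Since the displayed SDEs (47, 48) are not reproduced in this text, the Euler-Maruyama sketch below uses illustrative drift coefficients: a soft-wall term exp(−2x_1) on the lowest particle and a one-sided exponential interaction exp(−(x_j − x_{j−1})) pushing particle j up, each with downward drift α_j. These coefficients are assumptions chosen to show the general shape of such a simulation, not the paper's exact system:

```python
import math
import random

def simulate_soft_wall(alphas, T=5.0, dt=1e-3, seed=3):
    """Euler-Maruyama sketch of exponentially reflecting Brownian motions
    with a soft wall at the origin. Drift coefficients are illustrative
    assumptions: exp(-2 x_1) pushes the lowest particle away from the wall,
    and exp(-(x_j - x_{j-1})) pushes particle j up off particle j-1."""
    rng = random.Random(seed)
    n = len(alphas)
    x = [0.0] * n
    for _ in range(int(T / dt)):
        drift = [0.0] * n
        # exponent clamped at 20 purely for numerical safety of the sketch
        drift[0] = -alphas[0] + math.exp(min(-2.0 * x[0], 20.0))
        for j in range(1, n):
            drift[j] = -alphas[j] + math.exp(min(-(x[j] - x[j - 1]), 20.0))
        for j in range(n):
            x[j] += drift[j] * dt + rng.gauss(0.0, math.sqrt(dt))
    return x
```

Calling, e.g., simulate_soft_wall((0.5, 1.0, 1.5)) returns the time-T positions; the exponential terms act as soft versions of reflection at the wall and between consecutive particles.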
To prove this we embed exponentially reflecting Brownian motions with a wall in a larger system of interacting Brownian motions indexed by a triangular array (X_{ij}(t) : i + j ≤ n + 1, t ≥ 0) with a unique invariant measure given by a whole field of log partition functions for the log-gamma polymer. (Fig. 1 depicts the interactions in the system {X_{ij} : i + j ≤ n + 1}.) The Brownian system that we consider (see Eq. (49) for a formal definition) involves particles evolving according to independent Brownian motions with a drift term which depends on the neighbouring particles. The interactions in the drift terms are one-sided and drawn as arrows in Fig. 1, where the particle at the point of the arrow has a drift depending on the particle (or wall) at the base of the arrow. There are two types of interaction: (i) → is an exponential drift depending on the difference of the two particles. This corresponds in a zero-temperature limit to particles which are instantaneously reflected in order to maintain an interlacing.
(ii) The second type is a more unusual interaction and corresponds in a zero-temperature limit to a weighted indicator function applied to the difference of the two particles. The effect of introducing this interaction is that the process X_{ij}, when started from its invariant measure and run in reverse time, is given by the process where the direction of each interaction is reversed (see Proposition 6).
More formally, we consider a diffusion process with values in R^{n(n+1)/2} whose generator is an operator L acting on compactly supported smooth functions on R^{n(n+1)/2} according to (49). We observe that L restricted to functions of (x_{1n}, . . . , x_{11}) alone is the generator for a system of exponentially reflecting Brownian motions with a wall, defined in (47, 48).
For foundational results on such a system we refer to Varadhan [43] (see pages 197, 254, 259-260), which can be summarised in the following lemma.

Lemma 6 Suppose there exists a smooth function u : R^d → (0, ∞) such that u(x) → ∞ as |x| → ∞ and Lu ≤ cu for some c > 0. Then there exists a unique process with generator L and the process does not explode. Suppose furthermore there exists a smooth function φ such that L*φ = 0; then the measure with density φ is the unique invariant measure for the process with generator L.

Lemma 7
Let L be the generator defined in (49). There exists a smooth function u : R d → (0, ∞) such that u(x) → ∞ as |x| → ∞ and L u ≤ cu for some c > 0.
Therefore the conditions of Lemma 6 are satisfied and there exists a unique process with generator L given by (49) which does not explode.
Proof We define the function u as follows. The diffusion terms and terms involving a bounded drift can all be easily bounded by a constant multiple of u. We check this also holds for the terms involving unbounded drifts: the terms involving a wall and the terms involving interlacing interactions between particles each satisfy the required bounds. We sum over all interactions to prove that u has the required properties.

The log-gamma polymer
The invariant measure of both the exponentially reflecting Brownian motions with a wall defined in (47) and (48) and the X array defined in (49) can be described by the log-gamma polymer. The log-gamma polymer originated in the work of Seppäläinen [42] and is defined as follows. Let {W i j : (i, j) ∈ N 2 , i + j ≤ n + 1} be a family of independent inverse gamma random variables with densities, and parameters γ i j = α i + α n− j+1 . Let Π flat n (k, l) denote the set of all directed (up and right) paths from the point (k, l) to the line {(i, j) : i + j = n + 1} and define the partition functions and log partition functions: These are the partition functions for a (1 + 1) dimensional directed polymer in a random environment given by {W i j : (i, j) ∈ N 2 , i + j ≤ n + 1}.
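To make the definitions concrete, the sketch below samples inverse-gamma weights with parameters γ_{ij} = α_i + α_{n−j+1} and checks the local update ζ_{ij} = (ζ_{i,j+1} + ζ_{i+1,j}) W_{ij} against a direct sum over all up-right paths to the line (helper names and the small size n = 4 are ours):

```python
import random

def inv_gamma(gamma_param, rng):
    # inverse-gamma sample: reciprocal of a Gamma(gamma_param, 1) variable
    return 1.0 / rng.gammavariate(gamma_param, 1.0)

def partition_functions(n, W):
    """Local update zeta_ij = (zeta_{i,j+1} + zeta_{i+1,j}) W_ij on the
    triangle i + j <= n + 1, with zeta_ij = W_ij on the line i + j = n + 1."""
    Z = {}
    for s in range(n + 1, 1, -1):
        for i in range(1, s):
            j = s - i
            if s == n + 1:
                Z[(i, j)] = W[(i, j)]
            else:
                Z[(i, j)] = (Z[(i, j + 1)] + Z[(i + 1, j)]) * W[(i, j)]
    return Z

def z_direct(n, W, i0, j0):
    """Sum over all up-right paths from (i0, j0) to the line i + j = n + 1
    of the product of weights along the path."""
    steps = n + 1 - i0 - j0
    total = 0.0
    for mask in range(1 << steps):          # each bit: move in j (1) or i (0)
        i, j, prod = i0, j0, W[(i0, j0)]
        for s in range(steps):
            if (mask >> s) & 1:
                j += 1
            else:
                i += 1
            prod *= W[(i, j)]
        total += prod
    return total
```

The recursion and the path sum agree at every vertex of the triangle, which is the content of the local update rule used in Lemma 8.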

Lemma 8
The distribution of ξ i j given ξ i+1, j = x i+1, j and ξ i, j+1 = x i, j+1 has a density with respect to Lebesgue measure proportional to The distribution of the field (ξ i, j : i + j ≤ n +1) has a density with respect to Lebesgue measure on R n(n+1)/2 proportional to Proof The partition functions satisfy a local update rule ζ i j = (ζ i, j+1 + ζ i+1, j )W i j and equivalently ξ i j = log W i j + log(e ξ i, j+1 + e ξ i+1, j ). This combined with the explicit density for the inverse gamma density (50) proves the first statement. The second part then follows by an iterative application of the first part.

The invariant measure of exponentially reflecting Brownian motions with a wall and the log-gamma polymer
This has a unique invariant measure which we denote (X*_{ij} : i + j ≤ n + 1). A consequence is that (ξ_{1n}, . . . , ξ_{11}) is distributed as the unique invariant measure of the system of exponentially reflecting Brownian motions with a wall, defined in (47, 48).
A key role in the proof will be played by inductive decompositions of the generator for the Brownian system in (49) and the explicit density for the log-gamma polymer in Lemma 8. Let S ⊂ N² ∩ {(i, j) : i + j ≤ n + 1} have a boundary given by a down-right path in the orientation of Fig. 2 (the boundary is denoted by the dotted line); explicitly, we require that if (i, j) ∈ S then (i + k, j + l) ∈ S for all k, l ≥ 0 such that i + j + k + l ≤ n + 1. We can define the log-gamma polymer on S and we denote the density of log partition functions on S by π_S(x). Lemma 8 gives an explicit form for π_S, where D_n = {(i, j) ∈ N² : i + j = n + 1}. We can build the density of the log-gamma polymer inductively by adding an extra vertex (i, j) to S, assuming that both S and S ∪ (i, j) have down-right boundaries in the orientation of Fig. 2. We now consider an inductive decomposition of the generator in (49) which is related to the above decomposition of the log-gamma polymer. We consider a Brownian system with particles indexed by S which (i) agrees with the process with generator L when S = {(i, j) : i + j ≤ n + 1} and (ii) has an invariant measure with density π_S. The process can be represented by the interactions present in the diagram on the left hand side of Fig. 2. We consider a diffusion with values indexed by S with generator L_S, acting on compactly supported smooth functions. Proposition 6 and Lemma 7 show that there exist unique processes with generators L_S and A_S and that these processes do not explode. The motivation for considering A_S is that the process with this generator will be the time reversal of the process with generator L_S when the process is run in its invariant measure π_S. The process with generator A_S can be represented by a diagram in the same way as L_S in Fig. 1, where for the A_S process the direction of every interaction is reversed.
We add in a vertex (i, j) as described in Fig. 2, where we assume that both S and S ∪ (i, j) have boundaries given by down-right paths in the orientation of Fig. 2.

Lemma 9 For any subset S with a down-right boundary in the orientation of Fig. 2, the diffusion with generator ½(L_S + A_S) is a gradient diffusion.
In particular, the process with generator 1 2 (L S + A S ) has invariant measure given by π S and is reversible when run in its invariant measure.
Proof We use the inductive decompositions of L_S, A_S and V_S to check the Lemma inductively. For the base case we let S = {(i, j) : i + j = n + 1} and observe that in this case L_S = A_S and both are the generators for n independent exponentially reflecting Brownian motions with a wall; the Lemma then follows directly. For the inductive step we consider a set S with a down-right boundary and add an extra vertex (i, j) with the property that S ∪ (i, j) also has a down-right boundary. We verify the claim by calculating the non-zero co-ordinates of ∇V* and observing that this gives equality with the right hand side of (57) by using Eqs. (55) and (56).

Lemma 10 Let S be a subset with a down-right boundary in the orientation of Fig. 2 and let d_S denote the difference in drifts between L_S and A_S. Then (i) ∇ · d_S = 0 and (ii) (d_S, ∇V_S) = 0.
Proof Define d* := d_{S∪(i,j)} − d_S, where d_S is extended to be R^{S∪{(i,j)}}-valued by setting d_S(i, j) = 0. Every component of d* is zero except for those listed in (58-60). We observe that ∇ · d* = 0 by differentiating (58-60) and observing that the sum equals zero. Combining this with the inductive hypothesis, that ∇ · d_S = 0, shows that ∇ · d_{S∪(i,j)} = 0. For part (ii), we assume the inductive hypothesis, that (d_S, ∇V_S) = 0, and observe that this means (d_{S∪(i,j)}, ∇V_{S∪(i,j)}) = 0 is equivalent to the identity (61). We observe that d* and ∇V* are only non-zero in the co-ordinates (i, j + 1), (i + 1, j) and (i, j), so we can restrict to considering ∇V_S and d_S in these co-ordinates.
We observe that by definition d_S(i, j) = 0. The indicator functions correspond to the effect of the ↓ or → interactions, which may or may not be present depending on the shape of S. We also note that for i + j = n we have α_i − α_{n−j} = α_{i+1} − α_{n−j+1} = 0. For i + j < n, the terms in V_S which involve any of x_{i,j+1}, x_{i+1,j} or x_{ij} are given via the decompositions (64, 65), where Ṽ_S does not depend on any of x_{i,j+1}, x_{i+1,j} or x_{ij}. Therefore we will check (61) by using Eqs. (52, 58-60, 62-65) in the following. We will first observe that the terms involving indicator functions vanish. The terms in ∇V_S(i, j + 1) involving 1_{(i−1,j+1)∈S} are the negative of the terms in d_S(i, j + 1) involving 1_{(i−1,j+1)∈S} from (62). We have shown above that ∇V*(i, j + 1) = d*(i, j + 1). Therefore the terms involving the indicator functions 1_{(i−1,j+1)∈S} cancel in the sum (d*, ∇V_S) + (d_S, ∇V*). The terms involving 1_{(i+1,j−1)∈S} also cancel in the sum (d*, ∇V_S) + (d_S, ∇V*): in this case, ∇V*(i + 1, j) = −d*(i + 1, j) and the terms involving 1_{(i+1,j−1)∈S} in ∇V_S(i + 1, j) and d_S(i + 1, j) are equal. Therefore it is sufficient to show that Eq. (61) holds in the case when neither (i − 1, j + 1) nor (i + 1, j − 1) is in S. This is a useful simplification, and the following (non-obvious) cancellation then proves that Eq. (61) holds. For i + j < n, it is easy to see that all terms involving e^{−(x_{i,j+1} − x_{i,j+2})} cancel, and this similarly holds for the terms e^{−(x_{i+1,j} − x_{i+2,j})}. It is useful to consider together all terms that involve either e^{−(x_{i,j+1} − x_{i+1,j+1})} or e^{−(x_{i+1,j} − x_{i+1,j+1})}, and all such terms cancel. In the case i + j = n, none of these terms are present; however, there is an extra −e^{x_{i,j+1}} − e^{x_{i+1,j}} in ∇V_S which cancels in (d*, ∇V_S).
The remaining calculation for the cases i + j < n and i + j = n is the same. Once these cancellations have been performed, the left hand side of (61) is a function of x_{i,j+1}, x_{i+1,j} and x_{ij} alone, and has a much simpler form. In particular, after this cancellation (d*, ∇V_S) + (d_S, ∇V*) takes an explicit form, and we can observe that (d*, ∇V*) simplifies to equal its negative: (i) the terms in (d*, ∇V*) that do not involve any α parameters cancel; (ii) the terms involving a single α parameter match; and (iii) the terms involving a product of α parameters match. Therefore (61) holds and part (ii) of the Lemma follows by induction.

Proof (Theorem 4)
Let S be a subset with a boundary given by a down-right path in the orientation of Fig. 2. Lemma 9 shows that ½(L*_S + A*_S)π_S = 0 and Lemma 10 shows that ∇ · (d_S π_S) = π_S(∇ · d_S − (d_S, ∇V_S)) = 0. As a result L*_S π_S = 0 and Lemma 6 proves that π_S is the invariant measure for the process with generator L_S. In particular, the case S = {(i, j) : i + j ≤ n + 1} proves the Theorem.

Proof (Theorem 3) A consequence of Theorem 4 is that
where Y*_n is equal in distribution to ∫_0^∞ Z_n(s) ds by the time reversal at the start of this section and by definition ξ(1, 1) = log ζ(1, 1). The definition of Z_n has α_1, . . . , α_n in a reversed order to the left hand side of Theorem 3; however, the distribution of ζ(1, 1) is invariant under reversing the order of the parameters. This follows from the deterministic fact that ζ(1, 1) takes the same value when constructed from the data {W_{ij} : i + j ≤ n + 1} and the reflected data {W_{ji} : i + j ≤ n + 1} (in fact the distribution of ζ(1, 1) is invariant under any permutation of the α parameters as a consequence of the same invariance for the process Z_n, proven in [37]).
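The deterministic reflection invariance used here can be checked directly: ζ(1, 1) computed from the data {W_{ij}} and from the reflected data {W_{ji}} agree exactly, whatever the positive weights (the helper names below are ours):

```python
import random

def zeta11(n, W):
    """Point-to-line partition function zeta(1,1) from the local update
    zeta_ij = (zeta_{i,j+1} + zeta_{i+1,j}) W_ij on the triangle
    i + j <= n + 1, with boundary zeta_ij = W_ij on the line."""
    Z = {}
    for s in range(n + 1, 1, -1):
        for i in range(1, s):
            j = s - i
            if s == n + 1:
                Z[(i, j)] = W[(i, j)]
            else:
                Z[(i, j)] = (Z[(i, j + 1)] + Z[(i + 1, j)]) * W[(i, j)]
    return Z[(1, 1)]

n = 5
rng = random.Random(2)
# any positive weights will do: the reflection identity is deterministic
W = {(i, j): rng.expovariate(1.0)
     for i in range(1, n + 1) for j in range(1, n + 1) if i + j <= n + 1}
W_reflected = {(i, j): W[(j, i)] for (i, j) in W}
```

Transposing the triangle maps each up-right path to the line bijectively to another such path with the same weight product, so zeta11(n, W) equals zeta11(n, W_reflected) exactly.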

Time reversals and intertwinings
The generator L in (49) depends on a sequence of parameters (α_1, . . . , α_n) and we use the notation (X^{(α_1,...,α_n)}_{ij}(t))_{t∈R, i+j≤n+1} for the process with this generator when we want to make the dependence on the α parameters explicit. Let (X^{(α_1,...,α_n)}_{ij}(t) : i + j ≤ n + 1, t ∈ R) denote the diffusion process with generator (49) in stationarity. This process has the following properties. In particular, for equal drifts, part (i) proves that the top particle has the same distribution when run from its invariant measure either forwards or backwards in time: (X_{11}(t))_{t∈R} =^d (X_{11}(−t))_{t∈R}. This fact is not a priori obvious because the SDEs (47, 48) do not appear to define a reversible diffusion unless n = 1.

Proof
The reversed-time dynamics of the process started in its invariant measure is a Markov process with generator L̂ given by the Doob h-transform of the adjoint generator with respect to its invariant measure; in particular, L̂f = (1/π) L*(π f). Let b be the drift of the process with generator L and a the drift of the process with generator A (where we define A = A_S when S = {(i, j) : i + j ≤ n + 1}). The Doob h-transform simplifies due to the fact that L*π = 0, and we obtain the generator with drift a, where we use that −∇V = a + b from Lemma 9. Therefore the time reversal of the process with generator L is the process with generator A. The process with generator A is represented by Fig. 2 where the direction of every interaction is reversed. This is equivalent to swapping the ij-th particle with the ji-th particle and reversing the order of the parameters. This proves part (i). We first prove part (ii) for the columns of the X array. When run forwards in time the X array has a nested structure in which particles do not depend on particles to the right of them. This means that when considering a particular column, say (X_{n−k+1,k}, . . . , X_{1k}), we can restrict to a subarray (X_{ij} : j ≥ k, i + j ≤ n + 1). The equality of (68) and (69) proves an intertwining between Q^{n−1}_t and Q^n_t with intertwining kernel P^{n−1→n}. This can be expressed in operator notation as Q^{n−1}_t P^{n−1→n} = P^{n−1→n} Q^n_t.
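The simplification of the Doob h-transform can be written out in the model case L = ½Δ + b·∇ with invariant density π ∝ e^{−V} (a standard computation, sketched here as a consistency check rather than quoted from the text):

```latex
\begin{aligned}
\hat{L}f &= \frac{1}{\pi}\,L^{*}(\pi f)
  = \frac{1}{\pi}\Big(\tfrac12 \Delta(\pi f) - \nabla\cdot(b\,\pi f)\Big) \\
 &= \tfrac12 \Delta f + \big(\nabla \log \pi - b\big)\cdot\nabla f
   + \frac{f}{\pi}\,L^{*}\pi
  = \tfrac12 \Delta f + \big({-\nabla V} - b\big)\cdot\nabla f ,
\end{aligned}
```

using L*π = 0 in the last step; since −∇V = a + b, the reversed drift is −∇V − b = a, matching the claim that the time reversal has generator A.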

Zero-temperature limits
We can take a zero temperature limit of the construction we have considered above.
In the limit, particles follow the coupled system of SDEs: for j = 1, . . . , n, and for i > 1 and i + j ≤ n + 1, where (i) L¹_{ij} is the local time process at zero of X_{ij} − X_{i,j−1} for i + j < n + 1, (ii) L¹_{ij} is the local time process at zero of X_{ij} for i + j = n, and (iii) L²_{ij} is the local time process at zero of X_{ij} − X_{i−1,j} for i ≥ 2. This process can be represented by Fig. 1 where the interaction → is now reflection and the second interaction is now a weighted indicator function. The zero-temperature limit of the field of log partition functions is the field of point-to-line last passage percolation times {G(i, j) : i + j ≤ n + 1} (see [5,6]) and it is natural to expect that {G(i, j) : i + j ≤ n + 1} is the invariant measure of {X_{ij} : i + j ≤ n + 1}. However, we do not prove this because the discontinuities in the drifts mean that the conditions for Lemma 6 are no longer satisfied. Instead, we argue that a second proof of Theorem 2 can be provided as a zero-temperature limit of Theorem 4. We can introduce an extra inverse temperature parameter β into the definitions of the processes X, Y and Z given in this section and the results of this section continue to hold. In particular, Theorem 4 and the time reversal in Sect. 5 apply when the weights {W_{ij} : i + j ≤ n + 1} are random variables with inverse gamma distributions and parameters β^{−1}(α_i + α_{n−j+1}). As β → ∞, the left hand side converges almost surely by Laplace's Theorem and the right hand side converges by [5,6]. The time reversal in Proposition 2 allows the distribution of the left hand side to be identified as Y*_n. This argument is easily extended to prove Theorem 2 in its entirety.
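The zero-temperature limit rests on the elementary tropicalisation β^{−1} log(e^{βa} + e^{βb}) → max(a, b). The sketch below (our construction, with weights W_{ij} = e^{β e_{ij}} handled in log space to avoid overflow) checks that β^{−1} log ζ(1, 1) approaches the point-to-line last passage time, with error at most log(#paths)/β:

```python
import math
import random

def log_partition(n, beta, e):
    """log zeta(1,1) for weights W_ij = exp(beta * e_ij), computed stably:
    log zeta_ij = beta*e_ij + logaddexp(log zeta_{i,j+1}, log zeta_{i+1,j})."""
    def logaddexp(a, b):
        if a == -math.inf:
            return b
        if b == -math.inf:
            return a
        m = max(a, b)
        return m + math.log1p(math.exp(min(a, b) - m))
    L = {}
    for s in range(n + 1, 1, -1):
        for i in range(1, s):
            j = s - i
            prev = logaddexp(L.get((i, j + 1), -math.inf),
                             L.get((i + 1, j), -math.inf))
            L[(i, j)] = beta * e[(i, j)] + (prev if prev != -math.inf else 0.0)
    return L[(1, 1)]

def lpp_point_to_line(n, e):
    """Zero-temperature analogue: sums over paths become maxima."""
    G = {}
    for s in range(n + 1, 1, -1):
        for i in range(1, s):
            j = s - i
            G[(i, j)] = e[(i, j)] + max(G.get((i + 1, j), 0.0),
                                        G.get((i, j + 1), 0.0))
    return G[(1, 1)]
```

Since ζ(1,1) is a sum of at most 2^{n−1} path weights, β^{−1} log ζ(1,1) lies between G(1,1) and G(1,1) + (n − 1) log 2 / β, which converges as β → ∞.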

Further random matrix interpretations
We now discuss an alternative version of Theorem 1 that connects two families of random matrices. Let $X$ be a symmetric complex matrix of size $n \times n$ where, for $i < j$, the entries $X_{ij}$ are independent complex Gaussians with mean zero and variance $\frac{1}{2(\alpha_i + \alpha_j)}$, and the entries along the diagonal $X_{ii}$ are independent complex Gaussians with mean zero and variance $\frac{1}{2\alpha_i}$. We call the matrix $X^* X$ a perturbed symmetric LUE matrix. In the case when the $\alpha_i$ are distinct, we will show that the eigenvalues of $X^* X$ have an explicit density with respect to Lebesgue measure. When some of the $\alpha_i$ coincide this density can be evaluated as a limit, and in the case when all the $\alpha_i$ are equal it agrees with the eigenvalue density of the LOE. Our interest in this random matrix ensemble arises from the connection of its eigenvalue density to point-to-line last passage percolation. In the case of equal parameters, a similar result appears in Theorem 7.7 of [3], but with a different variance along the diagonal for the random matrix model and different rates along the diagonal for the exponential data; that the variances and rates along the diagonal can be tuned is a property of RSK (for example, see Chapter 10 of [22]) and of the fact that the sum of the diagonal entries is the trace of a matrix. Point-to-point last passage percolation with inhomogeneous rates for the exponential data was related to random matrices with inhomogeneous variances in [12,19].
To calculate the eigenvalue density we compute the Jacobian (see Chapter 1 of [22] for related examples)
\[
dX \propto \prod_{j<k} |\lambda_k - \lambda_j| \prod_j d\lambda_j \, d\Omega
\]
of the transformation from matrix elements $X$ to the eigenvalues $\lambda$ and angular variables $\Omega$. The choice of parameters ensures that the distribution on matrices can be expressed in terms of a trace, where $dx$ is Lebesgue measure on the independent (complex) entries $(x_{ij} : i \le j)$ of the matrix $X$, the matrix $A = \mathrm{diag}(\alpha_1, \ldots, \alpha_n)$ and $c_n$ is a constant. Let the singular value decomposition be given by $X = U D U^T$, where $U \in U(n)$, the set of $n \times n$ unitary matrices, and $D = \mathrm{diag}(\sqrt{x_1}, \ldots, \sqrt{x_n})$ is the diagonal matrix of singular values of $X$; the singular value decomposition takes this form due to the symmetry of $X$ (this is also referred to as the Autonne–Takagi factorisation). Let $V = U^T \in U(n)$ and $\Lambda = D^2 = \mathrm{diag}(x_1, \ldots, x_n)$. The joint density of eigenvalues then follows, where the integral over the unitary group is calculated by the Harish-Chandra–Itzykson–Zuber formula. This agrees with the density of the output of RSK applied to last passage percolation with symmetric exponential data and modified rates along the diagonal, as described in Sect. 2. Therefore we obtain the following extension of Theorem 1:
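The ensemble is easy to sample numerically. The following sketch (our variable names; it assumes the variance convention stated above, with real and imaginary parts each carrying half the variance of a complex Gaussian entry) constructs a perturbed symmetric LUE sample and returns its eigenvalues:

```python
import numpy as np

def perturbed_symmetric_lue(alpha, rng=None):
    """Eigenvalues of X* X, where X is complex *symmetric* (X_ij = X_ji,
    no conjugation) with Var(X_ij) = 1/(2(alpha_i + alpha_j)) for i < j
    and Var(X_ii) = 1/(2 alpha_i)."""
    rng = np.random.default_rng(rng)
    a = np.asarray(alpha, dtype=float)
    n = len(a)
    # Entrywise variances; the diagonal is tuned differently from i < j.
    var = 1.0 / (2.0 * (a[:, None] + a[None, :]))
    np.fill_diagonal(var, 1.0 / (2.0 * a))
    # Complex Gaussian: real and imaginary parts each with variance var/2.
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    z = np.sqrt(var / 2.0) * g
    # Symmetrise: keep the upper triangle and mirror it (Autonne-Takagi form).
    x = np.triu(z) + np.triu(z, 1).T
    m = x.conj().T @ x                    # X* X is Hermitian and PSD
    return np.linalg.eigvalsh(m)          # ascending, nonnegative

eigs = perturbed_symmetric_lue([1.0, 1.5, 2.0], rng=0)
print(eigs)
```

Because $X^* X$ is Hermitian positive semi-definite, `eigvalsh` returns real nonnegative eigenvalues; with all $\alpha_i$ equal, histograms of these samples can be compared against the LOE eigenvalue density mentioned above.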

Proposition 7
Let $\xi_{\max}$ denote the largest eigenvalue of a perturbed symmetric LUE matrix with parameters $\alpha_i$, let $(H(t) : t \ge 0)$ be an $n \times n$ Hermitian Brownian motion, let $D$ be an $n \times n$ diagonal matrix with diagonal entries $\alpha_j > 0$ for each $j = 1, \ldots, n$, and let $(e_{ij})$ be an independent collection of exponential random variables indexed by the lattice $\mathbb{N}^2$ with rates $\alpha_i + \alpha_{n+1-j}$. Then $\xi_{\max}$ is equal in distribution to the quantities in Theorem 1: the point-to-line last passage percolation time constructed from the $e_{ij}$, and the all-time supremum of the largest eigenvalue of $(H(t) - tD : t \ge 0)$. There does not appear to be any process-level equality between a vector of last passage percolation times and the largest eigenvalues of minors of either (i) the perturbed symmetric LUE or (ii) the Laguerre orthogonal ensemble (nor does the connection between last passage percolation and LOE generalise to non-equal rates).
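The distributional identity between the discrete and matrix sides can be probed numerically. The following self-contained Monte Carlo sketch (our naming and conventions; a sanity check under the assumptions stated earlier, not a proof) compares sample means of the point-to-line passage time and of $\xi_{\max}$ for small $n$:

```python
import numpy as np

def plpp_time(alpha, rng):
    """Point-to-line LPP time G(1,1) on {i + j <= n + 1} with
    exponential weights of rate alpha_i + alpha_{n+1-j}."""
    n = len(alpha)
    G = {}
    for s in range(n + 1, 1, -1):           # anti-diagonals, back to front
        for i in range(1, s):
            j = s - i
            w = rng.exponential(1.0 / (alpha[i - 1] + alpha[n - j]))
            best = 0.0 if s == n + 1 else max(G[(i + 1, j)], G[(i, j + 1)])
            G[(i, j)] = w + best
    return G[(1, 1)]

def xi_max(alpha, rng):
    """Largest eigenvalue of one perturbed symmetric LUE sample."""
    a = np.asarray(alpha, dtype=float)
    n = len(a)
    var = 1.0 / (2.0 * (a[:, None] + a[None, :]))
    np.fill_diagonal(var, 1.0 / (2.0 * a))
    g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    z = np.sqrt(var / 2.0) * g
    x = np.triu(z) + np.triu(z, 1).T        # complex symmetric
    return np.linalg.eigvalsh(x.conj().T @ x)[-1]

rng = np.random.default_rng(1)
alpha = [1.0, 1.5]
N = 4000
m_lpp = np.mean([plpp_time(alpha, rng) for _ in range(N)])
m_rmt = np.mean([xi_max(alpha, rng) for _ in range(N)])
print(m_lpp, m_rmt)   # sample means of the two sides
```

If the identity and our conventions are right, the two sample means should agree up to Monte Carlo error; comparing higher moments or empirical CDFs would give a sharper check.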

Distribution of the largest particle
In this section we consider the distribution of the largest particle of the system of reflected Brownian motions with a wall in its invariant measure. By Theorem 2 and Propositions 2 and 7, this has a number of alternative representations, in particular as a point-to-line last passage percolation time. A variety of expressions convenient for asymptotic analysis have been found for this distribution in [3,6,11,23,30]. The expression that arises most naturally from Proposition 4 is one in terms of the $\tau$-function of a Toda lattice, given in Forrester and Witte, Section 5.4 of [23] (see also Proposition 10.8.1 of Forrester [22]). Their result is part of a more general and powerful theory developed in a series of papers (see [23] and the references therein); nevertheless, it is instructive to see how expressions in terms of a Toda lattice arise from Proposition 4 in an elementary manner.