
TASEP and generalizations: method for exact solution

Abstract

The explicit biorthogonalization method, developed in [24] for continuous time TASEP, is generalized to a broad class of determinantal measures which describe the evolution of several interacting particle systems in the KPZ universality class. The method is applied to sequential and parallel update versions of each of the four variants of discrete time TASEP (with Bernoulli and geometric jumps, and with block and push dynamics) which have determinantal transition probabilities; to continuous time PushASEP; and to a version of TASEP with generalized update. In all cases, multipoint distribution functions are expressed in terms of a Fredholm determinant with an explicit kernel involving hitting times of certain random walks to a curve defined by the initial data of the system. The method is further applied to systems of interacting caterpillars, an extension of the discrete time TASEP models which generalizes sequential and parallel updates.



Fig. 1
Fig. 2

Notes

  1. For fixed \(\vec {y}\in \Omega _N(L-1)\) there are other choices of initial data with \(X^{{\mathrm {head}}}_0=\vec {y}\) which are equivalent to the one above, in the sense that the evolution of the heads (and of the other sections after time \(L-1\)) is the same; this can be checked directly from the definition of the process. For example, one could take \(X^1_0(k)=y_k\) and \(X^i_0(k)=y_k-1\) for \(i=2,\dotsc ,L\).

  2. The “space-like paths” terminology comes from the interpretation of particle systems related to TASEP as growth models; see the explanation in the introduction and Sect. 2.2 of [3], where it was introduced.

  3. Here and later we use \(\vec {x}(t_i)\) to parametrize vectors by time points. In particular, we postulate that \(\vec {x}(t_i)\) and \(\vec {x}(t_{i+1})\) are different vectors even if \(t_{i} = t_{i+1}\). This slight abuse of notation, which makes clear the correspondence between vectors and the associated time points, will simplify the presentation later on.

  4. This is the (signed) determinantal point process on a certain space of Gelfand-Tsetlin patterns having K as its correlation kernel; see Appendix 1 for more details.

  5. We could also take \({{\bar{n}}}=\infty \) and consider instead a sequence \((y_i)_{i\ge 1}\) as initial data, but this does not make any difference, since in applications we are always interested in the evolution of a finite number of particles (see in particular the comment after Assumption 1.1).

References

  1. Arai, Y.: The KPZ fixed point for discrete time TASEPs. J. Phys. A 53(41), 415202, 33 (2020)

  2. Borodin, A., Corwin, I., Remenik, D.: Multiplicative functionals on ensembles of nonintersecting paths. Ann. Inst. H. Poincaré Probab. Statist. 51(1), 28–58 (2015)


  3. Borodin, A., Ferrari, P.L.: Large time asymptotics of growth models on space-like paths. I. PushASEP. Electron. J. Probab. 13(50), 1380–1418 (2008)


  4. Borodin, A., Ferrari, P.L.: Anisotropic growth of random surfaces in 2 + 1 dimensions. Comm. Math. Phys. 325(2), 603–684 (2014)


  5. Borodin, A., Ferrari, P.L.: Random tilings and Markov chains for interlacing particles. Markov Process. Related Fields 24(3), 419–451 (2018)


  6. Borodin, A., Ferrari, P.L., Prähofer, M.: Fluctuations in the discrete TASEP with periodic initial configurations and the Airy1 process. Int. Math. Res. Pap. IMRP, Art. ID rpm002, 47 (2007)

  7. Borodin, A., Ferrari, P.L., Prähofer, M., Sasamoto, T.: Fluctuation properties of the TASEP with periodic initial configuration. J. Stat. Phys. 129(5–6), 1055–1080 (2007)


  8. Borodin, A., Ferrari, P.L., Sasamoto, T.: Large time asymptotics of growth models on space-like paths II PNG and parallel TASEP. Comm. Math. Phys. 283(2), 417–449 (2008)


  9. Borodin, A., Gorin, V.: Lectures on integrable probability. In: Probability and Statistical Physics in St. Petersburg. Proc. Sympos. Pure Math., vol. 91, pp. 155–214. Amer. Math. Soc., Providence, RI (2016)

  10. Brankov, J.G., Priezzhev, V.B., Shelest, R.V.: Generalized determinant solution of the discrete-time totally asymmetric exclusion process and zero-range process. Phys. Rev. E (3) 69(6), 066136, 9 (2004)


  11. Corwin, I., Ferrari, P.L., Péché, S.: Limit processes for TASEP with shocks and rarefaction fans. J. Stat. Phys. 140(2), 232–267 (2010)


  12. Derbyshev, A.E., Poghosyan, S.S., Povolotsky, A.M., Priezzhev, V.B.: The totally asymmetric exclusion process with generalized update. J. Stat. Mech. Theory Exp. 5, P05014, 13 (2012)


  13. Derrida, B., Lebowitz, J.L., Speer, E.R., Spohn, H.: Dynamics of an anchored Toom interface. J. Phys. A 24(20), 4805–4834 (1991)


  14. Dieker, A.B., Warren, J.: Determinantal transition kernels for some interacting particles on the line. Ann. Inst. Henri Poincaré Probab. Stat. 44(6), 1162–1172 (2008)


  15. Eynard, B., Mehta, M.L.: Matrices coupled in a chain. I. Eigenvalue correlations. J. Phys. A 31(19), 4449–4456 (1998)


  16. Imamura, T., Sasamoto, T.: Fluctuations of the one-dimensional polynuclear growth model with external sources. Nuclear Phys. B 699(3), 503–544 (2004)


  17. Johansson, K.: Shape fluctuations and random matrices. Comm. Math. Phys. 209(2), 437–476 (2000)


  18. Johansson, K.: Discrete polynuclear growth and determinantal processes. Comm. Math. Phys. 242(1–2), 277–329 (2003)


  19. Johansson, K.: Random matrices and determinantal processes. In: Mathematical Statistical Physics, pp. 1–55. Elsevier B. V., Amsterdam (2006)

  20. Johansson, K.: A multi-dimensional Markov chain and the Meixner ensemble. Ark. Mat. 48(1), 79–95 (2010)


  21. Koekoek, R., Lesky, P.A., Swarttouw, R.F.: Hypergeometric Orthogonal Polynomials and Their q-Analogues. Springer Monographs in Mathematics. Springer-Verlag, Berlin (2010). With a foreword by Tom H. Koornwinder

  22. Karlin, S., McGregor, J.: Coincidence probabilities. Pacific J. Math. 9, 1141–1164 (1959)


  23. Liechty, K., Nguyen, G.B., Remenik, D.: Airy process with wanderers, KPZ fluctuations, and a deformation of the Tracy-Widom GOE distribution. (2020). arXiv: 2009.07781

  24. Matetski, K., Quastel, J., Remenik, D.: The KPZ fixed point. To appear in Acta Math. (2021). arXiv: 1701.00018

  25. Nica, M., Quastel, J., Remenik, D.: One-sided reflected Brownian motions and the KPZ fixed point. Forum Math. Sigma 8, Paper No. e63, 16 (2020)

  26. Nica, M., Quastel, J., Remenik, D.: Solution of the Kolmogorov equation for TASEP. Ann. Probab. 48(5), 2344–2358 (2020)


  27. Petrov, L.: Asymptotics of random lozenge tilings via Gelfand-Tsetlin schemes. Probab. Theory Related Fields 160(3–4), 429–487 (2014)


  28. Povolotsky, A.M., Priezzhev, V.B.: Determinant solution for the totally asymmetric exclusion process with parallel update. J. Stat. Mech.: Theory Exp. 2006(07), P07002 (2006)


  29. Poghosyan, S.S., Povolotsky, A.M., Priezzhev, V.B.: Universal exit probabilities in the TASEP. J. Stat. Mech.: Theory Exp. 2012(08), P08013 (2012)


  30. Povolotsky, A.M., Priezzhev, V.B., Schütz, G.M.: Generalized Green functions and current correlations in the TASEP. J. Stat. Phys. 142(4), 754–791 (2011)


  31. Prähofer, M., Spohn, H.: Scale invariance of the PNG droplet and the Airy process. J. Stat. Phys. 108(5–6), 1071–1106 (2002)


  32. Rákos, A., Schütz, G.M.: Current distribution and random matrix ensembles for an integrable asymmetric fragmentation process. J. Statist. Phys. 118(3), 511–530 (2005)


  33. Sasamoto, T.: Spatial correlations of the 1D KPZ surface on a flat substrate. J. Phys. A: Math. Gen. 38(33), L549 (2005)


  34. Schütz, G.M.: Exact solution of the master equation for the asymmetric exclusion process. J. Statist. Phys. 88(1–2), 427–445 (1997)


  35. Tracy, C.A., Widom, H.: Level-spacing distributions and the Airy kernel. Comm. Math. Phys. 159(1), 151–174 (1994)


  36. Tracy, C.A., Widom, H.: On orthogonal and symplectic matrix ensembles. Comm. Math. Phys. 177(3), 727–754 (1996)


  37. Warren, J.: Dyson’s Brownian motions, intertwining and interlacing. Electron. J. Probab. 12(19), 573–590 (2007)


  38. Warren, J., Windridge, P.: Some examples of dynamics for Gelfand-Tsetlin patterns. Electron. J. Probab. 14(59), 1745–1769 (2009)



Acknowledgements

The authors would like to thank Alexei Borodin for discussions several years ago which motivated some of the results of this paper; Patrik Ferrari for pointing out to us the connection between sequential and parallel updates at the level of Markov chains on Gelfand-Tsetlin patterns [5]; and Jeremy Quastel for many valuable discussions related to this work. They also thank an anonymous referee for a very detailed and helpful report. KM was partially supported by NSF grant DMS-1953859. DR was supported by Centro de Modelamiento Matemático (CMM) Basal Funds FB210005 from ANID-Chile, by Fondecyt Grant 1201914, and by Programa Iniciativa Científica Milenio grant number NC120062 through Nucleus Millennium Stochastic Models of Complex and Disordered Systems.

Author information


Corresponding author

Correspondence to Konstantin Matetski.


Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Convolution of determinantal functions

In this appendix we prove results which allow one to compute convolutions of determinantal functions of the type (4.9).

Fix \(N \in {\mathbb {N}}\) and \(\vec {v} = (v_i)_{i \in \llbracket N \rrbracket }\) such that \(v_i > 0\) for each i. For each \(i, j \in \llbracket N \rrbracket \) we consider a function \(L_{i, j}\!: {\mathbb {Z}}^2 \longrightarrow {\mathbb {R}}\) such that there are constants \(C>0\) and \(r > \mathop {\mathrm {max}}_i v_i\) so that \(|L_{i, j}(x,y)| \le C r^{x - y}\). Then, using the kernels defined in (4.3) and (4.5), for \(\vec {x}, \vec {y} \in \Omega _N\) we define a determinantal function

(A.1)

The sums involved in the compositions of kernels inside the determinant are all absolutely convergent by the same argument as the one provided below (4.7). The following result, which is a generalization of [20, Lemma 3.2], shows that, in a particular case, convolutions of such functions preserve their structure.

Proposition A.1

Consider two families of kernels \(R_{i}\) and \(S_{j}\) on \({\mathbb {Z}}^2\), for \(i,j\in \llbracket N \rrbracket \), and write \((R\cdot 1)_{i,j}=R_i\), \((1\cdot S)_{i,j}=S_j\) and \((R \cdot S)_{i, j} = R_{i}S_{j}\). If all these kernels satisfy the properties listed above, then for \(\vec {x}, \vec {y} \in \Omega _N\)

$$\begin{aligned} \sum _{\vec {z} \in \Omega _N} {\mathbf {F}}_{\!R\cdot 1} (\vec {y}, \vec {z}) {\mathbf {F}}_{1\cdot S} (\vec {z}, \vec {x}) = {\mathbf {F}}_{R \cdot S} (\vec {y}, \vec {x}). \end{aligned}$$

As a particular case (cf. (4.7)/(4.8)) we get the following:

Corollary A.2

Consider two Markov chains on \(\Omega _N\) with transition probabilities \(G^{(\ell )}_t(\vec {y},\vec {x})\) of the form (1.1), i.e., for \(\ell =1,2\), \(G^{(\ell )}_t(\vec {y},\vec {x})=\mathop {\mathrm {det}}[F^{(\ell )}_{i-j}(x_{N+1-i}-y_{N+1-j},t)]_{i,j\in \llbracket N \rrbracket }\), where \(F^{(\ell )}\) has the form (1.2) with \(\varphi =\varphi _\ell \) for some complex functions \(\varphi _1,\varphi _2\). Assume that these two functions satisfy Assumption 1.1 for a common choice of \(\rho ,{\bar{\rho }}\). Then for each \(t_1,t_2\ge 0\) and each \(\vec {x},\vec {y}\in \Omega _N\),

$$\begin{aligned} \sum _{\vec {z}\in \Omega _N}G^{(1)}_{t_1}(\vec {y},\vec {z})G^{(2)}_{t_2}(\vec {z},\vec {x})={{\bar{G}}}_{t_1,t_2}(\vec {y},\vec {x}) \end{aligned}$$

with \({{\bar{G}}}_{t_1,t_2}(\vec {y},\vec {x})\) again of the form (1.1) with the right hand side of (1.2) now defined using \(t=1\) and \(\varphi (w)=\varphi _1(w)^{t_1}\varphi _2(w)^{t_2}\).
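In particular, if \(\varphi _1=\varphi _2=\varphi \) then \(\varphi _1(w)^{t_1}\varphi _2(w)^{t_2}=\varphi (w)^{t_1+t_2}\), and since (1.2) depends on \(t\) only through the factor \(\varphi (w)^t\), Corollary A.2 recovers the Chapman-Kolmogorov (semigroup) property of a single chain:

$$\begin{aligned} \sum _{\vec {z}\in \Omega _N}G_{t_1}(\vec {y},\vec {z})\hspace{0.1em}G_{t_2}(\vec {z},\vec {x})=G_{t_1+t_2}(\vec {y},\vec {x}). \end{aligned}$$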

Before proving Proposition A.1 we need a version of the generalized Cauchy-Binet/Andréief identity.

Lemma A.3

For a measure space \((\Lambda , {\mathcal {B}}, \lambda )\), let \(\varphi _i, \psi _i : \Lambda \rightarrow {\mathbb {R}}\) be measurable functions such that \(\varphi _i \psi _j\) is integrable for any \(i, j \in \llbracket N \rrbracket \). Assume also that \(\Lambda \) is a totally ordered set, and define the Weyl chamber \(\Omega ^\Lambda _N = \{\vec {x} \in \Lambda ^N : x_1> x_2> \cdots > x_N\}\). Then

$$\begin{aligned}&\mathop {\mathrm {det}}\left[ \int _\Lambda \varphi _i(x) \psi _j(x) \mathrm {d}\lambda (x) \right] _{i, j \in \llbracket N \rrbracket }\nonumber \\&\quad = \int _{\Omega ^\Lambda _N} \mathop {\mathrm {det}}[\varphi _i(x_j)]_{i, j \in \llbracket N \rrbracket } \mathop {\mathrm {det}}[\psi _i(x_j)]_{i, j \in \llbracket N \rrbracket } \mathrm {d}\lambda ^N(\vec {x}). \end{aligned}$$
(A.2)

The identity is usually stated (see e.g. [19, Proposition 2.10]) with the integral on the right hand side taken over \(\Lambda ^N\) and with an additional factor of 1/N!; (A.2) follows from this by the antisymmetry of the determinant.
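For completeness we recall the standard derivation of that version: expanding the determinant on the left hand side and using Fubini's theorem,

$$\begin{aligned} \mathop {\mathrm {det}}\left[ \int _\Lambda \varphi _i(x) \psi _j(x) \mathrm {d}\lambda (x) \right] _{i, j \in \llbracket N \rrbracket }&= \sum _{\sigma \in S_N} \mathop {\mathrm {sgn}}(\sigma ) \prod _{i=1}^{N} \int _\Lambda \varphi _i(x)\psi _{\sigma (i)}(x) \mathrm {d}\lambda (x) \\&= \int _{\Lambda ^N} \prod _{i=1}^{N} \varphi _i(x_i) \mathop {\mathrm {det}}[\psi _j(x_i)]_{i, j \in \llbracket N \rrbracket } \mathrm {d}\lambda ^N(\vec {x}), \end{aligned}$$

and symmetrizing the last integrand over the orderings of \(x_1,\dotsc ,x_N\) produces the factor \(\frac{1}{N!}\mathop {\mathrm {det}}[\varphi _i(x_j)]\mathop {\mathrm {det}}[\psi _i(x_j)]\).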

Proof of Proposition A.1

Applying the Cauchy-Binet identity (A.2) we get

The key will be to prove that

(A.3)

In fact, using these two identities we may write

which, after another application of (A.2), equals \({\mathbf {F}}_{R \cdot S} (\vec {y}, \vec {x})\), as desired.

So we need to prove (A.3). To have a shorter notation we write . Then using the definitions in (4.5), the left hand side of (A.3) can be written as

(A.4)

We will use the summation by parts formula, which follows from (4.2),

(A.5)

Using multilinearity, the first determinant in (A.4) can be written as

where we wrote for the \(j^\mathrm{{th}}\) column of the matrix . Recalling that \(z_N\) is summed from \(-\infty \) to \(z_{N-1}\) and applying (A.5), (A.4) becomes

(A.6)

To see this we need to check that the last two terms in (A.5) do not contribute: for the first of the two terms this holds because for every \(z_N\) sufficiently small (this follows readily from the definition (4.4) and the residue theorem), while for the second one it holds because in the case \(z_N = z_{N-1}\), the matrix in the first determinant in (A.6) has two equal columns and hence the determinant vanishes. Applying the same operations for the variables \(z_{N-1}, \dotsc , z_2, z_1\), then for \(z_{N}, \dotsc , z_3, z_2\) and so on, (A.6) becomes

which is exactly (A.3). \(\square \)

The following two results extend Proposition A.1 to a setting where the matrices in the determinants have different sizes; we need this in order to handle the setting of Theorem 1.4. For \(2 \le k \le N\) using (4.3) we define and . Then for functions \(R_{i,j} \) as in the beginning of this section we define, for \(\vec {x}, \vec {y} \in \Omega _{N-1}\),

For a vector \(\vec {z}\) and a scalar \({{\tilde{y}}}\) we write \({{\tilde{y}}} \sqcup \vec {z}\) for the vector obtained from \(\vec {z}\) by adding \({{\tilde{y}}}\) as the first entry.

Proposition A.4

Consider kernels \((R_i)_{i\in \llbracket N-1 \rrbracket }\) and \((S_i)_{i\in \llbracket N \rrbracket }\) with properties as in Proposition A.1, such that \(R_i\) and commute. Then for \(\vec {x} \in \Omega _N\), \(\vec {y} \in \Omega _{N-1}\) and \({{\tilde{y}}} \in {\mathbb {Z}}\) one has

$$\begin{aligned} \sum _{\vec {z} \in \Omega _{N-1}} \bar{{\mathbf {F}}}_{\!R\cdot 1} (\vec {y}, \vec {z}) {\mathbf {F}}_{\!1\cdot S} ({{\tilde{y}}} \sqcup \vec {z}, \vec {x}) = {\mathbf {F}}_{\!U} ({{\tilde{y}}} \sqcup \vec {y}, \vec {x}), \end{aligned}$$
(A.7)

where \(U_{1, j} = S_{j}\) and \(U_{i, j} = R_{i-1}S_{j}\) for \(2 \le i \le N\) and \(j \in \llbracket N \rrbracket \).

Proof

Repeating the argument in the proof of (A.3), we can write the left hand side of (A.7) as

(A.8)

where \({{\tilde{z}}}_1 = {{\tilde{y}}}\) and \({{\tilde{z}}}_{i} = z_{i-1}\) for \(i = 2, \dotsc ,N\). The second determinant on the right hand side can be expanded as , and plugging this into (A.8) and then applying the Cauchy-Binet identity (A.2) we get

Since commutes with \(R_i\) and the other ’s commute, this is just the cofactor expansion of the right hand side of (A.7) along its first row. \(\square \)

The following two results can be proved similarly.

Proposition A.5

Given kernels \((R_i)_{i\in \llbracket N \rrbracket }\) and \((S_i)_{i\in \llbracket N-1 \rrbracket }\) with properties as in Proposition A.1, such that \(S_i\) and commute, for \(\vec {x} \in \Omega _{N-1}\), \(\vec {y} \in \Omega _{N}\) and \({{\tilde{y}}} \in {\mathbb {Z}}\) one has

$$\begin{aligned} \sum _{\vec {z} \in \Omega _{N-1}} {\mathbf {F}}_{\!R\cdot 1} (\vec {y}, \vec {z} \sqcup {{\tilde{y}}}) {\tilde{{\mathbf {F}}}}_{\!1\cdot S} (\vec {z}, \vec {x}) = {\mathbf {F}}_{\!V} (\vec {y}, \vec {x} \sqcup {{\tilde{y}}}), \end{aligned}$$

where \(V_{i, j} = R_{i}S_{j}\) and \(V_{i, N} = R_{i}\) for \(i \in \llbracket N \rrbracket \) and \(j\in \llbracket N-1 \rrbracket \).

Proposition A.6

Let R and S be as in Proposition A.5. Then for \(\vec {x} \in \Omega _{N-1}\), \(\vec {y} \in \Omega _{N}\) and \({{\tilde{y}}} \in {\mathbb {Z}}\) one has

$$\begin{aligned} \sum _{\vec {z} \in \Omega _{N-1}} {\mathbf {F}}_{\!R\cdot 1} (\vec {y}, {{\tilde{y}}} \sqcup \vec {z}) {\bar{{\mathbf {F}}}}_{\!1\cdot S} (\vec {z}, \vec {x}) = {\mathbf {F}}_{\!{{\bar{V}}}} (\vec {y}, {{\tilde{y}}} \sqcup \vec {x}), \end{aligned}$$

where \({{\bar{V}}}_{i, 1} = R_{i}\) and \({{\bar{V}}}_{i, j} = R_{i}S_{j-1}\) for \(i \in \llbracket N \rrbracket \) and \(2 \le j \le N\).

Appendix B: Proof of the biorthogonal characterization of the kernel

In this section we prove Theorem 4.3. Before doing so we sketch the proof and comment on the relation to previous work. The key step of the proof is to express the function (4.14) as a projection of a signed determinantal point process, which we do in Proposition B.4. The correlation kernel of this process can be obtained from the Eynard-Mehta theorem [15], which, due to the special form of the domain (a triangular array), can in our case be written in biorthogonal form; this is the content of Theorem B.5. The formula (4.17) then follows from a standard result sometimes referred to as the “gap probability” of a determinantal point process. A formula like (4.17) was first derived in [7, 33] for continuous time TASEP with equal starting and ending times. Our proof follows the generalization of this result to space-like paths derived in [3]. However, as we described after Theorem 4.3, there are some differences with the latter result.

Throughout the section we fix a space-like path \({\mathcal {S}}= \{(n_1, t_1), \dotsc , (n_m, t_m)\} \in {\mathbb {S}}_N\). Then we have \(n_1 \le n_2 \le \cdots \le n_m\) and \(t_1 \ge t_2 \ge \cdots \ge t_m\). It will be convenient to reverse the order of the elements by introducing \(\underline{n}_i = n_{m - i + 1}\) and \(\underline{t}_i = t_{m - i + 1}\), so that \(\underline{n}_1 \ge \underline{n}_2 \ge \cdots \ge \underline{n}_m\) and \(\underline{t}_1 \le \underline{t}_2 \le \cdots \le \underline{t}_m\). We will also write \(\underline{t}_0=0\). Correspondingly, for a vector \(\vec {x} = (x_1, \dotsc , x_m) \in {\mathbb {Z}}^m\), let \(\vec {\underline{x}} \) denote the reversed vector \((x_m, \dotsc , x_1)\). Then \(G_{\vec {T}, {\mathcal {S}}}\), defined in (4.14), can be rewritten as

$$\begin{aligned} G_{\vec {T}, {\mathcal {S}}} (\vec {y}, \vec {x}) = \sum _{\vec {x}(0) \in \Omega _{N}} \sum _{\begin{array}{c} \vec {x}(\underline{t}_i) \in \Omega _{\underline{n}_i}: \\ x_{\underline{n}_i}(\underline{t}_i) = \underline{x}_i, i \in \llbracket m \rrbracket \end{array}} G^{-}_{\vec {T}} (\vec {y}, \vec {x}(0)) \prod _{i=1}^{m} G_{\underline{t}_{i-1}, \underline{t}_{i}}(\vec {x}_{\le \underline{n}_i}(\underline{t}_{i-1}), \vec {x}(\underline{t}_i)).\nonumber \\ \end{aligned}$$
(B.1)

The key fact is that the function (B.1) can be written as a marginal of a signed determinantal measure on a larger space. To this end we define a triangular array of integer variables \({\mathbb {D}}_{n} = \bigl \{{\mathsf {x}}^\ell _{k} \in {\mathbb {Z}}: \ell \in \llbracket n \rrbracket ,\; k \in \llbracket \ell \rrbracket \bigr \}\), whose generic element we denote by \({\mathsf {X}}\). We will also use “virtual” variables \({\mathsf {x}}^{\ell - 1}_{\ell }\) which can be thought of as having fixed values \(\infty \). We define the Gelfand-Tsetlin cone of size \(n \in {\mathbb {N}}\) as

$$\begin{aligned} {{\mathbb {G}}}{{\mathbb {T}}}_{n} = \bigl \{{\mathsf {x}}^\ell _k \in {\mathbb {Z}}: \ell \in \llbracket n \rrbracket , k \in \llbracket \ell \rrbracket ,\; {\mathsf {x}}^{\ell +1}_k < {\mathsf {x}}^\ell _k \le {\mathsf {x}}^{\ell +1}_{k+1} \bigr \} \subset {\mathbb {D}}_{n}. \end{aligned}$$

As in Sect. 4 we parametrize variables by time points, \({\mathsf {x}}^\ell _{k}(t)\) (see also footnote 3). Then the respective arrays of time-dependent variables are \({\mathbb {D}}_{n}(t)\) and \({{\mathbb {G}}}{{\mathbb {T}}}_{n}(t)\), with a generic element \({\mathsf {X}}(t)\).

1.1 B.1. Determinantal measure on triangular arrays

We begin by stating some results about the function \(F_{k, \ell }\) defined in (4.8). It will actually be more convenient to work with the function

$$\begin{aligned} {\tilde{F}}_{k, \ell }(x_1, x_2; t) = F_{k, \ell }(x_1, x_2; t) v_{k}^{x_1} / v_{\ell }^{x_2}. \end{aligned}$$
(B.2)
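The conjugation in (B.2) simply rescales row \(k\) by \(v_k^{y_k}\) and column \(\ell \) by \(v_\ell ^{-x_\ell }\) in the determinants of interest, so working with \({\tilde{F}}_{k, \ell }\) instead of \(F_{k, \ell }\) costs only an explicit multiplicative prefactor:

$$\begin{aligned} \mathop {\mathrm {det}}\bigl [{\tilde{F}}_{k, \ell }(y_k, x_\ell ; t)\bigr ]_{k,\ell \in \llbracket n \rrbracket } = \left( \prod _{j=1}^{n} v_j^{y_j - x_j} \right) \mathop {\mathrm {det}}\bigl [F_{k, \ell }(y_k, x_\ell ; t)\bigr ]_{k,\ell \in \llbracket n \rrbracket }. \end{aligned}$$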

Note that from (4.8) we get \({\tilde{F}}_{k, \ell }(x_1, x_2; t) = {\tilde{F}}_{k, \ell }(x_2 - x_1; t){:}{=}{\tilde{F}}_{k, \ell }(0, x_2-x_1; t)\). Define also

$$\begin{aligned} \tilde{\phi }_\ell (x_1, x_2) = v_\ell ^{x_2 - x_1} {\mathbf {1}}_{x_1 \le x_2}, \qquad \phi _\ell (x_1, x_2) = v_\ell ^{x_2 - x_1} {\mathbf {1}}_{x_1 > x_2}. \end{aligned}$$
(B.3)

Then we have the following recurrence relations for \({\tilde{F}}_{k, \ell }\), which follow directly from (B.2) and (4.8):

$$\begin{aligned} {\tilde{F}}_{k, \ell -1}(x; t) = \tilde{\phi }_{\ell } * {\tilde{F}}_{k, \ell }(x; t), \qquad {\tilde{F}}_{k + 1, N}(x; t) = \phi _{k+1} * {\tilde{F}}_{k, N}(x; t), \end{aligned}$$
(B.4)

with \(\phi *F(x;t)=\sum _{y \in {\mathbb {Z}}}\phi (x,y)F(y;t)\). The three results that follow will be useful later on.
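Unfolding the definitions (B.3), the two convolutions appearing in (B.4) are one-sided geometric sums:

$$\begin{aligned} \tilde{\phi }_\ell * F(x;t) = \sum _{y \ge x} v_\ell ^{y - x} F(y;t), \qquad \phi _\ell * F(x;t) = \sum _{y < x} v_\ell ^{y - x} F(y;t), \end{aligned}$$

which converge absolutely provided \(F(\,\cdot \,;t)\) decays suitably in both directions, as the functions used here do (cf. the bound assumed at the beginning of Appendix A).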

Lemma B.1

For \({\mathsf {X}}\in {{\mathbb {G}}}{{\mathbb {T}}}_{n}\) one has

$$\begin{aligned} \prod _{\ell = 2}^{n} \prod _{k = 1}^{\ell - 1} \tilde{\phi }_\ell ({\mathsf {x}}_k^{\ell - 1}, {\mathsf {x}}_{k+1}^{\ell }) = \prod _{j = 1}^{n} v_{j}^{-{\mathsf {x}}^j_1} \mathop {\mathrm {det}}\bigl [\phi _j({\mathsf {x}}^{j-1}_k, {\mathsf {x}}^{j}_\ell )\bigr ]_{k,\ell \in \llbracket j \rrbracket }, \end{aligned}$$
(B.5)

where the functions \({\tilde{\phi }}_\ell \) and \(\phi _\ell \) are defined in (B.3), and where \({\mathsf {x}}^{\ell -1}_\ell \) are “virtual” variables, for which we postulate \(\phi _\ell ({\mathsf {x}}^{\ell -1}_\ell ,y) = v_\ell ^{y}\). Moreover, if \(\vec {x} \in \Omega _{n}\) and \({\mathsf {X}}\in {\mathbb {D}}_{n}\) is such that \({\mathsf {x}}_{1}^{\ell } = x_\ell \) for \(\ell \in \llbracket n \rrbracket \), then the right hand side of (B.5) is non-zero only if \({\mathsf {X}}\in {{\mathbb {G}}}{{\mathbb {T}}}_{n}\).

Proof

The case \(n = 2\) is easy to check, and both statements then follow by induction on \(n \ge 2\). \(\square \)
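For instance, in the base case \(n = 2\) the left hand side of (B.5) is the single factor \(\tilde{\phi }_2({\mathsf {x}}^1_1, {\mathsf {x}}^2_2)\). On the right hand side, the \(j=1\) factor is \(v_1^{-{\mathsf {x}}^1_1}\phi _1({\mathsf {x}}^{0}_1, {\mathsf {x}}^1_1) = 1\) by the convention for the virtual variable, while the \(j=2\) factor equals

$$\begin{aligned} v_2^{-{\mathsf {x}}^2_1} \mathop {\mathrm {det}}\begin{bmatrix} \phi _2({\mathsf {x}}^1_1, {\mathsf {x}}^2_1) &{} \phi _2({\mathsf {x}}^1_1, {\mathsf {x}}^2_2) \\ v_2^{{\mathsf {x}}^2_1} &{} v_2^{{\mathsf {x}}^2_2} \end{bmatrix} = v_2^{{\mathsf {x}}^2_2 - {\mathsf {x}}^1_1} \bigl ({\mathbf {1}}_{{\mathsf {x}}^1_1 > {\mathsf {x}}^2_1} - {\mathbf {1}}_{{\mathsf {x}}^1_1 > {\mathsf {x}}^2_2}\bigr ), \end{aligned}$$

which, under the interlacing \({\mathsf {x}}^2_1 < {\mathsf {x}}^1_1 \le {\mathsf {x}}^2_2\) appearing in (B.13), equals \(v_2^{{\mathsf {x}}^2_2 - {\mathsf {x}}^1_1} = \tilde{\phi }_2({\mathsf {x}}^1_1, {\mathsf {x}}^2_2)\), as claimed.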

Lemma B.2

For \(\vec {x}, \vec {y} \in \Omega _{n}\) and for arbitrary time points \(t_1, \dotsc , t_n \in {\mathbb {T}}\) we have

$$\begin{aligned}&\mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{k, \ell }(y_{k}, x_{\ell }; t_k)\bigr ]_{k,\ell \in \llbracket n \rrbracket } = (-1)^{\lfloor n / 2\rfloor }\!\!\! \sum _{\begin{array}{c} {\mathsf {X}}\in {{\mathbb {G}}}{{\mathbb {T}}}_{n}: \\ {\mathsf {x}}_{1}^{\ell } = x_\ell , \ell \in \llbracket n \rrbracket \end{array}} \left( \prod _{j = 1}^{n} v_{j}^{-{\mathsf {x}}^j_1} \mathop {\mathrm {det}}\bigl [\phi _j({\mathsf {x}}^{j-1}_k, {\mathsf {x}}^{j}_\ell )\bigr ]_{k,\ell \in \llbracket j \rrbracket } \right) \nonumber \\&\quad \times \mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{k, n}(y_{k}, {\mathsf {x}}^{n}_\ell ; t_k)\bigr ]_{k,\ell \in \llbracket n \rrbracket }. \end{aligned}$$
(B.6)

Proof

Changing the index \(\ell \longmapsto n - \ell + 1\) and applying the first identity in (B.4) multiple times, we get that the left hand side of (B.6) equals

$$\begin{aligned} (-1)^{\lfloor n / 2\rfloor } \mathop {\mathrm {det}}\bigl [\tilde{\phi }_{n - \ell + 2} * \tilde{\phi }_{n - \ell + 3} * \cdots * \tilde{\phi }_{n} * {{\tilde{F}}}_{k, n}(x_{n - \ell + 1} - y_{k}; t_k)\bigr ]_{k,\ell \in \llbracket n \rrbracket }.\nonumber \\ \end{aligned}$$
(B.7)

We write the convolution inside the determinant explicitly by introducing new variables \({\mathsf {x}}^{n - \ell + j}_j\) for \(2 \le j \le \ell \) such that \({\mathsf {x}}^{n - \ell + 1}_1 = x_{n - \ell + 1}\) for each \(\ell \in \llbracket n \rrbracket \):

$$\begin{aligned} \textstyle \sum _{{{\mathsf {x}}^{n - \ell + j}_j \in {\mathbb {Z}}, \, 2 \le j \le \ell }} \left( \prod _{j = 1}^{\ell - 1} \tilde{\phi }_{n - \ell + j + 1} \bigl ({\mathsf {x}}^{n - \ell + j}_j, {\mathsf {x}}^{n - \ell + j + 1}_{j+1}\bigr ) \right) {{\tilde{F}}}_{k, n}({\mathsf {x}}^{n}_\ell - y_{k}; t_k). \end{aligned}$$

Using the multilinearity of the determinant to take the summation outside of the determinant in (B.7) we get

$$\begin{aligned} \textstyle (-1)^{\lfloor n / 2\rfloor } \sum _{{{\mathsf {x}}^{\ell }_j \in {\mathbb {Z}}, \, 2 \le j \le \ell \le n}} \left( \prod _{\ell = 2}^{n} \prod _{j = 1}^{\ell -1} \tilde{\phi }_\ell ({\mathsf {x}}_j^{\ell - 1}, {\mathsf {x}}_{j+1}^{\ell }) \right) \mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{k, n}({\mathsf {x}}^{n}_\ell - y_{k}; t_k)\bigr ]_{k,\ell \in \llbracket n \rrbracket },\nonumber \\ \end{aligned}$$
(B.8)

where \({\mathsf {x}}^{\ell }_1 = x_{\ell }\) for \(\ell \in \llbracket n \rrbracket \). Applying Lemma B.1, expression (B.8) can then be written as (B.6). \(\square \)

Lemma B.3

For \(\vec {x}, \vec {y} \in \Omega _{n}\) and for arbitrary time points \(t_1, \dotsc , t_n \in {\mathbb {T}}\) we have

$$\begin{aligned} \mathop {\mathrm {det}}&\bigl [ {{\tilde{F}}}_{n, n}(y_k, x_\ell ; t_\ell ) \bigr ]_{k,\ell \in \llbracket n \rrbracket } \nonumber \\&\quad = (-1)^{\lfloor n / 2\rfloor }\!\!\! \sum _{\begin{array}{c} {\mathsf {X}}\in {{\mathbb {G}}}{{\mathbb {T}}}_{n}: \\ {\mathsf {x}}_k^n = y_k, k \in \llbracket n \rrbracket \end{array}} \left( \prod _{j = 1}^{n} v_j^{-{\mathsf {x}}^{j}_{1}} \mathop {\mathrm {det}}\bigl [\phi _j({\mathsf {x}}^{j-1}_k, {\mathsf {x}}^{j}_\ell )\bigr ]_{k,\ell \in \llbracket j \rrbracket }\right) \mathop {\mathrm {det}}\bigl [ {{\tilde{F}}}_{k, n}({\mathsf {x}}_1^{k}, x_\ell ; t_\ell ) \bigr ]_{k,\ell \in \llbracket n \rrbracket }.\nonumber \\ \end{aligned}$$
(B.9)

Proof

The proof is similar to that of Lemma B.2. We change the order of rows \(k \longmapsto n - k + 1\) and apply the second identity in (B.4) to write the left hand side of (B.9) as

$$\begin{aligned} (-1)^{\lfloor n / 2\rfloor } \mathop {\mathrm {det}}\bigl [\tilde{\phi }_{n} * \cdots * \tilde{\phi }_{k + 1} * {{\tilde{F}}}_{k, n}(x_{\ell } - y_{n - k + 1}; t_\ell )\bigr ]_{k,\ell \in \llbracket n \rrbracket }. \end{aligned}$$
(B.10)

Denoting \({\mathsf {x}}^n_{n - k + 1} = y_{n - k + 1}\) and introducing new variables \({\mathsf {x}}_{n - k - j + 2}^{n - j + 1}\) for \(2 \le j \le n - k\), the \((k, \ell )^\mathrm{{th}}\) entry of the matrix in (B.10) can be written as

$$\begin{aligned} \textstyle \sum _{{{\mathsf {x}}_{n - k - j + 2}^{n - j + 1} \in {\mathbb {Z}},\, 2 \le j \le n - k}} \left( \prod _{j = 1}^{n - k} \tilde{\phi }_{n - j + 1} ({\mathsf {x}}_{n - k - j + 1}^{n - j}, {\mathsf {x}}_{n - k - j + 2}^{n - j + 1}) \right) {{\tilde{F}}}_{k, n}(x_{\ell } - {\mathsf {x}}^k_{1}; t_\ell ). \end{aligned}$$

Multilinearity of the determinant then allows us to write (B.10) as

$$\begin{aligned} \textstyle (-1)^{\lfloor n / 2\rfloor } \sum _{{{\mathsf {X}}\in {\mathbb {D}}_{n}: \, {\mathsf {x}}^n_k = y_k, k \in \llbracket n \rrbracket }} \left( \prod _{\ell = 2}^{n} \prod _{k = 1}^{\ell - 1} \tilde{\phi }_\ell ({\mathsf {x}}_k^{\ell - 1}, {\mathsf {x}}_{k+1}^{\ell })\right) \mathop {\mathrm {det}}\bigl [ {{\tilde{F}}}_{k, n}({\mathsf {x}}_1^{k}, x_\ell ; t_\ell ) \bigr ]_{k,\ell \in \llbracket n \rrbracket }. \end{aligned}$$

Applying Lemma B.1, we can write this expression as (B.9). \(\square \)

We turn now to the main goal of this section, which is to write \(G_{\vec {T}, {\mathcal {S}}}\) as a marginal of a determinantal measure on triangular arrays. Fix a vector \(\vec {y} \in \Omega _N\); some of the functions below will depend on \(\vec {y}\) but we will not indicate it in our notation. For \(s, t \in {\mathbb {T}}\) and for \(k\le n\) in \({\mathbb {N}}\) we define the functions

$$\begin{aligned} \textstyle {\mathcal {T}}_{t, s}(x_1, x_2)&\textstyle = \frac{1}{2\pi \mathrm{i}}\oint _{\gamma _r}\mathrm {d}w\,\frac{\varphi (w)^{t - s}}{w^{x_1 - x_2 + 1}}, \end{aligned}$$
(B.11)
$$\begin{aligned} \textstyle \Psi ^{n}_{n - k} (x)&\textstyle = \frac{1}{2\pi \mathrm{i}}\oint _{\gamma _r}\mathrm {d}w\,\frac{\prod _{i = k+1}^{n} (v_{i} - w)}{w^{x - y_k +n - k + 1}} \varphi (w)^{-T_k}. \end{aligned}$$
(B.12)

Furthermore, for the space-like path \({\mathcal {S}}\) fixed above, we define the domain

$$\begin{aligned} {\mathbb {D}}_{{\mathcal {S}}}&= \bigl \{{\mathsf {x}}^{\underline{n}_0}_{\ell }(\underline{t}_0) \in {\mathbb {Z}}: \ell \in \llbracket \underline{n}_0 \rrbracket \bigr \} \nonumber \\&\qquad \cup \bigcup _{i \in \llbracket m \rrbracket } \Bigl \{{\mathsf {x}}^n_{\ell }(\underline{t}_i) \in {\mathbb {Z}}: \underline{n}_{i+1} \le n \le \underline{n}_i, \ell \in \llbracket n \rrbracket ~\mathrm { such that }~ {\mathsf {x}}^{n+1}_\ell (\underline{t}_i) < {\mathsf {x}}^n_\ell (\underline{t}_i) \le {\mathsf {x}}^{n+1}_{\ell +1}(\underline{t}_i)\Bigr \}, \end{aligned}$$
(B.13)

where \(\underline{t}_0 = 0\), \(\underline{n}_0 = N\) and \(\underline{n}_{m+1} = 0\). Then we define a signed measure \({\mathcal {W}}\) on \({\mathsf {X}}\in {\mathbb {D}}_{{\mathcal {S}}}\) through

$$\begin{aligned} {\mathcal {W}}({\mathsf {X}})&= \mathop {\mathrm {det}}\bigl [\Psi ^{\underline{n}_0}_{\underline{n}_0 - k} ({\mathsf {x}}^{\underline{n}_0}_\ell (\underline{t}_{0}))\bigr ]_{k, \ell \in \llbracket \underline{n}_0 \rrbracket } \prod _{j = \underline{n}_1+1}^{\underline{n}_0} \mathop {\mathrm {det}}\bigl [\phi _j ({\mathsf {x}}^{j-1}_k(\underline{t}_0), {\mathsf {x}}^{j}_\ell (\underline{t}_0))\bigr ]_{k,\ell \in \llbracket j \rrbracket }\nonumber \\&\qquad \times \prod _{i=1}^m \mathop {\mathrm {det}}\bigl [{\mathcal {T}}_{\underline{t}_{i}, \underline{t}_{i-1}} ({\mathsf {x}}_k^{\underline{n}_i}(\underline{t}_i), {\mathsf {x}}_\ell ^{\underline{n}_i}(\underline{t}_{i-1}))\bigr ]_{k, \ell \in \llbracket \underline{n}_i \rrbracket } \prod _{j = \underline{n}_{i+1} + 1}^{\underline{n}_i} \mathop {\mathrm {det}}\bigl [\phi _j ({\mathsf {x}}^{j-1}_k(\underline{t}_i), {\mathsf {x}}^{j}_\ell (\underline{t}_i))\bigr ]_{k,\ell \in \llbracket j \rrbracket }, \end{aligned}$$
(B.14)

where, in the case \(\underline{n}_{i+1} = \underline{n}_{i}\), the empty product \(\prod _{j = \underline{n}_{i+1} + 1}^{\underline{n}_i} a_j\) is by definition equal to 1. The following result gives a formula for \(G_{\vec {T}, {\mathcal {S}}}\) as a marginal of \({\mathcal {W}}\).

Proposition B.4

For any \(\vec {y} \in \Omega _N\) and \(\vec {x} \in \Omega _m\) the function (B.1) can be written as

$$\begin{aligned} G_{\vec {T}, {\mathcal {S}}}(\vec {y}, \vec {x}) = C \sum _{{{\mathsf {X}}\in {\mathbb {D}}_{{\mathcal {S}}}:\, {\mathsf {x}}_1^{\underline{n}_i}(\underline{t}_i) = x_i, i \in \llbracket m \rrbracket }}{\mathcal {W}}({\mathsf {X}}), \end{aligned}$$
(B.15)

where \(C = \bigl ( \prod _{j = 1}^{N} v_j^{ - y_j} \bigr ) \prod _{k = 1}^{\underline{n}_0} \varphi (v_k)^{T_k} \prod _{i = 1}^m \prod _{j = 1}^{\underline{n}_i} \varphi (v_j)^{\underline{t}_{i-1} - \underline{t}_{i} }\).

Proof

Using formulas (4.13) and (4.9) in (B.1), we can write

$$\begin{aligned}&G_{\vec {T}, {\mathcal {S}}} (\vec {y}, \vec {x}) := {{\tilde{C}}}_1 \sum _{\vec {{\mathsf {x}}}_1(\underline{t}_0) \in \Omega _{N}} \sum _{\vec {{\mathsf {x}}}_1(\underline{t}_i) \in \Omega _{\underline{n}_i}: \, {\mathsf {x}}_1^{\underline{n}_i}(\underline{t}_i) = \underline{x}_i, i \in \llbracket m \rrbracket } \mathop {\mathrm {det}}\bigl [F_{k, \ell }(y_{k}, {\mathsf {x}}_1^{\ell }(\underline{t}_0); -T_k)\bigr ]_{k, \ell \in \llbracket \underline{n}_0 \rrbracket } \nonumber \\&\quad \times \prod _{i=1}^{m} \mathop {\mathrm {det}}\bigl [F_{k, \ell }({\mathsf {x}}^k_1(\underline{t}_{i-1}), {\mathsf {x}}^{\ell }_1(\underline{t}_i); \underline{t}_{i} - \underline{t}_{i-1})\bigr ]_{k, \ell \in \llbracket \underline{n}_i \rrbracket }, \end{aligned}$$
(B.16)

where \({{\tilde{C}}}_1 = \prod _{k = 1}^{\underline{n}_0} \varphi (v_k)^{T_k} \prod _{i = 1}^m \prod _{j = 1}^{\underline{n}_i} \varphi (v_j)^{\underline{t}_{i-1} - \underline{t}_{i} }\). Now using (B.2) to replace \(F_{k, \ell }\) with \({{\tilde{F}}}_{k, \ell }\) and applying (B.6) to the determinant involving \(\vec {y}\) we get

$$\begin{aligned} \mathop {\mathrm {det}}\bigl [F_{k, \ell }(y_{k}, {\mathsf {x}}^{\ell }_1(\underline{t}_0); -T_k)\bigr ]_{k, \ell \in \llbracket \underline{n}_0 \rrbracket }&= C_0 \!\!\! \sum _{{{\mathsf {X}}\in {{\mathbb {G}}}{{\mathbb {T}}}_{\underline{n}_0}(\underline{t}_0), \, \mathrm {fixed } {\mathsf {x}}_{1}^{\ell }(\underline{t}_0)}}~\\&\quad \prod _{j = 1}^{\underline{n}_0} \mathop {\mathrm {det}}\bigl [\phi _j ({\mathsf {x}}^{j-1}_k(\underline{t}_0), {\mathsf {x}}^{j}_\ell (\underline{t}_0))\bigr ]_{k,\ell \in \llbracket j \rrbracket } \\&\quad \times \mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{k, \underline{n}_0}(y_k, {\mathsf {x}}^{\underline{n}_0}_\ell (\underline{t}_{0}); -T_k)\bigr ]_{k,\ell \in \llbracket \underline{n}_0 \rrbracket }, \end{aligned}$$

where \(C_0 = (-1)^{\lfloor \underline{n}_0 / 2\rfloor } \prod _{j = 1}^{\underline{n}_0} v_j^{ - y_j}\). Similarly, the \(i^\mathrm{{th}}\) factor in the second line of (B.16) equals

$$\begin{aligned}&C_i \sum _{{\mathsf {X}}\in {{\mathbb {G}}}{{\mathbb {T}}}_{\underline{n}_i}(\underline{t}_i),\,\mathrm {fixed }{\mathsf {x}}_{1}^{\ell }(\underline{t}_i)} \prod _{j = 1}^{\underline{n}_i}\mathop {\mathrm {det}}\bigl [\phi _j({\mathsf {x}}^{j-1}_k(\underline{t}_i), {\mathsf {x}}^{j}_\ell (\underline{t}_i))\bigr ]_{k,\ell \in \llbracket j \rrbracket }\\&\quad \times \mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{k, \underline{n}_i}({\mathsf {x}}^{k}_1(\underline{t}_{i-1}), {\mathsf {x}}^{\underline{n}_i}_\ell (\underline{t}_{i}); \underline{t}_{i} - \underline{t}_{i-1})\bigr ]_{k,\ell \in \llbracket \underline{n}_i \rrbracket }, \end{aligned}$$

where \(C_i = (-1)^{\lfloor \underline{n}_i / 2\rfloor } \prod _{j = 1}^{\underline{n}_i} v_j^{ - {\mathsf {x}}^j_1(\underline{t}_{i-1})}\). Substituting these expansions into (B.16), we obtain

$$\begin{aligned}&{{\tilde{C}}}_2 \sum _{{\mathsf {X}}} \left( \mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{k, \underline{n}_0}(y_k, {\mathsf {x}}^{\underline{n}_0}_\ell (\underline{t}_{0}); -T_k)\bigr ]_{k,\ell \in \llbracket \underline{n}_0 \rrbracket } \prod _{j = 1}^{\underline{n}_0} \mathop {\mathrm {det}}\bigl [\phi _j ({\mathsf {x}}^{j-1}_k(\underline{t}_0), {\mathsf {x}}^{j}_\ell (\underline{t}_0))\bigr ]_{k,\ell \in \llbracket j \rrbracket } \right) \nonumber \\&\quad \times \prod _{i = 1}^{m} \mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{k, \underline{n}_i}({\mathsf {x}}^{k}_1(\underline{t}_{i-1}), {\mathsf {x}}^{\underline{n}_i}_\ell (\underline{t}_{i}); \underline{t}_{i} - \underline{t}_{i-1})\bigr ]_{k,\ell \in \llbracket \underline{n}_i \rrbracket } \nonumber \\&\quad \prod _{j = 1}^{\underline{n}_i} \mathop {\mathrm {det}}\bigl [\phi _j({\mathsf {x}}^{j-1}_k(\underline{t}_i), {\mathsf {x}}^{j}_\ell (\underline{t}_i))\bigr ]_{k,\ell \in \llbracket j \rrbracket }, \end{aligned}$$
(B.17)

where \({{\tilde{C}}}_2 = {{\tilde{C}}}_1 \prod _{i = 0}^{m} C_i\) and where the sum runs over \({\mathsf {X}}\in \bigcup _{i = 0}^m {{\mathbb {G}}}{{\mathbb {T}}}_{\underline{n}_i}(\underline{t}_i)\) such that \({\mathsf {x}}_1^{\underline{n}_i}(\underline{t}_i) = \underline{x}_i\) for \(i \in \llbracket m \rrbracket \).

Our next aim is to reduce the sum in (B.17) to the domain \({\mathbb {D}}_{\mathcal {S}}\), defined in (B.13). To this end, for each \(i = 1, \dotsc , m\) we sum over the variables \({\mathsf {x}}_\ell ^{k}(\underline{t}_{i-1})\) for \(k \in \llbracket \underline{n}_i - 1 \rrbracket \) and \(\ell \in \llbracket k \rrbracket \), by applying Lemma B.3. Then the functions \({{\tilde{F}}}_{k, \underline{n}_i}({\mathsf {x}}^{k}_1(\underline{t}_{i-1}), {\mathsf {x}}^{\underline{n}_i}_\ell (\underline{t}_{i}); \underline{t}_{i} - \underline{t}_{i-1})\) get replaced by \({{\tilde{F}}}_{\underline{n}_i, \underline{n}_i}({\mathsf {x}}^{\underline{n}_i}_{k}(\underline{t}_{i-1}), {\mathsf {x}}^{\underline{n}_i}_\ell (\underline{t}_{i}); \underline{t}_{i} - \underline{t}_{i-1})\), the products \(\prod _{j = 1}^{\underline{n}_i}\) get replaced by the products \(\prod _{j = \underline{n}_{i+1} + 1}^{\underline{n}_i}\), and we obtain

$$\begin{aligned}&{{\tilde{C}}}_3 \sum _{{\mathsf {X}}} \left( \mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{k, \underline{n}_0}(y_k, {\mathsf {x}}^{\underline{n}_0}_\ell (\underline{t}_{0}); -T_k)\bigr ]_{k,\ell \in \llbracket \underline{n}_0 \rrbracket } \prod _{j = \underline{n}_1+1}^{\underline{n}_0} \mathop {\mathrm {det}}\bigl [\phi _j ({\mathsf {x}}^{j-1}_k(\underline{t}_0), {\mathsf {x}}^{j}_\ell (\underline{t}_0))\bigr ]_{k,\ell \in \llbracket j \rrbracket }\right) \nonumber \\&\quad \times \prod _{i = 1}^{m} \mathop {\mathrm {det}}\bigl [{{\tilde{F}}}_{\underline{n}_i, \underline{n}_i}({\mathsf {x}}^{\underline{n}_i}_{k}(\underline{t}_{i-1}), {\mathsf {x}}^{\underline{n}_i}_\ell (\underline{t}_{i}); \underline{t}_{i} - \underline{t}_{i-1})\bigr ]_{k,\ell \in \llbracket \underline{n}_i \rrbracket } \prod _{j = \underline{n}_{i+1} + 1}^{\underline{n}_i} \mathop {\mathrm {det}}\bigl [\phi _j({\mathsf {x}}^{j-1}_k(\underline{t}_i), {\mathsf {x}}^{j}_\ell (\underline{t}_i))\bigr ]_{k,\ell \in \llbracket j \rrbracket }, \end{aligned}$$
(B.18)

where \(\underline{n}_{m+1} = 0\), where the sum runs over \({\mathsf {X}}\in {\mathbb {D}}_{\mathcal {S}}\) such that \({\mathsf {x}}_1^{\underline{n}_i}(\underline{t}_i) = \underline{x}_i\) for \(i \in \llbracket m \rrbracket \), and where \({{\tilde{C}}}_3 = {{\tilde{C}}}_2 \prod _{i = 1}^m (-1)^{\lfloor \underline{n}_i / 2\rfloor } \prod _{j = 1}^{\underline{n}_i} v_j^{ {\mathsf {x}}^j_1(\underline{t}_{i-1})}\). In the case \(\underline{n}_{i+1} = \underline{n}_{i}\) the empty product \(\prod _{j = \underline{n}_{i+1} + 1}^{\underline{n}_i} a_j\) is, by definition, 1. Definitions (B.11), (B.12) and (B.2) yield

$$\begin{aligned} {\mathcal {T}}_{\underline{t}_{i}, \underline{t}_{i-1}} ({\mathsf {x}}_\ell ^{\underline{n}_i}(\underline{t}_i), {\mathsf {x}}_k^{\underline{n}_i}(\underline{t}_{i-1}))&= {{\tilde{F}}}_{\underline{n}_i, \underline{n}_i}({\mathsf {x}}^{\underline{n}_i}_{k}(\underline{t}_{i-1}), {\mathsf {x}}^{\underline{n}_i}_\ell (\underline{t}_{i}); \underline{t}_{i} - \underline{t}_{i-1}),\\ \Psi ^{\underline{n}_0}_{\underline{n}_0 - k} ({\mathsf {x}}^{\underline{n}_0}_\ell (\underline{t}_{0}))&= (-1)^{\underline{n}_0 - k} {{\tilde{F}}}_{k, \underline{n}_0}(y_k, {\mathsf {x}}^{\underline{n}_0}_\ell (\underline{t}_{0}); -T_k). \end{aligned}$$

Then (B.18) can be written as (B.15) with the constant multiplier \({{\tilde{C}}}_3 \prod _{k = 1}^{\underline{n}_0} (-1)^{\underline{n}_0 - k}\), and this multiplier equals exactly the constant C from the statement of the proposition. \(\square \)

B.2 Proof of the biorthogonalization formula

Our goal is to show how Theorem 4.3 can be deduced from Proposition B.4. In order to swap the products in the two lines of (B.14), for every \(n \in {\mathbb {N}}_0\) we define \(c(n) = \#\{0 \le i \le m : \underline{n}_i = n\}\) (note that \(0 \le c(n) \le m + 1\)). Furthermore, for each n such that \(c(n) \ne 0\) we introduce the time variables \(t_1^n< \cdots < t_{c(n)}^n\) such that the space-like path \({\mathcal {S}}\) contains the pairs \((n, t_1^n)\), \(\dotsc \), \((n, t_{c(n)}^n)\). Moreover, we let \(t_{0}^n = t_{c(n+1)}^{n+1}\), \(t_{0}^{N} = 0\) and \(t_0^0 = t_1\). Then, recalling that \(\underline{n}_0 = N\), (B.14) can be written as

$$\begin{aligned} {\mathcal {W}}({\mathsf {X}})&= \prod _{j = 1}^{N} \left( \mathop {\mathrm {det}}\bigl [\phi _j ({\mathsf {x}}^{j-1}_k(t_0^{j-1}), {\mathsf {x}}^{j}_\ell (t_{c(j)}^j))\bigr ]_{k,\ell \in \llbracket j \rrbracket } \prod _{i=1}^{c(j)} \mathop {\mathrm {det}}\bigl [{\mathcal {T}}_{t^{j}_{i}, t^{j}_{i-1}} ({\mathsf {x}}_k^{j}(t_i^j), {\mathsf {x}}_\ell ^{j}(t_{i-1}^j))\bigr ]_{k, \ell \in \llbracket j \rrbracket } \right) \nonumber \\&\quad \times \mathop {\mathrm {det}}\bigl [\Psi ^{N}_{N - k} ({\mathsf {x}}^{N}_\ell (t_0^N))\bigr ]_{k, \ell \in \llbracket N \rrbracket }. \end{aligned}$$
(B.19)

In order to proceed we need to introduce several functions, which depend on the values n and \(t^n_i\). As a consequence of (B.11) we have \({\mathcal {T}}_{t^{n}_{c(n)}, t^{n}_{0}} = {\mathcal {T}}_{t^{n}_{c(n)}, t^{n}_{c(n) - 1}} * \cdots * {\mathcal {T}}_{t^{n}_{1}, t^{n}_{0}}\), where \(A * B (x,y) = \sum _{z \in {\mathbb {Z}}} A(x,z) B(z, y)\) denotes the convolution of two kernels; for brevity we write \({\mathcal {T}}^{n} = {\mathcal {T}}_{t^{n}_{c(n)}, t^{n}_{0}}\). For two pairs \({\mathfrak {n}}_i = (n_i, t_{a_i}^{n_i})\) and \({\mathfrak {n}}_j = (n_j, t_{a_j}^{n_j})\) such that \({\mathfrak {n}}_i \prec {\mathfrak {n}}_j\) we define

$$\begin{aligned} \phi ^{({\mathfrak {n}}_i, {\mathfrak {n}}_j)} = {\mathcal {T}}_{t^{n_i}_{a_i}, t^{n_i}_{0}} * \phi _{n_i + 1} * {\mathcal {T}}^{n_i + 1} * \cdots * \phi _{n_j} * {\mathcal {T}}_{t^{n_j}_{c(n_j)} ,t^{n_j}_{a_j}}. \end{aligned}$$

Then using definitions (B.11) and (B.3) we can write explicitly

$$\begin{aligned} \phi ^{({\mathfrak {n}}_i, {\mathfrak {n}}_j)}(x_i, x_j) = \frac{1}{2\pi \mathrm{i}}\oint _{\gamma _{r}}\mathrm {d}w\,\frac{\varphi (w)^{t^{n_i}_{a_i} - t^{n_j}_{a_j}}}{w^{x_i-x_j - n_j + n_i + 1}} \prod _{k = n_i + 1}^{n_j} (v_k - w)^{-1}. \end{aligned}$$
(B.20)
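Since the constructions below are phrased entirely in terms of the convolution \(A * B\), it may help to keep in mind that this is just composition of kernels. The following minimal sketch (the kernels here are arbitrary finite-support examples, not the \(\phi _j\) or \({\mathcal {T}}\) of the text) illustrates the operation and the associativity used implicitly when forming iterated convolutions such as \(\phi ^{({\mathfrak {n}}_i, {\mathfrak {n}}_j)}\).

```python
def convolve(A, B):
    """Kernel composition (A * B)(x, y) = sum_z A(x, z) B(z, y), for kernels
    stored as dicts {(x, z): value} with finite support."""
    out = {}
    for (x, z1), a in A.items():
        for (z2, y), b in B.items():
            if z1 == z2:
                out[(x, y)] = out.get((x, y), 0.0) + a * b
    return out

# Toy kernels on Z (arbitrary values, finite support).
A = {(0, 0): 0.5, (0, 1): 0.5, (1, 1): 1.0}
B = {(0, 2): 1.0, (1, 2): 0.25, (1, 3): 0.75}
C = {(2, 4): 1.0, (3, 4): 1.0}

AB = convolve(A, B)
assert abs(AB[(0, 2)] - (0.5 * 1.0 + 0.5 * 0.25)) < 1e-12
# Associativity, used implicitly for iterated convolutions like T^n.
assert convolve(convolve(A, B), C) == convolve(A, convolve(B, C))
```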

Using (B.12), for \({\mathfrak {n}}= (n, t^n_a)\) such that \({\mathfrak {n}}\prec (N, 0)\) and for \(1 \le k \le N\), we define

$$\begin{aligned} \Psi ^{{\mathfrak {n}}}_{n - k} = \phi ^{({\mathfrak {n}}, (N, 0))} * \Psi ^{N}_{N - k}, \end{aligned}$$
(B.21)

which can be written explicitly as

$$\begin{aligned} \Psi ^{{\mathfrak {n}}}_{n - k}(x) = \frac{1}{2\pi \mathrm{i}}\oint _{\gamma _{r}}\mathrm {d}w\,\frac{\varphi (w)^{t^{n}_{a}}}{w^{x - y_k + n - k + 1}} \frac{\prod _{i = 1}^n (v_i - w)}{\prod _{i = 1}^k (v_i - w)} \varphi (w)^{-T_k}. \end{aligned}$$
(B.22)

Finally, we define a matrix \(M = (M_{k, \ell })_{k, \ell \in \llbracket N \rrbracket }\) with entries

$$\begin{aligned} M_{k, \ell } = (\phi _k * {\mathcal {T}}^{k} * \cdots * \phi _{N} * {\mathcal {T}}^{N} * \Psi ^{N}_{N - \ell }) ({\mathsf {x}}^{k-1}_k). \end{aligned}$$
(B.23)

We will use the following result, which is [3, Theorem 4.2].

Theorem B.5

Suppose that the matrix M is non-singular and upper triangular, and define \({\hat{{\mathcal {W}}}} = \mathop {\mathrm {det}}[M^{-1}] {\mathcal {W}}\). Then \(\sum _{{\mathsf {X}}\in {\mathbb {D}}_{{\mathcal {S}}}}{\hat{{\mathcal {W}}}}({\mathsf {X}})=1\). Furthermore, the measure \({\hat{{\mathcal {W}}}}\), interpreted as a (possibly signed) point process, is determinantal, with correlation kernel given, for any \({\mathfrak {n}}_i = (n_i, t^{n_i}_{a_i}), {\mathfrak {n}}_j = (n_j, t^{n_j}_{a_j}) \in {\mathcal {S}}\) and \(x_i, x_j \in {\mathbb {Z}}\), by

$$\begin{aligned} K ({\mathfrak {n}}_i, x_i; {\mathfrak {n}}_j, x_j) = - \phi ^{({\mathfrak {n}}_i, {\mathfrak {n}}_j)}(x_i, x_j) {\mathbf {1}}_{{\mathfrak {n}}_i \prec {\mathfrak {n}}_j} + \sum _{k = 1}^{n_j} \Psi ^{{\mathfrak {n}}_i}_{n_i - k}(x_i) \Phi ^{{\mathfrak {n}}_j}_{n_j - k}(x_j),\nonumber \\ \end{aligned}$$
(B.24)

where the functions \(\Phi ^{(n, t^n_a)}_{n-k}\) for all \(n \in \llbracket N \rrbracket \) and \(k \in \llbracket n \rrbracket \) are given by

$$\begin{aligned} \Phi ^{(n, t^n_a)}_{n-k}(x) = \sum _{\ell = 1}^n [M^{-1}]_{k, \ell } \bigl ( \phi _\ell * \phi ^{((\ell , t^\ell _{c(\ell )}), (n, t_{a}^{n}))} \bigr )({\mathsf {x}}_{\ell }^{\ell -1}, x). \end{aligned}$$
(B.25)

In particular, these functions are uniquely defined by the following two conditions:

  (1) for \(k, \ell \in \llbracket n \rrbracket \) the biorthogonalization relation \(\sum _{x \in {\mathbb {Z}}} \Psi ^{(n, t^{n}_{a})}_{n - k}(x) \Phi ^{(n, t^{n}_{a})}_{n - \ell }(x) = {\mathbf {1}}_{k =\ell }\) holds,

  (2) \(\{x \in {\mathbb {Z}}\longmapsto \Phi ^{(n, t^n_a)}_{n-k}(x) : k \in \llbracket n \rrbracket \}\) is a basis of the linear span of the functions

    $$\begin{aligned} \bigl \{ x \in {\mathbb {Z}}\longmapsto \phi _k * \phi ^{((k, t^k_{c(k)}), (n, t_{a}^{n}))}({\mathsf {x}}_{k}^{k-1}, x) : k \in \llbracket n \rrbracket \bigr \}. \end{aligned}$$
    (B.26)

Moreover, for any \({\mathfrak {n}}_i \prec {\mathfrak {n}}_j\) and for respective values of k one has the identity \(\phi ^{({\mathfrak {n}}_i, {\mathfrak {n}}_j)} * \Phi ^{{\mathfrak {n}}_j}_{k} = \Phi ^{{\mathfrak {n}}_i}_{k}\).
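Conditions (1) and (2) specify a finite biorthogonalization problem: find functions in a prescribed n-dimensional span that pair orthonormally with the given \(\Psi \)'s. As a toy illustration only (with placeholder functions on a truncated window rather than the contour-integral \(\Psi \)'s of the text, and solving by direct Gram-matrix inversion, which is not how the paper proceeds), one can do the following.

```python
# Toy biorthogonalization on a finite window. In the text the functions live
# on all of Z; here we use two arbitrary placeholder functions and a
# two-dimensional target span.
xs = list(range(-5, 6))
Psi = [[0.5 ** abs(x) for x in xs], [0.3 ** abs(x) for x in xs]]
V = [[1.2 ** x for x in xs], [0.7 ** x for x in xs]]  # basis of the target span

dot = lambda f, g: sum(a * b for a, b in zip(f, g))

# Gram matrix G[k][j] = <Psi_k, V_j>. Condition (1) asks for Phi_l in span(V)
# with <Psi_k, Phi_l> = delta_{kl}; the coefficients come from G^{-1}.
G = [[dot(Psi[k], V[j]) for j in range(2)] for k in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det], [-G[1][0] / det, G[0][0] / det]]
Phi = [[sum(Ginv[j][l] * V[j][i] for j in range(2)) for i in range(len(xs))]
       for l in range(2)]

for k in range(2):
    for l in range(2):
        assert abs(dot(Psi[k], Phi[l]) - (1.0 if k == l else 0.0)) < 1e-9
```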

Proof of Theorem 4.3

We start with the case of distinct values \(v_N> v_{N-1}> \cdots> v_1 > 0\), for which we apply Theorem B.5 to our measure \({\mathcal {W}}\) given by (B.19). Our first task is to show that the matrix M in (B.23) is non-singular and upper-triangular. Let us denote for brevity \({\mathfrak {n}}_k = (k, t^k_{c(k)})\). Then using (B.21), the entry (B.23) can be written as \(M_{k, \ell } = (\phi _k * \Psi ^{{\mathfrak {n}}_k}_{k - \ell }) ({\mathsf {x}}^{k-1}_k)\). Therefore (B.22) yields

$$\begin{aligned} M_{k, \ell } = \sum _{x \in {\mathbb {Z}}} v_k^x \,\frac{1}{2\pi \mathrm{i}}\oint _{\gamma _r}\mathrm {d}w\,\frac{\varphi (w)^{t^k_{c(k)}}}{w^{x - y_\ell + k - \ell + 1}} \frac{\prod _{i = 1}^k (v_i - w)}{\prod _{i = 1}^\ell (v_i - w)} \varphi (w)^{-T_\ell }. \end{aligned}$$
(B.27)

Since \(|w| < v_k\) for \(w\in \gamma _r\), the sum over \(x<0\) can be computed directly, using \(\sum _{x < 0} (v_k / w)^x = w / (v_k - w)\). For \(x \ge 0\) we may enlarge the contour to a circle of radius slightly larger than \(v_k\), because for \(k\ge \ell \) the integrand has no singularities at any of the \(v_i\)’s, while if \(k < \ell \) then the singularities occur only at the points \(v_{k+1}, \dotsc , v_\ell \), which are strictly larger than \(v_k\) by assumption; in this case we use \(\sum _{x \ge 0} (v_k / w)^x = - w / (v_k - w)\). Putting both sums together yields

$$\begin{aligned} M_{k, \ell } = - \frac{1}{2\pi \mathrm{i}}\oint _{\Gamma _{v_k}}\mathrm {d}w\,\frac{\varphi (w)^{t^k_{c(k)}}}{w^{- y_\ell + k - \ell }} \frac{\prod _{i = 1}^{k-1} (v_i - w)}{\prod _{i = 1}^\ell (v_i - w)} \varphi (w)^{-T_\ell }, \end{aligned}$$
(B.28)

where the contour \(\Gamma _{v_k}\) encloses only the singularity at \(v_k\). If \(\ell < k\) then \(M_{k, \ell } = 0\), so the matrix M is upper-triangular, with diagonal entries given by \(M_{k, k} = - \frac{1}{2\pi \mathrm{i}}\oint _{\Gamma _{v_k}}\mathrm {d}w\,\frac{\varphi (w)^{t^k_{c(k)}}}{w^{- y_k}} \frac{\varphi (w)^{-T_k}}{(v_k - w)} = \varphi (v_k)^{t^k_{c(k)}-T_k} v_k^{y_k}\). Then we can compute the determinant \(\mathop {\mathrm {det}}[M] = \prod _{k = 1}^N \varphi (v_k)^{t^k_{c(k)}-T_k} v_k^{y_k} \ne 0\). One can readily check that \(\mathop {\mathrm {det}}[M^{-1}]\) is exactly the constant C from Proposition B.4.
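The two geometric summations and the final residue evaluation in this computation are elementary and easy to verify numerically. The following sketch does so for illustrative parameter values, with \(\varphi (w) = q + pw\) chosen purely as an example of a symbol (an assumption for this sketch, not a quotation of the model's \(\varphi \)).

```python
import cmath

v = 2.0
w_in = 0.8 * cmath.exp(0.7j)   # a point on gamma_r, with |w| < v
w_out = 3.1 * cmath.exp(0.7j)  # a point on the enlarged contour, |w| > v

# sum_{x<0} (v/w)^x = w/(v - w), valid for |w| < v
s_neg = sum((w_in / v) ** x for x in range(1, 300))
assert abs(s_neg - w_in / (v - w_in)) < 1e-10

# sum_{x>=0} (v/w)^x = -w/(v - w), valid for |w| > v
s_pos = sum((v / w_out) ** x for x in range(300))
assert abs(s_pos + w_out / (v - w_out)) < 1e-10

# Diagonal entry: -(1/2 pi i) oint_{Gamma_v} phi(w)^s w^y dw/(v - w) = phi(v)^s v^y,
# checked on a discretized small circle around the pole at w = v; phi is an
# illustrative choice of symbol only.
p, q = 0.4, 0.6
phi = lambda u: q + p * u
s, y, r, M = 3, 2, 0.1, 2000
total = 0.0
for k in range(M):
    theta = 2 * cmath.pi * k / M
    w = v + r * cmath.exp(1j * theta)
    dw = 1j * r * cmath.exp(1j * theta) * (2 * cmath.pi / M)
    total += phi(w) ** s * w ** y / (v - w) * dw
val = -total / (2j * cmath.pi)
assert abs(val - phi(v) ** s * v ** y) < 1e-8
```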

Hence, defining the normalized measure \({\hat{{\mathcal {W}}}} = \mathop {\mathrm {det}}[M^{-1}] {\mathcal {W}}\), expression (B.15) can be written as

$$\begin{aligned} G_{\vec {T}, {\mathcal {S}}}(\vec {y}, \vec {x}) = \sum _{{{\mathsf {X}}\in {\mathbb {D}}_{{\mathcal {S}}}: \, {\mathsf {x}}_1^{\underline{n}_i}(\underline{t}_i) = x_i, i \in \llbracket m \rrbracket }} {\hat{{\mathcal {W}}}}({\mathsf {X}}). \end{aligned}$$
(B.29)

Theorem B.5 implies that the measure \({\hat{{\mathcal {W}}}}\) is determinantal with correlation kernel given in (B.24). Furthermore, using (B.20) for \({\mathfrak {n}}= (n, t) \in {\mathcal {S}}\) such that \({\mathfrak {n}}_k \prec {\mathfrak {n}}\) we can compute

$$\begin{aligned} \textstyle \phi _k * \phi ^{({\mathfrak {n}}_k, {\mathfrak {n}})}({\mathsf {x}}_{k}^{k-1}, x) = \sum _{y \in {\mathbb {Z}}} v_k^y \, \frac{1}{2\pi \mathrm{i}}\oint _{\gamma _{r}}\mathrm {d}w\,\frac{\varphi (w)^{t^k_{c(k)} - t}}{w^{y - x - n + k + 1}} \prod _{i = k + 1}^{n} (v_i - w)^{-1}. \end{aligned}$$

Computing this sum in the same way as we did for (B.27), we obtain

$$\begin{aligned} \textstyle \phi _k * \phi ^{({\mathfrak {n}}_k, {\mathfrak {n}})}({\mathsf {x}}_{k}^{k-1}, x)&= - \frac{1}{2\pi \mathrm{i}}\oint _{\Gamma _{v_k}}\!\mathrm {d}w\,\frac{\varphi (w)^{t^k_{c(k)} - t}}{w^{ - x - n + k}} \prod _{i = k}^{n} (v_i - w)^{-1} \nonumber \\&= \frac{\varphi (v_k)^{t^k_{c(k)} - t}}{v_k^{ - x - n + k}} \prod _{i = k+1}^{n} (v_i - v_k)^{-1}. \end{aligned}$$
(B.30)

Hence, the linear span of the set (B.26) coincides with the span of \(\{ x \in {\mathbb {Z}}\longmapsto v_k^x : k \in \llbracket n \rrbracket \}\), which is exactly the space defined in (4.16), and thus the correlation kernel (B.24) coincides with (4.18). We deduce the identity (4.17) as a standard consequence of (B.29), which expresses \(G_{\vec {T}, {\mathcal {S}}}\) as a marginal of the distribution of a determinantal process (see e.g. [19, Proposition 2.9] for a version of this in the one-point case).

The other values of \(v_i\) can be treated by analytic continuation. More precisely, consider values \(0< v_1, \dotsc , v_N < v_+\), for some fixed \(v_+ > 0\). We will first show that the left hand side of (4.17) is analytic with respect to the \(v_i\)’s in this domain and then show that the right hand side can be analytically extended to this domain, which will give the claim (4.17) for any choice of the parameters \(v_i\).

From (B.12) we conclude that the functions \( \Psi ^{N}_{N - k} (x)\) are analytic with respect to the \(v_i\)’s and satisfy \(| \Psi ^{N}_{N - k} (x) | \le C a^{|x|}\), for any \(a > 0\) and for any \(v_i\) in the compact set as above. Hence, (B.3) and (B.11) imply that the measure \({\mathcal {W}}({\mathsf {X}})\) in (B.19) can be bounded by a power series in the values \(v_i\). The same can be shown for the left hand side of (4.17), because it is a sum of \({\mathcal {W}}({\mathsf {X}})\) over a suitable domain for \({\mathsf {X}}\) (see (4.15) and (B.15)).

In order to show that the right hand side of (4.17) is analytic in the \(v_i\)’s, we will show that the correlation kernel is so. In view of (B.28) and (B.30), \(\Phi ^{(n, t^n_a)}_{n-k}(x)\) in (B.25) is analytic in \(v_+> v_N> \cdots> v_1 > 0\). Analyticity of the other functions implies that the kernel (B.24) is analytic. Hence, we can conclude that the right hand side of (4.17) can be extended analytically to all \(0< v_1, \dotsc , v_N < v_+\).

Since \(v_+\) was chosen arbitrarily, identity (4.17) holds for any strictly positive values \(v_i\). Finally, one can readily check that for general values \(v_i\) the integral in (B.30) equals \(v_k^x P_k(x)\), where \(P_k(x)\) is a polynomial in x of degree \(\#\{i > k : v_i = v_k\}\). Hence, the span of the functions (B.26) equals the space defined in (4.16). \(\square \)
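The claim about the degree of \(P_k\) can be checked in the simplest degenerate case: for \(n = k + 1\) and \(v_{k+1} = v_k = v\), with the \(\varphi \)-power set to 1 for simplicity, the integral in (B.30) has a double pole at v and evaluates to \(-(x+1)v^x\), i.e. \(v_k^x\) times a polynomial of degree \(\#\{i > k : v_i = v_k\} = 1\). A numeric sketch of this residue computation (a discretized contour, illustrative values only):

```python
import cmath

def contour_B30_equal_params(x, v, radius=0.1, M=4000):
    """-(1/2 pi i) oint w^{x+1} / (v - w)^2 dw around w = v: the integral in
    (B.30) with n = k+1, v_{k+1} = v_k = v and the phi-power set to 1."""
    total = 0.0
    for j in range(M):
        theta = 2 * cmath.pi * j / M
        w = v + radius * cmath.exp(1j * theta)
        dw = 1j * radius * cmath.exp(1j * theta) * (2 * cmath.pi / M)
        total += w ** (x + 1) / (v - w) ** 2 * dw
    return -total / (2j * cmath.pi)

# The double pole produces v^x times a degree-1 polynomial in x: -(x+1) v^x.
v = 1.3
for x in (0, 1, 2, 5):
    assert abs(contour_B30_equal_params(x, v) - (-(x + 1) * v ** x)) < 1e-6
```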

Appendix C: Proof of Assumption 1.3 for right Bernoulli jumps

C.1 Proof of Assumption 1.3(a)

Throughout this section we use \(\{\vec {e}_{i}\}_{i \in \llbracket N \rrbracket }\) to denote the vectors of the canonical basis of \({\mathbb {R}}^N\). Let \(G^{\mathrm {r-B}}_{0,t}\) be the right hand side of (2.2). We begin by deriving some of its algebraic properties. Although we have used this function only on the Weyl chamber \(\Omega _N\), it is defined on all of \({\mathbb {Z}}^N\). The following result can be obtained by direct computations, using properties of the function \(F^{\mathrm {r-B}}_n(x,t)\) defined in (2.3).
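As background for the computations below, a single free particle under right Bernoulli jumps is a lazy random walk which steps right with probability p at each discrete time, so its t-step transition probabilities are binomial. A quick simulation sketch (assuming only this standard one-particle dynamics; parameters are illustrative):

```python
import random
from math import comb

random.seed(1)
p, q, t, y = 0.3, 0.7, 10, 5   # jump probability, q = 1 - p, horizon, start
n_samples = 200_000

# Simulate a single free right-Bernoulli particle for t discrete time steps.
counts = {}
for _ in range(n_samples):
    x = y
    for _ in range(t):
        if random.random() < p:  # attempt a right jump with probability p
            x += 1
    counts[x] = counts.get(x, 0) + 1

# Compare with the binomial law P(X_t = y + j) = C(t, j) p^j q^{t-j}.
for j in range(t + 1):
    exact = comb(t, j) * p ** j * q ** (t - j)
    empirical = counts.get(y + j, 0) / n_samples
    assert abs(empirical - exact) < 0.01
```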

Lemma C.1

  (i) Let \(\vec {x}, \vec {y} \in \Omega _N\) and \(k \in \llbracket N \rrbracket \) be such that \(\vec {y} = (y_1, y_1 - 1, \dotsc , y_1 - k, y_{k+1}, \dotsc , y_N)\), \(\vec {x} = (y_1, y_1 - 1, \dotsc , y_1 - k, x_{k+1}, \dotsc , x_N)\) and \(y_{k+1} < y_1 - k - 1\). Then

    $$\begin{aligned} \textstyle q^{-1} G^{\mathrm {r-B}}_{0,1} (\vec {y}, \vec {x}) = \sum _{\begin{array}{c} \vec {z} \in \Omega _k : \\ z_i \ge x_i, i \in \llbracket k \rrbracket \end{array}} G^{\mathrm {r-B}}_{0,1} \bigl (\vec {y}, (z_1, \dotsc , z_k, x_{k+1}, \dotsc , x_N)\bigr ). \end{aligned}$$
    (C.1)

  (ii) If \(y_k = y_{k+1}\) for some \(k \in \llbracket N-1 \rrbracket \), then \(G^{\mathrm {r-B}}_{0,t}(\vec {y}, \vec {x}) = G^{\mathrm {r-B}}_{0,t}(\vec {y} - \vec {e}_{k+1}, \vec {x})\).

Fix \(\vec {T}= (T_i)_{i \in \llbracket N \rrbracket }\) with \(T_i = -\kappa (i-1)\) and \(\vec {y} \in \Omega _N(\kappa )\), where \(\kappa \ge 1\) and \(\Omega _N(\kappa )\) is defined in (2.4). Then for \(m \in \llbracket N \rrbracket \), \(\vec {x} \in \Omega _{N - m + 1}\) and \(T_{m} \le t < T_{m - 1}\) (with the convention \(T_0 = \infty \)) we define

$$\begin{aligned} G^{[m, N]}_{\vec {T},t} (\vec {y}, \vec {x}) = {\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{t}(m + i - 1) = x_i,\,i\in \llbracket N - m + 1 \rrbracket \big | X^{\mathrm {r-B}}_{T_i}(i) = y_i, i \in \llbracket N \rrbracket \bigr ), \end{aligned}$$

which is the probability that the particles \(X^{\mathrm {r-B}}(m), \dotsc , X^{\mathrm {r-B}}(N)\) are at the locations \(\vec {x}\) at time t, given that the system starts at locations \(y_1, \dotsc , y_N\) at the respective times \(T_1, \dotsc , T_N\). Let also, for \(s \ge 0\) and \(\vec {a}, \vec {b} \in \Omega _{N - m + 1}\),

$$\begin{aligned} G^{[m, N]}_{0,s} (\vec {a}, \vec {b}) = \mathop {\mathrm {det}}\bigl [F^{\mathrm {r-B}}_{i - j}(b_{N - m + 2 - i} - a_{N - m + 2 - j}, s)\bigr ]_{i, j \in \llbracket N - m + 1 \rrbracket }, \end{aligned}$$
(C.2)

which is the analog of (2.2) for the system which only has the particles \(X^{\mathrm {r-B}}(m), \dotsc , X^{\mathrm {r-B}}(N)\).

Lemma C.2

In the above setting, and with the notation \(y\sqcup \vec {z}\) from Appendix A,

$$\begin{aligned}&\textstyle G^{[m, N]}_{\vec {T},t} (\vec {y}, \vec {x}) = \sum \limits _{\begin{array}{c} \vec {z}(T_i) \in \Omega _{N - i} \\ m \le i < N \end{array}} \left( \prod _{i = m + 1}^{N} G^{[i, N]}_{T_{i}, T_{i - 1}} (y_{i} \sqcup \vec {z}(T_i), \vec {z}(T_{i-1}))\right) \nonumber \\&\quad G^{[m, N]}_{T_{m}, t} (y_{m} \sqcup \vec {z}(T_m), \vec {x}), \end{aligned}$$
(C.3)

where \(\vec {z}(T_N)\) is an empty vector.

Up to multipliers, the Markov property yields the formula (C.3) with the additional condition in the sum that all entries of \(\vec {z} (T_i)\) be strictly smaller than \(y_i\). However, this restriction precludes the application of Proposition A.4 to the convolutions of determinants in (C.3). We will show that the properties of \(G^{\mathrm {r-B}}_{0,t}\) provided in Lemma C.1 imply that this restriction can be omitted.

Proof

We will prove (C.3) by induction over \(m = N, N - 1, \dotsc , 1\). For the base case, \(m = N\), note that on the time interval \(T_{N} \le t < T_{N - 1}\) only the \(N^\mathrm{{th}}\) particle moves. Therefore (C.2), which for \(m = N\) reduces to a \(1 \times 1\) determinant, together with \(\vec {x} = (x_1)\) yields \(G^{[N, N]}_{\vec {T},t} (\vec {y}, \vec {x}) = G^{[N, N]}_{T_{N}, t} (y_N, x_1)\), which is (C.3).

Assuming now that (C.3) holds for some \(2 \le m \le N\), we will prove it for \(m-1\). For \(T_{m-1} \le t < T_{m - 2}\) and \(\vec {x} \in \Omega _{N - m+2}\) the Markov property yields

$$\begin{aligned} \textstyle G^{[m - 1, N]}_{\vec {T},t} (\vec {y}, \vec {x})&\textstyle = \sum _{{\vec {u}, \vec {a} \in \Omega _{N - m + 1}\!:\, a_1 < y_{m-1}}} {\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{T_{m-1} - 1} = \vec {u} \big | X^{\mathrm {r-B}}_{T_i}(i) = y_i, i \in \llbracket N \rrbracket \bigr )\nonumber \\&\textstyle \quad \times {\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{T_{m-1}} = y_{m-1} \sqcup \vec {a} \big | X^{\mathrm {r-B}}_{T_{m-1} - 1} = \vec {u}\bigr ) \nonumber \\&\textstyle \quad \times {\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{t}(m + i - 2) = x_i,\,i\in \llbracket N - m + 2 \rrbracket \big | X^{\mathrm {r-B}}_{T_{m-1}} = y_{m-1} \sqcup \vec {a}\bigr ). \end{aligned}$$
(C.4)

The induction hypothesis yields \({\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{T_{m-1} - 1} = \vec {u} \big | X^{\mathrm {r-B}}_{T_i}(i) = y_i, i \in \llbracket N \rrbracket \bigr ) = G^{[m, N]}_{\vec {T},T_{m-1} - 1} (\vec {y}, \vec {u})\), where the latter is given by the right hand side of (C.3). Moreover, (C.2) gives

$$\begin{aligned} \textstyle {\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{t}(m + i - 2) = x_i,\,i\in \llbracket N - m + 2 \rrbracket \big | X^{\mathrm {r-B}}_{T_{m-1}} = y_{m-1} \sqcup \vec {a}\bigr ) = G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {a}, \vec {x}). \end{aligned}$$

Now, we will write explicitly the transition probability from \(\vec {u}\) to \(\vec {a}\) in (C.4). Our assumption on the initial state \(\vec {y} \in \Omega _N(\kappa )\) guarantees that \(X^{\mathrm {r-B}}_{T_{m-1} - 1}(m) < X^{\mathrm {r-B}}_{T_{m-1}}(m-1)\). However, it can happen that \(X^{\mathrm {r-B}}_{T_{m-1} - 1}(m) = X^{\mathrm {r-B}}_{T_{m-1}}(m-1) - 1\), in which case on the next step the \(m^\mathrm{{th}}\) particle can try to jump on top of the \((m-1)^\mathrm{{st}}\), which must be prevented. We now consider these cases in detail.

If \(u_{1} < y_{m-1} - 1\), then we have \({\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{T_{m-1}} = y_{m-1} \sqcup \vec {a} \big | X^{\mathrm {r-B}}_{T_{m-1} - 1} = \vec {u}\bigr ) = G^{[m, N]}_{0,1} (\vec {u}, \vec {a})\), and this probability is non-zero for \(a_1 < y_{m-1}\) and zero otherwise. Then we can write

$$\begin{aligned} \textstyle \sum _{\begin{array}{c} \vec {a} \in \Omega _{N - m + 1} : \\ a_1 < y_{m-1} \end{array}}&\textstyle {\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{T_{m-1}} = y_{m-1} \sqcup \vec {a} \big | X^{\mathrm {r-B}}_{T_{m-1} - 1} = \vec {u}\bigr ) G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {a}, \vec {x}) \nonumber \\&\textstyle = \sum _{\vec {a} \in \Omega _{N -m + 1}} G^{[m, N]}_{0,1} (\vec {u}, \vec {a}) G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {a}, \vec {x}). \end{aligned}$$
(C.5)

If \(u_{1} = y_{m-1} - 1\), let \(1 \le k \le N - m + 1\) be such that \(\vec {u} = (y_{m-1} - 1, \dotsc , y_{m-1} - k, u_{k+1}, \dotsc , u_{N - m+1})\), where \(u_{k+1} < y_{m-1} - k - 1\) in the case \(k \le N - m\). Then the transition probability from \(\vec {u}\) to \(y_{m-1} \sqcup \vec {a}\) is non-zero only if \(a_{i} = u_{i}\) for each \(1 \le i \le k\). In this case we have \({\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{T_{m-1}} = y_{m-1} \sqcup \vec {a} \big | X^{\mathrm {r-B}}_{T_{m-1} - 1} = \vec {u}\bigr ) = q^{-1} G^{[m, N]}_{0,1} (\vec {u}, \vec {a})\) (i.e. in the probability measure \(G^{[m, N]}_{0,1} (\vec {u}, \cdot )\) on \(\Omega _{N - m + 1}\), we change the probability for the \(m^\mathrm{{th}}\) particle to stay put from q to 1). Applying (C.1), we obtain

$$\begin{aligned} \textstyle {\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{T_{m-1}} = y_{m-1} \sqcup \vec {a} \big | X^{\mathrm {r-B}}_{T_{m-1} - 1} = \vec {u}\bigr ) = \sum _{\!\!\!\begin{array}{c} \vec {z} \in \Omega _k : \\ z_i \ge a_i, 1 \le i \le k \end{array}} G^{[m, N]}_{0,1} \bigl (\vec {u}, (z_1, \dotsc , z_k, a_{k+1}, \dotsc , a_{N-m+1})\bigr ). \end{aligned}$$

This yields, for \(u_{1} = y_{m-1} - 1\),

$$\begin{aligned}&\textstyle \sum _{\begin{array}{c} \vec {a} \in \Omega _{N - m + 1} : \\ a_1 < y_{m-1} \end{array}} {\mathbb {P}}\bigl (X^{\mathrm {r-B}}_{T_{m-1}} = y_{m-1} \sqcup \vec {a} \big | X^{\mathrm {r-B}}_{T_{m-1} - 1} = \vec {u}\bigr ) G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {a}, \vec {x})\nonumber \\&\quad \textstyle = \sum _{\begin{array}{c} \vec {a} \in \Omega _{N - m + 1} : \\ a_i = u_i, 1 \le i \le k \end{array}} \sum _{\!\!\!\!\!\!\begin{array}{c} \vec {z} \in \Omega _k : \\ z_i \ge a_i, 1 \le i \le k \end{array}} G^{[m, N]}_{0,1} \bigl (\vec {u}, (z_1, \dotsc , z_k, a_{k+1}, \dotsc , a_{N-m+1})\bigr ) G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {a}, \vec {x})\nonumber \\&\quad \textstyle = \sum _{\begin{array}{c} \vec {z} \in \Omega _{N - m + 1} : \\ z_i \ge u_i, 1 \le i \le k \end{array}} G^{[m, N]}_{0,1} (\vec {u}, \vec {z}) G^{[m-1, N]}_{T_{m-1}, t} \bigl ((y_{m-1}, u_1, \dotsc , u_k, z_{k+1}, \dotsc , z_{N-m+1}), \vec {x}\bigr ). \end{aligned}$$
(C.6)

The terms in this sum vanish unless \(z_i - u_i \in \{0,1\}\) for each \(i \in \llbracket k \rrbracket \), and one can see that there is a \(k_*\in \llbracket k \rrbracket \) such that \((z_1, \dotsc , z_k) = (u_1 + 1, \dotsc , u_{k_*}+1, u_{k_*+1}, \dotsc , u_k)\). Moreover, \(u_{i} + 1 = y_{m-1} - i + 1\) for each \(1 \le i \le k_*\). Then applying Lemma C.1(ii) consecutively to the entries \(z_1\), \(z_2\), \(\dotsc \), \(z_{k_*}\), we get \(G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {z}, \vec {x}) = G^{[m-1, N]}_{T_{m-1}, t} \bigl ((y_{m-1}, u_1, \dotsc , u_k, z_{k+1}, \dotsc , z_{N-m+1}), \vec {x}\bigr )\). Furthermore, if \(z_i < u_i\) for some \(1 \le i \le k\), then the function \(G^{[m, N]}_{0,1} (\vec {u}, \vec {z})\) vanishes, which means that (C.6) can be written as

$$\begin{aligned} \textstyle \sum _{\vec {z} \in \Omega _{N - m + 1}} G^{[m, N]}_{0,1} (\vec {u}, \vec {z}) G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {z}, \vec {x}). \end{aligned}$$
(C.7)

Combining identities (C.5) and (C.7), formula (C.4) can be written as

$$\begin{aligned} \textstyle G^{[m - 1, N]}_{\vec {T},t} (\vec {y}, \vec {x}) = \sum _{\vec {u}, \vec {a} \in \Omega _{N - m + 1}} G^{[m, N]}_{\vec {T},T_{m-1} - 1} (\vec {y}, \vec {u}) G^{[m, N]}_{0,1} (\vec {u}, \vec {a}) G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {a}, \vec {x}). \end{aligned}$$

The induction hypothesis implies that the function \(G^{[m, N]}_{\vec {T},T_{m-1} - 1}\) has the required form (C.3). Moreover, direct computations show that the functions \(G^{[m, N]}_{0,1}\) and \(G^{[m-1, N]}_{T_{m-1}, t}\) are of the form (A.1), which allows one to apply Proposition A.1 to their convolution. Then the last expression becomes

$$\begin{aligned}&\textstyle \sum _{\vec {a} \in \Omega _{N - m + 1}} \sum _{\begin{array}{c} \vec {z}^{\,i} \in \Omega _{N - i} : \\ m \le i < N \end{array}} \prod _{i = m + 1}^{N} G^{[i, N]}_{T_{i}, T_{i - 1}} (y_{i} \sqcup \vec {z}^{\,i}, \vec {z}^{\,i-1})\\&\quad \times G^{[m, N]}_{T_{m}, T_{m-1}} (y_{m} \sqcup \vec {z}^{\,m}, \vec {a})\, G^{[m-1, N]}_{T_{m-1}, t} (y_{m-1} \sqcup \vec {a}, \vec {x}), \end{aligned}$$

which is exactly (C.3) for \(m-1\). \(\square \)

Lemma C.3

For \(\kappa \ge 1\), \(\vec {y}\in \Omega _N(\kappa )\) and \(\vec {x}\in \Omega _N\),

$$\begin{aligned} G^{\mathrm {r-B}}_{\vec {T},0} (\vec {y}, \vec {x}) = \mathop {\mathrm {det}}\bigl [F^{\mathrm {r-B}}_{i - j}(x_{N + 1 - i} - y_{N + 1 - j}, \kappa (j-1))\bigr ]_{i, j \in \llbracket N \rrbracket }. \end{aligned}$$

Proof

This follows from applying Proposition A.4 consecutively to the determinants in (C.3). \(\square \)

C.2. Proof of Eqn. 1.17

Lemma C.4

Identity (1.17) holds for the model (2.2) with right Bernoulli jumps, where in the definition of the function (2.3) with negative time the singularity \(-q/p\) should be excluded from the contour.

Proof

If we exclude the singularity from the contour, then the function (2.3) satisfies \(F^{\mathrm {r-B}}_{i - N + 1}(z_{N + 1 - i} - x_{2}, -t) = 0\) if \(x_2 > z_2\), where we use the variables as in (1.17). Hence, the restriction \(x_2 < x_1\) in the sum in (1.17) can be omitted (this is because each term of the sum may be non-vanishing only when \(x_2 \le z_2\), and the dynamics implies \(y_1 \le x_1\), which by the assumptions on the variables yields \(x_2 < x_1\)). Then, applying (2.2) and Proposition A.6, the left hand side of (1.17) turns into

$$\begin{aligned} \mathop {\mathrm {det}}\bigl [F^{\mathrm {r-B}}_{i - j}(x_1 \cdot {\mathbf {1}}_{i = N} + z_{N + 1 - i} \cdot {\mathbf {1}}_{i < N} - y_{N + 1 - j}, t \cdot {\mathbf {1}}_{i = N})\bigr ]_{i, j \in \llbracket N \rrbracket }. \end{aligned}$$

One can prove that this determinant equals the right hand side of (1.17) in the same way as the initial condition is checked in [3, Proposition 2.1]. \(\square \)

Appendix D: Proofs for right geometric jumps with sequential update

As we described in Sect. 2.4, TASEP with right geometric jumps is different from the other models in that section, and in particular Assumption 1.3 and hence Theorem 1.4 do not hold for it. Because of this we need to prove the formula (2.23) directly in the case of sequential update (\(\kappa =-1\)).

We start with an auxiliary result. Let \(G^{[1, N]}_{0, t} (\vec {y}, \vec {x})\) be the function on the right hand side of (2.22). Then for \(\vec {T}= (T_i)_{i \in \llbracket N \rrbracket }\) with \(T_i = i - N\), for \(m \in \llbracket N \rrbracket \), \(\vec {x},\vec {y} \in \Omega _{m}\) and \(T_{m} \le t < T_{m + 1}\) (with the convention \(T_{N+1} = \infty \)) we define

$$\begin{aligned} G^{[1, m]}_{\vec {T},t} (\vec {y}, \vec {x}) = {\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{t}(i) = x_i,\,i\in \llbracket m \rrbracket \big | X^{\mathrm {r-G}}_{T_i}(i) = y_i, i \in \llbracket m \rrbracket \bigr ), \end{aligned}$$

which is the transition probability for the particles \(X^{\mathrm {r-G}}(1), \dotsc , X^{\mathrm {r-G}}(m)\) from \(y_1, \dotsc , y_{m}\) at times \(T_1, \dotsc , T_m\) to the locations \(\vec {x}\) at time t.

Lemma D.1

In the above setting, and with the notation \(\vec {z} \sqcup y\) from Appendix A,

$$\begin{aligned} \textstyle G^{[1, m]}_{\vec {T},t} (\vec {y}, \vec {x}) = \sum _{\begin{array}{c} \vec {z}(T_i) \in \Omega _{i} \\ 1 \le i < m \end{array}} \left( \prod _{i = 1}^{m-1} G^{[1, i]}_{T_{i}, T_{i + 1}} (\vec {z}(T_i) \sqcup y_{i}, \vec {z}(T_{i+1}))\right) G^{[1, m]}_{T_{m}, t} (\vec {z}(T_m) \sqcup y_{m}, \vec {x}),\nonumber \\ \end{aligned}$$
(D.1)

where \(\vec {z}(T_1)\) is an empty vector.

Proof

We will prove (D.1) by induction over \(m = 1, 2, \dotsc , N\). For the base case, \(m = 1\), note that on the time interval \(T_1 \le t < T_{2}\) only the \(1^\mathrm{{st}}\) particle moves. Therefore \(G^{[1, 1]}_{\vec {T}, t} (\vec {y}, \vec {x}) = G^{[1, 1]}_{T_1, t} (y_1, x_1)\), which is (D.1). Now assuming that (D.1) holds for some \(1 \le m < N\), we will prove it for \(m+1\). For \(T_{m+1} \le t < T_{m + 2}\) and \(\vec {x} \in \Omega _{m+1}\) the Markov property yields

$$\begin{aligned}&\textstyle G^{[1, m + 1]}_{\vec {T},t} (\vec {y}, \vec {x}) \textstyle = \sum _{\!\begin{array}{c} \vec {u}, \vec {a} \in \Omega _{m} : \\ a_m > y_{m+1} \end{array}} {\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{T_{m+1} - 1} = \vec {u} \big | X^{\mathrm {r-G}}_{T_i}(i) = y_i, i \in \llbracket N \rrbracket \bigr )\quad \nonumber \\&\quad \textstyle \times {\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{T_{m+1}} = \vec {a} \sqcup y_{m+1}\big | X^{\mathrm {r-G}}_{T_{m+1} - 1} = \vec {u}\bigr ) {\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{t}(i) = x_i,\,i\in \llbracket m + 1 \rrbracket \big | X^{\mathrm {r-G}}_{T_{m+1}} = \vec {a} \sqcup y_{m+1}\bigr ). \end{aligned}$$
(D.2)

The definition of the model implies that the terms contributing to the sum have \(a_m \ge y_m > y_{m+1}\). Hence, the restriction \(a_m > y_{m+1}\) in the sum can be omitted. The induction hypothesis yields \({\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{T_{m+1} - 1} = \vec {u} \big | X^{\mathrm {r-G}}_{T_i}(i) = y_i, i \in \llbracket N \rrbracket \bigr ) = G^{[1, m]}_{\vec {T},T_{m+1} - 1} (\vec {y}, \vec {u})\), where the latter is given by the right hand side of (D.1). Moreover, \({\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{T_{m+1}} = \vec {a} \sqcup y_{m+1}\big | X^{\mathrm {r-G}}_{T_{m+1} - 1} = \vec {u}\bigr ) = G^{[1, m]}_{0,1} (\vec {u}, \vec {a})\) and

$$\begin{aligned} \textstyle {\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{t}(i) = x_i,\,i\in \llbracket m + 1 \rrbracket \big | X^{\mathrm {r-G}}_{T_{m+1}} = \vec {a} \sqcup y_{m+1}\bigr ) = G^{[1, m+1]}_{T_{m+1}, t} (\vec {a} \sqcup y_{m+1}, \vec {x}). \end{aligned}$$

Then (D.2) can be written as

$$\begin{aligned} \textstyle G^{[1, m + 1]}_{\vec {T},t} (\vec {y}, \vec {x}) = \sum _{\vec {u}, \vec {a} \in \Omega _{m}} G^{[1, m]}_{\vec {T},T_{m+1} - 1} (\vec {y}, \vec {u}) G^{[1, m]}_{0,1} (\vec {u}, \vec {a}) G^{[1, m+1]}_{T_{m+1}, t} (\vec {a} \sqcup y_{m+1}, \vec {x}). \end{aligned}$$

By the induction hypothesis, the function \(G^{[1, m]}_{\vec {T},T_{m+1} - 1} (\vec {y}, \vec {u})\) has the required form (D.1). Moreover, the functions \(G^{[1, m]}_{0,1}\) and \(G^{[1, m+1]}_{T_{m+1}, t}\) are of the form (A.1), and we can apply Proposition A.1 to their convolution. The last expression then turns into (D.1) for \(m+1\). \(\square \)

By analogy with (1.11) in the case \(\kappa = 1\), we define the event

$$\begin{aligned} {\bar{{\mathcal {E}}}} = \bigcap _{i \in \llbracket N \rrbracket } \bigl \{\text {the } i^{\mathrm {th}} \text { particle stays put till time } i - N\bigr \}. \end{aligned}$$

The following result is the analogue of identity (1.12) which we are going to use for this model.

Lemma D.2

For any \(\vec {x}, \vec {y} \in \Omega _N\),

$$\begin{aligned} {\mathbb {P}}(X^{\mathrm {r-G}}_0 = \vec {x} | X^{\mathrm {r-G}}_{1 - N} = \vec {y}, {\bar{{\mathcal {E}}}}) = \mathop {\mathrm {det}}\bigl [F^{\mathrm {r-G}}_{i - j}(x_{N + 1 - i} - y_{N + 1 - j}, N-j)\bigr ]_{i, j \in \llbracket N \rrbracket }.\nonumber \\ \end{aligned}$$
(D.3)

Proof

Formula (D.3) is obtained by applying Proposition A.5 consecutively to the determinants in (D.1). \(\square \)

The proof of Lemma D.2 does not use the fact that the particles have geometric jumps, and in fact (D.3) also holds for other models, e.g. the one with right Bernoulli jumps. However, the formula proves useful only for \(X_t^{\mathrm {r-G}}\) when one considers the different ending times \(t + i - N\) for each particle \(i\). Indeed, if we consider such ending times for the model with right Bernoulli jumps and sequential update, then after the \(i^\mathrm{{th}}\) particle stops, the \((i+1)^\mathrm{{st}}\) particle still needs to make one step and could land on top of its stopped neighbor. In contrast, for the model with right geometric jumps, since the basic update rule is parallel, when the \(i^\mathrm{{th}}\) particle stops, the \((i+1)^\mathrm{{st}}\) particle cannot jump over it on the next step, because it is still blocked by the position of its neighbor at the previous time. This suggests the following analog of Lemma 2.2:
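To make the blocking mechanism concrete, here is a minimal simulation sketch of a parallel-update step with right geometric jumps. The jump law \(P(G=k)=p(1-p)^k\) and the convention that a particle is blocked one site behind its right neighbour's *previous* position are our reading of the model in Sect. 2.4, not code from the paper; in particular `step_parallel` and all names are ours.

```python
import random

def step_parallel(x, p, rng):
    # One parallel-update step: every particle draws a Geometric(p) jump
    # G (with P(G = k) = p (1-p)^k, k >= 0) and moves right, but is blocked
    # one site behind its right neighbour's position at the PREVIOUS time.
    # Convention: x[0] > x[1] > ..., so x[i-1] is the right neighbour of x[i].
    new = list(x)
    for i in range(len(x)):
        g = 0
        while rng.random() > p:
            g += 1
        tentative = x[i] + g
        if i > 0:
            tentative = min(tentative, x[i - 1] - 1)  # old position blocks
        new[i] = tentative
    return new

rng = random.Random(7)
x = [0, -1, -2, -3]
for _ in range(100):
    x = step_parallel(x, 0.4, rng)
    # the order is preserved deterministically: a particle can never
    # overtake a neighbour, even one that no longer moves
    assert all(a > b for a, b in zip(x, x[1:]))
```

Because each particle is compared against its neighbour's position at the previous time, the ordering is preserved pathwise whatever the jumps are, which is exactly the point made above about stopped particles.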

Lemma D.3

For \(t \ge N-1\) and \(\vec {x}, \vec {y} \in \Omega _N\), and with \({\mathcal {N}}(\vec {x})\) as defined in Lemma 2.2,

$$\begin{aligned} {\mathbb {P}}(X^{\mathrm {r-G}}_{t + i - N}(i) = x_i, ~i \in \llbracket N \rrbracket \,|\, X^{\mathrm {r-G}}_{0} = \vec {y}) = p^{-{\mathcal {N}}(\vec {x})} \mathop {\mathrm {det}}\bigl [F^{\mathrm {r-G}}_{i - j}(x_{N + 1 - i} - y_{N + 1 - j}, t + i - N)\bigr ]_{i, j \in \llbracket N \rrbracket }. \end{aligned}$$
(D.4)

Proof

Write \(S_i = t + i - N\) and let \(G^{[1,N]}_{0,\vec {S}} (\vec {y}, \vec {x})\) be the probability on the left hand side of (D.4). At any time point s we consider \(N \ge 1\) particles \(X^{\mathrm {r-G}}_{s}(N)< \cdots < X^{\mathrm {r-G}}_{s}(1)\), such that for times \(s > S_{i-1}\) (with the convention \(S_0 = 0\)) only the particles \(X^{\mathrm {r-G}}_{s}(i)\), \(\dotsc \), \(X^{\mathrm {r-G}}_{s}(N)\) move. Denote these moving particles by \(X^{[i, N]}_{s} = (X^{\mathrm {r-G}}_{s}(i) , \dotsc , X^{\mathrm {r-G}}_{s}(N))\), and let \(G^{[i, N]}\) be their transition function.

We prove (D.4) by induction over \(N \ge 1\). The base case \(N = 1\) is trivial. Assuming that (D.4) holds for \(N - 1 \ge 1\), we will prove it for N. From the Markov property we may write \(G^{[1,N]}_{0,\vec {S}} (\vec {y}, \vec {x})\) as

$$\begin{aligned}&\textstyle \sum _{{\vec {u} \in \Omega _{N-1} : \, u_{1} < x_1}} \sum _{{\vec {a} \in \Omega _{N-1} : \, a_{1} = x_{2}}} {\mathbb {P}}\bigl (X^{[1, N]}_{S_1} = x_1 \sqcup \vec {u} \big | X^{[1, N]}_{0} = \vec {y}\bigr ) {\mathbb {P}}\bigl (X^{[2, N]}_{S_2} = \vec {a} \big | X^{[1, N]}_{S_1} = x_1 \sqcup \vec {u}\bigr )\nonumber \\&\quad \times {\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{S_k}(k) = x_k,\, 2 \le k \le N \big | X^{[2, N]}_{S_2} = \vec {a}\bigr ). \end{aligned}$$
(D.5)

We have \({\mathbb {P}}(X^{[1, N]}_{S_1} = x_1 \sqcup \vec {u} | X^{[1, N]}_{0} = \vec {y}) = G^{[1, N]}_{0, S_1} (\vec {y}, x_1 \sqcup \vec {u})\), \({\mathbb {P}}(X^{[2, N]}_{S_2} = \vec {a} | X^{[1, N]}_{S_1} = x_1 \sqcup \vec {u}) = p^{-{\mathbf {1}}_{x_1 - x_2 = 1}} G^{[2, N]}_{0, 1} (\vec {u}, \vec {a})\), and \({\mathbb {P}}(X^{\mathrm {r-G}}_{S_k}(k) = x_k,\, 2 \le k \le N | X^{[2, N]}_{S_2} = \vec {a}) = G^{[2, N]}_{S_2, \vec {S}_{> 1}} (\vec {a}, \vec {x}_{> 1})\), where \(\vec {S}_{> 1}\) and \(\vec {x}_{> 1}\) are obtained from \(\vec {S}\) and \(\vec {x}\) respectively by removing the first entries. The multiplier \(p^{-{\mathbf {1}}_{x_1 - x_2 = 1}}\) is needed to change the jump probability of the \(2^\mathrm{{nd}}\) particle in the case \(x_1 - x_2 = 1\).

The function \(G^{[2, N]}_{0, 1} (\vec {u}, \vec {a})\), for \(\vec {a}\) with \(a_1 = x_2\), equals the probability for the \(N-1\) particles to go from \(\vec {u}\) to \(\vec {a}\) during a unit time interval. Hence, it can be non-zero only if \(u_1 \le x_2\), which yields \(u_1 \le x_2 < x_1\), so the restriction \(u_{1} < x_1\) in the first sum can be omitted. Moreover, for \(a_1 \ne x_2\) the last probability in (D.5) vanishes, so the restriction \(a_1 = x_2\) in the second sum can be omitted as well. Therefore

$$\begin{aligned} \textstyle G^{[1,N]}_{0,\vec {S}} (\vec {y}, \vec {x}) = p^{-{\mathbf {1}}_{x_1 - x_2 = 1}} \sum _{\vec {u} \in \Omega _{N-1}} \sum _{\vec {a} \in \Omega _{N-1}} G^{[1, N]}_{0, S_1} (\vec {y}, x_1 \sqcup \vec {u}) G^{[2, N]}_{0, 1} (\vec {u}, \vec {a}) G^{[2, N]}_{S_2, \vec {S}_{> 1}} (\vec {a}, \vec {x}_{> 1}). \end{aligned}$$

Using the induction hypothesis for the function \(G^{[2, N]}_{S_2, \vec {S}_{> 1}} (\vec {a}, \vec {x}_{> 1})\) and applying Proposition A.1 to the sum over \(\vec {a}\), we can write the preceding expression as

$$\begin{aligned} \textstyle p^{-{\mathbf {1}}_{x_1 - x_2 = 1}} \sum _{\vec {u} \in \Omega _{N-1}} G^{[1, N]}_{0, S_1} (\vec {y}, x_1 \sqcup \vec {u}) G^{[2, N]}_{S_2, \vec {S}_{> 1}} (\vec {u}, \vec {x}_{> 1}). \end{aligned}$$

Applying the induction hypothesis again, now to the function \(G^{[2, N]}_{S_2, \vec {S}_{> 1}}\), and then Proposition A.6 to the sum, we obtain (D.4). \(\square \)

Proof of Proposition 2.10

In the case of parallel update, formula (2.23) follows from Theorem 1.2. From now on we consider the case of sequential update (\(\kappa =-1\)). Then the proof goes along the lines of the proof of Theorem 1.4, the only difference being the orientation of space-like paths, so we only provide a sketch of the proof.

Define the set of space-like paths for this model as

$$\begin{aligned} {\bar{{\mathbb {S}}}}_N = \bigcup _{m \ge 1} \bigl \{({\mathfrak {n}}_i)_{i \in \llbracket m \rrbracket }\!: {\mathfrak {n}}_i \in \llbracket N \rrbracket \times {\mathbb {N}}_0, {\mathfrak {n}}_i {\overline{\prec }} {\mathfrak {n}}_{i+1}\bigr \}, \end{aligned}$$

where the relation \((n_1, t_1) {\overline{\prec }} (n_2, t_2)\) now means \(n_1 \le n_2\), \(t_1 \le t_2\) and \((n_1, t_1) \ne (n_2, t_2)\). Then for \(T_i = i - N\), \({\mathcal {S}}= \{(n_1, t_1), \dotsc , (n_m, t_m)\} \in {\bar{{\mathbb {S}}}}_N\) and for \(\vec {y} \in \Omega _N\) and \(\vec {x} \in \Omega _m\) we define

$$\begin{aligned} G^{\mathrm {r-G}}_{\vec {T}, {\mathcal {S}}}(\vec {y}, \vec {x}) = {\mathbb {P}}\bigl (X^{\mathrm {r-G}}_{t_i}(n_i) = x_i,\,i\in \llbracket m \rrbracket \big | X^{\mathrm {r-G}}_{T_i}(i) = y_i, i \in \llbracket N \rrbracket \bigr ). \end{aligned}$$

Let the set \(\Omega _{n, N}\) contain the vectors \((x_n, \ldots , x_N)\) such that \(x_{n}< x_{n+1}< \cdots < x_{N}\). Then by analogy with (B.1) we can write

$$\begin{aligned} \textstyle G^{\mathrm {r-G}}_{\vec {T}, {\mathcal {S}}} (\vec {y}, \vec {x}) = \sum _{\vec {x}(0) \in \Omega _{N}} \sum _{\!\!\begin{array}{c} \vec {x}(t_i) \in \Omega _{n_i, N}: \\ x_{1}(t_i) = x_i, i \in \llbracket m \rrbracket \end{array}} G^{\mathrm {r-G}}_{\vec {T},0} (\vec {y}, \vec {x}(0)) \prod _{i=1}^{m} G^{\mathrm {r-G}}_{t_{i} - t_{i-1}}(\vec {x}_{\ge n_i}(t_{i-1}), \vec {x}(t_i)), \end{aligned}$$

where \(t_0 = 0\), the function \(G^{\mathrm {r-G}}_{\vec {T},0}\) equals (D.1) in the case \(t = 0\), and where \(G^{\mathrm {r-G}}_{t}\) is the transition function given by the right hand side of (2.22).

Define \(\underline{n}_i = N - n_i + 1\), so that \(\underline{n}_1 \ge \underline{n}_2 \ge \cdots \ge \underline{n}_m\). Analogously to (B.13) we define the domain

$$\begin{aligned} {\bar{{\mathbb {D}}}}_{{\mathcal {S}}}&= \bigl \{{\mathsf {x}}^{\underline{n}_0}_{\ell }(t_0) \in {\mathbb {Z}}: \ell \in \llbracket \underline{n}_0 \rrbracket \bigr \} \\&\qquad \cup \bigcup _{i \in \llbracket m \rrbracket } \Bigl \{{\mathsf {x}}^n_{\ell }(t_i) \in {\mathbb {Z}}: \underline{n}_{i+1} \le n \le \underline{n}_i, \ell \in \llbracket n \rrbracket ~\mathrm { such that }~ {\mathsf {x}}^{n+1}_\ell (t_i) < {\mathsf {x}}^n_\ell (t_i) \le {\mathsf {x}}^{n+1}_{\ell +1}(t_i)\Bigr \}, \end{aligned}$$

where \(t_0 = 0\), \(\underline{n}_0 = N\) and \(\underline{n}_{m+1} = 0\). Next we define a signed measure \(\bar{{\mathcal {W}}}\) on \({\mathsf {X}}\in {\bar{{\mathbb {D}}}}_{{\mathcal {S}}}\) through (B.14) using the time points \(t_i\) in place of \(\underline{t}_i\) and where the functions (B.11) and (B.12) are defined with \(\varphi (w) = p / (1 - q w)\). Then, as in Proposition B.4, we can write

$$\begin{aligned} \textstyle G^{\mathrm {r-G}}_{\vec {T}, {\mathcal {S}}}(\vec {y}, \vec {x}) = C\sum _{\,{{\mathsf {X}}\in {\bar{{\mathbb {D}}}}_{{\mathcal {S}}}: {\mathsf {x}}_{\underline{n}_i}^{\underline{n}_i}(t_i) = x_i, i \in \llbracket m \rrbracket }}{\bar{{\mathcal {W}}}}({\mathsf {X}}), \end{aligned}$$

for a constant \(C \ne 0\). As in Theorem 4.3 we can compute the correlation kernel of the determinantal measure \({\bar{{\mathcal {W}}}}\), which yields formula (2.23) with the kernel given by (4.18) with \(\kappa = -1\). Then applying Theorem 5.15 with the functions \(\psi (w) = \varphi (w)^t\) and \(a(w) = 1/\varphi (w)\), we get (2.23).\(\square \)

Appendix E: Formulas for discrete-time RSK-solvable models

In this appendix we derive the transition probabilities for the discrete-time variants of TASEP described in Sect. 2, by rewriting in the form (1.1) the formulas which were derived in [14] using the four basic variants of the Robinson-Schensted-Knuth (RSK) algorithm.

Fix a vector \(\vec {\alpha } = (\alpha _1, \dotsc , \alpha _N) \in {\mathbb {R}}^N\). The \(r^\mathrm{{th}}\) complete homogeneous symmetric polynomial and the \(r^\mathrm{{th}}\) elementary symmetric function are given respectively by

$$\begin{aligned} \textstyle h_r(\vec {\alpha }) = \sum _{\!\!\begin{array}{c} k_1, \dotsc , k_N \ge 0 \\ k_1 + \cdots + k_N = r \end{array}} \alpha _1^{k_1} \alpha _2^{k_2} \cdots \alpha _N^{k_N}, \qquad e_r(\vec {\alpha }) = \sum _{k_1< k_2< \cdots < k_r} \alpha _{k_1} \alpha _{k_2} \cdots \alpha _{k_r}, \end{aligned}$$

where by convention \(h_0 \equiv e_0 \equiv 1\) and \(h_r \equiv e_r \equiv 0\) for \(r < 0\). For \(0 \le k < \ell \le N\), let \(\vec {\alpha }^{(k, \ell )} = (0, \dotsc , 0, \alpha _{k+1}, \dotsc , \alpha _\ell , 0, \dotsc , 0)\) be the vector obtained from \(\vec {\alpha }\) by setting the first k and the last \(N-\ell \) entries to 0. Write \(h^{(k, \ell )}_r(\vec {\alpha }) = h_r(\vec {\alpha }^{(k, \ell )})\) and \(e^{(k, \ell )}_r(\vec {\alpha }) = e_r( \vec {\alpha }^{(k, \ell )})\) with \(h^{(k, k)}_r( \vec {\alpha }) = e^{(k, k)}_r( \vec {\alpha }) = {\mathbf {1}}_{r = 0}\) and then, for a fixed function f on \({\mathbb {Z}}\), define

$$\begin{aligned} \textstyle f^{(ij)}_{\vec {\alpha }}(k)=\sum _{\ell =0}^{i-j}(-1)^\ell e_\ell ^{(ji)}(\vec {\alpha })f(k+\ell ){\mathbf {1}}_{i\ge j}+\sum _{\ell =0}^{\infty }h_\ell ^{(ij)}(\vec {\alpha })f(k+\ell ){\mathbf {1}}_{i<j}, \end{aligned}$$

provided the series converges absolutely, and define \({\hat{f}}^{(ij)}_{\vec {\alpha }}(k)\) in the same way except that \(f(k+\ell )\) is replaced by \(f(k-\ell )\). The formulas in [14] are written in terms of \(f^{(ij)}_{\vec {\alpha }}(k)\) and \({\hat{f}}^{(ij)}_{\vec {\alpha }}(k)\). Our first task is to find an alternative expression for them.
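The two families of symmetric polynomials can be evaluated directly from their definitions by enumerating index tuples; the following sketch (all function names ours) also checks the standard generating function identities \(\sum _{r\ge 0} e_r(\vec {\alpha })x^r=\prod _i(1+\alpha _i x)\) and \(\sum _{r\ge 0} h_r(\vec {\alpha })x^r=\prod _i(1-\alpha _i x)^{-1}\) to a fixed order.

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def h(r, alpha):
    # r-th complete homogeneous symmetric polynomial: sum over
    # monomials of degree r, repetitions allowed
    return sum(prod(c) for c in combinations_with_replacement(alpha, r)) if r >= 0 else 0

def e(r, alpha):
    # r-th elementary symmetric polynomial: sum over strictly
    # increasing index tuples of length r
    return sum(prod(c) for c in combinations(alpha, r)) if r >= 0 else 0

alpha = (1, 2, 3)
assert [e(r, alpha) for r in range(5)] == [1, 6, 11, 6, 0]
assert [h(r, alpha) for r in range(3)] == [1, 6, 25]

# Generating-function check: the coefficients of prod_i (1 + a_i x)
# are e_r, and those of prod_i 1/(1 - a_i x) are h_r.
def poly_mul(p, q, order):
    out = [0] * (order + 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j <= order:
                out[i + j] += pi * qj
    return out

order = 4
E, H = [1], [1]
for a in alpha:
    E = poly_mul(E, [1, a], order)                         # times (1 + a x)
    H = poly_mul(H, [a ** k for k in range(order + 1)], order)  # times 1/(1 - a x)
assert E == [e(r, alpha) for r in range(order + 1)]
assert H == [h(r, alpha) for r in range(order + 1)]
```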

Taking all entries of \(\vec {\alpha }\) to be non-zero, define, for \(k,\ell \in \llbracket N \rrbracket \) and \(x,y\in {\mathbb {Z}}\),

$$\begin{aligned} \textstyle H^{(\vec {\alpha })}_{k, \ell }(x,y) = \frac{1}{2\pi \mathrm{i}}\oint _{\gamma } \frac{\mathrm {d}w}{w^{x-y+1}} \frac{\prod _{i = 1}^{\ell } (1 - \alpha _i w)}{\prod _{i = 1}^{k} (1 - \alpha _i w)}, \end{aligned}$$
(E.1)

where the contour \(\gamma \) encloses 0 but not any pole \(w=1/\alpha _i\). The kernel \(H^{(\vec {\alpha })}_{k, \ell }\) is related to symmetric functions:

Lemma E.1

\(H^{(\vec {\alpha })}_{k, \ell }(x,y) = (-1)^{x-y} e^{(k, \ell )}_{x-y}(\vec {\alpha })\) if \(\ell \ge k\), and \(H^{(\vec {\alpha })}_{k, \ell }(x,y) = h^{(\ell , k)}_{x-y}(\vec {\alpha })\) if \(\ell < k\).

Proof

Write \(H^{(\vec {\alpha })}_{k, \ell }(x)=H^{(\vec {\alpha })}_{k, \ell }(x,0)\). If \(x < 0\), then \(H^{(\vec {\alpha })}_{k, \ell }(x) = 0\), because the contour in (E.1) does not enclose any poles of the function in the integral. This yields the required identities, because \(e^{(k, \ell )}_{x}(\vec {\alpha }) = h^{(\ell , k)}_{x}(\vec {\alpha }) = 0\).

Consider now \(x \ge 0\). The case \(\ell = k\) is trivial, because \(H^{(\vec {\alpha })}_{k, k}(x) = {\mathbf {1}}_{x = 0}\), which coincides with \((-1)^{x} e^{(k, k)}_{x}(\vec {\alpha })\). In the case \(\ell > k\), the Cauchy residue theorem yields

$$\begin{aligned} \textstyle H^{(\vec {\alpha })}_{k, \ell }(x)= & {} \frac{1}{x!} \frac{\mathrm {d}^{x}}{\mathrm {d}w^{x}} \prod _{i = k+1}^{\ell } (1 - \alpha _i w) \Big |_{w = 0}\\= & {} (-1)^{x} \sum _{k+1 \le j_1< \cdots < j_{x} \le \ell } \alpha _{j_1} \cdots \alpha _{j_x} = (-1)^{x} e^{(k, \ell )}_{x}(\vec {\alpha }). \end{aligned}$$

In the case \(\ell < k\), choosing the contour so that \(|w| < 1/\alpha _i\) for each i, we can write \((1 - \alpha _i w)^{-1} = \sum _{k_i \ge 0} (\alpha _i w)^{k_i}\), and the Cauchy residue theorem yields

$$\begin{aligned} \textstyle H^{(\vec {\alpha })}_{k, \ell }(x) = \sum _{\!\begin{array}{c} j_{\ell + 1}, \dotsc , j_k \ge 0 \\ j_{\ell +1} + \cdots + j_k = x \end{array}} \alpha _{\ell +1}^{j_{\ell +1}} \cdots \alpha _k^{j_k} = h^{(\ell , k)}_{x}(\vec {\alpha }). \end{aligned}$$

\(\square \)
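Since the contour in (E.1) encloses only the pole at 0, \(H^{(\vec {\alpha })}_{k, \ell }(x,y)\) is simply the coefficient of \(w^{x-y}\) in the power series of \(\prod _{i=1}^{\ell }(1 - \alpha _i w)/\prod _{i=1}^{k}(1 - \alpha _i w)\). Lemma E.1 can therefore be checked numerically in exact rational arithmetic; the sketch below (all names ours) does this for a small vector \(\vec {\alpha }\) and all \(0 \le k, \ell \le N\).

```python
from itertools import combinations, combinations_with_replacement
from fractions import Fraction
from math import prod

def series_coeffs(num_roots, den_roots, order):
    # Power-series coefficients (in w) of
    #   prod_{a in num_roots} (1 - a w) / prod_{b in den_roots} (1 - b w),
    # i.e. the integrand of (E.1) without the w^{-(x-y+1)} factor.
    c = [Fraction(1)] + [Fraction(0)] * order
    for a in num_roots:  # multiply by (1 - a w)
        c = [c[n] - (a * c[n - 1] if n > 0 else 0) for n in range(order + 1)]
    for b in den_roots:  # divide by (1 - b w): recursion d_n = c_n + b d_{n-1}
        out = []
        for n in range(order + 1):
            out.append(c[n] + (b * out[n - 1] if n > 0 else 0))
        c = out
    return c

def e_sym(r, a):
    return sum(prod(t) for t in combinations(a, r)) if r >= 0 else 0

def h_sym(r, a):
    return sum(prod(t) for t in combinations_with_replacement(a, r)) if r >= 0 else 0

# H_{k,l}(x, 0) is the coefficient of w^x; compare with Lemma E.1,
# where alpha^{(k,l)} keeps entries k+1..l (alpha[k:l] in 0-based slices).
alpha = [Fraction(1, 2), Fraction(1, 3), Fraction(2), Fraction(3)]
order = 6
for k in range(len(alpha) + 1):
    for l in range(len(alpha) + 1):
        H = series_coeffs(alpha[:l], alpha[:k], order)
        for x in range(order + 1):
            expect = (-1) ** x * e_sym(x, alpha[k:l]) if l >= k else h_sym(x, alpha[l:k])
            assert H[x] == expect, (k, l, x)
```

The cancellation of common factors between numerator and denominator happens automatically in exact arithmetic, which is why the check covers the overlapping cases \(k, \ell \ge 1\) as well.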

Using the lemma and the fact that \(e^{(k,\ell )}_y(\vec {\alpha })=0\) if \(y>\ell -k\), the above functions can be written as

$$\begin{aligned} \textstyle f^{(\ell , k)}_{\vec {\alpha }} = (H^{(\vec {\alpha })}_{k, \ell })^*f\qquad \text {and}\qquad {\hat{f}}^{(\ell , k)}_{\vec {\alpha }} = H^{(\vec {\alpha })}_{k, \ell }f. \end{aligned}$$
(E.2)

E.1. Proof of Eqn. 2.2

For the process \(X^{\mathrm {r-B}}_t \in \Omega _N\) defined in Sect. 2.1, set \(Y^{\mathrm {r-B}}_t(i) = X^{\mathrm {r-B}}_t(i) + i\), so that \(Y^{\mathrm {r-B}}_t \in {\bar{\Omega }}_N\) (see (2.16)), and the evolution of \(Y^{\mathrm {r-B}}_t\) coincides with the model from Case B in [14, Sect. 2]. If we denote \(v_i = p_i / q_i\) and let the function \(H^{(\vec {v})}\) be defined by (E.1) with values \(\alpha _i = v_i\), then in view of (E.2) the formula from [14, Theorem 1] becomes

$$\begin{aligned} \textstyle {\mathbb {P}}(Y^{\mathrm {r-B}}_t = \vec {x} \,|\, Y^{\mathrm {r-B}}_0 = \vec {y}) = \left( \prod _{i=1}^N q_i^{t} v_i^{x_i - y_i}\right) \mathop {\mathrm {det}}\bigl [ (H^{(\vec {v})}_{k, \ell })^* \nu _t(x_\ell - y_k - \ell + k) \bigr ]_{k, \ell \in \llbracket N \rrbracket },\nonumber \\ \end{aligned}$$
(E.3)

for two configurations \(\vec {x}, \vec {y} \in {\bar{\Omega }}_N\) and with \(\nu _t(x) = {t \atopwithdelims ()x} {\mathbf {1}}_{0 \le x \le t}\). Using (E.1) and the contour integral formula \(\nu _t(x)=\frac{1}{2\pi \mathrm{i}} \oint _{\Gamma _{0}}\!\mathrm {d}w\frac{(1+ w)^t}{w^{x+1}}\) we get

$$\begin{aligned} \textstyle (H^{(\vec {v})}_{k, \ell })^*\nu _t(x) =\frac{1}{2\pi \mathrm{i}} \oint _{\gamma '} \frac{\mathrm {d}w}{w^{x + \ell - k +1}} \frac{\prod _{i=1}^\ell (w - v_i)}{\prod _{i=1}^k (w - v_i)} (1+ w)^t, \end{aligned}$$

where the integration contour \(\gamma '\) includes 0 and all entries of \(\vec {v}\). Changing back to \(X^{\mathrm {r-B}}_t(i) = Y^{\mathrm {r-B}}_t(i) - i\) in (E.3) and taking all speeds to be equal, we arrive at (2.2) after a simple change of variables.

E.2. Proof of Eqn. 2.12

For \(X^{\mathrm {l-B}}_t\) as in Sect. 2.2, we define the process \(Y^{\mathrm {l-B}}_t(i) = - X^{\mathrm {l-B}}_t(i) - i\). Then \(Y^{\mathrm {l-B}}_t(1) \le Y^{\mathrm {l-B}}_t(2) \le \cdots \le Y^{\mathrm {l-B}}_t(N)\); we denote by \({\tilde{\Omega }}_N\) the set of such configurations. Proceeding as in Sect. E.1, Case D of [14, Theorem 1] yields, with \(\nu _t\) as above,

$$\begin{aligned} \textstyle {\mathbb {P}}(Y^{\mathrm {l-B}}_t = \vec {x} | Y^{\mathrm {l-B}}_0 = \vec {y}) = \left( \prod _{i=1}^N q_i^{t} v_i^{y_i - x_i}\right) \mathop {\mathrm {det}}\bigl [H^{(\vec {v})}_{k, \ell } \nu _t (x_\ell - y_k + \ell - k) \bigr ]_{k, \ell \in \llbracket N \rrbracket },\nonumber \\ \end{aligned}$$
(E.4)

where \(\vec {x}, \vec {y} \in {\tilde{\Omega }}_N\) and \(v_i = q_i / p_i\). Using (E.1) and the integral representation of \(\nu _t\) from Sect. E.1, we get

$$\begin{aligned} \textstyle H^{(\vec {v})}_{k, \ell } \nu _t(x) =\frac{1}{2\pi \mathrm{i}} \oint _{\gamma '} \frac{\mathrm {d}w}{w^{-x + \ell - k + 1}} \frac{\prod _{i = 1}^{\ell } (w - v_i)}{\prod _{i = 1}^{k} (w - v_i)} (1 + 1 / w)^{t}, \end{aligned}$$

where the integration contour \(\gamma '\) includes 0 and all entries of \(\vec {v}\). Changing back to \(X^{\mathrm {l-B}}_t(i) = - Y^{\mathrm {l-B}}_t(i) - i\) in (E.4) and taking all speeds to be equal, we get (2.12).

E.3. Proof of Eqn. 2.19

For \(X^{\mathrm {l-G}}_t\) as in Sect. 2.3, we define the process \(Y^{\mathrm {l-G}}_t(i) = -X^{\mathrm {l-G}}_t(i) - i\) so that \(Y^{\mathrm {l-G}}_t\) lives in \({\tilde{\Omega }}_N\) as in the previous case. We set \(\mu _t(x) = {t + x - 1 \atopwithdelims ()x} {\mathbf {1}}_{x \ge 0, t \ge 1} + {\mathbf {1}}_{t = x = 0}\), which can be written as \(\mu _t(x) = \frac{1}{2\pi \mathrm{i}} \oint _{{\tilde{\gamma }}} \frac{(1 - w)^{-t}}{w^{x+1}} \mathrm {d}w\), where the contour \({\tilde{\gamma }}\) includes 0, but not 1. Then Case A of [14, Theorem 1] yields

$$\begin{aligned} \textstyle {\mathbb {P}}(Y^{\mathrm {l-G}}_t = \vec {x} | Y^{\mathrm {l-G}}_0 = \vec {y}) = \left( \prod _{i=1}^N p_i^t q_i^{x_i - y_i}\right) \mathop {\mathrm {det}}\bigl [ H^{(\vec {v})}_{k, \ell } \mu _t(x_\ell - y_k + \ell - k) \bigl ]_{k, \ell \in \llbracket N \rrbracket },\nonumber \\ \end{aligned}$$
(E.5)

where \(\vec {x}, \vec {y} \in {\tilde{\Omega }}_N\) and \(v_i = 1/q_i\). Then using (E.1) we can write

$$\begin{aligned} \textstyle H^{(\vec {v})}_{k, \ell } \mu _t(x) = \frac{1}{2\pi \mathrm{i}} \oint _{\gamma } \frac{\mathrm {d}w}{w^{-x + \ell - k +1}} \frac{\prod _{i=1}^\ell (w - 1 / q_i)}{\prod _{i=1}^k (w - 1 / q_i)} (1- 1/w)^{-t}, \end{aligned}$$

where the contour \(\gamma \) encloses 0, 1 and all values \(v_i\). Changing back to \(X^{\mathrm {l-G}}_t(i) = -Y^{\mathrm {l-G}}_t(i) - i\) in (E.5) and taking all speeds to be equal, we get (2.19).
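The representation of \(\mu _t\) used here and in Sect. E.4 — that \(\mu _t(x)\) is the coefficient of \(w^x\) in \((1-w)^{-t}\) — amounts to the negative binomial series and is easy to confirm by expanding: dividing a power series by \(1-w\) is a running cumulative sum. A minimal check (names ours):

```python
from math import comb

def mu_series(t, order):
    # Coefficients of (1 - w)^{-t}: start from 1 and divide by (1 - w)
    # t times; each division is a running cumulative sum of coefficients.
    c = [1] + [0] * order
    for _ in range(t):
        out = []
        for n in range(order + 1):
            out.append(c[n] + (out[n - 1] if n > 0 else 0))
        c = out
    return c

def mu(t, x):
    # mu_t(x) = C(t+x-1, x) 1_{x >= 0, t >= 1} + 1_{t = x = 0}, as in the text
    if t == 0:
        return 1 if x == 0 else 0
    return comb(t + x - 1, x) if x >= 0 else 0

for t in range(5):
    assert mu_series(t, 8) == [mu(t, x) for x in range(9)]
```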

E.4. Proof of Eqn. 2.22

For \(X^{\mathrm {r-G}}_t\) as in Sect. 2.4.1, let us define the process \(Y^{\mathrm {r-G}}_t(i) = X^{\mathrm {r-G}}_t(i) + i\), so that \(Y^{\mathrm {r-G}}_t \in {\bar{\Omega }}_N\). Case C of [14, Theorem 1] yields, for \(\mu _t\) as in the previous case,

$$\begin{aligned} \textstyle {\mathbb {P}}(Y^{\mathrm {r-G}}_t = \vec {x} | Y^{\mathrm {r-G}}_0 = \vec {y}) = \left( \prod _{i=1}^N p_i^{t} q_i^{x_i - y_i}\right) \mathop {\mathrm {det}}\bigl [ (H^{(\vec {v})}_{k, \ell })^* \mu _t (x_\ell - y_k - \ell + k) \bigr ]_{k, \ell \in \llbracket N \rrbracket },\nonumber \\ \end{aligned}$$
(E.6)

where \(\vec {x}, \vec {y} \in {\bar{\Omega }}_N\) and \(v_i = q_i\). Using the integral representation of \(\mu _t\) and (E.1), we may write

$$\begin{aligned} \textstyle (H^{(\vec {v})}_{k, \ell })^* \mu _t(x) = \frac{1}{2\pi \mathrm{i}} \oint _{\gamma } \frac{\mathrm {d}w}{w^{x + \ell - k +1}} \frac{\prod _{i=1}^\ell (w - q_i)}{\prod _{i=1}^k (w - q_i)} (1- w)^{-t}, \end{aligned}$$

where the contour \(\gamma \) includes 0 and all entries of \(\vec {v}\), but does not include 1. Changing back to \(X^{\mathrm {r-G}}_t(i) = Y^{\mathrm {r-G}}_t(i) - i\) in (E.6) and taking all speeds to be equal, we get (2.22).

Cite this article

Matetski, K., Remenik, D. TASEP and generalizations: method for exact solution. Probab. Theory Relat. Fields 185, 615–698 (2023). https://doi.org/10.1007/s00440-022-01129-w


Mathematics Subject Classification

  • 60K35 Interacting random processes; statistical mechanics type models; percolation theory