1 Introduction

1.1 Background and literature

The Kardar–Parisi–Zhang (KPZ) universality class is a large class of stochastic systems with highly correlated components that exhibit similar statistical asymptotic behavior under space-time rescaling. These systems include \((1+1)\)-dimensional random growth models, interacting particle systems, eigenvalues of random matrices, and stochastic partial differential equations. These models can be characterized by means of a space-time ‘height function’, which typically features random non-Gaussian fluctuations depending on the initial height profile. For certain specific initial configurations (e.g. ‘step’ and ‘flat’), the one-point distributions are given by the Tracy–Widom laws (first introduced in the random matrix literature [TW94, TW96]) and the multi-point distributions are given by Airy processes. Such precise asymptotics have been obtained so far only for a few integrable models, whose rich algebraic structure leads to exact formulas that are suitable for asymptotic analysis; see e.g. [BG16, Zyg22]. Among these integrable models, the most accessible ones can be described in terms of determinantal measures: popular examples are the corner growth model, the polynuclear growth model, last passage percolation, and various types of exclusion processes.

It is conjectured that all KPZ models, under the so-called 1:2:3-scaling, converge to a universal, scale-invariant Markov process, known as the KPZ fixed point. Such a limiting process was constructed in [MQR21] for the continuous-time Totally Asymmetric Simple Exclusion Process (\(\textsf{TASEP}\)), a prototypical interacting particle system on the integer line. The transition probabilities of \(\textsf{TASEP}\) were first shown to admit determinantal formulas in [Sch97], using the coordinate Bethe ansatz. Based on these formulas, [Sas05, BFPS07] showed that the \(\textsf{TASEP}\) evolution is encoded by a determinantal point process; consequently, for arbitrary initial configurations, the multi-point distribution of \(\textsf{TASEP}\) particles was given as a Fredholm determinant, whose kernel is implicitly characterized by a biorthogonalization problem. This problem was solved in [BFPS07] in the case of the flat (2-periodic) initial configuration for the particles. The solution to the biorthogonalization problem for a general initial configuration was given in [MQR21], where the kernel was expressed in terms of a functional of a random walk and its hitting times to a curve encoding the (arbitrary) initial configuration. The KPZ fixed point was then constructed by taking a Donsker type scaling limit, under which the random walk and the associated hitting times turn into Brownian motion and the corresponding hitting times.

Since the seminal contribution of [MQR21], numerous works have considered other, not only determinantal, KPZ models with general initial configurations, obtaining similar Fredholm determinant formulas and, in certain cases, also convergence to the KPZ fixed point. Here is a non-exhaustive list of such results. A system of one-sided reflected Brownian motions was studied in [NQR20]. Two variations of discrete-time \(\textsf{TASEP}\) with geometric and Bernoulli jumps were considered in [Ara20]. Convergence to the KPZ fixed point was proved in [QS23] in the case of finite range asymmetric exclusion processes and the KPZ equation, for certain classes of initial conditions. In [MR22] it was shown that the method of [MQR21] can cover a general class of models whose multipoint distributions possess a Schütz type determinantal formula.

In the present work, we consider a discrete-time \(\textsf{TASEP}\) with inhomogeneous jump probabilities. We provide an explicit, step-by-step route from the very definition of the model to a Fredholm determinant representation of the joint distribution of the particle positions in terms of random walk hitting times. This is, to the best of our knowledge, the first such formulation for an interacting particle system with both particle- and time-inhomogeneous rates. As discussed above, a result of this type, for the continuous-time and homogeneous-rate \(\textsf{TASEP}\), was first obtained in [MQR21]. However, our approach differs from that of [MQR21]: Firstly, our starting point is not a Schütz type formula, but rather the combinatorial structure of the integrable model. Moreover, instead of solving the biorthogonalization problem, we map to ensembles of non-intersecting paths. We work directly with the corresponding determinantal processes, exploiting some special features that emerge through mapping to path ensembles and from the expression of the path weights via local operators. We hope this perspective can shed additional light on the structure of the KPZ fixed point formulas and may also be useful in different settings, in particular for other particle systems that can be characterized by intertwining relations. In the next subsection we present our result and discuss our approach in detail.

1.2 Our result and approach

In all variations of \(\textsf{TASEP}\), particles occupy sites of \(\mathbb {Z}\) and, according to a stochastic mechanism, perform jumps in the same direction (to the right, by convention). The interaction between particles consists in the exclusion rule: no two particles may occupy the same position at any given time. Therefore, if a particle attempts to jump to an occupied site, the jump is suppressed. There are several possible stochastic mechanisms inducing the dynamics. In this article, as a working example, we consider a version of \(\textsf{TASEP}\) where:

  1. (i)

    there is a finite number \(N\ge 2\) of particles;

  2. (ii)

    the dynamics evolves in discrete time;

  3. (iii)

    jump sizes are given by independent Bernoulli random variables;

  4. (iv)

    the expected jump size may depend both on the particle (particle-inhomogeneous rates) and on time (time-inhomogeneous rates);

  5. (v)

    particle positions are updated sequentially from right to left.

Let us define the version of \(\textsf{TASEP}\) we are concerned with. For \(k=1,\ldots ,N\), denote by \(Y_k(t) \in \mathbb {Z}\) the position of the k-th particle from the right at time \(t \in \mathbb {Z}_{\ge 0}\). The configuration encoding the positions of the particles at time t is then

$$\begin{aligned} Y(t) = (Y_1(t)> Y_2(t)> \cdots > Y_N(t) ), \end{aligned}$$

with arbitrary initial configuration

$$\begin{aligned} Y(0) = {\varvec{y}} = (y_1> y_2> \cdots > y_N). \end{aligned}$$

Let \(p_t\), \(t\ge 1\), and \(q_k\), \(1\le k\le N\), be positive parameters. The random dynamics is then given by sequential updates from right to left, i.e. from the particle labeled 1 to the particle labeled N, as follows. At time t, the first particle jumps one step to the right with probability \(p_tq_1/(1+p_tq_1)\) and otherwise stays put. Assume next that we have already updated the position of the \((k-1)\)-th particle at time t (with \(k\ge 2\)). Then, the k-th particle updates its position at time t by jumping one step to the right with probability \(p_tq_k/(1+p_tq_k)\), provided that the neighboring site to its right is not occupied by the \((k-1)\)-th particle; otherwise, the particle remains in its current position. We call this model discrete-time (Bernoulli) \(\textsf{TASEP}\) and abbreviate it as \(\textsf{dTASEP}\).
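For illustration, the sequential update rule can be simulated directly. The following minimal sketch is ours (function and variable names are not from the text); `q[k]` holds the parameter of the \((k+1)\)-th particle, with positions listed from right to left as in the definition above.

```python
import random

def dtasep_step(Y, p_t, q):
    """One discrete-time dTASEP update, sequentially from the rightmost
    particle Y[0] to the leftmost Y[-1].

    Y: strictly decreasing list of particle positions.
    p_t: time parameter of this step; q: list of particle parameters.
    """
    Y = Y[:]  # do not mutate the input configuration
    for k in range(len(Y)):
        jump_prob = p_t * q[k] / (1 + p_t * q[k])
        attempts = random.random() < jump_prob
        # The (k-1)-th particle has already been updated; it blocks the jump
        # if it occupies the neighboring site to the right.
        blocked = k > 0 and Y[k] + 1 == Y[k - 1]
        if attempts and not blocked:
            Y[k] += 1
    return Y
```

The exclusion rule is visible in the `blocked` flag: a suppressed jump leaves the particle in place, and the strict ordering of positions is preserved by construction.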

To state our main result, we first define the following kernels of operators from \(\mathbb {Z}\) to \(\mathbb {Z}\). For \(1\le j\le k\) and \(0\le r\le t\), let

$$\begin{aligned} \mathcal {S}_{[j,k],(r,t]}(x,y)&:= \oint _{\Gamma _0}\frac{\textrm{d}z}{2\pi \textrm{i}z} \frac{\prod _{\ell =j}^{k}(q_\ell -z)\cdot \prod _{\ell =r+1}^{t}(1+p_\ell z)}{z^{x-y+k-j+1}}, \end{aligned}$$
(1.1)
$$\begin{aligned} \bar{\mathcal {S}}_{[j,k],(r,t]}(x,y)&:= -\prod _{\ell =j}^{k}(q_\ell -1)\cdot \oint _{\Gamma _{{\varvec{q}}}}\frac{\textrm{d}z}{2\pi \textrm{i}z} \frac{z^{y-x+k-j+1}}{\prod _{\ell =j}^{k}(q_\ell -z)\cdot \prod _{\ell =r+1}^{t}(1+p_\ell z)}, \end{aligned}$$
(1.2)

where \(\Gamma _0\) and \(\Gamma _{{\varvec{q}}}\) are simple closed contours with counterclockwise orientation, enclosing 0 and \(\{q_i\}_{i=1}^N\) as the only poles, respectively. The kernels \(\mathcal {S}_{[j,k],(r,t]}\) are compositions of certain random walk transition kernels, while \(\bar{\mathcal {S}}_{[j,k],(r,t]}\) are dual versions of \(\mathcal {S}_{[j,k],(r,t]}\), with the contribution coming from the poles \(\{q_i\}_{i=1}^N\) instead of 0; see Proposition 4.1 for the precise statement. The random walk/path interpretations of the kernels will be explained throughout §2–4.
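Since \(\Gamma _0\) encloses only the pole at 0, the integral (1.1) is a coefficient extraction: \(\mathcal {S}_{[j,k],(r,t]}(x,y)\) equals the coefficient of \(z^{x-y+k-j+1}\) in the polynomial \(\prod _{\ell =j}^{k}(q_\ell -z)\cdot \prod _{\ell =r+1}^{t}(1+p_\ell z)\). The following sketch (helper names are ours; `q`, `p` are 1-indexed via `q[l-1]`, `p[l-1]`) evaluates the kernel this way:

```python
def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (index = power of z)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def S_kernel(x, y, j, k, r, t, q, p):
    """S_{[j,k],(r,t]}(x,y): coefficient of z^(x-y+k-j+1) in
    prod_{l=j..k} (q_l - z) * prod_{l=r+1..t} (1 + p_l z),
    which is what the residue at z = 0 in (1.1) computes."""
    F = [1.0]
    for l in range(j, k + 1):
        F = poly_mul(F, [q[l - 1], -1.0])   # factor (q_l - z)
    for l in range(r + 1, t + 1):
        F = poly_mul(F, [1.0, p[l - 1]])    # factor (1 + p_l z)
    n = x - y + k - j + 1
    return F[n] if 0 <= n < len(F) else 0.0
```

In particular, the kernel vanishes whenever the required power of z lies outside the degree range of the polynomial, which reflects the finite-range structure of the underlying random walk steps.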

Let S be a geometric random walk (as defined in (4.24)). Let

$$\begin{aligned} \textrm{epi}({\varvec{y}}):= \{ (i,x):{0\le i\le N-1}, \, x> y_{i+1} \} \end{aligned}$$

be the (discrete) strict epigraph of the (discrete) curve \((i,y_{i+1})_{0\le i\le N-1}\). For \(n\le N\), let \(\tau \) be the first time \(\le n\) at which the random walk S hits \(\textrm{epi}({\varvec{y}})\):

$$\begin{aligned} \tau :=\min \{m\in \{0,\ldots ,n\}:S_m>y_{m+1}\}. \end{aligned}$$
(1.3)
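On a given trajectory, \(\tau \) is computed by a direct scan; the law of the walk (defined in (4.24)) plays no role in this step. A sketch with 0-based indexing, so that `y[m]` stands for \(y_{m+1}\):

```python
def hitting_time(S_path, y, n):
    """First m in {0,...,n} with S_m > y_{m+1}, i.e. the first time the walk
    enters the strict epigraph epi(y); returns None if no such m exists.
    S_path[m] = S_m and y[m] = y_{m+1} (0-indexed)."""
    for m in range(n + 1):
        if S_path[m] > y[m]:
            return m
    return None
```

The indicator \(\mathbbm {1}_{\tau <n}\) in (1.4) corresponds to keeping only trajectories for which this scan succeeds strictly before time n.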

The kernel encoding the initial configuration \({\varvec{y}}=(y_1>\cdots >y_N)\) is then expressed in terms of \(\bar{\mathcal {S}}\) and \(\tau \) as the expectation

$$\begin{aligned} \bar{\mathcal {S}}_{[1,n],(r,t]}^{\textrm{epi}({\varvec{y}})}(x,y):= \frac{\mathbb {E}_{S_0=x}[\bar{\mathcal {S}}_{[\tau +1,n],(r,t]}(S_\tau ,y)\mathbbm {1}_{\tau <n}]}{\prod _{\ell =1}^{n}(q_\ell -1)}. \end{aligned}$$
(1.4)

Let also \(Q_i(x,y):= q_i^{y-x}\mathbbm {1}_{\{y< x\}}\) for \(1\le i\le N\) and \(x,y\in \mathbb {Z}\) and \(Q_{(m,n]}:=Q_{m+1}\circ \cdots \circ Q_n\) for \(n>m\).

Finally, for two operators A and B with kernels A(x, y) and B(x, y), \(x,y\in \mathbb {Z}\), we define the composition operator \(A\circ B\) through the kernel \((A\circ B) (x,z):=\sum _{y\in \mathbb {Z}} A(x,y) B(y,z)\). With these notations, our main result is the following.
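For the kernels \(Q_i\), the indicator functions make the composition sum finite: the intermediate points must satisfy \(x> y_1> \cdots > z\). A sketch evaluating \(Q_{(m,n]}(x,z)\) directly (helper names are ours; `q` is 1-indexed via `q[i-1]`):

```python
def Q_kernel(i, x, y, q):
    """Q_i(x, y) = q_i^(y - x) * 1_{y < x}."""
    return q[i - 1] ** (y - x) if y < x else 0.0

def compose_Q(ks, x, z, q):
    """Kernel of Q_{k_1} o ... o Q_{k_m} at (x, z), for ks = [k_1, ..., k_m].
    The indicators force x > y_1 > ... > y_{m-1} > z, so the sum is finite."""
    if len(ks) == 1:
        return Q_kernel(ks[0], x, z, q)
    # each intermediate point must leave room for the remaining len(ks) - 1 steps
    return sum(Q_kernel(ks[0], x, y, q) * compose_Q(ks[1:], y, z, q)
               for y in range(z + len(ks) - 1, x))
```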

Theorem 1.1

(Multipoint distributions of \(\textsf{dTASEP}\) with particle- and time-inhomogeneous rates). Let \((Y(t))_{t\ge 0}=(Y_1(t)>\cdots >Y_N(t))_{t\ge 0}\) be the locations of N particles evolving according to the \(\textsf{dTASEP}\) dynamics with parameters \(\{p_t\}_{t\ge 1}\) and \(\{q_k\}_{k=1}^N\) and initial configuration \(Y(0)={\varvec{y}}\). Assume that \(q_k p_t<1\) for all k, t, and that \(q_k>1\) for all k. Then, the joint distribution of particle locations at time t is given by the Fredholm determinant

(1.5)

for any \(m\in \mathbb {N}\), \(1\le k_1<k_2<\cdots <k_m\le N\) and \((s_1,\ldots ,s_m)\in {\mathbb {R}^m}\), where \(\chi _s(k_i,x):= \mathbbm {1}_{x<s_i}\) and K is the kernel

$$\begin{aligned} K(m,x;n,x')=- Q_{(m,n]}{(x,x')}\mathbbm {1}_{n>m} + \mathcal {S}_{[1,m],(0,t]}\circ \bar{\mathcal {S}}_{[1,n],(0,t]}^{\textrm{epi}({\varvec{y}})} (x,x'). \end{aligned}$$
(1.6)

We remark that the condition \(q_k p_t<1\) for all k, t is equivalent to assuming that, for all k, t, the k-th particle attempts its t-th jump with probability \((q_k p_t)/(1+q_k p_t)<\frac{1}{2}\). The case when all the jump probabilities are greater than 1/2 can be analyzed using particle-hole duality (see for example [Fer04]). Our theorem does not cover the case where some of the jump probabilities are \(>\frac{1}{2}\) and others are \(<\frac{1}{2}\). This seems to be a common restriction, appearing for example also in [MR22, Assumption 1.1].

The assumption \(q_k>1\) is innocuous, as we will explain in Remark 4.8.

The original work [MQR21] dealt with homogeneous rates, while [Ara20] considered time-inhomogeneous rates only. In their recent work [MR22], Matetski and Remenik expressed interest in the case of particle-inhomogeneous rates, considering it “meaningful from a physical point of view”. However, they only considered this general case in the preliminary part of their analysis, and did not solve the corresponding biorthogonalization problem.

Let us mention that shape functions and hydrodynamic limits of inhomogeneous \(\textsf{TASEP}\) and of the corner growth model have been studied since the 1990s; see [SK99, Emr16, EJS21]. Furthermore, multipoint formulas for \(\textsf{TASEP}\) with particle- and time-inhomogeneous rates have been obtained e.g. in [BP08, KPS19, JR22, IMS22]. However, Theorem 1.1 expresses, for the first time in the case of a particle- and time-inhomogeneous \(\textsf{TASEP}\), the joint law of particle locations in terms of random walk hitting times. As we explain later in the introduction, we obtain these formulas in a quite different way from [MQR21, MR22], overcoming the technical difficulties that such a generalization presents.

The route we follow to prove Theorem 1.1 is also new, in various aspects, compared to the methods used so far, as we now discuss. The remarkable idea of expressing the multipoint law in terms of a random walk hitting problem is due to [MQR21]. Before that, the standard representation was in terms of a Fredholm determinant involving two families of biorthogonal functions \(\{\Psi _k^n(\cdot ), \Phi _k^n(\cdot ) \}_{1\le k \le n }\), where \(\{\Psi _k^n(\cdot )\}\) are typically given explicitly, but \(\{\Phi _k^n(\cdot )\}\) are to be determined by solving the biorthogonalization problem with respect to \(\{\Psi _k^n(\cdot )\}\). This formulation was first established in [Sas05, BFPS07], where the case of the flat (2-periodic) initial configuration for continuous-time \(\textsf{TASEP}\) (which corresponds to a biorthogonalization problem for shifted Charlier polynomials) was solved.

Starting from the determinantal formula of [BFPS07] for \(\textsf{TASEP}\) with general initial configuration, [MQR21] were able to solve the biorthogonalization problem in terms of a hitting time expectation of the form (1.4). At the core of this lay an expression of the family \(\{\Phi _k^n(\cdot )\}\) in terms of a terminal-boundary value problem for a discrete heat equation. The fact that the family \(\{\Phi _k^n(\cdot )\}\) obtained by this method solves the biorthogonalization problem with respect to \(\{\Psi _k^n(\cdot )\}\) was verified via a direct check. As explained in [MQR21] (see also [Rem22]), the intuition that led to such a guess was based on two points. Firstly, thanks to the “skew time reversibility” of \(\textsf{TASEP}\), the one-point distribution of \(\textsf{TASEP}\) with homogeneous parameters and a general initial profile is equal to the multi-point distribution of \(\textsf{TASEP}\) starting from the so-called step initial configuration (which represented the first solved example of a particle system [Joh00]). Secondly, the multi-point distribution of \(\textsf{TASEP}\) with the step initial condition has been known since [PS02] to possess the Fredholm determinant expression

where \(Q^n\) is the n-step transition kernel of a homogeneous, geometric random walk, \(K_t^{(n)}\) is the kernel of the one-point distribution of the model and \(\chi '_{s}(k_j,x):=\mathbbm {1}_{x>s_j}\). The expression \(Q^{k_1-k_m}\chi '_{s_1} Q^{k_2-k_1}\chi '_{s_2}\cdots Q^{k_m-k_{m-1}}\chi '_{s_m}\) can then be interpreted as the probability that a homogeneous, geometric random walk with transition kernel Q lies above \(s_1,\ldots ,s_m\) at times \(k_1,\ldots ,k_m\). We remark that, due to the aforementioned skew time reversibility of \(\textsf{TASEP}\), the levels \(s_1,\ldots ,s_m\) are related to the initial (rather than final) positions of the particles.

Our approach to Theorem 1.1 differs from the above, even though our guiding principle has been the already mentioned terminal-boundary value problem and a desire to better understand its foundations. We do not start from the determinantal formulas; instead, we work with the combinatorial foundations of discrete \(\textsf{TASEP}\) and its links, via intertwinings and Markov functions, to determinantal point processes. We first compute the transition kernel of \(\textsf{dTASEP}\) by using the dual, column-insertion version of the Robinson–Schensted–Knuth (\(\textsf{RSK}\)) correspondence, which we abbreviate as \(\textsf{dRSK}\). As it turns out, this combinatorial algorithm, viewed from a dynamical standpoint, encodes the \(\textsf{dTASEP}\) dynamics as a projection. Furthermore, the transition kernel of \(\textsf{dTASEP}\) intertwines with the transition kernel of the evolution of the shape of the tableaux generated by the \(\textsf{dRSK}\) dynamics and, thus, can be written as the latter kernel conjugated by an ‘intertwining’ kernel; see (2.25). The general link between \(\textsf{TASEP}\) dynamics and \(\textsf{RSK}\) correspondences is of course well known; see for example [DW08] and references therein. In particular, our approach can be regarded as a time-inhomogeneous generalization of [DW08] (see Case B: ‘Bernoulli jumps with blocking’).

Next, we interpret all the kernels appearing in our representation of the \(\textsf{dTASEP}\) transition kernel in terms of weights of ensembles of non-intersecting lattice paths; see (3.29). For our later goals, it is important to remark that these weights can be expressed in terms of one-step (local) transition operators.

The Lindström–Gessel–Viennot theorem then leads to a determinantal formulation for all these kernels, thus allowing us to view the transition distribution of \(\textsf{dTASEP}\) as a marginal of a determinantal point process; see (3.35). Using standard methods in the theory of determinantal point processes (as in [BFPS07, Joh03]), we express the fixed-time joint distribution of \(\textsf{dTASEP}\) particles as a Fredholm determinant; see Proposition 3.10. The correlation kernel of the Fredholm determinant involves the local operators encoding the transition weights of the path ensemble as well as the inverse of a matrix M; see (3.53)–(3.54). The geometric picture that we obtain through the non-intersecting path ensembles leads us to conclude that M is upper triangular and, therefore, explicitly invertible. This crucial aspect leads to the terminal-boundary value problem (Proposition 4.2), which we next solve to arrive at the random walk hitting formula (1.4). This task turns out to be more challenging than in [MQR21], since, in the case of particle-inhomogeneous rates, the solution is not spanned by polynomials; see Remark 4.4. In particular, we develop a careful double induction argument (see the proof of Proposition 4.6) that involves some subtle cancellations of inclusion–exclusion type that take place in the formulas.

As outlined in Sect. 1.1, the KPZ fixed point was constructed in [MQR21] as the 1:2:3 scaling limit, at large time and length scales, of the homogeneous continuous-time \(\textsf{TASEP}\). The present work paves the way to construct analogous processes from particle systems with variable, fast/slow, rates. We leave this task for future work.

1.3 Outline of the article

In Sect. 2 we start by presenting some combinatorial objects and, in particular, the dual, column \(\textsf{RSK}\) algorithm (\(\textsf{dRSK}\)) and its link with discrete-time \(\textsf{TASEP}\) (\(\textsf{dTASEP}\)); we also obtain an expression for the transition probability kernel of \(\textsf{dTASEP}\) via an intertwining relation. In Sect. 3 we re-express the transition kernel of \(\textsf{dTASEP}\) in terms of weights of ensembles of non-intersecting paths and determinantal point processes, thus arriving at an initial Fredholm determinant formula. In Sect. 4 we prove our main result, Theorem 1.1, first formulating a terminal-boundary value problem and then solving it, to arrive at the hitting time representation for the correlation kernel of \(\textsf{dTASEP}\) with inhomogeneous rates. To solve the terminal-boundary value problem, we first use path representations for certain subsets of \(\mathbb {Z}\), and then extend the solution to the whole space via a subtle double induction argument (see Proposition 4.6).

2 \(\textsf{TASEP}\) Dynamics and Combinatorics

In this section we present the main combinatorial tools that we need for the analysis of the \(\textsf{dTASEP}\) dynamics. In Sect. 2.1, we introduce a few standard algebraic combinatorial objects. In Sect. 2.2, we describe the dual, column \(\textsf{RSK}\) (\(\textsf{dRSK}\)) correspondence and its main properties. In Sect. 2.3, we discuss the link between \(\textsf{dTASEP}\) and \(\textsf{dRSK}\). Finally, in Sect. 2.4 we establish certain intertwining relations and deduce a preliminary expression for the \(\textsf{dTASEP}\) transition kernel.

2.1 Partitions, tableaux, and Schur polynomials

A partition \(\lambda \) of \(n\ge 0\) is a sequence \(\lambda = (\lambda _1\ge \lambda _2 \ge \cdots )\) of weakly decreasing non-negative integers, called parts of \(\lambda \), such that \(|\lambda |:=\sum _{i\ge 1} \lambda _i =n\). If \(\lambda \) is a partition of n, we write \(\lambda \vdash n\) and refer to n as the size of \(\lambda \). We will also say that \(\lambda \) is a partition without referring to its size. We will denote by \(\emptyset \) the only partition of 0. Any partition of n can be graphically represented as a Young diagram of size n, i.e. a collection of n cells arranged in left-justified rows, with \(\lambda _i\) cells in the i-th row. Every such cell can be identified with a pair \((i,j) \in \mathbb {Z}_{\ge 1}^2\) with row index i and column index j; thus, we may alternatively write \(\lambda \) as the set of such pairs:

$$\begin{aligned} \lambda = \{(i,j) :i\ge 1, \, 1\le j\le \lambda _i \}. \end{aligned}$$
(2.1)

The conjugate partition of \(\lambda \), which we denote by \(\lambda ^{\top }\), is defined by setting \(\lambda _i^{\top }\) to be the number of \(k\ge 1\) such that \(\lambda _k\ge i\); conjugating a partition corresponds to transposing the associated Young diagram. The length of \(\lambda \) is the number of its non-zero parts; since it clearly coincides with the first part of the conjugate partition \(\lambda ^{\top }\), we denote it by \(\lambda ^{\top }_1\).
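For illustration, conjugation is a one-line computation once partitions are stored as weakly decreasing lists of positive parts (a minimal sketch; the function name is ours):

```python
def conjugate(la):
    """Conjugate partition: la^T_i = #{k >= 1 : la_k >= i}, i.e. the
    transpose of the associated Young diagram."""
    if not la:
        return []
    return [sum(1 for part in la if part >= i) for i in range(1, la[0] + 1)]
```

Conjugation is an involution, and the length of \(\lambda \) can indeed be read off as the first part of the conjugate.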

We define the (discrete) Weyl chamber as

$$\begin{aligned} {\textsf{W}}_n:= \left\{ {\varvec{y}} = (y_1,\ldots ,y_n)\in \mathbb {Z}^n:y_1\ge y_2\ge \cdots \ge y_n \right\} . \end{aligned}$$
(2.2)

Throughout this section, elements of \({\textsf{W}}_n\) will be implicitly taken to have non-negative components and, thus, to be integer partitions of length \(\le n\). In later sections, we will drop this assumption and consider elements of \({\textsf{W}}_n\) with possibly negative components.

For any two partitions \(\lambda \) and \(\mu \), we write \(\mu \subseteq \lambda \) if \(\mu _i\le \lambda _i\) for all \(i\ge 1\), or equivalently if \(\mu \) is a subset of \(\lambda \), viewing the partitions as sets as in (2.1). A skew Young diagram \(\lambda /\mu \) is the set difference between two partitions \(\lambda \) and \(\mu \) such that \(\mu \subseteq \lambda \). The size of \(\lambda /\mu \), denoted by \(|\lambda /\mu |\), is the number of its cells, which equals \(|\lambda | - |\mu |\). If \(\lambda /\mu \) has at most one cell per column, we call it a horizontal strip; if \(\lambda /\mu \) has at most one cell per row, we call it a vertical strip. We say that two partitions \(\mu \) and \(\lambda \) interlace, and write \(\mu \prec \lambda \), if \(\lambda /\mu \) is a horizontal strip, or equivalently if \(\lambda _i \ge \mu _i \ge \lambda _{i+1}\) for all \(i\ge 1\).
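The interlacing condition is a finite check via the inequalities \(\lambda _i \ge \mu _i \ge \lambda _{i+1}\) (a sketch; parts beyond the length of a partition count as 0):

```python
def interlace(mu, la):
    """mu interlaces la (mu < la): la/mu is a horizontal strip, i.e.
    la_i >= mu_i >= la_{i+1} for all i. Missing parts are treated as 0."""
    get = lambda p, i: p[i] if i < len(p) else 0
    return all(get(la, i) >= get(mu, i) >= get(la, i + 1)
               for i in range(max(len(mu), len(la))))
```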

A Young tableau \({\textsf{T}}=\{{\textsf{T}}_{i,j} :(i,j)\in \lambda \}\) of shape \(\lambda \) is a filling of a Young diagram \(\lambda \) with elements of an alphabet \(A\subseteq \mathbb {Z}_{\ge 1}\). We write \({{\,\mathrm{\textrm{sh}}\,}}({\textsf{T}})\) for the shape of \({\textsf{T}}\). The transpose of \({\textsf{T}}\), denoted by \({\textsf{T}}^{\top }\), is the tableau of shape \(\lambda ^{\top }\) that is obtained from \({\textsf{T}}\) by exchanging its rows with its columns. A Young tableau \({\textsf{T}}\) is called column-strict if its entries weakly increase along rows and strictly increase down columns. Every column-strict Young tableau \({\textsf{T}}\) of shape \(\lambda \) can be alternatively represented as a sequence of interlacing partitions:

$$\begin{aligned} {\textsf{T}}\equiv \big ({\emptyset =:} \lambda ^{(0)} \prec \lambda ^{(1)} \prec \lambda ^{(2)} \prec \cdots \big ), \end{aligned}$$
(2.3)

where each \(\lambda ^{(k)}\) is the shape of the Young tableau obtained from \({\textsf{T}}\) by removing all the cells containing numbers \(>k\). By the column-strict property of \({\textsf{T}}\), we have \(\lambda ^{(k)} \in {\textsf{W}}_k\) for all k, and the partitions interlace. Clearly, \(\lambda ^{(k)}\) coincides with \(\lambda \) for k large enough; therefore, one can think of the sequence as finite, by stopping it at any \(\lambda ^{(k)}\) such that \(\lambda ^{(k)}=\lambda \). See Fig. 1 for an example of a column-strict Young tableau. Similarly, a Young tableau \({\textsf{T}}\) is called row-strict if its rows are strictly increasing and its columns are weakly increasing, or equivalently if \({\textsf{T}}^{\top }\) is column-strict.

Fig. 1

On the left-hand side, an example of a column-strict Young tableau of shape (6, 4, 2) and left edge (3, 2). On the right-hand side, its corresponding sequence of interlacing partitions
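The identification (2.3) of a column-strict tableau with its sequence of interlacing partitions can be sketched as follows (our own helper; tableaux are stored as lists of rows):

```python
def interlacing_sequence(T, N):
    """The sequence (la^(0), la^(1), ..., la^(N)) of (2.3) for a column-strict
    tableau T (list of rows) in the alphabet {1,...,N}: la^(k) is the shape
    obtained from T by removing all cells containing numbers > k."""
    seq = []
    for k in range(N + 1):
        # rows weakly increase, so the entries <= k form a prefix of each row
        shape = [sum(1 for entry in row if entry <= k) for row in T]
        seq.append([s for s in shape if s > 0])
    return seq
```

Column-strictness guarantees that consecutive shapes in the output interlace and that \(\lambda ^{(k)}\) has length \(\le k\), matching the discussion above.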

We define the left edge of a column-strict tableau \({\textsf{T}}= \big (\lambda ^{(0)} \prec \lambda ^{(1)} \prec \lambda ^{(2)} \prec \cdots \big )\) to be the partition

$$\begin{aligned} {{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{T}}):= \big (\lambda ^{(1)}_1\ge \lambda ^{(2)}_2\ge \cdots \big ). \end{aligned}$$
(2.4)

Notice that, as all entries of the k-th row of \({\textsf{T}}\) are \(\ge k\) by the column-strict property, \(\lambda ^{(k)}_k\) is simply the number of k’s in the k-th row of \({\textsf{T}}\). See again Fig. 1 for an example.
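Since \(\lambda ^{(k)}_k\) is the number of k’s in the k-th row, the left edge can be read off a tableau directly (a sketch with tableaux as lists of rows; the example in the test is our own, not the one of Fig. 1):

```python
def left_edge(T):
    """Left edge of a column-strict tableau T (list of rows): the k-th entry
    is the number of k's in the k-th row, i.e. lambda^(k)_k in (2.4)."""
    return [sum(1 for entry in row if entry == k)
            for k, row in enumerate(T, start=1)]
```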

Finally, we give two equivalent, combinatorial definitions of Schur polynomials. Let \(n\ge 1\) and \(\lambda \in {\textsf{W}}_n\). The Schur polynomial in n variables of shape \(\lambda \) is given by

$$\begin{aligned} \textrm{s}_{\lambda }(x_1,\ldots ,x_n):= \sum _{\begin{array}{c} \lambda ^{(0)} \prec \cdots \prec \lambda ^{(n)} :\\ \lambda ^{(n)}=\lambda \end{array}} \prod _{k=1}^n x_k^{|\lambda ^{(k)}/\lambda ^{(k-1)}|} = \sum _{{\textsf{T}}:{{\,\mathrm{\textrm{sh}}\,}}({\textsf{T}})=\lambda } \, \prod _{k=1}^n x_k^{|\{(i,j):{\textsf{T}}_{i,j}=k\}|}. \end{aligned}$$
(2.5)

The first sum is taken over all sequences \(\big (\lambda ^{(0)} \prec \cdots \prec \lambda ^{(n)}\big )\) of interlacing partitions such that \(\lambda ^{(k)} \in {\textsf{W}}_k\) for all k and \(\lambda ^{(n)}=\lambda \). The second sum is taken over all column-strict Young tableaux \({\textsf{T}}\) of shape \(\lambda \) in the alphabet \(\{1,\ldots ,n\}\). It is also convenient to define \(\textrm{s}_{\lambda }(x_1,\ldots ,x_n):=0\) whenever the length of \(\lambda \) exceeds n.
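The first sum in (2.5) can be evaluated directly on small examples by recursing over interlacing sequences (a sketch; names are ours, and the enumeration has exponential complexity, so it is meant only for small \(\lambda \)):

```python
from itertools import product

def schur(la, x):
    """Schur polynomial s_la(x_1,...,x_n), via the first sum in (2.5):
    sequences emptyset = la^(0) < ... < la^(n) = la with la^(k) in W_k."""
    la = tuple(p for p in la if p > 0)
    n = len(x)
    if len(la) > n:
        return 0  # convention: s_la = 0 if the length of la exceeds n
    def rec(lam, k):
        # sum over interlacing sequences ending at lam, using x_1,...,x_k
        if k == 0:
            return 1 if not lam else 0
        total = 0
        m = len(lam)
        # mu < lam means lam[i] >= mu[i] >= lam[i+1] for each i
        ranges = [range(lam[i + 1] if i + 1 < m else 0, lam[i] + 1)
                  for i in range(m)]
        for choice in product(*ranges):
            mu = tuple(p for p in choice if p > 0)
            if len(mu) > k - 1:
                continue  # mu must have length <= k-1, i.e. lie in W_{k-1}
            total += x[k - 1] ** (sum(lam) - sum(mu)) * rec(mu, k - 1)
        return total
    return rec(la, n)
```

The tests below check small cases against hand computations and the (well-known) symmetry of Schur polynomials in their variables.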

2.2 Dual column \(\textsf{RSK}\)

As we will see, the \(\textsf{TASEP}\) dynamics we are concerned with are encoded by a certain variation of the Robinson–Schensted–Knuth correspondence (\(\textsf{RSK}\)), a celebrated combinatorial algorithm [Knu70, Ful97, Sta99].

All the \(\textsf{RSK}\) variations map a matrix (input) to a pair of Young tableaux (output). They can be differentiated based on two key factors:

  • The input may be a non-negative integer matrix or a \(\{0,1\}\)-matrix (in the latter case, one usually talks about dual \(\textsf{RSK}\));

  • The algorithm may be based on the so-called row insertion or column insertion.

According to these factors, one obtains four variations of \(\textsf{RSK}\): row \(\textsf{RSK}\), column \(\textsf{RSK}\), dual row \(\textsf{RSK}\), and dual column \(\textsf{RSK}\). For our purposes we need the latter variation, dual column \(\textsf{RSK}\), which we abbreviate as \(\textsf{dRSK}\). We introduce it here and refer to [Ful97, A.4.3] for further details.

It is convenient to first define, for \(j\ge 1\), a mapping

$$\begin{aligned} \mathcal {I}_j :({\textsf{T}},x) \mapsto ({\textsf{T}}',y), \end{aligned}$$
(2.6)

which should be interpreted as the insertion of a number x into the j-th column of a tableau \({\textsf{T}}\). Here:

  1. (i)

    \({\textsf{T}}\) is an input column-strict Young tableau of shape \(\lambda \), with \(\lambda _1 \ge j-1\);

  2. (ii)

    x is an input positive integer such that, if \(j>1\) and \(\lambda ^{\top }_j = \lambda ^{\top }_{j-1}\), then \(x\le \max _{i} {\textsf{T}}_{i,j}\);

  3. (iii)

    \({\textsf{T}}'\) is an output column-strict Young tableau;

  4. (iv)

    y is either an output positive integer or a ‘stop symbol’ .

The mapping works as follows. For fixed \(j\ge 1\), if all entries \({\textsf{T}}_{i,j}\), \(1\le i\le \lambda _j^{\top }\), of the j-th column of \({\textsf{T}}\) are \(<x\) (so that, by (ii), we have \(\lambda ^{\top }_j < \lambda ^{\top }_{j-1}\)), then a new cell \((\lambda ^{\top }_j+1,j)\) containing x is added to the column, thus yielding a new column-strict tableau \({\textsf{T}}'\); the outputs are then . Otherwise, let i be the smallest integer such that \(x \le {\textsf{T}}_{i,j} =: y\); define \({\textsf{T}}'\) to be the same tableau as \({\textsf{T}}\) except for the (ij)-entry \({\textsf{T}}'_{i,j}:=x\); the outputs are then \(({\textsf{T}}',y)\).

We now define the column insertion algorithm as a composition of several mappings of the form (2.6). Consider the sequence

where k is the smallest integer such that . Notice that, by construction, every \(y^{(j-1)}\), \(1\le j\le k\), can be inserted into the j-th column of \(T^{(j-1)}\), in the sense that hypothesis (ii) above is satisfied. We then set \({\textsf{T}}'\) to be the outcome of the column insertion of x into the tableau \({\textsf{T}}\). Clearly, if \({\textsf{T}}\) is of size n, then \({\textsf{T}}'\) will be of size \(n+1\). See Fig. 2 for a graphical representation of the column insertion algorithm.

Fig. 2

Example of column insertion of an integer x into a tableau \({\textsf{T}}\). We start with the pair \(({\textsf{T}},x)\), on the left-hand side, and apply the mappings \(\mathcal {I}_1, \mathcal {I}_2, \ldots \) until we get a pair of the form . The tableau \({\textsf{T}}'\) is then the outcome of the column insertion. At the j-th step, the red number is to be inserted into the j-th column: either it replaces the blue number (first two steps) or it is inserted in a new cell at the end of the column (third step). In the former case, the blue number becomes the red one at the next step; in the latter case, a ‘stop symbol’ is returned and the procedure stops
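The column insertion just described can be sketched in code (tableaux as lists of rows; the sketch assumes the input satisfies hypothesis (ii) at every step, so the added cell always lands at the end of a row and the shape remains a Young diagram):

```python
def column_insert(T, x):
    """Column insertion of x into a column-strict tableau T, composing the
    maps I_1, I_2, ... of Sect. 2.2. T is a list of rows; returns the new
    tableau and the (row, column) of the added cell (0-indexed)."""
    T = [row[:] for row in T]
    j = 0
    while True:
        # entries of the j-th column (rows are left-justified, so a prefix)
        col = [T[i][j] for i in range(len(T)) if len(T[i]) > j]
        if all(entry < x for entry in col):
            i = len(col)  # new cell (i, j) goes at the bottom of column j
            if i == len(T):
                T.append([])
            T[i].append(x)
            return T, (i, j)
        # bump: smallest i with x <= T[i][j]; the bumped entry moves right
        i = min(i for i, entry in enumerate(col) if x <= entry)
        T[i][j], x = x, T[i][j]
        j += 1
```

Each pass of the loop is one application of \(\mathcal {I}_j\); the procedure stops exactly when a column receives a new cell, increasing the size of the tableau by one.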

We now construct the \(\textsf{dRSK}\) algorithm. Given an input matrix \(w=\{w_{i,j} :1\le i\le n, \, 1\le j\le N \}\) with entries in \(\{0,1\}\), we define a sequence

$$\begin{aligned} (\emptyset ,\emptyset ) =: ({\textsf{P}}(0),{\textsf{Q}}(0)) \mapsto ({\textsf{P}}(1),{\textsf{Q}}(1)) \mapsto \cdots \mapsto ({\textsf{P}}(n),{\textsf{Q}}(n))=:({\textsf{P}},{\textsf{Q}}) \end{aligned}$$
(2.7)

of Young tableaux pairs starting from the pair of empty tableaux and ending at the \(\textsf{dRSK}\) output pair \(({\textsf{P}},{\textsf{Q}})\) (for an example, see Fig. 3). Essentially, each \({\textsf{P}}(i)\) is constructed by column inserting into \({\textsf{P}}(i-1)\) the column indices j that correspond to ones in the i-th row of w, whereas each \({\textsf{Q}}(i)\) records the cells that are added in the construction of \({\textsf{P}}(i)\). More precisely, for all \(i=1,\ldots ,n\), given \(({\textsf{P}}(i-1),{\textsf{Q}}(i-1))\), the next pair \(({\textsf{P}}(i),{\textsf{Q}}(i))\) is obtained as the last element of the sequence

$$\begin{aligned} \begin{aligned} ({\textsf{P}}(i-1),{\textsf{Q}}(i-1))&=: ({\textsf{P}}(i,0),{\textsf{Q}}(i,0)) \mapsto ({\textsf{P}}(i,1),{\textsf{Q}}(i,1)) \\&\mapsto \cdots \mapsto ({\textsf{P}}(i,N),{\textsf{Q}}(i,N)) =: ({\textsf{P}}(i),{\textsf{Q}}(i)), \end{aligned} \end{aligned}$$

where, for \(j=1,\ldots ,N\):

  • if \(w_{i,j}=0\), then \({\textsf{P}}(i,j)={\textsf{P}}(i,j-1)\) and \({\textsf{Q}}(i,j)={\textsf{Q}}(i,j-1)\);

  • if \(w_{i,j}=1\), then

    • \({\textsf{P}}(i,j)\) is the tableau obtained by column inserting j into \({\textsf{P}}(i,j-1)\), and

    • \({\textsf{Q}}(i,j)\) is obtained from \({\textsf{Q}}(i,j-1)\) by adding a cell, filled with i, at the same location where a cell was added in the column insertion of j into \({\textsf{P}}(i,j-1)\).

By construction, for all \(i,j\), \({\textsf{P}}(i,j)\) and \({\textsf{Q}}(i,j)\) are Young tableaux of the same shape. Each \({\textsf{P}}(i,j)\) is column-strict. Moreover, it is not difficult to see that each \({\textsf{Q}}(i,j)\) is row-strict.
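To make the construction concrete, here is a minimal Python sketch of column insertion and of \(\textsf{dRSK}\) (our own illustration, not taken from the paper). Tableaux are stored as lists of columns, and we assume the standard bumping rule suggested by the insertion figure above: the inserted number replaces the topmost entry of the current column that is greater than or equal to it, or fills a new cell at the end of the column.

```python
from bisect import bisect_left

def column_insert(cols, x):
    """Column insert x into a tableau stored as a list of columns (each a
    strictly increasing list).  In the j-th column, x replaces the topmost
    entry >= x, which is then inserted into column j+1; if no such entry
    exists, x fills a new cell at the end of the column and the procedure
    stops.  Returns the (row, column) position of the new cell."""
    j = 0
    while True:
        if j == len(cols):
            cols.append([])
        col = cols[j]
        r = bisect_left(col, x)          # topmost entry >= x
        if r == len(col):                # no such entry: new cell at the end
            col.append(x)
            return r, j
        col[r], x = x, col[r]            # replace, carry the old entry onward
        j += 1

def dRSK(w):
    """dRSK of a {0,1}-matrix w (a list of rows): for each row i, column
    insert the column indices j with w[i][j] = 1 in increasing order, while
    Q records each newly added cell with the filling i."""
    P, Q = [], []
    for i, row in enumerate(w, start=1):
        for j, entry in enumerate(row, start=1):
            if entry:
                r, c = column_insert(P, j)
                if c == len(Q):
                    Q.append([])
                Q[c].append(i)           # the new cell is at the end of column c
    return P, Q
```

For instance, the \(3\times 3\) input matrix with ones at positions (1, 1), (1, 3), (2, 1), (2, 2), (3, 2), (3, 3) yields P with rows [1, 1, 3], [2, 2], [3] and Q with rows [1, 2, 3], [1, 2], [3]: one checks that P is column-strict, Q is row-strict, and the two shapes agree.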

Fig. 3

An example of the \(\textsf{dRSK}\) correspondence, constructed as in (2.7). An input \(\{0,1\}\)-matrix w yields a sequence of tableaux pairs, which terminates at the \(\textsf{dRSK}\) output pair \(({\textsf{P}},{\textsf{Q}})=({\textsf{P}}(3),{\textsf{Q}}(3))\)

In the next theorem we summarize the properties of this mapping that are useful for our purposes. They are all either immediate from the construction or easy to prove, and can be visualized in the example of Fig. 3. We refer e.g. to [Ful97, A.4.3] for a proof.

Theorem 2.1

The dual column Robinson–Schensted–Knuth correspondence \(\textsf{dRSK}:w \mapsto ({\textsf{P}},{\textsf{Q}})\) is a bijection between matrices with entries in \(\{0,1\}\) and pairs \(({\textsf{P}},{\textsf{Q}})\) of Young tableaux of the same shape such that \({\textsf{P}}\) is column-strict and \({\textsf{Q}}\) is row-strict. If the input matrix is \(n\times N\), then \({\textsf{P}}\) is in the alphabet \(\{1,\ldots ,N\}\) and \({\textsf{Q}}\) is in the alphabet \(\{1,\ldots ,n\}\), so one can identify

$$\begin{aligned} {\textsf{P}}= \big (\lambda ^{(0)} \prec \cdots \prec \lambda ^{(N)}\big ) \qquad \text {and}\qquad {\textsf{Q}}^{\top } = \big (\mu ^{(0),\top } \prec \cdots \prec \mu ^{(n),\top }\big ) \,, \end{aligned}$$

where \(\lambda ^{(k)}, \mu ^{(k),\top } \in {\textsf{W}}_k\) for all k. Referring to the sequence of pairs (2.7) that defines \(\textsf{dRSK}\), we then have

$$\begin{aligned} \mu ^{(i)}= {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(i)) \qquad \text {for } i=1,\ldots ,n. \end{aligned}$$
(2.8)

Moreover, we have

$$\begin{aligned} \sum _{i=1}^n w_{ij}&= \big |\lambda ^{(j)}/\lambda ^{(j-1)}\big |{} & {} \text {for all } j=1,\ldots ,N , \end{aligned}$$
(2.9)
$$\begin{aligned} \sum _{j=1}^N w_{ij}&=\big |\mu ^{(i)}/\mu ^{(i-1)}\big |{} & {} \text {for all } i=1,\ldots ,n . \end{aligned}$$
(2.10)

2.3 \(\textsf{dTASEP}\) dynamics and \(\textsf{dRSK}\)

Let us now elaborate on the definition of \(\textsf{dTASEP}\) given in the introduction and describe its relation to the dynamics of \(\textsf{dRSK}\). Recall that the N-particle \(\textsf{dTASEP}\) is encoded by the discrete-time Markov chain \((Y(t))_{t\ge 0}\) of particle configurations \(Y(t) = (Y_1(t)> Y_2(t)> \cdots > Y_N(t))\), where \(Y_k(t)\) is the location of the k-th particle from the right. We consider an arbitrary initial configuration \(Y(0) = {\varvec{y}} = (y_1> y_2> \cdots > y_N)\). Let \({\varvec{p}}=(p_t)_{t\ge 1}\) and \({\varvec{q}}=(q_1,\ldots ,q_N)\) be positive parameters and let \(W = \{ W_{t,k} :t \ge 1, \, 1\le k\le N\}\) be a collection of independent Bernoulli random variables with

$$\begin{aligned} \mathbb {P}\left( W_{t,k} = 0 \right) = \frac{1}{1 + p_t q_k }, \qquad \qquad \mathbb {P}\left( W_{t,k} = 1 \right) = \frac{p_t q_k}{1 + p_t q_k}. \end{aligned}$$
(2.11)

The random dynamics is then given by sequential updates from right to left, i.e. from the particle labeled 1 to the particle labeled N, driven by these random variables as follows:

$$\begin{aligned} Y_k(t): = \min \left\{ Y_{k-1}(t)-1, \, Y_{k}(t-1) + W_{t,k} \right\} , \end{aligned}$$
(2.12)

with the convention that \(Y_0(t) = \infty \) for all \(t \ge 0\). These dynamics clearly preserve the ordering of the particles (exclusion rule). We will abbreviate the N-particle \(\textsf{dTASEP}\) with parameters \({\varvec{p}}\) and \({\varvec{q}}\) as \(\textsf{dTASEP}(N;{\varvec{p}},{\varvec{q}})\).
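A minimal simulation sketch of the update rule (2.12), with the jump probabilities taken from (2.11) (the function and variable names are ours):

```python
import random

def dtasep_step(Y, p_t, q):
    """One sequential dTASEP update (2.12): particles Y[0] > Y[1] > ... are
    updated from right to left; particle k attempts a unit jump with
    probability p_t*q[k]/(1 + p_t*q[k]) and is blocked by its right
    neighbour (exclusion rule)."""
    Y = list(Y)
    for k in range(len(Y)):
        jump = int(random.random() < p_t * q[k] / (1 + p_t * q[k]))
        block = Y[k - 1] - 1 if k > 0 else float("inf")
        Y[k] = min(block, Y[k] + jump)
    return Y
```

Iterating `dtasep_step` produces a trajectory \((Y(t))_{t\ge 0}\); the strict ordering of the particles is preserved at every step, by construction.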

The construction of \(\textsf{dRSK}\) given in Sect. 2.2 (see in particular (2.7) and Fig. 3) is ‘dynamic’: at the t-th step, the tableau pair \(({\textsf{P}}(t-1),{\textsf{Q}}(t-1))\) and the t-th row of the input matrix w are used to generate a new tableau pair \(({\textsf{P}}(t),{\textsf{Q}}(t))\) through the column insertion algorithm. We will now see how this dynamic procedure encodes the evolution of the \(\textsf{dTASEP}\), if one interprets the input matrix entries as the Bernoulli random variables governing the particle jumps.

Notice that the collection \(W = \{ W_{t,k} :t \ge 1, \, 1\le k\le N\}\) of Bernoulli random variables defined in (2.11) can be seen as a (random) matrix with infinitely many rows. For all \(t\ge 0\), let \(({\textsf{P}}(t),{\textsf{Q}}(t))\) be the tableau pair obtained by applying \(\textsf{dRSK}\) to the (random) matrix \(\{ W_{i,j} :1\le i\le t, \, 1\le j\le N\}\) consisting of the first t rows of W. As each \({\textsf{P}}(t)\) is a column-strict tableau, we may write \({\textsf{P}}(t) = \big (\lambda ^{(0)}(t) \prec \lambda ^{(1)}(t) \prec \cdots \prec \lambda ^{(N)}(t)\big )\) as a sequence of interlacing partitions; see (2.3).

Recall from (2.4) that the left edge of \({\textsf{P}}(t)\) is the partition \({{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = (\lambda ^{(1)}_1(t) \ge \cdots \ge \lambda ^{(N)}_N(t))\), where each \(\lambda ^{(k)}_k(t)\) is the number of k’s in the k-th row of \({\textsf{P}}(t)\). It is a consequence of the column insertion algorithm that the time evolution of \({{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t))\) is autonomous from any additional information carried by \({\textsf{P}}(t)\). To see this, suppose that the first \(t-1\) rows of W have been inserted, yielding a \({\textsf{P}}\)-tableau \({\textsf{P}}(t-1)\). If \(W_{t,1}=1\), then a 1 is column inserted into \({\textsf{P}}(t-1)\), thus yielding a new tableau \({\textsf{P}}(t,1)\) (according to the notation of Sect. 2.2) that contains one more 1 in the first row. As the subsequent insertion of any \(k>1\) does not affect the cells containing 1’s, we have \(\lambda ^{(1)}_1(t) = \lambda ^{(1)}_1(t-1)+W_{t,1}\). Suppose now that, at time t, for some \(k\ge 2\), the numbers \(<k\) have been sequentially inserted, thus yielding a tableau \({\textsf{P}}(t,k-1)\). Now, if \(W_{t,k}=1\), then \({\textsf{P}}(t,k)\) is generated by column inserting a k into \({\textsf{P}}(t,k-1)\): if there are more \((k-1)\)’s in the \((k-1)\)-th row than k’s in the k-th row, then the ‘new’ k will end up in the k-th row; however, if there are as many \((k-1)\)’s in the \((k-1)\)-th row as there are k’s in the k-th row, then the ‘new’ k will end up in the j-th row, for some \(j<k\). Again, since the cells containing k are not affected by subsequent insertions of larger numbers, we conclude that

$$\begin{aligned} \lambda ^{(k)}_k(t) = {\left\{ \begin{array}{ll} \lambda ^{(k)}_k(t-1) + W_{t,k} &{}\quad \text {if } \lambda ^{(k-1)}_{k-1}(t) > \lambda ^{(k)}_k(t-1), \\ \lambda ^{(k)}_k(t-1) &{}\quad \text {if } \lambda ^{(k-1)}_{k-1}(t) = \lambda ^{(k)}_k(t-1). \end{array}\right. } \end{aligned}$$
(2.13)

The latter formula is also valid for \(k=1\), if we adopt the convention \(\lambda ^{(0)}_0(t)=\infty \) for all \(t\ge 0\). Notice that the update rules (2.13) must be applied sequentially, from \(k=1\) to \(k=N\). It is then straightforward to check that the N-tuple

$$\begin{aligned} \big (\lambda ^{(k)}_k(t)-k:1\le k\le N\big ) =\big (\lambda ^{(1)}_1(t)-1> \lambda ^{(2)}_2(t)-2> \cdots > \lambda ^{(N)}_N(t)-N \big ) \end{aligned}$$

satisfies the same recursion (2.12) as the \(\textsf{dTASEP}\) particle configuration.
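This equivalence can be checked directly by coupling the two sequential recursions through the same \(\{0,1\}\) draws; here is a minimal Python sketch (the flat initial condition \(\lambda ^{(k)}_k(0)=0\) is chosen purely for illustration):

```python
import random

def step_left_edge(lam, w):
    """Sequential update (2.13) of (lam^{(k)}_k)_{k=1..N}, applied from
    k = 1 to k = N, with the convention lam^{(0)}_0 = infinity."""
    lam, prev = list(lam), float("inf")
    for k in range(len(lam)):
        if prev > lam[k]:
            lam[k] += w[k]
        prev = lam[k]
    return lam

def step_dtasep(Y, w):
    """Sequential update (2.12), driven by the same 0/1 weights."""
    Y = list(Y)
    for k in range(len(Y)):
        block = Y[k - 1] - 1 if k > 0 else float("inf")
        Y[k] = min(block, Y[k] + w[k])
    return Y

random.seed(42)
N = 5
lam = [0] * N                               # lam^{(k)}_k(0) = 0 for all k
Y = [lam[k] - (k + 1) for k in range(N)]    # Y_k = lam^{(k)}_k - k
for _ in range(200):
    w = [random.randint(0, 1) for _ in range(N)]
    lam, Y = step_left_edge(lam, w), step_dtasep(Y, w)
    assert Y == [lam[k] - (k + 1) for k in range(N)]
```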

Remark 2.2

Integer partitions coming from Young tableaux have of course nonnegative parts. As a result, the transition kernels arising in the \(\textsf{dRSK}\) dynamics that will be computed in the next subsection will act, in principle, on elements of \({\textsf{W}}_N\) with nonnegative components. On the other hand, \(\textsf{dTASEP}\) particles may occupy any site of \(\mathbb {Z}\). However, this is not an issue: the kernels coming from \(\textsf{dRSK}\) can be extended to elements of \({\textsf{W}}_N\) with components of any sign, simply by shifting all the parts by the same (integer) amount.

2.4 Transition probabilities for \(\textsf{dRSK}\) and \(\textsf{dTASEP}\)

We now study the evolution of the \({\textsf{P}}\)- and \({\textsf{Q}}\)-tableaux under the \(\textsf{dRSK}\) dynamics considered in Sect. 2.3. This will yield useful formulas for the transition probabilities of \(\textsf{dTASEP}(N;{\varvec{p}},{\varvec{q}})\), as defined in (2.11)–(2.12).

By (2.11), the joint probability distribution of the Bernoulli weights up to time t is

$$\begin{aligned} \mathbb {P}(W_{i,j}=w_{i,j}:1\le i\le t, 1\le j\le N) = \frac{1}{Z^{{\varvec{p}},{\varvec{q}}}_{(0,t]}} \prod _{i=1}^t p_i^{\sum _{j=1}^N w_{i,j}} \prod _{j=1}^N q_j^{\sum _{i=1}^t w_{i,j}} , \end{aligned}$$

where \(\{w_{i,j}:1\le i\le t, 1\le j\le N\}\) is any \(\{0,1\}\)-matrix and, for \(0\le r< t\),

$$\begin{aligned} Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]}:= \prod _{i=r+1}^t \prod _{j=1}^N (1+ p_i q_j). \end{aligned}$$
(2.14)

Then, by Theorem 2.1, the pushforward law of the tableaux under the \(\textsf{dRSK}\) bijection at time t is given by

$$\begin{aligned} \begin{aligned}&\mathbb {P}\Big ( {\textsf{P}}(t)=\big (\lambda ^{(0)}\prec \cdots \prec \lambda ^{(N)}\big ),\, {\textsf{Q}}(t)^\top =\big (\mu ^{(0),\top } \prec \cdots \prec \mu ^{(t),\top }\big ) \Big ) \\&\quad = \;\mathbbm {1}_{\lambda ^{(N)}=\mu ^{(t)}} \frac{1}{Z^{{\varvec{p}},{\varvec{q}}}_{(0,t]}} \prod _{i=1}^t p_i^{|\mu ^{(i)}/\mu ^{(i-1)}|} \prod _{j=1}^N q_j^{|\lambda ^{(j)} / \lambda ^{(j-1)}|}, \end{aligned} \end{aligned}$$
(2.15)

where \(\big (\lambda ^{(0)}\prec \cdots \prec \lambda ^{(N)}\big )\) and \(\big (\mu ^{(0),\top }\prec \cdots \prec \mu ^{(t),\top }\big )\) are any sequences of interlacing partitions such that \(\lambda ^{(k)}, \mu ^{(k),\top } \in {\textsf{W}}_k\) for all k.

It follows from (2.15) and from the definition (2.5) of Schur polynomials that the marginal law of the common shape of \({\textsf{P}}(t)\) and \({\textsf{Q}}(t)\) is given by a Schur measure:

$$\begin{aligned} \mathbb {P}\big ( {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t)) = {{\,\mathrm{\textrm{sh}}\,}}({\textsf{Q}}(t)) = \lambda \big ) = \frac{1}{Z^{{\varvec{p}},{\varvec{q}}}_{(0,t]}} \, \textrm{s}_{\lambda ^{\top }}\big ({\varvec{p}}_{[1,t]} \big ) \, \textrm{s}_{\lambda } ({\varvec{q}}) , \end{aligned}$$
(2.16)

where \({\varvec{p}}_{[1,t]}:= (p_1,\ldots ,p_t)\). By summing the above probabilities over all partitions \(\lambda \), one obtains the so-called dual Cauchy identity (see e.g. [Sta99, §7.14]):

$$\begin{aligned} \sum _{\lambda } \textrm{s}_{\lambda ^\top }\big ({\varvec{p}}_{[1,t]}\big ) \, \textrm{s}_{\lambda }({\varvec{q}}) = \prod _{\begin{array}{c} 1\le i\le t \\ 1\le j\le N \end{array}} (1+p_iq_j). \end{aligned}$$
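The identity is easy to verify numerically for small t and N, computing Schur polynomials through the standard branching rule over interlacing partitions; a Python sketch (our own, with arbitrarily chosen parameter values):

```python
from itertools import product

def schur(lam, xs):
    """s_lambda(x_1, ..., x_n) via the branching rule: peel off the last
    variable, summing over partitions mu interlacing lam."""
    n = len(xs)
    lam = tuple(lam) + (0,) * (n - len(lam))
    if n == 1:
        return xs[0] ** lam[0]
    total = 0.0
    # mu interlaces lam: lam_{i+1} <= mu_i <= lam_i for i = 1, ..., n-1
    for mu in product(*[range(lam[i + 1], lam[i] + 1) for i in range(n - 1)]):
        total += schur(mu, xs[:-1]) * xs[-1] ** (sum(lam) - sum(mu))
    return total

def transpose(lam):
    lam = [p for p in lam if p > 0]
    if not lam:
        return ()
    return tuple(sum(1 for p in lam if p > j) for j in range(lam[0]))

p = (0.3, 0.7)                 # p_1, p_2 (t = 2; values arbitrary)
q = (0.5, 1.1, 0.2)            # q_1, q_2, q_3 (N = 3)
# lambda runs over partitions in a 3 x 2 box; outside it, a Schur factor vanishes
lhs = sum(schur(transpose(lam), p) * schur(lam, q)
          for lam in product(range(len(p), -1, -1), repeat=len(q))
          if all(lam[i] >= lam[i + 1] for i in range(len(q) - 1)))
rhs = 1.0
for pi in p:
    for qj in q:
        rhs *= 1 + pi * qj
assert abs(lhs - rhs) < 1e-9
```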

Recalling (2.8) and taking a marginal of (2.15), we see that the joint distribution of the shapes of the \({\textsf{P}}\)-tableaux up to time t is given by

$$\begin{aligned} \begin{aligned}&\mathbb {P}\Big ( \big ({{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)),\ldots ,{{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t)) \big ) = \big ( \mu ^{(0)},\ldots ,\mu ^{(t)}\big ) \Big ) \\&=\mathbb {P}\Big ({\textsf{Q}}(t)^\top =\big (\mu ^{(0),\top } \prec \cdots \prec \mu ^{(t),\top }\big ) \Big ) \\&=\sum _{\begin{array}{c} \lambda ^{(0)} \prec \cdots \prec \lambda ^{(N)}: \\ \lambda ^{(N)}=\mu ^{(t)} \end{array}} \frac{1}{Z^{{\varvec{p}},{\varvec{q}}}_{(0,t]}} \prod _{i=1}^t p_i^{|\mu ^{(i)}/\mu ^{(i-1)}|} \prod _{j=1}^N q_j^{|\lambda ^{(j)} / \lambda ^{(j-1)}|} = \frac{1}{Z^{{\varvec{p}},{\varvec{q}}}_{(0,t]}} \prod _{i=1}^t p_i^{|\mu ^{(i)}/\mu ^{(i-1)}|} \textrm{s}_{\mu ^{(t)}}({\varvec{q}}) . \end{aligned} \end{aligned}$$
(2.17)

For \(0\le r<t\), define now the kernels \(\hat{{{\mathcal {R}}} }_{(r,t]}\) and \({{\mathcal {R}}} _{(r,t]}\) by setting

$$\begin{aligned} \hat{{{\mathcal {R}}} }_{(r,t]}(\mu ,\lambda ):= \frac{\textrm{s}_{\lambda }({\varvec{q}}) }{\textrm{s}_{\mu }({\varvec{q}}) } {{\mathcal {R}}} _{(r,t]}(\mu ,\lambda ):= \frac{\textrm{s}_{\lambda }({\varvec{q}}) }{\textrm{s}_{\mu }({\varvec{q}}) } \frac{1}{Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\begin{array}{c} \nu ^{(r)} \prec \nu ^{(r+1)} \prec \cdots \prec \nu ^{(t)}: \\ \nu ^{(r)} = \mu ^\top , \, \nu ^{(t)}=\lambda ^\top \end{array}} \prod _{i=r+1}^t p_i^{|\nu ^{(i)} / \nu ^{(i-1)} |} \end{aligned}$$
(2.18)

for any \(\mu ,\lambda \in {\textsf{W}}_N\), where \(\nu ^{(k)} \in {\textsf{W}}_k\) for all k. From the first equality, we see that \(\hat{{{\mathcal {R}}} }_{(r,t]}\) can be interpreted as a Doob h-transform of \({{\mathcal {R}}} _{(r,t]}\), with Schur polynomials as h-functions (for a precise account of Doob’s h-transforms, see e.g. [RW00, Doo01]). It follows immediately from (2.17) that \(\hat{{{\mathcal {R}}} }_{(r,t]}\) is the transition kernel of the shape of the \({\textsf{P}}\)-tableau from time r to time t:

$$\begin{aligned} \hat{{{\mathcal {R}}} }_{(r,t]}(\mu ,\lambda )= & {} \mathbb {P}\left( {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t)) =\lambda \;\bigg |\; {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)) =\mu , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r-1)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)) \right) \nonumber \\= & {} \mathbb {P}\left( {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t)) =\lambda \;\bigg |\; {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)) =\mu \right) . \end{aligned}$$
(2.19)

Next, define the kernels \(\hat{K}\) and K by setting

$$\begin{aligned} \hat{K}(\lambda ,{\varvec{y}}) := \frac{1}{\textrm{s}_\lambda ({\varvec{q}})} K(\lambda ,{\varvec{y}}) := \frac{1}{\textrm{s}_\lambda ({\varvec{q}})} \sum _{\begin{array}{c} \lambda ^{(0)}\prec \cdots \prec \lambda ^{(N)}=\lambda : \\ \big (\lambda ^{(1)}_1,\ldots ,\lambda ^{(N)}_N\big ) = {\varvec{y}} \end{array}} \prod _{j=1}^N q_j^{|\lambda ^{(j)} / \lambda ^{(j-1)}|} \end{aligned}$$
(2.20)

for any \(\lambda , {\varvec{y}} \in {\textsf{W}}_N\), where, as usual, \(\lambda ^{(k)} \in {\textsf{W}}_k\) for all k. Notice that K is an unnormalised version of \(\hat{K}\), which is a probability kernel. It follows from (2.15) and (2.17) that, for all \(t\ge 0\),

$$\begin{aligned} \begin{aligned} \hat{K}(\lambda ,{\varvec{y}})&= \mathbb {P}\left( {{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = {\varvec{y}} \;\bigg |\; {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t)) =\lambda , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t-1)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)) \right) \\&= \mathbb {P}\left( {{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = {\varvec{y}} \;\bigg |\; {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t)) =\lambda \right) . \end{aligned} \end{aligned}$$
(2.21)

Finally, recall from Sect. 2.3 that the left edge of \({\textsf{P}}\) evolves as a Markov chain in its own filtration (i.e., autonomously from the rest of \({\textsf{P}}\)). Thus, we may write its transition kernel from time r to time t, for \(0\le r<t\), as

$$\begin{aligned} \begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}}, {\varvec{y}}'):= \;&\mathbb {P}\left( {{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = {\varvec{y}} ' \;\bigg |\; {{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(r)) = {\varvec{y}} \right) \\ = \;&\mathbb {P}\left( {{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = {\varvec{y}}' \;\bigg |\; {{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(r)) = {\varvec{y}}, {\textsf{P}}(r),\ldots ,{\textsf{P}}(0) \right) \end{aligned} \end{aligned}$$
(2.22)

for \({\varvec{y}}, {\varvec{y}}' \in {\textsf{W}}_N\). We will soon derive explicit expressions for the kernel \({{\mathcal {Q}}} _{(r,t]}\) defined above.

The \({{\mathcal {Q}}} \)- and \(\hat{{{\mathcal {R}}} }\)-kernels are transition kernels of the left edge and of the shape of the \({\textsf{P}}\)-tableau, respectively; on the other hand, \(\hat{K}\) encodes the conditional law of the left edge of \({\textsf{P}}\) given its shape at any given time. Therefore, from the theory of Markov functions (see e.g. [RP81]), we expect these kernels to satisfy intertwining relations, and this is indeed the case. We state the result in the next proposition and, for completeness, we also provide a proof, following [DW08].

Proposition 2.3

For \(0\le r<t\), the following intertwining relations between operators from \({\textsf{W}}_N\) to \({\textsf{W}}_N\) hold:

$$\begin{aligned} \hat{{{\mathcal {R}}} }_{(r,t]} \hat{K} = \hat{K} \, {{\mathcal {Q}}} _{(r,t]} \qquad \text {and} \qquad {{\mathcal {R}}} _{(r,t]} K = K \, {{\mathcal {Q}}} _{(r,t]} . \end{aligned}$$
(2.23)

Proof

Let \({\varvec{y}}\in {\textsf{W}}_N\). By (2.21) and (2.19), we have

$$\begin{aligned} \begin{aligned}&\mathbb {P}\big ({{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = {\varvec{y}} \;\big | {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)) \big ) \\&\quad = \; \mathbb {E}\big [ \mathbb {P}\big ({{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = {\varvec{y}} \;\big | {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t)) \big ) \;\big | {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)) \big ] \\&\quad = \; \mathbb {E}\big [ \hat{K}({{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(t)), {\varvec{y}}) \;\big | {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)) \big ] = \sum _{\lambda } \hat{{{\mathcal {R}}} }_{(r,t]}( {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)), \lambda ) \, \hat{K}(\lambda , {\varvec{y}}). \end{aligned} \end{aligned}$$

We point out that, by the definition of \(\hat{{{\mathcal {R}}} }_{(r,t]}\) in (2.18), \(\hat{{{\mathcal {R}}} }_{(r,t]}( {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)), \lambda )\) is non-zero only for a finite number of partitions \(\lambda \).

On the other hand, by (2.22) and (2.21), we have

$$\begin{aligned} \begin{aligned}&\mathbb {P}\big ({{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = {\varvec{y}} \;\big | {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)) \big ) \\&\quad = \; \mathbb {E}\big [ \mathbb {P}\big ({{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(t)) = {\varvec{y}} \;\big |\; {\textsf{P}}(0), \ldots , {\textsf{P}}(r) \big ) \;\big | {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)) \big ] \\&\quad = \; \mathbb {E}\big [ {{\mathcal {Q}}} _{(r,t]}({{\,\mathrm{\mathrm {l-edge}}\,}}({\textsf{P}}(r)), {\varvec{y}}) \;\big | {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(0)), \ldots , {{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)) \big ] \\&\quad = \sum _{\lambda } \hat{K}({{\,\mathrm{\textrm{sh}}\,}}({\textsf{P}}(r)), \lambda ) \, {{\mathcal {Q}}} _{(r,t]}(\lambda ,{\varvec{y}}). \end{aligned} \end{aligned}$$

Comparing the two expressions above leads to the first intertwining relation in (2.23). The second one follows from the first, by using (2.18) and (2.20) and noting that the Schur polynomials cancel out. \(\square \)

Remark 2.4

Notice that, by construction, \({{\mathcal {Q}}} _{(r,t]}({\varvec{y}}, {\varvec{y}}')\) equals zero unless \(y_j\le y_j'\) for all \(1\le j\le N\).

Define now the modified kernels

$$\begin{aligned} \Lambda (\lambda ,{\varvec{y}}) := q_1^{-y_1} \cdots q_N^{-y_N} K(\lambda , {\varvec{y}} ), \qquad \hat{{{\mathcal {Q}}} }_{(r,t]}({\varvec{y}},{\varvec{y}}'):= \Bigg ( \prod _{j=1}^N q_j^{y_j-y_j'}\Bigg ) {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}') . \end{aligned}$$
(2.24)

It is straightforward to check that the second intertwining relation in (2.23) still holds when K is replaced with \(\Lambda \) and \({{\mathcal {Q}}} _{(r,t]}\) with \(\hat{{{\mathcal {Q}}} }_{(r,t]}\). Moreover, it was proven in [DW08, Prop. 3] that \(\Lambda \) is invertible (with an explicit inverse). This provides an explicit expression for the kernel \(\hat{{{\mathcal {Q}}} }_{(r,t]}\). We summarize these facts in the next proposition.

Proposition 2.5

The intertwining relation \({{\mathcal {R}}} _{(r,t]} \Lambda = \Lambda \, \hat{{{\mathcal {Q}}} }_{(r,t]}\) holds. Moreover, the operator \(\Lambda \) is invertible, so that

$$\begin{aligned} \hat{{{\mathcal {Q}}} }_{(r,t]} = \Lambda ^{-1} {{\mathcal {R}}} _{(r,t]} \, \Lambda . \end{aligned}$$
(2.25)

In the next section we will interpret (2.25) in terms of weights of non-intersecting paths.

3 Path Ensembles and Determinantal Point Processes

This section concerns the non-intersecting path constructions that lie at the core of our approach. In Sect. 3.1, we introduce certain ‘local’ Toeplitz operators that we will use throughout this work. In Sect. 3.2, we provide a non-intersecting path interpretation of the \(\textsf{dTASEP}\) transition kernel in terms of these local operators. In Sect. 3.3, we deduce an expression for the law of \(\textsf{dTASEP}\) in terms of a determinantal point process, which we then study in Sect. 3.4, obtaining an initial expression for its correlation kernel in terms of biorthogonal functions and local operators.

3.1 Local operators

We will express the weights of the path ensembles in terms of convolutions of local operators, which we now introduce. Let us first define the conventions for the operator formalism. For an operator A defined through a kernel \((A(x,y) :x\in X, y\in Y)\) on suitable spaces X, Y and a function (or vector) \(f=(f(y) :y\in Y)\), we define the function \((A\circ f)(x):=\sum _{y\in Y} A(x,y) f(y)\), whenever the sum is absolutely convergent. Similarly, for a function \(g=(g(x) :x\in X)\), we define the function \((g\circ A)(y):=\sum _{x \in X} g(x) A(x,y)\). For two operators A and B with kernels \((A(x,y) :x\in X, y\in Y)\) and \((B(y,z) :y\in Y, z\in Z)\), respectively, we define the operator \(A\circ B\) through the kernel \((A\circ B) (x,z):=\sum _{y\in Y} A(x,y) B(y,z)\). Finally, we define the adjoint of A as the operator \(A^*\) with kernel \(A^*(x,y):= A(y,x)\).

Let us now introduce some specific local operators we are concerned with. Recall from Sect. 2.3 that we fixed positive parameters \({\varvec{p}}=(p_i)_{i\ge 1}\) and \({\varvec{q}}=(q_1,\ldots ,q_N)\).

The first family of operators encodes geometric jumps weakly to the right: for \(i=1,\ldots ,N\), let

$$\begin{aligned} Q^\dagger _i(x,y):= q_i^{y-x}\mathbbm {1}_{\{y\ge x\}}, \qquad x,y\in \mathbb {Z}. \end{aligned}$$
(3.1)

We also define a family of operators encoding geometric jumps strictly to the left:

$$\begin{aligned} Q_i(x,y):= q_i^{y-x}\mathbbm {1}_{\{y< x\}},\qquad x,y\in \mathbb {Z}. \end{aligned}$$
(3.2)

We note that, under the hypothesis \(q_i>1\) (which we will always assume, without explicitly mentioning, from now on), the kernel \(Q_i(x,y)\) defines a bounded operator on \(\ell ^1(\mathbb {Z})\) with a well-defined inverse:

$$\begin{aligned} Q_i^{-1}(x,y):= -\mathbbm {1}_{y=x}+q_i\mathbbm {1}_{y=x+1},\qquad x,y\in \mathbb {Z}. \end{aligned}$$
(3.3)
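Indeed, the convolution \(Q_i\circ Q_i^{-1}\) has finite support in the intermediate variable and telescopes to the identity (and similarly for \(Q_i^{-1}\circ Q_i\)); a quick numerical check, with an arbitrary parameter value:

```python
q = 2.5                                   # any q_i > 1

def Q(x, y):       # geometric jumps strictly to the left, (3.2)
    return q ** (y - x) if y < x else 0.0

def Q_inv(x, y):   # the claimed inverse, (3.3)
    return -float(y == x) + q * float(y == x + 1)

for x in range(-4, 5):
    for y in range(-4, 5):
        # Q_inv(z, y) is supported on z in {y - 1, y}, so the sum is finite
        conv = sum(Q(x, z) * Q_inv(z, y) for z in (y - 1, y))
        assert abs(conv - float(x == y)) < 1e-12
```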

Finally, for \(i\ge 1\), we define the operators

$$\begin{aligned} R_i(x,y):=\mathbbm {1}_{\{y=x\}}+p_i\mathbbm {1}_{\{y=x+1\}}. \end{aligned}$$
(3.4)

For \(1\le m\le n\le N\), we will use the compact notations

$$\begin{aligned} Q_{(m-1,n]}&:=Q_{[m,n]} := Q_{m}\circ \cdots \circ Q_n, \end{aligned}$$
(3.5)
$$\begin{aligned} Q^{-1}_{(m-1,n]}&:=Q^{-1}_{[m,n]}:= Q_{n}^{-1}\circ \cdots \circ Q_m^{-1}. \end{aligned}$$
(3.6)

We will abuse the notation slightly by defining

$$\begin{aligned} Q_{[m,n]}:= Q_{m}\circ \cdots \circ Q_N\circ Q_N^{-1}\circ \cdots \circ Q_{n+1}^{-1}, \end{aligned}$$
(3.7)

which makes sense even for \(m>n\), in which case \(Q_{[m,n]}=Q_{m-1}^{-1} \circ \cdots \circ Q_{n+1}^{-1}\). In particular, we have \(Q_{(n,n]}=Q_{[n+1,n]}:=I\) for \(1\le n\le N\). We will use similar conventions for the \(Q^\dagger \)- and R-operators.

Certain convolutions of the operators defined above may be expressed in terms of symmetric functions. Given indeterminates \(x_1,\ldots ,x_N\), let

$$\begin{aligned} h_n(x_1,\ldots ,x_N)&:=\sum _{1\le i_1\le \cdots \le i_n \le N} x_{i_1}\cdots x_{i_n}, \\ e_{n}(x_1,\ldots ,x_N)&:=\sum _{1\le i_1<\cdots <i_n\le N} x_{i_1}\cdots x_{i_n} \end{aligned}$$

be the complete symmetric polynomial of degree n and the elementary symmetric polynomial of degree n, respectively. By convention, we set \(h_0=e_0:=1\) and \(h_n=e_n:=0\) for all \(n<0\). Then, it is not difficult to check that

$$\begin{aligned} Q^\dagger _{(i,N]}(x,y)&= h_{y-x}(0,\ldots ,0,q_{i+1},\ldots ,q_N), \end{aligned}$$
(3.8)

and

$$\begin{aligned} \begin{aligned} (-1)^{N-i} Q_{(i,N]}^{-1}(x,y)&= e_{y-x}(0,\ldots ,0, -q_{i+1},\ldots ,-q_N) \\&= (-1)^{y-x} e_{y-x}(0,\ldots ,0, q_{i+1},\ldots ,q_N) , \end{aligned} \end{aligned}$$
(3.9)

for \(x,y\in \mathbb {Z}\), where the first i indeterminates of the symmetric polynomials are set to be 0. These identities follow from the definitions of the symmetric functions \(h_n\) and \(e_n\), the form of the operators \(Q^\dagger \) and \(Q^{-1}\) in (3.1) and (3.3) and the definition of the operation \(\circ \).
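These identities can also be confirmed numerically: since the operators involved are upper triangular or banded, all relevant convolutions are exact on a finite window. A Python sanity check (our own, with arbitrary parameter values):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def h(n, xs):
    """Complete homogeneous symmetric polynomial h_n."""
    return sum(prod(c) for c in combinations_with_replacement(xs, n)) if n >= 0 else 0.0

def e(n, xs):
    """Elementary symmetric polynomial e_n."""
    return sum(prod(c) for c in combinations(xs, n)) if 0 <= n <= len(xs) else 0.0

q = [2.0, 3.0, 1.5, 2.5]      # q_1, ..., q_N with N = 4 (values arbitrary, all > 1)
N, i, M = len(q), 1, 10       # test the operators with i = 1 on the window {0, ..., M}

def qdag(qi):                 # kernel (3.1), truncated to the window
    return [[qi ** (y - x) if y >= x else 0.0 for y in range(M + 1)] for x in range(M + 1)]

def qinv(qi):                 # kernel (3.3), truncated to the window
    return [[-float(y == x) + qi * float(y == x + 1) for y in range(M + 1)] for x in range(M + 1)]

def conv(A, B):               # the operation o on the window (exact for these kernels)
    return [[sum(A[x][z] * B[z][y] for z in range(M + 1)) for y in range(M + 1)]
            for x in range(M + 1)]

Adag = qdag(q[i])
for qi in q[i + 1:]:
    Adag = conv(Adag, qdag(qi))          # Q†_{i+1} o ... o Q†_N
Ainv = qinv(q[N - 1])
for qi in reversed(q[i:N - 1]):
    Ainv = conv(Ainv, qinv(qi))          # Q_N^{-1} o ... o Q_{i+1}^{-1}, as in (3.6)
for y in range(M + 1):
    assert abs(Adag[0][y] - h(y, q[i:])) < 1e-9 * max(1.0, h(y, q[i:]))        # (3.8)
    assert abs((-1) ** (N - i) * Ainv[0][y] - (-1) ** y * e(y, q[i:])) < 1e-9  # (3.9)
```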

Observe that the values of \(Q^\dagger _i(x,y)\), \(Q_i(x,y)\), \(Q_i^{-1}(x,y)\) and \(R_i(x,y)\) only depend on \(y-x\). Operators with this property are known as (bi-infinite) Toeplitz operators. To each Toeplitz operator T with kernel \(T(x,y)\) on \(\mathbb {Z}\times \mathbb {Z}\), we associate a formal Laurent series \(\varphi _{T}(z)\), known as the symbol of T, defined by

$$\begin{aligned} \varphi _{T}(z):= \sum _{x\in \mathbb {Z}} T(0,x)z^{-x}. \end{aligned}$$

Inside its domain of convergence, which is a (possibly empty) annulus \(\{r<|z|<R\}\), the function \(\varphi _T(z)\) is analytic in z. We summarize some standard properties of Toeplitz operators that will be used later; the proofs are elementary, so we omit them. From now on, all contours will be implicitly taken to have a counterclockwise orientation.

Proposition 3.1

  (i)

    Let T be a (bi-infinite) Toeplitz operator whose symbol \(\varphi _T(z)\) is analytic in a non-empty annulus \(\{r<|z|<R\}\). Then, the entries T(xy) can be computed through the contour integral

    $$\begin{aligned} T(x,y)=\oint _{|z|=r_1}\frac{\textrm{d}z}{2\pi \textrm{i} z}z^{y-x}\cdot \varphi _T(z), \end{aligned}$$
    (3.10)

    for any \(r<r_1<R\).

  (ii)

    Let T and S be two (bi-infinite) Toeplitz operators whose symbols \(\varphi _T(z)\) and \(\varphi _S(z)\) are both analytic inside a common non-empty annulus \(\{r<|z|<R\}\). Then, the convolutions \(T\circ S\) and \(S\circ T\) both converge, with

    $$\begin{aligned} \varphi _{T\circ S}(z) = \varphi _{T}(z)\varphi _{S}(z) = \varphi _{S\circ T}(z) \end{aligned}$$
    (3.11)

    for all z in the annulus. In particular, T and S commute. Assuming that T is invertible with \(T^{-1}=S\), we have

    $$\begin{aligned} 1=\varphi _{\textrm{id}}(z) =\varphi _T(z)\varphi _{T^{-1}}(z), \end{aligned}$$

    or equivalently

    $$\begin{aligned} \varphi _{T^{-1}}(z)=\varphi _{T}(z)^{-1}. \end{aligned}$$

For example, the symbols of the \(Q^\dagger \)- and Q-operators defined above are given by

$$\begin{aligned} \varphi _{Q^\dagger _i}(z)&=\frac{z}{z-q_i}{} & {} \text {for } |z|>q_i, \end{aligned}$$
(3.12)
$$\begin{aligned} \varphi _{Q_i}(z)&= \frac{z}{q_i-z}{} & {} \text {for } |z|<q_i, \end{aligned}$$
(3.13)
$$\begin{aligned} \varphi _{Q_i^{-1}}(z)&= \frac{q_i-z}{z}{} & {} \text {for } |z|>0. \end{aligned}$$
(3.14)

As a consequence of Proposition 3.1 and the fact that the series are absolutely convergent in the domains considered below, we then obtain the contour integral representations

$$\begin{aligned} Q^\dagger _{[m,n]}(x,y)&=(-1)^{n-m+1}\oint _{|z|=R} \frac{\textrm{d}z}{2\pi \textrm{i} z} \frac{z^{y-x+n-m+1}}{\prod _{\ell =m}^{n}(q_\ell -z)}{} & {} \text {for } R>\max \{q_i\}_{i=m}^n, \end{aligned}$$
(3.15)
$$\begin{aligned} Q_{[m,n]}(x,y)&=\oint _{|z|=r} \frac{\textrm{d} z}{2\pi \textrm{i}z} \frac{z^{y-x+n-m+1}}{\prod _{\ell =m}^{n}(q_\ell -z)}{} & {} \text {for } 0<r<\min \{q_i\}_{i=m}^n, \end{aligned}$$
(3.16)
$$\begin{aligned} Q^{-1}_{[m,n]}(x,y)&=\oint _{|z|=r} \frac{\textrm{d} z}{2\pi \textrm{i}z} z^{y-x+m-n-1}\prod _{\ell =m}^{n}(q_\ell -z){} & {} \text {for } r>0. \end{aligned}$$
(3.17)
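For instance, (3.16) can be checked numerically by discretizing the contour; in the sketch below (with arbitrary parameter values), the left-hand side is computed as a direct convolution, which is a finite sum because each factor \(Q_i\) takes a strictly negative step:

```python
import cmath

q = [2.0, 3.0, 1.5]                    # q_m, ..., q_n (values arbitrary, all > 1)

def Q_conv(qs, x, y):
    """Kernel of Q_{q_1} o ... o Q_{q_k} by direct summation; each factor
    takes a strictly negative step, so the sum is finite."""
    if len(qs) == 1:
        return qs[0] ** (y - x) if y < x else 0.0
    # first step lands at z < x, leaving at least len(qs) - 1 steps to reach y
    return sum(qs[0] ** (z - x) * Q_conv(qs[1:], z, y)
               for z in range(y + len(qs) - 1, x))

def Q_contour(qs, x, y, r=1.0, npts=4000):
    """Right-hand side of (3.16) with 0 < r < min(qs), by the trapezoidal
    rule on |z| = r (spectrally accurate for an analytic integrand)."""
    total = 0.0
    for k in range(npts):
        z = r * cmath.exp(2j * cmath.pi * k / npts)
        f = z ** (y - x + len(qs))
        for ql in qs:
            f /= ql - z
        total += f.real
    return total / npts

assert abs(Q_conv(q, 0, -5) - Q_contour(q, 0, -5)) < 1e-8
```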

3.2 Non-intersecting path ensembles

We now define two ensembles of paths, which we call h-paths \({\textsf{h}}\Pi \) (related to the complete symmetric polynomials h) and e-paths \({\textsf{e}}\Pi \) (related to the elementary symmetric polynomials e).

Let \((y_1,i_1),\ldots ,(y_n,i_n),(x_1,j_1),\ldots ,(x_n,j_n) \in \mathbb {Z}^2\). We denote by \({{\textsf{h}}\Pi }_{\{(y_1,i_1),\ldots ,(y_n,i_n)\}}^{\{(x_1,j_1),\ldots ,(x_n,j_n)\}}\) the (possibly empty) ensemble of all n-tuples \((\pi _1,\ldots ,\pi _n)\) of non-intersecting paths in \(\mathbb {Z}^2\), such that each path \(\pi _k\) starts from \((y_k,i_k)\), ends at \((x_k,j_k)\), and moves either straight up or straight to the right at each step; namely, from a point \((x,j)\) the path moves either to \((x,j+1)\) or to \((x+1,j)\). We also denote by \({\textsf{h}}\Pi _{\{(y_1,i_1),\ldots ,(y_n,i_n)\}, \uparrow }^{\{(x_1,j_1),\ldots ,(x_n,j_n)\}}\) the subset of \({{\textsf{h}}\Pi }_{\{(y_1,i_1),\ldots ,(y_n,i_n)\}}^{\{(x_1,j_1),\ldots ,(x_n,j_n)\}}\) of all \((\pi _1,\ldots ,\pi _n)\) such that the first step of each path \(\pi _k\) is vertical, upwards.

We also denote by \({{\textsf{e}}\Pi }_{\{(y_1,i_1),\ldots ,(y_n,i_n)\}}^{\{(x_1,j_1),\ldots ,(x_n,j_n)\}}\) the ensemble of all n-tuples \((\pi _1,\ldots ,\pi _n)\) of non-intersecting paths in \(\mathbb {Z}^2\), such that each path \(\pi _k\) starts from \((y_k,i_k)\), ends at \((x_k,j_k)\), and moves either straight up or diagonally up-right at each step; namely, from a point \((x,i)\) the path moves either to \((x,i+1)\) or to \((x+1,i+1)\). Finally, we denote by \(({{\textsf{e}}\Pi }^*)_{\{(y_1,i_1),\ldots ,(y_n,i_n)\}}^{\{(x_1,j_1),\ldots ,(x_n,j_n)\}}\) a similar e-path ensemble, where the allowed diagonal steps are up-left, instead of up-right.

In the following, both h- and e-paths will be assigned weights, based on suitable weights \({\textsf{w}}{\textsf{t}}({\textsf{e}})\) assigned to each edge \({\textsf{e}}\). The rules are as follows. The weight of a path \(\pi \) with edges \({\textsf{e}}_1,{\textsf{e}}_{2},\ldots \) is defined as \({\textsf{w}}{\textsf{t}}(\pi )= {\textsf{w}}{\textsf{t}}({\textsf{e}}_1) {\textsf{w}}{\textsf{t}}({\textsf{e}}_2)\cdots \). The total weight of an n-tuple \((\pi _1,\ldots ,\pi _n)\) of paths is defined as \({\textsf{w}}{\textsf{t}}(\pi _1,\ldots ,\pi _n):=\prod _{i=1}^n{\textsf{w}}{\textsf{t}}(\pi _i)\). Finally, the weight of an ensemble \(\Pi \) of n-tuples of paths is defined as \({\textsf{w}}{\textsf{t}}(\Pi ):= \sum _{(\pi _1,\ldots ,\pi _n)\in \Pi } {\textsf{w}}{\textsf{t}}(\pi _1,\ldots ,\pi _n)\).

We are now ready to provide the path and local operator representations of the kernels appearing in (2.25).

Proposition 3.2

(Path and local operator representation of \(\Lambda \)). Let a vertical edge connecting \((x,i)\) to \((x,i+1)\) be assigned weight 1 and a horizontal edge connecting \((x,i)\) to \((x+1,i)\) be assigned weight \(q_i\), for \(x\in \mathbb {Z}\) and \(1\le i\le N\). For \(\lambda =(\lambda _1\ge \cdots \ge \lambda _N)\) and \({\varvec{y}}'=(y_1'\ge \cdots \ge y_N')\) in \({\textsf{W}}_N\), the kernel \( \Lambda (\lambda ,{\varvec{y}}')\) defined in (2.24) can be written as

$$\begin{aligned} \Lambda (\lambda ,{\varvec{y}}')&= {\textsf{w}}{\textsf{t}}\Big ({{\textsf{h}}\Pi }_{\{ (y_i'-i,i) :1\le i\le N\}, \uparrow }^{\{(\lambda _i-i,N):1\le i\le N\}} \Big ) \end{aligned}$$
(3.18)
$$\begin{aligned}&= \det \Big ( {\textsf{w}}{\textsf{t}}\Big ({{\textsf{h}}\Pi }_{(y_i'-i,i),\uparrow }^{(\lambda _j-j,N)} \Big )\Big )_{1\le i,j\le N} \end{aligned}$$
(3.19)
$$\begin{aligned}&= \det \Big ( Q^\dagger _{(i,N]}(y_i'-i, \lambda _j-j)\Big )_{1\le i,j\le N}. \end{aligned}$$
(3.20)

Proof

The first equality is a rewriting of the definition of \(\Lambda \) (see (2.24) and (2.20)) in terms of weights of non-intersecting path ensembles; see e.g. [FK97, Section 4] for a description of the connection between tableaux and non-intersecting lattice paths. The second equality is an application of the Lindström–Gessel–Viennot theorem. Note now that, by the definition of path weights and by the form of the complete symmetric polynomials, we have

$$\begin{aligned} {{\textsf{w}}{\textsf{t}}}\Big ({{\textsf{h}}\Pi }_{(y,i),\uparrow }^{(x,N)} \Big ) = h_{x-y}(0,\ldots ,0,q_{i+1},\ldots ,q_N). \end{aligned}$$

Combining this with (3.8), we arrive at the third equality. Notice that this proposition can also be seen as a reformulation of [DW08, Prop. 2]. \(\square \)
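The single-path identity displayed in this proof (with the leading zeros simply dropping the corresponding variables) can be checked numerically by brute-force path enumeration. The following Python sketch uses arbitrary test values for \(N\), \(i\) and the weights \(q_\ell \), with the path conventions read off from Proposition 3.2 (first step vertical, horizontal steps on level \(\ell \) weighted \(q_\ell \)):

```python
from math import prod
from itertools import combinations_with_replacement

def h_poly(m, variables):
    """Complete homogeneous symmetric polynomial h_m of the given variables."""
    if m < 0:
        return 0.0
    return sum(prod(c) for c in combinations_with_replacement(variables, m))

def wt_hpath(y, i, x, N, q):
    """Total weight of up/right lattice paths from (y,i) to (x,N) whose first
    step is vertical, with horizontal steps on level l carrying weight q[l]."""
    if i == N:
        return 1.0 if x == y else 0.0
    # step up to level i+1, move right to some z, then continue upwards
    return sum(q[i + 1] ** (z - y) * wt_hpath(z, i + 1, x, N, q)
               for z in range(y, x + 1))

q = {2: 0.3, 3: 0.5}              # arbitrary test weights q_2, q_3 (N = 3, i = 1)
lhs = wt_hpath(0, 1, 2, 3, q)     # weight of the ensemble of paths (0,1) -> (2,3)
rhs = h_poly(2, [q[2], q[3]])     # h_2(q_2, q_3)
```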

As stated in Proposition 2.5, the operator \(\Lambda \) is invertible; we now provide a determinantal and path representation of its inverse.

Proposition 3.3

(Path and local operator representation of \(\Lambda ^{-1}\)). Let a vertical edge connecting \((x,i-1)\) to \((x,i)\) be assigned weight 1 and a diagonal up-right edge connecting \((x,i-1)\) to \((x+1,i)\) be assigned weight \(-q_{N-i+1}\), for \(x\in \mathbb {Z}\) and \(1\le i\le N-1\). For \({\varvec{y}}=(y_1\ge \cdots \ge y_N)\) and \(\mu =(\mu _1\ge \cdots \ge \mu _N)\) in \({\textsf{W}}_N\), the kernel \(\Lambda ^{-1}({\varvec{y}},\mu )\) can be written as

$$\begin{aligned} \Lambda ^{-1}({\varvec{y}},\mu )&= \det \Big ( (-1)^{N-j} Q_{(j,N]}^{-1}(\mu _i-i, y_j-j) \Big )_{1\le i,j \le N} \end{aligned}$$
(3.21)
$$\begin{aligned}&= \det \Big ( {\textsf{w}}{\textsf{t}}\Big ({\textsf{e}}\Pi ^{(y_j-j,N-j)}_{(\mu _i-i,0)} \Big )\Big )_{1\le i,j \le N} \end{aligned}$$
(3.22)
$$\begin{aligned}&= {\textsf{w}}{\textsf{t}}\Big ({\textsf{e}}\Pi ^{\{(y_i-i,N-i) :1\le i\le N\}}_{\{(\mu _i-i,0) :1\le i\le N\}} \Big ) . \end{aligned}$$
(3.23)

Proof

It was proved in [DW08, Prop. 3] that \(\Lambda \) is invertible, with an inverse given by

$$\begin{aligned} \Lambda ^{-1}({\varvec{y}},\,\mu ) =\det \Big ( (-1)^{y_j-\mu _i-j+i} e_{y_j-\mu _i-j+i}(0,\ldots ,0, q_{j+1},\ldots ,q_N) \Big )_{1\le i,j \le N}. \end{aligned}$$
(3.24)

This, together with (3.9), yields the first equality. On the other hand, from the form of the elementary symmetric functions, it is easy to see that

$$\begin{aligned} e_{y-x}(0,\ldots ,0, -q_{j+1},\ldots ,-q_N) = {\textsf{w}}{\textsf{t}}\Big ({\textsf{e}}\Pi ^{(y,N-j)}_{(x,0)} \Big ). \end{aligned}$$

The latter, together with (3.24) and (3.9), yields the second equality. Finally, the third equality follows from the Lindström–Gessel–Viennot theorem. \(\square \)
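The e-path analogue of the single-path identity in this proof can be checked in the same brute-force way. The Python sketch below uses arbitrary test values (here \(N=4\), \(j=2\), so the two steps carry weights \(-q_4\) and \(-q_3\)), with the step conventions read off from Proposition 3.3:

```python
from math import prod
from itertools import combinations

def e_poly(m, variables):
    """Elementary symmetric polynomial e_m of the given variables."""
    if m < 0:
        return 0.0
    return sum(prod(c) for c in combinations(variables, m))

def wt_epath(x, y, diag_weights):
    """Total weight of paths from position x to position y, one step per entry
    of diag_weights: either straight up (weight 1) or diagonally up-right
    (picking up the weight of that step)."""
    if not diag_weights:
        return 1.0 if x == y else 0.0
    w, rest = diag_weights[0], diag_weights[1:]
    return wt_epath(x, y, rest) + w * wt_epath(x + 1, y, rest)

q = {3: 0.5, 4: 0.7}                    # arbitrary test values q_3, q_4
lhs = wt_epath(0, 1, (-q[4], -q[3]))    # exactly one diagonal step out of two
rhs = e_poly(1, [-q[3], -q[4]])         # e_1(-q_3, -q_4)
```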

The proof of the following proposition follows the same lines as the proofs of Propositions 3.2 and 3.3, so we will be brief.

Proposition 3.4

(Path and local operator representation of \({{\mathcal {R}}} _{(r,t]}\)). Let \(0\le r< t\). Let a vertical edge connecting \((x,i)\) to \((x,i+1)\) be assigned weight 1 and a diagonal up-right edge connecting \((x,i)\) to \((x+1,i+1)\) be assigned weight \(p_{i+1}\), for \(x\in \mathbb {Z}\) and \(r\le i\le t-1\). Then, for \(\lambda =(\lambda _1\ge \cdots \ge \lambda _N)\) and \(\mu =(\mu _1\ge \cdots \ge \mu _N)\) in \({\textsf{W}}_N\) with \(\mu \subseteq \lambda \), the kernel \({{\mathcal {R}}} _{(r,t]}(\mu ,\lambda )\) defined in (2.18) can be written as

$$\begin{aligned} Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]} \cdot {{\mathcal {R}}} _{(r,t]}(\mu ,\lambda )&= {\textsf{w}}{\textsf{t}}\Big ( {\textsf{e}}\Pi _{\{(\mu _i-i,r):1\le i\le N\}}^{\{(\lambda _i-i,t) :1\le i\le N\}} \Big ) \end{aligned}$$
(3.25)
$$\begin{aligned}&= \det \Big ( {\textsf{w}}{\textsf{t}}\Big ( {\textsf{e}}\Pi _{(\mu _i-i,r)}^{(\lambda _j-j,t)} \Big ) \Big )_{1\le i,j\le N} \end{aligned}$$
(3.26)
$$\begin{aligned}&=\det \Big ( R_{(r,t]} (\mu _i-i,\lambda _j-j)\Big )_{1\le i,j\le N}, \end{aligned}$$
(3.27)

where the R-operators are defined in (3.4).

Proof

The first equality follows from the definition of \({{\mathcal {R}}} _{(r,t]}(\mu ,\lambda )\) in (2.18) and its representation in terms of weights of non-intersecting lattice paths. The second equality is then a consequence of the Lindström–Gessel–Viennot theorem, while the last equality is a direct consequence of the representation of the weight of a single e-path ensemble in terms of the R-operator. \(\square \)
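The Lindström–Gessel–Viennot step invoked in the proofs above can itself be checked by brute force for two paths. The following Python sketch, with arbitrary diagonal weights and small endpoints, compares the determinant of single-path weights with the direct sum over vertex-disjoint pairs of up/up-right paths (for these step sets, crossing paths necessarily share a vertex, so vertex-disjointness is the right notion of non-intersection):

```python
from math import prod
from itertools import product as iproduct

def epaths(start, end, p):
    """All up / diagonally-up-right paths from `start` to `end` in len(p)
    steps; yields (positions per level, weight), a diagonal step at step s
    carrying weight p[s]."""
    for steps in iproduct((0, 1), repeat=len(p)):
        if sum(steps) != end - start:
            continue
        pos, positions = start, [start]
        for s in steps:
            pos += s
            positions.append(pos)
        yield tuple(positions), prod(p[s] for s, d in enumerate(steps) if d)

def single_weight(start, end, p):
    return sum(w for _, w in epaths(start, end, p))

p = (0.3, 0.6, 0.9)             # arbitrary diagonal step weights
starts, ends = (1, 0), (3, 2)   # two starting and two ending positions

# Determinant of single-path weights (LGV side).
det = (single_weight(starts[0], ends[0], p) * single_weight(starts[1], ends[1], p)
       - single_weight(starts[0], ends[1], p) * single_weight(starts[1], ends[0], p))

# Direct sum over vertex-disjoint pairs of paths.
ensemble = sum(
    w1 * w2
    for pos1, w1 in epaths(starts[0], ends[0], p)
    for pos2, w2 in epaths(starts[1], ends[1], p)
    if all(a != b for a, b in zip(pos1, pos2))
)
```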

The following proposition provides a path representation of the transition kernel of \(\textsf{dTASEP}(N;{\varvec{p}},{\varvec{q}})\), obtained by path concatenation. The graphical depiction of this result is shown in Fig. 4. Let us first explain what we mean by path concatenation, again referring to Fig. 4 for an illustration. Let \(\Sigma ^{(1)}\) and \(\Sigma ^{(2)}\) be two lattice path ensembles, each consisting of N-tuples of paths. Suppose that, for all \((\pi ^{(1)}_1,\ldots ,\pi ^{(1)}_N)\in \Sigma ^{(1)}\) and \((\pi ^{(2)}_1,\ldots ,\pi ^{(2)}_N)\in \Sigma ^{(2)}\) and for all j, the endpoint of \(\pi ^{(1)}_j\) equals the starting point of \(\pi ^{(2)}_j\). Then, we define the path concatenation \(\Sigma ^{(1)}\sqcup \Sigma ^{(2)}\) to be the ensemble consisting of all N-tuples \((\pi ^{(1)}_1 \cup \pi ^{(2)}_1,\ldots ,\pi ^{(1)}_N \cup \pi ^{(2)}_N)\), where \((\pi ^{(1)}_1,\ldots ,\pi ^{(1)}_N)\) ranges over \(\Sigma ^{(1)}\) and \((\pi ^{(2)}_1,\ldots ,\pi ^{(2)}_N)\) ranges over \(\Sigma ^{(2)}\) (here, the union of paths is understood in terms of both edges and vertices).

Proposition 3.5

(Path representation of \(\textsf{dTASEP}\) transition kernel). Let \({\varvec{y}}, {\varvec{y}}'\in {\textsf{W}}_N\) with \({\varvec{y}} \subseteq {\varvec{y}}'\). The transition kernel of \(\textsf{dTASEP}(N;{\varvec{p}},{\varvec{q}})\), encoding the probability that particles starting from locations \((y_1-1>y_2-2>\cdots >y_N-N)\) at time r end up at locations \((y_1'-1>y_2'-2>\cdots >y_N'-N)\) at time t, admits the following weighted path representation:

$$\begin{aligned} \begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&= \frac{\big ( \prod _{i=1}^N q_i^{y'_i-y_i}\big )}{Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\begin{array}{c} \lambda , \mu \in {\textsf{W}}_N:\\ \mu \subseteq \lambda , \, \mu \subseteq {\varvec{y}}, \, {\varvec{y}}'\subseteq \lambda \end{array}} {\textsf{w}}{\textsf{t}}\bigg ( {\textsf{h}}\Pi _{\{(y'_i-i,i) :1\le i\le N\}, \uparrow }^{\{(\lambda _i-i,N) :1\le i\le N\}} \, \bigsqcup \\&\quad \, \bigsqcup \, ({\textsf{e}}\Pi _r^*)_{\{(\lambda _i-i,N) :1\le i\le N\}}^{\{(\mu _i-i,N+t-r) :1\le i\le N\}} \, \bigsqcup \,{\textsf{e}}\Pi ^{\{(y_i-i,2N+t-r-i) :1\le i\le N\}}_{\{(\mu _i-i,N+t-r) :1\le i\le N\}} \bigg ). \end{aligned} \end{aligned}$$
(3.28)

The weights assigned to the edges are as follows:

  • All vertical edges are assigned weight 1.

  • Horizontal edges between \((x,i)\) and \((x+1,i)\) for \(x\in \mathbb {Z}\) and \(1\le i\le N\) are assigned weight \(q_i\).

  • Diagonal, up-left edges between \((x,i)\) and \((x-1,i+1)\), for \(x\in \mathbb {Z}\) and \(N\le i\le N+t-r-1\), are assigned weight \(p_{N+t-i}\).

  • Finally, diagonal, up-right edges between \((x,i)\) and \((x+1,i+1)\) for \(x\in \mathbb {Z}\) and \(N+t-r\le i\le 2N+t-r-2\), are assigned weight \({-q_{2N+t-r-i}}\).

Choose now \(x_0\) so that \(x_0-1< y_N-N\) and consider an auxiliary vector \({\varvec{x}}^{(0)}=(x^{(0)}_1,\ldots ,x^{(0)}_{N})\), with \(x^{(0)}_i=x_0-i\) for \(1\le i\le N\). Then, with the same assignment of weights and setting \(\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}:=Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]} \prod _{i=1}^N q_i^{y_i-x_0}\), we also have

$$\begin{aligned} \begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&= \frac{1}{\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\begin{array}{c} \lambda , \mu \in {\textsf{W}}_N:\\ \mu \subseteq \lambda , \, \mu \subseteq {\varvec{y}}, \, {\varvec{y}}'\subseteq \lambda \end{array}} {\textsf{w}}{\textsf{t}}\bigg ( {{\textsf{h}}\Pi }^{\{(y'_i-i,i) :1\le i\le N\}}_{\{(x_{i}^{(0)},i) :1\le i\le N \}} \,\bigsqcup \, {\textsf{h}}\Pi _{\{(y'_i-i,i) :1\le i\le N\},\uparrow }^{\{(\lambda _i-i,N) :1\le i\le N\}} \,\bigsqcup \\&\quad \,\bigsqcup \, ({{\textsf{e}}\Pi }^*_r)_{\{(\lambda _i-i,N) :1\le i\le N\}}^{\{(\mu _i-i,N+t-r) :1\le i\le N\}} \, \bigsqcup \,{\textsf{e}}\Pi ^{\{(y_i-i,2N+t-r-i) :1\le i\le N\}}_{\{(\mu _i-i,N+t-r) :1\le i\le N\}} \bigg ). \end{aligned} \end{aligned}$$
(3.29)
Fig. 4

Non-intersecting path representation of the transition kernel of \(\textsf{dTASEP}\) particles, as in (3.29). The figure refers to the transition probability of five \(\textsf{dTASEP}\) particles, starting from locations \((y_1-1>y_2-2>\cdots >y_5-5)\) at time r and ending at locations \((y_1'-1>y_2'-2>\cdots >y_5'-5)\) at time t. Note that the paths do not depict the actual trajectories of the particles. The weights assigned to all vertical steps are equal to 1. In the bottom part of the figure, paths move either vertically up or horizontally to the right, with horizontal weights \(q_1,\ldots ,q_5\), as shown on the left-hand side of the figure. In the middle part, paths move vertically up or diagonally up-left, with diagonal weights \(p_t, p_{t-1}, \ldots , p_{r+2},p_{r+1}\), as shown. In the top part, paths move either vertically up or diagonally up-right, with diagonal weights \(q_5,\ldots ,q_2\), as shown. The solid paths can be extended in such a way that they all start at level zero and include the dashed colored lines in the bottom-left part of the picture; due to the non-intersecting property and the fact that vertical weights are assigned weight 1, such an extension does not change the weight of the ensemble (provided that the horizontal edges on the bottom level are assigned weight 0). The bullets in the bottom part of the figure refer to the point processes \({\textsf{X}}_N\) and \(\overline{{\textsf{X}}}_N\) from Sect. 3.3

Proof

By Proposition 2.5, we have

$$\begin{aligned} \begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&= \Bigg ( \prod _{i=1}^N q_i^{y_i'-y_i}\Bigg ) \big (\Lambda ^{-1} {{\mathcal {R}}} _{(r,t]} \Lambda \big )({\varvec{y}}, {\varvec{y}}') \\&= \Bigg ( \prod _{i=1}^N q_i^{y_i'-y_i}\Bigg ) \sum _{\lambda , \mu } \Lambda ^{-1}({\varvec{y}},\mu ) \, {{\mathcal {R}}} _{(r,t]}(\mu ,\lambda ) \, \Lambda (\lambda ,{\varvec{y}}'), \end{aligned} \end{aligned}$$
(3.30)

where the summation is over all partitions \(\lambda ,\mu \in {\textsf{W}}_N\) such that \(\mu \subseteq \lambda \), \(\mu \subseteq {\varvec{y}}\) and \({\varvec{y}}' \subseteq \lambda \). We now concatenate the non-intersecting paths corresponding to the operators \(\Lambda ^{-1}\), \({{\mathcal {R}}} _{(r,t]}\) and \(\Lambda \), as given in Propositions 3.3, 3.4 and 3.2, respectively, and as shown in Fig. 4. The only point to notice is that, compared to Proposition 3.4, the paths corresponding to the operator \({{\mathcal {R}}} _{(r,t]}\) are flipped upside down, resulting in the ‘reverse’ ensemble \(({{\textsf{e}}\Pi }^*_r)_{\{(\lambda _i-i,N) :1\le i\le N\}}^{\{(\mu _i-i,N+t-r) :1\le i\le N\}}\), in which paths move either upwards or in the up-left direction at each step, starting from the points \(\lambda _i-i\), \(1\le i\le N\), at level N and ending at the points \(\mu _i-i\), \(1\le i\le N\), at level \(N+t-r\). Thus, using Eqs. (3.18), (3.23) and (3.25) and the weights defined in the proposition, we obtain

$$\begin{aligned} \begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}'\, )&= \frac{\big ( \prod _{i=1}^N q_i^{y'_i-y_i}\big )}{Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\lambda ,\mu } {\textsf{w}}{\textsf{t}}\Big ( {\textsf{h}}\Pi _{\{(y'_i-i,i) :1\le i\le N\}, \uparrow }^{\{(\lambda _i-i,N) :1\le i\le N\}} \Big ) \cdot \\&\quad \, \cdot {\textsf{w}}{\textsf{t}}\Big ( ({\textsf{e}}\Pi _r^*)_{\{(\lambda _i-i,N) :1\le i\le N\}}^{\{(\mu _i-i,N+t-r) :1\le i\le N\}} \Big ) \cdot {\textsf{w}}{\textsf{t}}\Big ({\textsf{e}}\Pi ^{\{(y_i-i,2N+t-r-i) :1\le i\le N\}}_{\{(\mu _i-i,N+t-r) :i=1,\ldots ,N\}} \Big ), \end{aligned} \end{aligned}$$

which is the same as (3.28).

We now rewrite (3.28), by multiplying and dividing by \(\prod _{i=1}^N q_i^{-x_0}\), as

$$\begin{aligned} \begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&= \frac{\prod _{i=1}^N q_i^{y'_i-x_0}}{Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]} \prod _{i=1}^N q_i^{y_i-x_0} } \sum _{\lambda , \mu } {\textsf{w}}{\textsf{t}}\bigg ({\textsf{h}}\Pi _{\{(y'_i-i,i) :1\le i\le N\}, \uparrow }^{\{(\lambda _i-i,N) :1\le i\le N\}} \,\bigsqcup \\&\quad \, \bigsqcup \, ({\textsf{e}}\Pi _r^*)_{\{(\lambda _i-i,N) :1\le i\le N\}}^{\{(\mu _i-i,N+t-r) :1\le i\le N\}} \,\bigsqcup \,{\textsf{e}}\Pi ^{\{(y_i-i,2N+t-r-i) :1\le i\le N\}}_{\{(\mu _i-i,N+t-r) :1\le i\le N\}} \bigg ). \end{aligned} \end{aligned}$$

The denominator equals \({\widehat{Z}}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}\). Moreover, the product \(\prod _{i=1}^N q_i^{y'_i-x_0}=\prod _{i=1}^N q_i^{(y'_i-i)-x^{(0)}_i}\) equals the total weight of the path ensemble \({{\textsf{h}}\Pi }^{\{(y'_i-i,i) :1\le i\le N\}}_{\{(x_{i}^{(0)},i) :1\le i\le N \}}\), since the h-weight of a single path from \((x_i^{(0)},i)\) to \((y_i'-i,i)\) (see bottom-left end of the path depictions in Fig. 4) is \(q_i^{(y'_i-i)-x^{(0)}_i}\). Notice that such a path ensemble is nonempty, due to the hypothesis \(x_0-1< y_N-N\), which implies

$$\begin{aligned} x_i^{(0)}=x_0-i< y_N-N\le y_N'-N\le y'_i-i \end{aligned}$$

for \(1\le i\le N\). This readily yields (3.29). \(\square \)

3.3 Determinantal point processes

We now describe the law of \(\textsf{dTASEP}(N;{\varvec{p}},{\varvec{q}})\) as a marginal of a determinantal point process, which we build out of the above path construction. The configuration space consists of triangular integer arrays

$$\begin{aligned} {\textsf{X}}_N:= \{x^{(i)}_j\}_{ 1 \le j\le i \le N} \qquad \text {with}\qquad x^{(i+1)}_{j+1} < x^{(i)}_j\le x^{(i+1)}_j. \end{aligned}$$
(3.31)

Such a point process naturally arises from the non-intersecting path construction given in Proposition 3.5 (see in particular (3.29)) and illustrated in the bottom part of Fig. 4. For \(1\le j\le i\le N\), the point \(x_j^{(i)}\) of \({\textsf{X}}_N\) is identified with the rightmost point on the horizontal line \(\{(x,i):x\in \mathbb {Z}\}\) of the j-th path (enumerating the paths from right to left). We will consider (3.31) as a point process on \(\mathbb {Z}\times \{1,\ldots ,N\}\), such that the line \(\mathbb {Z}\times \{i\}\) has exactly i points \(x^{(i)}_1,\ldots ,x^{(i)}_i\), for \(1\le i\le N\). However, for brevity and when there is no ambiguity, we will usually write \(x^{(i)}_j\) instead of \((x^{(i)}_j,i)\). We note that the non-intersecting property of the paths enforces the inequalities in (3.31). It will be useful to extend the above triangular array to a square array with additional frozen points on every line:

$$\begin{aligned} \begin{aligned} \overline{{\textsf{X}}}_N:= \{&x^{(i)}_j\}_{i,j=1}^ {N} \qquad \text {with}\qquad x^{(i+1)}_{j+1}< x^{(i)}_j\le x^{(i+1)}_j \qquad \text {and} \\&x^{(i)}_j:= (x_0-j, i) \qquad \text {for} \qquad 1\le i<j\le N. \end{aligned} \end{aligned}$$
(3.32)

The auxiliary points \(x^{(i)}_j\) with \( 1\le i<j\le N\) are illustrated as the ‘frozen’ bullets in the bottom-left part of Fig. 4. As in Proposition 3.5, the point \(x_0\) is chosen arbitrarily but such that \(x_0 -1< y_N-N\). We will also use the notation \({\varvec{x}}^{(0)}:=(x^{(0)}_1,x^{(0)}_2,\ldots ,x^{(0)}_N)\), with \(x^{(0)}_i:=x_0-i\), for \(i=1,\ldots ,N\). The freezing is, again, due to the non-intersecting nature of the extended paths (i.e., the paths that start from \(\{(x^{(0)}_i,0):1\le i\le N\}\) and include the dashed lines) in Fig. 4.

For \(0\le r<t\), \(1\le k\le N\) and \(x\in \mathbb {Z}\), we now define the family of functions

$$\begin{aligned} \begin{aligned} \Psi ^{(N)}_{k} (x)&:=\sum _{z\in \mathbb {Z}} Q_N^{-1}\circ \cdots \circ Q_{k+1}^{-1}(z,y_k-k) \, R_{r+1}\circ \cdots \circ R_{t}(z,x) \\&=\sum _{z\in \mathbb {Z}} Q_N^{-1}\circ \cdots \circ Q_{k+1}^{-1}(z,y_k-k) \, R^*_{t}\circ \cdots \circ R^*_{r+1}(x,z) \\&= R^*_{t}\circ \cdots \circ R^*_{r+1}\circ Q_N^{-1}\circ \cdots \circ Q_{k+1} ^{-1}(x,y_k-k) \\&= R^*_{(r,t]}\circ Q^{-1}_{(k,N]} (x,y_k-k), \end{aligned} \end{aligned}$$
(3.33)

where, as usual, \(R^{*}_{(r,t]}\) denotes the adjoint of \(R_{(r,t]}\), i.e. \(R^{*}_{(r,t]}(x,y):= R_{(r,t]}(y,x)\). Notice that the function \(\Psi ^{(N)}_{k} (x)\) depends implicitly on \({\varvec{y}}=(y_1\ge \cdots \ge y_N)\). With reference to Fig. 4, \((-1)^{N-k}\Psi ^{(N)}_{k} (x)\) captures the weight of a path starting at location \((x,N)\) on the lower solid black line and ending at \((y_k-k, 2N+t-r-k)\) at the top of the figure (passing through any point z on the upper solid black line). Note also that, even though the sums in the first two lines of (3.33) are over \(\mathbb {Z}\), these sums actually contain only finitely many non-zero terms, since the values of x and \(y_k-k\) are fixed.

Observe that the left edge of the triangular array \({\textsf{X}}_N\), i.e. \({{\,\mathrm{\mathrm {l-edge}}\,}}( {\textsf{X}}_N ):=(x^{(i)}_i:1\le i\le N)\), coincides with the terminal positions \((y_1'-1>y_2'-2>\cdots >y_N'-N)\) of the \(\textsf{dTASEP}(N;{\varvec{p}},{\varvec{q}})\) particles, by Proposition 3.5. Thanks to this, we are able to obtain an initial representation of the transition kernel of \(\textsf{dTASEP}(N;{\varvec{p}},{\varvec{q}})\) as a marginal of the determinantal point process \({\textsf{X}}_N\).

Proposition 3.6

Let \({\varvec{y}}\in {\textsf{W}}_N\). Let \(x_0\in \mathbb {Z}\) with \(x_0 -1< y_N-N\) and write \({\varvec{x}}^{(0)}:=(x^{(0)}_1,x^{(0)}_2,\ldots ,x^{(0)}_N)\), where \(x^{(0)}_i:=x_0-i\). Define the (signed) determinantal measure

$$\begin{aligned} \begin{aligned} {{\mathbb {P}}} \big ( {\textsf{X}}_N {\;\big |\;{\varvec{y}}} \big )&:= \frac{1}{\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \left( \prod _{k=1}^{N} \det \Big ( Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_j) \Big )_{i,j=1}^{k} \right) \\&\quad \cdot \det \Big ((-1)^{N-i} \,\Psi ^{(N)}_{i} (x^{(N)}_j) \Big )_{i,j=1}^{N} \end{aligned} \end{aligned}$$
(3.34)

on configurations \({\textsf{X}}_N= \{x^{(i)}_j\}_{ 1 \le j\le i \le N} \) with \(x^{(i+1)}_{j+1} < x^{(i)}_j\le x^{(i+1)}_j\), where we have set \(x^{(k-1)}_k=x^{(0)}_k\) for \(1\le k\le N\) and \(\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}:=Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]} \prod _{i=1}^N q_i^{y_i-x_0}\), with \(Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]} \) as in (2.14) (note that the right-hand side of (3.34) depends on \({\varvec{y}}\) through the \(\Psi \)-functions). The determinantal measure does not depend on the auxiliary point \(x_0\), as long as the condition \(x_0-1<y_N-N\) holds. Then, the transition kernel of \(\textsf{dTASEP}(N;{\varvec{p}},{\varvec{q}})\), encoding the probability that particles starting from locations \((y_1-1>y_2-2>\cdots >y_N-N)\) at time r end up at locations \((y_1'-1>y_2'-2>\cdots >y_N'-N)\) at time t, is given by

$$\begin{aligned}&{{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}') = {{\mathbb {P}}} \big ( {{\,\mathrm{\mathrm {l-edge}}\,}}\big ( {\textsf{X}}_N \big ) =(y_1'-1>y_2'-2>\cdots >y_N'-N) \;\big |\; {\varvec{y}}\big ). \end{aligned}$$
(3.35)

Proof

In a nutshell, this result is a consequence of the path representation of \({{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')\), as described in Proposition 3.5 (see in particular (3.29)), as well as the identification of the point process \({\textsf{X}}_N\) as the ‘trace’ of that path ensemble on \(\{(x,i):x\in \mathbb {Z}, \, 1\le i\le N \}\). Starting from (3.30) and using (3.21), (3.27) and (3.20), we obtain

$$\begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&=\Bigg ( \prod _{i=1}^N q_i^{y'_i-y_i}\Bigg ) \sum _{\lambda , \mu } \Lambda ^{-1} ({\varvec{y}},\mu ) \, {{\mathcal {R}}} _{(r,t]} (\mu ,\lambda ) \, \Lambda (\lambda ,{\varvec{y}}')\\&=\frac{\big ( \prod _{i=1}^N q_i^{y'_i-y_i}\big )}{Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\lambda , \mu } \det \Big ( (-1)^{N-j} Q_{(j,N]}^{-1}\big (\mu _i-i, y_j-j \big ) \Big )_{ i,j =1}^N\\&\quad \, \cdot \det \Big ( R_{(r,t]} (\mu _i-i,\lambda _j-j)\Big )_{ i,j =1}^N \det \Big ( Q^\dagger _{(i,N]}(y'_i-i, \lambda _j-j)\Big )_{ i,j =1}^N, \end{aligned}$$

where the summations are over all \(\lambda ,\mu \in {\textsf{W}}_N\) such that \(\mu \subseteq \lambda \), \(\mu \subseteq {\varvec{y}}\) and \({\varvec{y}}' \subseteq \lambda \). Using the Cauchy–Binet identity to compute the sum over \(\mu \), we have

$$\begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&=\frac{\big ( \prod _{i=1}^N q_i^{y'_i-y_i}\big )}{Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\lambda \supseteq {\varvec{y}}'} \det \Big ( (-1)^{N-j} \sum _{z\in \mathbb {Z}} Q_{(j,N]}^{-1}\big (z, y_j-j \big ) R_{(r,t]} (z,\lambda _i-i) \Big )_{ i,j =1}^N \\&\quad \cdot \det \Big ( Q^\dagger _{(i,N]}(y_i'-i, \lambda _j-j)\Big )_{ i,j =1}^N \\&= \frac{\big ( \prod _{i=1}^N q_i^{y'_i-y_i}\big )}{Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\lambda \supseteq {\varvec{y}}'} \det \Big ((-1)^{N-j} \Psi ^{(N)}_{j} (\lambda _i-i) \Big )_{ i,j =1}^N \\&\quad \det \Big ( Q^\dagger _{(i,N]}(y'_i-i, \lambda _j-j)\,\Big )_{ i,j =1}^N, \end{aligned}$$

where the latter equality follows from definition (3.33). We next multiply and divide by \(\prod _{i=1}^N q_i^{-x_0}\), recall that \(\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}=\big (\prod _{i=1}^N q_i^{y_i-x_0} \big )Z^{{\varvec{p}},{\varvec{q}}}_{(r,t]}\) and absorb the factor \(\prod _{i=1}^N q_i^{y_i'-x_0}\) into the second determinant, thus obtaining

$$\begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&=\frac{ 1}{\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\lambda \supseteq {\varvec{y}}'} \det \Big ((-1)^{N-i} \Psi ^{(N)}_{i} (\lambda _j-j) \Big )_{i,j=1}^N \\&\quad \cdot \det \Big ({q_i^{y_i'-x_0}} Q^\dagger _{(i,N]}(y'_i-i, \lambda _j-j)\Big )_{ i,j =1}^N. \end{aligned}$$

Using the facts that \(Q^\dagger _{(0,i-1]}(x^{(0)}_i,x^{(0)}_i )=1\) and \(q_i^{y_i'-x_0}=Q^\dagger _i(x^{(0)}_i,y_i'-i)\), we can rewrite

$$\begin{aligned} \begin{aligned}&{q_i^{y_i'-x_0}} Q^\dagger _{(i,N]}(y'_i-i, \lambda _j-j) \\&=Q^\dagger _{(0,i-1]}(x^{(0)}_i,x^{(0)}_i ) \, Q^\dagger _{(i-1,i]}(x^{(0)}_i,y'_i-i ) \, Q^\dagger _{(i,N]}(y'_i-i, \lambda _j-j) \\&=\sum _{\begin{array}{c} (z_0,\ldots ,z_N)\in \mathbb {Z}^{N+1}: \\ z_0=z_1=\cdots =z_{i-1}=x^{(0)}_i, \\ z_i=y'_i-i,\;\; z_{N}= \lambda _j-j \end{array}} \prod _{k=1}^N Q^\dagger _{k}(z_{k-1},z_{k}). \end{aligned} \end{aligned}$$

Using several times the Cauchy–Binet identity, we obtain

$$\begin{aligned} \begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&=\frac{1}{\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \sum _{\lambda \supseteq {\varvec{y}}'} \; \sum _{\begin{array}{c} \overline{{\textsf{X}}}_N:x^{(i-1)}_i=\cdots =x^{(1)}_{i}=x^{(0)}_i, \\ x^{(i)}_i=y'_i-i,\;\; x^{(N)}_i= \lambda _i-i \\ \text {for } 1\le i\le N \end{array}} \det \Big ((-1)^{N-i} \Psi ^{(N)}_{i} (\lambda _j-j) \Big )_{i,j=1}^N \\&\quad \cdot \prod _{k=1}^N\det \Big (Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_j ) \Big )_{i,j=1}^N. \end{aligned} \end{aligned}$$

Notice that the constraints \(x^{(i-1)}_i=\cdots =x^{(1)}_{i}=x^{(0)}_i\), \(1\le i\le N\), correspond to ‘freezing’ the points \(x^{(i)}_j\), \(1\le i<j\le N\), as in (3.32). Due to the inequalities (3.32) that \(\overline{{\textsf{X}}}_N\) satisfies, we have \(x^{(k)}_{j}<x^{(k-1)}_i\) for \(i<j\), hence \(Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_j)=0\) for \(i<j\). Furthermore, due to the constraints \(x^{(i-1)}_i=\cdots =x^{(1)}_{i}=x^{(0)}_i\), we have \(Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_i )=1\) for \(i>k\). Therefore, the matrix \(\big (Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_j ) \big )_{i,j=1}^N\) is lower triangular with diagonal elements \(Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_i )=1\) for \(i>k\). This implies

$$\begin{aligned} \det \Big (Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_j ) \Big )_{i,j=1}^N = \det \Big (Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_j ) \Big )_{i,j=1}^k, \end{aligned}$$

which leads to

$$\begin{aligned} \begin{aligned} {{\mathcal {Q}}} _{(r,t]}({\varvec{y}},{\varvec{y}}')&=\frac{1}{\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \;\sum _{\begin{array}{c} {\textsf{X}}_N:x^{(i)}_i=y'_i-i \\ \text {for } 1\le i\le N \end{array}} \det \Big ((-1)^{N-i} \Psi ^{(N)}_{i} (x^{(N)}_j) \Big )_{i,j=1}^N \\&\quad \cdot \prod _{k=1}^N\det \Big (Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_j ) \Big )_{i,j=1}^k, \end{aligned} \end{aligned}$$

with the convention that \(x^{(k-1)}_k=x^{(0)}_k\) for \(1\le k\le N\). This completes the proof of (3.35).

To show that the determinantal measure does not in fact depend on \(x_0\), notice first that the k-th row of the matrix \(Q^\dagger _{k}(x^{(k-1)}_i,x^{(k)}_j )\) has a common factor \(q_k^{-x_0}\), for all \(1\le k\le N\). When factoring these terms out of the determinants, we see a cancellation with the corresponding terms in the normalizing constant \(\widehat{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}\). The resulting expression has no further dependence on \(x_0\). \(\square \)
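The Cauchy–Binet identity used twice in the above proof (to collapse the sums over the ordered configurations \(\mu \) and over the intermediate points) can be checked directly in a small case. The following Python sketch, with arbitrary small integer matrices, verifies \(\det (AB)=\sum _{z_1<z_2}\det A_{\cdot ,(z_1,z_2)}\det B_{(z_1,z_2),\cdot }\) for a \(2\times 4\) matrix A and a \(4\times 2\) matrix B:

```python
from itertools import combinations

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 2, 0, 3], [0, 1, 4, 1]]          # 2 x 4, arbitrary entries
B = [[2, 1], [1, 0], [0, 3], [1, 1]]      # 4 x 2, arbitrary entries
m = len(B)

# Left-hand side: det of the 2 x 2 product AB.
AB = [[sum(A[i][z] * B[z][j] for z in range(m)) for j in range(2)]
      for i in range(2)]

# Right-hand side: sum over ordered column/row pairs z_1 < z_2.
rhs = sum(
    det2([[A[0][z1], A[0][z2]], [A[1][z1], A[1][z2]]])
    * det2([[B[z1][0], B[z1][1]], [B[z2][0], B[z2][1]]])
    for z1, z2 in combinations(range(m), 2)
)
```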

For the analysis that will follow in the next sections, it will be convenient to re-express the determinantal measure (3.34) in terms of Q-operators, which represent weights of paths moving strictly to the left, rather than \(Q^\dagger \)-operators, which represent weights of paths moving weakly to the right. Towards this, the main observation is the following equality of determinants, which will be a consequence of certain path constructions.

Proposition 3.7

Let \(\{x_j^{(i)}\}_{1\le j\le i\le N}\) be a triangular array of integers and let \(\{x_j^{(j-1)}\}_{j=1}^N\) and \(\{x_0^{(j-1)}\}_{j=1}^N\) be auxiliary integer variables. Assume that \(\{x_j^{(i)}\}_{1\le j\le i\le N}\) satisfies

$$\begin{aligned} x_{j}^{(i)}<x_{j-1}^{(i-1)}\le x_{j-1}^{(i)}\qquad \text {for all} \qquad 1\le i\le N,\; {1\le j\le i+1}. \end{aligned}$$
(3.36)

Then we have

$$\begin{aligned} \prod _{k=1}^{N}\left( {q_k^{x_{k}^{(k-1)}}} \det \left( Q^\dagger _k(x_{i}^{(k-1)},x_j^{(k)})\right) _{i,j=1}^{k}\right) = \prod _{k=1}^{N}\left( {q_k^{x_0^{(k-1)}}} \det \left( Q_k(x_{i-1}^{(k-1)},x_j^{(k)})\right) _{i,j=1}^{k}\right) , \end{aligned}$$
(3.37)

and both sides are nonzero.

Proof

By the Lindström–Gessel–Viennot theorem, we may view \(\det \big (Q^\dagger _k(x_{i}^{(k-1)},x_j^{(k)})\big )_{i,j=1}^{k}\) as the total weight of k non-intersecting paths starting from \((x_k^{(k-1)},k-1),\ldots ,(x_{1}^{(k-1)},k-1)\) and ending at \((x_k^{(k)},k),\ldots ,(x_{1}^{(k)},k)\), with the first step upwards (with weight 1) and subsequent steps in the horizontal right direction (with weight \(q_k\) at each step). Note that such a non-intersecting path ensemble exists if and only if

$$\begin{aligned} x_{j}^{(k)}<x_{j-1}^{(k-1)}\le x_{j-1}^{(k)}\qquad \text {for all}\qquad 2\le j\le k+1. \end{aligned}$$

In such a case, the weight of the ensemble equals \(\prod _{j=1}^{k}Q^\dagger _k(x^{(k-1)}_j,x^{(k)}_j)\). Taking the product over \(k=1,\ldots ,N\), we obtain the total weight of the non-intersecting paths illustrated in red in Fig. 5:

$$\begin{aligned}&\,\prod _{k=1}^N\det \left( Q^\dagger _k(x_{i}^{(k-1)},x_j^{(k)})\right) _{i,j=1}^{k} \\&\quad = {\left\{ \begin{array}{ll} \prod _{k=1}^{N}\prod _{j=1}^{k}Q^\dagger _k(x_j^{(k-1)},x_j^{(k)}) &{}\quad \text {if } x_{j}^{(k)}<x_{j-1}^{(k-1)}\le x_{j-1}^{(k)}\text { for } 1\le k\le N,\ 2\le j\le k+1,\\ 0 &{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$

We now consider similar, ‘dual’ paths, illustrated in blue in Fig. 5. Recalling from (3.2) that the Q-operators encode geometric jumps strictly to the left, by the Lindström–Gessel–Viennot theorem we view \(\det \big ( Q_k(x_{i-1}^{(k-1)},x_j^{(k)})\big )_{i,j=1}^{k}\) as the total weight of k non-intersecting paths starting from \((x_{k-1}^{(k-1)},k-1),\ldots ,(x_{0}^{(k-1)},k-1)\) and ending at \((x_k^{(k)},k),\ldots ,(x_{1}^{(k)},k)\), with the first step diagonally up-left and subsequent steps in the horizontal left direction, with all steps having weight \(q_k\). Reasoning as before, we obtain

$$\begin{aligned}&\,\prod _{k=1}^N\det \left( Q_k(x_{i-1}^{(k-1)},x_j^{(k)})\right) _{i,j=1}^{k} \\&\quad = {\left\{ \begin{array}{ll} \prod _{k=1}^{N}\prod _{j=1}^{k} Q_k(x_{j-1}^{(k-1)},x_j^{(k)})&{}\quad \text {if } x_{j}^{(k)}<x_{j-1}^{(k-1)}\le x_{j-1}^{(k)}\text { for } 1\le k\le N,\ 1\le j\le k,\\ 0 &{}\quad \text {otherwise}. \end{array}\right. } \end{aligned}$$

Thus, since by hypothesis the interlacing conditions (3.36) are satisfied, both sides of (3.37) are nonzero. Moreover, we have

$$\begin{aligned} \begin{aligned}&\prod _{k=1}^N\det \left( Q^\dagger _k(x_{i}^{(k-1)},x_j^{(k)})\right) _{i,j=1}^{k}= {\textsf{w}}{\textsf{t}}(\text {red paths}),\\&\prod _{k=1}^N\det \left( Q_k(x_{i-1}^{(k-1)},x_j^{(k)})\right) _{i,j=1}^{k}= {\textsf{w}}{\textsf{t}}(\text {blue paths}).\\ \end{aligned} \end{aligned}$$
(3.38)

Moreover, up to inverting the weights of the blue paths, the total weight of red and blue paths simply equals the weight of all the horizontal paths from \((x_{k}^{(k-1)}, k)\) to \((x_0^{(k-1)},k)\), \(k=1,\ldots ,N\); in other words, we have

$$\begin{aligned} {\textsf{w}}{\textsf{t}}(\text {red paths})\cdot {\textsf{w}}{\textsf{t}}(\text {blue paths})^{-1}= \prod _{k=1}^{N}q_k^{x_0^{(k-1)}-x^{(k-1)}_k}. \end{aligned}$$
(3.39)

Combining (3.39) with (3.38), we readily arrive at (3.37). \(\square \)

Fig. 5

Ensemble of non-intersecting red and blue paths used in the proof of Proposition 3.7

The following corollary re-expresses the determinantal measure (3.34) in a form that will be more suitable to our purposes.

Corollary 3.8

The determinantal measure (3.34) is equal to

$$\begin{aligned} {{\mathbb {P}}} \big ( {\textsf{X}}_N {\;\big |\; {\varvec{y}}}\big ) =\frac{1}{{\tilde{Z}}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \left( \prod _{k=1}^{N} \det \Big ( Q_{k}(x^{(k-1)}_{i-1},x^{(k)}_j) \Big )_{i,j=1}^{k}\right) \det \Big ( \Psi ^{(N)}_{i} (x^{(N)}_j) \Big )_{i,j=1}^{N},\qquad \end{aligned}$$
(3.40)

with

$$\begin{aligned} \tilde{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}:=(-1)^{N(N-1)/2}\prod _{i=r+1}^{t}\prod _{j=1}^{N} (1+p_iq_j)\cdot \prod _{j=1}^N q_j^{y_j-j-x_0^{(j-1)}}. \end{aligned}$$
(3.41)

The determinantal measure (3.40) is actually independent of the auxiliary variables \(\{x_0^{(j-1)}\}_{j=1}^N\). Indeed, reasoning as in the proof of Proposition 3.6, one sees that the first row of the matrix \( \big ( Q_{k}(x^{(k-1)}_{i-1},x^{(k)}_j) \big )_{i,j=1}^{k}\) has a common factor \(q_k^{-x_0^{(k-1)}}\), for all \(1\le k\le N\). These terms, when factored out of the determinants, cancel the corresponding terms in the normalizing constant \(\tilde{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}\), making the resulting expression independent of \(\{x_0^{(j-1)}\}_{j=1}^N\). Thus, we may set these auxiliary variables to \(\infty \) and simply define

$$\begin{aligned} Q_k(x_0^{(k-1)},y):= q_k^{y}=\lim _{x\rightarrow \infty } q_k^{x}\cdot Q_k(x,y), \quad y\in \mathbb {Z}. \end{aligned}$$
(3.42)

Using these conventions and defining

$$\begin{aligned} \bar{Z}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}:=(-1)^{N(N-1)/2}\prod _{j=1}^{N}\prod _{i=r+1}^{t}(1+q_jp_i)\cdot \prod _{j=1}^N q_j^{y_j-j}, \end{aligned}$$
(3.43)

the determinantal measure (3.40) may be written in the form

$$\begin{aligned} {{\mathbb {P}}} \big ( {\textsf{X}}_N {\;\big |\; {\varvec{y}}}\big ) = \frac{1}{{\bar{Z}}^{{\varvec{p}},{\varvec{q}}}_{(r,t]}} \prod _{k=1}^{N} \det \Big ( Q_{k}(x^{(k-1)}_{i-1},x^{(k)}_j) \Big )_{i,j=1}^{k} \cdot \det \Big ( \Psi ^{(N)}_{i} (x^{(N)}_j) \Big )_{i,j=1}^{N}. \nonumber \\ \end{aligned}$$
(3.44)

3.4 Correlation kernel and biorthogonal functions

From now on, throughout this subsection and most of Sect. 4, we will make the additional assumption that \(q_1<q_2<\cdots \), until we remove it in the proof of Theorem 1.1 (see Sect. 4). This assumption allows for a bona fide composition of the local operators and makes all infinite sums in this subsection and in the next section well defined; it also justifies interchanging sums with each other and with contour integrals. To see why it is needed, recall that in (3.42) we defined \(Q_k(x_0^{(k-1)},y):=q_k^y\), with the virtual variable \(x_0^{(k-1)}\) regarded as \(\infty \). This convention might seem to lead to issues when defining

$$\begin{aligned} Q_{[j,k]}(x_0^{(j-1)},x):= Q_j\circ Q_{[j+1,k]}(x_0^{(j-1)},x)=\sum _{y\in \mathbb {Z}} q_j^{y}\cdot Q_{[j+1,k]}(y,x) \end{aligned}$$
(3.45)

for \(k>j\): if, for example, \(k=j+1\), the sum above equals \(\sum _{y>x}q_j^{y}\cdot q_{j+1}^{x-y}\), which diverges for \(q_{j+1}\le q_j\). However, for \(q_1<q_2<\cdots \), (3.45) is well defined. To see this, recall from (3.16) that

$$\begin{aligned} Q_{[j+1,k]}(x,y)=\oint _{|z|=r} \frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{y-x+k-j-1}}{\prod _{\ell =j+1}^{k}(q_\ell -z)},\qquad x,y\in \mathbb {Z}, \end{aligned}$$

where \(0<r<\min \{q_\ell \}_{\ell =j+1}^k\). To get a well-defined expression for \(Q_{[j,k]}(x_0^{(j-1)},y)\), note that, since \(q_j<q_k\) for all \(k>j\), we can write

$$\begin{aligned} \sum _{y\in \mathbb {Z}} q_j^{y}\cdot Q_{[j+1,k]}(y,x)&= \sum _{y<0}q_j^{y}\cdot \oint _{|z|=r} \frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{x-y+k-j-1}}{\prod _{\ell =j+1}^k(q_\ell -z)}\\&\quad +\sum _{y\ge 0}q_j^{y}\cdot \oint _{|z|=r'}\frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{x-y+k-j-1}}{\prod _{\ell =j+1}^k(q_\ell -z)}, \end{aligned}$$

where \(r,r'\) are chosen such that \(0<r<q_j<r'<\min \{q_\ell \}_{\ell =j+1}^k\). Both geometric series converge and computing them yields

$$\begin{aligned} \sum _{y\in \mathbb {Z}} q_j^{y}\cdot Q_{[j+1,k]}(y,x)&= \oint _{|z|=r}\frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{x+k-j}}{\prod _{\ell =j}^k(q_\ell -z)}-\oint _{|z|=r'}\frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{x+k-j}}{\prod _{\ell =j}^k(q_\ell -z)}\nonumber \\ {}&= -\oint _{\gamma _{q_j}}\frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{x+k-j}}{\prod _{\ell =j}^k(q_\ell -z)}, \end{aligned}$$
(3.46)

where \(\gamma _{q_j}\) is any simple closed contour enclosing \(q_j\) as the only pole for the integrand. This computation motivates the definition

$$\begin{aligned} Q_{[j,k]}(x_0^{(j-1)},x):=-\oint _{\gamma _{q_j}}\frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{x+k-j}}{\prod _{\ell =j}^k(q_\ell -z)}= \frac{q_j^{x+k-j}}{\prod _{\ell =j+1}^{k}(q_\ell -q_j)} \qquad \text {for } 1\le j\le k, \nonumber \\ \end{aligned}$$
(3.47)

where \(\gamma _{q_j}\) is the contour defined above. We also set

$$\begin{aligned} Q_{[j,k]}(x_0^{(j-1)},y):=0\qquad \text {for } 1\le k<j. \end{aligned}$$
(3.48)
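Under the standing assumption \(q_1<q_2<\cdots \), the closed form (3.47) can be sanity-checked numerically against the defining sum (3.45). The following minimal Python sketch (with illustrative parameter values, and using the one-step kernel \(Q_k(x,y)=q_k^{y-x}\mathbbm {1}_{y<x}\) implicit in (3.16)) truncates the sum over y, which converges geometrically precisely because \(q_1<q_2<q_3\):

```python
# Numerical check of (3.47) with j = 1, k = 3 (illustrative values q_1 < q_2 < q_3)
q = [1.2, 1.5, 2.0]

def Q_step(qk, x, y):
    # one-step kernel: Q(x, y) = q^{y-x} for y < x, and 0 otherwise
    return qk ** (y - x) if y < x else 0.0

def Q_23(y, x):
    # composition Q_2 ∘ Q_3: a finite sum over the intermediate point m, x < m < y
    return sum(Q_step(q[1], y, m) * Q_step(q[2], m, x) for m in range(x + 1, y))

x = -3
# left-hand side of (3.45): truncated sum over y (geometric decay since q_1 < q_2, q_3)
lhs = sum(q[0] ** y * Q_23(y, x) for y in range(x + 2, x + 400))
# right-hand side, the closed form (3.47): q_1^{x+2} / ((q_2 - q_1)(q_3 - q_1))
rhs = q[0] ** (x + 2) / ((q[1] - q[0]) * (q[2] - q[0]))
```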

We now summarize some basic properties of \(Q_{[j,k]}(x_0^{(j-1)},x)\), which we will use frequently:

Proposition 3.9

Assume that \(q_1<q_2<\cdots \) and take \(Q_{[j,k]}(x_0^{(j-1)},x)\) to be defined as in (3.47)–(3.48), where \(x_0^{(j-1)}\) are virtual variables regarded as \(\infty \).

  1. (i)

    For \(k\ge 1\) and \(y\in \mathbb {Z}\), we have

    $$\begin{aligned} Q_{[k,k]}(x_0^{(k-1)},y) = q_k^{y}=Q_k(x_0^{(k-1)},y), \end{aligned}$$
    (3.49)

    which is consistent with (3.42).

  2. (ii)

    Given \(j,k,n\ge 1\) and \(y\in \mathbb {Z}\) with \(j<k\) and \(j<n\), we have

    $$\begin{aligned} Q_{[j,k]}\circ Q_{(k,n]}(x_0^{(j-1)},y):= \sum _{x'\in \mathbb {Z}}Q_{[j,k]}(x_0^{(j-1)},x')\,Q_{(k,n]}(x',y) =Q_{[j,n]}(x_0^{(j-1)},y), \nonumber \\ \end{aligned}$$
    (3.50)

    where we allow the slight abuse of notation (3.7) for \(k\ge n\).

  3. (iii)

    For any \(j,k\ge 1\), we have

    $$\begin{aligned} Q_{[j,k]}(x_0^{(j-1)},y) = \lim _{x\rightarrow \infty } q_j^{x}\cdot Q_{[j,k]}(x,y). \end{aligned}$$
    (3.51)

Proof

  1. (i)

    This follows immediately from (3.47).

  2. (ii)

    To prove this, one can mimic (3.46), using the integral representation (3.16) and splitting the sum into two geometric series with contours deformed in such a way that both series converge simultaneously.

  3. (iii)

    We first prove (3.51) for \(1\le j\le k\), using again the integral representation (3.16). Since \(q_j<q_{j+1}<\cdots \), if we deform the contour \(|z|=r\) to be a slightly larger circle \(|z|=r'\) with \(0<r<q_j<r'<q_{j+1}\), the only extra contribution of the integral comes from the residue at \(z=q_j\). Hence, we have

    $$\begin{aligned} Q_{[j,k]}(x,y)&= \oint _{|z|=r} \frac{\textrm{d} z}{2\pi \textrm{i}z} \cdot \frac{z^{y-x+k-j+1}}{\prod _{\ell =j}^{k}(q_\ell -z)}\\&= \oint _{|z|=r'} \frac{\textrm{d} z}{2\pi \textrm{i}z} \cdot \frac{z^{y-x+k-j+1}}{\prod _{\ell =j}^{k}(q_\ell -z)}+\frac{q_j^{y-x+k-j}}{\prod _{\ell =j+1}^{k}(q_\ell -q_j)}. \end{aligned}$$

    Note that

    $$\begin{aligned} \left| q_j^x\cdot \oint _{|z|=r'}\frac{\textrm{d} z}{2\pi \textrm{i}z} \cdot \frac{z^{y-x+k-j+1}}{\prod _{\ell =j}^{k}(q_\ell -z)}\right| \le C\cdot \left( \frac{q_j}{r'}\right) ^{x}\rightarrow 0,\qquad \text {as }x\rightarrow \infty , \end{aligned}$$

    for some constant \(C>0\). Thus, by (3.47), we have

    $$\begin{aligned} \lim _{x\rightarrow \infty } q_j^{x}\cdot Q_{[j,k]}(x,y)= \frac{q^{y+k-j}_j}{\prod _{\ell =j+1}^{k}(q_\ell -q_j)}=Q_{[j,k]}(x_0^{(j-1)},y). \end{aligned}$$

    To prove (3.51) for \(j>k\), notice that in this case, using our convention (3.7), we have

    $$\begin{aligned} Q_{[j,k]}(x,y)= Q_{j-1}^{-1}\circ \cdots \circ Q^{-1}_{k+1}(x,y)=0, \end{aligned}$$

    whenever \(x>y\). By definition (3.48), it then follows that

    $$\begin{aligned} \lim _{x\rightarrow \infty } q_j^{x}\cdot Q_{[j,k]}(x,y)= 0 =Q_{[j,k]}(x_0^{(j-1)},y), \end{aligned}$$

    as desired.

\(\square \)
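As an illustration of part (ii), the semigroup property (3.50) can be verified numerically in a small case, here \(j=1\), \(k=2\), \(n=3\) with illustrative parameter values: the left-hand side is evaluated by truncating the sum over the intermediate point, the right-hand side by the closed form (3.47).

```python
q = [1.2, 1.5, 2.0]  # illustrative, q_1 < q_2 < q_3

def Q_bracket(j, k, y):
    # closed form (3.47) for Q_{[j,k]}(x_0^{(j-1)}, y), 1-based indices j <= k
    den = 1.0
    for l in range(j + 1, k + 1):
        den *= q[l - 1] - q[j - 1]
    return q[j - 1] ** (y + k - j) / den

def Q_step(k, x, y):
    # one-step kernel Q_k(x, y) = q_k^{y-x} for y < x, and 0 otherwise
    return q[k - 1] ** (y - x) if y < x else 0.0

y0 = 4
# left-hand side of (3.50) with j = 1, k = 2, n = 3: sum over x of Q_{[1,2]}(x_0, x) Q_3(x, y0)
lhs = sum(Q_bracket(1, 2, x) * Q_step(3, x, y0) for x in range(y0 + 1, y0 + 400))
rhs = Q_bracket(1, 3, y0)
```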

The following proposition is rather standard in the theory of determinantal point processes. However, here we unveil an additional important piece of structure, namely the triangularity of the matrix \(\mathsf M\) appearing in the correlation kernel, which can be seen from our non-intersecting paths formulation.

Define first, for \(n< N\), the following generalisation of the functions \(\Psi _k^{(N)}\) from (3.33):

$$\begin{aligned} \Psi _k^{(n)}(x):= Q_{(n,N]}\circ \Psi _{k}^{(N)}(x) = \sum _{z\in \mathbb {Z}} Q_{(n,N]}(x,z) \Psi _{k}^{(N)}(z). \end{aligned}$$
(3.52)

We remark that the summation over z is actually within the finite range \(y_k-N\le z < x\); see Fig. 4 and recall the definition of the operator Q.

Proposition 3.10

(Correlation kernel and Fredholm determinant). Let the functions \(\Psi ^{(n)}_k\) be defined by (3.33) and (3.52). Let \( Q_{[j,n]}(x_0^{(j-1)},y)\) be defined by (3.47)–(3.48). Under the assumption that \(q_1<q_2<\cdots \), the determinantal point process (3.40) admits the Fredholm determinant representation

$$\begin{aligned} {{\mathbb {E}}} \Bigg [ \prod _{1\le {j\le i}\le N} \big (1+g(i,x^{(i)}_j) \big ) \Bigg ] =\det ( I+g K)_{\ell ^2(\{1,\ldots ,N\}\times \mathbb {Z})} \end{aligned}$$

for any bounded test function \(g:\{1,\ldots ,N\}\times \mathbb {Z}\rightarrow \mathbb {R}\). The correlation kernel K is given by

$$\begin{aligned} K(m,x;n,x')=- Q_{(m,n]}(x,x')\mathbbm {1}_{\{n>m\}} + \sum _{i,j=1}^N \Psi ^{(m)}_i(x) \, \big [ {\textsf{M}}^{-1}\big ]_{i,j} \, Q_{[j,n]}(x_{0}^{(j-1)},x'), \end{aligned}$$
(3.53)

where the matrix \({\textsf{M}}\) is defined by

$$\begin{aligned} {\textsf{M}}_{i,j}:= \sum _{z\in \mathbb {Z}} Q_{[i,N]}(x_{0}^{(i-1)},z) \, \Psi ^{(N)}_j(z)\qquad \text {for}\quad 1\le i,j\le N. \end{aligned}$$
(3.54)

Furthermore, \({\textsf{M}}\) is upper-triangular and invertible.

Proof

Except for the stated properties of \({\textsf{M}}\), Proposition 3.10 follows from [BFPS07, Lemma 3.4] and the accompanying remark, see also [Joh03, Proposition 2.1]. We now check that, under our assumption that \(q_1<q_2<\cdots \), the matrix \(\mathsf M\) is indeed upper-triangular with nonzero diagonal entries, and therefore invertible. By (3.54) and (3.33), we have

$$\begin{aligned} {\textsf{M}}_{i,j} = \sum _{z\in \mathbb {Z}} Q_{[i,N]}(x_0^{(i-1)},z)\cdot \big ( R^*_{(r,t]}\circ Q_{(j,N]}^{-1} \big ) (z,{y_j-j}) \end{aligned}$$
(3.55)

for \(1\le i,j\le N\). Recalling formula (3.47) for \(Q_{[i,N]}(x_0^{(i-1)},z)\) and expressing the Toeplitz operator \(R^*_{(r,t]}\circ Q_{(j,N]}^{-1}\) as a contour integral in the usual way (see (4.6) in the next section for details), we see that

$$\begin{aligned} {\textsf{M}}_{i,j}&= \sum _{z\in \mathbb {Z}}\left( -\oint _{\gamma _{q_i}}\frac{\textrm{d}\xi }{2\pi \textrm{i}} \frac{\xi ^{z+N-i}}{\prod _{\ell =i}^N(q_\ell -\xi )} \right) \oint _{|w|=r} \frac{\textrm{d} w}{2\pi \textrm{i}} w^{{y_j-z-N-1}}\prod _{\ell =j+1}^{N}(q_\ell -w)\prod _{\ell =r+1}^{t}(1+p_\ell w)\\&= -\oint _{\gamma _{q_i}}\frac{\textrm{d}\xi }{2\pi \textrm{i}}\oint _{|w|=r} \frac{\textrm{d} w}{2\pi \textrm{i}} \xi ^{N-i}w^{{y_j-N-1}}\frac{\prod _{\ell =j+1}^{N}(q_\ell -w)\prod _{\ell =r+1}^{t}(1+p_\ell w)}{\prod _{\ell =i}^N(q_\ell -\xi )}\sum _{z\ge {y_j-N}}\left( \frac{\xi }{w}\right) ^{z} \\&= -\oint _{\gamma _{q_i}}\frac{\textrm{d}\xi }{2\pi \textrm{i}}\oint _{|w|=r} \frac{\textrm{d} w}{2\pi \textrm{i}} \xi ^{{y_j-i}}\cdot \frac{\prod _{\ell =j+1}^{N}(q_\ell -w)\prod _{\ell =r+1}^{t}(1+p_\ell w)}{\prod _{\ell =i}^N(q_\ell -\xi )}\cdot \frac{1}{w-\xi }, \end{aligned}$$

where \(r>\max \{|\xi |:\xi \in \gamma _{q_i}\}\), so that the sum over z converges. Note that in the second equality we can restrict the sum to \(z\ge {y_j-N}\), since the contour integral with respect to w vanishes for \(z<{y_j-N}\) due to analyticity. The only pole inside the w-contour is at \(w=\xi \), and calculating its residue yields

$$\begin{aligned} {\textsf{M}}_{i,j} =-\oint _{\gamma _{q_i}}\frac{\textrm{d}\xi }{2\pi \textrm{i}}\xi ^{{y_j-i}}\cdot \frac{\prod _{\ell =j+1}^{N}(q_\ell -\xi )\prod _{\ell =r+1}^{t}(1+p_\ell \xi )}{\prod _{\ell =i}^N(q_\ell -\xi )}. \end{aligned}$$

Note now that, for \(i>j\), the integrand is analytic at \(q_i\), hence the integral vanishes and \({\textsf{M}}_{i,j}=0\). On the other hand, when \(i=j\), by our assumptions on the parameters we have

$$\begin{aligned} {\textsf{M}}_{i,i}=-\oint _{\gamma _{q_i}}\frac{\textrm{d}\xi }{2\pi \textrm{i}}\frac{\xi ^{{y_i-i}}}{q_i-\xi }\cdot \prod _{\ell =r+1}^{t}(1+p_\ell \xi )=q_i^{{y_i-i}}\cdot \prod _{\ell =r+1}^{t}(1+p_\ell q_i), \end{aligned}$$

which is nonzero. We conclude that the matrix \({\textsf{M}}\) is upper-triangular and invertible. \(\square \)
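The residue computations above can be double-checked by evaluating the final contour integral for \({\textsf{M}}_{i,j}\) numerically. The sketch below (with illustrative values for \({\varvec{q}}\), \({\varvec{p}}\), \({\varvec{y}}\), and a trapezoid rule on a small circle around \(q_i\)) confirms that the entries below the diagonal vanish and that the diagonal entries match \(q_i^{y_i-i}\prod _{\ell }(1+p_\ell q_i)\):

```python
import numpy as np

# illustrative parameters with q_1 < q_2 < q_3; y encodes the initial configuration
q = np.array([1.2, 1.5, 2.0])
p = np.array([0.3, 0.4])
y = np.array([5, 3, 1])
N = 3

def M_entry(i, j, rad=0.05, n=2048):
    # trapezoid-rule evaluation of the contour integral for M_{i,j} on the
    # circle xi = q_i + rad * e^{i theta} (rad small enough to enclose only q_i)
    th = 2 * np.pi * np.arange(n) / n
    xi = q[i - 1] + rad * np.exp(1j * th)
    f = xi ** float(y[j - 1] - i)
    for l in range(j + 1, N + 1):
        f = f * (q[l - 1] - xi)
    f = f * np.prod([1 + pl * xi for pl in p], axis=0)
    for l in range(i, N + 1):
        f = f / (q[l - 1] - xi)
    # d(xi)/(2 pi i) = (xi - q_i) d(theta)/(2 pi), and the integral carries a minus sign
    return -np.mean(f * (xi - q[i - 1])).real
```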

Remark 3.11

Here we provide a more intuitive, pathwise explanation of the fact that the matrix \({\textsf{M}}\) is upper-triangular. By (3.51), for \(i>j\) we have

$$\begin{aligned} {\textsf{M}}_{i,j}&:= \sum _{z\in \mathbb {Z}} Q_{[i,N]}(x_0^{(i-1)},z) \, \Psi ^{(N)}_j(z) \\&= \sum _{z\in \mathbb {Z}}\lim _{x\rightarrow \infty }q_i^{x}\cdot \big ( Q_{i}\circ \cdots \circ Q_N\big )(x,z)\big ( R^*_{(r,t]}\circ Q_N^{-1}\circ \cdots \circ Q_{j+1}^{-1} \big ) (z,{y_j-j})\\&= \lim _{x\rightarrow \infty }q_i^{x}\cdot \big ( R^*_{(r,t]}\circ Q^{-1}_{[j+1,i-1]} \big ) (x,{y_j-j}), \end{aligned}$$

where in the last equality we used the fact that all the operators commute. Viewing the operators as associated to paths, \(R^*\)-operators take at most one step to the left at a time, whereas \(Q^{-1}\)-operators take at most one step to the right at a time. Thus, if \(i>j\), for any sufficiently large x, the point \({y_j-j}\) cannot be reached from x by applying the operator \(R_{(r,t]}^*\circ Q^{-1}_{[j+1,i-1]}\); hence \(R_{(r,t]}^*\circ Q^{-1}_{[j+1,i-1]}(x,{y_j-j})=0\). This shows that \({\textsf{M}}_{i,j}=0\) for \(i>j\).

We now derive a simplified expression for the correlation kernel in terms of biorthogonal functions. Define

$$\begin{aligned} \Phi ^{(n)}_i(x):=\sum _{j=1}^n \big [ {\textsf{M}}^{-1}\big ]_{i,j} \, Q_{[j,n]}(x_{0}^{(j-1)},x) {=\sum _{j=i}^n \big [ {\textsf{M}}^{-1}\big ]_{i,j} \, Q_{[j,n]}(x_{0}^{(j-1)},x)} ,\qquad x\in \mathbb {Z}, \end{aligned}$$
(3.56)

where the latter equality is due to the fact that \({\textsf{M}}\) (and hence \({\textsf{M}}^{-1}\)) is upper-triangular.

Proposition 3.12

The kernel K in (3.53) can be written as

$$\begin{aligned} K(m,x;n,x')=- Q_{(m,n]}(x,x')\mathbbm {1}_{\{n>m\}} + \sum _{i=1}^n \Psi ^{(m)}_i(x) \, \Phi ^{(n)}_i(x'). \end{aligned}$$
(3.57)

Moreover, for \(1\le i,j\le n\) and \(n=1,\ldots ,N\), the following biorthogonality relation holds:

$$\begin{aligned} \sum _{x\in \mathbb {Z}} \Psi ^{(n)}_i(x) \, \Phi ^{(n)}_j(x)&= \delta _{i,j}, \end{aligned}$$
(3.58)

where \(\delta _{i,j}\) is the Kronecker delta.

Proof

Since \({\textsf{M}}\) is upper-triangular and, by (3.48), \(Q_{[j,n]}(x_0^{(j-1)},x) =0\) whenever \(j>n\), the summation in (3.53) over i and j can be restricted to \(1\le i\le j\le n\), yielding

$$\begin{aligned} K(m,x;n,x')=-Q_{(m,n]}(x,x')\mathbbm {1}_{\{n>m\}} + \sum _{i=1}^n \Psi ^{(m)}_i(x) \, \sum _{j=i}^n \big [ {\textsf{M}}^{-1}\big ]_{i,j} \, Q_{[j,n]}(x_0^{(j-1)},x'). \end{aligned}$$

Therefore, (3.57) follows from definition (3.56).

To prove the biorthogonality relation (3.58), recalling the definitions (3.52) and (3.56) of \(\Psi ^{(n)}_i(x)\) and \(\Phi ^{(n)}_j(x)\), we compute

$$\begin{aligned} \sum _{x\in \mathbb {Z}} \Psi ^{(n)}_i(x) \, \Phi ^{(n)}_j(x)&=\sum _{x\in \mathbb {Z}} \left\{ \sum _{z\in \mathbb {Z}} Q_{(n,N]}(x,z)\Psi ^{(N)}_i(z)\right\} \left\{ \sum _{k=1}^N \big [{\textsf{M}}^{-1}\big ]_{j,k} \, Q_{[k,n]}(x_{0}^{(k-1)},x) \right\} \\&= \sum _{k=1}^N \big [{\textsf{M}}^{-1}\big ]_{j,k} \sum _{z\in \mathbb {Z}} \Psi ^{(N)}_i(z) \sum _{x\in \mathbb {Z}} Q_{(n,N]}(x,z) \, Q_{[k,n]}(x^{(k-1)}_0,x)\\&{\mathop {=}\limits ^{(3.50)}} \sum _{k=1}^N \big [{\textsf{M}}^{-1}\big ]_{j,k} \sum _{z\in \mathbb {Z}} \Psi ^{(N)}_i(z) \, Q_{[k,N]}(x^{(k-1)}_0,z)\\&{\mathop {=}\limits ^{(3.54)}} \sum _{k=1}^N \big [{\textsf{M}}^{-1}\big ]_{j,k} \cdot {\textsf{M}}_{k,i} =\delta _{i,j} . \end{aligned}$$

Note that the exchange of the summations over x and z is justified by absolute convergence, due to our working assumption \(q_1<q_2<\cdots \); see the discussion at the beginning of this subsection. \(\square \)

4 Boundary Value Problem and Random Walk Hitting Times

The goal of this section is to prove our main result, Theorem 1.1. To this end, in Sect. 4.1 we establish some equivalent formulations of the contour integrals \(\mathcal {S}\) and \(\bar{\mathcal {S}}\) in terms of local operators. Next, in Sect. 4.2, we establish a relationship between the functions \(\Phi ^{(n)}_i(x)\), implicitly defined in (3.56), and a terminal-boundary value problem for a discrete heat equation. In Sect. 4.3 and Sect. 4.4, we express the solution to this problem in terms of random walk hitting probabilities. We will first do so under the additional assumption that \(q_1<q_2<\cdots \), and then extend the result to general parameters through an analytic continuation argument, thus completing the proof of Theorem 1.1.

A boundary value problem of this kind and its connection to random walk hitting problems were first formulated in [MQR21]. However, our approach emphasizes the role of local operators and their path interpretation; this might shed some additional light on the nature of the boundary value problem itself. Furthermore, our main technical tool, Proposition 4.6, requires a proof completely different from that of [MQR21]. The reason is that, in the case of inhomogeneous rates, the kernel is not a polynomial in the spatial variables, whereas polynomiality is crucial in the proof of the random walk hitting formulas given in [MQR21]. In Sect. 4.5, we will develop a delicate induction argument to prove Proposition 4.6.

As a preliminary notational remark, notice that, in Sects. 2 and 3, the vector \({\varvec{y}}\) encoding the initial configuration satisfied the weak inequalities \(y_1\ge \cdots \ge y_N\), according to the original \(\textsf{dRSK}\) dynamics of Sect. 2. On the other hand, in this section, we will always assume that \({\varvec{y}}\) satisfies the strict inequalities \(y_1> \cdots > y_N\), matching more closely the \(\textsf{dTASEP}\) initial configuration and the notation of Theorem 1.1. To translate formulas from the notation of previous sections, it will suffice to replace each \(y_k\) with \(y_k+k\) for all \(1\le k\le N\).

4.1 Preliminaries towards the Fredholm determinant

Moving towards the Fredholm determinant formula of Theorem 1.1, here we prove an alternative representation of \(\mathcal {S}\) and \(\bar{\mathcal {S}}\) involving local operators. This will give a natural connection between the path constructions of Sect. 3 and the random walk hitting times.

For \(1\le j\le k\), we define the operators \(\bar{Q}_{[j,k]}\) by

$$\begin{aligned} \begin{aligned} \bar{Q}_{[j,k]}(x,y)&{:= Q_{[j,k]}(x,y)+(-1)^{k-j} Q^\dagger _{[j,k]}(x,y)} \\&={\left\{ \begin{array}{ll} \left( Q_{j}\circ \cdots \circ Q_k\right) (x,y)\quad &{}x>y,\\ (-1)^{k-j}\left( Q^{\dagger }_{j}\circ \cdots \circ Q^{\dagger }_k\right) (x,y)\quad &{}x\le y, \end{array}\right. } \end{aligned} \end{aligned}$$
(4.1)

where \(x,y\in \mathbb {Z}\). It is then straightforward to check that

$$\begin{aligned} Q_j^{-1}\circ \bar{Q}_{[j,k]}= \bar{Q}_{[j+1,k]} \qquad \text {and}\qquad \bar{Q}_{[j,k]}\circ Q_{k}^{-1}= \bar{Q}_{[j,k-1]}, \end{aligned}$$

whenever \(j<k\). However, when \(j=k\), we have

$$\begin{aligned} \bar{Q}_{[k,k]}(x,y) = Q_{k}(x,y)+Q^\dagger _{k}(x,y) = q_k^{y-x} \qquad \text {for all } x,y\in \mathbb {Z}, \end{aligned}$$
(4.2)

hence we deduce from (3.3) that

$$\begin{aligned} Q_k^{-1}\circ \bar{Q}_{[k,k]}=\bar{Q}_{[k,k]}\circ Q_{k}^{-1}= 0. \end{aligned}$$

Note also that \(\bar{Q}_{[j,k]}\circ \bar{Q}_{[k+1,\ell ]}\) may not be well defined. The operator \(\bar{Q}_{[j,k]}\) is Toeplitz, but its symbol does not converge anywhere in the complex plane. However, recalling (3.15) and (3.16), we can still express it as a contour integral:

$$\begin{aligned} \begin{aligned} \bar{Q}_{[j,k]}(x,y)&=\oint _{|z|=r} \frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{y-x+k-j}}{\prod _{\ell =j}^{k}(q_\ell -z)}-\oint _{|z|=R} \frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{y-x+k-j}}{\prod _{\ell =j}^{k}(q_\ell -z)}\\&=-\oint _{\Gamma _{{\varvec{q}}}} \frac{\textrm{d}z}{2\pi \textrm{i}} \frac{z^{y-x+k-j}}{\prod _{\ell =j}^{k}(q_\ell -z)}, \end{aligned} \end{aligned}$$
(4.3)

where \(0<r<q_\ell <R\) for all \(j\le \ell \le k\), and \(\Gamma _{{\varvec{q}}}\) is any simple closed contour enclosing \(q_\ell \) for all \(j\le \ell \le k\) but not 0.
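The annihilation identity \(Q_k^{-1}\circ \bar{Q}_{[k,k]}=0\) boils down to the two-term cancellation \(-q_k^{y-x}+q_k\cdot q_k^{y-x-1}=0\). A minimal sketch making this explicit (the kernel \(Q_k^{-1}(x,y)=-\mathbbm {1}_{y=x}+q_k\mathbbm {1}_{y=x+1}\) used below is read off from (3.3), and is consistent with the recursion (4.13)):

```python
q_k = 1.5  # illustrative value

def Qbar_kk(x, y):
    # Qbar_{[k,k]}(x, y) = q_k^{y-x} for all x, y, as in (4.2)
    return q_k ** (y - x)

def Qinv_k(x, y):
    # kernel of Q_k^{-1}: -1 on the diagonal, q_k one step to the right
    return -1.0 if y == x else (q_k if y == x + 1 else 0.0)

def annihilate(x, y):
    # (Q_k^{-1} o Qbar_{[k,k]})(x, y): only z = x and z = x + 1 contribute
    return sum(Qinv_k(x, z) * Qbar_kk(z, y) for z in (x, x + 1))
```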

Proposition 4.1

The operators \(\mathcal {S}\) and \(\bar{\mathcal {S}}\) defined in (1.1) and (1.2) can be rewritten as

$$\begin{aligned}&\mathcal {S}_{[j,k],(r,t]}(x,y)= \left( {Q_{[j,k]}^{-1}}\circ R_{(r,t]}^*\right) (x,y),\end{aligned}$$
(4.4)
$$\begin{aligned}&\bar{\mathcal {S}}_{[j,k],(r,t]}(x,y)= \prod _{\ell =j}^{k}(q_\ell -1)\cdot \left( \bar{Q}_{[j,k]}\circ (R_{(r,t]}^*)^{-1}\right) (x,y), \end{aligned}$$
(4.5)

where the second formula holds under the hypothesis that \(p_iq_\ell <1\) for all \(r<i\le t\) and \(j\le \ell \le k\) (this condition is equivalent to the convergence of the right-hand side of (4.5)).

Proof

By (3.14) and Proposition 3.1, the symbol of the (bi-infinite) Toeplitz operator \(Q_{[j,k]}^{-1}\) is \(\varphi _{Q_{[j,k]}^{-1}}(z)=z^{-(k-j+1)} \prod _{\ell =j}^k (q_\ell - z)\), for \(|z|>0\). On the other hand, the Toeplitz operator \(R_k^{*}\) has kernel \(R_k^*(x,y)=\mathbbm {1}_{y=x}+p_k\mathbbm {1}_{y=x-1}\) and symbol \(\varphi _{{R}_k^{*}}(z)=\sum _{x\in \mathbb {Z}}R_k^*(0,x)z^{-x}=1+p_k z\). Therefore, the composition \(R_{(r,t]}^{*}\) is also a Toeplitz operator with symbol \(\varphi _{{R}_{(r,t]}^{*}}(z)=\prod _{\ell =r+1}^{t}(1+p_\ell z)\) for every \(z\in \mathbb {C}\). It then follows from Proposition 3.1 that

$$\begin{aligned} \left( Q_{[j,k]}^{-1}\circ R_{(r,t]}^*\right) (x,y) = \oint _{|z|=r} \frac{\textrm{d}z}{2\pi \textrm{i}z} z^{y-x-k+j-1} \prod _{\ell =j}^{k}(q_\ell -z)\cdot \prod _{\ell =r+1}^{t}(1+p_\ell z), \nonumber \\ \end{aligned}$$
(4.6)

for any \(r>0\). This, combined with definition (1.1), proves (4.4).

Let us now prove (4.5). By Proposition 3.1, we may express the inverse of \(R_{(r,t]}^*\) as

$$\begin{aligned} (R_{(r,t]}^*)^{-1}(x,y) = \oint _{|z|=R} \frac{\textrm{d}z}{2\pi \textrm{i}z} \frac{z^{y-x}}{\prod _{\ell =r+1}^{t}(1+p_\ell z)}, \qquad x,y\in \mathbb {Z}, \end{aligned}$$

where \(0<R<\min \{p_\ell ^{-1}\}\). Using (4.3) for \(\bar{Q}_{[j,k]}\) and noting from the above expression that \((R_{(r,t]}^*)^{-1}(x,y)=0\) if \(x<y\), we can formally write

$$\begin{aligned}&\left( \bar{Q}_{[j,k]}\circ (R_{(r,t]}^*)^{-1}\right) (x,y) =\sum _{x'\ge y} \bar{Q}_{[j,k]}(x,x') \, (R_{(r,t]}^*)^{-1}(x',y) \\&= -\oint _{|z|=R}\frac{\textrm{d}z}{2\pi \textrm{i}z}\oint _{\Gamma _{{\varvec{q}}}}\frac{\textrm{d}w}{2\pi \textrm{i}} \frac{z^y w^{-x+k-j}}{\prod _{\ell =j}^{k}(q_\ell -w)\cdot \prod _{\ell =r+1}^t (1+p_\ell z)}\sum _{x'\ge y} \left( \frac{w}{z}\right) ^{x'}. \end{aligned}$$

In order for \(\big (\bar{Q}_{[j,k]}\circ (R_{(r,t]}^*)^{-1}\big )(x,y)\) to converge, i.e. for the geometric series \(\sum _{x'\ge y} (w/z)^{x'}\) to converge, we need to take \(\Gamma _{{\varvec{q}}}\) to be inside the circle \(|z|=R\); see Fig. 6 for an illustration of the contours. This is only possible if \(\max _{j\le \ell \le k}\{q_\ell \}<\min _{r<i\le t}\{p_{i}^{-1}\}\), or equivalently \(p_iq_\ell <1\) for all \(r<i\le t\) and \(j\le \ell \le k\). Under this hypothesis, we evaluate the geometric series and compute the only residue of the z-contour at \(z=w\), thus obtaining

$$\begin{aligned}&\left( \bar{Q}_{[j,k]}\circ (R_{(r,t]}^*)^{-1}\right) (x,y) \\&= -\oint _{|z|=R}\frac{\textrm{d}z}{2\pi \textrm{i}}\oint _{\Gamma _{{\varvec{q}}}}\frac{\textrm{d}w}{2\pi \textrm{i}} \frac{w^{y-x+k-j}}{\prod _{\ell =j}^{k}(q_\ell -w)\cdot \prod _{\ell =r+1}^t (1+p_\ell z)}\frac{1}{z-w} \\&= -\oint _{\Gamma _{{\varvec{q}}}}\frac{\textrm{d}w}{2\pi \textrm{i}} \frac{w^{y-x+k-j}}{\prod _{\ell =j}^{k}(q_\ell -w) \cdot \prod _{\ell =r+1}^t (1+p_\ell w)}. \end{aligned}$$

This, combined with definition (1.2), leads to (4.5). \(\square \)
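As a cross-check of (4.4) and (4.6), the sketch below (illustrative parameters, with \(j=1\), \(k=2\) and two p's) compares a numerical evaluation of the contour integral (4.6) with the kernel obtained by directly composing the finitely supported kernels of the local operators \(Q_\ell ^{-1}\) and \(R_i^*\):

```python
import numpy as np

q = [1.3, 1.6]  # q_j, ..., q_k  (illustrative)
p = [0.4, 0.7]  # p_{r+1}, ..., p_t  (illustrative)

def S_contour(d, r=1.0, n=512):
    # trapezoid-rule evaluation of (4.6) at y - x = d on the circle |z| = r:
    # the integral oint dz/(2 pi i z) g(z) equals the mean of g over the circle
    th = 2 * np.pi * np.arange(n) / n
    z = r * np.exp(1j * th)
    f = z ** (d - len(q))
    for qq in q:
        f = f * (qq - z)
    for pp in p:
        f = f * (1 + pp * z)
    return np.mean(f).real

# the same kernel, by composing the local operators directly (Toeplitz = convolution):
ker = np.array([1.0])
for qq in q:
    ker = np.convolve(ker, np.array([-1.0, qq]))  # Q_l^{-1}(x,y) = -1_{y=x} + q_l 1_{y=x+1}
for pp in p:
    ker = np.convolve(ker, np.array([pp, 1.0]))   # R_i^*(x,y) = 1_{y=x} + p_i 1_{y=x-1}
# ker[i] is the kernel at offset d = y - x = i - len(p), for d in {-2, ..., 2}
```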

Fig. 6

The contours \(\Gamma _{{\varvec{q}}}\) and \(|z|=R\). Here, the red dots represent the points \(\{-p_\ell ^{-1}\}\) and the blue dots represent the points \(\{q_\ell \}\)

4.2 Terminal-boundary value problem

Throughout this subsection, we assume that \(q_1<q_2<\cdots \).

Proposition 4.2

Recall the definitions of \(Q_k\) and its inverse in (3.2)–(3.3). Recall also that \({\varvec{y}}=(y_1> \cdots > y_N)\) is an arbitrary vector encoding the initial configuration of \(\textsf{dTASEP}\). For \(n\le N\) and \(k\in \{0,\ldots ,n-1\}\), consider the terminal-boundary value problem

$$\begin{aligned} h^n_k(\ell +1,x)= h^n_k(\ell ,\cdot )\circ Q^{-1}_{n-\ell }(x),\qquad 0\le \ell <k,\ x\in \mathbb {Z}, \end{aligned}$$
(4.7)

$$\begin{aligned} h^n_k(\ell ,y_{n-\ell })=0,\qquad 0\le \ell <k, \end{aligned}$$
(4.8)

$$\begin{aligned} h^n_k(k,x)=q_{n-k}^{x-y_{n-k}},\qquad x\in \mathbb {Z}. \end{aligned}$$
(4.9)

Then, for \(0\le \ell \le k\), the functions

$$\begin{aligned} h^n_k(\ell ,x):=\Phi ^{(n)}_{n-k}\circ R^*_{(r,t]} \circ Q^{-1}_{(n-\ell ,n]} (x) \end{aligned}$$
(4.10)

solve (4.7)–(4.9). In particular, we have that

$$\begin{aligned} \Phi ^{(n)}_{n-k}(x)= h^n_k(0,\cdot ) \circ (R^*_{(r,t]})^{-1}(x). \end{aligned}$$
(4.11)

Proof

It is clear that \(h^n_k\) defined in (4.10) satisfies (4.7), since, for \(\ell <k\),

$$\begin{aligned} \begin{aligned} h^n_k(\ell +1,x)&:= \Phi ^{(n)}_{n-k}\circ R^*_{(r,t]} \circ \big \{ Q^{-1}_n \circ \cdots \circ Q^{-1}_{n-\ell }\big \} (x) \\&= \Phi ^{(n)}_{n-k}\circ R^*_{(r,t]} \circ \big \{ Q^{-1}_n \circ \cdots \circ Q^{-1}_{n-\ell +1}\big \} \circ Q^{-1}_{n-\ell } (x) \\&= h^{n}_k(\ell ,\cdot ) \circ Q^{-1}_{n-\ell } (x). \end{aligned} \end{aligned}$$

The boundary condition (4.8) follows from the biorthogonality property (3.58) and the definition of \(\Psi ^{(n)}_k\) in (3.33) and (3.52): for \(\ell <k\),

$$\begin{aligned} \begin{aligned} h^{n}_k(\ell ,y_{n-\ell })&=\big (\Phi ^{(n)}_{n-k}\circ R^*_{(r,t]} \circ Q^{-1}_{(n-\ell , n]}\big ) (y_{n-\ell }) \\&=\sum _{x\in \mathbb {Z}} \Phi ^{(n)}_{n-k}(x) \cdot \big (R^*_{(r,t]}\circ Q^{-1}_{(n-\ell ,n]}\big )(x,y_{n-\ell }) =\sum _{x\in \mathbb {Z}} \Phi ^{(n)}_{n-k}(x)\Psi ^{(n)}_{n-\ell }(x) =0. \end{aligned} \end{aligned}$$

Finally, we check the terminal condition (4.9). By the definition (3.56) of \(\Phi ^{(n)}_{n-k}\), we have

$$\begin{aligned} \begin{aligned} h^n_k(k,x)&= \Phi ^{(n)}_{n-k}\circ R^*_{(r,t]} \circ Q^{-1}_{(n-k,n]} (x) \\&= \sum _{j=n-k}^{{n}} \big [{\textsf{M}}^{-1}\big ]_{n-k,j} \sum _{y\in \mathbb {Z}} Q_{[j,n]} (x^{(j-1)}_0,y) \cdot {\left( R^*_{(r,t]} \circ Q^{-1}_{(n-k,n]}\right) (y,x).} \end{aligned}\nonumber \\ \end{aligned}$$
(4.12)

Using (3.51) and recalling that the Toeplitz operators Q, \(Q^{-1}\) and \(R^*\) all commute with each other, we have, for \(j>n-k\),

$$\begin{aligned} \begin{aligned}&\sum _{y\in \mathbb {Z}} Q_{[j,n]} (x^{(j-1)}_0,y) \cdot \left( R^*_{(r,t]} \circ Q^{-1}_{(n-k,n]}\right) (y,x) \\&\quad = \; \sum _{y\in \mathbb {Z}} \lim _{z\rightarrow \infty }q_j^{z} \cdot Q_{[j,n]}(z,y) \cdot \left( R^*_{(r,t]} \circ Q^{-1}_{(n-k,n]}\right) (y,x)\\&\quad = \lim _{z\rightarrow \infty }q_j^{z} \left( R^*_{(r,t]} \circ Q^{-1}_{[n-k+1,j-1]}\right) (z,x). \end{aligned} \end{aligned}$$

The same argument of Remark 3.11 shows that the latter limit is zero for \(j>n-k\). Therefore, the only surviving summand in (4.12) is the one corresponding to \(j=n-k\). We thus have

$$\begin{aligned} \begin{aligned} h^n_k(k,x)&= \big [ {\textsf{M}}^{-1} \big ]_{n-k,n-k} \sum _{y\in \mathbb {Z}} Q_{[n-k,n]} (x^{(n-k-1)}_{0},y) \cdot \left( R^*_{(r,t]} \circ Q^{-1}_{(n-k,n]}\right) (y,x) \\&=\frac{\sum _{y\in \mathbb {Z}} Q_{n-k}(x^{(n-k-1)}_{0},y) \cdot R^*_{(r,t]}(y,x)}{\sum _{y\in \mathbb {Z}} Q_{n-k}(x^{(n-k-1)}_{0},y) \cdot R^*_{(r,t]}(y,y_{n-k})} \\&= \frac{q_{n-k}^{x}\cdot \sum _{y\in \mathbb {Z}} q_{n-k}^{y-x} \cdot R^*_{(r,t]}(y,x)}{q_{n-k}^{y_{n-k}}\cdot \sum _{y\in \mathbb {Z}}q_{n-k}^{y-y_{n-k}} \cdot R^*_{(r,t]}(y,y_{n-k})} \end{aligned} \end{aligned}$$

In the second equality we used again the commutativity of the operators, the fact that \(\big [\textsf{M}^{-1}\big ]_{n-k,n-k}=(\textsf{M}_{n-k,n-k})^{-1}\) (as \(\mathsf M\) is upper-triangular), and (3.55). In the third equality we used (3.49). Since \(R_{(r,t]}^*\) is a Toeplitz operator, the sum

$$\begin{aligned} \sum _{y\in \mathbb {Z}} q_{n-k}^{y-x} \cdot R^*_{(r,t]}(y,x) = \sum _{y\in \mathbb {Z}} q_{n-k}^{y-x} \cdot R^*_{(r,t]}(y-x,0) = \sum _{y\in \mathbb {Z}} q_{n-k}^{y} \cdot R^*_{(r,t]}(y,0) \end{aligned}$$

does not depend on x. Therefore, in the latest expression of \(h^n_k(k,x)\), the two sums appearing in the numerator and denominator cancel each other, and we obtain \(h^n_k(k,x)=q_{n-k}^{x-y_{n-k}}\), as desired. \(\square \)

Recalling (3.3), Eq. (4.7) can be written equivalently as

$$\begin{aligned} h_k^{n}(\ell +1,x)= -h_k^{n}(\ell ,x)+q_{n-\ell }\cdot h_k^{n}(\ell ,x-1). \end{aligned}$$
(4.13)

If we solve the latter recursively in x for any fixed \(\ell <k\), using the boundary condition (4.8), we obtain a cumulative (integral) expression of \(h^n_k(\ell ,\cdot )\) in terms of \(h^n_k(\ell +1,\cdot )\):

$$\begin{aligned} h^n_k(\ell ,x)= \sum _{x<x'\le y_{n-\ell }} q_{n-\ell }^{x-x'}\, h^n_k(\ell +1,x'),\qquad x\le y_{n-\ell }, \end{aligned}$$
(4.14)

with the convention that the above summation equals zero when its range is empty, i.e., when \(x=y_{n-\ell }\).

Proposition 4.3

If the parameters \(q_{\ell }\) are all equal to some value \(q>1\), then every solution to the terminal-boundary value problem (4.7)–(4.9) has the property that \(q^{-x} h^n_k(\ell , x)\) is a polynomial of degree \(k-\ell \) in the spatial variable x.

Proof

This is trivially true for \(\ell =k\), as \(h^n_k(k,x)={q^{x-y_{n-k}}}\) by the terminal condition (4.9). Using the recursion (4.14), we see that \(h^n_k(k-1,x)=q^{x-y_{n-k}}\cdot (y_{n-(k-1)}-x)\). Applying the recursion (4.14) inductively in the same way, we arrive at the claim. \(\square \)
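In a small homogeneous example, the polynomial property can be observed through finite differences: for \(n=3\), \(k=2\), \(\ell =0\), the third finite difference of \(q^{-x}h^n_k(0,x)\) vanishes while the second does not, so the degree is exactly \(k-\ell =2\). A minimal sketch, with illustrative values for q and \({\varvec{y}}\):

```python
q = 1.7        # homogeneous case, q_l = q > 1 for all l (illustrative)
y = [8, 5, 2]  # y_1 > y_2 > y_3 (illustrative)

def h(x):
    # h^3_2(0, x) from the iterated sum (4.15), with all parameters equal to q
    return sum(q ** (x - x1) * q ** (x1 - x2) * q ** (x2 - y[0])
               for x1 in range(x + 1, y[2] + 1)
               for x2 in range(x1 + 1, y[1] + 1))

g = [h(x) / q ** x for x in range(-6, -1)]  # q^{-x} h^3_2(0, x) at five consecutive points
second_diff = g[0] - 2 * g[1] + g[2]        # nonzero: the degree is exactly 2
third_diffs = [g[i] - 3 * g[i + 1] + 3 * g[i + 2] - g[i + 3] for i in range(2)]
```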

Remark 4.4

Using the same procedure as in the proof of Proposition 4.3, one can see that the solutions to the terminal-boundary value problem (4.7)–(4.9) do not have an analogous polynomial property if the parameters \(q_{\ell }\), \(1\le \ell \le N\), are not identical. As mentioned at the beginning of Sect. 4, we will need to prove Theorem 1.1 using different methods, compared to [MQR21], due to the non-polynomiality of these solutions.

4.3 Random walk hitting probabilities

We will now present some preliminary computations that, although not strictly essential for our final purposes, will motivate and illustrate the representation of \(h^n_k\) in terms of random walk hitting probabilities.

When \(x\le y_{n-\ell }\) we can iterate the recursion (4.14) to obtain, for \(\ell <k\),

$$\begin{aligned} \begin{aligned} h^n_k(\ell ,x) = \sum _{x<x_1\le y_{n-\ell }} \, \sum _{x_1<x_2\le y_{n-(\ell +1)}}&\cdots \sum _{x_{k-\ell -1}<x_{k-\ell }\le y_{n-k+1}} \\&q_{n-\ell }^{x-x_1} \,q_{n-(\ell +1)}^{x_1-x_2} \cdots q_{n-(k-1)}^{x_{k-\ell -1}-x_{k-\ell }} q_{n-k}^{x_{k-\ell }-y_{n-k}} \end{aligned} \end{aligned}$$
(4.15)

Let \(S^*\) be an n-step geometric random walk moving strictly to the right (more precisely, a sum of independent geometric random variables with inhomogeneous parameters \(q_{n}^{-1},\ldots ,q_1^{-1}\)) with transition probability

$$\begin{aligned} \mathbb {P}(S^*_\ell =y \;|\; S^*_{\ell -1}=x):=(q_{n-\ell }-1)q_{n-\ell }^{x-y}\mathbbm {1}_{y>x}, \qquad 0\le \ell \le n-1. \end{aligned}$$
(4.16)

Then, for \(x\le y_{n-\ell }\), the one-step recurrence (4.14) can be written as

$$\begin{aligned} h_{k}^{n}(\ell ,x) = \frac{1}{q_{n-\ell }-1}\cdot \mathbb {E}_{S^*_{\ell -1}=x}\left[ h_{k}^n(\ell +1,S^*_\ell )\mathbbm {1}_{\{S^*_\ell \le y_{n-\ell }\}}\right] . \end{aligned}$$

Analogously, writing (4.15) in terms of the law of the random walk yields

$$\begin{aligned} h_{k}^{n}(\ell ,x) = \left( \prod _{j=\ell }^{k}\frac{1}{q_{n-j}-1}\right) \mathbb {E}_{S^*_{\ell -1}=x}\left[ q_{n-k}^{S^*_{k-1}-y_{n-k}}\mathbbm {1}_{\{S^*_j\le y_{n-j},\;\ell \le j\le k-1\}}\right] . \end{aligned}$$
(4.17)

For \(0\le \ell \le k\le n-1\), we define the hitting time

$$\begin{aligned} \tau ^*_{\ell ,n}:= \min \{m\in \{\ell ,\cdots ,n-1\}: S^*_m> y_{n-m}\}. \end{aligned}$$
(4.18)

Proposition 4.5

With the above notation, for \(x\le y_{n-\ell }\), we have

$$\begin{aligned} h_{k}^n(\ell ,x)= \frac{\mathbb {P}_{S^*_{\ell -1}=x}(\tau ^*_{\ell ,n}=k)}{\prod _{j=\ell }^{k}({q_{n-j}}-1)}. \end{aligned}$$
(4.19)

Proof

Let us first write

$$\begin{aligned} \mathbb {P}_{S^*_{\ell -1}=x}(\tau ^*_{\ell ,n}=k)&= \mathbb {P}_{S^*_{\ell -1}=x}(\tau ^*_{\ell ,n}\ge k)-\mathbb {P}_{S^*_{\ell -1}=x}(\tau ^*_{\ell ,n}\ge k+1)\\&= \mathbb {E}_{S^*_{\ell -1}=x}\left[ \mathbbm {1}_{\{S^*_j\le y_{n-j},\; \ell \le j\le k-1\}}\right] -\mathbb {E}_{S^*_{\ell -1}=x}\left[ \mathbbm {1}_{\{S^*_j\le y_{n-j},\; \ell \le j\le k\}}\right] . \end{aligned}$$

Here, the indicator \(\mathbbm {1}_{\{S^*_j\le y_{n-j},\; \ell \le j\le k-1\}}\) is set to be 1 when \(\ell =k\). Rearranging the terms and using standard properties of the conditional expectation, we obtain

$$\begin{aligned} \mathbb {P}_{S^*_{\ell -1}=x}(\tau ^*_{\ell ,n}=k) = \mathbb {E}_{S^*_{\ell -1}=x}\left[ \left( 1-\mathbb {E}\left[ \mathbbm {1}_{\{S^*_k\le y_{n-k}\}} \;\bigg |\; {S^*_\ell ,\ldots ,S^*_{k-1}}\right] \right) \mathbbm {1}_{\{S^*_j\le y_{n-j},\;\ell \le j\le k-1\}}\right] . \end{aligned}$$

Note that

$$\begin{aligned} \mathbb {E}\left[ \mathbbm {1}_{\{S^*_k\le y_{n-k}\}} \;\bigg |\; {S^*_\ell ,\ldots ,S^*_{k-1}}\right] = (q_{n-k}-1)\sum _{x=S^*_{k-1}+1}^{y_{n-k}}q_{n-k}^{S^*_{k-1}-x}= 1-q_{n-k}^{S^*_{k-1}-y_{n-k}}, \end{aligned}$$

so that

$$\begin{aligned} \mathbb {P}_{S^*_{\ell -1}=x}(\tau ^*_{\ell ,n}=k) = \mathbb {E}_{S^*_{\ell -1}=x}\left[ q_{n-k}^{S^*_{k-1}-y_{n-k}}\mathbbm {1}_{\{S^*_j\le y_{n-j},\; \ell \le j\le k-1\}}\right] . \end{aligned}$$

Hence, (4.19) follows from (4.17). \(\square \)

4.4 Hitting probability representation for the kernel

We will now derive a more explicit representation of the Fredholm determinant kernel (3.57), which contains the implicit part \(\sum _{i=1}^{n}\Psi _i^{(m)}(x)\Phi _i^{(n)}(x')\). Using (3.52), (3.33) and (4.11), we write

$$\begin{aligned} \sum _{i=1}^n \Psi ^{(m)}_i(x) \Phi ^{(n)}_i(x') = \sum _{i=1}^{n} Q_{(m,N]}\circ R_{(r,t]}^*\circ Q_{(i,N]}^{-1}(x,y_i) \sum _{z_2\in \mathbb {Z}} h^n_{n-i}(0,z_2)\cdot (R^*_{(r,t]})^{-1}(z_2,x'). \end{aligned}$$

Notice that, by the commutativity of Toeplitz operators,

$$\begin{aligned} Q_{(m,N]}\circ R_{(r,t]}^*\circ Q_{(i,N]}^{-1}(x,y_i)&= Q_{[1,N]}\circ Q_{[1,m]}^{-1}\circ R_{(r,t]}^*\circ Q_{[1,N]}^{-1}\circ Q_{[1,i]}(x,y_i)\\&= Q_{[1,m]}^{-1}\circ R_{(r,t]}^*\circ Q_{[1,i]}(x,y_i)\\&= \sum _{z_1\in \mathbb {Z}}(Q_{[1,m]}^{-1}\circ R_{(r,t]}^*)(x,z_1)Q_{[1,i]}(z_1,y_i). \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} \sum _{i=1}^n \Psi ^{(m)}_i(x) \Phi ^{(n)}_i(x') = \sum _{z_1,z_2\in \mathbb {Z}}\big ( Q_{[1,m]}^{-1}\circ R^*_{(r,t]} \big )(x,z_1) \cdot G(z_1,z_2) \cdot (R^*_{(r,t]})^{-1}(z_2,x'), \end{aligned}$$
(4.20)

where the function G is defined as

$$\begin{aligned} G(z_1,z_2):= \sum _{i=1}^n Q_{[1,i]}(z_1,y_i) \cdot h_{n-i}^n(0,z_2). \end{aligned}$$
(4.21)

By the definition (3.2) of the Q-operators and formula (4.15), we have

$$\begin{aligned} \begin{aligned} G(z_1,z_2)=\sum _{i=1}^n \,&\sum _{z_1:=x_0>x_1>x_2>\cdots>x_{i-1}>y_i} q_1^{x_1-z_1} \,q_2^{x_2-x_1} \cdots q_{i-1}^{x_{i-1}-x_{i-2}} \, q_{i}^{y_i-x_{i-1}} \\&\cdot \sum _{\begin{array}{c} z_2=:x_n<x_{n-1}< \cdots <x_i \\ x_{n-1}\le y_n,\ldots , x_i\le y_{i+1} \end{array}} q_i^{x_{i}-y_i} \,q_{i+1}^{x_{i+1}-x_{i}}\cdots q_{n-1}^{x_{n-1}-x_{n-2}} \, q_n^{z_2-x_{n-1}} \end{aligned} \end{aligned}$$
(4.22)

for \(z_2\le y_n\). Up to a normalizing constant, formula (4.22) precisely represents the probability that the geometric random walk \(S^*\) defined in (4.16), started from \(z_2\), ends at \(z_1\) after n steps and enters the region (strictly) to the right of the curve \((y_i)_{1\le i\le n}\) in between; see Fig. 7 for an illustration. More precisely, for \(z_2\le y_n\),

$$\begin{aligned} G(z_1,z_2)= \frac{\mathbb {P}_{S^*_{-1}=z_2}(S^*_{n-1}=z_1, \tau ^*_{0,n}<n)}{\prod _{j=0}^{n-1}(q_{n-j}-1)}, \end{aligned}$$

where \(\tau ^*_{0,n}\) is defined in (4.18).

Fig. 7

Path representation of the function \(G(z_1,z_2)\) for \(z_2\le y_n\), as in (4.21)–(4.22). For \(1\le i \le n\), the solid red path depicts the path representation of \(h^n_{n-i}(0,z_2)\) (see also (4.15)), while the dashed red path depicts the path representation of \((Q_1\circ \cdots \circ Q_i)(z_1,y_i)\). Concatenating the two paths gives a path of the geometric random walk \(S^*\) going from \(z_2\) to \(z_1\), which enters the region (strictly) to the right of the (discrete) curve \((y_i)_{1\le i\le n}\) for the first time at some time \(i\) with \(1\le i \le n\)

The above expression of \(G(z_1,z_2)\) is only valid for \(z_2\le y_n\). For this special case, one only needs to use the first recurrence relation in (4.14). The situation for \(z_2>y_n\) is more complicated, since in this case one also has to use the second recurrence relation in (4.14). Consequently, the expression for \(G(z_1,z_2)\) defined in (4.21) will no longer be a sum of positive terms, but a sum containing both positive and negative terms; accordingly, \(G(z_1,z_2)\) will not be a probability (up to normalization). Nevertheless, \(G(z_1,z_2)\) still possesses a probabilistic interpretation. In order to extend the probabilistic interpretation of \(G(z_1,z_2)\) to all \(z_1,z_2\in \mathbb {Z}\), we need to introduce additional notation. For \(1\le j\le k\le n\), recall the operators \(\bar{Q}_{[j,k]}\) introduced in (4.1) and, for convenience, define a renormalized version of them as follows:

$$\begin{aligned} \hat{Q}_{[j,k]}(x,y):= \Bigg (\prod _{\ell =j}^{k}(q_\ell -1)\Bigg ) \bar{Q}_{[j,k]}(x,y). \end{aligned}$$
(4.23)

We now express \(G(z_1,z_2)\) in terms of a random walk hitting problem involving the \(\hat{Q}\)-operators. Let S be a geometric random walk moving strictly to the left with transition probabilities

$$\begin{aligned} \mathbb {P}(S_\ell =y \;|\; S_{\ell -1}=x):= (q_\ell -1)q_{\ell }^{y-x}\mathbbm {1}_{y<x} ={(q_\ell -1) Q_\ell (x,y)},\qquad \text {for }1\le \ell \le n. \end{aligned}$$
(4.24)
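The transition probabilities (4.24) indeed sum to one over \(y<x\): the step size \(S_{\ell -1}-S_\ell \) is geometric on \(\{1,2,\ldots \}\) with success probability \(1-1/q_\ell \), hence has mean \(q_\ell /(q_\ell -1)\). A minimal numerical sketch (the value \(q=2\) is an arbitrary illustration):

```python
import random

def step_size(q, rng):
    """Sample d = S_{l-1} - S_l >= 1 with P(d) = (q-1) q^{-d}, i.e. geometric
    with success probability p = 1 - 1/q."""
    d = 1
    while rng.random() >= 1.0 - 1.0 / q:
        d += 1
    return d

q = 2.0
# normalization: the probabilities over d = 1, 2, ... sum to 1 (truncated tail is tiny)
total = sum((q - 1) * q ** (-d) for d in range(1, 200))
assert abs(total - 1.0) < 1e-12

# empirical mean of the step size vs the exact value q / (q - 1)
rng = random.Random(0)
n_samples = 50_000
mean = sum(step_size(q, rng) for _ in range(n_samples)) / n_samples
assert abs(mean - q / (q - 1)) < 0.05
```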

Define the hitting time

$$\begin{aligned} \tau :=\min \{m\in \{0,\ldots ,n\}:S_m>y_{m+1}\}, \end{aligned}$$
(4.25)

where \(y_{n+1}:=-\infty \). Then we have

Proposition 4.6

For any \(z_1,z_2\in \mathbb {Z}\),

$$\begin{aligned} G(z_1,z_2)=\frac{\mathbb {E}_{S_0=z_1}\left[ {\hat{Q}}_{[\tau +1,n]}(S_\tau ,z_2)\mathbbm {1}_{\tau <n}\right] }{\prod _{\ell =1}^{n}(q_\ell -1)}. \end{aligned}$$
(4.26)

Remark 4.7

The special case of Proposition 4.6 when \(q_j=q\) for all j was proved in [MQR21]. As pointed out earlier, their proof relies crucially on the fact that \(q^{-z_2}G(z_1,z_2)\) and \(\hat{Q}_{[k,n]}(\cdot ,z_2)\) (and hence the right hand side of (4.26)) are both polynomials in \(z_2\), so one only needs to check the equality for \(z_2\) in an infinite subset of \(\mathbb {Z}\) (a convenient choice would then be \(z_2\le y_n\)). When the parameters \(\{q_j\}\) are distinct, the polynomiality no longer holds.

The proof of Proposition 4.6 is one of the main technical novelties of this article and will be presented in Sect. 4.5. Assuming Proposition 4.6 for the moment, we are ready to prove Theorem 1.1. The crucial additional information in Theorem 1.1 is a more explicit expression for the correlation kernel, which, in (3.57), was given implicitly through a biorthogonal relation. To prove the result, we will first assume that the parameters \(\{q_i\}\) satisfy the condition \(q_1<q_2<\cdots \) and use the probabilistic representation (4.26). Then, we will remove the restriction on the parameters by using analytic continuation.

Proof

Assume first that \(q_1<q_2<\cdots \). Using (4.26), (4.23), (4.5) and (1.4), we obtain

$$\begin{aligned} \sum _{{z\in \mathbb {Z}}} G(x,z)\cdot (R_{(r,t]}^*)^{-1}(z,y)&= \sum _{{z\in \mathbb {Z}}} \frac{\mathbb {E}_{S_0=x}\left[ \prod _{\ell =\tau +1}^{n}(q_\ell -1)\cdot \bar{Q}_{[\tau +1,n]}(S_\tau ,z)\mathbbm {1}_{\tau<n}\right] }{\prod _{\ell =1}^{n}(q_\ell -1)}\cdot (R_{(r,t]}^*)^{-1}(z,y)\\&= \frac{\mathbb {E}_{S_0=x}\left[ \prod _{\ell =\tau +1}^{n}(q_\ell -1) \left( \sum _{{z\in \mathbb {Z}}}\bar{Q}_{[\tau +1,n]}(S_\tau ,z)\cdot (R_{(r,t]}^*)^{-1}(z,y)\right) \mathbbm {1}_{\tau<n}\right] }{\prod _{\ell =1}^{n}(q_\ell -1)} \\&=\frac{\mathbb {E}_{S_0=x}\left[ \bar{\mathcal {S}}_{[\tau +1,n],(r,t]}(S_{\tau },y) \mathbbm {1}_{\tau <n}\right] }{\prod _{\ell =1}^{n}(q_\ell -1)} =\bar{\mathcal {S}}_{[1,n],(r,t]}^{\textrm{epi}({\varvec{y}})}(x,y). \end{aligned}$$
(4.27)

Then, by (3.57), (4.20), (4.21), (4.4) and (4.27), we have

$$\begin{aligned} K(m,x;n,x')&=-Q_{(m,n]}(x,x')\mathbbm {1}_{n>m}+\sum _{i=1}^n\Psi _i^{(m)}(x)\cdot \Phi _i^{(n)}(x')\\&=-Q_{(m,n]}(x,x')\mathbbm {1}_{n>m}\\&\quad +\!\!\!\sum _{{z_1,z_2\in \mathbb {Z}}} \!\!\! Q_{[1,m]}^{-1}\circ R_{(r,t]}^{*}(x,z_1)\cdot G(z_1,z_2)\cdot (R_{(r,t]}^*)^{-1}(z_2,x')\\&= -Q_{(m,n]}(x,x')\mathbbm {1}_{n>m}+ \mathcal {S}_{[1,m],(r,t]}\circ \bar{\mathcal {S}}_{[1,n],(r,t]}^{\textrm{epi}({\varvec{y}})}(x,x'). \end{aligned}$$

Fix now \(1\le k_1<k_2<\cdots <k_m\le N\) and \((s_1,\ldots ,s_m)\in {\mathbb {R}^m}\), as in the statement of Theorem 1.1. Take the starting time \(r:=0\) and choose the test function

$$\begin{aligned} g:\{1,\ldots ,N\}\times \mathbb {Z}\rightarrow \mathbb {R}, \qquad g(k,x):={\left\{ \begin{array}{ll} {-\chi _s(k_i,x)},\quad &{}\text {if }k=k_i\ \text {for some }1\le i\le m,\\ 0&{}\text {otherwise}, \end{array}\right. } \end{aligned}$$

recalling that \(\chi _s(k_i,x):=\mathbbm {1}_{x<s_i}\). Recall now that \(Y_k(t)=x_k^{(k)}\) is the left-most particle of the k-th row in the point process \({\textsf{X}}_N\) of Proposition 3.6 and Corollary 3.8. Therefore, by Proposition 3.10 and the above expression for the kernel, we obtain (1.5), where K is the kernel (1.6). This proves Theorem 1.1 for parameters satisfying the condition \(q_1<q_2<\cdots \).

Now we extend the result to general parameters \(q_1,q_2,\ldots \) satisfying the hypotheses of the theorem. On the one hand, the left-hand side of (1.5) can be written as a sum, over suitable configurations \({\varvec{x}}\), of transition probabilities \(\mathcal {Q}_{0,t}({\varvec{y}},{\varvec{x}})\), where \(\mathcal {Q}_{0,t}\) given by (3.30) is clearly analytic in \(q_i\) for each i. This sum is finite, since \(\mathcal {Q}_{0,t}({\varvec{y}},{\varvec{x}})= 0\) if \(t<x_i-y_i\) or \(x_i-y_i<0\) for some i. Hence, the left-hand side of (1.5) is analytic in \(q_i\) for each i.

On the other hand, the kernels \(\mathcal {S}_{[j,k],(r,t]}(x,y)\) and \(\bar{\mathcal {S}}_{[j,k],(r,t]}(x,y)\) defined through the contour integral representations (1.1) and (1.2) are clearly analytic in \(q_i\) for each i. Moreover, for the geometric random walk S defined in (4.24), the hitting time \(\tau \) defined in (4.25) and \(S_{\tau }\) have joint distribution given by a finite sum:

$$\begin{aligned} \mathbb {P}_{S_0=z_1}(\tau =k, S_{\tau }= z) = \mathbbm {1}_{z>y_{k+1}}\sum _{\begin{array}{c} z_1=x_0>x_1>\cdots >x_k=z\\ x_i\le y_{i+1}, \; 0\le i\le k-1 \end{array}} \, \prod _{i=1}^{k}\left( (q_i-1)\cdot q_i^{x_i-x_{i-1}}\right) , \end{aligned}$$
(4.28)

which is also analytic in \(q_i\) for all i. Thus the kernel \(\bar{\mathcal {S}}^{\textrm{epi}({\varvec{y}})}_{[1,n],(0,t]}(x,y)\), given by

$$\begin{aligned} \bar{\mathcal {S}}_{[1,n],(0,t]}^{\textrm{epi}({\varvec{y}})}(x,y)&:= \frac{ \mathbb {E}_{S_0=x}[\bar{\mathcal {S}}_{[\tau +1,n],(0,t]}(S_\tau ,y)\mathbbm {1}_{\tau <n}]}{\prod _{\ell =1}^{n}(q_\ell -1)}\\&=\frac{{\sum _{k=0}^{n-1}\sum _{z}\mathbb {E}_{S_0=x}[\bar{\mathcal {S}}_{[k+1,n],(0,t]}(z,y)\mathbbm {1}_{\tau =k,\, S_k=z}]}}{\prod _{\ell =1}^{n}(q_\ell -1)}\\&= \frac{\sum _{k=0}^{n-1}\sum _{z}\bar{\mathcal {S}}_{[k+1,n],(0,t]}(z,y)\cdot \mathbb {P}_{S_0=x}(\tau =k,S_{\tau }=z)}{\prod _{\ell =1}^{n}(q_\ell -1)}, \end{aligned}$$

is analytic in \(q_i\) for all i (note that the sum over z is a finite sum, as \(\mathbb {P}_{S_0=x}(\tau =k,S_{\tau }=z)=0\) if \(z\le y_{k+1}\) or \(z>x-k\)). Note that the kernel \(\bar{\mathcal {S}}_{[1,n],(0,t]}^{\textrm{epi}({\varvec{y}})}(x,y)\) is analytic in each \(q_i\) also at \(q_i=1\), since the normalizing factor \(\prod _{\ell =1}^{n}(q_\ell -1)\) in the denominator cancels out with the same factor appearing in \(\bar{\mathcal {S}}_{[k+1,n],(0,t]}(z,y)\cdot \mathbb {P}_{S_0=x}(\tau =k,S_{\tau }=z)\) for any k and z (see (1.2) and (4.28)). Therefore, we conclude that the kernel \(K(m,x;n,x')\) defined in (1.6) is analytic in \(q_i\) for each i, and so is the Fredholm determinant \(\det (I-\chi _sK\chi _s)\) associated to it. To be more precise, we need absolute convergence of the series expansion for the Fredholm determinant, which is guaranteed by the fact that \(\chi _sK\chi _s\) is a trace class operator. The trace class property can be proved similarly to [MQR21, Appendices A and B] (there, uniform bounds on the trace norms are obtained for a family of kernels with respect to certain scaling parameters). The only modification needed amounts to replacing the weight function \(e^{t(z-1/2)}(z-1)^{n}z^{-n}\) that appears in [MQR21, (2.28) and (2.29)] with \(\prod _{\ell =j}^{k}(q_\ell z^{-1}-1)\cdot \prod _{\ell =r+1}^{t}(1+p_\ell z)\) for our discrete-time inhomogeneous setup. These weight functions, coming from the contour integral representations (1.1) and (1.2) of \(\mathcal {S}_{[j,k],(r,t]}(x,y)\) and \(\bar{\mathcal {S}}_{[j,k],(r,t]}(x,y)\), are independent of the arguments x and y and remain uniformly bounded on the contours, so one can bound the trace norm almost exactly as in [MQR21]; we omit the details.

Now, for fixed parameters \(\{p_i\}\), both sides of (1.5) admit analytic continuation to all \(q_j\) satisfying \(0<q_j<\min \{p_i^{-1}\}\) for all j and they agree for \(q_1<q_2<\cdots \), hence they must agree for all \(0<q_j<\min \{p_i^{-1}\}\), not necessarily ordered. \(\square \)
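Formula (4.28) is simple enough to check by direct computation. The following sketch (with arbitrary illustrative parameters \(q_\ell \) and curve \((y_i)\)) enumerates the paths in (4.28) and compares the result with a Monte Carlo simulation of the walk S defined in (4.24)–(4.25):

```python
import random

qs = [2.0, 3.0, 2.0]     # illustrative parameters q_1, q_2, q_3 (all > 1)
ys = [1, -2, -5]         # illustrative curve y_1, y_2, y_3
n = len(qs)

def exact_prob(z1, k, z):
    """P_{S_0=z1}(tau = k, S_tau = z) via the path enumeration in (4.28)."""
    y_next = ys[k] if k < n else float("-inf")   # y_{k+1}, with y_{n+1} = -inf
    if not z > y_next:
        return 0.0
    if k == 0:
        return 1.0 if z == z1 else 0.0           # empty product, tau = 0
    if z1 > ys[0]:                               # constraint x_0 <= y_1 is violated
        return 0.0
    def rec(i, x):                               # sum over x_{i+1}, given x_i = x
        if i == k - 1:
            return (qs[k - 1] - 1) * qs[k - 1] ** (z - x) if z < x else 0.0
        total = 0.0
        # x_{i+1} < x_i, x_{i+1} <= y_{i+2}, and room for k-i-1 more strict decreases
        for xn in range(z + k - i - 1, min(x - 1, ys[i + 1]) + 1):
            total += (qs[i] - 1) * qs[i] ** (xn - x) * rec(i + 1, xn)
        return total
    return rec(0, z1)

def sample(z1, rng):
    """Run the walk (4.24) and return (tau, S_tau), with tau as in (4.25)."""
    s = z1
    for m in range(n + 1):
        if s > (ys[m] if m < n else float("-inf")):
            return m, s
        q, d = qs[m], 1
        while rng.random() >= 1.0 - 1.0 / q:     # geometric step, P(d) = (q-1) q^{-d}
            d += 1
        s -= d

rng = random.Random(0)
trials = 20_000
hits = sum(1 for _ in range(trials) if sample(1, rng) == (1, 0)) / trials
assert abs(exact_prob(1, 1, 0) - 0.5) < 1e-12    # exact value for these parameters
assert abs(hits - 0.5) < 0.02                    # Monte Carlo estimate agrees
```

The enumeration is finite exactly as claimed in the text: the path points are confined to the interval \((z,z_1]\).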

Remark 4.8

We have already commented in the introduction about the assumption \(p_i q_j<1\), which is innocent due to a particle-hole duality. The second assumption of the theorem, i.e. \(q_j>1\) for all j, is also innocent, as it can be removed by replacing \(p_i\) with \(\tilde{p}_i:=q p_i\), and \(q_i\) with \(\tilde{q}_i:=q_i/q\), for some choice of \(q>0\). Tuning q, one can recover any N-tuple \((\tilde{q}_1,\ldots ,\tilde{q}_N)\) of positive parameters. On the other hand, this will not change the jumping rates, since \(\tilde{p}_i \tilde{q}_j = p_iq_j\). Therefore, the Fredholm determinant on the right-hand side of (1.5) does not depend on the choice of q. This can also be seen from the fact that, for any two choices of the renormalizing constants q and \(q'\), the corresponding kernels \(K_{q}\) and \(K_{q'}\) differ by a conjugation, which does not affect the Fredholm determinant:

$$\begin{aligned} K_{q'}(m,x;n,x')= \left( \frac{q'}{q}\right) ^{x-x'}K_{q}(m,x;n,x'). \end{aligned}$$
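The conjugation invariance can be seen on any finite matrix: writing \(K_{q'}=DK_qD^{-1}\) with \(D=\textrm{diag}((q'/q)^x)\), one has \(\det (I-DK_qD^{-1})=\det (I-K_q)\). A minimal numerical sketch, with arbitrary illustrative kernel values and ratio \(r=q'/q\):

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def id_minus(K):
    return [[(1.0 if i == j else 0.0) - K[i][j] for j in range(len(K))]
            for i in range(len(K))]

xs = [0, 1, 2]                       # a tiny index set standing in for Z
K = [[0.3, -0.1, 0.2],               # arbitrary kernel values K_q(x, x')
     [0.05, 0.4, -0.2],
     [0.1, 0.0, 0.25]]
r = 1.7                              # the ratio q'/q
# conjugated kernel K_{q'}(x, x') = r^(x - x') K_q(x, x')
Kc = [[r ** (x - xp) * K[i][j] for j, xp in enumerate(xs)] for i, x in enumerate(xs)]

assert abs(det(id_minus(K)) - det(id_minus(Kc))) < 1e-9   # determinants agree
```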

Example 4.9

(Step initial configuration) The simplest case in which the random walk hitting kernel \(\bar{\mathcal {S}}_{[1,n],(0,t]}^{\textrm{epi}({\varvec{y}})}\) can be explicitly written out is the step initial configuration, i.e. \(y_i=-i\) for \(i=1,\ldots ,N\). In this case, if the random walk S starts at \(S_0=x\le y_1\), then it will never hit the strict epigraph of the curve \((i,y_{i+1})_{0\le i\le N-1}\). This happens because the random walk S moves strictly to the left, hence

$$\begin{aligned} S_k\le S_0-k=x-k\le y_1-k=y_{k+1} \end{aligned}$$

for all \(k\ge 0\). Therefore, in this case, we have \(\mathbbm {1}_{\tau <n}=\mathbbm {1}_{\tau =0}\) and

$$\begin{aligned} \bar{\mathcal {S}}_{[1,n],(0,t]}^{\textrm{epi}({\varvec{y}})}(x,y)= \frac{\mathbbm {1}_{x>y_1} \bar{\mathcal {S}}_{[1,n],(0,t]}(x,y)}{\prod _{\ell =1}^{n}(q_\ell -1)}. \end{aligned}$$
(4.29)
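The deterministic bound above is easy to confirm by simulation; the sketch below (with an arbitrary illustrative homogeneous parameter \(q_\ell \equiv q\)) runs the strictly-left walk from \(S_0=y_1=-1\) and checks that \(S_k\le y_{k+1}=-(k+1)\) for every k, so the walk never enters the strict epigraph of the curve:

```python
import random

rng = random.Random(0)
q = 2.0          # a single illustrative parameter q_l = q for all steps
n = 10

def run_walk(x):
    """Run n steps of the strictly-left geometric walk started at S_0 = x."""
    s = x
    path = [s]
    for _ in range(n):
        d = 1
        while rng.random() >= 1.0 - 1.0 / q:   # step size d, P(d) = (q-1) q^{-d}
            d += 1
        s -= d
        path.append(s)
    return path

for _ in range(1000):
    path = run_walk(-1)                        # start at y_1 = -1 (step IC: y_i = -i)
    # S_k <= S_0 - k = -1 - k = y_{k+1} for all k, since every step is at least 1
    assert all(path[k] <= -(k + 1) for k in range(n + 1))
```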

The correlation kernel for the step initial configuration thus takes the form

$$\begin{aligned} K(m,x;n,x')=- Q_{(m,n]}(x,x')\mathbbm {1}_{n>m} + \sum _{z\ge 0}\mathcal {S}_{[1,m],(0,t]}(x,z)\cdot \frac{\bar{\mathcal {S}}_{[1,n],(0,t]}(z,x')}{\prod _{\ell =1}^{n}(q_\ell -1)}. \end{aligned}$$
(4.30)

Using the contour integral representations (1.1)–(1.2), it is easy to check that

$$\begin{aligned} \begin{aligned} K(m,x;n,x')&=- Q_{(m,n]}(x,x')\mathbbm {1}_{n>m}\\&\quad + \oint _{\Gamma _0}\frac{\textrm{d}z}{2\pi \textrm{i}}\oint _{\Gamma _{{\varvec{q}}}}\frac{\textrm{d}w}{2\pi \textrm{i}} \frac{1}{z-w} \frac{z^{-x-1}\prod _{\ell =1}^m(q_\ell z^{-1}-1)}{w^{-x'}\prod _{\ell =1}^n(q_\ell w^{-1}-1)} \prod _{\ell =1}^{t}\frac{1+p_\ell z}{1+p_\ell w}, \end{aligned} \end{aligned}$$

where \(\Gamma _0\) and \(\Gamma _{{\varvec{q}}}\) are contours as in Theorem 1.1, with the additional property that \(|z|<|w|\) for all \(z\in \Gamma _0\) and \(w\in \Gamma _{{\varvec{q}}}\).

4.5 Proof of Proposition 4.6

In this subsection we prove Proposition 4.6 by induction. For induction purposes, we define the following more general kernels:

$$\begin{aligned} G_{j,k}^{(n)}(z_1,z_2):= \sum _{i=j+1}^{n-k} Q_{[j+1,i]}(z_1,y_i)\cdot h_{n-i}^n(k,z_2), \end{aligned}$$
(4.31)

where \(0\le k\le n-1\) and \(0\le j\le n-k-1\). Then, by (4.21), we have

$$\begin{aligned} G(z_1,z_2) = G^{(n)}_{0,0}(z_1,z_2). \end{aligned}$$

We will prove the following generalization of Proposition 4.6:

$$\begin{aligned} G_{j,k}^{(n)}(z_1,z_2) = \frac{\mathbb {E}_{S_{j}=z_1}\left[ \hat{Q}_{[\tau ^{j}+1,n-k]}(S_{\tau ^{j}},z_2)\mathbbm {1}_{\tau ^{j}<n-k}\right] }{\prod _{\ell =j+1}^{n-k}(q_\ell -1)}, \end{aligned}$$
(4.32)

for all \(z_1,z_2\in \mathbb {Z}\), \(0\le k\le n-1\) and \(0\le j\le n-k-1\), where S is the geometric random walk defined in (4.24) and

$$\begin{aligned} \tau ^{j}:= \min \{m\in \{j,\ldots ,n\}: S_m>y_{m+1}\}. \end{aligned}$$
(4.33)

Notice that \(\tau ^0=\tau \), with \(\tau \) defined in (4.25). To prove (4.32), we use a backward induction on k and j.

For any \(0\le k\le n-1\) and \(j=n-k-1\), we have

$$\begin{aligned} \begin{aligned} G_{j,k}^{(n)}(z_1,z_2)&= Q_{n-k}(z_1,y_{n-k})\cdot h_{k}^{n}(k,z_2) \\&= \mathbbm {1}_{z_1>y_{n-k}}\cdot q_{n-k}^{z_2-z_1} \\&= \mathbb {P}_{S_{n-k-1}=z_1}(\tau ^{n-k-1}=n-k-1)\cdot \bar{Q}_{[n-k,n-k]}(z_1,z_2) \\&= \frac{\mathbb {E}_{S_{n-k-1}=z_1}\left[ \hat{Q}_{[\tau ^{n-k-1}+1,n-k]} (S_{\tau ^{n-k-1}},z_2)\mathbbm {1}_{\tau ^{n-k-1}<n-k}\right] }{q_{n-k}-1}, \end{aligned} \end{aligned}$$

where the first equality follows from (4.31), the second equality from (3.2) and (4.9), the third from (4.33) and (4.2), and the fourth from (4.23). This proves (4.32) for \(0\le k\le n-1\) and \(j=n-k-1\). In particular, (4.32) is proven for \(k=n-1\) and \(0\le j\le n-k-1\).

Assume now by induction that, for some \(0\le \ell \le n-2\), (4.32) holds for \(k= \ell +1\) and all \(0\le j\le n-\ell -2\). We will show that (4.32) holds for \(k=\ell \) and for all \(0\le j\le n-\ell -1\), proceeding with a backward induction on j. We have already proven above the base case \(k=\ell \) and \(j=n-\ell -1\). Assume now that, for some \(0\le m\le n-\ell -2\), (4.32) holds for \(k=\ell \) and \(j= m+1\). We need to show (4.32) for \(k=\ell \) and \(j=m\); to do so, we will consider several cases separately.

Case 1: \(z_1\le y_{m+1}\). In this case we have \(Q_{m+1}(z_1,y_{m+1})=0\), hence by (4.31) we have

$$\begin{aligned} \begin{aligned} G_{m,\ell }^{(n)}(z_1,z_2)&= \sum _{i=m+2}^{n-\ell } Q_{[m+1,i]}(z_1,y_i)\cdot h_{n-i}^n(\ell ,z_2)\\&= \sum _{y<z_1} Q_{m+1}(z_1,y)\left( \sum _{i=m+2}^{n-\ell } Q_{[m+2,i]}(y,y_i)\cdot h_{n-i}^n(\ell ,z_2)\right) \\&= \sum _{y<z_1} \frac{\mathbb {P}_{S_{m}=z_1}[S_{m+1}=y]}{q_{m+1}-1} \cdot \frac{\mathbb {E}_{S_{m+1}=y}\left[ \hat{Q}_{[\tau ^{m+1}+1,n-\ell ]}(S_{\tau ^{m+1}},z_2)\mathbbm {1}_{\tau ^{m+1}<n-\ell }\right] }{\prod _{j=m+2}^{n-\ell }(q_j-1)}, \end{aligned} \end{aligned}$$

where the latter equality follows from (4.24) and the induction hypothesis (with \(k=\ell \) and \(j=m+1\)). Factoring out the normalization constants and using the Markov property of the random walk S, we obtain

$$\begin{aligned} G_{m,\ell }^{(n)}(z_1,z_2) = \frac{\mathbb {E}_{S_{m}=z_1}\left[ \hat{Q}_{[\tau ^{m+1}+1,n-\ell ]}(S_{\tau ^{m+1}},z_2)\mathbbm {1}_{\tau ^{m+1}<n-\ell }\right] }{\prod _{j=m+1}^{n-\ell }(q_j-1)}. \end{aligned}$$

Note now that, for \(z_1\le y_{m+1}\),

$$\begin{aligned} \tau ^{m+1}\mathbbm {1}_{S_m=z_1} = \tau ^{m}\mathbbm {1}_{S_m=z_1}, \end{aligned}$$

since the only situation when \(\tau ^{m}\ne \tau ^{m+1}\) is \(\tau ^{m}=m\), which cannot happen if \(S_m=z_1\le y_{m+1}\). Thus, for \(z_1\le y_{m+1}\), we have

$$\begin{aligned} G_{m,\ell }^{(n)}(z_1,z_2) = \frac{\mathbb {E}_{S_{m}=z_1}\left[ \hat{Q}_{[\tau ^{m}+1,n-\ell ]}(S_{\tau ^{m}},z_2)\mathbbm {1}_{\tau ^{m}<n-\ell }\right] }{\prod _{j=m+1}^{n-\ell }(q_j-1)}, \end{aligned}$$

which proves (4.32) for \(k=\ell \) and \(j=m\) in Case 1.

Case 2: \(z_1>y_{m+1}\). In this case, we have:

$$\begin{aligned} {\text {If}\quad S_m=z_1, \qquad \text {then}\quad \tau ^m=m,} \end{aligned}$$
(4.34)

since \(z_1>y_{m+1}\). Therefore, recalling that \(m\le n-\ell -2\), the right hand side of (4.32) for \(k=\ell \) and \(j=m\) reduces to \(\bar{Q}_{[m+1,n-\ell ]}(z_1,z_2)\) and it suffices to prove that

$$\begin{aligned} G_{m,\ell }^{(n)}(z_1,z_2)= {\bar{Q}}_{[m+1,n-\ell ]}(z_1,z_2). \end{aligned}$$
(4.35)

We now check (4.35) in three distinct subcases, using the recurrence relation (4.14).

Case 2.1: \(z_2\le y_{n-\ell }\). Then, by the first recurrence relation in (4.14), we have

$$\begin{aligned} h_{n-i}^n(\ell ,z_2) = \sum _{y=z_2+1}^{y_{n-\ell }} q_{n-\ell }^{z_2-y}\cdot h_{n-i}^n(\ell +1,y), \end{aligned}$$

for any \(m+1\le i\le n-\ell -1\). Using the latter equality and (4.9), we obtain

$$\begin{aligned} \begin{aligned} G_{m,\ell }^{(n)}(z_1,z_2)&= \sum _{i=m+1}^{n-\ell } Q_{[m+1,i]}(z_1,y_i) \cdot h_{n-i}^n(\ell ,z_2) \\&= \sum _{i=m+1}^{n-\ell -1} Q_{[m+1,i]}(z_1,y_i) \left( \sum _{y=z_2+1}^{y_{n-\ell }}q_{n-\ell }^{z_2-y} \cdot h_{n-i}^n(\ell +1,y)\right) \\&\quad + Q_{[m+1,n-\ell ]}(z_1,y_{n-\ell })\cdot h_{\ell }^n(\ell ,z_2) \\&=\sum _{y=z_2+1}^{y_{n-\ell }}q_{n-\ell }^{z_2-y}\cdot \left( \sum _{i=m+1}^{n-\ell -1} Q_{[m+1,i]}(z_1,y_i) \cdot h_{n-i}^n(\ell +1,y)\right) \\&\quad + Q_{[m+1,n-\ell ]}(z_1,y_{n-\ell })\cdot q_{n-\ell }^{z_2-y_{n-\ell }}. \end{aligned} \end{aligned}$$
(4.36)

We recognize the sum inside the big parentheses in the last equality of (4.36) to be \(G_{m,\ell +1}^{(n)}(z_1,y)\). Applying the induction hypothesis with \(k=\ell +1\) and \(j=m\le n-(\ell +1)-1\), we may rewrite it as

$$\begin{aligned} \sum _{i=m+1}^{n-\ell -1} Q_{[m+1,i]}(z_1,y_i)\cdot h_{n-i}^n(\ell +1,y)&= \frac{\mathbb {E}_{S_m=z_1}[\hat{Q}_{[\tau ^{m}+1,n-\ell -1]}(S_{\tau ^m},y)\mathbbm {1}_{\tau ^{m}<n-\ell -1}]}{\prod _{j=m+1}^{n-\ell -1}(q_j-1)}\\&= \bar{Q}_{[m+1,n-\ell -1]}(z_1,y), \end{aligned}$$

where the latter equality follows from (4.34) and the fact that \(m\le n-\ell -2\). Notice now that, for all \(z_2< y\le y_{n-\ell }\), by the assumptions corresponding to Case 2 and Case 2.1, we have \(z_2< y\le y_{n-\ell }\le y_{m+1}<z_1\). Therefore, by (4.1), we have \(\bar{Q}_{[m+1,n-\ell -1]}(z_1,y)= Q_{[m+1,n-\ell -1]}(z_1,y)\) and \(\bar{Q}_{[m+1,n-\ell ]}(z_1,z_2)= Q_{[m+1,n-\ell ]}(z_1,z_2)\). It then follows from (4.36) that

$$\begin{aligned} \begin{aligned} G_{m,\ell }^{(n)}(z_1,z_2)&=\sum _{y=z_2+1}^{y_{n-\ell }}q_{n-\ell }^{z_2-y}\cdot Q_{[m+1,n-\ell -1]}(z_1,y) \\&\quad + \sum _{y_{n-\ell }<y<z_1}q_{n-\ell }^{z_2-y}\cdot Q_{[m+1,n-\ell -1]}(z_1,y) \\&= \sum _{z_2< y<z_1} q_{n-\ell }^{z_2-y}\cdot Q_{[m+1,n-\ell -1]}(z_1,y)\\&= Q_{[m+1,n-\ell ]}(z_1,z_2) = \bar{Q}_{[m+1,n-\ell ]}(z_1,z_2), \end{aligned} \end{aligned}$$

which proves (4.35) in Case 2.1.

Case 2.2: \(y_{n-\ell }<z_2<z_1\). The computation is similar to the one in Case 2.1, except that we need to use the second recurrence relation for \(h_{n-i}^n(\ell ,z_2)\) in (4.14), i.e.

$$\begin{aligned} h_{n-i}^{n}(\ell ,z_2) = -\sum _{y=y_{n-\ell }+1}^{z_2}q_{n-\ell }^{z_2-y}\cdot h_{n-i}^n(\ell +1,y), \end{aligned}$$

since \(z_2>y_{n-\ell }\). Following a similar computation as in Case 2.1, we arrive at

$$\begin{aligned} \begin{aligned} G_{m,\ell }^{(n)}(z_1,z_2) =-\!\!\!\!\sum _{y=y_{n-\ell }+1}^{z_2} \!\!\!\! q_{n-\ell }^{z_2-y}\cdot \bar{Q}_{[m+1,n-\ell -1]}(z_1,y) + Q_{[m+1,n-\ell ]}(z_1,y_{n-\ell })\cdot q_{n-\ell }^{z_2-y_{n-\ell }}. \end{aligned} \end{aligned}$$
(4.37)

Notice now that, for all \(y_{n-\ell }< y\le z_2 \), by the assumptions corresponding to Case 2 and Case 2.2, we have \(y_{n-\ell }< y\le z_2<z_1\). Therefore, by (4.1), we have \(\bar{Q}_{[m+1,n-\ell -1]}(z_1,y)= Q_{[m+1,n-\ell -1]}(z_1,y)\) and \(\bar{Q}_{[m+1,n-\ell ]}(z_1,z_2)= Q_{[m+1,n-\ell ]}(z_1,z_2)\), as in Case 2.1. We then deduce that

$$\begin{aligned} \begin{aligned} G_{m,\ell }^{(n)}(z_1,z_2)&=-\sum _{y=y_{n-\ell }+1}^{z_2}q_{n-\ell }^{z_2-y}\cdot Q_{[m+1,n-\ell -1]}(z_1,y) \\&\quad + \sum _{y_{n-\ell }<y<z_1}q_{n-\ell }^{z_2-y}\cdot Q_{[m+1,n-\ell -1]}(z_1,y) \\&= \sum _{z_2<y<z_1}q_{n-\ell }^{z_2-y}\cdot Q_{[m+1,n-\ell -1]}(z_1,y) = Q_{[m+1,n-\ell ]}(z_1,z_2)\\&= \bar{Q}_{[m+1,n-\ell ]}(z_1,z_2), \end{aligned} \end{aligned}$$

which proves (4.35) in Case 2.2.

Case 2.3: \(z_2\ge z_1\). Given the assumptions corresponding to Case 2 and Case 2.3, we now have \(z_2\ge z_1 > y_{m+1}\ge y_{n-\ell }\). Therefore, similarly to Case 2.2, we apply the second recurrence relation for \(h_{n-i}^{n}(\ell ,z_2)\) in (4.14) and arrive at (4.37). However, this time, \(\bar{Q}_{[m+1,n-\ell -1]}(z_1,y)\) takes different forms for \(z_1>y>y_{n-\ell }\) and \(z_2\ge y\ge z_1\). We then split the sum over y in (4.37) accordingly and compute

$$\begin{aligned} \begin{aligned} \sum _{y=y_{n-\ell }+1}^{z_2}q_{n-\ell }^{z_2-y}\cdot \bar{Q}_{[m+1,n-\ell -1]}(z_1,y)&= \sum _{z_1>y>y_{n-\ell }}q_{n-\ell }^{z_2-y}\cdot Q_{[m+1,n-\ell -1]}(z_1,y)\\&\quad + (-1)^{n-\ell -m-2}\sum _{z_2\ge y\ge z_1} q_{n-\ell }^{z_2-y}\cdot Q^{\dagger }_{[m+1,n-\ell -1]}(z_1,y) \\&= Q_{[m+1,n-\ell ]}(z_1,y_{n-\ell }) \cdot q_{n-\ell }^{z_2-y_{n-\ell }} - (-1)^{n-\ell -m-1}Q^{\dagger }_{[m+1,n-\ell ]}(z_1,z_2), \end{aligned} \end{aligned}$$

where the latter equality is due to (3.2) and (3.1). Combining this with (4.37), after a cancellation, we obtain

$$\begin{aligned} G^{(n)}_{m,\ell }(z_1,z_2) = (-1)^{n-\ell -(m+1)} \cdot Q^{\dagger }_{[m+1,n-\ell ]}(z_1,z_2) = \bar{Q}_{[m+1,n-\ell ]}(z_1,z_2), \end{aligned}$$

where the latter equality follows from (4.1) and the assumption \(z_2\ge z_1\). This proves (4.35) in Case 2.3 and, thus, completes the proof of Proposition 4.6. \(\square \)