1 Introduction

1.1 TASEP

The Totally Asymmetric Simple Exclusion Process (TASEP) is a prototypical stochastic model of transport in one dimension. Introduced around 50 years ago in parallel in biology [59, 60] and probability theory [80], it has been extensively studied by a variety of methods.

TASEP is a continuous-time Markov process on the space of particle configurations in \({\mathbb {Z}}\) in which at most one particle per site is allowed. Each particle has an independent exponential clock of rate 1 (that is, the random time T after which the clock rings is distributed as \(\mathrm {Prob}(T>s)=e^{-\uplambda s}\), \(s>0\), where \(\uplambda =1\) is the rate). When the clock rings, the particle jumps to the right by one if the destination is free of a particle. Otherwise, the jump is blocked and nothing happens. See Fig. 1 for an illustration.
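For concreteness, here is a minimal continuous-time (Gillespie-type) simulation of this dynamics, started from the step initial condition introduced below. This is only an illustrative sketch of ours: the function name and the truncation to finitely many tracked particles are not part of the model.

```python
import random

def tasep_evolve(x, t, rng=random):
    """Run rate-1 TASEP for time t on x = [x_1 > x_2 > ...].

    Only the len(x) rightmost particles are tracked; this truncation is
    harmless as long as t is much smaller than the number of particles.
    """
    s = 0.0
    while True:
        # particles free to jump: x_1 always, x_i iff the site x_i + 1 is empty
        free = [0] + [i for i in range(1, len(x)) if x[i - 1] - x[i] > 1]
        s += rng.expovariate(len(free))   # total rate = number of free particles
        if s > t:
            return x
        x[rng.choice(free)] += 1          # a uniformly chosen free particle jumps

# step initial condition x_i(0) = -i; by the limit shape (1.1) below, the
# number of particles at nonnegative sites at time t is approximately t/4
t = 100.0
x = tasep_evolve(list(range(-1, -401, -1)), t)
print(sum(1 for p in x if p >= 0), t / 4)
```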

Fig. 1

A forbidden jump (on the left) and a jump (on the right) in TASEP

In this work we focus on the process with the most well-studied initial condition—the step initial condition. Under it, the particles initially occupy \({\mathbb {Z}}_{<0}\), while \({\mathbb {Z}}_{\ge 0}\) is free of particles. Denote by h(tx) the TASEP interface (where \(t\in {\mathbb {R}}_{\ge 0}\), \(x\in {\mathbb {Z}}\)), which is obtained by placing a slope \(+1\) or a slope \(-1\) segment over a hole or particle, respectively, with the agreement that the step initial configuration corresponds to \(h(0,x)=|x|\). See Fig. 3 for an illustration. We also denote the TASEP distribution at time t (with step initial condition) by \(\upmu _t\).

It was shown in [76] (see also, e.g., [50] and [75, Chapter 4] for an alternative approach based on symmetric functions) that the interface grows linearly in time and, under the hydrodynamic scaling (i.e., linear scaling of space and time), tends to a limit shape, which is a parabola:

$$\begin{aligned} \frac{1}{L}\, h(\tau L,\varkappa L) \rightarrow \frac{\varkappa ^2+\tau ^2}{2\tau }, \qquad L\rightarrow +\infty , \end{aligned}$$
(1.1)

where \(\varkappa \) and \(\tau \) are scaled space and time, and \(|\varkappa |\le \tau \).

In the past 20 years, starting with [50], much finer results on the asymptotic behavior of TASEP have become available through the tools of Integrable Probability (cf. [18, 25]). This asymptotic analysis revealed that TASEP belongs to the (one-dimensional) Kardar–Parisi–Zhang (KPZ) universality class [33, 72]. In particular, the TASEP interface at time L, on the horizontal scale \(L^{2/3}\) and the vertical scale \(L^{1/3}\), converges to the Airy\(_2\) process, which is the top line of the Airy\(_2\) line ensemble (about the latter see, e.g., [35]). Furthermore, computations with TASEP make it possible to formulate general predictions for all one-dimensional systems in the KPZ class (e.g., see [39, 81]). Understanding of the multitime asymptotics of the TASEP interface is advancing rapidly at present (see Remark 7.4 for references to recent results).

1.2 The backwards dynamics

The goal of our work is to present a surprising new property of the family of TASEP distributions \(\left\{ \upmu _t \right\} _{t\ge 0}\). We show that the distributions \(\upmu _t\) are coupled in the reverse time direction by a time-homogeneous Markov process with local interactions (the interaction strength depends on the location in the system). Let us now describe this backwards dynamics.

Denote by \({\mathcal {C}}\) the (countable) space of configurations on \({\mathbb {Z}}\) which differ from the step configuration by finitely many TASEP jumps. Consider the continuous-time Markov chain on \({\mathcal {C}}\) which evolves as follows. At each hole there is an independent exponential clock whose rate is equal to the number of particles to the right of this hole. When the clock at a hole rings, the leftmost of the particles that are to the right of the hole instantaneously jumps into this hole (in particular, the particles almost surely jump to the left). See Fig. 2 for an illustration or (7.2) for a description of the generator. Note that, for configurations in \({\mathcal {C}}\), almost surely at most one particle can move at any time moment because there are only finitely many holes with nonzero rate.

Fig. 2

An illustration of the backwards process. Jump rates attached to holes and a possible jump are indicated

The jumping mechanism described above has the following features:

  • gaps attract neighboring particles from the right;

  • the rate of attraction is proportional to the size of the gap;

  • the jumping particle lands inside the gap uniformly at random.

The same features of the jumping mechanism appear in the well-known continuous-space Hammersley process [2, 49] and the discrete-space Hammersley process [42, 44]. For this reason we call our Markov process (which evolves in the discrete space) the backwards Hammersley-type process, or BHP, for short. Note that compared to the well-known continuous-space Hammersley process, our BHP is space-inhomogeneous: the jump rate at a hole also depends on the number of particles to the right of it. The evolutions of the interface under TASEP and the BHP are shown in Fig. 3.
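The BHP is equally easy to simulate. In the following hedged sketch (the function name and encoding are ours), a configuration is the decreasing list of particle positions, kept left-packed at its end; every hole strictly between \(x_{i+1}\) and \(x_i\) has exactly i particles to its right and hence rate i.

```python
import random

def bhp_evolve(x, tau, rng=random):
    """Run the backwards Hammersley-type process for time tau on the
    decreasing position list x; x must end inside the packed region,
    i.e. the last tracked particle has never moved."""
    s = 0.0
    while True:
        # the i-th gap holds x[i] - x[i+1] - 1 holes, each of rate i + 1
        w = [(i + 1) * (x[i] - x[i + 1] - 1) for i in range(len(x) - 1)]
        total = sum(w)
        if total == 0:                    # step configuration: nothing can move
            return x
        s += rng.expovariate(total)
        if s > tau:
            return x
        i = rng.choices(range(len(w)), weights=w)[0]
        x[i] = rng.randrange(x[i + 1] + 1, x[i])   # uniform hole in the chosen gap
```

Since all holes of a given gap carry the same rate, the jumping particle indeed lands uniformly at random inside the gap, matching the features listed above.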

Let \(\{\mathbf{L }_\tau \}_{\tau \in {\mathbb {R}}_{\ge 0}}\) be the Markov semigroup of the BHP defined in Sect. 1.2. That is, \(\mathbf{L} _\tau (\mathbf{x} ,\mathbf{y} )\), \(\mathbf{x} , \mathbf{y} \in {\mathcal {C}}\), is the probability that the particle configuration is \(\mathbf{y} \) at time \(\tau \) given that it started at \(\mathbf{x} \) at time 0 (here we use the fact that BHP is time-homogeneous).

Remark 1.1

The backwards process is well-defined. Indeed, for each initial condition \(\mathbf{x} \in {\mathcal {C}}\) of the backwards process, the set of its possible further states is finite. Therefore, the probability \(\mathbf{L} _{\tau }(\mathbf{x} ,\mathbf{y} )\) for any \(\mathbf{x} ,\mathbf{y} \in {\mathcal {C}}\) is well-defined (and can be obtained by exponentiating the corresponding finite-size piece of the BHP jump matrix).

1.3 Main result

Recall that \(\upmu _t\) is the distribution of the TASEP configuration at time t (with the step initial condition). The measure \(\upmu _t\) is supported on the space \({\mathcal {C}}\) for all \(t\ge 0\).

Theorem 1

The BHP maps the TASEP distributions backwards in time. That is, for any \(t,\tau \in {\mathbb {R}}_{\ge 0}\), we have

$$\begin{aligned} \upmu _t \,\mathbf{L} _\tau =\upmu _{\,e^{-\tau }t}. \end{aligned}$$
(1.2)

In detail, this identity means that for any \(\mathbf{x} \in {\mathcal {C}}\) we have

$$\begin{aligned} \sum _\mathbf{y \in {\mathcal {C}}} \upmu _{t}(\mathbf{y} ) \, \mathbf{L} _\tau (\mathbf{y} ,\mathbf{x} ) = \upmu _{\,e^{-\tau }t}(\mathbf{x} ). \end{aligned}$$
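This identity lends itself to a quick Monte Carlo check. Below is a rough sketch reusing the hypothetical tasep_evolve and bhp_evolve functions from the sketches above; the observable (the number of particles at nonnegative sites) and all parameters are arbitrary choices of ours.

```python
import math, random

t, tau, n, trials = 60.0, 0.5, 300, 2000
rng = random.Random(42)
n0 = lambda x: sum(1 for p in x if p >= 0)

lhs = rhs = 0.0
for _ in range(trials):
    # mu_t followed by the BHP for time tau ...
    lhs += n0(bhp_evolve(tasep_evolve(list(range(-1, -n - 1, -1)), t, rng), tau, rng))
    # ... versus mu_{exp(-tau) t} directly
    rhs += n0(tasep_evolve(list(range(-1, -n - 1, -1)), math.exp(-tau) * t, rng))
print(lhs / trials, rhs / trials)   # the two sample means should agree
```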
Fig. 3

An illustration of the TASEP interface growth (top) and the interface decay under the backwards dynamics (bottom). In both pictures, lighter curves are the interfaces at later times. One can see that the TASEP evolution is symmetric about the vertical axis, while the backwards dynamics is not symmetric. Because of this asymmetry, there are in fact two backwards processes—one focusing on holes and the other focusing on particles. We only consider one of them in the present work

As \(\tau \rightarrow +\infty \), the right-hand side of (1.2) tends to \(\upmu _0\), which is the delta measure on the step configuration. This agrees with the observation that for any \(\mathbf{x} \in {\mathcal {C}}\) we have

$$\begin{aligned} \lim _{\tau \rightarrow +\infty } \mathbf{L} _\tau (\mathbf{x} ,\mathbf{y} )= \mathbb {1}_{\mathbf{y} =\text {step configuration}}. \end{aligned}$$

Theorem 1 leads to a stationary Markov dynamics on the TASEP measure \(\upmu _t\) (it is discussed in Sect. 1.7). In particular, this stationary dynamics brings new identities for expectations with respect to \(\upmu _t\). One of these identities is given in Corollary 7.3.

A simulation depicting the TASEP evolution from the step initial configuration up to \(t=350\), followed by the action of the BHP on the resulting interface, is available online [56]. The interfaces in Fig. 3 are snapshots of this simulation.

1.4 Remark: Reversal of Markov processes

Before discussing the strategy of the proof of Theorem 1 let us mention that TASEP, like any Markov chain (under certain technical assumptions), can be reversed in time, and its reversal is again a Markov chain—but usually time-inhomogeneous and quite complicated.

For TASEP, let \(\{\mathbf{T }_t\}_{t\in {\mathbb {R}}_{\ge 0}}\) be its Markov semigroup. Defining

$$\begin{aligned} \mathbf{T} ^{rev}_{t,s}(\mathbf{x} ,\mathbf{y} ) = \frac{\upmu _s(\mathbf{y} )}{\upmu _t(\mathbf{x} )} \, \mathbf{T} _{t-s}(\mathbf{y} ,\mathbf{x} ) ,\qquad t>s, \end{aligned}$$

we see that \(\mathbf{T} ^{rev}\) also maps the TASEP distributions back in time: \(\upmu _t \mathbf{T} ^{rev}_{t,s}=\upmu _s\), \(s<t\). In other words, the probabilities \(\mathbf{T} ^{rev}\) come from the time-reversal of the TASEP conditional distributions. The Markov process corresponding to \(\{\mathbf{T }^{rev}_{t,s}\}\) is time-inhomogeneous, and its interactions are substantially nonlocal. Theorem 1 implies that the BHP \(\{\mathbf{L }_\tau \}\) is a different, much more natural, Markov process which maps the TASEP distributions back in time.
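For completeness, the one-line verification of \(\upmu _t \mathbf{T} ^{rev}_{t,s}=\upmu _s\):

$$\begin{aligned} \sum _{\mathbf{x} \in {\mathcal {C}}} \upmu _t(\mathbf{x} )\, \mathbf{T} ^{rev}_{t,s}(\mathbf{x} ,\mathbf{y} ) = \upmu _s(\mathbf{y} ) \sum _{\mathbf{x} \in {\mathcal {C}}} \mathbf{T} _{t-s}(\mathbf{y} ,\mathbf{x} ) = \upmu _s(\mathbf{y} ), \end{aligned}$$

where the last equality holds because \(\mathbf{T} _{t-s}(\mathbf{y} ,\cdot )\) is a probability distribution.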

By a different mapping of the distributions we mean the following. One can check that the joint distribution of the TASEP configuration at two times \(e^{-\tau }t\) and t differs from the joint distribution of \((\mathbf{x} ,\mathbf{y} )\), where \(\mathbf{y} \) is distributed as \(\upmu _t\), and \(\mathbf{x} \) is obtained from \(\mathbf{y} \) by running the BHP process \(\mathbf{L} _{\tau }\).

1.5 Idea of proof of Theorem 1

We prove Theorem 1 in Sects. 4, 5, and 6. Here let us outline the main steps.

First, we modify the problem by introducing an extra parameter \(q\in (0,1)\), and consider the TASEP in which the k-th particle from the right, \(k\in {\mathbb {Z}}_{\ge 1}\), has the jump rate \(q^{k-1}\). Let the distribution at time t of this TASEP (with step initial configuration) be denoted by \(\upmu _t^{(q)}\).

Fig. 4

A configuration \(\{x^j_i\}\) in \({\mathbb {Z}}\times {\mathbb {Z}}_{\ge 1}\). The leftmost (marked) particles are identified with TASEP. The interlacing condition \(x^{j+1}_{i+1}< x^{j}_{i}\le x^{j+1}_{i}\) holds throughout the configuration

Second, we use the well-known mapping of the TASEP to Schur processes. Schur processes [67] (and their various generalizations including the Macdonald processes [14]) are one of the central tools in Integrable Probability. The particular Schur processes we employ are probability distributions on particle configurations \(\{x^j_i\}_{1\le i\le j}\) in \({\mathbb {Z}}\times {\mathbb {Z}}_{\ge 1}\) which satisfy an interlacing condition, see Fig. 4.

There exists a Schur process (depending on q and the time parameter \(t\in {\mathbb {R}}_{\ge 0}\)) under which the joint distribution of the leftmost particles \(\{x^N_N\}_{N\in {\mathbb {Z}}_{\ge 1}}\) in each horizontal row is the same as that of the q-dependent TASEP particles \(x_1(t)>x_2(t)>\cdots \) (i.e., this is \(\upmu _t^{(q)}\)). This mapping between TASEP and Schur processes is described in [16], but also follows from earlier constructions involving the Robinson–Schensted–Knuth correspondence. We recall the details in Sect. 3.

This Schur process corresponding to \(\upmu _t^{(q)}\) depends on q via the spectral parameters \(1,q,q^2,\ldots \) attached to the horizontal lines (as indicated in Fig. 4). The new ingredients we bring to Schur processes are Markov maps interchanging two neighboring spectral parameters (say, the j-th and the \((j+1)\)-th). By a Markov map we mean a way to randomly modify the interlacing particle configuration in \({\mathbb {Z}}\times {\mathbb {Z}}_{\ge 1}\) such that:

  • At the j-th horizontal level the particles almost surely jump to the left;

  • All other levels are untouched;

  • The interlacing conditions are preserved;

  • If the starting configuration was distributed as a Schur process, then the resulting configuration is distributed as a modified Schur process with the j-th and the \((j+1)\)-th spectral parameters interchanged.

We refer to this as the “L Markov map” since it moves particles to the left (it has a counterpart, the “R Markov map”, but we do not need it for the main result). The L Markov map at each j-th level depends only on the ratio of the spectral parameters being interchanged.

Combining the L Markov maps in such a way that they interchange the bottommost spectral parameter 1 with q, then with \(q^2\), then with \(q^3\), and so on, we can move this parameter 1 to infinity, where it “disappears” (see Fig. 9 for an illustration). The resulting distribution of the configuration will again be a Schur process with the same spectral parameters \((1,q,q^2,\ldots )\), but with the modified time parameter, \(t\mapsto qt\). Here we use the fact that the measure does not change under the simultaneous rescaling of the spectral parameters.

Considering the action of this combination of the L Markov maps on the leftmost particles \(\{x_N^N\}\), we arrive at an explicit Markov transition kernel on \({\mathcal {C}}\), denoted by \(\mathbf{L} ^{(q)}\), with the property that (this is Theorem 5.7)

$$\begin{aligned} \upmu _t^{(q)}\,\mathbf{L} ^{(q)}= \upmu _{qt}^{(q)} \qquad \text {for all } t\in {\mathbb {R}}_{\ge 0}. \end{aligned}$$

Finally, iterating the action of \(\mathbf{L} ^{(q)}\) and taking the limit as \(q\rightarrow 1\), we arrive at Theorem 1.

1.6 “Toy” example: coupling of Bernoulli random walks

The Schur process computations leading to Theorem 1 have an elementary consequence which we now describe. Its connection to Schur processes is detailed in Sect. 8.7.

Fig. 5

Left: Probabilities in the Bernoulli random walk. Center: A sample trajectory of the Bernoulli random walk. Right: Local step of the process \(\mathbf{D} _\tau \)

Fix \(\beta \in (0,1)\), and let \({\mathsf {b}}_\beta \) be the distribution of the simple random walk in the quadrant \({\mathbb {Z}}_{\ge 0}^2\), under which the walker starts at (0, 0) and goes up with probability \(\beta \) and to the right with probability \(1-\beta \), independently at each step.

Consider the continuous-time Markov process on the space of random walk trajectories under which each (up, right) local piece is independently replaced by the (right, up) piece at rate \(m+n-1\), where \((m,n)\in {\mathbb {Z}}_{\ge 1}^2\) are the coordinates of the local piece. See Fig. 5 for an illustration. Clearly, in each triangle \(\{m+n \le K \}\), almost surely at each time moment there is at most one change of the trajectory. Moreover, for different K these processes are compatible, so by the Kolmogorov extension theorem they indeed define a continuous-time Markov process on the full space of random walk trajectories. Denote the resulting Markov semigroup by \(\{\mathbf{D }_\tau \}_{\tau \in {\mathbb {R}}_{\ge 0}}\).
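A hedged simulation sketch of \(\mathbf{D} _\tau \) (names and truncation ours): encode the walk by its step sequence; the local piece formed by steps i and \(i+1\) has its corner at a point with \(m+n=i+1\), so it flips at rate i.

```python
import random

def sample_walk(beta, K, rng=random):
    # first K steps of the Bernoulli walk: 'U' w.p. beta, 'R' w.p. 1 - beta
    return ['U' if rng.random() < beta else 'R' for _ in range(K)]

def D_evolve(steps, tau, rng=random):
    """Corner flips inside the triangle {m + n <= K}, K = len(steps):
    the 0-indexed pair (i, i+1) sits at m + n = i + 2 and flips at rate i + 1."""
    s = 0.0
    while True:
        corners = [i for i in range(len(steps) - 1)
                   if steps[i] == 'U' and steps[i + 1] == 'R']
        if not corners:
            return steps
        w = [i + 1 for i in corners]
        s += rng.expovariate(sum(w))
        if s > tau:
            return steps
        i = rng.choices(corners, weights=w)[0]
        steps[i], steps[i + 1] = 'R', 'U'   # (up, right) becomes (right, up)
```

Averaging the indicator of the first step being 'U' over many runs reproduces \(\beta (\tau )\) from Proposition 2 below, up to truncation effects near \(m+n=K\).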

Proposition 2

For any \(\beta \in (0,1)\) and \(\tau \ge 0\) we have

$$\begin{aligned} {\mathsf {b} }_\beta \,\mathbf{D} _\tau ={\mathsf {b} }_{\beta (\tau )}, \quad \text {where}\quad \beta (\tau )=\frac{\beta e^{-\tau }}{1-\beta +\beta e^{-\tau }}. \end{aligned}$$
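As a consistency check (a direct substitution which we include for the reader's convenience), the maps \(\beta \mapsto \beta (\tau )\) form a flow, as they must since \(\mathbf{D} _{\tau _1}\mathbf{D} _{\tau _2}=\mathbf{D} _{\tau _1+\tau _2}\):

$$\begin{aligned} \bigl (\beta (\tau _1)\bigr )(\tau _2) = \frac{\beta (\tau _1)\, e^{-\tau _2}}{1-\beta (\tau _1)+\beta (\tau _1)\, e^{-\tau _2}} = \frac{\beta e^{-\tau _1-\tau _2}}{1-\beta +\beta e^{-\tau _1-\tau _2}} = \beta (\tau _1+\tau _2), \end{aligned}$$

where the middle equality follows by clearing the common denominator \(1-\beta +\beta e^{-\tau _1}\).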

The action of \(\mathbf{D} _{\tau }\) decreases the parameter \(\beta \) and almost surely moves the trajectory closer to the m (horizontal) axis. By symmetry, one can also define a continuous-time Markov chain which moves the vertical pieces of the trajectory to the left, and increases the parameter \(\beta \). It could be interesting to look at the stationary dynamics—a combination of the two processes running in parallel which does not change \(\beta \)—and understand its large-scale asymptotic behavior. We do not focus on this question in the present work.

1.7 Stationary dynamics on the TASEP measure

Fix \(t\in {\mathbb {R}}_{>0}\). The backwards Hammersley-type process, slowed down by a factor of t, compensates the time change of the forward TASEP evolution. Running these two processes in parallel thus amounts to a continuous-time Markov process which preserves the TASEP distribution \(\upmu _t\).

One can say that the TASEP distributions \(\upmu _t\) are the “blocking measures” for the stationary dynamics [57] (see also [5]).

The presence of the stationary dynamics on \(\upmu _t\) allows one to obtain new properties of the TASEP measure. In particular, we write down an exact evolution equation for \({\mathbb {E}}\,G(N_t^0)\), where \(N_t^0\) is the number of particles to the right of zero at time t, and G is an arbitrary function. This equation contains one more random quantity—the number of holes immediately to the left of zero. See Corollary 7.3 for details.

Moreover, in Sect. 7 we rederive the limit shape parabola for the TASEP by looking at the hydrodynamics of the process preserving \(\upmu _t\). Indeed, recall that the TASEP local equilibria—the ergodic translation invariant measures on configurations on the full line \({\mathbb {Z}}\) which are also invariant under the TASEP evolution—are precisely the product Bernoulli measures [57]. In the bulk of the BHP, the difference between jump rates of consecutive particles is inessential. Thus, the product Bernoulli measures also serve as local equilibria for the BHP. By looking at the local equilibria, one can write down two hydrodynamic PDEs for the TASEP limit shape: the first is the well-known Burgers' equation, and the second is a PDE coming from the BHP, which is specific to the step initial condition. After simplifications, these PDEs lead to the parabola (1.1).

Beyond hydrodynamics, the asymptotic fluctuation behavior of the TASEP measures \(\upmu _t\) as \(t\rightarrow +\infty \) is understood very well by now, starting from [50]. It would be very interesting to extend these results to the combination \(\text {TASEP}+t^{-1}\text {BHP}\) which preserves \(\upmu _t\).

1.8 Further extensions

The Markov maps on Schur processes we introduce to prove our main result, Theorem 1, offer a variety of other applications and open problems. We discuss them in more detail in Sect. 8. Here let us briefly outline the main directions:

  • The one-dimensional statement (mapping the TASEP distributions back in time) has an extension to two dimensions. Namely, there is a continuous-time Markov process on interlacing particle configurations (as in Fig. 4) which maps back in time the distributions of the anisotropic KPZ growth process on interlacing arrays studied in [16].

  • Instead of Schur processes, one can consider interlacing configurations of finite depth. This includes probability distributions on boxed plane partitions with weight proportional to \(q^{\mathrm {vol}}\) (where \(\mathrm {vol}\) is the volume under the boxed plane partition). In this setting our constructions produce Markov chains mapping the measure \(q^{\mathrm {vol}}\) to the measure \(q^{-\mathrm {vol}}\), and vice versa. (A simulation is available online [70].) Applying this procedure twice leads to a new sampling algorithm for the measures \(q^{\pm \mathrm {vol}}\).

  • A certain bulk limit of our two-dimensional Markov maps essentially leads to the growth processes preserving ergodic Gibbs measures on two-dimensional interlacing configurations introduced and studied in [83]. Thus, one can view our Markov maps as exact “pre-bulk” stationary dynamics on two-dimensional interlacing configurations.

  • Theorem 1 may be interpreted as the statement that the family of measures \(\{\upmu _t\}\) is coherent with respect to a projective system determined by the process \(\{\mathbf{L }_\tau \}\). Projective systems [23] generalize the notion of branching graphs, and the latter play a fundamental role in Asymptotic Representation Theory [24, 84]. (Even further, the distributions of the anisotropic KPZ growth are also coherent, on a projective system whose “levels” are spaces of two-dimensional interlacing configurations.) The framework of projective systems/branching graphs provides many natural questions in this setting.

  • Structurally, our Markov maps are inspired by the study of stochastic vertex models and bijectivization of the Yang–Baxter equation [29, 30]. Compared with the Schur case, the full Yang–Baxter equation for the quantum \({\mathfrak {sl}}_2\) contains more parameters. In this setting, Schur polynomials should be replaced by the spin Hall-Littlewood or spin q-Whittaker symmetric functions [12, 27]. It is interesting to see how far Theorem 1 can be generalized to other particle systems arising in this framework, including ASEP, various stochastic six vertex models, and random matrix models.

  • There exists a backwards dynamics for the ASEP started from a family of shock measures [9]. This ASEP backwards dynamics is obtained via a duality. While the shock measures are very different from the step initial configuration, it would be interesting to find connections of Theorem 1 to Markov duality.

Concrete open questions along these directions are formulated and discussed in Sect. 8.

1.9 Outline

In Sects. 2 and 3 we recall the necessary facts about Schur processes, TASEP, and their connection. In Sect. 4 we introduce the L and R Markov maps at the level of interlacing arrays. The action of each such map swaps two neighboring spectral parameters. In Sect. 5 we combine the L Markov maps in such a way that their combination \({\mathbb {L}}^{(q)}\) preserves the class of q-Gibbs measures on interlacing arrays (which includes the Schur processes related to the q-dependent TASEP). We compute the action of \({\mathbb {L}}^{(q)}\) on q-Gibbs measures and the corresponding Schur processes. In Sect. 6 we take a limit \(q\rightarrow 1\), which leads to our main result, Theorem 1. In Sect. 7 we illustrate the relation between the TASEP and the backwards evolutions at the hydrodynamic level by looking at the stationary dynamics on the TASEP distribution \(\upmu _t\). Finally, in Sect. 8 we discuss possible extensions of our constructions indicated in Sect. 1.8 above, and formulate a number of open questions.

2 Ascending Schur processes

This section is a brief review of ascending Schur processes introduced in [67] and their relation to TASEP. More details may be found in, e.g., [18].

2.1 Partitions

A partition \(\lambda = (\lambda _1\ge \cdots \ge \lambda _{\ell (\lambda )}>0 )\), where \(\lambda _i\in {\mathbb {Z}}\), is a weakly decreasing sequence of nonnegative integers. We denote \(|\lambda |:=\sum _{i=1}^{\ell (\lambda )}\lambda _i\). We call \(\ell (\lambda )\) the length of the partition. By convention we do not distinguish partitions if they differ by trailing zeroes. In this way \(\ell (\lambda )\) always denotes the number of strictly positive parts in \(\lambda \). Denote by \({\mathbb {Y}}\) the set of all partitions including the empty one \(\varnothing \) (by convention, \(\ell (\varnothing )=|\varnothing |=0\)).

2.2 Schur polynomials

Fix \(N\in {\mathbb {Z}}_{\ge 1}\). The Schur symmetric polynomials in N variables are indexed by \(\lambda \in {\mathbb {Y}}\) and are defined as

$$\begin{aligned} s_\lambda (x_1,\ldots ,x_N ):= \frac{\det \bigl [x_i^{\lambda _j + N -j} \bigr ]_{i,j=1}^N}{\prod _{1\le i<j\le N} (x_i -x_j)},\qquad N\ge \ell (\lambda ). \end{aligned}$$

If \(N<\ell (\lambda )\), we set \(s_\lambda (x_1,\ldots ,x_N )=0\), by definition.
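For experimentation, the determinant formula above is straightforward to implement. The following sympy sketch (function name ours; \(\lambda \) is given without trailing zeros) is handy for spot-checking the identities below.

```python
from sympy import Matrix, cancel, expand, prod, symbols

def schur(lam, xs):
    """s_lambda(x_1, ..., x_N) via the determinant formula above."""
    N = len(xs)
    if len(lam) > N:                  # more nonzero parts than variables
        return 0
    lam = list(lam) + [0] * (N - len(lam))
    num = Matrix(N, N, lambda i, j: xs[i] ** (lam[j] + N - 1 - j))
    den = prod(xs[i] - xs[j] for i in range(N) for j in range(i + 1, N))
    return expand(cancel(num.det() / den))

x1, x2 = symbols('x1 x2')
print(schur([2, 1], [x1, x2]))        # x1**2*x2 + x1*x2**2: homogeneous, degree 3
```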

The Schur polynomials \(s_\lambda \) indexed by all \(\lambda \in {\mathbb {Y}}\) with \(\ell (\lambda )\le N\) form a linear basis in the space \({\mathbb {C}}[x_1,\ldots ,x_N ]^{\mathfrak {S}_N}\) of symmetric polynomials in N variables. Each \(s_\lambda \) is a homogeneous polynomial of degree \(|\lambda |\).

The Schur polynomials are stable in the following sense:

$$\begin{aligned} s_{\lambda } (x_1,\ldots ,x_N,0)= s_{\lambda } (x_1,\ldots ,x_N). \end{aligned}$$
(2.1)

This stability allows one to define Schur symmetric functions \(s_\lambda \), \(\lambda \in {\mathbb {Y}}\), in infinitely many variables. These objects form a linear basis of the algebra of symmetric functions \(\varLambda \). We refer to [61, Ch. I.2] for the precise definition and details on the algebra \(\varLambda \).

2.3 Skew Schur polynomials

The skew Schur polynomials \(s_{\lambda /\varkappa }\), \(\lambda ,\varkappa \in {\mathbb {Y}}\), are defined through the branching rule as follows:

$$\begin{aligned} s_\lambda (x_1,\ldots ,x_N ) = \sum _{\varkappa \in {\mathbb {Y}}} s_\varkappa (x_1,\ldots ,x_K ) s_{\lambda /\varkappa }(x_{K+1},\ldots ,x_N ). \end{aligned}$$
(2.2)

Indeed, \(s_\lambda (x_1,\ldots ,x_N )\) is a symmetric polynomial in \(x_1,\ldots ,x_K\), and so the skew Schur polynomials in (2.2) are the coefficients of the linear expansion. These skew Schur polynomials are symmetric in \(x_{K+1},\ldots ,x_N\) and satisfy the stability property similar to (2.1). We have \(s_{\lambda /\varnothing }=s_\lambda \).

Let \(\lambda ,\varkappa \in {\mathbb {Y}}\). Plugging in just one variable into \(s_{\lambda /\varkappa }\) simplifies this symmetric function. Namely, \(s_{\lambda /\varkappa }(x)\) vanishes unless \(\varkappa \) and \(\lambda \) interlace (notation \(\varkappa \prec \lambda \); equivalently, \(\lambda /\varkappa \) is a horizontal strip):

$$\begin{aligned} \lambda _1\ge \varkappa _1\ge \lambda _2\ge \varkappa _2 \ge \ldots . \end{aligned}$$
(2.3)

Moreover,

$$\begin{aligned} s_{\lambda /\varkappa }(x)=x^{|\lambda |-|\varkappa |}\mathbb {1}_{\varkappa \prec \lambda }. \end{aligned}$$
(2.4)

For any \(\lambda \in {\mathbb {Y}}\), the set \(\left\{ \varkappa :\varkappa \prec \lambda \right\} \) is finite.

Iterating (2.2) and breaking down all skew Schur polynomials into single-variable ones, we see that each Schur polynomial has the following form:

$$\begin{aligned}&s_\lambda (x_1,\ldots ,x_N )\nonumber \\&\quad = \sum _{\lambda ^{(1)}\prec \ldots \prec \lambda ^{(N)}=\lambda } x_1^{|\lambda ^{(1)}|} x_2^{|\lambda ^{(2)}|-|\lambda ^{(1)}|} \ldots x_{N-1}^{|\lambda ^{(N-1)}|-|\lambda ^{(N-2)}|} x_{N}^{|\lambda ^{(N)}|-|\lambda ^{(N-1)}|}, \end{aligned}$$
(2.5)

where the sum is taken over all interlacing arrays of partitions of depth N in which the top row coincides with \(\lambda \) (see Fig. 6 for an illustration). In combinatorial language, (2.5) is the representation of a Schur polynomial as a generating function of semistandard Young tableaux, cf. [45].
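The expansion (2.5) can be verified by brute force; here is a hedged sketch (our own naming) which computes \(s_\lambda \) by peeling off one variable at a time as in (2.2) and (2.4). Evaluating it at permuted numeric points also illustrates the symmetry underlying Proposition 2.2 below.

```python
from itertools import product

def strips_below(lam):
    # all kappa interlacing with lam from below, cf. (2.3); lam padded by a 0
    padded = list(lam) + [0]
    return product(*[range(padded[k + 1], padded[k] + 1) for k in range(len(lam))])

def schur_branching(lam, xs):
    # iterate (2.2), splitting off the last variable and using (2.4)
    if not xs:
        return 1 if sum(lam) == 0 else 0
    return sum(schur_branching(mu, xs[:-1]) * xs[-1] ** (sum(lam) - sum(mu))
               for mu in strips_below(lam))

# symmetry at a numeric point (cf. Proposition 2.2 below)
print(schur_branching([3, 1], [2, 5, 7]), schur_branching([3, 1], [7, 2, 5]))
```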

Remark 2.1

If \(N<\ell (\lambda )\), then there are no interlacing arrays of depth N whose top row is \(\lambda \) because at each level one can add at most one nonzero component. Thus, the right-hand side of (2.5) automatically vanishes if \(N<\ell (\lambda )\). This agrees with the fact that \(s_\lambda (x_1,\ldots ,x_N )=0\) if \(N<\ell (\lambda )\).

The following two identities for skew Schur polynomials play a fundamental role in our work. The first identity is a straightforward consequence of the symmetry of the Schur polynomials.

Proposition 2.2

For any \(\lambda ,\mu \in {\mathbb {Y}}\) and variables \(x,y\) we have

$$\begin{aligned} \sum _{\varkappa \in {\mathbb {Y}}} s_{\lambda /\varkappa }(x)s_{\varkappa /\mu }(y) = \sum _{{\hat{\varkappa }}\in {\mathbb {Y}}} s_{\lambda /{\hat{\varkappa }}}(y) s_{{\hat{\varkappa }}/\mu }(x). \end{aligned}$$

The sums on both sides are finite.

The second is the skew Cauchy identity, see [61, Ch. I.5].

Proposition 2.3

For any \(\lambda , \mu \in {\mathbb {Y}}\) and variables \(x_1,\ldots ,x_N,y_1,\ldots ,y_M \) we have

$$\begin{aligned} \begin{aligned}&\sum _{\nu \in {\mathbb {Y}}} s_{\nu /\mu }(x_1,\ldots ,x_N ) s_{\nu /\lambda }(y_1,\ldots ,y_M ) \\&\quad = \prod _{i=1}^{N}\prod _{j=1}^{M} \frac{1}{1 - x_i y_j} \sum _{\varkappa \in {\mathbb {Y}}} s_{\lambda / \varkappa }(x_1,\ldots ,x_N) s_{\mu /\varkappa }(y_1,\ldots ,y_M ). \end{aligned} \end{aligned}$$
(2.6)

This is an identity of generating series in \(x_i,y_j\) under the standard geometric series expansion \(\frac{1}{1-x_iy_j}=1+x_iy_j+(x_iy_j)^2+\ldots \). Moreover, (2.6) holds as a numerical identity if \(x_i,y_j\in {\mathbb {C}}\) are such that \(|x_iy_j|<1\) for all \(i,j\).

Remark 2.4

If we set \(\lambda =\mu =\varnothing \) in (2.6), the sum on the right-hand side disappears (because \(s_{\varnothing /\varkappa }=\mathbb {1}_{\varkappa =\varnothing }\)), and we obtain

$$\begin{aligned} \sum _{\nu \in {\mathbb {Y}}} s_{\nu }(x_1,\ldots ,x_N ) s_{\nu }(y_1,\ldots ,y_M ) = \prod _{i=1}^{N}\prod _{j=1}^{M} \frac{1}{1 - x_i y_j}. \end{aligned}$$
(2.7)

Again, this is a numerical identity provided that \(|x_iy_j|<1\) for all \(i,j\).

2.4 Specializations

When \(x\ge 0\), we have \(s_{\lambda /\varkappa }(x)\ge 0\) from (2.4). More generally, the Schur polynomials \(s_\lambda (x_1,\ldots ,x_N )\) are nonnegative for real nonnegative \(x_1,\ldots ,x_N\).

We will also need the Plancherel specializations of Schur functions \(s_\lambda \). These specializations, indexed by \(t\ge 0\), may be defined through the limit

$$\begin{aligned} s_{\lambda }(\rho _t) := \lim _{K\rightarrow +\infty } s_\lambda \left( \frac{t}{K},\ldots ,\frac{t}{K} \right) , \qquad \lambda \in {\mathbb {Y}}, \end{aligned}$$
(2.8)

where \(\frac{t}{K}\) is repeated K times.

Remark 2.5

We also have \(s_\lambda (\rho _t)=t^{|\lambda |}\,\dfrac{\dim \lambda }{|\lambda |!}\), where \(\dim \lambda \) is the dimension of the irreducible representation of the symmetric group \(\mathfrak {S}_{|\lambda |}\), or, equivalently, the number of standard Young tableaux of shape \(\lambda \).
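In code, Remark 2.5 combined with the hook length formula for \(\dim \lambda \) gives a quick evaluator of the Plancherel specialization (a sketch; the naming is ours):

```python
from math import factorial

def plancherel(lam, t):
    """s_lambda(rho_t) = t^{|lam|} * dim(lam) / |lam|!, with dim(lam)
    computed by the hook length formula dim = |lam|! / prod(hook lengths)."""
    n = sum(lam)
    conj = [sum(1 for l in lam if l > j) for j in range(lam[0])] if lam else []
    hooks = 1
    for i, li in enumerate(lam):
        for j in range(li):
            hooks *= (li - j - 1) + (conj[j] - i - 1) + 1   # arm + leg + 1
    return t ** n * (factorial(n) // hooks) / factorial(n)

print(plancherel([2, 1], 1.0))   # dim(2,1) = 2, so this prints 2/6 = 0.333...
```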

Generic nonnegative specializations will be denoted as \(\rho :\varLambda \rightarrow {\mathbb {R}}\), and we will also use the notation \(s_\lambda (\rho )\) for \(\rho (s_\lambda )\). For the purposes of the present paper, \(\rho \) will be either a Plancherel specialization, or a substitution of finitely many nonnegative variables into the symmetric function.

Remark 2.6

A classification of Schur-positive specializations (that is, algebra homomorphisms \(\varLambda \rightarrow {\mathbb {R}}\) which are nonnegative on Schur functions) is known and is equivalent to the celebrated Edrei–Thoma theorem. See, for example, [24] for a modern account discussing various equivalent formulations.

2.5 Schur processes

Schur measures and processes are probability distributions on partitions or sequences of partitions whose probability weights are expressed through Schur polynomials in a certain way. They were introduced in [66, 67].

A Schur measure is a probability measure on \({\mathbb {Y}}\) with probability weights depending on two nonnegative specializations \(\rho _1,\rho _2\):

$$\begin{aligned} {\mathbb {P}}[\rho _1\mid \rho _2](\lambda ) = \frac{1}{Z}\, s_{\lambda }(\rho _1) s_{\lambda }(\rho _2), \qquad Z=\sum _{\lambda \in {\mathbb {Y}}}s_\lambda (\rho _1)s_\lambda (\rho _2). \end{aligned}$$
(2.9)

The normalizing constant Z can be computed using the Cauchy identity (2.7) (provided that the infinite sum converges).

Schur processes are probability measures on sequences of partitions generalizing the Schur measures. We will only need the particular case of ascending Schur processes. These are probability measures on interlacing arrays

$$\begin{aligned} \lambda ^{(1)}\prec \lambda ^{(2)}\prec \ldots \prec \lambda ^{(N)}, \qquad \lambda ^{(j)}\in {\mathbb {Y}} \end{aligned}$$

(for some fixed N) depending on a nonnegative specialization \(\rho \) and \(c_1,\ldots ,c_N\ge 0\):

$$\begin{aligned} {\mathbb {P}}[\vec c \mid \rho ](\lambda ^{(1)},\ldots ,\lambda ^{(N)} ):= \frac{1}{Z}\, s_{\lambda ^{(1)}}(c_1)s_{\lambda ^{(2)}/\lambda ^{(1)}}(c_2)\ldots s_{\lambda ^{(N)}/\lambda ^{(N-1)}}(c_N)s_{\lambda ^{(N)}}(\rho ). \end{aligned}$$
(2.10)

The normalizing constant has the form [this follows from (2.2) and (2.7)]:

$$\begin{aligned} Z=\sum _{\lambda \in {\mathbb {Y}}} s_\lambda (c_1,\ldots ,c_N )s_\lambda (\rho ) \end{aligned}$$
(2.11)

(provided that the series converges). We call N the depth of a Schur process. We will sometimes call the \(c_i\)’s the spectral parameters of the Schur process \({\mathbb {P}}[\vec c\mid \rho ]\).

The next statement immediately follows from (2.2) and the skew Cauchy identity:

Proposition 2.7

Under the Schur process (2.10), the marginal distribution of each \(\lambda ^{(K)}\), \(1\le K\le N\), is given by the Schur measure \({\mathbb {P}}[(c_1,\ldots ,c_K )\mid \rho ]\).

2.6 Schur processes of infinite depth

Let us denote by \({\mathcal {S}}\) the set of interlacing arrays of infinite depth \(\{\lambda ^{(j)}\}_{j\in {\mathbb {Z}}_{\ge 1}}\), where \(\lambda ^{(j)}\in {\mathbb {Y}}\) and \(\lambda ^{(j-1)}\prec \lambda ^{(j)}\), with the convention \(\lambda ^{(0)}=\varnothing \) (cf. Fig. 6 for an illustration).

Remark 2.8

The interlacing array in Fig. 6 and the one in Fig. 4 in the Introduction are related by \(x^N_k=\lambda ^{(N)}_k-N+k\). We work with the \(\{\lambda ^{(N)}_k\}\) notation throughout the paper.

By the Kolmogorov extension theorem, a measure on \({\mathcal {S}}\) is uniquely determined by a collection of compatible joint distributions of \(\{\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N)}\}_{N\ge 1}\). If these joint distributions satisfy the \(\vec c\)-Gibbs property, then the resulting measure on \({\mathcal {S}}\) is \(\vec c\)-Gibbs.

Thus, Proposition 2.7 implies the following extension of the definition of a Schur process. Given an infinite sequence \(c_1,c_2,\ldots , \) of nonnegative reals such that the sums like (2.11) converge for all N, one can define the Schur process \({\mathbb {P}}[\vec c\mid \rho ]\) of infinite depth, i.e., a probability measure on \({\mathcal {S}}\). Indeed, this is because the distributions (2.10) for different N are compatible with each other by Proposition 2.7, so the measure on \({\mathcal {S}}\) with the desired finite-dimensional distributions exists.

Fig. 6

An interlacing array

2.7 \(\vec c\)-Gibbs measures

Fix nonnegative reals \(c_1,c_2,\ldots \). A probability distribution on \({\mathcal {S}}\) is called \({\vec {c}}\)-Gibbs if for any N, given \(\lambda ^{(N)}=\lambda \), the conditional distribution of the bottom part \(\lambda ^{(1)}\prec \ldots \prec \lambda ^{(N-1)}\prec \lambda \) of the interlacing array has the form

$$\begin{aligned}&\mathrm {Prob}(\lambda ^{(1)},\ldots ,\lambda ^{(N-1)}\mid \lambda ^{(N)}=\lambda ) \nonumber \\&\quad = \frac{s_{\lambda ^{(1)}}(c_1)s_{\lambda ^{(2)}/\lambda ^{(1)}}(c_2)\ldots s_{\lambda ^{(N-1)}/\lambda ^{(N-2)}}(c_{N-1})s_{\lambda /\lambda ^{(N-1)}}(c_N)}{s_{\lambda }(c_1,\ldots ,c_N )}. \end{aligned}$$
(2.12)

The expression in the denominator is simply the normalizing constant. One can say that each interlacing array in (2.12) is weighted proportional to the corresponding term in the expansion (2.5). Note that the \(\vec c\)-Gibbs property depends on the order of the \(c_i\)’s, but the normalizing constant in (2.12) does not.

The next lemma is a straightforward consequence of (2.12).

Lemma 2.9

Fix any \(j\ge 2\). Under a \(\vec c\)-Gibbs measure, the conditional probability of \(\lambda ^{(j)}\) given all \(\lambda ^{(i)}\), with \(i\ne j\), is proportional to \(s_{\lambda ^{(j+1)}/\lambda ^{(j)}}(c_{j+1})\, s_{\lambda ^{(j)}/\lambda ^{(j-1)}}(c_j)\).

Denote the space of all \(\vec c\)-Gibbs measures on \({\mathcal {S}}\) by \(\mathfrak {G}_{\vec c}\). Note that this space does not change if we multiply all the parameters by the same positive number: \(\mathfrak {G}_{\vec c}=\mathfrak {G}_{a\cdot \vec c}\), \(a>0\). Indeed, this follows from (2.12) and the homogeneity of the Schur polynomials.

Remark 2.10

When all \(c_i\equiv 1\), the conditional distribution (2.12) becomes uniform (on the set of all interlacing arrays of depth N with top row \(\lambda \)). This uniform Gibbs case justifies the name \(\vec c\)-Gibbs in the general situation.

The Schur process \({\mathbb {P}}[\vec c\mid \rho ]\) is a particular example of a \(\vec c\)-Gibbs measure. The full classification of \(\vec c\)-Gibbs measures is known only in several particular cases. In the uniform case \(c_i\equiv 1\) this is the celebrated Edrei–Voiculescu theorem (see Sect. 8.1 and also, e.g., [22] for a modern account discussing various equivalent formulations). When the \(c_i\)’s form a geometric sequence, the classification was obtained much more recently in [46] (see also [47] for a generalization).

3 Schur processes and TASEP

In this section we recall a coupling between TASEP (with step initial configuration and particle-dependent speeds) and a marginal of an ascending Schur process. This mapping can be seen as a consequence of the column Robinson–Schensted–Knuth insertion [64, 65, 85]. One can also define a continuous-time Markov dynamics on interlacing arrays whose marginal is TASEP [16] (see also [25]).

3.1 TASEP

Let \(c_1,\ldots ,c_N,\ldots \) be positive reals. The continuous-time TASEP (Totally Asymmetric Simple Exclusion Process) with step initial condition and speeds \(\vec c\) is defined as follows. It is a Markov process on particle configurations \(\mathbf{x} (t)=(x_1(t)>x_2(t)>\cdots )\) on the integer lattice, such that

  • The initial particles’ locations are \(x_i(0)=-i\), \(i=1,2,\ldots \) (this is the step initial configuration);

  • The configuration has a rightmost particle, \(x_1\);

  • The configuration is densely packed far to the left, that is, for all large enough M (where the bound on M depends on t) we have \(x_{M}(t)=-M\);

  • There is at most one particle per site.

Denote the space of such left-packed and right-finite particle configurations on \({\mathbb {Z}}\) by \({\mathcal {C}}\).

The continuous-time Markov evolution of TASEP proceeds as follows. Each particle \(x_i\) has an independent exponential clock with rate \(c_i\). That is, the time before \(x_i\) attempts to jump is an exponential random variable: \(\mathrm {Prob}(\text {time}>t)=e^{-c_i t}\), \(t\ge 0\). (We will refer to the \(c_i\)’s as the particle speeds.) When the clock of \(x_i\) rings, the particle jumps to the right by one if the destination is not occupied. If the destination of the jumping particle is occupied, the jump is forbidden and the particle configuration does not change. Because the process starts from the step initial configuration, only finitely many particles are free to jump at any particular time. Therefore at any time almost surely at most one jump happens. See Fig. 7 for an illustration.

Fig. 7

An example of a jump and a forbidden jump in TASEP

3.2 Coupling to a Schur process

Fix \(N\in {\mathbb {Z}}_{\ge 1}\), positive reals \(c_1,\ldots ,c_N \), and \(t\ge 0\). Consider the Schur process \({\mathbb {P}}[\vec c\mid \rho _t]\) defined by (2.10), where \(\rho _t\) is the Plancherel specialization. Note that the series for the normalizing constant (2.11) always converges because

$$\begin{aligned} \begin{aligned} Z=\sum _{\lambda \in {\mathbb {Y}}} s_\lambda (c_1,\ldots ,c_N) s_\lambda (\rho _t)&= \lim _{K\rightarrow \infty } \sum _{\lambda \in {\mathbb {Y}}} s_\lambda (c_1,\ldots ,c_N) s_\lambda \left( \frac{t}{K},\ldots ,\frac{t}{K} \right) \\&= \lim _{K\rightarrow \infty }\prod _{i=1}^{N}\frac{1}{(1-c_i t/K)^K} = e^{(c_1+\ldots +c_N)t}, \end{aligned} \end{aligned}$$

and the last expression is an entire function in t and \(c_i\). Since this procedure works for all N, we can view \({\mathbb {P}}[\vec c\mid \rho _t]\) as a Schur process of infinite depth, i.e., a probability measure on \({\mathcal {S}}\).

When \(t=0\), \({\mathbb {P}}[\vec c\mid \rho _0]\) is concentrated on the single interlacing array densely packed at zero, that is, with each \(\lambda ^{(j)}=(0,\ldots ,0 )\) (j times).

The next result appears in [16], but alternatively follows from much earlier constructions involving Robinson–Schensted–Knuth correspondences [64, 65, 85].

Theorem 3.1

Fix \(t\ge 0\) and particle speeds \(c_1,c_2,\ldots \), and consider the TASEP as in Sect. 3.1 at time t. Then we have equality of joint distributions at the fixed time t:

$$\begin{aligned} x_i(t){\mathop {=}\limits ^{d}}\lambda _i^{(i)}-i,\qquad i=1,2,\ldots , \end{aligned}$$
(3.1)

where \(\lambda ^{(i)}\) are the random partitions coming from the Schur process \({\mathbb {P}}[\vec c\mid \rho _t]\) described above.

Remark 3.2

A dynamical version of this result is also proven in [16]: there exists a continuous-time Markov chain on interlacing arrays (even a whole family of them, cf. [25, 26]) whose action on a Schur process \({\mathbb {P}}[\vec c\mid \rho _t]\) continuously increases the parameter t. We will refer to the dynamics from [16] as the push-block process (see Definition 8.8 for details). For the push-block process on interlacing arrays, (3.1) holds as equality of joint distributions of Markov processes. In other words, (3.1) is also true for multitime joint distributions of these processes. However, we do not need this dynamical result for most of our constructions.

4 Markov maps

This section introduces our main objects—the Markov maps \(L_\alpha ^{(j)}\) and \(R_{\alpha }^{(j)}\) which randomly change the j-th row \(\lambda ^{(j)}\) in an interlacing array while keeping all other rows intact. These maps act on \(\vec c\)-Gibbs measures by permuting spectral parameters.

4.1 First level

Let us first describe the maps for \(j=1\) (the simplest nontrivial case) to illustrate their structure and properties. We use the shorthand notation \(\lambda ^{(2)}=(\lambda _1,\lambda _2)\) and \(\lambda ^{(1)}=\varkappa _1\). The interlacing means that \(\lambda _2\le \varkappa _1\le \lambda _1\).

Definition 4.1

(Truncated geometric distribution) Let \(A\in {\mathbb {Z}}_{\ge 0}\) and \(\alpha \in [0,1]\). A discrete random variable \(Y=Y_\alpha (A)\) on \(\left\{ 0,1,\ldots ,A \right\} \) is called truncated geometric if it has the distribution

$$\begin{aligned} \mathrm {Prob}(Y=k)= {\left\{ \begin{array}{ll} (1-\alpha )\,\alpha ^k,&{} 0\le k\le A-1;\\ \alpha ^{A},&{} k=A. \end{array}\right. } \end{aligned}$$
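Sampling \(Y_\alpha (A)\) is immediate: run Bernoulli(\(\alpha \)) trials and stop at the first failure or at the ceiling A. A sketch (the function name is ours):

```python
import random

def truncated_geometric(alpha, A, rng=random):
    # P(Y = k) = (1 - alpha) * alpha**k for 0 <= k <= A - 1, P(Y = A) = alpha**A
    k = 0
    while k < A and rng.random() < alpha:
        k += 1
    return k
```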

Definition 4.2

(The L and R maps, first level) For \(\alpha \in [0,1]\), let \(L_\alpha ^{(1)}\) be the Markov map whose action on the pair \(\varkappa \prec \lambda \) does not change \(\lambda \), and replaces \(\varkappa _1\) as follows:

$$\begin{aligned} L_\alpha ^{(1)}:\varkappa _1\mapsto \lambda _2+Y_\alpha (\varkappa _1-\lambda _2). \end{aligned}$$

The action of \(R_\alpha ^{(1)}\) is simply the reflection of \(L_\alpha ^{(1)}\):

$$\begin{aligned} R_{\alpha }^{(1)}:\varkappa _1\mapsto \lambda _1-Y_\alpha (\lambda _1-\varkappa _1). \end{aligned}$$

The notation for the L and R operators is suggested by the directions in which they move \(\varkappa _1\). See Fig. 8 for an illustration.

Remark 4.3

If \(\alpha =1\), both \(L_1^{(1)}\) and \(R_1^{(1)}\) are identity operators. If \(\alpha =0\), then \(Y_0(A)=0\) almost surely, and so the actions of both \(L_0^{(1)}\) and \(R_0^{(1)}\) lead to the maximal possible displacement of \(\varkappa _1\), respectively, to the left or to the right.

Fig. 8

Probabilities of all possible moves in the maps \(L_\alpha ^{(1)}\) (top) and \(R_\alpha ^{(1)}\) (bottom). The parts of the partitions are represented by bold vertical bars

The next lemma plays a key role and will later be generalized to the other rows of the interlacing array. Denote by \(s_i\), \(i=1,2,\ldots \), the i-th elementary permutation of the spectral parameters,

$$\begin{aligned} s_i\vec c:=(\ldots ,c_{i-1},c_{i+1},c_i,c_{i+2},\ldots ). \end{aligned}$$
(4.1)

Lemma 4.4

If \(c_1\ge c_2\) and \(c_1\ne 0\), then the Markov operator \(L_{c_2/c_1}^{(1)}\) maps \(\mathfrak {G}_{\vec c}\) to \(\mathfrak {G}_{s_1\vec c}\). If \(c_1\le c_2\) and \(c_2\ne 0\), then the Markov operator \(R_{c_1/c_2}^{(1)}\) maps \(\mathfrak {G}_{\vec c}\) to \(\mathfrak {G}_{s_1\vec c}\).

Proof

Let us consider only \(L_\alpha ^{(1)}\); the case of \(R_\alpha ^{(1)}\) is analogous. By Remark 4.3, when \(c_1=c_2\), \(L_1^{(1)}\) is the identity. But in this case \(s_1 \vec c=\vec c\), so there is nothing to prove.

We can assume that \(c_1>c_2\). Denote \(\alpha =c_2/c_1\). Using the \(\vec c\)-Gibbs property, we see that given \(\lambda =(\lambda _1,\lambda _2)\), the conditional probability weight of \(\varkappa =(\varkappa _1)\) is proportional to \(s_\varkappa (c_1)s_{\lambda /\varkappa }(c_2)\), which by (2.4) leads to

$$\begin{aligned} \mathrm {Prob}(\varkappa _1\mid \lambda ) = \frac{\alpha ^{-\varkappa _1}}{\sum _{k=\lambda _2}^{\lambda _1}\alpha ^{-k}}. \end{aligned}$$

The action of the operator \(L_{\alpha }^{(1)}\) on this distribution is readily computed:

$$\begin{aligned}&\sum _{{\hat{\varkappa }}_1=\lambda _2}^{\lambda _1} \mathrm {Prob}({\hat{\varkappa }}_1\mid \lambda )\cdot L_\alpha ^{(1)}({\hat{\varkappa }}_1\rightarrow \varkappa _1) \\&\quad = \frac{1}{\sum _{k=\lambda _2}^{\lambda _1}\alpha ^{-k}} \left( \alpha ^{-\varkappa _1}\cdot \alpha ^{\varkappa _1 - \lambda _2}+ \sum _{{\hat{\varkappa }}_1=\varkappa _1+1}^{\lambda _1}\alpha ^{-{\hat{\varkappa }}_1}\cdot (1-\alpha )\alpha ^{\varkappa _1- \lambda _2} \right) \\&\quad = \frac{1}{\sum _{k=\lambda _2}^{\lambda _1}\alpha ^{-k}} \left( \alpha ^{-\lambda _2} + \frac{\alpha ^{-\lambda _1}-\alpha ^{-\varkappa _1}}{1-\alpha } \cdot (1-\alpha )\alpha ^{\varkappa _1-\lambda _2} \right) \\&\quad = \frac{\alpha ^{\varkappa _1-\lambda _1-\lambda _2}}{\sum _{k=\lambda _2}^{\lambda _1}\alpha ^{-k}} = \frac{\alpha ^{\varkappa _1}}{\sum _{k=\lambda _2}^{\lambda _1}\alpha ^{k}}. \end{aligned}$$

The final expression is the conditional probability weight of \(\varkappa _1\) given \(\lambda \) under the \(s_1\vec c\)-Gibbs property. This completes the proof. \(\square \)
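Lemma 4.4 can also be confirmed by exact rational arithmetic on a small segment. The following hedged sketch (all names ours) pushes the \(\vec c\)-Gibbs conditional law through the transition probabilities of \(L_\alpha ^{(1)}\) and checks that the reversed law comes out.

```python
from fractions import Fraction

def check_lemma_4_4(alpha, lam1, lam2):
    supp = range(lam2, lam1 + 1)
    Z = sum(alpha ** (-k) for k in supp)
    mu = {k: alpha ** (-k) / Z for k in supp}        # weight prop. to alpha^{-k}
    out = {k: Fraction(0) for k in supp}
    for k in supp:                                   # push mu through L^{(1)}_alpha
        A = k - lam2
        for y in range(A + 1):
            p = alpha ** A if y == A else (1 - alpha) * alpha ** y
            out[lam2 + y] += mu[k] * p
    Zs = sum(alpha ** k for k in supp)
    assert out == {k: alpha ** k / Zs for k in supp}  # weight prop. to alpha^{k}

check_lemma_4_4(Fraction(2, 5), lam1=6, lam2=2)       # passes: the law is reversed
```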

Remark 4.5

1. In words, Lemma 4.4 states that the action of the L or R operators reverses the geometric distribution on the segment \([\lambda _2,\lambda _1]\).

2. Note also that we apply \(L_{c_2/c_1}^{(1)}\) only if \(c_2\le c_1\) (and the opposite ordering restriction for \(R_{c_1/c_2}^{(1)}\)). If \(c_2>c_1\) in \(L_{c_2/c_1}^{(1)}\), then the algebraic computations in the proof of Lemma 4.4 are still valid. But the operator itself loses probabilistic meaning as some of its matrix elements become negative.

4.2 Remark: Relation to bijectivization

The Markov maps of Definition 4.2 which interchange the spectral parameters were suggested by the idea of bijectivization of the Yang–Baxter equation first employed in [30] (see also [1, 29]).

First, note that one can deduce the symmetry of the skew Schur polynomials (Proposition 2.2) from the Yang–Baxter equation. This argument is present, for example, in [12, Theorem 3.5] in a \(U_q(\widehat{{\mathfrak {s}}{\mathfrak {l}}_2})\) setting with additional parameters qs (the Schur case corresponds to \(q=s=0\)).

Next, bijectivization refines the Yang–Baxter equation into a pair of forward and backward local Markov moves which randomly update the configuration. Here the locality means the following. Encode \(\varkappa _1\) using the occupation variables \(\{\eta _x\}_{x\in {\mathbb {Z}}}\), where \(\eta _{\varkappa _1}=1\) and all other \(\eta _x\equiv 0\). The application of a single local Markov move (forward or backward) would change one of the occupation variables.

Then, considering a sequence of forward or backward moves leads, respectively, to the L and R operators. This can be seen by setting \(t=s=0\) in [30, Figure 4], taking a sequence of these moves, and passing from the occupation variables (equivalently, vertical arrows in the notation of that paper) to the elements of the interlacing array. For brevity, we do not explain the details of the derivation of the L and R Markov operators from the bijectivization, as an independent proof of the key Lemma 4.4 is rather straightforward.

4.3 General case

Let us now describe the Markov maps \(L_{\alpha }^{(j)}\) and \(R_{\alpha }^{(j)}\) for general j. This is an extension of Definition 4.2. For the next definition we use the convention \(\lambda ^{(j)}_0 = \infty \) and \(\lambda ^{(j)}_{j+1} = 0\) for all \(j \in {\mathbb {Z}}_{\ge 0}\) (recall that by Remark 2.1 in the j-th row of the interlacing array there cannot be more than j nonzero entries).

Definition 4.6

(The L and R maps, general case) Fix \(\alpha \in [0,1]\) and \(j \ge 2\). Let \(L_{\alpha }^{(j)}\) be the Markov map whose action on interlacing arrays of infinite depth \(\{\lambda ^{(i)}\}_{i\ge 1}\) does not change \(\lambda ^{(i)}\) for \(i \ne j\), and replaces \(\lambda ^{(j)}\) as follows:

$$\begin{aligned} L_{\alpha }^{(j)} : \lambda ^{(j)}_{k} \mapsto \max \{\lambda ^{(j-1)}_k , \lambda ^{(j+1)}_{k+1} \} + Y_\alpha ^{(k)}, \qquad k =1, \dots , j, \end{aligned}$$

where \(\{ Y_\alpha ^{(k)} \}_{k=1}^j\) is a collection of independent truncated geometric random variables with \(Y_\alpha ^{(k)}\) distributed as \(Y_{\alpha }\bigl (\lambda ^{(j)}_{k} - \max \{\lambda ^{(j-1)}_k , \lambda ^{(j+1)}_{k+1} \} \bigr )\).

The action of \(R_{\alpha }^{(j)}\) is simply the reflection of \(L_{\alpha }^{(j)}\):

$$\begin{aligned} R_{\alpha }^{(j)} : \lambda ^{(j)}_{k} \mapsto \min \{ \lambda ^{(j-1)}_{k-1}, \lambda ^{(j+1)}_{k}\} - Y_\alpha ^{(k)}, \qquad k =1, \dots , j, \end{aligned}$$

where \(\{ Y_\alpha ^{(k)} \}_{k=1}^j\) is a collection of independent truncated geometric random variables with \(Y_\alpha ^{(k)}\) distributed as \(Y_{\alpha }\bigl (\min \{ \lambda ^{(j-1)}_{k-1}, \lambda ^{(j+1)}_{k} \} - \lambda ^{(j)}_{k} \bigr )\).

In words, under both \(L_\alpha ^{(j)}\) and \(R_\alpha ^{(j)}\) each \(\lambda _k^{(j)}\), \(k=1,\ldots ,j \), is randomly independently moved to the left (resp., to the right) within the segment

$$\begin{aligned} \Bigl [\max \{\lambda ^{(j-1)}_k,\lambda ^{(j+1)}_{k+1}\}, \min \{\lambda _{k-1}^{(j-1)},\lambda ^{(j+1)}_k \}\Bigr ] \end{aligned}$$
(4.2)

to which \(\lambda ^{(j)}_k\) is constrained by interlacing. The moves of each \(\lambda _{k}^{(j)}\) are exactly the same as on the first level and are governed by the truncated geometric random variables.
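In code, a single application of \(L_{\alpha }^{(j)}\) to the j-th row reads as follows (a sketch reusing the hypothetical truncated_geometric sampler from the sketch in Sect. 4.1; rows are stored with trailing zeros, so that \(\lambda ^{(j)}\) has exactly j entries).

```python
import random

def apply_L(alpha, lam_below, lam, lam_above, rng=random):
    """lam_below = lambda^{(j-1)} (j-1 entries), lam = lambda^{(j)} (j entries),
    lam_above = lambda^{(j+1)} (j+1 entries); lambda^{(j-1)}_j is taken to be 0."""
    j = len(lam)
    new = []
    for k in range(j):   # 0-indexed coordinate k corresponds to lambda^{(j)}_{k+1}
        lo = max(lam_below[k] if k < j - 1 else 0, lam_above[k + 1])
        new.append(lo + truncated_geometric(alpha, lam[k] - lo, rng))
    return new
```

The map \(R_\alpha ^{(j)}\) is the mirror image: replace the lower bound by the upper bound \(\min \{\lambda ^{(j-1)}_{k-1},\lambda ^{(j+1)}_k\}\) and subtract the sampled value.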

The next statement is a generalization of Lemma 4.4. Recall that \(s_i\) denotes the i-th elementary permutation of the spectral parameters \(\vec {c}\).

Proposition 4.7

Fix \(j\ge 1\). If \(c_j\ge c_{j+1}\) and \(c_j \ne 0\), then the Markov operator \(L_{c_{j+1}/c_j}^{(j)}\) maps \(\mathfrak {G}_{\vec c}\) to \(\mathfrak {G}_{s_j\vec c}\). If \(c_j\le c_{j+1}\) and \(c_{j+1}\ne 0\), then the Markov operator \(R_{c_j/c_{j+1}}^{(j)}\) maps \(\mathfrak {G}_{\vec c}\) to \(\mathfrak {G}_{s_j\vec c}\).

Proof

Let us consider \(L^{(j)}_{\alpha }\) only; the case of \(R_{\alpha }^{(j)}\) is analogous. Denote \(\alpha = c_{j+1} / c_j\). We may assume that \(\alpha \ne 1\) as otherwise there is nothing to prove. Let us also take \(j\ge 2\) as the case \(j=1\) is Lemma 4.4.

Using the \(\vec {c}\)-Gibbs property, we see that given all \(\lambda ^{(i)}\) with \(i\ne j\), the conditional probability weight of \(\lambda ^{(j)}\) is proportional to \(s_{\lambda ^{(j)} / \lambda ^{(j-1)}}(c_j) \,s_{\lambda ^{(j+1)}/ \lambda ^{(j)}}(c_{j+1})\) (cf. Lemma 2.9). By (2.4), this implies

$$\begin{aligned} \mathrm {Prob} \left( \lambda ^{(j)} \mid \lambda ^{(i)}, i\ne j \right) = \prod _{k=1}^j \mathrm {P} \left( \lambda ^{(j)}_k \mid \lambda ^{(j-1)}_{k-1}, \lambda ^{(j-1)}_{k}, \lambda ^{(j+1)}_{k}, \lambda ^{(j+1)}_{k+1} \right) , \end{aligned}$$
(4.3)

where

$$\begin{aligned} \mathrm {P} ( m \mid a,b,c,d ) = \alpha ^{-m}\biggl (\, \sum _{ r = \max \{b,d \} }^{ \min \{a,c\} } \alpha ^{-r} \biggr )^{-1}. \end{aligned}$$

For \(\lambda ^{(j)}= (\lambda ^{(j)}_1, \dots , \lambda ^{(j)}_j)\), the operator \(L^{(j)}_{\alpha }\) acts on each \(\lambda ^{(j)}_k\) independently. Thus we may write \(L^{(j)}_{\alpha }\) as a product of local Markov maps which act on each segment (4.2) in the same manner as in Sect. 4.1. Similarly to Lemma 4.4 we conclude that the action of \(L_\alpha ^{(j)}\) reverses each local geometric distribution \(\mathrm {P}(m\mid a,b,c,d)\). Therefore, \(L_\alpha ^{(j)}\) turns (4.3) into the conditional probability weight of \(\lambda ^{(j)}\) under a \(s_j\vec c\)-Gibbs measure. This completes the proof. \(\square \)

5 Action on q-Gibbs measures

This section shows that suitably composed L maps preserve the class of q-Gibbs measures on interlacing arrays, and describes how a q-Gibbs measure changes under this action.

5.1 q-Gibbs property

Fix \(q \in (0,1]\). A \(\vec {c}\)-Gibbs measure on the set \({\mathcal {S}}\) of infinite interlacing arrays is called q-Gibbs if \(c_i =q^{i-1}\) for all \(i \in {\mathbb {Z}}_{\ge 1}\). We denote the set of q-Gibbs measures by \(\mathfrak {G}_q\).

Remark 5.1

One can define the volume of an interlacing array of finite depth N by

$$\begin{aligned} \mathrm {vol}(\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N)}): = \sum _{i=1}^{N-1}|\lambda ^{(i)}|. \end{aligned}$$
(5.1)

Then the q-Gibbs property is equivalent to saying that conditioned on \(\lambda ^{(N)}\), the probability weight of the interlacing array \(\lambda ^{(1)}\prec \ldots \prec \lambda ^{(N)}\) is proportional to \(q^{-\mathrm {vol}(\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N)} )}\) (e.g., see [63]). Note that sometimes (in particular, in [46]) the term “q-Gibbs measures” refers to the elements of \(\mathfrak {G}_{q^{-1}}\) in our notation.

When \(q=1\), the q-Gibbs measures correspond to the uniform conditioning property (cf. Remark 2.10). Throughout this section we work under the assumption \(0<q<1\).

5.2 Iterated L map

When \(c_i=q^{i-1}\), we have \(c_{i+1}<c_i\) for all i. By Proposition 4.7, this means that the action of \(L^{(i)}_{c_{i+1}/c_i}\) permutes the spectral parameters \(q^{i}\) and \(q^{i-1}\). Iterating such \(L^{(i)}\) from \(i=1\) to infinity and keeping track of the permutations of the spectral parameters, we arrive at the following definition:

Definition 5.2

(Iterated L map) Let \({\mathsf {M}}\) be a probability measure on \({\mathcal {S}}\) and set \({\mathsf {M}}^{(0)} := {\mathsf {M}}\). Denote, inductively, \({\mathsf {M}}^{(j)}:={\mathsf {M}}^{(j-1)}L^{(j)}_{q^j}\) (see Fig. 9 for an illustration). Let \({\mathbb {L}}^{(q)}\) be the Markov map which acts on probability measures on \({\mathcal {S}}\) by

$$\begin{aligned} {\mathbb {L}}^{(q)}: \{ {\mathsf {M}}(\lambda ^{(1)} , \dots , \lambda ^{(N)}) \}_{N\ge 1} \mapsto \{ {\mathsf {M}}^{(N+1)}(\lambda ^{(1)} , \dots , \lambda ^{(N)}) \}_{N\ge 1}. \end{aligned}$$

Let us explain why \({\mathbb {L}}^{(q)}\) is well-defined. Recall that a probability measure on \({\mathcal {S}}\) is uniquely determined by a family of compatible joint distributions of \((\lambda ^{(1)},\ldots ,\lambda ^{(N)})\) (cf. Sect. 2.6). Next, for all \(K > N\) we have \({\mathsf {M}}^{(K)}(\lambda ^{(1)} , \dots , \lambda ^{(N)}) = {\mathsf {M}}^{(N+1)}(\lambda ^{(1)}, \dots , \lambda ^{(N)})\). This guarantees that the collection of measures \(\{ {\mathsf {M}}^{(N+1)}(\lambda ^{(1)} , \dots , \lambda ^{(N)}) \}_{N\ge 1}\) is indeed compatible, and thus defines a measure on \({\mathcal {S}}\) which we denote by \({\mathsf {M}}\,{\mathbb {L}}^{(q)}\).

Fig. 9

Construction of the map \({\mathbb {L}}^{(q)}\). The spectral parameters \(q^j\) correspond to the action on q-Gibbs measures considered in Sect. 5.4, and the lines indicate the swapping of the spectral parameters after each j-th map \(L^{(j)}_{q^j}\)

5.3 q-Gibbs harmonic families

Let \({\mathsf {M}}\) be a q-Gibbs measure on \({\mathcal {S}}\). By the q-Gibbs property, for each \(N\ge 1\) the probability weight of \(\lambda ^{(1)},\ldots ,\lambda ^{(N)}\) is represented as a product of the marginal probability weight of \(\lambda ^{(N)}\) and a q-Gibbs factor corresponding to the conditional distribution of \(\lambda ^{(1)}\prec \ldots \prec \lambda ^{(N-1)} \) given \(\lambda ^{(N)}\). This allows us to write

$$\begin{aligned} {\mathsf {M}}(\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N)})= s_{\lambda ^{(1)} } (1) s_{\lambda ^{(2)}/ \lambda ^{(1)}}(q) \ldots s_{\lambda ^{(N)} / \lambda ^{(N-1)}}(q^{N-1}) \cdot \varphi _N(\lambda ^{(N)}), \end{aligned}$$

where \(\varphi _N\) is a function on the N-th level of the array defined as

$$\begin{aligned} \varphi _N(\nu )=\frac{{\mathsf {M}}(\lambda ^{(N)}=\nu )}{s_{\nu }(1,q,\ldots ,q^{N-1} )}. \end{aligned}$$
(5.2)

Because the functions \(\varphi _N\) for different N come from the same q-Gibbs measure \({\mathsf {M}}\), they must be compatible. This compatibility relation reads

$$\begin{aligned} \sum _{\lambda :\mu \prec \lambda } \varphi _{N}(\lambda ) \, s_{\lambda / \mu }(q^{N-1}) = \varphi _{N-1}(\mu ) \end{aligned}$$
(5.3)

for all \(N\ge 1\) and all \(\mu =(\mu _1,\ldots ,\mu _{N-1})\) on the \((N-1)\)-st level of the array (at the zeroth level we set \(\varphi _0(\varnothing )=1\), by agreement). We call a family of functions \(\{\varphi _N\}\) satisfying (5.3) and \(\varphi _0(\varnothing )=1\) a q-Gibbs harmonic family. The term “harmonic” comes from the Vershik–Kerov theory of the boundary of branching graphs (e.g., see [55]). Clearly, a q-Gibbs measure on \({\mathcal {S}}\) is uniquely determined by its associated q-Gibbs harmonic family \(\{\varphi _N\}\).

5.4 Action of the iterated L map on q-Gibbs measures

If \({\mathsf {M}}\in \mathfrak {G}_q\), then the action of \({\mathbb {L}}^{(q)}\) (that is, the sequence of the Markov maps \(L^{(j)}_{q^j}\)) on \({\mathsf {M}}\) swaps the spectral parameters as in Fig. 9, moving \(c_1=1\) all the way up to infinity where it “disappears”. The resulting spectral parameters \((q,q^2,q^3,\ldots )\) are proportional to the original ones. This suggests that \({\mathbb {L}}^{(q)}\) should preserve the class of q-Gibbs measures. The next result shows that this is indeed the case, and also describes the action of \({\mathbb {L}}^{(q)}\) on \(\mathfrak {G}_q\) in the language of harmonic families.

Theorem 5.3

The Markov map \({\mathbb {L}}^{(q)}\) preserves \(\mathfrak {G}_q\), the set of q-Gibbs measures on \({\mathcal {S}}\). More precisely, \({\mathbb {L}}^{(q)}\) maps each q-Gibbs harmonic family \(\{ \varphi _N\}_{N \in {\mathbb {Z}}_{\ge 1}}\) to a new q-Gibbs harmonic family \(\{\hat{\varphi }_N \}_{N \in {\mathbb {Z}}_{\ge 1}}\) as follows:

$$\begin{aligned} \hat{\varphi }_N (\mu ) = q^{|\mu |} \sum _{\lambda } \varphi _{N+1}(\lambda ) s_{\lambda / \mu }(1) =q^{|\mu |} \sum _{\lambda :\mu \prec \lambda } \varphi _{N+1}(\lambda ) . \end{aligned}$$
(5.4)

Proof

The second equality in (5.4) immediately follows from (2.4). Let us first explain why the sum in (5.4) is finite. We have by the definition (5.2) of \(\varphi _N\):

$$\begin{aligned} 1= \sum _{\lambda } \varphi _{N+1}(\lambda )\,s_\lambda (1,q,\ldots ,q^{N}) \ge \sum _{\lambda } \varphi _{N+1}(\lambda )\, 1^{\lambda _1}q^{\lambda _2}\ldots (q^N)^{\lambda _{N+1}}, \end{aligned}$$
(5.5)

where we bounded the Schur polynomial from below by taking one of its monomials (since all the monomials are nonnegative). The condition \(\mu \prec \lambda \) implies that in (5.4) only the coordinate \(\lambda _1\) ranges over an infinite set, and the sum thus converges thanks to (5.5).

Now let \(\{\lambda ^{(i)}\}\) be a random interlacing array distributed according to the q-Gibbs measure coming from \(\{\varphi _N\}\). Let the random array \(\{\theta ^{(i)}\}\) be the image of \(\{\lambda ^{(i)}\}\) under \({\mathbb {L}}^{(q)}\). Fix \(N\ge 1\). The distribution of \(\theta ^{(N)}\) (described by the function \({\hat{\varphi }}_N\) which we aim to compute) is the result of applying the sequence of Markov maps \(L_q^{(1)},\ldots ,L_{q^{N}}^{(N)}\) (in this order). Because the last of these operators depends on \(\lambda ^{(N+1)}\), we see that the distribution of \(\theta ^{(N)}\) is not determined only by the joint distribution of \(\lambda ^{(1)},\ldots ,\lambda ^{(N)}\). In other words, to compute \({\hat{\varphi }}_N\) we need to first extend \(\varphi _N\) to \(\varphi _{N+1}\), and to utilize the q-Gibbs property.

Let us apply this idea. Fix \(\lambda ^{(N+1)}=\lambda \). This condition completely determines the conditional joint distribution of \(\lambda ^{(1)},\ldots ,\lambda ^{(N)}\) via the q-Gibbs property. By iterating Proposition 4.7, we see that after applying the Markov maps \(L_q^{(1)},\ldots ,L_{q^{N-1}}^{(N-1)} \), the joint distribution of \(\lambda ^{(N)}\) and \(\theta ^{(N-1)}\), conditioned on \(\lambda ^{(N+1)}=\lambda \), comes from the \((q,q^2,\ldots ,q^{N-1},1,q^{N})\)-Gibbs property:

$$\begin{aligned}&\mathrm {Prob}\bigl ( \lambda ^{(N)}=\varkappa ,\;\theta ^{(N-1)}=\nu \mid \lambda ^{(N+1)}=\lambda \bigr ) \\&\quad = \frac{s_\nu (q,q^2,\ldots ,q^{N-1} ) s_{\varkappa /\nu }(1) s_{\lambda /\varkappa }(q^{N})}{s_\lambda (1,q,\ldots ,q^{N} )}. \end{aligned}$$

After the application of \(L^{(N)}_{q^N}\), the partition \(\lambda ^{(N)}\) turns into \(\theta ^{(N)}\), and we similarly have

$$\begin{aligned}&\mathrm {Prob}\bigl ( \theta ^{(N)}=\mu ,\;\theta ^{(N-1)}=\nu \mid \lambda ^{(N+1)}=\lambda \bigr ) \\&\quad = \frac{s_\nu (q,q^2,\ldots ,q^{N-1})s_{\mu /\nu }(q^N)s_{\lambda /\mu }(1)}{s_\lambda (1,q,\ldots ,q^N )}. \end{aligned}$$

Let us rewrite the last expression to compare it to the q-Gibbs conditional distribution. In the numerator, due to the homogeneity of Schur and skew Schur polynomials, we have:

$$\begin{aligned} \begin{aligned}&s_{\lambda /\mu }(1)s_{\mu /\nu }(q^N)s_\nu (q,\ldots ,q^{N-1} ) \\&\quad = q^{|\nu |} s_\nu (1,q,\ldots ,q^{N-2} ) q^{|\mu |-|\nu |}s_{\mu /\nu }(q^{N-1}) s_{\lambda /\mu }(1) \\&\quad = q^{|\mu |} s_\nu (1,q,\ldots ,q^{N-2} ) s_{\mu /\nu }(q^{N-1}) s_{\lambda /\mu }(1). \end{aligned} \end{aligned}$$
(5.6)

To extract from this the marginal distribution of \(\theta ^{(N)}\) (that is, to get to \({\hat{\varphi }}_{N}\)), we need to multiply (5.6) by \(\mathrm {Prob}(\lambda ^{(N+1)}=\lambda )/s_\lambda (1,\ldots ,q^N)\) (which is exactly \(\varphi _{N+1}(\lambda )\)) and sum the resulting expression over both \(\lambda \) and \(\nu \). We have

$$\begin{aligned} \mathrm {Prob}(\theta ^{(N)}=\mu )= \sum _{\nu ,\lambda :\nu \prec \mu \prec \lambda } q^{|\mu |} s_\nu (1,\ldots ,q^{N-2} ) s_{\mu /\nu }(q^{N-1}) s_{\lambda /\mu }(1) \, \varphi _{N+1}(\lambda ). \end{aligned}$$

The sum over \(\nu \) is simplified using the branching rule (2.2), and so

$$\begin{aligned} \frac{\mathrm {Prob}(\theta ^{(N)}=\mu )}{s_\mu (1,q,\ldots ,q^{N-1} )}= q^{|\mu |}\sum _{\lambda :\mu \prec \lambda } \varphi _{N+1}(\lambda )\,s_{\lambda /\mu }(1). \end{aligned}$$

We see that at the level of marginal distributions, the family \(\left\{ \varphi _N \right\} \) turns into \(\left\{ {\hat{\varphi }}_N \right\} \), where \({\hat{\varphi }}_N\) is defined by (5.4).

It remains to show that the new family \(\{{\hat{\varphi }}_N\}\) satisfies the q-Gibbs harmonicity. That is, we want to show for all N that

$$\begin{aligned} \sum _\mu {\hat{\varphi }}_N(\mu )s_{\mu /\nu }(q^{N-1})= {\hat{\varphi }}_{N-1}(\nu ) = q^{|\nu |} \sum _{\varkappa }\varphi _N(\varkappa )s_{\varkappa /\nu }(1) \end{aligned}$$

(the second equality is simply the definition of \({\hat{\varphi }}_{N-1}\)). We have

$$\begin{aligned} \sum _\mu {\hat{\varphi }}_N(\mu )s_{\mu /\nu }(q^{N-1})&= \sum _{\mu ,\lambda } \varphi _{N+1}(\lambda )s_{\lambda /\mu }(1)q^{|\mu |} s_{\mu /\nu }(q^{N-1}) \\&= q^{|\nu |} \sum _{\mu ,\lambda } \varphi _{N+1}(\lambda )s_{\lambda /\mu }(1) s_{\mu /\nu }(q^{N}) \\&= q^{|\nu |} \sum _{\lambda } \varphi _{N+1}(\lambda )s_{\lambda /\nu }(1,q^N) \\&= q^{|\nu |} \sum _{\lambda ,\varkappa } \varphi _{N+1}(\lambda )s_{\lambda /\varkappa }(q^N)s_{\varkappa /\nu }(1) \\&= q^{|\nu |} \sum _{\varkappa } \varphi _{N}(\varkappa )s_{\varkappa /\nu }(1), \end{aligned}$$

as desired. In the last step we used the harmonicity of the original family \(\left\{ \varphi _N \right\} \). This completes the proof. \(\square \)

Remark 5.4

Note that Theorem 5.3 fundamentally relies on the fact that the q-Gibbs measure lives on an infinite array. Indeed, for an array of finite depth it is not possible to move the spectral parameter 1 all the way up to infinity. In the proof of Theorem 5.3 we use the fact that the array has infinite depth when we extend \(\varphi _N\) to \(\varphi _{N+1}\). The case of arrays of finite depth is discussed in Sect. 8.

5.5 Application to Schur processes and TASEP with geometric speeds

Schur processes \({\mathbb {P}}[\vec c\mid \rho _t]\) with \(c_i=q^{i-1}\) are particular cases of q-Gibbs measures with

$$\begin{aligned} \varphi _N(\lambda )= e^{-t(1+q+\ldots +q^{N-1} )}\, \frac{s_\lambda (\rho _t)s_\lambda (1,q,\ldots ,q^{N-1} )}{s_\lambda (1,q,\ldots ,q^{N-1} )} = e^{-t(1+q+\ldots +q^{N-1} )} s_\lambda (\rho _t), \end{aligned}$$

where we took into account the normalization of the Schur measures. The Markov map \({\mathbb {L}}^{(q)}\) acts on these Schur processes as follows:

$$\begin{aligned} {\hat{\varphi }}_N(\mu )&= q^{|\mu |} e^{-t(1+q+\ldots +q^{N} )} \sum _{\lambda }s_\lambda (\rho _t)\,s_{\lambda /\mu }(1) \\ {}&= e^{t} e^{-t(1+q+\ldots +q^{N} )} q^{|\mu |} s_\mu (\rho _t) \\ {}&= e^{-qt(1+q+\ldots +q^{N-1} )} s_\mu (\rho _{q\cdot t}), \end{aligned}$$

where we used the skew Cauchy identity ((2.6) with \(\varkappa =\varnothing \)) and the homogeneity of the Schur polynomials [both properties clearly survive the Plancherel limit (2.8)]. Therefore, we have

$$\begin{aligned} {\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho _t] \,{\mathbb {L}}^{(q)} = {\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho _{qt}]. \end{aligned}$$
(5.7)

Recall that by Theorem 3.1, the joint distribution of the quantities \(\{\lambda ^{(N)}_N-N \}_{N\ge 1}\) under the Schur process \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho _t]\) is the same as the joint distribution of the particle locations \(\{x_N(t)\}_{N\ge 1}\) at time t of the TASEP with particle speeds \(c_i=q^{i-1}\) and the step initial configuration. Denote this joint distribution of particles \(\{x_N(t)\}\) by \(\upmu _t^{(q)}\).

Our next observation is that the action of the Markov map \({\mathbb {L}}^{(q)}\) on the random interlacing array \(\{\lambda ^{(N)}\}_{N\ge 1}\) can be projected to the leftmost components \(\{\lambda ^{(N)}_N \}_{N\ge 1}\), and the result is still a Markov map. In more detail, let \(\{\theta ^{(N)}\}_{N\ge 1}\) be the random interlacing array which is the image of \(\{\lambda ^{(N)}\}_{N\ge 1}\) under \({\mathbb {L}}^{(q)}\). From the very definition of \({\mathbb {L}}^{(q)}\), we see that conditioned on \(\{\lambda ^{(N)}\}_{N\ge 1}\), the distribution of \(\{\theta ^{(N)}_N \}_{N\ge 1}\) depends only on the leftmost components \(\{\lambda ^{(N)}_N \}_{N\ge 1}\), and not on the rest of the array \(\{\lambda ^{(N)}\}_{N\ge 1}\). Let us describe this projection of \({\mathbb {L}}^{(q)}\) explicitly in terms of locations of the TASEP particles \(x_N\) (via the identification \(x_N=\lambda ^{(N)}_N-N\)). Recall from Sect. 3.1 that \({\mathcal {C}}\) stands for the space of left-packed, right-finite particle configurations on \({\mathbb {Z}}\).

Definition 5.5

Let \(0<q<1\). We aim to define a Markov map \(\mathbf{L} ^{(q)}\) on \({\mathcal {C}}\). Fix a configuration \(x_1>x_2>\cdots \) in \({\mathcal {C}}\). By definition, its random image \({\hat{x}}_1>{\hat{x}}_2>\cdots \) under the action of \(\mathbf{L} ^{(q)}\) is

$$\begin{aligned} {\hat{x}}_i=x_{i+1}+1+Y_{q^{\scriptstyle i}}(x_i-x_{i+1}-1),\qquad i=1,2,\ldots , \end{aligned}$$

where the \(Y_{q^{\scriptstyle i}}\)’s are independent truncated geometric random variables (see Definition 4.1).

Remark 5.6

A homogeneous version of \(\mathbf{L} ^{(q)}\) appeared in [28]; it is solvable through the coordinate Bethe ansatz [73].

Theorem 5.3, identity (5.7), and the fact that \(\mathbf{L} ^{(q)}\) is a projection of \({\mathbb {L}}^{(q)}\) immediately imply the following result:

Theorem 5.7

For any \(t\ge 0\), we have

$$\begin{aligned} \upmu _t^{(q)}\,\mathbf{L} ^{(q)}=\upmu _{qt}^{(q)}, \end{aligned}$$

where \(\upmu _t^{(q)}\) is the distribution of the TASEP with geometric rates (with ratio q) and step initial configuration, and \(\mathbf{L} ^{(q)}\) is the Markov map from Definition 5.5.
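The identity of Theorem 5.7 is easy to probe numerically. Below is a minimal Monte Carlo sketch in Python; the function names and the truncation to a window of n particles are ours. The truncation is harmless because a TASEP particle is never influenced by the particles behind it, and the truncated geometric weights used here are the ones recorded below in (6.3).

```python
import math
import random

def trunc_geom(p, A):
    # Y = min(Geometric, A): P(Y=m) = (1-p) p^m for m < A, P(Y=A) = p^A,
    # matching the weights recorded in (6.3).
    v = 1.0 - random.random()                      # uniform on (0, 1]
    return min(int(math.log(v) / math.log(p)), A)

def tasep_geometric(q, t, n):
    """TASEP with speeds c_i = q^(i-1) and step initial condition x_i = -i,
    run to time t; truncated to the first n particles (exact, since
    particles are never affected by those behind them)."""
    x = [-(i + 1) for i in range(n)]
    clock = 0.0
    while True:
        # particle i+1 jumps at rate q^i when the site to its right is free
        rates = [q ** i if i == 0 or x[i - 1] > x[i] + 1 else 0.0
                 for i in range(n)]
        total = sum(rates)
        clock += random.expovariate(total)
        if clock > t:
            return x
        u, i = random.random() * total, 0
        while u > rates[i]:
            u -= rates[i]
            i += 1
        x[i] += 1

def map_Lq(x, q):
    # The Markov map of Definition 5.5; the last windowed particle is kept.
    return [x[i + 1] + 1 + trunc_geom(q ** (i + 1), x[i] - x[i + 1] - 1)
            for i in range(len(x) - 1)] + [x[-1]]

q, t, n, trials = 0.5, 3.0, 20, 10000
mapped = [map_Lq(tasep_geometric(q, t, n), q)[0] for _ in range(trials)]
direct = [tasep_geometric(q, q * t, n)[0] for _ in range(trials)]
print(sum(mapped) / trials, sum(direct) / trials)  # both close to qt - 1
```

By Theorem 5.7, the two printed averages estimate the same quantity \({\mathbb {E}}\,x_1(qt)=qt-1\) (the first particle is never blocked, so it performs a rate 1 Poisson walk).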

6 Limit \(q\rightarrow 1\) and proof of the main result

Here we take the limit as \(q\rightarrow 1\) of the results of the previous section, and arrive at a continuous-time Markov chain mapping the TASEP distributions backwards in time. This proves our main result, Theorem 1.

Iterate Theorem 5.7 to observe that for any \(T\in {\mathbb {Z}}_{\ge 1}\):

$$\begin{aligned} \upmu _t^{(q)} (\mathbf{L} ^{(q)})^T = \upmu ^{(q)}_{q^T t}, \end{aligned}$$
(6.1)

where \((\mathbf{L} ^{(q)})^T\) simply denotes the T-th power. Next, introduce the scaling:

$$\begin{aligned} q=e^{-\varepsilon },\qquad T=\lfloor \tau /\varepsilon \rfloor , \end{aligned}$$
(6.2)

where \(\varepsilon >0\) will go to zero, and \(\tau \in {\mathbb {R}}_{\ge 0}\) is the scaled continuous time. Clearly, we have \(q^T=e^{-\tau }(1+O(\varepsilon ))\). We aim to take the limit as \(q\rightarrow 1\) in (6.1).

Recall that by \(\upmu _t\), \(t\in {\mathbb {R}}_{\ge 0}\), we denote the distribution of the TASEP with constant speeds \(c_i\equiv 1\) at time t, started from the step initial configuration. Also recall that \({\mathcal {C}}\) is the space of left-packed, right-finite particle configurations on \({\mathbb {Z}}\). The space \({\mathcal {C}}\) has a natural partial order: \(\mathbf{x} \) precedes \(\mathbf{y} \) if \(x_i\le y_i\) for all i.

Lemma 6.1

For any fixed \(\tau ,t\in {\mathbb {R}}_{\ge 0}\) and any \(\delta >0\) there exists a finite set \({\mathcal {C}}^\delta ={\mathcal {C}}^\delta (t,\tau )\subset {\mathcal {C}}\) such that

$$\begin{aligned} \upmu _t({\mathcal {C}}^\delta )>1-\delta , \qquad \upmu _{e^{-\tau }t}({\mathcal {C}}^\delta )>1-\delta , \qquad \upmu _t^{(q)}({\mathcal {C}}^\delta )>1-\delta , \qquad \upmu _{q^T t}^{(q)}({\mathcal {C}}^\delta )>1-\delta \end{aligned}$$

for all sufficiently small \(\varepsilon >0\).

Proof

Take finite \({\mathcal {C}}^\delta \subset {\mathcal {C}}\) such that \(\upmu _{t}({\mathcal {C}}^\delta )>1-\delta \), and, moreover, \({\mathcal {C}}^\delta \) is closed with respect to the partial order (i.e., if \(\mathbf{x} \) precedes \(\mathbf{y} \) and \(\mathbf{y} \in {\mathcal {C}}^\delta \), then \(\mathbf{x} \in {\mathcal {C}}^\delta \)). This is possible because \(\upmu _t\) is a probability measure on \({\mathcal {C}}\), and closing a finite set with respect to our partial order keeps it finite. (One can even estimate the size of \({\mathcal {C}}^\delta \) because the first particle \(x_1(t)\) performs a rate 1 directed random walk.)

Next, \(\upmu _{e^{-\tau }t}({\mathcal {C}}^\delta )>1-\delta \) because the TASEP dynamics almost surely increases the configuration with respect to the order. The rest of the claim follows by monotonically coupling the TASEP \(\upmu _{\bullet }\) with constant speeds to the TASEP \(\upmu _{\bullet }^{(q)}\) with the q-geometric speeds. Here monotonicity means that the TASEP with the q-geometric speeds is always behind (in our partial order) the \(q=1\) TASEP; this monotone coupling exists since \(q<1\). \(\square \)

By Lemma 6.1, it suffices to consider the limit of identity (6.1) as \(q\rightarrow 1\) on finite subsets of \({\mathcal {C}}\). In the right-hand side we immediately get \(\upmu ^{(q)}_{q^T t}\rightarrow \upmu _{e^{-\tau }t}\). In the left-hand side we have \(\upmu _t^{(q)}\rightarrow \upmu _t\). It remains to take the limit of the T-th power of the Markov map \(\mathbf{L} ^{(q)}\).

The limit transition in \((\mathbf{L} ^{(q)})^{T}\) is in the spirit of the classical Poisson approximation to the binomial distribution—the probability of jumps gets smaller, but the number of trials (i.e., the discrete time) scales accordingly. More precisely, we have for the random variables \(Y_{q^{\scriptstyle k}}\) in Definition 5.5:

$$\begin{aligned} \begin{aligned} \mathrm {Prob}(Y_{q^{\scriptstyle k}}(A)=m)&= {\left\{ \begin{array}{ll} (1-q^{k})q^{mk},&{}0\le m<A;\\ q^{Ak},&{}m=A \end{array}\right. } \\&= {\left\{ \begin{array}{ll} k\varepsilon +O(\varepsilon ^2),&{} 0\le m<A;\\ 1-Ak\varepsilon +O(\varepsilon ^2),&{}m=A. \end{array}\right. } \end{aligned} \end{aligned}$$
(6.3)

This leads to the following definition of the continuous-time backwards dynamics:

Definition 6.2

(Backwards Hammersley-type process \(\mathbf{L} _\tau \)) Consider the continuous-time dynamics on \({\mathcal {C}}\) defined as follows. Each particle \(x_k\), \(k=1,2,\ldots \) independently jumps to the left to one of the holes \(\{x_{k+1}+1,x_{k+1}+2,\ldots ,x_k-1 \}\) at rate k per hole. Equivalently, each particle \(x_k\) has an independent exponential clock of rate \(k(x_k-x_{k+1}-1)\); when the clock rings, \(x_k\) selects a hole between \(x_{k+1}\) and \(x_k\) uniformly at random and instantaneously moves there.Footnote 6

Note that for configurations in \({\mathcal {C}}\), the total jump rate of all particles is always finite. Therefore, the dynamics on \({\mathcal {C}}\) is well-defined. Denote by \(\mathbf{L} _\tau \), \(\tau \in {\mathbb {R}}_{\ge 0}\), the Markov transition operator of this dynamics from time 0 to time \(\tau \) (note that the dynamics is time-homogeneous). Observe that the step configuration (\(x_i=-i\) for all \(i=1,2,\ldots \)) is absorbing for the backwards dynamics \(\mathbf{L} _\tau \).

Thanks to Lemma 6.1 and (6.3), we have the convergence \((\mathbf{L} ^{(q)})^{T}\rightarrow \mathbf{L} _\tau \). This completes the proof of our main result (Theorem 1), \(\upmu _t\,\mathbf{L} _\tau =\upmu _{e^{-\tau }t}\).

7 Stationary dynamics on the TASEP measure

Here we illustrate the relation between the TASEP and the backwards Hammersley-type process by constructing a Markov dynamics preserving the TASEP measure \(\upmu _t\). We also discuss hydrodynamics of these two processes.

In this section we denote particle configurations by occupation variables \(\eta :{\mathbb {Z}} \rightarrow \{0,1\}\), with \(\eta (x) =1\) if there is a particle at location \(x\in {\mathbb {Z}}\), and \(\eta (x) =0\) otherwise. The step initial configuration is \(\eta (x)=1\) iff \(x<0\). Recall that by \({\mathcal {C}}\) we denote the space of left-packed, right-finite configurations. Denote by \(\overline{{\mathcal {C}}}=\left\{ 0,1 \right\} ^{{\mathbb {Z}}}\) the space of all particle configurations in \({\mathbb {Z}}\).

7.1 Definition of the stationary dynamics

Let \(A^{T}:= A^{\mathrm {TASEP}}\) be the infinitesimal generator for the TASEP with homogeneous particle speeds \(c_i=1\) (Sect. 3.1), and let \(\{\mathbf{T} _t\}_{t\ge 0}\) be the corresponding Markov semigroup. Let \(A^{L}:= A^{\mathrm {BHP}}\) be the infinitesimal generator of the backwards Hammersley-type process (BHP), see Definition 6.2, and let \(\{\mathbf{L} _\tau \}_{\tau \ge 0}\) denote the BHP semigroup. For a fixed configuration \(\eta \in {\mathcal {C}}\), we denote by \(\eta ^{x,y}\), \(x\ne y\), the configuration

$$\begin{aligned} \eta ^{x, y}(z) = {\left\{ \begin{array}{ll} \eta (z) + 1, &{} z =y; \\ \eta (z), &{} z \ne x, y; \\ \eta (z) - 1, &{} z = x. \end{array}\right. } \end{aligned}$$

In words, \(\eta ^{x, y}\) corresponds to a particle jumping from location \(x \in {\mathbb {Z}}\) to location \(y \in {\mathbb {Z}}\). Note that \(\eta ^{x,y}\) may not be in \({\mathcal {C}}\) even if \(\eta \in {\mathcal {C}}\).

The infinitesimal generator for the TASEP acts as follows:

$$\begin{aligned} (A^T f) (\eta ) = \sum _{x \in {\mathbb {Z}}}\eta (x) (1 - \eta (x+1)) \bigl (f(\eta ^{x, x+1}) - f(\eta )\bigr ), \end{aligned}$$
(7.1)

for f a cylindrical function on \(\eta \in {\mathcal {C}}\) (i.e. a function that depends on finitely many coordinates of \(\eta \)). The factor \(\eta (x) (1 - \eta (x+1))\) takes care of the TASEP exclusion rule. The infinitesimal generator of the BHP acts as follows:

$$\begin{aligned} (A^{L} f)(\eta ) = \sum _{x \in {\mathbb {Z}}} \eta (x) \left( \sum _{y =x}^{\infty } \eta (y)\right) \sum _{m=1}^{\infty } \left( \prod _{k=1}^{m}(1 - \eta (x- k)) \right) \bigl (f(\eta ^{x, x-m}) -f(\eta )\bigr ),\nonumber \\ \end{aligned}$$
(7.2)

for f a cylindrical function on \(\eta \). Note that the summations in the action of \(A^L\) are well defined since for \(\eta \in {\mathcal {C}}\) we have \(\eta (x) = 0\) for \(x\gg 0\) and \(\eta (x) =1\) for \(x\ll 0\).

Recall that \(\upmu _t\) is the distribution of the TASEP configuration at time t started from the step initial configuration. Denote the corresponding random particle configuration by \(\eta _t\). We have \(\eta _t\in {\mathcal {C}}\) almost surely.

For any \(t\in {\mathbb {R}}_{>0}\), define the operator

$$\begin{aligned} A:= tA^T+A^L. \end{aligned}$$
(7.3)

This is the generator of the continuous-time Markov process which is a combination of the BHP and the TASEP sped up by the factor of t. By a “combination” we mean that both processes run in parallel.
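For a configuration given by a finite 0/1 window, the generators (7.1)-(7.3) can be evaluated directly. The following Python sketch is ours, and it assumes the window is wide enough to begin inside the fully packed region and end inside the empty region, so that no term of either sum reaches past the window.

```python
def A_T(eta, f):
    # (7.1): TASEP generator; eta is a list of 0/1 over the window,
    # f a cylinder function of the window.
    out = 0.0
    for x in range(len(eta) - 1):
        if eta[x] == 1 and eta[x + 1] == 0:
            nxt = list(eta)
            nxt[x], nxt[x + 1] = 0, 1
            out += f(nxt) - f(eta)
    return out

def A_L(eta, f):
    # (7.2): BHP generator; a particle at x jumps m sites left into a run
    # of holes, at rate (number of particles at sites >= x) per hole.
    out, n_right = 0.0, 0
    for x in range(len(eta) - 1, -1, -1):     # sweep right to left
        if eta[x] == 0:
            continue
        n_right += 1                          # = sum over y >= x of eta(y)
        m = 1
        while x - m >= 0 and eta[x - m] == 0:
            nxt = list(eta)
            nxt[x], nxt[x - m] = 0, 1
            out += n_right * (f(nxt) - f(eta))
            m += 1
    return out

def A(eta, f, t):
    # (7.3): the combination t * TASEP + BHP.
    return t * A_T(eta, f) + A_L(eta, f)

# example: window 1,1,0,1,0,0 and f = number of particles at the last 3 sites
print(A([1, 1, 0, 1, 0, 0], lambda e: float(sum(e[3:])), t=2.0))
```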

Proposition 7.1

The TASEP distribution \(\upmu _t\) (the law of \(\eta _t\)) is invariant under the continuous-time Markov process with generator A, that is,

$$\begin{aligned} {\mathbb {E}}\left[ (A f) (\eta _t) \right] = 0 \end{aligned}$$

for all cylinder functions f.

Proof

By Theorem 1, we have

$$\begin{aligned} \upmu _t\, \mathbf{L} _\tau \, \mathbf{T} _{t (1 - e^{-\tau })} = \upmu _{t} \end{aligned}$$
(7.4)

for any \(t, \tau \ge 0\). Fixing \(t \ge 0\), differentiating the above identity in \(\tau \), and sending \(\tau \) to zero, we get \(\upmu _t\,(t A^{T} + A^{L}) =0\). This establishes the result. \(\square \)

Remark 7.2

It should be possible to show that the process with the generator (7.3), started from any configuration \(\mathbf{x} \in {\mathcal {C}}\), converges (as time goes to infinity) to its stationary distribution \(\upmu _t\). However, we do not focus on this question in the present paper.

A local version of Proposition 7.1 holds, too. That is, the Bernoulli measures of any given density \(\rho \in [0,1]\) on particle configurations on \({\mathbb {Z}}\) are invariant under both the TASEP and the homogeneous version of the BHP. (Locally the rates under BHP are constant, so the invariance should be considered under the homogeneous BHP.) The remarkable content of Proposition 7.1 is that the invariance is global on “out-of-equilibrium” random configurations with the distribution \(\upmu _t\), if the speeds of the TASEP and the inhomogeneous BHP are related as in (7.3).

To derive a concrete consequence of Proposition 7.1, let us take a specific function of the configuration:

$$\begin{aligned} N^0:=\eta (0)+\eta (1)+\eta (2)+\ldots ,\qquad f(\eta ):=G(N^0), \end{aligned}$$
(7.5)

where \(G(\cdot )\) is a function \({\mathbb {Z}}_{\ge 0}\rightarrow {\mathbb {R}}\). Note that \(2 N^0\) is the height function at zero. Let \(\eta _t\) be the random configuration of the TASEP at time t with the step initial configuration, and \(N^0_t:=\eta _t(0)+\eta _t(1)+\ldots \).

Corollary 7.3

With the above notation, we have

$$\begin{aligned} \frac{\partial }{\partial t}\,{\mathbb {E}}\, G(N^0_t)= -\frac{1}{t}\, {\mathbb {E}}\left( N^0_t\left( G(N^0_t-1)-G(N^0_t) \right) \sum _{x=1}^{\infty } x\,\eta _t(-x-1) \prod _{k=1}^{x}[1-\eta _t(-k)] \right) . \end{aligned}$$

In the sum over x in the right-hand side almost surely only one term is nonzero, and the whole sum is equal to the number of empty sites between the rightmost particle in \({\mathbb {Z}}_{<0}\) and the origin.

Proof

The left-hand side is equal to \({\mathbb {E}}\left( A^T f(\eta _t) \right) \), which by Proposition 7.1 is the same as \(-t^{-1}{\mathbb {E}}(A^L f(\eta _t))\). The rest follows from the computation of \(A^L f(\eta _t)\) for the particular function (7.5), which is straightforward. \(\square \)

7.2 Hydrodynamics

The hydrodynamic limit for the TASEP is well known, with early results by [58] on the convergence to a local equilibrium and by [76] on the connection of the density function to the Burgers equation. The latter means that under linear space and time scaling, the limiting density function \(\rho (t,z)\) of the TASEP is the entropic solution of the following initial-value problem for the one-dimensional Burgers equation:

$$\begin{aligned} \begin{aligned} \frac{\partial \rho }{\partial t}&= - \frac{\partial [\rho (1-\rho )]}{\partial z}\,;\\ \rho ( 0, z)&= {\left\{ \begin{array}{ll} 1, &{} z \le 0; \\ 0, &{} z > 0. \end{array}\right. } \end{aligned} \end{aligned}$$
(7.6)

We refer to [10] for further details; see also [43] for a recent review. The solution to (7.6) is given by

$$\begin{aligned} \rho (t, z) = {\left\{ \begin{array}{ll} 1, &{}z < -t; \\ (t-z)/2t, &{} -t \le z \le t; \\ 0, &{} z> t. \end{array}\right. } \end{aligned}$$
(7.7)

The limiting density \(\rho (t,z)\) describes the law of large numbers type behavior of the TASEP.
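Since (7.7) is used repeatedly below, a quick symbolic check that its middle (rarefaction fan) branch solves the Burgers equation (7.6) may be helpful; a sketch using sympy:

```python
import sympy as sp

t = sp.symbols("t", positive=True)
z = sp.symbols("z", real=True)
rho = (t - z) / (2 * t)                      # fan part of (7.7), |z| <= t
burgers = sp.diff(rho, t) + sp.diff(rho * (1 - rho), z)
print(sp.simplify(burgers))                  # prints 0
```

The constant branches \(\rho =1\) and \(\rho =0\) solve (7.6) trivially, and the branches match continuously at \(z=\pm t\).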

Remark 7.4

(Asymptotic analysis of TASEP) More recently, in the last 20 years, much finer scaling limits for the TASEP have become available, beginning with the work of Johansson [50] on the Tracy–Widom fluctuations of the positions of particles in the TASEP. More generally, the TASEP with various other examples of initial data has been shown to converge to the top lines of the \(\text {Airy}_1\) or \(\text {Airy}_2\) line ensembles under the appropriate scalings; see, e.g., the survey [39] and references therein for details. The progress in understanding the TASEP asymptotics with general initial data, and also the asymptotics of the space-time structure in TASEP, is currently ongoing [4, 7, 8, 31, 36, 40, 41, 51, 52, 53, 62].

While we expect the BHP and the stationary dynamics from Sect. 7.1 to have applications to all these types of scaling limits, we begin by considering the hydrodynamic limit of the BHP in this section.

Let \(\eta _t \in {\mathcal {C}}\) be the random configuration at time \(t \ge 0\) of the TASEP with step initial conditions. For any \(\epsilon >0\), the (\(\epsilon \)-scaled) random empirical measure on \({\mathbb {R}}\) associated to \(\eta _t \in {\mathcal {C}}\) is given as follows:

$$\begin{aligned} \pi ^{\epsilon }_t := \epsilon \sum _{x \in {\mathbb {Z}}} \eta _{t}(x)\, \delta _{\epsilon x}. \end{aligned}$$
(7.8)

In particular, we have scaled the mass of each point by \(\epsilon \), scaled the lattice distance by \(\epsilon \), but the time remains unscaled. Denote the set of compactly supported continuous functions on the line by \(C_0({\mathbb {R}})\). The integral of a function \(f \in C_0({\mathbb {R}})\) against the measure \(\pi ^{\epsilon }\) is denoted by \(\langle \pi ^{\epsilon } , f\rangle \). Clearly, \(\langle \pi _t^\epsilon , f\rangle = \epsilon \sum _{x \in {\mathbb {Z}}} f(\epsilon x)\, \eta _{t} (x)\).

The next statement can be found in, e.g., [78]. The sequence of measures \(\{ \pi _{t/\epsilon }^{\epsilon }\}_{\epsilon \in {\mathbb {R}}_{>0} }\) converges as \(\epsilon \rightarrow 0\) in probability to \(\rho (t, z) dz\), where the density function \(\rho (t,z)\) is the entropic solution of the initial value problem for the Burgers equation (7.6). That is, for each \(t\ge 0\), given any \(\delta >0\),

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\, \mathrm {Prob} \left( \Bigl |\epsilon \sum _{x \in {\mathbb {Z}}}\, f(\epsilon x) \eta _{t/\epsilon } (x) - \int _{-\infty }^{\infty } f(z) \rho ( t, z) dz \Bigr | \ge \delta \right) =0 \end{aligned}$$
(7.9)

for any \(f \in C_0({\mathbb {R}})\). Note that now we have scaled time by \(\epsilon ^{-1}\) in the empirical measure.

This result for TASEP generalizes to a large class of initial conditions. For instance, given a continuous density profile \(\rho _0: {\mathbb {R}} \rightarrow [0,1]\), a sequence \(\{\nu ^{\epsilon } \}_{\epsilon \in {\mathbb {R}}_{>0}}\) of probability measures on \(\overline{{\mathcal {C}}} = \{ 0, 1\}^{{\mathbb {Z}}}\) is said to be associated to the profile \(\rho _0\) if for every \(f\in C_0({\mathbb {R}})\) and every \(\delta >0\), we have

$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \, \nu ^{\epsilon } \left[ \Bigl |\epsilon \sum _{x \in {\mathbb {Z}}} f(\epsilon x)\, \eta (x) - \int _{- \infty }^{\infty } f(w)\, \rho _0(w) dw \Bigr | > \delta \right] = 0. \end{aligned}$$

Then, the empirical measure \(\pi _{t/\epsilon }^{\epsilon }\) for the TASEP, with initial conditions now given by \(\nu ^{\epsilon }\), converges in probability to an absolutely continuous measure \(\rho ( t, z) dz\) whose density function is the entropic solution to the Burgers equation with the initial value given by the density profile \(\rho _0\); see [78]. We expect a similar hydrodynamic result to hold for the BHP with some modifications: (1) a different PDE arising from the infinitesimal generator of the BHP, and (2) no time scaling for the empirical measure, since the lattice scaling also scales the particle numbers (indices) and, consequently, the speeds of the particles.

Conjecture 1

Let \(\rho _0: {\mathbb {R}} \rightarrow [0,1]\) be an initial density profile and let \(\{ \nu ^{\epsilon }\}_{\epsilon \in {\mathbb {R}}_{>0}}\) be a sequence of probability measures on \({\mathcal {C}}\) associated to \(\rho _0\).Footnote 7 Also, for a fixed \(\epsilon >0\), take \(\eta _t^{\epsilon } \in {\mathcal {C}}\) to be the random configuration at time \(t>0\) of the BHP, with the initial configuration \(\eta _0^{\epsilon }\) determined by the measure \(\nu ^{\epsilon }\). Then, for every \(t>0\), the sequence of random empirical measures \(\pi ^{\epsilon }_t\) defined as in (7.8) converges in probability to the absolutely continuous measure \(\pi _t(dz) = \rho (t, z) dz\) in the sense of (7.9). The density \(\rho (t,z)\) is a solution of the initial value problem

$$\begin{aligned} \begin{aligned} \frac{\partial \rho (t,z) }{\partial t}&= \frac{\partial }{ \partial z} \left[ \frac{1 - \rho ( t,z)}{\rho (t,z)} \int _z^{\infty } \rho (t, w) dw \right] ; \\ \rho (0,z)&= \rho _0(z). \end{aligned} \end{aligned}$$
(7.10)

Remark 7.5

In Conjecture 1, it is unclear to the authors if there is a unique solution to the initial value problem (7.10). In particular, it is unclear what type of solution the limiting density profile \(\rho (t,z)\) should be.

Remark 7.6

The differential equation (7.10) can be informally obtained by looking at the local version of the BHP. That is, locally we expect the configuration to be close to the independent Bernoulli random configuration on the whole line \({\mathbb {Z}}\) with the density \(\rho (t,z)\). Then the expression under \(\partial /\partial z\) in the right-hand side of (7.10) is the (negative) flux. Indeed, \(\int _z^{\infty } \rho (t, w) dw\) plays the role of the inhomogeneous rate in the BHP, while \(-(1-\rho (t,z))/\rho (t,z)\) is the local flux of the homogeneous BHP with left jumps and speed 1. See Proposition 7.8 for more discussion.

Let us check that Conjecture 1 holds for the initial data associated with the TASEP distributions \(\upmu _t\).

Proposition 7.7

Fix some \(t_0 \in {\mathbb {R}}\) and let \(\eta _0^{\epsilon } \sim \upmu _{\epsilon ^{-1} e^{t_0}}\) be the TASEP random configuration at time \(\epsilon ^{-1} e^{t_0}\). Then, the sequence \(\{ \eta _0^{\epsilon }\}_{\epsilon \in {\mathbb {R}}_{>0}}\) is associated to the density profile

$$\begin{aligned} \rho _0 (z) = {\left\{ \begin{array}{ll} 1, &{} z < -e^{ t_0}; \\ \frac{e^{ t_0}-z}{2e^{ t_0}},&{} -e^{ t_0} \le z \le e^{ t_0}; \\ 0,&{} z> e^{t_0}, \end{array}\right. } \end{aligned}$$

and Conjecture 1 is true for the measures \(\nu ^{\epsilon }=\upmu _{\epsilon ^{-1}e^{t_0}}\).

Proof

By results for the TASEP, we know that the sequence \(\eta _0^{\epsilon }\) is associated to the density profile \(\rho _0\) given in the statement. Also, by Theorem 1, we know that the random configuration \(\eta _{t}^{\epsilon }\) obtained from \(\nu ^\epsilon =\upmu _{\epsilon ^{-1}e^{t_0}}\) by the BHP evolution as in Conjecture 1, is distributed according to \(\upmu _{\epsilon ^{-1} e^{t_0 - t}}\).

So, again by results for the TASEP, we know that the sequence of random measures \(\pi _t^{\epsilon }\) converges to an absolutely continuous measure \(\pi _t(dz) = \rho (t, z) dz\) with the density given by

$$\begin{aligned} \rho (t,z) = {\left\{ \begin{array}{ll} 1, &{} z < -e^{ t_0 -t}; \\ \frac{e^{ t_0-t}-z}{2e^{ t_0-t}}, &{} -e^{ t_0-t} \le z \le e^{ t_0-t}; \\ 0, &{} z> e^{ t_0-t}. \end{array}\right. } \end{aligned}$$

One can then check directly that the above \(\rho (t, z)\) solves the initial value problem (7.10). This completes the proof. \(\square \)
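The direct check mentioned at the end of the proof is easily mechanized; here is a sympy sketch (ours), restricted to the fan region, where \(\rho \) is given by the middle branch and vanishes for \(w>e^{t_0-t}\), so the tail integral truncates:

```python
import sympy as sp

t, t0 = sp.symbols("t t0", real=True)
z, w = sp.symbols("z w", real=True)
s = sp.exp(t0 - t)                            # the shrinking edge e^(t0 - t)
rho = (s - z) / (2 * s)                       # fan branch, -s <= z <= s
tail = sp.integrate((s - w) / (2 * s), (w, z, s))   # = int_z^oo rho dw
residual = sp.diff(rho, t) - sp.diff((1 - rho) / rho * tail, z)
print(sp.simplify(residual))                  # prints 0
```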

We base Conjecture 1 on the random evolution of the empirical measure \(\pi _t^{\epsilon }\) given by the infinitesimal generator for the BHP.

Proposition 7.8

Let \(f: {\mathbb {R}} \rightarrow {\mathbb {R}}\) be a twice differentiable compactly supported function and let \(\eta _t \in {\mathcal {C}}\) be the random configuration given by the BHP. Here the time \(t\ge 0\) and the initial configuration \(\eta _0\in {\mathcal {C}}\) are fixed. Then, there are martingales \(M_t^{\epsilon , f}\) with respect to the natural filtration \(\sigma (\eta _s^{\epsilon }, s \le t)\) so that

$$\begin{aligned} \langle \pi _t^{\epsilon } , f \rangle = \langle \pi _0^{\epsilon } , f\rangle + \int _0^t \langle \pi _s^\epsilon , g^{\epsilon } f' \rangle ds + M_t^{\epsilon , f} + {\mathcal {O}}(\epsilon ^{2}), \end{aligned}$$

for \(\pi _t^{\epsilon }\) the random empirical measure of \(\eta _t\) and the function

$$\begin{aligned} g^{\epsilon }(x) := - \Biggl ( \sum _{y=\lfloor \epsilon ^{-1}x \rfloor }^{\infty } \epsilon \, \eta _s(y) \Biggr ) \Biggl ( \sum _{m=1}^{\infty } m \prod _{k=1}^{m} \bigl ( 1- \eta _s(\lfloor \epsilon ^{-1}x \rfloor - k) \bigr ) \Biggr ). \end{aligned}$$

Proof

We have

$$\begin{aligned} \frac{\partial }{ \partial t} \, {\mathbb {E}}\, \langle \pi _t^{\epsilon }, f\rangle = {\mathbb {E}}\, A^L \langle \pi _t^{\epsilon }, f\rangle , \end{aligned}$$

where we regard \( \langle \pi _t^{\epsilon }, f\rangle \) as a function of the configuration \(\eta _t\). We can compute

$$\begin{aligned} \begin{aligned} A^L\langle \pi _t^{\epsilon }, f\rangle&= \sum _{x \in {\mathbb {Z}}} \left[ \sum _{m=1}^{\infty } \left( \frac{f(\epsilon x -\epsilon m) - f(\epsilon x)}{m} \right) m \prod _{k=1}^{m}(1- \eta (x-k)) \right] \\&\quad \times \left( \sum _{y=x}^{\infty } \epsilon \eta (y) \right) \eta (x). \end{aligned} \end{aligned}$$

With the help of the approximation

$$\begin{aligned} f(\epsilon x -\epsilon m) = f(\epsilon x) -f'(\epsilon x) (\epsilon m) + {\mathcal {O}}(\epsilon ^2), \end{aligned}$$

the statement follows from standard results on Markov chains. \(\square \)

7.3 Limit shape for TASEP with step initial condition

Let us present an alternative derivation of the limit shape of the TASEP with the step initial configuration, assuming Conjecture 1 but independently of the similar result for the TASEP. We only assume that the TASEP empirical measure converges to \(\rho \) satisfying the following system of equations:

$$\begin{aligned} \begin{aligned}&\frac{\partial \rho (t,z)}{\partial t} + \frac{\partial }{\partial z} \left[ \frac{1- \rho (t,z)}{t \rho (t, z)} \int _{z}^{\infty }\rho (t,w) dw\right] = 0; \\&\frac{\partial \rho (t,z)}{\partial t} + \frac{\partial }{\partial z} [\rho (t,z)(1- \rho (t,z))] =0. \end{aligned} \end{aligned}$$
(7.11)

In particular, we show that this system of partial differential equations determines a unique solution under some general assumptions.

First, eliminate the time derivative so that

$$\begin{aligned} \frac{\partial }{ \partial z} \left[ \frac{1- \rho (t,z)}{\rho (t,z)} \left( \rho (t,z)^2 - \frac{1}{t} \int _{z}^{\infty } \rho (t,w)dw\right) \right] =0. \end{aligned}$$

Then,

$$\begin{aligned} \frac{1- \rho (t,z)}{\rho (t,z)} \left( \rho (t,z)^2 - \frac{1}{t} \int _{z}^{\infty } \rho (t,w)dw\right) = c(t). \end{aligned}$$

Note that, for all \(t \in {\mathbb {R}}_{\ge 0}\), there is a \(z \in {\mathbb {R}}\) small enough so that \(\rho (t, z) = 1\), and at such z the factor \(1-\rho (t,z)\) vanishes. This implies that the constant c(t) is in fact zero. Thus, we have

$$\begin{aligned} \rho ^2(t,z) = \frac{1}{t} \int _{z}^{\infty } \rho (t,w) d w. \end{aligned}$$

Taking the space derivative and dividing by \(2\rho (t,z)\) (which is nonzero in the fan region), we have

$$\begin{aligned} \frac{\partial \rho (t,z) }{\partial z} = - \frac{1}{2 t}. \end{aligned}$$
(7.12)

Revisiting the system of equations (7.11), we may now write the second equation as follows

$$\begin{aligned} \frac{\partial \rho (t,z)}{ \partial t} = \frac{1}{2 t} (1 - 2 \rho (t,z)). \end{aligned}$$

By separation of variables, we may solve the equation above up to a constant of integration (depending on z), which is then determined by (7.12). Thus, we get the well-known hydrodynamic density function

$$\begin{aligned} \rho (t,z) = \frac{1}{2} - \frac{z}{2 t}, \qquad z\in [-t,t]. \end{aligned}$$

We have thus shown that the compatibility of the TASEP and the BHP uniquely picks out the entropic solution to the Burgers equation for the limiting density function with the step initial condition.
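As a final consistency check, the derived density indeed satisfies the first (BHP) equation of the system (7.11) as well; the second (Burgers) equation was verified symbolically after (7.7). A sympy sketch:

```python
import sympy as sp

t = sp.symbols("t", positive=True)
z, w = sp.symbols("z w", real=True)
rho = sp.Rational(1, 2) - z / (2 * t)               # the derived density, |z| <= t
tail = sp.integrate(rho.subs(z, w), (w, z, t))      # rho vanishes for w > t
eq1 = sp.diff(rho, t) + sp.diff((1 - rho) / (t * rho) * tail, z)
print(sp.simplify(eq1))                             # prints 0
```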

8 Extensions and open questions

In this section we describe a number of modifications and extensions of the constructions presented earlier, and outline a number of open questions.

8.1 More general q-Gibbs measures

The Markov map \({\mathbb {L}}^{(q)}\) from Definition 5.2 acts nicely on Schur processes \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\) with general specializations \(\rho \). Even more generally, we can consider two-sided Schur processes which live on interlacing arrays of signatures. Signatures are analogues of partitions in which parts are allowed to be negative. Interlacing arrays of signatures are simply the collections \(\{\lambda ^{(k)}_j\}_{1\le j\le k}\), satisfying the interlacing inequalities as in Fig. 6, and with \(\lambda ^{(k)}_j\in {\mathbb {Z}}\). (Note that we consider arrays of infinite depth.)

For a specialization \(\rho \) parametrized as

$$\begin{aligned} \begin{aligned}&\rho =(\alpha ^{\pm };\beta ^{\pm };\gamma ^{\pm }), \quad \alpha ^{\pm }_1\ge \alpha ^{\pm }_2\ge \cdots \ge 0 , \quad \beta ^{\pm }_1\ge \beta ^{\pm }_2\ge \cdots \ge 0 , \quad \gamma ^{\pm }\ge 0, \\&\quad \sum _{i=1}^\infty (\alpha _{i}^{\pm }+\beta _i^{\pm })<\infty , \quad \beta _1^++\beta _1^-\le 1, \end{aligned} \end{aligned}$$
(8.1)

and a signature \(\lambda =(\lambda _1\ge \cdots \ge \lambda _N )\), \(\lambda _i\in {\mathbb {Z}}\), define

$$\begin{aligned} s_\lambda (\rho ) := \det \left[ \psi _{\lambda _i+j-i}(\rho ) \right] _{i,j=1}^{N}, \end{aligned}$$
(8.2)

where \(\psi _n(\rho )\), \(n\in {\mathbb {Z}}\), are the coefficients of the expansion

$$\begin{aligned} \sum _{n\in {\mathbb {Z}}}\psi _n(\rho ) u^n= e^{\gamma ^+(u-1)+\gamma ^{-}(u^{-1}-1)} \prod _{i\ge 1} \frac{1+\beta _i^+(u-1)}{1-\alpha _i^+(u-1)} \frac{1+\beta _i^-(u^{-1}-1)}{1-\alpha _i^-(u^{-1}-1)}, \end{aligned}$$
(8.3)

and \(|u|=1\). One of the equivalent forms of the Edrei–Voiculescu theorem (e.g., see [22]) states that (8.3) parametrizes the space of all totally nonnegative two-sided sequences.

Remark 8.1

In particular, taking \(\alpha _i^\pm =\beta _i^\pm =0\) for all i, \(\gamma ^-=0\), and \(\gamma ^+=t\) turns the just defined specialization \(\rho \) into \(\rho _t\) defined in Sect. 2.4.

Define the two-sided ascending Schur process \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\) as the unique q-Gibbs measure on interlacing arrays of signatures such that for any N,

$$\begin{aligned}&{\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ] (\lambda ^{(1)},\ldots ,\lambda ^{(N)} ) \nonumber \\&\quad = \frac{1}{Z}\, s_{\lambda ^{(1)}}(1) s_{\lambda ^{(2)}/\lambda ^{(1)}}(q) \ldots s_{\lambda ^{(N)}/\lambda ^{(N-1)}}(q^{N-1}) \, s_{\lambda ^{(N)}}(\rho ), \end{aligned}$$
(8.4)

where the skew Schur functions for signatures can be defined by (2.4). Define the Schur process \({\mathbb {P}}[\vec {1}\mid \rho ]\) as the \(q\rightarrow 1\) degeneration of (8.4) (here and below we denote by \(\vec {1}\) the sequence of spectral parameters which are all equal to 1). Another equivalent form of the Edrei–Voiculescu theorem states that \({\mathbb {P}}[\vec {1}\mid \rho ]\) are all possible extreme Gibbs measures on interlacing arrays of signatures (a Gibbs measure is called extreme if it cannot be represented as a convex combination of other Gibbs measures). We refer to [11] for further details on the definition of the two-sided Schur processes.

Theorem 8.2

Let \(\rho \) be a specialization with parameters (8.1) such that \(\alpha _i^{-}=0\) for all i. Then we have for all \(0<q<1\):

$$\begin{aligned} {\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ] \, {\mathbb {L}}^{(q)} = {\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ^{(q)}], \end{aligned}$$

where \(\rho ^{(q)}\) is the specialization corresponding to the parameters

$$\begin{aligned} \begin{aligned}&{\hat{\alpha }}_i^+=\frac{\alpha _i^+ q}{1+\alpha _i^+-\alpha _i^+ q} ,\qquad {\hat{\alpha }}_i^-=0 \\&{\hat{\beta }}_i^+=\frac{\beta _i^+ q}{1-\beta _i^++\beta _i^+ q} ,\qquad {\hat{\beta }}^-_i= \frac{\beta _i^- q^{-1}}{1-\beta _i^-+\beta _i^-q^{-1}} \\&{\hat{\gamma }}^+=q\gamma ^+ ,\qquad {\hat{\gamma }}^-=q^{-1}\gamma ^{-}. \end{aligned} \end{aligned}$$
(8.5)

Note that \({\hat{\alpha }}_i^+,{\hat{\beta }}_i^{\pm }\ge 0\), and \({\hat{\beta }}_1^++{\hat{\beta }}_1^-\le 1\).

Proof of Theorem 8.2

This follows from Theorem 5.3 similarly to the computation in the beginning of Sect. 5.5. Namely, denote (8.3) by \(\varPsi (u;\rho )\). The q-Gibbs measure (8.4) corresponds to the q-Gibbs harmonic family

$$\begin{aligned} \varphi _N(\lambda )= \frac{s_\lambda (\rho )}{\varPsi (1;\rho )\varPsi (q;\rho )\varPsi (q^2;\rho )\ldots \varPsi (q^{N-1};\rho )}. \end{aligned}$$

Note that \(\varPsi (1;\rho )=1\), but it is convenient to include this factor here. Note also that the condition \(\alpha _i^{-}\equiv 0\) ensures that the series \(\varPsi (q^m;\rho )\) converge for all \(m\in {\mathbb {Z}}_{\ge 1}\). The action of \({\mathbb {L}}^{(q)}\) turns the q-Gibbs harmonic family \(\{\varphi _N\}\) into

$$\begin{aligned} {\hat{\varphi }}_N(\lambda )= \frac{q^{|\lambda |}s_\lambda (\rho )}{\varPsi (q;\rho )\varPsi (q^2;\rho )\ldots \varPsi (q^{N};\rho )} = \frac{\det [\psi _{\lambda _i+j-i}(\rho )\,q^{\lambda _i+j-i}]_{i,j=1}^{N}}{\varPsi (q;\rho )\varPsi (q^2;\rho )\ldots \varPsi (q^{N};\rho )} . \end{aligned}$$

In particular, for \(N=1\) we have

$$\begin{aligned} \psi _n(\rho ^{(q)})=\frac{\psi _n(\rho )\,q^{n}}{\varPsi (q;\rho )}, \qquad n\in {\mathbb {Z}}, \end{aligned}$$

which readily translates into the modification of the parameters (8.5) in the claim. \(\square \)
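For a single nonzero parameter this translation can be checked symbolically: in terms of the generating function (8.3), the map \(\psi _n\mapsto \psi _n q^n/\varPsi (q;\rho )\) is the substitution \(u\mapsto qu\) followed by division by \(\varPsi (q;\rho )\). A sympy sketch for one \(\alpha ^+\) parameter (the other cases of (8.5) are verified the same way):

```python
import sympy as sp

q, u, a = sp.symbols("q u alpha", positive=True)
Psi = 1 / (1 - a * (u - 1))                      # (8.3) with a single alpha^+
Psi_hat = Psi.subs(u, q * u) / Psi.subs(u, q)    # psi_n -> psi_n q^n / Psi(q)
a_hat = a * q / (1 + a - a * q)                  # the claim of (8.5)
print(sp.simplify(Psi_hat - 1 / (1 - a_hat * (u - 1))))   # prints 0
```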

Measures on interlacing arrays of the form \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\) are not extreme q-Gibbs. A classification of extreme q-Gibbs measures is obtained in [46] (note that our q corresponds to 1/q in that paper, so the description of the boundary needs to be reversed). Extreme q-Gibbs measures \({\mathbb {P}}_\mathbf{n }^{(q)}\) are parametrized by infinite sequences

$$\begin{aligned} \mathbf{n} =(n_1\ge n_2\ge \ldots ), \qquad n_i\in {\mathbb {Z}}. \end{aligned}$$

Moreover, \(\lim _{N\rightarrow +\infty }\lambda ^{(N)}_j=n_j\) for each fixed \(j=1,2,\ldots \), where \(\lambda ^{(N)}_j\) come from the random configuration distributed according to \({\mathbb {P}}_\mathbf{n }^{(q)}\). It is not hard to show the following.

Proposition 8.3

The action of the Markov map \({\mathbb {L}}^{(q)}\) on extreme q-Gibbs measures corresponds to the left shift in the space of parameters:

$$\begin{aligned} {\mathbb {P}}^{(q)}_{(n_1,n_2,n_3,\ldots )} \, {\mathbb {L}}^{(q)} = {\mathbb {P}}^{(q)}_{(n_2,n_3,n_4,\ldots )}. \end{aligned}$$

In [17] a decomposition of the non-extreme q-Gibbs measures \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\) into the extreme ones \({\mathbb {P}}^{(q)}_\mathbf{n }\) is given in terms of a determinantal point process on the set of shifted labels. The shifted labels in our notation are \(n_1-1>n_2-2>\cdots \), and they form a random point configuration on \({\mathbb {Z}}\) whose correlation functions have a determinantal form. The action of \({\mathbb {L}}^{(q)}\) on \(\mathbf{n} \) from Proposition 8.3 removes the largest point in this determinantal process on \({\mathbb {Z}}\), and shifts all its other points by one to the right.

Question 2

How can one explicitly link the action of \({\mathbb {L}}^{(q)}\) on \(\mathbf{n} \) with the modification of the parameters (8.5) of the determinantal point process describing \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\)? Does this correspondence (between the action on the parameters of the kernel and the action on the underlying random point configuration) survive any limit transition to more familiar determinantal point processes (e.g., random matrix spectra or Airy\(_2\))?

8.2 Limit \(q\rightarrow 1\) and action on Gibbs measures

The \(q\rightarrow 1\) limit of Theorem 8.2 can be obtained similarly to the argument in Sect. 6. Denote by \({\mathbb {L}}_{\tau }\), \(\tau \in {\mathbb {R}}_{\ge 0}\), the continuous-time Markov semigroup under which each particle \(\lambda ^{(k)}_j\) at each k-th level of the interlacing array independently jumps to the left into one of the possible locations m, where

$$\begin{aligned} \max \left\{ \lambda ^{(k+1)}_{j+1},\lambda ^{(k-1)}_j\right\} \le m\le \lambda ^{(k)}_j-1, \end{aligned}$$

at rate k for each of these possible locations.

However, this definition presents an issue since in a generic interlacing array, under \({\mathbb {L}}_\tau \) infinitely many particles jump in finite time. Moreover, because for any \(k\in {\mathbb {Z}}_{\ge 1}\) jumps of \(\lambda ^{(k)}_j\) depend on the \((k+1)\)-st level, one cannot simply restrict \({\mathbb {L}}_\tau \) to the first several levels. Therefore, we have to consider a smaller space of interlacing arrays:

Definition 8.4

Let the subset \({\mathcal {S}}^c\subset {\mathcal {S}}\) consist of interlacing arrays \(\{\lambda ^{(N)}_j\}_{1\le j\le N}\) satisfying \(\lambda ^{(N)}_j=0\) for all N and all \(J(N)\le j\le N\), where \(N-J(N)\rightarrow +\infty \) as \(N\rightarrow +\infty \).

For each fixed K, the restriction of \({\mathbb {L}}_\tau \) to

$$\begin{aligned} \{\lambda ^{(N)}_j:N\in {\mathbb {Z}}_{\ge 1}, \ N-K+1\le j\le N\} \end{aligned}$$

(that is, to the K leftmost diagonals) is a Markov process, in which only finitely many particles jump in finite time. For different K, these Markov processes are compatible. Therefore, \({\mathbb {L}}_\tau \) makes sense on the state space \({\mathcal {S}}^c\). Below we denote by \({\mathbb {L}}_{\tau }\) the Markov semigroup constructed in this manner.

Theorem 8.5

The action of the semigroup \({\mathbb {L}}_\tau \) on extreme Gibbs measures \({\mathbb {P}}[\vec {1}\mid \rho ]\), where \(\rho \) is a specialization as in (8.1)–(8.3) with \(\alpha _i^-=\beta _i^-=0\) for all i, \(\gamma ^-=0\), and \(\beta _1^+<1\), transforms the parameters of \(\rho \) exactly as in (8.5), but with q replaced by \(e^{-\tau }\).

Idea of proof

One can check that the Schur process \({\mathbb {P}}[\vec {1}\mid \rho ]\) with \(\alpha _i^-=\beta _i^-=0\) for all i, \(\gamma ^-=0\), and \(\beta _1^+<1\) is supported on the subset \({\mathcal {S}}^c\) described in Definition 8.4. Similarly to Sect. 6, we see that under the scaling \(q=e^{-\varepsilon }\), \(T=\lfloor \tau /\varepsilon \rfloor \), \(\varepsilon \rightarrow 0\), we have \(({\mathbb {L}}^{(q)})^T\rightarrow {\mathbb {L}}_{\tau }\). Next, the modification of the parameters (8.5) is a one-parameter semigroup. That is, applying \({\mathbb {L}}^{(q)}\) one more time replaces q everywhere in (8.5) by \(q^2\). Because \(q^T\sim e^{-\tau }\), we get the result. \(\square \)

In particular, \({\mathbb {L}}_{\tau }\) maps the push-block process of [16] (see Definition 8.8) backwards in time in the same sense as Theorem 1.

8.3 Iterated R maps

Consider the maps \(R^{(j)}_\alpha \) defined in Sect. 4. Similarly to Sect. 5.2, we can define the iterated R map \({\mathbb {R}}^{(q)}\) by

$$\begin{aligned} {\mathbb {R}}^{(q)} := R^{(1)}_{q} R^{(2)}_{q^2} R^{(3)}_{q^3} \ldots \end{aligned}$$

(this definition has the same formal meaning as for the map \({\mathbb {L}}^{(q)}\), see Sect. 5.2). The map \({\mathbb {R}}^{(q)}\) acts nicely on \(q^{-1}\)-Gibbs measures (i.e., corresponding to \(\vec c=(1,q^{-1},q^{-2},\ldots )\)). Namely, one can check that an analogue of Theorem 5.3 holds, with q replaced by \(q^{-1}\) in the definition of the harmonic functions and in (5.4). The \(q\rightarrow 1\) continuous-time limit \({\mathbb {R}}_{\tau }\) of \({\mathbb {R}}^{(q)}\) is also readily defined with the help of Definition 8.4; this is just the mirror image of \({\mathbb {L}}_{\tau }\) from Sect. 8.2, in which all particles jump to the right. One can obtain the following analogue of Theorems 8.2 and 8.5 for the action of \({\mathbb {R}}^{(q)}\) and \({\mathbb {R}}_{\tau }\) on \(q^{-1}\)-Gibbs Schur processes:

Theorem 8.6

Let \(\rho \) be a specialization as in (8.1)–(8.3) such that \(\alpha _i^{+}=0\) for all i. We have for all \(0<q<1\):

$$\begin{aligned} {\mathbb {P}}[(1,q^{-1},q^{-2},\ldots )\mid \rho ] \, {\mathbb {R}}^{(q)} = {\mathbb {P}}[(1,q^{-1},q^{-2},\ldots )\mid \rho ^{(1/q)}], \end{aligned}$$

where \(\rho ^{(1/q)}\) has modified parameters as in (8.5), but with q replaced by 1/q. Moreover, if \(\alpha _i^+=\beta _i^+=0\) for all i, \(\gamma ^+=0\), and \(\beta _1^-<1\), then \({\mathbb {P}}[\vec {1}\mid \rho ]\, {\mathbb {R}}_{\tau } = {\mathbb {P}}[\vec {1}\mid \rho ^{(e^\tau )}]\), where \(\rho ^{(e^\tau )}\) is defined in a similar way.

Question 3

Is it possible to extend the definition of \({\mathbb {R}}_\tau \) to Schur processes with \(\gamma ^+>0\)? (This is equivalent to extending \({\mathbb {L}}_\tau \) to the case \(\gamma ^->0\).)

If such an extension is possible, then \({\mathbb {R}}_\tau \) would turn the time t in the Schur process \({\mathbb {P}}[\vec {1}\mid \rho _t]\) (with the Plancherel specialization \(\gamma ^+=t\) and all other parameters zero) into \(e^\tau t\), that is, forward. Note that this process would move infinitely many particles in finite time and move individual particles very far, too.

Recall that \({\mathbb {P}}[\vec {1}\mid \rho _t]\) can be generated by the push-block dynamics (Definition 8.8). Under this dynamics, the rightmost components \(\{\lambda ^{(N)}_1\}\) of the interlacing array evolve as a PushTASEP, a close relative of TASEP, but with a pushing mechanism [15, 16]. Therefore, a positive answer to Question 3 would lead to a continuous-time semigroup which maps PushTASEP forward in time.

8.4 Arrays of finite depth

Fix \(N\in {\mathbb {Z}}_{\ge 1}\) and let \({\mathcal {S}}^{\lambda ,N}\) be the space of interlacing arrays \(\lambda ^{(1)}\prec \ldots \prec \lambda ^{(N-1)}\prec \lambda ^{(N)} \) with fixed top row \(\lambda ^{(N)}=\lambda \), where \(\lambda =(\lambda _1\ge \cdots \ge \lambda _N\ge 0 )\), \(\lambda _i\in {\mathbb {Z}}\). Fix pairwise distinct spectral parameters \(c_1,\ldots ,c_N>0 \).

Recall the single level Markov maps \(L^{(j)}_{\alpha }\), \(R^{(j)}_{\alpha }\), \(j=1,\ldots ,N-1\), defined in Sect. 4. Consider the product space \(\widetilde{{\mathcal {S}}}^{\lambda ,N}:= {\mathcal {S}}^{\lambda ,N}\times \mathfrak {S}_N\), where \(\mathfrak {S}_N\) is the symmetric group. For each elementary permutation \(s_i=(i,i+1)\), \(1\le i\le N-1\), define the Markov map \(T(s_i)\) on \(\widetilde{{\mathcal {S}}}^{\lambda ,N}\) as follows. On the \(\mathfrak {S}_N\) part it deterministically acts by \(\sigma \mapsto s_i \sigma \). On each fiber \({\mathcal {S}}^{\lambda ,N}\times \{\sigma \}\) it acts as the Markov map

$$\begin{aligned} T(s_i):= {\left\{ \begin{array}{ll} L^{(i)}_{c_{\sigma (i+1)}/c_{\sigma (i)}},&{} \text {if } c_{\sigma (i)}>c_{\sigma (i+1)}; \\ R^{(i)}_{c_{\sigma (i)}/c_{\sigma (i+1)}},&{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(8.6)

Note that the maps \(T(s_i)\) do not satisfy the symmetric group relations when acting on \(\widetilde{{\mathcal {S}}}^{\lambda ,N}\) (in particular, \(T(s_i)^2\) is not the identity).

Let \({\mathbb {M}}^{\lambda }_{\vec c}\) denote the \(\vec c\)-Gibbs measure on \({\mathcal {S}}^{\lambda ,N}\):

$$\begin{aligned} {\mathbb {M}}^{\lambda }_{\vec c}(\lambda ^{(1)},\ldots ,\lambda ^{(N-1)} ) = \frac{s_{\lambda ^{(1)}}(c_1) s_{\lambda ^{(2)}/\lambda ^{(1)}}(c_2)\ldots s_{\lambda /\lambda ^{(N-1)}}(c_N)}{s_{\lambda }(c_1,\ldots ,c_{N} )}. \end{aligned}$$

Note that in contrast with arrays of infinite depth (cf. Sect. 2.7), here the \(\vec c\)-Gibbs property determines the measure \({\mathbb {M}}_{\vec c}^{\lambda }\) uniquely.

Let \(w_N=(N,N-1,\ldots ,2,1 )\) be the longest element in the symmetric group \(\mathfrak {S}_N\), and let \(w_N=s_{i_1}s_{i_2}\ldots s_{i_{N(N-1)/2}} \), \(1\le i_k\le N-1\), be a reduced word decomposition of it, which we assume fixed. Define

$$\begin{aligned} {\mathbb {T}}:= T(s_{i_1}) T(s_{i_2}) \ldots T(s_{i_{N(N-1)/2}}) \end{aligned}$$
(8.7)

(in this notation we do not indicate the dependence on the choice of a particular reduced word). Clearly, \({\mathbb {T}}^2\) acts as the identity on the \(\mathfrak {S}_N\) part of \(\widetilde{{\mathcal {S}}}^{\lambda ,N}\). Moreover, \({\mathbb {T}}^2\) preserves the measure \({\mathbb {M}}_{\vec c}^{\lambda }\) viewed as the measure on \({\mathcal {S}}^{\lambda ,N}\times \{ e \}\). Indeed, this is because by Proposition 4.7 each \(T(s_i)\) maps \({\mathbb {M}}_{\sigma \vec c}^{\lambda }\) to \({\mathbb {M}}_{s_i \sigma \vec c}^{\lambda }\). The map \({\mathbb {T}}^2\) can be viewed as a sampling algorithm for the measure \({\mathbb {M}}_{\vec c}^{\lambda }\):

Proposition 8.7

Start with any (nonrandom) interlacing array \((\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N-1)}\prec \lambda )\) and apply the Markov map \({\mathbb {T}}^{2k}\) to it. The distribution of the resulting random interlacing array converges, as \(k\rightarrow +\infty \), to \({\mathbb {M}}_{\vec c}^{\lambda }\) in the total variation norm.

Proof

This follows from the standard convergence theorem for Markov chains on finite spaces. Indeed, the Markov chain corresponding to \({\mathbb {T}}^{2k}\) is

  • aperiodic since \({\mathbb {T}}^2\) assigns positive probability to the trivial move;

  • irreducible because \({\mathbb {T}}^2\) assigns positive probability to changing only one entry \(\lambda ^{(k)}_j\), \(1\le j\le k\le N-1\) in the interlacing array (the set \({\mathcal {S}}^{\lambda ,N}\) is connected by such individual changes).

This completes the proof. \(\square \)
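Schematically, the resulting sampler is just an iteration of (8.6)-(8.7). In the Python skeleton below, the single-level maps \(L^{(i)}_{\alpha }\), \(R^{(i)}_{\alpha }\) of Sect. 4 are taken as black-box callables that resample the i-th level given its neighbors; the names `L_map`, `R_map` and the interfaces are ours:

```python
def T_step(arr, sigma, i, c, L_map, R_map):
    # One map T(s_i) of (8.6); sigma is the current permutation as a
    # 0-indexed list, c the spectral parameters, arr the interlacing array.
    lo, hi = sigma[i - 1], sigma[i]           # sigma(i), sigma(i+1)
    if c[lo] > c[hi]:
        arr = L_map(arr, i, c[hi] / c[lo])    # resample level i
    else:
        arr = R_map(arr, i, c[lo] / c[hi])
    sigma[i - 1], sigma[i] = sigma[i], sigma[i - 1]
    return arr, sigma

def sample(arr, word, c, L_map, R_map, k):
    # Proposition 8.7: apply T^(2k) along a fixed reduced word for w_N;
    # the law of arr approaches the Gibbs measure M^lambda_c as k grows.
    sigma = list(range(len(c)))
    for _ in range(2 * k):
        for i in word:
            arr, sigma = T_step(arr, sigma, i, c, L_map, R_map)
    return arr
```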

Question 4

How fast is the convergence in Proposition 8.7, depending on the system size (which is \(\sim N \lambda _1\))? What is the mixing time of \({\mathbb {T}}\)?

8.5 q-distributed lozenge tilings

Let us now consider a concrete case of the setup outlined in the previous Sect. 8.4. Fix N and the top row \(\lambda =(b,b,\ldots ,b,0,0\ldots ,0)\), where b repeats a times and 0 repeats c times, with \(a+c=N\). Then interlacing arrays of depth N and top row \(\lambda \) are in bijection with lozenge tilings of a hexagon with sides \(a, b, c, a, b, c\), or, equivalently, with boxed plane partitions (see Fig. 10 for an illustration and, e.g., [25] for more details).

Fig. 10

Samples from the measures \({\mathbb {M}}_{q^{-1}}\) (left) and \({\mathbb {M}}_{q}\) (right) with \(a=b=c=50\) and \(q=0.95\). The sample on the left is generated by the shuffling algorithm of [19], and the picture on the right is the result of applying the map \({\mathbb {T}}\)

Let \({\mathbb {M}}_{q^{-1}}\) and \({\mathbb {M}}_{q}\) denote the measures under which the probability weight of a lozenge tiling is proportional to \(q^{-\mathrm {vol}}\) or \(q^{\mathrm {vol}}\), respectively, where the volume is defined in (5.1). These two measures are \(\vec c\)-Gibbs with \(\vec c=(1,q,q^2,\ldots ,q^{N-1} )\) and \(\vec c=(q^{N-1},\ldots ,q,1 )\), respectively (recall that multiplying \(\vec c\) by a scalar does not change the \(\vec c\)-Gibbs property).

Take the reduced word

$$\begin{aligned} w_N= (s_{1}s_2\ldots s_{N-1}) (s_{1}s_2\ldots s_{N-2}) \ldots (s_1s_2) (s_1), \end{aligned}$$

and let \({\mathbb {T}}\) be the corresponding Markov map (8.7). One readily sees that the action of \({\mathbb {T}}\) on \({\mathbb {M}}_{q^{-1}}\):

  • Turns \({\mathbb {M}}_{q^{-1}}\) into \({\mathbb {M}}_q\);

  • Almost surely moves vertical lozenges (see Fig. 10) to the left because in (8.6) we always choose the option L;

  • Changes the \((N-1)\)-st row of the tiling only once, the \((N-2)\)-nd only twice, and so on.

An exact sampling algorithm for \({\mathbb {M}}_{q^{-1}}\) was presented in [19]. Starting with the exact sample of \({\mathbb {M}}_{q^{-1}}\) (Fig. 10, left) and applying \({\mathbb {T}}\), we obtain an exact sample of \({\mathbb {M}}_{q}\) (Fig. 10, right), while randomly moving the vertical lozenges to the left. An implementation of this mapping \({\mathbb {M}}_{q^{-1}}\,{\mathbb {T}}={\mathbb {M}}_q\) with all the intermediate steps can be found online [70].

The map \({\mathbb {T}}\) works in the same way for an arbitrary top row \(\lambda \) (when the polygon being tiled is not necessarily a hexagon, but can be a general sawtooth domain as in, e.g., [69]). The advantage of the hexagon case is the presence of the exact sampling algorithm [19].

Question 5

Consider lozenge tilings of growing sawtooth domains with top rows \(\lambda =\lambda (N)\) which depend on N in some way. Can the symmetry of the \(q^{\pm \mathrm {vol}}\) measures manifested by the map \({\mathbb {T}}\) be utilized to obtain the limit shape and fluctuations of the leftmost piece of the frozen boundary as \(N\rightarrow +\infty \)?

Here by the leftmost piece we mean the part of the curve separating the leftmost region occupied by only vertical lozenges, and the liquid region. Existence and characterization of limit shapes for \(q^{\pm \mathrm {vol}}\) are due to [32, 54], and some explicit formulas were obtained recently in [37].

8.6 Dynamics in the bulk

Consider the Schur process \({\mathbb {P}}[\vec {1}\mid \rho _t]\) (also sometimes known as the Plancherel measure for the infinite-dimensional unitary group). It is convenient to use the lozenge tiling interpretation of interlacing arrays as in the previous Sect. 8.5. From [16, 20] it is known that as N, k, and t go to infinity proportionally to each other, the local lattice configuration of lozenges around each \(\lambda ^{(N)}_k\) converges to an ergodic translation-invariant Gibbs measure on lozenge tilings of the whole plane (see Fig. 11 for an illustration). Such ergodic measures form a two-parameter family [79]. As parameters one can take the densities of two of the three types of lozenges. We remark that the ergodic Gibbs measures are far from being independent Bernoulli ones. In particular, the joint correlations of lozenges possess a determinantal structure [67].

Fig. 11
figure 11

A lozenge configuration in the bulk and a possible move under the bulk limit of \({\mathbb {L}}_{\tau }\): the vertical lozenge with a black dot can move to one of the white dotted locations, at rate 1 per white dot. Square marks indicate lozenges which are blocked in the push-block dynamics

We say that (N, k, t) correspond to the bulk of the system if the limiting density of each of the types of lozenges around \(\lambda ^{(N)}_{k}(t)\) is positive. One can also consider the bulk limit of the dynamics \({\mathbb {L}}_{\tau }\). Because \({\mathbb {L}}_{\tau }\) maps the Schur process \({\mathbb {P}}[\vec {1}\mid \rho _t]\) to \({\mathbb {P}}[\vec {1}\mid \rho _{e^{-\tau }t}]\) and we take \(t\rightarrow +\infty \), we need to scale \(\tau \) as \(\tau =\uptau /t\) (here \(\uptau \in {\mathbb {R}}_{>0}\) is the new scaled time which stays fixed). Then \(e^{-\uptau /t}\,t\sim \left( 1-\frac{\uptau }{t} \right) t=t-\uptau \). Considering \({\mathbb {L}}_{\uptau /t}\) is equivalent to slowing down all the jump rates in \({\mathbb {L}}\) by the factor of t. Since we are looking around level N and N grows proportionally to t, the slowed-down dynamics in the bulk has equal jump rates on all levels at finite distance from the N-th one.

Therefore, under the bulk limit of \({\mathbb {L}}_{\tau }\), each vertical lozenge can move into one of the holes to the left of it (with the requirement that the interlacing is preserved), at a constant rate per hole (for simplicity, we can assume that this rate is equal to 1). See Fig. 11 for an illustration.
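A one-level toy simulation may clarify the rates. The following Python sketch is our simplification: a single row of particles on \({\mathbb {Z}}\), with the interlacing constraint replaced by simple exclusion, so that a particle may jump into any hole between itself and its left neighbor, at rate 1 per hole. It performs one continuous-time (Gillespie) step:

```python
import random

def bulk_step(particles):
    """One Gillespie step of the one-sided, left-jumping toy dynamics:
    each particle jumps into any empty site between itself and its left
    neighbor, at rate 1 per empty site.  `particles` is a sorted list of
    occupied sites; the leftmost particle never moves in this finite toy
    version.  Returns the elapsed time."""
    # number of admissible target holes for each particle (except the leftmost)
    gaps = [particles[i] - particles[i - 1] - 1 for i in range(1, len(particles))]
    total = sum(gaps)                  # total jump rate of the configuration
    if total == 0:
        return float('inf')            # densely packed: nothing can move
    dt = random.expovariate(total)     # exponential waiting time
    r = random.randrange(total)        # pick a (particle, hole) pair uniformly
    for i, g in enumerate(gaps, start=1):
        if r < g:
            particles[i] -= r + 1      # jump left by r+1 sites into the chosen hole
            break
        r -= g
    return dt
```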

Consider the combination of the dynamics \({\mathbb {L}}_{\uptau l/t}\) and \({\mathbb {R}}_{\uptau r/t}\) running in parallel,Footnote 8 where \(l,r>0\) are parameters. In the bulk limit of this combination, we readily obtain a Hammersley-type process with two-sided jumps. This two-sided dynamics was introduced and studied in [83], where it was shown that it preserves the ergodic Gibbs measures on tilings of the whole plane. We see that our Markov maps \({\mathbb {L}}_{\tau }\) and \({\mathbb {R}}_\tau \) can be viewed as pre-bulk-limit versions of the two-sided Hammersley-type processes of [83].

Let us now discuss connections to the push-block dynamics of [16]. For completeness, we recall its definition:

Definition 8.8

(Push-block dynamics) Each vertical lozenge has an independent exponential clock of rate 1. When the clock rings, the lozenge tries to move to the right by one. If it is blocked by a vertical lozenge from below (see the square mark in Fig. 11), then the jump is suppressed. If there are vertical lozenges above the one moving, then they also get pushed to the right by one.
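To make the block and push rules concrete, here is a Python sketch in particle coordinates. The convention below (level k, counted from 0, carries \(k+1\) strictly increasing positions, with interlacing \(x[k+1][j]\le x[k][j]<x[k+1][j+1]\)) is ours, chosen for readability; [16] uses a slightly different but equivalent encoding:

```python
import random

def push(x, k, j):
    # move particle j at level k right by one; if this meets the
    # interlacing bound with level k+1, push that particle recursively
    x[k][j] += 1
    if k + 1 < len(x) and x[k][j] >= x[k + 1][j + 1]:
        push(x, k + 1, j + 1)

def push_block_ring(x):
    """One clock ring of the push-block dynamics (Definition 8.8):
    all clocks have rate 1, so the ringing particle is uniformly random.
    `x` is a list of levels; x[k] has k+1 strictly increasing entries,
    with interlacing x[k+1][j] <= x[k][j] < x[k+1][j+1]."""
    k = random.randrange(len(x))
    j = random.randrange(k + 1)
    # blocked by the level below (the square marks in Fig. 11)
    if k > 0 and j < k and x[k][j] == x[k - 1][j]:
        return                          # jump suppressed
    push(x, k, j)                       # jump, pushing higher levels if needed
```

One can check that, under this convention, the block rule is exactly what prevents collisions within a level, while a push can only ever violate (and then restore) the single inequality with the next level up, so the cascade is well defined.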

The push-block dynamics is a one-sided particular case of the Hammersley-type processes, up to rotating the picture by \(\pi /3\) and focusing on the yellow lozenges in Fig. 11 instead of the vertical (gray) ones.

Thus, in the bulk limit Theorem 8.5 informally turns into the statement that one can run the one-sided Hammersley-type process and the push-block dynamics in parallel (both in terms of the vertical lozenges), and the resulting process preserves ergodic Gibbs measures. This statement follows from [83], as does the rather straightforward generalization given next:

Proposition 8.9

Running six one-sided Hammersley-type processes in parallel, where each individual process moves one type of lozenge in one of the directions \(e^{\mathbf {i}\pi k/3}\), \(0\le k\le 5\), at a specified rate \(\upalpha _k \ge 0\), preserves ergodic Gibbs measures on tilings of the whole plane.

8.7 Branching graph perspective

Recall that by \({\mathcal {S}}^c\) we denote the set of all interlacing arrays of infinite depth which have many zeroes along the left border (Definition 8.4). Let us explain how the Markov maps \({\mathbb {L}}_{\tau }\) can be utilized to equip \({\mathcal {S}}^c\times {\mathbb {R}}\) with the structure of an \({\mathbb {R}}\)-graded projective system in the sense of [23]. Projective systems generalize branching graphs such as the Young graph, and the latter play a fundamental role in Asymptotic Representation Theory [24, 84]. The definitions and questions in this subsection are motivated by the connection to branching graphs.

Remark 8.10

The set \({\mathcal {S}}^c\times {\mathbb {R}}\) is “larger” than the more well-studied branching graphs. Namely, in the Young and Gelfand–Tsetlin graphs the vertices are indexed by Young diagrams and signatures, respectively (a signature is a tuple \((\nu _1\ge \cdots \ge \nu _N )\), \(\nu _i\in {\mathbb {Z}}\)), while in \({\mathcal {S}}^c\times {\mathbb {R}}\) the vertices are whole infinite collections of interlacing diagrams \(\lambda ^{(1)}\prec \lambda ^{(2)}\prec \cdots \). This makes it hard to predict which properties of the Young and Gelfand–Tsetlin graphs could translate to \({\mathcal {S}}^c\times {\mathbb {R}}\).

Let \(M_s\), \(s\in {\mathbb {R}}\), be probability measures on \({\mathcal {S}}\) supported by \({\mathcal {S}}^c\) (examples include the one-sided Schur measures as in Theorem 8.5). We call the family \(\{M_s\}_{s\in {\mathbb {R}}}\) coherent if for any \(\tau \ge 0\) and \(s\in {\mathbb {R}}\) we have

$$\begin{aligned} M_s\, {\mathbb {L}}_\tau =M_{s-\tau }. \end{aligned}$$

Coherent families are sometimes known as entrance laws, cf. [38]. Clearly, coherent families form a convex set. Its extreme elements are, by definition, those which cannot be represented as nontrivial convex combinations of other coherent families.
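Note that the coherence relation is consistent under composition: assuming the natural semigroup property \({\mathbb {L}}_{\tau _1}{\mathbb {L}}_{\tau _2}={\mathbb {L}}_{\tau _1+\tau _2}\) (which holds since the \({\mathbb {L}}_\tau \) are the time-\(\tau \) transition operators of a time-homogeneous Markov process), we have

$$\begin{aligned} M_s\,{\mathbb {L}}_{\tau _1}{\mathbb {L}}_{\tau _2}=M_{s-\tau _1}\,{\mathbb {L}}_{\tau _2}=M_{s-\tau _1-\tau _2}, \end{aligned}$$

so the defining relation for the pair \((\tau _1+\tau _2,s)\) is implied by those for \((\tau _1,s)\) and \((\tau _2,s-\tau _1)\).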

Question 6

How can one characterize the extreme coherent families? Can every coherent family be represented in a unique way as a (continual) convex combination of the extremes?

Let us present an example of a coherent family based on Schur processes. Take \(M_s^{\mathrm {Schur}}={\mathbb {P}}[\vec {1}\mid \rho (s)]\), where \(\rho (s)\) is a specialization with \(\alpha _i^{\pm }(s)=\beta _i^-(s)=0\) for all i, \(\gamma ^-(s)=0\), and other parameters given by

$$\begin{aligned} \beta _i^+(s)= \frac{\beta _i^+ e^{s}}{1-\beta _i^++\beta _i^+e^{s}},\qquad \gamma ^+(s)=e^{s}\gamma ^+, \end{aligned}$$

where \(\beta _i^{+}\) and \(\gamma ^{+}\) are fixed and satisfy (8.1). The fact that the family \(\{ M_s^{\mathrm {Schur}} \}\) is indeed coherent follows from Theorem 8.5.
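One can also verify the coherence directly on the level of parameters. Writing \(b_i=\beta _i^+/(1-\beta _i^+)\), we have \(\beta _i^+(s)=b_ie^s/(1+b_ie^s)\), and applying the substitution above with \(e^{-\tau }\) in place of \(e^s\) to the parameter \(\beta _i^+(s)\) gives

$$\begin{aligned} \frac{\beta _i^+(s)\,e^{-\tau }}{1-\beta _i^+(s)+\beta _i^+(s)\,e^{-\tau }} =\frac{b_ie^{s-\tau }}{1+b_ie^{s-\tau }} =\beta _i^+(s-\tau ), \end{aligned}$$

and likewise \(e^{-\tau }\gamma ^+(s)=\gamma ^+(s-\tau )\). This is consistent with \(M_s^{\mathrm {Schur}}\,{\mathbb {L}}_\tau =M_{s-\tau }^{\mathrm {Schur}}\), given that \({\mathbb {L}}_\tau \) rescales the specialization by \(e^{-\tau }\) as in Theorem 8.5.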

Let us discuss two particular examples.

  • When \(\gamma ^+=1\) and all other parameters are zero, \(M_s^{\mathrm {Schur}}\) is the family of single-time distributions of the push-block dynamics under the logarithmic time change \(s=\log t\).

  • When \(\beta _1^+=\beta \in (0,1)\) and all other parameters are zero, the random interlacing array corresponding to \(M_s^{\mathrm {Schur}}\) has the form \(\lambda ^{(N)}=(1^{X_N}0^{N-X_N})\), where \((X_1,X_2,\ldots )\) is the trajectory of the simple random walk with steps 0 and 1 taken with probabilities \(1-\beta _1^+(s)\) and \(\beta _1^+(s)\), respectively. The parameter \(\beta _1^+(s)\) increases from 0 (at \(s=-\infty \)) to 1 (at \(s=+\infty \)). The map \({\mathbb {L}}_\tau \) thus provides a coupling between these simple random walk trajectories with varying up-step probabilities; see the sketch after this list. The concrete action of \({\mathbb {L}}_\tau \) in this example leads to Proposition 2 formulated in the Introduction.
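A minimal Python sketch of sampling such a trajectory at a fixed \(s\) (the function names are ours; the formula for \(\beta _1^+(s)\) is the one displayed above):

```python
import math
import random

def beta1_plus(s, beta):
    # beta_1^+(s) from the formula above; it increases from 0 (as s -> -infinity)
    # to 1 (as s -> +infinity)
    return beta * math.exp(s) / (1 - beta + beta * math.exp(s))

def sample_trajectory(s, beta, n):
    """Sample (X_1, ..., X_n) for the walk with steps 0, 1 taken with
    probabilities 1 - beta_1^+(s) and beta_1^+(s); the interlacing array
    is then recovered via lambda^(N) = (1^{X_N} 0^{N - X_N})."""
    p = beta1_plus(s, beta)
    x, traj = 0, []
    for _ in range(n):
        x += 1 if random.random() < p else 0   # Bernoulli(p) up step
        traj.append(x)
    return traj
```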

Question 7

Are the coherent families \(\{ M_s^{\mathrm {Schur}}\}\) extreme? Are there other interesting (extreme or non-extreme) coherent families?

Let us focus on the case \(\{M_s^{\mathrm {Schur}}\}\) with \(\gamma ^+=1\) and all other parameters zero. The projective system structure allows one to define for each \(s\in {\mathbb {R}}\) the up-down Markov process on \({\mathcal {S}}^c\) which preserves each \(M_s^{\mathrm {Schur}}\) (see [21]). In more detail, the forward Markov generator is defined as

$$\begin{aligned} {\mathbb {L}}^{up}_{s,s+ds}(\varvec{\mu }\rightarrow \varvec{\lambda }) = \frac{M^{\mathrm {Schur}}_{s+ds}(\varvec{\lambda })}{M^{\mathrm {Schur}}_s(\varvec{\mu })}\; {\mathbb {L}}_{ds}(\varvec{\lambda }\rightarrow \varvec{\mu }), \qquad \varvec{\lambda }, \varvec{\mu }\in {\mathcal {S}}^c. \end{aligned}$$
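To see that the up step maps \(M^{\mathrm {Schur}}_{s}\) to \(M^{\mathrm {Schur}}_{s+ds}\) (after which the down step \({\mathbb {L}}_{ds}\) restores \(M^{\mathrm {Schur}}_{s}\) by coherence), sum over \(\varvec{\mu }\) and use that \({\mathbb {L}}_{ds}\) is stochastic:

$$\begin{aligned} \sum _{\varvec{\mu }} M^{\mathrm {Schur}}_{s}(\varvec{\mu })\,{\mathbb {L}}^{up}_{s,s+ds}(\varvec{\mu }\rightarrow \varvec{\lambda }) =M^{\mathrm {Schur}}_{s+ds}(\varvec{\lambda })\sum _{\varvec{\mu }}{\mathbb {L}}_{ds}(\varvec{\lambda }\rightarrow \varvec{\mu }) =M^{\mathrm {Schur}}_{s+ds}(\varvec{\lambda }). \end{aligned}$$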

One can check that this is not the same forward evolution as the push-block generator (under any time change). In particular, \({\mathbb {L}}_{s,s+ds}^{up}\) is time-inhomogeneous. Therefore, the up-down Markov process arising from the branching graphs formalism does not reduce (in restriction to the leftmost particles \(\lambda ^{(N)}_{N}\)) to the stationary dynamics from Sect. 7.

Question 8

The up-down Markov chains associated with distinguished non-extreme coherent families on well-studied branching graphs converge to infinite-dimensional diffusions on the boundary (e.g., [21, 68]). Is there such a limit procedure for the up-down processes associated with \(\{M_s^{\mathrm {Schur}}\}\) or other coherent families on \({\mathcal {S}}^c\times {\mathbb {R}}\)?

Viewing \({\mathcal {C}}\) as a subset of \({\mathcal {S}}^c\), one can similarly define the projective system structure on \({\mathcal {C}}\times {\mathbb {R}}\) associated with the Markov maps \(\mathbf{L} _\tau \) (Definition 6.2). The restrictions of \(\{M_s^{\mathrm {Schur}}\}\) form coherent families on \({\mathcal {C}}\times {\mathbb {R}}\), and all the problems formulated in this subsection also make sense for the smaller object \({\mathcal {C}}\times {\mathbb {R}}\). Note that the up-down Markov chain on each floor \({\mathcal {C}}\times \{s\}\) with \(\gamma ^+=1\) (and all other parameters zero) preserves the TASEP distribution \(\upmu _{e^s}\), but it is not the same as the stationary dynamics discussed in Sect. 7.

8.8 Lifting to additional parameters

The definition of the local Markov maps \(L^{(j)}_{\alpha }\) and \(R^{(j)}_{\alpha }\) which randomly change a single level of an interlacing array is inspired by the bijectivization of a degenerate case of the Yang–Baxter equation. Beyond this degenerate case associated with the Schur symmetric polynomials, the bijectivization can be developed to include models associated with spin Hall-Littlewood or spin q-Whittaker symmetric functions [29, 30]. A scheme of the relevant families of symmetric functions is given in Fig. 12.

Fig. 12
figure 12

A hierarchy of symmetric functions

Let us consider three setups. First, in the spin Hall-Littlewood case, the maps \(L^{(j)}_\alpha \) and \(R^{(j)}_\alpha \) can be obtained by considering sequences of local transitions given in Figures 4 and 5 in [30] (see Sect. 4.2 for more details). Therefore, one can potentially define Markov maps preserving the class of probability measures on interlacing arrays satisfying a version of the Gibbs property associated with the spin Hall-Littlewood functions. These Gibbs measures include the subclass of spin Hall-Littlewood processes. The Markov maps on the spin Hall-Littlewood processes could project (in a way similar to how \({\mathbb {L}}_\tau \) leads to \(\mathbf{L} _\tau \)) into maps acting nicely on distributions of the stochastic six-vertex model and the ASEP with step initial data.

Second, on the spin q-Whittaker side the TASEP is generalized to the q-TASEP [14, 77] and further to the q-Hahn TASEP [34, 71]. A continuous-time version of the q-Hahn TASEP can be found in [6, 82].

Question 9

Do there exist Markov maps on (spin) q-Whittaker processes mapping the time parameter in the q-TASEP or the (continuous-time) q-Hahn TASEP backwards?

Finally, let us discuss a setting which does not immediately fit into the scheme of Fig. 12 but is also of interest. Configurations of the (not necessarily stochastic) six-vertex model with the domain wall boundary conditions (e.g., see [74]) can be encoded as finite-depth interlacing arrays of strict partitions with fixed top row. The Yang–Baxter equation swapping spectral parameters in this model can potentially be bijectivized in the same way as in [30], which should lead to Markov maps acting nicely on the distribution of the six-vertex model. (In the Schur case this is described in Sect. 8.4.)

Question 10

Can these Markov maps be taken to the continuous-time limit similarly to the \(q\rightarrow 1\) limit described in Sect. 6? If so, this would lead to a new non-local sampling algorithm for the distribution of the homogeneous (i.e., with equal spectral parameters) six-vertex model with domain wall boundary conditions. The bulk limit of this latter algorithm should presumably coincide with the Markov process from [13] preserving the distribution of the six-vertex model on a torus.