Abstract
We obtain a new relation between the distributions \(\upmu _t\) at different times \(t\ge 0\) of the continuous-time totally asymmetric simple exclusion process (TASEP) started from the step initial configuration. Namely, we present a continuous-time Markov process with local interactions and particle-dependent rates which maps the TASEP distributions \(\upmu _t\) backwards in time. Under the backwards process, particles jump to the left, and the dynamics can be viewed as a version of the discrete-space Hammersley process. Combined with the forward TASEP evolution, this leads to a stationary Markov dynamics preserving \(\upmu _t\) which in turn brings new identities for expectations with respect to \(\upmu _t\). The construction of the backwards dynamics is based on Markov maps interchanging parameters of Schur processes, and is motivated by bijectivizations of the Yang–Baxter equation. We also present a number of corollaries, extensions, and open questions arising from our constructions.
1 Introduction
1.1 TASEP
The Totally Asymmetric Simple Exclusion Process (TASEP) is a prototypical stochastic model of transport in one dimension. Introduced around 50 years ago in parallel in biology [59, 60] and probability theory [80], it has been extensively studied by a variety of methods.
TASEP is a continuous-time Markov process on the space of particle configurations in \({\mathbb {Z}}\) in which at most one particle per site is allowed. Each particle has an independent exponential clock of rate 1 (that is, the random time T after which the clock rings is distributed as \(\mathrm {Prob}(T>s)=e^{-\uplambda s}\), \(s>0\), where \(\uplambda =1\) is the rate). When the clock rings, the particle jumps to the right by one if the destination is free of a particle. Otherwise, the jump is blocked and nothing happens. See Fig. 1 for an illustration.
In this work we focus on the process with the most well-studied initial condition—the step initial condition. Under it, the particles initially occupy \({\mathbb {Z}}_{<0}\), while \({\mathbb {Z}}_{\ge 0}\) is free of particles. Denote by h(t, x) the TASEP interface (where \(t\in {\mathbb {R}}_{\ge 0}\), \(x\in {\mathbb {Z}}\)), which is obtained by placing a slope \(+1\) or a slope \(-1\) segment over a hole or particle, respectively, with the agreement that the step initial configuration corresponds to \(h(0,x)=|x|\). See Fig. 3 for an illustration. We also denote the TASEP distribution at time t (with step initial condition) by \(\upmu _t\).
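The dynamics just described is straightforward to simulate. Below is a minimal Monte Carlo sketch (ours, not from the paper; all function names are our own) of rate-1 TASEP started from a finite truncation of the step initial configuration.

```python
import random

def tasep_step_ic(num_particles):
    """A finite truncation of the step initial configuration:
    particles at -1, -2, ..., -num_particles (listed right to left)."""
    return list(range(-1, -num_particles - 1, -1))

def run_tasep(x, t_max, rng):
    """Run rate-1 TASEP on the configuration x = [x_1 > x_2 > ...]
    up to time t_max.  Each particle carries an independent rate-1
    exponential clock; a jump to the right succeeds only if the
    destination site is empty (the exclusion rule)."""
    t, n = 0.0, len(x)
    while True:
        # With n independent rate-1 clocks, the next ring comes after
        # an Exp(n) time, and the ringing particle is uniform.
        t += rng.expovariate(n)
        if t > t_max:
            return x
        i = rng.randrange(n)
        if i == 0 or x[i - 1] > x[i] + 1:   # destination x[i]+1 is empty
            x[i] += 1

rng = random.Random(1)
x = run_tasep(tasep_step_ic(50), 10.0, rng)
```

The exclusion rule preserves the strict ordering \(x_1>x_2>\cdots \), and the truncation is harmless as long as the time horizon is small compared to the number of simulated particles.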
It was shown in [76] (see also, e.g., [50, 75, Chapter 4] for an alternative approach based on symmetric functions) that the interface grows linearly with time and, under the hydrodynamic scaling (i.e., linear scaling of space and time), tends to a limit shape which is a parabola:
$$\begin{aligned} \lim _{L\rightarrow \infty }\frac{h(\tau L, \lfloor \varkappa L \rfloor )}{L} = \frac{\tau ^2+\varkappa ^2}{2\tau }, \end{aligned}$$
(1.1)
where \(\varkappa \) and \(\tau \) are scaled space and time, and \(|\varkappa| \le \tau \).
In the past 20 years, starting with [50], much finer results about the asymptotic behavior of TASEP have become available through the tools of Integrable Probability (cf. [18, 25]). This asymptotic analysis revealed that TASEP belongs to the (one-dimensional) Kardar–Parisi–Zhang (KPZ) universality class [33, 72]. In particular, the TASEP interface at time L, on the horizontal \(L^{2/3}\) and vertical \(L^{1/3}\) scales, converges to the Airy\(_2\) process, which is the top line of the Airy\(_2\) line ensemble (about the latter see, e.g., [35]). Furthermore, computations with TASEP allow one to formulate general predictions for all one-dimensional systems in the KPZ class (e.g., see [39, 81]). The progress in understanding multi-time asymptotics of the TASEP interfaces is rapidly advancing at present (see Remark 7.4 for references to recent results).
1.2 The backwards dynamics
The goal of our work is to present a surprising new property of the family of TASEP distributions \(\left\{ \upmu _t \right\} _{t\ge 0}\). We show that the distributions \(\upmu _t\) are coupled in the reverse time direction by a time-homogeneous Markov process with local interactions (the interaction strength depends on the location in the system). Let us now describe this backwards dynamics.
Denote by \({\mathcal {C}}\) the (countable) space of configurations on \({\mathbb {Z}}\) which differ from the step configuration by finitely many TASEP jumps.^{Footnote 1} Consider the continuous-time Markov chain on \({\mathcal {C}}\) which evolves as follows. At each hole there is an independent exponential clock whose rate is equal to the number of particles to the right of this hole. When the clock at a hole rings, the leftmost of the particles that are to the right of the hole instantaneously jumps into this hole (in particular, the particles almost surely jump to the left). See Fig. 2 for an illustration or (7.2) for a description of the generator. Note that, for configurations in \({\mathcal {C}}\), almost surely at most one particle can move at any time moment because there are only finitely many holes with nonzero rate.
The jumping mechanism described above has the following features:

gaps attract neighboring particles from the right;

the rate of attraction is proportional to the size of the gap;

the jumping particle lands inside the gap uniformly at random.
The same features of the jumping mechanism appear in the well-known continuous-space Hammersley process [2, 49], and in the discrete-space Hammersley process [42, 44]. For this reason we call our Markov process (which evolves in discrete space) the backwards Hammersley-type process, or BHP, for short. Note that compared to the well-known continuous-space Hammersley process, our BHP is space-inhomogeneous: the jump rate at a hole also depends on the number of particles to the right of it. The evolutions of the interface under TASEP and the BHP are given in Fig. 3.
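For concreteness, here is a minimal Gillespie-type sketch of the BHP (our own illustration, not from the paper; function names are ours), run on a configuration that differs from the step configuration at finitely many sites; sites below the smallest listed position are treated as occupied.

```python
import random

def bhp_rates(particles):
    """particles: strictly decreasing positions of a configuration in C
    (sites below particles[-1] are implicitly occupied).  Returns the
    list of (hole, rate) pairs: each empty site gets rate equal to the
    number of particles strictly to its right."""
    occupied = set(particles)
    return [(h, sum(1 for p in particles if p > h))
            for h in range(particles[-1] + 1, particles[0])
            if h not in occupied]

def run_bhp(particles, tau_max, rng):
    particles = sorted(particles, reverse=True)
    t = 0.0
    while True:
        rates = bhp_rates(particles)
        total = sum(r for _, r in rates)
        if total == 0:
            return particles          # absorbed at the step configuration
        t += rng.expovariate(total)
        if t > tau_max:
            return particles
        u = rng.uniform(0.0, total)   # pick a hole proportionally to its rate
        for h, r in rates:
            u -= r
            if u <= 0:
                break
        # the leftmost particle to the right of the chosen hole jumps into it
        jumper = min(p for p in particles if p > h)
        particles[particles.index(jumper)] = h
        particles.sort(reverse=True)

# A configuration five TASEP jumps away from the step configuration
# [-1, -2, ..., -8]: the first particle moved 3 steps, the second 2.
start = [2, 0, -3, -4, -5, -6, -7, -8]
end = run_bhp(start, 500.0, rng=random.Random(7))
```

Every BHP jump moves a particle to the left, so the total displacement from the step configuration decreases, and for large running time the simulation is (almost surely) absorbed at the step configuration.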
Let \(\{\mathbf{L }_\tau \}_{\tau \in {\mathbb {R}}_{\ge 0}}\) be the Markov semigroup of the BHP defined above. That is, \(\mathbf{L} _\tau (\mathbf{x} ,\mathbf{y} )\), \(\mathbf{x} , \mathbf{y} \in {\mathcal {C}}\), is the probability that the particle configuration is \(\mathbf{y} \) at time \(\tau \) given that it started at \(\mathbf{x} \) at time 0 (here we use the fact that the BHP is time-homogeneous).
Remark 1.1
The backwards process is well-defined. Indeed, for each initial condition \(\mathbf{x} \in {\mathcal {C}}\) of the backwards process, the set of its possible further states is finite. Therefore, the probability \(\mathbf{L} _{\tau }(\mathbf{x} ,\mathbf{y} )\) for any \(\mathbf{x} ,\mathbf{y} \in {\mathcal {C}}\) is well-defined (and can be obtained by exponentiating the corresponding finite-size piece of the BHP jump matrix).
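To make the last sentence of the remark concrete, here is a toy computation (our own illustration): take \(\mathbf{x} \) to be the configuration one TASEP jump away from the step configuration (the particle at \(-1\) has moved to 0). The only BHP move from \(\mathbf{x} \) is back to the step configuration, at rate 1, so the relevant piece of the jump matrix is \(2\times 2\), and exponentiating it gives \(\mathbf{L} _\tau (\mathbf{x} ,\text {step})=1-e^{-\tau }\).

```python
import math

def mat_exp(Q, tau, terms=60):
    """exp(tau * Q) for a small square matrix Q, summing the Taylor
    series (adequate for tiny matrices and moderate tau)."""
    n = len(Q)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][l] * tau * Q[l][j] for l in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# States: 0 = configuration x (particle at 0, hole at -1), 1 = step.
# The hole at -1 sees exactly one particle to its right, so the jump
# rate from x to step is 1.
Q = [[-1.0, 1.0],
     [0.0, 0.0]]
tau = 0.7
P = mat_exp(Q, tau)   # P[0][1] should equal 1 - exp(-tau)
```

Here \(Q^2=-Q\), so the series collapses to \(e^{\tau Q}=I+(1-e^{-\tau })Q\), matching the closed form above.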
1.3 Main result
Recall that \(\upmu _t\) is the distribution of the TASEP configuration at time t (with the step initial condition). The measure \(\upmu _t\) is supported on the space \({\mathcal {C}}\) for all \(t\ge 0\).
Theorem 1
The BHP maps the TASEP distributions backwards in time. That is, for any \(t,\tau \in {\mathbb {R}}_{\ge 0}\), we have
$$\begin{aligned} \upmu _t\, \mathbf{L} _{\tau } = \upmu _{e^{-\tau }t}. \end{aligned}$$
(1.2)
In detail, this identity means that for any \(\mathbf{x} \in {\mathcal {C}}\) we have
$$\begin{aligned} \sum _{\mathbf{y} \in {\mathcal {C}}} \upmu _t(\mathbf{y} )\, \mathbf{L} _{\tau }(\mathbf{y} ,\mathbf{x} ) = \upmu _{e^{-\tau }t}(\mathbf{x} ). \end{aligned}$$
As \(\tau \rightarrow +\infty \), the right-hand side of (1.2) becomes \(\upmu _0\), which is the delta measure on the step configuration. This agrees with the observation that for any \(\mathbf{x} \in {\mathcal {C}}\) we have^{Footnote 2}
$$\begin{aligned} \lim _{\tau \rightarrow +\infty } \mathbf{L} _{\tau }(\mathbf{x} ,\cdot \,) = \upmu _0. \end{aligned}$$
Theorem 1 leads to a stationary Markov dynamics on the TASEP measure \(\upmu _t\) (it is discussed in Sect. 1.7). In particular, this stationary dynamics brings new identities for expectations with respect to \(\upmu _t\). One of these identities is given in Corollary 7.3.
The simulation depicting the TASEP evolution from the step initial configuration to \(t=350\), and then the action of the BHP on this interface is available online [56]. The interfaces in Fig. 3 are snapshots of this simulation.
1.4 Remark: Reversal of Markov processes
Before discussing the strategy of the proof of Theorem 1, let us mention that TASEP, like any Markov chain (under certain technical assumptions), can be reversed in time, and its reversal is again a Markov chain—but usually time-inhomogeneous and quite complicated.
For TASEP, let \(\{\mathbf{T }_t\}_{t\in {\mathbb {R}}_{\ge 0}}\) be its Markov semigroup. Defining
$$\begin{aligned} \mathbf{T} ^{rev}_{t,s}(\mathbf{x} ,\mathbf{y} ) := \frac{\upmu _s(\mathbf{y} )\, \mathbf{T} _{t-s}(\mathbf{y} ,\mathbf{x} )}{\upmu _t(\mathbf{x} )}, \qquad 0\le s<t, \end{aligned}$$
we see that \(\mathbf{T} ^{rev}\) also maps the TASEP distributions back in time: \(\upmu _t \mathbf{T} ^{rev}_{t,s}=\upmu _s\), \(s<t\). In other words, the probabilities \(\mathbf{T} ^{rev}\) come from the time-reversal of the TASEP conditional distributions. The Markov process corresponding to \(\{\mathbf{T }^{rev}_{t,s}\}\) is time-inhomogeneous, and its interactions are substantially nonlocal. Theorem 1 implies that the BHP \(\{\mathbf{L }_\tau \}\) is a different, much more natural, Markov process which maps the TASEP distributions back in time.
By a different mapping of the distributions we mean the following. One can check that the joint distribution of the TASEP configuration at the two times \(e^{-\tau }t\) and t differs from the joint distribution of \((\mathbf{x} ,\mathbf{y} )\), where \(\mathbf{y} \) is distributed as \(\upmu _t\), and \(\mathbf{x} \) is obtained from \(\mathbf{y} \) by running the BHP process \(\mathbf{L} _{\tau }\).
1.5 Idea of proof of Theorem 1
We prove Theorem 1 in Sections 4, 5, and 6. Here let us outline the main steps.
First, we modify the problem by introducing an extra parameter \(q\in (0,1)\), and consider the TASEP in which the kth particle from the right, \(k\in {\mathbb {Z}}_{\ge 1}\), has jump rate \(q^{k-1}\).^{Footnote 3} Let the distribution at time t of this TASEP (with step initial configuration) be denoted by \(\upmu _t^{(q)}\).
Second, we use the well-known mapping of the TASEP to Schur processes. Schur processes [67] (and their various generalizations, including the Macdonald processes [14]) are one of the central tools in Integrable Probability. The particular Schur processes we employ are probability distributions on particle configurations \(\{x^j_i\}_{1\le i\le j}\) in \({\mathbb {Z}}\times {\mathbb {Z}}_{\ge 1}\) which satisfy an interlacing condition, see Fig. 4.
There exists a Schur process (depending on q and the time parameter \(t\in {\mathbb {R}}_{\ge 0}\)) under which the joint distribution of the leftmost particles \(\{x^N_N\}_{N\in {\mathbb {Z}}_{\ge 1}}\) in each horizontal row is the same as that of the q-dependent TASEP particles \(x_1(t)>x_2(t)>\cdots \) (i.e., this is \(\upmu _t^{(q)}\)). This mapping between TASEP and Schur processes is described in [16], but also follows from earlier constructions involving the Robinson–Schensted–Knuth correspondence. We recall the details in Sect. 3.
This Schur process corresponding to \(\upmu _t^{(q)}\) depends on q via the spectral parameters \(1,q,q^2,\ldots \) attached to the horizontal lines (as indicated in Fig. 4). The new ingredients we bring to Schur processes are Markov maps interchanging two neighboring spectral parameters (say, the jth and the \((j+1)\)th). By a Markov map we mean a way to randomly modify the interlacing particle configuration in \({\mathbb {Z}}\times {\mathbb {Z}}_{\ge 1}\) such that:

At the jth horizontal level the particles almost surely jump to the left;

All other levels are untouched;

The interlacing conditions are preserved;

If the starting configuration was distributed as a Schur process, then the resulting configuration is distributed as a modified Schur process with the jth and the \((j+1)\)th spectral parameters interchanged.
We refer to this as the “L Markov map” since it moves particles to the left (it has a counterpart, the “R Markov map”, but we do not need it for the main result). The L Markov map at each jth level depends only on the ratio of the spectral parameters being interchanged.
Combining the L Markov maps in such a way that they interchange the bottommost spectral parameter 1 with q, then with \(q^2\), then with \(q^3\), and so on, we can move this parameter 1 to infinity, where it “disappears” (see Fig. 9 for an illustration). The resulting distribution of the configuration will again be a Schur process with the same spectral parameters \((1,q,q^2,\ldots )\), but with the modified time parameter, \(t\mapsto qt\). Here we use the fact that the measure does not change under the simultaneous rescaling of the spectral parameters.
Considering the action of this combination of the L Markov maps on the leftmost particles \(\{x_N^N\}\), we arrive at an explicit Markov transition kernel on \({\mathcal {C}}\), denoted by \(\mathbf{L} ^{(q)}\), with the property that (this is Theorem 5.7)
$$\begin{aligned} \upmu _t^{(q)}\, \mathbf{L} ^{(q)} = \upmu _{qt}^{(q)}. \end{aligned}$$
Finally, iterating the action of \(\mathbf{L} ^{(q)}\) and taking the limit as \(q\rightarrow 1\), we arrive at Theorem 1.
1.6 “Toy” example: coupling of Bernoulli random walks
The Schur process computations leading to Theorem 1 have an elementary consequence which we now describe. Its connection to Schur processes is detailed in Sect. 8.7.
Fix \(\beta \in (0,1)\), and let \({\mathsf {b}}_\beta \) be the distribution of the simple random walk in the quadrant \({\mathbb {Z}}_{\ge 0}^2\), under which the walker starts at (0, 0) and goes up with probability \(\beta \) and to the right with probability \(1\beta \), independently at each step.
Consider the continuous-time Markov process on the space of random walk trajectories under which each (up, right) local piece is independently replaced by the (right, up) piece at rate \(m+n-1\), where \((m,n)\in {\mathbb {Z}}_{\ge 1}^2\) are the coordinates of the local piece. See Fig. 5 for an illustration. Clearly, in each triangle \(\{m+n \le K \}\), almost surely at each time moment there is at most one change of the trajectory. Moreover, for different K these processes are compatible, so by the Kolmogorov extension theorem they indeed define a continuous-time Markov process on the full space of random walk trajectories. Denote the resulting Markov semigroup by \(\{\mathbf{D }_\tau \}_{\tau \in {\mathbb {R}}_{\ge 0}}\).
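A finite piece of this dynamics is easy to simulate. The sketch below (ours, not from the paper) stores the initial steps of a trajectory as a list of 'U'/'R' moves and flips adjacent (up, right) pairs at rate \(m+n-1\); we adopt the convention that \((m,n)\) is the upper-right corner of the unit square being flipped, which is one natural reading of the coordinates above (an assumption on our part).

```python
import random

def flip_rates(steps):
    """All adjacent (up, right) pairs in the trajectory, with rates.
    Indexing convention (an assumption): flipping the pair whose unit
    square has upper-right corner (m, n) happens at rate m + n - 1."""
    a, b = 0, 0          # lattice point reached before step i
    out = []
    for i in range(len(steps) - 1):
        if steps[i] == 'U' and steps[i + 1] == 'R':
            m, n = a + 1, b + 1
            out.append((i, m + n - 1))
        if steps[i] == 'U':
            b += 1
        else:
            a += 1
    return out

def run_flips(steps, tau_max, rng):
    """Gillespie simulation of the flip dynamics on a finite trajectory."""
    steps = list(steps)
    t = 0.0
    while True:
        rates = flip_rates(steps)
        total = sum(r for _, r in rates)
        if total == 0:
            return steps              # no (up, right) pair left
        t += rng.expovariate(total)
        if t > tau_max:
            return steps
        u = rng.uniform(0.0, total)
        for i, r in rates:
            u -= r
            if u <= 0:
                break
        steps[i], steps[i + 1] = 'R', 'U'

end = run_flips('URURUR', 200.0, random.Random(3))
```

On the infinite trajectory the process never terminates; on a finite prefix every flip removes one (up, right) inversion, so the simulation is eventually absorbed at a trajectory of the form R...RU...U.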
Proposition 2
For any \(\beta \in (0,1)\) and \(\tau \ge 0\) we have
The action of \(\mathbf{D} _{\tau }\) decreases the parameter \(\beta \) and almost surely moves the trajectory closer to the m (horizontal) axis. By symmetry, one can also define a continuous-time Markov chain which moves the vertical pieces of the trajectory to the left, and increases the parameter \(\beta \). It could be interesting to look at the stationary dynamics—a combination of the two processes running in parallel which does not change \(\beta \)—and understand its large-scale asymptotic behavior. We do not focus on this question in the present work.
1.7 Stationary dynamics on the TASEP measure
Fix \(t\in {\mathbb {R}}_{>0}\). The backwards Hammersley-type process slowed down by a factor of t compensates the time change of the forward TASEP evolution. Running these two processes in parallel thus amounts to a continuous-time Markov process which preserves the TASEP distribution \(\upmu _t\).
One can say that the TASEP distributions \(\upmu _t\) are the “blocking measures” for the stationary dynamics [57] (see also [5]).
The presence of the stationary dynamics on \(\upmu _t\) allows one to obtain new properties of the TASEP measure. In particular, we write down an exact evolution equation for \({\mathbb {E}}\,G(N_t^0)\), where \(N_t^0\) is the number of particles to the right of zero at time t, and G is an arbitrary function. This equation contains one more random quantity—the number of holes immediately to the left of zero. See Corollary 7.3 for details.
Moreover, in Sect. 7 we rederive the limit shape parabola for the TASEP by looking at the hydrodynamics of the process preserving \(\upmu _t\). Indeed, recall that the TASEP local equilibria—the ergodic translation invariant measures on configurations on the full line \({\mathbb {Z}}\) which are also invariant under the TASEP evolution—are precisely the product Bernoulli measures [57]. In the bulk of the BHP, the difference between jump rates of consecutive particles is inessential. Thus, the product Bernoulli measures also serve as local equilibria for the BHP.^{Footnote 4} By looking at the local equilibria, one can write down two hydrodynamic PDEs for the TASEP limit shape: the first is the well-known Burgers equation, and the second is a PDE coming from the BHP, which is specific to the step initial condition. After simplifications, these PDEs lead to the parabola (1.1).
Beyond hydrodynamics, the asymptotic fluctuation behavior of the TASEP measures \(\upmu _t\) as \(t\rightarrow +\infty \) is understood very well by now, starting from [50]. It would be very interesting to extend these results to the combination \(\text {TASEP}+t^{1}\text {BHP}\) which preserves \(\upmu _t\).
1.8 Further extensions
The Markov maps on Schur processes which we introduce to prove our main result, Theorem 1, offer a variety of other applications and open problems. We discuss them in more detail in Sect. 8. Here let us briefly outline the main directions:

The one-dimensional statement (mapping the TASEP distributions back in time) has an extension to two dimensions. Namely, there is a continuous-time Markov process on interlacing particle configurations (as in Fig. 4) which maps back in time the distributions of the anisotropic KPZ growth process on interlacing arrays studied in [16].

Instead of Schur processes, one can consider interlacing configurations of finite depth. This includes probability distributions on boxed plane partitions with weight proportional to \(q^{\mathrm {vol}}\) (where \(\mathrm {vol}\) is the volume under the boxed plane partition). In this setting our constructions produce Markov chains mapping the measure \(q^{\mathrm {vol}}\) to the measure \(q^{-\mathrm {vol}}\), and vice versa. (A simulation is available online [70].) Applying this procedure twice leads to a new sampling algorithm for the measures \(q^{\pm \mathrm {vol}}\).

A certain bulk limit of our two-dimensional Markov maps essentially leads to the growth processes preserving ergodic Gibbs measures on two-dimensional interlacing configurations introduced and studied in [83]. Thus, one can view our Markov maps as exact “pre-bulk” stationary dynamics on two-dimensional interlacing configurations.

Theorem 1 may be interpreted as the statement that the family of measures \(\{\upmu _t\}\) is coherent with respect to a projective system determined by the process \(\{\mathbf{L }_\tau \}\). Projective systems [23] generalize the notion of branching graphs, and the latter play a fundamental role in Asymptotic Representation Theory [24, 84]. (Even further, the distributions of the anisotropic KPZ growth are also coherent, on a projective system whose “levels” are spaces of two-dimensional interlacing configurations.) The framework of projective systems/branching graphs provides many natural questions in this setting.

Structurally, our Markov maps are inspired by the study of stochastic vertex models and bijectivization of the Yang–Baxter equation [29, 30]. Compared with the Schur case, the full Yang–Baxter equation for the quantum \({\mathfrak {sl}}_2\) contains more parameters. In this setting, Schur polynomials should be replaced by the spin Hall–Littlewood or spin q-Whittaker symmetric functions [12, 27]. It is interesting to see how far Theorem 1 can be generalized to other particle systems arising in this framework, including ASEP, various stochastic six vertex models, and random matrix models.

There exists a backwards dynamics for the ASEP started from a family of shock measures [9]. This ASEP backwards dynamics is obtained via a duality. While the shock measures are very different from the step initial configuration, it would be interesting to find connections of Theorem 1 to Markov duality.
Concrete open questions along these directions are formulated and discussed in Sect. 8.
1.9 Outline
In Sects. 2 and 3 we recall the necessary facts about Schur processes, TASEP, and their connection. In Sect. 4 we introduce the L and R Markov maps at the level of interlacing arrays. The action of each such map swaps two neighboring spectral parameters. In Sect. 5 we combine the L Markov maps in such a way that their combination \({\mathbb {L}}^{(q)}\) preserves the class of q-Gibbs measures on interlacing arrays (which includes the Schur processes related to the q-dependent TASEP). We compute the action of \({\mathbb {L}}^{(q)}\) on q-Gibbs measures and the corresponding Schur processes. In Sect. 6 we take the limit \(q\rightarrow 1\), which leads to our main result, Theorem 1. In Sect. 7 we illustrate the relation between the TASEP and the backwards evolutions at the hydrodynamic level by looking at the stationary dynamics on the TASEP distribution \(\upmu _t\). Finally, in Sect. 8 we discuss possible extensions of our constructions indicated in Sect. 1.8 above, and formulate a number of open questions.
2 Ascending Schur processes
This section is a brief review of ascending Schur processes introduced in [67] and their relation to TASEP. More details may be found in, e.g., [18].
2.1 Partitions
A partition \(\lambda = (\lambda _1\ge \cdots \ge \lambda _{\ell (\lambda )}>0 )\), where \(\lambda _i\in {\mathbb {Z}}\), is a finite weakly decreasing sequence of positive integers. We denote \(|\lambda| :=\sum _{i=1}^{\ell (\lambda )}\lambda _i\). We call \(\ell (\lambda )\) the length of the partition. By convention we do not distinguish partitions which differ by trailing zeroes. In this way \(\ell (\lambda )\) always denotes the number of strictly positive parts in \(\lambda \). Denote by \({\mathbb {Y}}\) the set of all partitions including the empty one \(\varnothing \) (by convention, \(\ell (\varnothing )=|\varnothing| =0\)).
2.2 Schur polynomials
Fix \(N\in {\mathbb {Z}}_{\ge 1}\). The Schur symmetric polynomials in N variables are indexed by \(\lambda \in {\mathbb {Y}}\) and are defined as
$$\begin{aligned} s_\lambda (x_1,\ldots ,x_N ) = \frac{\det \left[ x_i^{\lambda _j+N-j}\right] _{i,j=1}^{N}}{\prod _{1\le i<j\le N}(x_i-x_j)}. \end{aligned}$$
If \(N<\ell (\lambda )\), we set \(s_\lambda (x_1,\ldots ,x_N )=0\), by definition.
The Schur polynomials \(s_\lambda \) indexed by all \(\lambda \in {\mathbb {Y}}\) with \(\ell (\lambda )\le N\) form a linear basis in the space \({\mathbb {C}}[x_1,\ldots ,x_N ]^{\mathfrak {S}_N}\) of symmetric polynomials in N variables. Each \(s_\lambda \) is a homogeneous polynomial of degree \(|\lambda| \).
The Schur polynomials are stable in the following sense:
$$\begin{aligned} s_\lambda (x_1,\ldots ,x_N,0 ) = s_\lambda (x_1,\ldots ,x_N ). \end{aligned}$$
(2.1)
This stability allows one to define the Schur symmetric functions \(s_\lambda \), \(\lambda \in {\mathbb {Y}}\), in infinitely many variables. These objects form a linear basis of the algebra of symmetric functions \(\varLambda \). We refer to [61, Ch. I.2] for the precise definition and details on the algebra \(\varLambda \).
2.3 Skew Schur polynomials
The skew Schur polynomials \(s_{\lambda /\varkappa }\), \(\lambda ,\varkappa \in {\mathbb {Y}}\), are defined through the branching rule as follows:
$$\begin{aligned} s_\lambda (x_1,\ldots ,x_N ) = \sum _{\varkappa \in {\mathbb {Y}}} s_{\varkappa }(x_1,\ldots ,x_K )\, s_{\lambda /\varkappa }(x_{K+1},\ldots ,x_N ), \qquad 1\le K\le N. \end{aligned}$$
(2.2)
Indeed, \(s_\lambda (x_1,\ldots ,x_N )\) is a symmetric polynomial in \(x_1,\ldots ,x_K\), and so the skew Schur polynomials in (2.2) are the coefficients of the linear expansion. These skew Schur polynomials are symmetric in \(x_{K+1},\ldots ,x_N\) and satisfy the stability property similar to (2.1). We have \(s_{\lambda /\varnothing }=s_\lambda \).
Let \(\lambda ,\varkappa \in {\mathbb {Y}}\). Plugging just one variable into \(s_{\lambda /\varkappa }\) simplifies this symmetric function. Namely, \(s_{\lambda /\varkappa }(x)\) vanishes unless \(\varkappa \) and \(\lambda \) interlace (notation \(\varkappa \prec \lambda \); equivalently, \(\lambda /\varkappa \) is a horizontal strip):
$$\begin{aligned} \lambda _1\ge \varkappa _1\ge \lambda _2\ge \varkappa _2\ge \cdots . \end{aligned}$$
(2.3)
Moreover,
$$\begin{aligned} s_{\lambda /\varkappa }(x) = x^{|\lambda| -|\varkappa| }, \qquad \varkappa \prec \lambda . \end{aligned}$$
(2.4)
For any \(\lambda \in {\mathbb {Y}}\), the set \(\left\{ \varkappa :\varkappa \prec \lambda \right\} \) is finite.
Iterating (2.2) and breaking down all skew Schur polynomials into single-variable ones, we see that each Schur polynomial has the following form:
$$\begin{aligned} s_\lambda (x_1,\ldots ,x_N ) = \sum _{\varnothing =\lambda ^{(0)}\prec \lambda ^{(1)}\prec \cdots \prec \lambda ^{(N)}=\lambda }\ \prod _{j=1}^{N} x_j^{|\lambda ^{(j)}|-|\lambda ^{(j-1)}|}, \end{aligned}$$
(2.5)
where the sum is taken over all interlacing arrays of partitions of depth N in which the top row coincides with \(\lambda \) (see Fig. 6 for an illustration). In combinatorial language, (2.5) is the representation of a Schur polynomial as a generating function of semistandard Young tableaux, cf. [45].
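This representation is easy to check by brute force. The sketch below (our illustration, not part of the original text) computes a Schur polynomial in two ways: via the classical bialternant (ratio of determinants) formula, and as the sum (2.5) over interlacing arrays, using exact rational arithmetic.

```python
from fractions import Fraction
from itertools import product

def det(M):
    """Determinant by cofactor expansion (fine for small exact matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def schur_det(lam, xs):
    """Schur polynomial via the bialternant formula
    det[x_i^(lam_j + N - j)] / det[x_i^(N - j)]; the denominator is the
    Vandermonde determinant, so the variables must be distinct."""
    N = len(xs)
    lam = list(lam) + [0] * (N - len(lam))
    num = [[xs[i] ** (lam[j] + N - 1 - j) for j in range(N)] for i in range(N)]
    den = [[xs[i] ** (N - 1 - j) for j in range(N)] for i in range(N)]
    return det(num) / det(den)

def strips(lam):
    """All varkappa interlacing with lam: lam_i >= varkappa_i >= lam_{i+1}."""
    ranges = [range(lam[i + 1] if i + 1 < len(lam) else 0, lam[i] + 1)
              for i in range(len(lam))]
    for mu in product(*ranges):
        yield [p for p in mu if p > 0]

def schur_branch(lam, xs):
    """Schur polynomial as the sum over interlacing arrays: the top
    variable carries x_N^(|lam| - |mu|) for each horizontal strip mu."""
    lam = [p for p in lam if p > 0]
    if len(lam) > len(xs):
        return 0
    if not xs:
        return 1 if not lam else 0
    return sum(xs[-1] ** (sum(lam) - sum(mu)) * schur_branch(mu, xs[:-1])
               for mu in strips(lam))

xs = [Fraction(1, 2), Fraction(1, 3), Fraction(2)]
a = schur_det((3, 1), xs)
b = schur_branch((3, 1), xs)
```

The two values agree exactly, and permuting the variables leaves the determinantal formula invariant, illustrating the symmetry of \(s_\lambda \).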
Remark 2.1
If \(N<\ell (\lambda )\), then there are no interlacing arrays of depth N whose top row is \(\lambda \) because at each level one can add at most one nonzero component. Thus, the righthand side of (2.5) automatically vanishes if \(N<\ell (\lambda )\). This agrees with the fact that \(s_\lambda (x_1,\ldots ,x_N )=0\) if \(N<\ell (\lambda )\).
The following two identities for skew Schur polynomials play a fundamental role in our work. The first identity is a straightforward consequence of the symmetry of the Schur polynomials.
Proposition 2.2
For any \(\lambda ,\mu \in {\mathbb {Y}}\) and variables x, y we have
$$\begin{aligned} \sum _{\varkappa \in {\mathbb {Y}}} s_{\lambda /\varkappa }(x)\, s_{\varkappa /\mu }(y) = \sum _{\varkappa \in {\mathbb {Y}}} s_{\lambda /\varkappa }(y)\, s_{\varkappa /\mu }(x). \end{aligned}$$
The sums on both sides are finite.
The second is the skew Cauchy identity, see [61, Ch. I.5].
Proposition 2.3
For any \(\lambda , \mu \in {\mathbb {Y}}\) and variables \(x_1,\ldots ,x_N,y_1,\ldots ,y_M \) we have
$$\begin{aligned} \sum _{\varkappa \in {\mathbb {Y}}} s_{\varkappa /\lambda }(x_1,\ldots ,x_N )\, s_{\varkappa /\mu }(y_1,\ldots ,y_M ) = \prod _{i=1}^{N}\prod _{j=1}^{M}\frac{1}{1-x_iy_j}\, \sum _{\varkappa \in {\mathbb {Y}}} s_{\lambda /\varkappa }(y_1,\ldots ,y_M )\, s_{\mu /\varkappa }(x_1,\ldots ,x_N ). \end{aligned}$$
(2.6)
This is an identity of generating series in \(x_i,y_j\) under the standard geometric series expansion \(\frac{1}{1-x_iy_j}=1+x_iy_j+(x_iy_j)^2+\cdots \). Moreover, (2.6) holds as a numerical identity if \(x_i,y_j\in {\mathbb {C}}\) are such that \(|x_iy_j|<1\) for all i, j.
Remark 2.4
If we set \(\lambda =\mu =\varnothing \) in (2.6), the sum in the right-hand side disappears (because \(s_{\varnothing /\varkappa }=\mathbb {1}_{\varkappa =\varnothing }\)), and we obtain
$$\begin{aligned} \sum _{\varkappa \in {\mathbb {Y}}} s_{\varkappa }(x_1,\ldots ,x_N )\, s_{\varkappa }(y_1,\ldots ,y_M ) = \prod _{i=1}^{N}\prod _{j=1}^{M}\frac{1}{1-x_iy_j}. \end{aligned}$$
(2.7)
Again, this is a numerical identity provided that \(|x_iy_j|<1\) for all i, j.
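The Cauchy identity is easy to test numerically (our illustration, not from the paper): truncate the sum over partitions with at most two parts and compare with the product, for small positive variables where the series converges quickly; Schur polynomials are computed via the branching rule.

```python
from itertools import product

def strips(lam):
    """All varkappa with lam_i >= varkappa_i >= lam_{i+1} (interlacing)."""
    ranges = [range(lam[i + 1] if i + 1 < len(lam) else 0, lam[i] + 1)
              for i in range(len(lam))]
    for mu in product(*ranges):
        yield [p for p in mu if p > 0]

def schur(lam, xs):
    """Schur polynomial via the branching rule (sum over interlacing arrays)."""
    lam = [p for p in lam if p > 0]
    if len(lam) > len(xs):
        return 0.0
    if not xs:
        return 1.0 if not lam else 0.0
    return sum(xs[-1] ** (sum(lam) - sum(mu)) * schur(mu, xs[:-1])
               for mu in strips(lam))

xs, ys = [0.3, 0.1], [0.2, 0.1]
cutoff = 12   # partitions with at most 2 parts, each part <= cutoff
lhs = sum(schur([p for p in (l1, l2) if p > 0], xs)
          * schur([p for p in (l1, l2) if p > 0], ys)
          for l1 in range(cutoff + 1) for l2 in range(l1 + 1))
rhs = 1.0
for x in xs:
    for y in ys:
        rhs *= 1.0 / (1.0 - x * y)
```

Since \(s_\lambda \) in two variables vanishes for \(\ell (\lambda )>2\), restricting to partitions with at most two parts loses nothing, and the truncation error decays geometrically in the cutoff.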
2.4 Specializations
When \(x\ge 0\), we have \(s_{\lambda /\varkappa }(x)\ge 0\) from (2.4). More generally, the Schur polynomials \(s_\lambda (x_1,\ldots ,x_N )\) are nonnegative for real nonnegative \(x_1,\ldots ,x_N\).
We will also need the Plancherel specializations of the Schur functions \(s_\lambda \). These specializations, indexed by \(t\ge 0\), may be defined through the limit
$$\begin{aligned} s_\lambda (\rho _t) := \lim _{K\rightarrow \infty } s_\lambda \Bigl (\tfrac{t}{K},\ldots ,\tfrac{t}{K}\Bigr ), \end{aligned}$$
where \(\frac{t}{K}\) is repeated K times.
Remark 2.5
We also have \(s_\lambda (\rho _t)=t^{|\lambda| }\,\dfrac{\dim \lambda }{|\lambda| !}\), where \(\dim \lambda \) is the dimension of the irreducible representation of the symmetric group \(\mathfrak {S}_{|\lambda| }\) corresponding to \(\lambda \), or, equivalently, the number of standard Young tableaux of shape \(\lambda \).
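The quantity \(\dim \lambda \) in the remark can be computed directly. The sketch below (ours, not from the paper) evaluates it in two ways: by the classical hook length formula, and by the recursion that removes one corner cell (equivalently, the last entry of a standard Young tableau).

```python
from math import factorial

def dim_hook(lam):
    """Number of standard Young tableaux of shape lam via the hook
    length formula: dim lam = |lam|! / (product of hook lengths)."""
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])] if lam else []
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1          # cells to the right in the row
            leg = conj[j] - i - 1      # cells below in the column
            hooks *= arm + leg + 1
    return factorial(sum(lam)) // hooks

def dim_rec(lam):
    """The same count via dim lam = sum of dim mu over all mu obtained
    by removing one corner cell of lam."""
    lam = tuple(p for p in lam if p > 0)
    if not lam:
        return 1
    return sum(dim_rec(lam[:i] + (lam[i] - 1,) + lam[i + 1:])
               for i in range(len(lam))
               if i == len(lam) - 1 or lam[i] > lam[i + 1])
```

As a consistency check, with these functions \(\sum _{|\lambda |=4}(\dim \lambda )^2 = 4! = 24\), in agreement with the representation-theoretic interpretation of \(\dim \lambda \).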
Generic nonnegative specializations will be denoted by \(\rho :\varLambda \rightarrow {\mathbb {R}}\), and we will also use the notation \(s_\lambda (\rho )\) for \(\rho (s_\lambda )\). For the purposes of the present paper, \(\rho \) will be either a Plancherel specialization or a substitution of finitely many nonnegative variables into the symmetric function.
Remark 2.6
A classification of Schurpositive specializations (that is, algebra homomorphisms \(\varLambda \rightarrow {\mathbb {R}}\) which are nonnegative on Schur functions) is known and is equivalent to the celebrated Edrei–Thoma theorem. See, for example, [24] for a modern account discussing various equivalent formulations.
2.5 Schur processes
Schur measures and processes are probability distributions on partitions or sequences of partitions whose probability weights are expressed through Schur polynomials in a certain way. They were introduced in [66, 67].
A Schur measure is a probability measure on \({\mathbb {Y}}\) with probability weights depending on two nonnegative specializations \(\rho _1,\rho _2\):
$$\begin{aligned} {\mathbb {P}}(\lambda ) = \frac{1}{Z}\, s_\lambda (\rho _1)\, s_\lambda (\rho _2), \qquad \lambda \in {\mathbb {Y}}. \end{aligned}$$
The normalizing constant Z can be computed using the Cauchy identity (2.7) (provided that the infinite sum converges).
Schur processes are probability measures on sequences of partitions generalizing the Schur measures. We will only need the particular case of ascending Schur processes. These are probability measures on interlacing arrays \(\lambda ^{(1)}\prec \lambda ^{(2)}\prec \cdots \prec \lambda ^{(N)}\)
(for some fixed N) depending on a nonnegative specialization \(\rho \) and \(c_1,\ldots ,c_N\ge 0\):
$$\begin{aligned} {\mathbb {P}}[\vec c\mid \rho ]\bigl (\lambda ^{(1)},\ldots ,\lambda ^{(N)}\bigr ) = \frac{1}{Z}\, \prod _{j=1}^{N} s_{\lambda ^{(j)}/\lambda ^{(j-1)}}(c_j)\; s_{\lambda ^{(N)}}(\rho ), \qquad \lambda ^{(0)}:=\varnothing . \end{aligned}$$
(2.10)
The normalizing constant has the form [this follows from (2.2) and (2.7)]:
$$\begin{aligned} Z = \sum _{\mu \in {\mathbb {Y}}} s_{\mu }(c_1,\ldots ,c_N )\, s_{\mu }(\rho ) \end{aligned}$$
(2.11)
(provided that the series converges). We call N the depth of a Schur process. We will sometimes call the \(c_i\)’s the spectral parameters of the Schur process \({\mathbb {P}}[\vec c\mid \rho ]\).
The next statement immediately follows from (2.2) and the skew Cauchy identity:
Proposition 2.7
Under the Schur process (2.10), the marginal distribution of each \(\lambda ^{(K)}\), \(1\le K\le N\), is given by the Schur measure \({\mathbb {P}}[(c_1,\ldots ,c_K )\mid \rho ]\).
2.6 Schur processes of infinite depth
Let us denote by \({\mathcal {S}}\) the set of interlacing arrays of infinite depth \(\{\lambda ^{(j)}\}_{j\in {\mathbb {Z}}_{\ge 0}}\), where \(\lambda ^{(j)}\in {\mathbb {Y}}\) and \(\lambda ^{(j-1)}\prec \lambda ^{(j)}\) (cf. Fig. 6 for an illustration).
Remark 2.8
The interlacing array in Fig. 6 and the one in Fig. 4 in the Introduction are related by \(x^N_k=\lambda ^{(N)}_k-k\). We work with the \(\{\lambda ^{(N)}_k\}\) notation throughout the paper.
By the Kolmogorov extension theorem, a measure on \({\mathcal {S}}\) is uniquely determined by a collection of compatible joint distributions of \(\{\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N)}\}_{N\ge 1}\). If these joint distributions satisfy the \(\vec c\)-Gibbs property, then the resulting measure on \({\mathcal {S}}\) is \(\vec c\)-Gibbs.
Thus, Proposition 2.7 implies the following extension of the definition of a Schur process. Given an infinite sequence \(c_1,c_2,\ldots , \) of nonnegative reals such that the sums like (2.11) converge for all N, one can define the Schur process \({\mathbb {P}}[\vec c\mid \rho ]\) of infinite depth, i.e., a probability measure on \({\mathcal {S}}\). Indeed, this is because the distributions (2.10) for different N are compatible with each other by Proposition 2.7, so the measure on \({\mathcal {S}}\) with the desired finitedimensional distributions exists.
2.7 \(\vec c\)-Gibbs measures
Fix nonnegative reals \(c_1,c_2,\ldots \). A probability distribution on \({\mathcal {S}}\) is called \({\vec {c}}\)-Gibbs if for any N, given \(\lambda ^{(N)}=\lambda \), the conditional distribution of the bottom part \(\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N-1)}\prec \lambda \) of the interlacing array has the form
$$\begin{aligned} \frac{\prod _{j=1}^{N} s_{\lambda ^{(j)}/\lambda ^{(j-1)}}(c_j)}{s_{\lambda }(c_1,\ldots ,c_N )}, \qquad \lambda ^{(0)}:=\varnothing ,\quad \lambda ^{(N)}:=\lambda . \end{aligned}$$
(2.12)
The expression in the denominator is simply the normalizing constant. One can say that each interlacing array in (2.12) is weighted proportionally to the corresponding term in the expansion (2.5). Note that the \(\vec c\)-Gibbs property depends on the order of the \(c_i\)’s, but the normalizing constant in (2.12) does not.
The next lemma is a straightforward consequence of (2.12).
Lemma 2.9
Fix any \(j\ge 2\). Under a \(\vec c\)-Gibbs measure, the conditional probability of \(\lambda ^{(j)}\) given all \(\lambda ^{(i)}\), with \(i\ne j\), is proportional to \(s_{\lambda ^{(j+1)}/\lambda ^{(j)}}(c_{j+1})\, s_{\lambda ^{(j)}/\lambda ^{(j-1)}}(c_j)\).
Denote the space of all \(\vec c\)-Gibbs measures on \({\mathcal {S}}\) by \(\mathfrak {G}_{\vec c}\). Note that this space does not change if we multiply all the parameters by the same positive number: \(\mathfrak {G}_{\vec c}=\mathfrak {G}_{a\cdot \vec c}\), \(a>0\). Indeed, this follows from (2.12) and the homogeneity of the Schur polynomials.
Remark 2.10
When all \(c_i\equiv 1\), the conditional distribution (2.12) becomes uniform (on the set of all interlacing arrays of depth N with top row \(\lambda \)). This uniform Gibbs case justifies the name \(\vec c\)-Gibbs in the general situation.
The Schur process \({\mathbb {P}}[\vec c\mid \rho ]\) is a particular example of a \(\vec c\)-Gibbs measure. The full classification of \(\vec c\)-Gibbs measures is known only in several particular cases. In the uniform case \(c_i\equiv 1\) this is the celebrated Edrei–Voiculescu theorem (see Sect. 8.1 and also, e.g., [22] for a modern account discussing various equivalent formulations). When the \(c_i\)’s form a geometric sequence, the classification was obtained much more recently in [46] (see also [47] for a generalization).
3 Schur processes and TASEP
In this section we recall a coupling between TASEP (with step initial configuration and particledependent speeds) and a marginal of an ascending Schur process. This mapping can be seen as a consequence of the column Robinson–Schensted–Knuth insertion [64, 65, 85]. One can also define a continuoustime Markov dynamics on interlacing arrays whose marginal is TASEP [16] (see also [25]).
3.1 TASEP
Let \(c_1,\ldots ,c_N,\ldots \) be positive reals. The continuous-time TASEP (Totally Asymmetric Simple Exclusion Process) with step initial condition and speeds \(\vec c\) is defined as follows. It is a Markov process on particle configurations \(\mathbf{x} (t)=(x_1(t)>x_2(t)>\cdots )\) on the integer lattice, such that

The initial particles’ locations are \(x_i(0)=-i\), \(i=1,2,\ldots \) (this is the step initial configuration);

The configuration has the rightmost particle \(x_1\);

The configuration is densely packed far to the left, that is, for all large enough M (where the bound on M depends on t) we have \(x_{M}(t)=-M\);

There is at most one particle per site.
Denote the space of such left-packed and right-finite particle configurations on \({\mathbb {Z}}\) by \({\mathcal {C}}\).
The continuous-time Markov evolution of TASEP proceeds as follows. Each particle \(x_i\) has an independent exponential clock with rate \(c_i\). That is, the time before \(x_i\) attempts to jump is an exponential random variable: \(\mathrm {Prob}(\text {time}>t)=e^{-c_i t}\), \(t\ge 0\). (We will refer to the \(c_i\)’s as the particle speeds.) When the clock of \(x_i\) rings, the particle jumps to the right by one if the destination is not occupied. If the destination of the jumping particle is occupied, the jump is forbidden and the particle configuration does not change. Because the process starts from the step initial configuration, only finitely many particles are free to jump at any particular time. Therefore at any time almost surely at most one jump happens. See Fig. 7 for an illustration.
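For readers who wish to experiment, the dynamics just described admits a short rejection-free simulation. The following sketch is ours (not from the original sources); it tracks only the first n particles, which is exact because a particle is only ever blocked by its right neighbor.

```python
import random

def tasep_step_ic(speeds, t_max, seed=None):
    """Simulate TASEP with step initial condition x_i(0) = -i and
    particle-dependent jump rates speeds[k-1] for particle k, up to
    time t_max.  Tracking the first n = len(speeds) particles is exact,
    since a particle is only ever blocked by the particle to its right."""
    rng = random.Random(seed)
    n = len(speeds)
    x = [-k for k in range(1, n + 1)]          # x[0] > x[1] > ... > x[n-1]
    t = 0.0
    while True:
        # only unblocked particles can change the configuration
        free = [k for k in range(n) if k == 0 or x[k - 1] > x[k] + 1]
        total = sum(speeds[k] for k in free)
        t += rng.expovariate(total)            # time of the next actual jump
        if t > t_max:
            return x
        r = rng.uniform(0, total)              # pick the jumper ~ its rate
        for k in free:
            r -= speeds[k]
            if r <= 0:
                break
        x[k] += 1
```

With all speeds equal to 1 this simulates the homogeneous TASEP of Sect. 1.1.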
3.2 Coupling to a Schur process
Fix \(N\in {\mathbb {Z}}_{\ge 1}\), positive reals \(c_1,\ldots ,c_N \), and \(t\ge 0\). Consider the Schur process \({\mathbb {P}}[\vec c\mid \rho _t]\) defined by (2.10), where \(\rho _t\) is the Plancherel specialization. Note that the series for the normalizing constant (2.11) always converges because
and the last expression is an entire function in t and \(c_i\). Since this procedure works for all N, we can view \({\mathbb {P}}[\vec c\mid \rho _t]\) as a Schur process of infinite depth, i.e., a probability measure on \({\mathcal {S}}\).
When \(t=0\), \({\mathbb {P}}[\vec c\mid \rho _0]\) is concentrated on the single interlacing array densely packed at zero, that is, with each \(\lambda ^{(j)}=(0,\ldots ,0 )\) (j times).
The next result appears in [16], but alternatively follows from much earlier constructions involving Robinson–Schensted–Knuth correspondences [64, 65, 85].
Theorem 3.1
Fix \(t\ge 0\) and particle speeds \(c_1,c_2,\ldots \), and consider the TASEP as in Sect. 3.1 at time t. Then we have equality of joint distributions at the fixed time t:
where \(\lambda ^{(i)}\) are the random partitions coming from the Schur process \({\mathbb {P}}[\vec c\mid \rho _t]\) described above.
Remark 3.2
A dynamical version of this result is also proven in [16]: there exists a continuous-time Markov chain on interlacing arrays (even a whole family of them, cf. [25, 26]) whose action on a Schur process \({\mathbb {P}}[\vec c\mid \rho _t]\) continuously increases the parameter t. We will refer to the dynamics from [16] as the push-block process (see Definition 8.8 for details). For the push-block process on interlacing arrays, (3.1) holds as an equality of joint distributions of Markov processes. In other words, (3.1) is also true for multi-time joint distributions of these processes. However, we do not need this dynamical result for most of our constructions.
4 Markov maps
This section introduces our main objects—the Markov maps \(L_\alpha ^{(j)}\) and \(R_{\alpha }^{(j)}\) which randomly change the jth row \(\lambda ^{(j)}\) in an interlacing array while keeping all other rows intact. These maps act on \(\vec c\)-Gibbs measures by permuting spectral parameters.
4.1 First level
Let us first describe the maps for \(j=1\) (the simplest nontrivial case) to illustrate their structure and properties. We use the shorthand notation \(\lambda ^{(2)}=(\lambda _1,\lambda _2)\) and \(\lambda ^{(1)}=\varkappa _1\). The interlacing means that \(\lambda _2\le \varkappa _1\le \lambda _1\).
Definition 4.1
(Truncated geometric distribution) Let \(A\in {\mathbb {Z}}_{\ge 0}\) and \(\alpha \in [0,1]\). A discrete random variable \(Y=Y_\alpha (A)\) on \(\left\{ 0,1,\ldots ,A \right\} \) is called truncated geometric if it has the distribution
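For concreteness, a sampler can be sketched as follows, assuming the weights \(\mathrm {Prob}(Y=y)=(1-\alpha )\alpha ^y\) for \(0\le y<A\) and \(\mathrm {Prob}(Y=A)=\alpha ^A\) (a hypothetical form, chosen to be consistent with Remark 4.3 below): a geometric random variable with ratio \(\alpha \), with all mass above A collapsed onto A. The function name is ours.

```python
import math
import random

def sample_truncated_geometric(alpha, A, rng=random):
    """Sample Y_alpha(A) on {0, ..., A}, assuming the weights
    (1 - alpha) * alpha**y for y < A and alpha**A at y = A.
    Endpoints match Remark 4.3: Y_0(A) = 0 and Y_1(A) = A a.s."""
    if alpha <= 0.0:
        return 0
    if alpha >= 1.0:
        return A
    u = 1.0 - rng.random()                     # u in (0, 1]
    # inverse CDF of the untruncated geometric, then collapse onto A
    y = int(math.log(u) / math.log(alpha))
    return min(y, A)
```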
Definition 4.2
(The L and R maps, first level) For \(\alpha \in [0,1]\), let \(L_\alpha ^{(1)}\) be the Markov map whose action on the pair \(\varkappa \prec \lambda \) does not change \(\lambda \), and replaces \(\varkappa _1\) as follows:
The action of \(R_\alpha ^{(1)}\) is simply the reflection of \(L_\alpha ^{(1)}\):
The notation for the L and R operators is suggested by the directions in which they move \(\varkappa _1\). See Fig. 8 for an illustration.
Remark 4.3
If \(\alpha =1\), both \(L_1^{(1)}\) and \(R_1^{(1)}\) are identity operators. If \(\alpha =0\), then \(Y_0(A)=0\) almost surely, and so the actions of both \(L_0^{(1)}\) and \(R_0^{(1)}\) lead to the maximal possible displacement of \(\varkappa _1\), respectively, to the left or to the right.
The next lemma plays a key role and will later generalize to other rows of the interlacing array. Denote by \(s_i\), \(i=1,2,\ldots \) the ith elementary permutation of the spectral parameters,
Lemma 4.4
If \(c_1\ge c_2\) and \(c_1\ne 0\), then the Markov operator \(L_{c_2/c_1}^{(1)}\) maps \(\mathfrak {G}_{\vec c}\) to \(\mathfrak {G}_{s_1\vec c}\). If \(c_1\le c_2\) and \(c_2\ne 0\), then the Markov operator \(R_{c_1/c_2}^{(1)}\) maps \(\mathfrak {G}_{\vec c}\) to \(\mathfrak {G}_{s_1\vec c}\).
Proof
Let us consider only \(L_\alpha ^{(1)}\); the case of \(R_\alpha ^{(1)}\) is analogous. By Remark 4.3, when \(c_1=c_2\), \(L_1^{(1)}\) is the identity. But in this case \(s_1 \vec c=\vec c\), so there is nothing to prove.
We can assume that \(c_1>c_2\). Denote \(\alpha =c_2/c_1\). Using the \(\vec c\)-Gibbs property, we see that given \(\lambda =(\lambda _1,\lambda _2)\), the conditional probability weight of \(\varkappa =(\varkappa _1)\) is proportional to \(s_\varkappa (c_1)s_{\lambda /\varkappa }(c_2)\), which by (2.4) leads to
The action of the operator \(L_{\alpha }^{(1)}\) on this distribution is readily computed:
The final expression is the conditional probability weight of \(\varkappa _1\) given \(\lambda \) under the \(s_1\vec c\)-Gibbs property. This completes the proof. \(\square \)
Remark 4.5
1. In words, Lemma 4.4 states that the action of the L or R operators reverses the geometric distribution on the segment \([\lambda _2,\lambda _1]\).
2. Note also that we apply \(L_{c_2/c_1}^{(1)}\) only if \(c_2\le c_1\) (and the opposite ordering restriction for \(R_{c_1/c_2}^{(1)}\)). If \(c_1<c_2\) in \(L_{c_2/c_1}^{(1)}\), then the algebraic computations in the proof of Lemma 4.4 are still valid, but the operator itself loses its probabilistic meaning as some of its matrix elements become negative.
4.2 Remark: Relation to bijectivization
The Markov maps of Definition 4.2 which interchange the spectral parameters were suggested by the idea of bijectivization of the Yang–Baxter equation first employed in [30] (see also [1, 29]).
First, note that one can deduce the symmetry of the skew Schur polynomials (Proposition 2.2) from the Yang–Baxter equation. This argument is present, for example, in [12, Theorem 3.5] in a \(U_q(\widehat{{\mathfrak {s}}{\mathfrak {l}}_2})\) setting with additional parameters q, s (the Schur case corresponds to \(q=s=0\)).
Next, bijectivization refines the Yang–Baxter equation into a pair of forward and backward local Markov moves which randomly update the configuration. Here the locality means the following. Encode \(\varkappa _1\) using the occupation variables \(\{\eta _x\}_{x\in {\mathbb {Z}}}\), where \(\eta _{\varkappa _1}=1\) and all other \(\eta _x\equiv 0\). The application of a single local Markov move (forward or backward) would change one of the occupation variables.
Then, considering a sequence of forward or backward moves leads, respectively, to the L and R operators. This can be seen by setting \(t=s=0\) in [30, Figure 4], taking a sequence of these moves, and passing from the occupation variables (equivalently, vertical arrows in the notation of that paper) to the elements of the interlacing array. For brevity, we do not explain the details of deriving the L and R Markov operators from the bijectivization, as an independent proof of the key Lemma 4.4 is rather straightforward.
4.3 General case
Let us now describe the Markov maps \(L_{\alpha }^{(j)}\) and \(R_{\alpha }^{(j)}\) for general j. This is an extension of Definition 4.2. For the next definition we use the convention \(\lambda ^{(j)}_0 = \infty \) and \(\lambda ^{(j)}_{j+1} = 0\) for all \(j \in {\mathbb {Z}}_{\ge 0}\) (recall that by Remark 2.1 in the jth row of the interlacing array there cannot be more than j nonzero entries).
Definition 4.6
(The L and R maps, general case) Fix \(\alpha \in [0,1]\) and \(j \ge 2\). Let \(L_{\alpha }^{(j)}\) be the Markov map whose action on interlacing arrays of infinite depth \(\{\lambda ^{(i)}\}_{i\ge 1}\) does not change \(\lambda ^{(i)}\) for \(i \ne j\), and replaces \(\lambda ^{(j)}\) as follows:
where \(\{ Y_\alpha ^{(k)} \}_{k=1}^j\) is a collection of independent truncated geometric random variables with \(Y_\alpha ^{(k)}\) distributed as \(Y_{\alpha }\bigl (\lambda ^{(j)}_{k} - \max \{\lambda ^{(j-1)}_k , \lambda ^{(j+1)}_{k+1} \} \bigr )\).
The action of \(R_{\alpha }^{(j)}\) is simply the reflection of \(L_{\alpha }^{(j)}\):
where \(\{ Y_\alpha ^{(k)} \}_{k=1}^j\) is a collection of independent truncated geometric random variables with \(Y_\alpha ^{(k)}\) distributed as \(Y_{\alpha }\bigl (\min \{ \lambda ^{(j-1)}_{k-1}, \lambda ^{(j+1)}_{k} \} - \lambda ^{(j)}_{k} \bigr )\).
In words, under both \(L_\alpha ^{(j)}\) and \(R_\alpha ^{(j)}\) each \(\lambda _k^{(j)}\), \(k=1,\ldots ,j \), is randomly independently moved to the left (resp., to the right) within the segment
to which \(\lambda ^{(j)}_k\) is constrained by interlacing. The moves of each \(\lambda _{k}^{(j)}\) are exactly the same as on the first level and are governed by the truncated geometric random variables.
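The verbal description above can be sketched in code. The following is our reading of Definition 4.6 (each part of \(\lambda ^{(j)}\) is independently resampled to the lower interlacing bound plus \(Y_\alpha \) of its distance to that bound, matching the first-level maps and Remark 4.3); the function names and the assumed truncated-geometric weights are ours, not a quotation.

```python
import math
import random

def trunc_geom(alpha, A, rng=random):
    # geometric with ratio alpha collapsed onto {0, ..., A}
    # (hypothetical form: Y_0(A) = 0 and Y_1(A) = A a.s., cf. Remark 4.3)
    if alpha <= 0.0:
        return 0
    if alpha >= 1.0:
        return A
    return min(int(math.log(1.0 - rng.random()) / math.log(alpha)), A)

def apply_L(arr, j, alpha, rng=random):
    """Sketch of L_alpha^{(j)} on a finite-depth array.  arr[i] is the row
    lambda^{(i+1)} as a non-increasing list of i+1 nonnegative integers;
    the array must have depth at least j + 1.  Row lambda^{(j)} is pushed
    left, each part independently, within its interlacing segment."""
    above = arr[j]                           # lambda^{(j+1)}
    below = arr[j - 2] if j >= 2 else []     # lambda^{(j-1)} (empty if j = 1)
    row = arr[j - 1]                         # lambda^{(j)}, modified in place
    for k in range(j):                       # 0-based k is part k + 1
        below_k = below[k] if k < len(below) else 0   # convention: 0 past the row
        lo = max(below_k, above[k + 1])      # lower interlacing bound
        row[k] = lo + trunc_geom(alpha, row[k] - lo, rng)
    return arr
```

As a sanity check, \(\alpha =1\) acts as the identity and \(\alpha =0\) pushes every part of the row to its lower bound, in agreement with Remark 4.3.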
The next statement is a generalization of Lemma 4.4. Recall that \(s_i\) denotes the ith elementary permutation of the spectral parameters \(\vec {c}\).
Proposition 4.7
Fix \(j\ge 1\). If \(c_j\ge c_{j+1}\) and \(c_j \ne 0\), then the Markov operator \(L_{c_{j+1}/c_j}^{(j)}\) maps \(\mathfrak {G}_{\vec c}\) to \(\mathfrak {G}_{s_j\vec c}\). If \(c_j\le c_{j+1}\) and \(c_{j+1}\ne 0\), then the Markov operator \(R_{c_j/c_{j+1}}^{(j)}\) maps \(\mathfrak {G}_{\vec c}\) to \(\mathfrak {G}_{s_j\vec c}\).
Proof
Let us consider \(L^{(j)}_{\alpha }\) only; the case of \(R_{\alpha }^{(j)}\) is analogous. Denote \(\alpha = c_{j+1} / c_j\). We may assume that \(\alpha \ne 1\) as otherwise there is nothing to prove. Let us also take \(j\ge 2\) as the case \(j=1\) is Lemma 4.4.
Using the \(\vec {c}\)-Gibbs property, we see that given all \(\lambda ^{(i)}\) with \(i\ne j\), the conditional probability weight of \(\lambda ^{(j)}\) is proportional to \(s_{\lambda ^{(j)} / \lambda ^{(j-1)}}(c_j) \,s_{\lambda ^{(j+1)}/ \lambda ^{(j)}}(c_{j+1})\) (cf. Lemma 2.9). By (2.4), this implies
where
For \(\lambda ^{(j)}= (\lambda ^{(j)}_1, \dots , \lambda ^{(j)}_j)\), the operator \(L^{(j)}_{\alpha }\) acts on each \(\lambda ^{(j)}_k\) independently. Thus we may write \(L^{(j)}_{\alpha }\) as a product of local Markov maps which act on each segment (4.2) in the same manner as in Sect. 4.1. Similarly to Lemma 4.4, we conclude that the action of \(L_\alpha ^{(j)}\) reverses each local geometric distribution \(\mathrm {P}(m\mid a,b,c,d)\). Therefore, \(L_\alpha ^{(j)}\) turns (4.3) into the conditional probability weight of \(\lambda ^{(j)}\) under an \(s_j\vec c\)-Gibbs measure. This completes the proof. \(\square \)
5 Action on q-Gibbs measures
This section shows that suitably composed L maps preserve the class of q-Gibbs measures on interlacing arrays, and describes how a q-Gibbs measure changes under this action.
5.1 q-Gibbs property
Fix \(q \in (0,1]\). A \(\vec {c}\)-Gibbs measure on the set \({\mathcal {S}}\) of infinite interlacing arrays is called q-Gibbs if \(c_i =q^{i-1}\) for all \(i \in {\mathbb {Z}}_{\ge 1}\). We denote the set of q-Gibbs measures by \(\mathfrak {G}_q\).
Remark 5.1
One can define the volume of an interlacing array of finite depth N by
Then the q-Gibbs property is equivalent to saying that, conditioned on \(\lambda ^{(N)}\), the probability weight of the interlacing array \(\lambda ^{(1)}\prec \ldots \prec \lambda ^{(N)}\) is proportional to \(q^{\mathrm {vol}(\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N)} )}\) (e.g., see [63]). Note that sometimes (in particular, in [46]) the term “q-Gibbs measures” refers to the elements of \(\mathfrak {G}_{q^{-1}}\) in our notation.
When \(q=1\), the q-Gibbs measures correspond to the uniform conditioning property (cf. Remark 2.10). Throughout this section we work under the assumption \(0<q<1\).
5.2 Iterated L map
When \(c_i=q^{i-1}\), we have \(c_{i+1}<c_i\) for all i. By Proposition 4.7, this means that the action of \(L^{(i)}_{c_{i+1}/c_i}\) permutes the spectral parameters \(q^{i}\) and \(q^{i-1}\). Iterating such \(L^{(i)}\) from \(i=1\) to infinity and keeping track of the permutations of the spectral parameters, we arrive at the following definition:
Definition 5.2
(Iterated L map) Let \({\mathsf {M}}\) be a probability measure on \({\mathcal {S}}\) and set \({\mathsf {M}}^{(0)} := {\mathsf {M}}\). Denote, inductively, \({\mathsf {M}}^{(j)}:={\mathsf {M}}^{(j-1)}L^{(j)}_{q^j}\) (see Fig. 9 for an illustration). Let \({\mathbb {L}}^{(q)}\) be the Markov map which acts on probability measures on \({\mathcal {S}}\) by
Let us explain why \({\mathbb {L}}^{(q)}\) is well-defined. Recall that a probability measure on \({\mathcal {S}}\) is uniquely determined by a family of compatible joint distributions of \((\lambda ^{(1)},\ldots ,\lambda ^{(N)})\) (cf. Sect. 2.6). Next, for all \(K > N\) we have \({\mathsf {M}}^{(K)}(\lambda ^{(1)} , \dots , \lambda ^{(N)}) = {\mathsf {M}}^{(N+1)}(\lambda ^{(1)}, \dots , \lambda ^{(N)})\). This guarantees that the collection of measures \(\{ {\mathsf {M}}^{(N+1)}(\lambda ^{(1)} , \dots , \lambda ^{(N)}) \}_{N\ge 1}\) is indeed compatible, and thus defines a measure on \({\mathcal {S}}\) which we denote by \({\mathsf {M}}\,{\mathbb {L}}^{(q)}\).
5.3 q-Gibbs harmonic families
Let \({\mathsf {M}}\) be a q-Gibbs measure on \({\mathcal {S}}\). By the q-Gibbs property, for each \(N\ge 1\) the probability weight of \(\lambda ^{(1)},\ldots ,\lambda ^{(N)}\) is represented as a product of the marginal probability weight of \(\lambda ^{(N)}\) and a q-Gibbs factor corresponding to the conditional distribution of \(\lambda ^{(1)}\prec \ldots \prec \lambda ^{(N-1)} \) given \(\lambda ^{(N)}\). This allows us to write
where \(\varphi _N\) is a function on the Nth level of the array defined as
Because the functions \(\varphi _N\) for different N come from the same q-Gibbs measure \({\mathsf {M}}\), they must be compatible. This compatibility relation reads
for all \(N\ge 1\) and all \(\mu =(\mu _1,\ldots ,\mu _{N-1})\) on the \((N-1)\)st level of the array (at the zeroth level we set \(\varphi _0(\varnothing )=1\), by agreement). We call a family of functions \(\{\varphi _N\}\) satisfying (5.3) and \(\varphi _0(\varnothing )=1\) a q-Gibbs harmonic family. The term “harmonic” comes from the Vershik–Kerov theory of the boundary of branching graphs (e.g., see [55]). Clearly, a q-Gibbs measure on \({\mathcal {S}}\) is uniquely determined by its associated q-Gibbs harmonic family \(\{\varphi _N\}\).
5.4 Action of the iterated L map on q-Gibbs measures
If \({\mathsf {M}}\in \mathfrak {G}_q\), then the action of \({\mathbb {L}}^{(q)}\) (that is, the sequence of the Markov maps \(L^{(j)}_{q^j}\)) on \({\mathsf {M}}\) swaps the spectral parameters as in Fig. 9, moving \(c_1=1\) all the way up to infinity where it “disappears”. The resulting spectral parameters \((q,q^2,q^3,\ldots )\) are proportional to the original ones. This suggests that \({\mathbb {L}}^{(q)}\) should preserve the class of q-Gibbs measures. The next result shows that this is indeed the case, and also describes the action of \({\mathbb {L}}^{(q)}\) on \(\mathfrak {G}_q\) in the language of harmonic families.
Theorem 5.3
The Markov map \({\mathbb {L}}^{(q)}\) preserves \(\mathfrak {G}_q\), the set of q-Gibbs measures on \({\mathcal {S}}\). More precisely, \({\mathbb {L}}^{(q)}\) maps each q-Gibbs harmonic family \(\{ \varphi _N\}_{N \in {\mathbb {Z}}_{\ge 1}}\) to a new q-Gibbs harmonic family \(\{\hat{\varphi }_N \}_{N \in {\mathbb {Z}}_{\ge 1}}\) as follows:
Proof
The second equality in (5.4) immediately follows from (2.4). Let us first explain why the sum in (5.4) is finite. We have by the definition (5.2) of \(\varphi _N\):
where we bounded the Schur polynomial from below by taking one of its monomials (since all the monomials are nonnegative). The condition \(\mu \prec \lambda \) in (5.4) means that only the sum over \(\lambda _1\) in (5.4) is over an infinite set, and it thus converges thanks to (5.5).
Now let \(\{\lambda ^{(i)}\}\) be a random interlacing array distributed according to the q-Gibbs measure coming from \(\{\varphi _N\}\). Let the random array \(\{\theta ^{(i)}\}\) be the image of \(\{\lambda ^{(i)}\}\) under \({\mathbb {L}}^{(q)}\). Fix \(N\ge 1\). The distribution of \(\theta ^{(N)}\) (described by the function \({\hat{\varphi }}_N\) which we aim to compute) is the result of applying the sequence of Markov maps \(L_q^{(1)},\ldots ,L_{q^{N}}^{(N)}\) (in this order). Because the last of these operators depends on \(\lambda ^{(N+1)}\), we see that the distribution of \(\theta ^{(N)}\) is not determined by the joint distribution of \(\lambda ^{(1)},\ldots ,\lambda ^{(N)}\) alone. In other words, to compute \({\hat{\varphi }}_N\) we need to first extend \(\varphi _N\) to \(\varphi _{N+1}\), and utilize the q-Gibbs property.
Let us apply this idea. Fix \(\lambda ^{(N+1)}=\lambda \). This condition completely determines the conditional joint distribution of \(\lambda ^{(1)},\ldots ,\lambda ^{(N)}\) via the q-Gibbs property. By iterating Proposition 4.7, we see that after applying the Markov maps \(L_q^{(1)},\ldots ,L_{q^{N-1}}^{(N-1)} \), the joint distribution of \(\lambda ^{(N)}\) and \(\theta ^{(N-1)}\), conditioned on \(\lambda ^{(N+1)}=\lambda \), comes from the \((q,q^2,\ldots ,q^{N-1},1,q^{N})\)-Gibbs property:
After the application of \(L^{(N)}_{q^N}\), the partition \(\lambda ^{(N)}\) turns into \(\theta ^{(N)}\), and we similarly have
Let us rewrite the last expression to compare it to the qGibbs conditional distribution. In the numerator, due to the homogeneity of Schur and skew Schur polynomials, we have:
To extract from this the marginal distribution of \(\theta ^{(N)}\) (that is, to get to \({\hat{\varphi }}_{N}\)), we need to multiply (5.6) by \(\mathrm {Prob}(\lambda ^{(N+1)}=\lambda )/s_\lambda (1,\ldots ,q^N)\) (which is exactly \(\varphi _{N+1}(\lambda )\)) and sum the resulting expression over both \(\lambda \) and \(\nu \). We have
The sum over \(\nu \) is simplified using the branching rule (2.2), and so
We see that at the level of marginal distributions, the family \(\left\{ \varphi _N \right\} \) turns into \(\left\{ {\hat{\varphi }}_N \right\} \), where \({\hat{\varphi }}_N\) is defined by (5.4).
It remains to show that the new family \(\{{\hat{\varphi }}_N\}\) satisfies the q-Gibbs harmonicity. That is, we want to show for all N that
(the second equality is simply the definition of \({\hat{\varphi }}_{N-1}\)). We have
as desired. In the last step we used the harmonicity of the original family \(\left\{ \varphi _N \right\} \). This completes the proof. \(\square \)
Remark 5.4
Note that Theorem 5.3 fundamentally relies on the fact that the q-Gibbs measure lives on an infinite array. Indeed, for an array of finite depth it is not possible to move the spectral parameter 1 all the way up to infinity. In the proof of Theorem 5.3 we use the fact that the array has infinite depth when we extend \(\varphi _N\) to \(\varphi _{N+1}\). The case of arrays of finite depth is discussed in Sect. 8.
5.5 Application to Schur processes and TASEP with geometric speeds
Schur processes \({\mathbb {P}}[\vec c\mid \rho _t]\) with \(c_i=q^{i-1}\) are particular cases of q-Gibbs measures with
where we took into account the normalization of the Schur measures. The Markov map \({\mathbb {L}}^{(q)}\) acts on these Schur processes as follows:
where we used the skew Cauchy identity ((2.6) with \(\varkappa =\varnothing \)) and the homogeneity of the Schur polynomials [both properties clearly survive the Plancherel limit (2.8)]. Therefore, we have
Recall that by Theorem 3.1, the joint distribution of the quantities \(\{\lambda ^{(N)}_N-N \}_{N\ge 1}\) under the Schur process \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho _t]\) is the same as the joint distribution of the particle locations \(\{x_N(t)\}_{N\ge 1}\) at time t of the TASEP with particle speeds \(c_i=q^{i-1}\) and the step initial configuration. Denote this joint distribution of particles \(\{x_N(t)\}\) by \(\upmu _t^{(q)}\).
Our next observation is that the action of the Markov map \({\mathbb {L}}^{(q)}\) on the random interlacing array \(\{\lambda ^{(N)}\}_{N\ge 1}\) can be projected to the leftmost components \(\{\lambda ^{(N)}_N \}_{N\ge 1}\), and the result is still a Markov map. In more detail, let \(\{\theta ^{(N)}\}_{N\ge 1}\) be the random interlacing array which is the image of \(\{\lambda ^{(N)}\}_{N\ge 1}\) under \({\mathbb {L}}^{(q)}\). From the very definition of \({\mathbb {L}}^{(q)}\), we see that conditioned on \(\{\lambda ^{(N)}\}_{N\ge 1}\), the distribution of \(\{\theta ^{(N)}_N \}_{N\ge 1}\) depends only on the leftmost components \(\{\lambda ^{(N)}_N \}_{N\ge 1}\), and not on the rest of the array \(\lambda \). Let us describe this projection of \({\mathbb {L}}^{(q)}\) explicitly in terms of locations of the TASEP particles \(x_N\) (via the identification \(x_N=\lambda ^{(N)}_N-N\)). Recall from Sect. 3.1 that \({\mathcal {C}}\) stands for the space of left-packed, right-finite particle configurations on \({\mathbb {Z}}\).
Definition 5.5
Let \(0<q<1\). We aim to define a Markov map \(\mathbf{L} ^{(q)}\) on \({\mathcal {C}}\). Fix a configuration \(x_1>x_2>\cdots \) in \({\mathcal {C}}\). By definition, its random image \({\hat{x}}_1>{\hat{x}}_2>\cdots \) under the action of \(\mathbf{L} ^{(q)}\) is
where the \(Y_{q^{\scriptstyle i}}\)’s are independent truncated geometric random variables (see Definition 4.1).
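A sketch of one application of \(\mathbf{L} ^{(q)}\), under our assumption that the update reads \({\hat{x}}_k = x_{k+1}+1+Y_{q^k}(x_k-x_{k+1}-1)\) (each particle is independently pushed left into its gap; this is consistent with the projection \(x_N=\lambda ^{(N)}_N-N\) and with the backwards dynamics below). The function names and the assumed truncated-geometric weights are ours.

```python
import math
import random

def trunc_geom(alpha, A, rng=random):
    # hypothetical truncated geometric of Definition 4.1:
    # Y_0(A) = 0 and Y_1(A) = A almost surely
    if alpha <= 0.0:
        return 0
    if alpha >= 1.0:
        return A
    return min(int(math.log(1.0 - rng.random()) / math.log(alpha)), A)

def L_q_step(x, q, rng=random):
    """One application of the map L^(q) (Definition 5.5), assuming the
    update x_k -> x_{k+1} + 1 + Y_{q^k}(x_k - x_{k+1} - 1).  x is a
    finite truncation x[0] > x[1] > ... with a packed tail
    (x[-1] = -len(x)), so the untracked particles have no holes to
    jump into and would not move."""
    new = list(x)
    for k in range(len(x) - 1):              # 0-based k is particle k + 1
        gap = x[k] - x[k + 1] - 1            # number of holes below x_k
        new[k] = x[k + 1] + 1 + trunc_geom(q ** (k + 1), gap, rng)
    return new
```

Note that the step configuration has no holes, so it is a fixed point of this map, in agreement with the absorbing property of the backwards dynamics in Sect. 6.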
Remark 5.6
A homogeneous version of \(\mathbf{L} ^{(q)}\) appeared in [28]; it is solvable via the coordinate Bethe ansatz [73].
Theorem 5.3, identity (5.7), and the fact that \(\mathbf{L} ^{(q)}\) is a projection of \({\mathbb {L}}^{(q)}\) immediately imply the following result:
Theorem 5.7
For any \(t\ge 0\), we have
where \(\upmu _t^{(q)}\) is the distribution of the TASEP with geometric rates (with ratio q) and step initial configuration, and \(\mathbf{L} ^{(q)}\) is the Markov map from Definition 5.5.
6 Limit \(q\rightarrow 1\) and proof of the main result
Here we take the limit as \(q\rightarrow 1\) of the results of the previous section, and arrive at a continuous-time Markov chain mapping the TASEP distributions backwards in time. This proves our main result, Theorem 1.
Iterate Theorem 5.7 to observe that for any \(T\in {\mathbb {Z}}_{\ge 1}\):
where \((\mathbf{L} ^{(q)})^T\) simply denotes the Tth power. Next, introduce the scaling:
where \(\varepsilon >0\) will go to zero, and \(\tau \in {\mathbb {R}}_{\ge 0}\) is the scaled continuous time. Clearly, we have \(q^T=e^{-\tau }(1+O(\varepsilon ))\). We aim to take the limit as \(q\rightarrow 1\) in (6.1).
Recall that by \(\upmu _t\), \(t\in {\mathbb {R}}_{\ge 0}\), we denote the distribution of the TASEP with constant speeds \(c_i\equiv 1\) at time t, started from the step initial configuration. Also recall that \({\mathcal {C}}\) is the space of left-packed, right-finite particle configurations on \({\mathbb {Z}}\). The space \({\mathcal {C}}\) has a natural partial order: \(\mathbf{x} \) precedes \(\mathbf{y} \) if \(x_i\le y_i\) for all i.
Lemma 6.1
For any fixed \(\tau ,t\in {\mathbb {R}}_{\ge 0}\) and any \(\delta >0\) there exists a finite set \({\mathcal {C}}^\delta ={\mathcal {C}}^\delta (t,\tau )\subset {\mathcal {C}}\) such that
for all sufficiently small \(\varepsilon >0\).
Proof
Take a finite \({\mathcal {C}}^\delta \subset {\mathcal {C}}\) such that \(\upmu _{t}({\mathcal {C}}^\delta )>1-\delta \), and, moreover, \({\mathcal {C}}^\delta \) is closed with respect to the partial order (i.e., if \(\mathbf{x} \) precedes \(\mathbf{y} \) and \(\mathbf{y} \in {\mathcal {C}}^\delta \), then \(\mathbf{x} \in {\mathcal {C}}^\delta \)). This is possible because \(\upmu _t\) is a probability measure on \({\mathcal {C}}\), and closing a finite set with respect to our partial order keeps it finite. (One can even estimate the size of \({\mathcal {C}}^\delta \) because the first particle \(x_1(t)\) performs a directed random walk with speed 1.)
Next, \(\upmu _{e^{-\tau }t}({\mathcal {C}}^\delta )>1-\delta \) because the TASEP dynamics almost surely increases the configuration with respect to the order. The rest of the claim follows by monotonically coupling the TASEP \(\upmu _{\bullet }\) with constant speeds to the TASEP \(\upmu _{\bullet }^{(q)}\) with the q-geometric speeds. Here monotonicity means that the TASEP with the q-geometric speeds is always behind (in our partial order) the \(q=1\) TASEP; this monotone coupling exists since \(q<1\). \(\square \)
By Lemma 6.1, it suffices to consider the limit of identity (6.1) as \(q\rightarrow 1\) on finite subsets of \({\mathcal {C}}\). On the right-hand side we immediately get \(\upmu ^{(q)}_{q^T t}\rightarrow \upmu _{e^{-\tau }t}\). On the left-hand side we have \(\upmu _t^{(q)}\rightarrow \upmu _t\). It remains to take the limit of the Tth power of the Markov map \(\mathbf{L} ^{(q)}\).
The limit transition in \((\mathbf{L} ^{(q)})^{T}\) is in the spirit of the classical Poisson approximation to the binomial distribution—the probability of jumps gets smaller, but the number of trials (i.e., the discrete time) scales accordingly. More precisely, we have for the random variables \(Y_{q^{\scriptstyle k}}\) in Definition 5.5:
This leads to the following definition of the continuoustime backwards dynamics:
Definition 6.2
(Backwards Hammersley-type process \(\mathbf{L} _\tau \)) Consider the continuous-time dynamics on \({\mathcal {C}}\) defined as follows. Each particle \(x_k\), \(k=1,2,\ldots \), independently jumps to the left to one of the holes \(\{x_{k+1}+1,x_{k+1}+2,\ldots ,x_k-1 \}\) at rate k per hole. Equivalently, each particle \(x_k\) has an independent exponential clock of rate \(k(x_k-x_{k+1}-1)\); when the clock rings, \(x_k\) selects a hole between \(x_{k+1}\) and \(x_k\) uniformly at random and instantaneously moves there.
Note that for configurations in \({\mathcal {C}}\), the total jump rate of all particles is always finite. Therefore, the dynamics on \({\mathcal {C}}\) is well-defined. Denote by \(\mathbf{L} _\tau \), \(\tau \in {\mathbb {R}}_{\ge 0}\), the Markov transition operator of this dynamics from time 0 to time \(\tau \) (note that the dynamics is time-homogeneous). Observe that the step configuration (\(x_i=-i\) for all \(i=1,2,\ldots \)) is absorbing for the backwards dynamics \(\mathbf{L} _\tau \).
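A rejection-free (Gillespie-type) simulation of this dynamics on a finite truncation may look as follows; the truncation convention (a packed tail, which keeps the total rate finite and freezes the untracked particles) and the function name are ours.

```python
import random

def bhp_run(x0, tau, seed=None):
    """Run the backwards Hammersley-type process for time tau, started
    from a finite truncation x0[0] > x0[1] > ... with a packed tail
    (x0[-1] = -len(x0)).  Particle k (1-based) jumps at total rate
    k * (number of holes strictly between x_{k+1} and x_k) and lands on
    a uniformly random hole in that gap."""
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0
    while True:
        rates = [(k + 1) * (x[k] - x[k + 1] - 1) for k in range(len(x) - 1)]
        total = sum(rates)
        if total == 0:               # the step configuration is absorbing
            return x
        t += rng.expovariate(total)
        if t > tau:
            return x
        r = rng.uniform(0, total)    # choose the jumping particle ~ its rate
        for k, rate in enumerate(rates):
            if rate == 0:
                continue
            r -= rate
            if r <= 0:
                break
        x[k] = rng.randrange(x[k + 1] + 1, x[k])   # uniform hole in the gap
```

Starting from the step configuration, the function returns immediately, illustrating that this configuration is absorbing.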
Thanks to Lemma 6.1 and (6.3), we have the convergence \((\mathbf{L} ^{(q)})^{T}\rightarrow \mathbf{L} _\tau \). This completes the proof of the main theorem \(\upmu _t\,\mathbf{L} _\tau =\upmu _{\,e^{-\tau }t}\).
7 Stationary dynamics on the TASEP measure
Here we illustrate the relation between the TASEP and the backwards Hammersley-type process by constructing a Markov dynamics preserving the TASEP measure \(\upmu _t\). We also discuss hydrodynamics of these two processes.
In this section we denote particle configurations by occupation variables \(\eta :{\mathbb {Z}} \rightarrow \{0,1\}\), with \(\eta (x) =1\) if there is a particle at location \(x\in {\mathbb {Z}}\), and \(\eta (x) =0\) otherwise. The step initial configuration is \(\eta (x)=1\) iff \(x<0\). Recall that by \({\mathcal {C}}\) we denote the space of left-packed, right-finite configurations. Denote by \(\overline{{\mathcal {C}}}=\left\{ 0,1 \right\} ^{{\mathbb {Z}}}\) the space of all particle configurations in \({\mathbb {Z}}\).
7.1 Definition of the stationary dynamics
Let \(A^{T}:= A^{\mathrm {TASEP}}\) be the infinitesimal generator for the TASEP with homogeneous particle speeds \(c_i=1\) (Sect. 3.1), and let \(\{\mathbf{T }_t\}_{t\ge 0}\) be the corresponding Markov semigroup. Let \(A^{L}:= A^{\mathrm {BHP}}\) be the infinitesimal generator of the backwards Hammersley-type process (BHP), see Definition 6.2, and let \(\{\mathbf{L} _\tau \}_{\tau \ge 0}\) denote the BHP semigroup. For a fixed configuration \(\eta \in {\mathcal {C}}\), we denote by \(\eta ^{x,y}\), \(x\ne y\), the configuration
In words, \(\eta ^{x, y}\) corresponds to a particle jumping from location \(x \in {\mathbb {Z}}\) to location \(y \in {\mathbb {Z}}\). Note that \(\eta ^{x,y}\) may not be in \({\mathcal {C}}\) even if \(\eta \in {\mathcal {C}}\).
The infinitesimal generator for the TASEP acts as follows:
for f a cylindrical function on \(\eta \in {\mathcal {C}}\) (i.e. a function that depends on finitely many coordinates of \(\eta \)). The factor \(\eta (x) (1  \eta (x+1))\) takes care of the TASEP exclusion rule. The infinitesimal generator of the BHP acts as follows:
for f a cylindrical function on \(\eta \). Note that summations in the action of \(A^L\) are well defined since for \(\eta \in {\mathcal {C}}\) we have \(\eta (x) = 0\) for \(x\gg 0\) and \(\eta (x) =1\) for \(x\ll 0\).
Recall that \(\upmu _t\) is the distribution of the TASEP configuration at time t started from the step initial configuration. Denote the corresponding random particle configuration by \(\eta _t\). We have \(\eta _t\in {\mathcal {C}}\) almost surely.
For any \(t\in {\mathbb {R}}_{>0}\), define the operator
This is the generator of the continuous-time Markov process which is a combination of the BHP and the TASEP sped up by the factor of t. By a “combination” we mean that both processes run in parallel.
Proposition 7.1
The TASEP distribution \(\upmu _t\) (the law of \(\eta _t\)) is invariant under the continuous-time Markov process with generator A, that is,
for all cylinder functions f.
Proof
By Theorem 1, we have
for any \(t, \tau \ge 0\). Fixing \(t \ge 0\), differentiating the above identity in \(\tau \), and sending \(\tau \) to zero, we get \(\upmu _t\,(t A^{T} + A^{L}) =0\). This establishes the result. \(\square \)
Remark 7.2
It should be possible to show that the process with the generator (7.3), started from any configuration \(\mathbf{x} \in {\mathcal {C}}\), converges (as time goes to infinity) to its stationary distribution \(\upmu _t\). However, we do not focus on this question in the present paper.
A local version of Proposition 7.1 holds, too. That is, the Bernoulli measures of any given density \(\rho \in [0,1]\) on particle configurations on \({\mathbb {Z}}\) are invariant under both the TASEP and the homogeneous version of the BHP. (Locally the rates under the BHP are constant, so the invariance should be considered under the homogeneous BHP.) The remarkable content of Proposition 7.1 is that the invariance is global on “out-of-equilibrium” random configurations with the distribution \(\upmu _t\), if the speeds of the TASEP and the inhomogeneous BHP are related as in (7.3).
As a consequence of Proposition 7.1, let us take a specific function of the configuration:
where \(G(\cdot )\) is a function \({\mathbb {Z}}_{\ge 0}\rightarrow {\mathbb {R}}\). Note that \(2 N^0\) is the height function at zero. Let \(\eta _t\) be the random configuration of the TASEP at time t with the step initial configuration, and \(N^0_t:=\eta _t(0)+\eta _t(1)+\ldots \).
Corollary 7.3
With the above notation, we have
In the sum over x in the right-hand side almost surely only one term is nonzero, and the whole sum is equal to the distance of the rightmost particle in \({\mathbb {Z}}_{<0}\) to zero.
Proof
The left-hand side is equal to \({\mathbb {E}}\left( A^T f(\eta _t) \right) \), which by Proposition 7.1 is the same as \(-t^{-1}{\mathbb {E}}(A^L f(\eta _t))\). The rest follows from the computation of \(A^L f(\eta _t)\) for the particular function (7.5), which is straightforward. \(\square \)
7.2 Hydrodynamics
The hydrodynamic limit for the TASEP is well known, with early results by [58] on the convergence to a local equilibrium and by [76] on the connection of the density function to Burgers’ equation. The latter means that under linear space and time scaling, the limiting density function \(\rho (t,z)\) of the TASEP is the entropic solution of the following initial-value problem for the one-dimensional Burgers’ equation:
We refer to [10] for further details, see also [43] for a recent review. The solution to (7.6) is given by
The limiting density \(\rho (t,z)\) describes the law of large numbers type behavior of the TASEP.
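For the reader's convenience, here is the classical statement in explicit form (standard facts reproduced from the general theory; the notation is meant to match (7.6) and the solution formula above):

```latex
% One-dimensional Burgers' equation with step initial data (cf. (7.6)):
\partial _t \rho (t,z) + \partial _z \bigl( \rho (t,z)\,(1-\rho (t,z))\bigr) = 0,
\qquad \rho (0,z) = \mathbf {1}_{z<0}.
% Its entropic solution is the rarefaction fan
\rho (t,z) =
\begin{cases}
1, & z\le -t,\\
\tfrac{1}{2}\left( 1-\tfrac{z}{t}\right) , & -t<z<t,\\
0, & z\ge t.
\end{cases}
```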
Remark 7.4
(Asymptotic analysis of TASEP) More recently, in the last 20 years, much finer scaling limits for the TASEP have become available, beginning with the work of Johansson [50] on the Tracy–Widom fluctuations of the positions of the particles in the TASEP. More generally, the TASEP with various other examples of initial data has been shown to converge to the top lines of the \(\text {Airy}_1\) or \(\text {Airy}_2\) line ensembles under the appropriate scalings; see, e.g., the survey [39] and references therein for details. Progress on understanding the TASEP asymptotics with general initial data, and also on the asymptotics of the space-time structure in TASEP, is currently ongoing [4, 7, 8, 31, 36, 40, 41, 51,52,53, 62].
While we expect the BHP and the stationary dynamics from Sect. 7.1 to have applications for all these types of scaling limits, we begin by considering the hydrodynamic limit of the BHP in this section.
Let \(\eta _t \in {\mathcal {C}}\) be the random configuration at time \(t \ge 0\) of the TASEP with step initial conditions. For any \(\epsilon >0\), the (\(\epsilon \)scaled) random empirical measure on \({\mathbb {R}}\) associated to \(\eta _t \in {\mathcal {C}}\) is given as follows:
In particular, we have scaled the mass of each point by \(\epsilon \), scaled the lattice distance by \(\epsilon \), but the time remains unscaled. Denote the set of compactly supported continuous functions on the line by \(C_0({\mathbb {R}})\). The integral of a function \(f \in C_0({\mathbb {R}})\) against the measure \(\pi _t^{\epsilon }\) is denoted by \(\langle \pi _t^{\epsilon } , f\rangle \). Clearly, \(\langle \pi _t^\epsilon , f\rangle = \epsilon \sum _{x \in {\mathbb {Z}}} f(\epsilon x)\, \eta _{t} (x)\).
The next statement can be found in, e.g., [78]. The sequence of measures \(\{ \pi _{t/\epsilon }^{\epsilon }\}_{\epsilon \in {\mathbb {R}}_{>0} }\) converges as \(\epsilon \rightarrow 0\) in probability to \(\rho (t, z)\, dz\), where the density function \(\rho (t,z)\) is the entropic solution of the initial value problem for the Burgers equation (7.6). That is, for each \(t\ge 0\), given any \(\delta >0\),
for any \(f \in C_0({\mathbb {R}})\). Note that now we have scaled time by \(\epsilon ^{-1}\) in the empirical measure.
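This convergence can be illustrated numerically. The sketch below is our own illustration (not part of the paper's argument): a Gillespie simulation of the TASEP from the step initial condition up to time \(t/\epsilon \) with \(t=1\) and \(\epsilon =0.01\), comparing \(\langle \pi ^{\epsilon }_{t/\epsilon }, f\rangle \) with \(\int f(z)\rho (1,z)\,dz\) for a tent test function; the truncation to finitely many particles and the choice of seed are simulation conveniences.

```python
import random

def simulate_tasep(num_particles, t_final, seed=0):
    """Gillespie simulation of the TASEP with step initial condition.
    Particles start at -1, -2, ..., -num_particles; each jumps right at
    rate 1 when the site to its right is empty. Returns final positions."""
    rng = random.Random(seed)
    pos = [-(i + 1) for i in range(num_particles)]  # pos[0] is the rightmost
    time = 0.0
    while True:
        # Particle i may jump iff the site to its right is empty.
        eligible = [i for i in range(num_particles)
                    if i == 0 or pos[i - 1] - pos[i] > 1]
        time += rng.expovariate(len(eligible))  # exponential waiting time
        if time > t_final:
            return pos
        pos[rng.choice(eligible)] += 1

def empirical_integral(positions, f, eps):
    """<pi_t^eps, f> = eps * sum over occupied sites x of f(eps * x)."""
    return eps * sum(f(eps * x) for x in positions)

eps = 0.01
positions = simulate_tasep(num_particles=300, t_final=1.0 / eps, seed=42)
f = lambda z: max(0.0, 1.0 - abs(z))  # tent test function supported on [-1, 1]
approx = empirical_integral(positions, f, eps)
# The limit is int f(z) rho(1, z) dz with rho(1, z) = (1 - z)/2 on [-1, 1],
# which equals 1/2; `approx` should be close to that value.
print(approx)
```

The tent function and the truncation depth 300 are arbitrary choices; any \(f\in C_0({\mathbb {R}})\) and any truncation deep enough that the rarefaction fan stays inside it would do.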
This result for TASEP generalizes to a large class of initial conditions. For instance, given a continuous density profile \(\rho _0: {\mathbb {R}} \rightarrow [0,1]\), a sequence \(\{\nu ^{\epsilon } \}_{\epsilon \in {\mathbb {R}}_{>0}}\) of probability measures on \(\overline{{\mathcal {C}}} = \{ 0, 1\}^{{\mathbb {Z}}}\) is said to be associated to the profile \(\rho _0\) if for every \(f\in C_0({\mathbb {R}})\) and every \(\delta >0\), we have
Then, the empirical measure \(\pi _{t/\epsilon }^{\epsilon }\) for the TASEP, with initial conditions now given by \(\nu ^{\epsilon }\), converges in probability to an absolutely continuous measure \(\rho ( t, z)\, dz\) such that the density function is the entropic solution to the Burgers’ equation with the initial value given by the density profile \(\rho _0\), see [78]. We expect a similar hydrodynamic result to hold for the BHP with some modifications: (1) a different PDE arising from the infinitesimal generator of the BHP, and (2) no time scaling for the empirical measure, since the lattice scaling also scales the particle numbers and, consequently, the speeds of the particles.
Conjecture 1
Let \(\rho _0: {\mathbb {R}} \rightarrow [0,1]\) be an initial density profile and let \(\{ \nu ^{\epsilon }\}_{\epsilon \in {\mathbb {R}}_{>0}}\) be a sequence of probability measures on \({\mathcal {C}}\) associated to \(\rho _0\).^{Footnote 7} Also, for a fixed \(\epsilon >0\), take \(\eta _t^{\epsilon } \in {\mathcal {C}}\) to be the random configuration at time \(t>0\) of the BHP, with the initial configuration \(\eta _0^{\epsilon }\) determined by the measure \(\nu ^{\epsilon }\). Then, for every \(t>0\), the sequence of random empirical measures \(\pi ^{\epsilon }_t\) defined as in (7.8) converges in probability to the absolutely continuous measure \(\pi _t(dz) = \rho (t, z)\, dz\) in the sense of (7.9). The density \(\rho (t,z)\) is a solution of the initial value problem
Remark 7.5
In Conjecture 1, it is unclear to the authors if there is a unique solution to the initial value problem (7.10). In particular, it is unclear what type of solution the limiting density profile \(\rho (t,z)\) should be.
Remark 7.6
The differential equation (7.10) can be informally obtained by looking at the local version of the BHP. That is, locally we expect the configuration to be close to the independent Bernoulli random configuration on the whole line \({\mathbb {Z}}\) with the density \(\rho (t,z)\). Then the expression under \(\partial /\partial z\) in the right-hand side of (7.10) is the (negative) flux. Indeed, \(\int _z^{\infty } \rho (t, w) dw\) is the inhomogeneous jump rate in the BHP, while \((1-\rho (t,z))/\rho (t,z)\) is the local flux of the homogeneous BHP with left jumps and speed 1. See Proposition 7.8 for more discussion.
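Concretely, this flux description suggests the conservation-law form below (a reconstruction consistent with Remark 7.6; the authoritative statement is (7.10) itself):

```latex
% Conjectural hydrodynamic equation for the BHP: the particle flux is
% leftward, equal to minus the product of the inhomogeneous rate
% \int_z^\infty \rho\,dw and the homogeneous-BHP flux (1-\rho)/\rho.
\frac{\partial \rho (t,z)}{\partial t}
 = \frac{\partial }{\partial z}
 \left[ \frac{1-\rho (t,z)}{\rho (t,z)}\int _z^{\infty }\rho (t,w)\,dw \right] ,
\qquad \rho (0,z)=\rho _0(z).
```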
Let us check that Conjecture 1 holds for the initial data associated with the TASEP distributions \(\upmu _t\).
Proposition 7.7
Fix some \(t_0 \in {\mathbb {R}}\) and let \(\eta _0^{\epsilon } \sim \upmu _{\epsilon ^{-1} e^{t_0}}\) be the TASEP random configuration at time \(\epsilon ^{-1} e^{t_0}\). Then, the sequence \(\{ \eta _0^{\epsilon }\}_{\epsilon \in {\mathbb {R}}_{>0}}\) is associated to the density profile
and Conjecture 1 is true for the measures \(\nu ^{\epsilon }=\upmu _{\epsilon ^{-1}e^{t_0}}\).
Proof
By results for the TASEP, we know that the sequence \(\eta _0^{\epsilon }\) is associated to the density profile \(\rho _0\) given in the statement. Also, by Theorem 1, we know that the random configuration \(\eta _{t}^{\epsilon }\), obtained from \(\nu ^\epsilon =\upmu _{\epsilon ^{-1}e^{t_0}}\) by the BHP evolution as in Conjecture 1, is distributed according to \(\upmu _{\epsilon ^{-1} e^{t_0 - t}}\).
So, again by results for the TASEP, we know that the sequence of random measures \(\pi _t^{\epsilon }\) converges to an absolutely continuous measure \(\pi _t(dz) = \rho (t, z)\, dz\) with the density given by
One can then check directly that the above \(\rho (t, z)\) solves the initial value problem (7.10). This completes the proof. \(\square \)
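The final verification can also be reproduced numerically. The sketch below is our own illustration; it assumes that the density in the proof is \(\rho (t,z)=\tfrac{1}{2}(1-z e^{t-t_0})\) on \(|z|\le e^{t_0-t}\) (our reading of the formula above) and that (7.10) has the flux form discussed in Remark 7.6, namely \(\partial _t\rho = \partial _z[(1-\rho )\rho ^{-1}\int _z^{\infty }\rho \,dw]\); both sides of the PDE are compared by central finite differences.

```python
import math

T0 = 0.3  # a fixed t_0; an arbitrary choice for the check

def rho(t, z):
    """Density (1 - z e^{t - t0})/2 on |z| <= e^{t0 - t}, extended by 1
    to the left and 0 to the right."""
    L = math.exp(T0 - t)
    if z <= -L:
        return 1.0
    if z >= L:
        return 0.0
    return 0.5 * (1.0 - z / L)

def flux(t, z):
    """(1 - rho)/rho * int_z^infty rho(t, w) dw; on the support this has
    the closed form (L^2 - z^2)/(4 L) with L = e^{t0 - t}, and it vanishes
    outside (since 1 - rho = 0 on the left, and the integral is 0 on the right)."""
    L = math.exp(T0 - t)
    if abs(z) >= L:
        return 0.0
    return (L * L - z * z) / (4.0 * L)

# Compare d/dt rho with d/dz flux by central finite differences.
h = 1e-5
t = 0.7
L = math.exp(T0 - t)
errs = []
for z in [-0.8 * L, -0.3 * L, 0.0, 0.4 * L, 0.9 * L]:
    dt_rho = (rho(t + h, z) - rho(t - h, z)) / (2 * h)
    dz_flux = (flux(t, z + h) - flux(t, z - h)) / (2 * h)
    errs.append(abs(dt_rho - dz_flux))
print(max(errs))  # tiny: the profile solves the PDE in the bulk
```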
We base Conjecture 1 on the random evolution of the empirical measure \(\pi _t^{\epsilon }\) given by the infinitesimal generator for the BHP.
Proposition 7.8
Let \(f: {\mathbb {R}} \rightarrow {\mathbb {R}}\) be a twice differentiable compactly supported function and let \(\eta _t \in {\mathcal {C}}\) be the random configuration given by the BHP. Here the time \(t\ge 0\) and the initial configuration \(\eta _0\in {\mathcal {C}}\) are fixed. Then, there are martingales \(M_t^{\epsilon , f}\) with respect to the natural filtration \(\sigma (\eta _s^{\epsilon }, s \le t)\) so that
where \(\pi _t^{\epsilon }\) is the random empirical measure of \(\eta _t\), and the function
Proof
We have
where we regard \( \langle \pi _t^{\epsilon }, f\rangle \) as a function of the configuration \(\eta _t\). We can compute
With the help of the approximation
the statement follows from standard results on Markov chains. \(\square \)
7.3 Limit shape for TASEP with step initial condition
Let us present an alternative derivation of the limit shape of the TASEP with the step initial configuration, which assumes Conjecture 1 but is independent of the analogous result for the TASEP. We only assume that the TASEP empirical measure converges to \(\rho \) satisfying the following system of equations:
In particular, we show that this system of partial differential equations determines a unique solution under some general assumptions.
First, eliminate the time derivative so that
Then,
Note that, for all \(t \in {\mathbb {R}}_{\ge 0}\), there is a \(z \in {\mathbb {R}}\) small enough so that \(\rho (t, z) = 1\). This implies that the constant c(t) is in fact zero. Thus, we have
Taking the space derivative, we have
Revisiting the system of equations (7.11), we may now write the second equation as follows
By separation of variables, we may solve the equation above up to a constant of integration, but this constant of integration may be determined by (7.12). Thus, we get the well-known hydrodynamic density function
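The chain of computations in this subsection can be summarized as follows; this is a sketch under our reading of the displays above, assuming that the system (7.11) couples Burgers' equation with the hydrodynamic-level stationarity of \(\upmu _t\) under the combined generator \(t A^{T} + A^{L}\):

```latex
% Stationarity of the combined dynamics: the total flux is divergence-free,
% so for some constant of integration c(t),
t\,\rho (1-\rho ) - \frac{1-\rho }{\rho }\int _z^{\infty }\rho (t,w)\,dw = c(t).
% Since \rho (t,z)=1 for z small enough, both terms vanish there, so c(t)=0,
% and wherever \rho <1 one may divide by (1-\rho ):
t\,\rho (t,z)^2 = \int _z^{\infty }\rho (t,w)\,dw .
% Taking \partial /\partial z gives 2t\,\rho \,\partial _z\rho = -\rho ,
% i.e. \partial _z\rho = -\tfrac{1}{2t}, so \rho (t,z)=\tfrac{C(t)-z}{2t}.
% Substituting into Burgers' equation yields C'(t)=1, and the step initial
% condition gives C(0)=0; hence C(t)=t and
\rho (t,z) = \frac{1}{2}\left( 1-\frac{z}{t}\right) , \qquad -t\le z\le t .
```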
We have thus shown that the compatibility of the TASEP and the BHP uniquely picks out the entropic solution to the Burgers’ equation for the limiting density function with the step initial condition.
8 Extensions and open questions
In this section we describe a number of modifications and extensions of the constructions presented earlier, and outline a number of open questions.
8.1 More general qGibbs measures
The Markov map \({\mathbb {L}}^{(q)}\) from Definition 5.2 acts nicely on Schur processes \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\) with general specializations \(\rho \). Even more generally, we can consider two-sided Schur processes which live on interlacing arrays of signatures. Signatures are analogues of partitions in which parts are allowed to be negative. Interlacing arrays of signatures are simply the collections \(\{\lambda ^{(k)}_j\}_{1\le j\le k}\), satisfying the interlacing inequalities as in Fig. 6, and with \(\lambda ^{(k)}_j\in {\mathbb {Z}}\). (Note that we consider arrays of infinite depth.)
For a specialization \(\rho \) parametrized as
and a signature \(\lambda =(\lambda _1\ge \cdots \ge \lambda _N )\), \(\lambda _i\in {\mathbb {Z}}\), define
where \(\psi _n(\rho )\), \(n\in {\mathbb {Z}}\), are the coefficients of the expansion
and \(u=1\). One of the equivalent forms of the Edrei–Voiculescu theorem (e.g., see [22]) states that (8.3) parametrizes the space of all totally nonnegative two-sided sequences.
Remark 8.1
In particular, taking \(\alpha _i^\pm =\beta _i^\pm =0\) for all i, \(\gamma ^-=0\), and \(\gamma ^+=t\) turns the just defined specialization \(\rho \) into \(\rho _t\) defined in Sect. 2.4.
Define the two-sided ascending Schur process \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\) as the unique q-Gibbs measure on interlacing arrays of signatures such that for any N,
where the skew Schur functions for signatures can be defined by (2.4). Define the Schur process \({\mathbb {P}}[\vec {1}\mid \rho ]\) as the \(q\rightarrow 1\) degeneration of (8.4) (here and below we denote by \(\vec {1}\) the sequence of spectral parameters which are all equal to 1). Another equivalent form of the Edrei–Voiculescu theorem states that \({\mathbb {P}}[\vec {1}\mid \rho ]\) are all possible extreme Gibbs measures on interlacing arrays of signatures (a Gibbs measure is called extreme if it cannot be represented as a convex combination of other Gibbs measures). We refer to [11] for further details on the definition of the two-sided Schur processes.
Theorem 8.2
Let \(\rho \) be a specialization with parameters (8.1) such that \(\alpha _i^{-}=0\) for all i. Then we have for all \(0<q<1\):
where \(\rho ^{(q)}\) is the specialization corresponding to the parameters
Note that \({\hat{\alpha }}_i^+,{\hat{\beta }}_i^{\pm }\ge 0\), and \({\hat{\beta }}_1^+ + {\hat{\beta }}_1^- \le 1\).
Proof of Theorem 8.2
This follows from Theorem 5.3 similarly to the computation in the beginning of Sect. 5.5. Namely, denote (8.3) by \(\varPsi (u;\rho )\). The q-Gibbs measure (8.4) corresponds to the q-Gibbs harmonic family
Note that \(\varPsi (1;\rho )=1\), but it is convenient to include this factor here. Note also that the condition \(\alpha _i^{-}\equiv 0\) ensures that the series \(\varPsi (q^m;\rho )\) converge for all \(m\in {\mathbb {Z}}_{\ge 1}\). The action of \({\mathbb {L}}^{(q)}\) turns the q-Gibbs harmonic family \(\{\varphi _N\}\) into
In particular, for \(N=1\) we have
which readily translates into the modification of the parameters (8.5) in the claim. \(\square \)
Measures on interlacing arrays of the form \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\) are not extreme q-Gibbs. A classification of extreme q-Gibbs measures is obtained in [46] (note that our q corresponds to 1/q in that paper, so the description of the boundary needs to be reversed). Extreme q-Gibbs measures \({\mathbb {P}}_\mathbf{n }^{(q)}\) are parametrized by infinite sequences
Moreover, \(\lim _{N\rightarrow +\infty }\lambda ^{(N)}_j=n_j\) for each fixed \(j=1,2,\ldots \), where \(\lambda ^{(N)}_j\) come from the random configuration distributed according to \({\mathbb {P}}_\mathbf{n }^{(q)}\). It is not hard to show the following.
Proposition 8.3
The action of the Markov map \({\mathbb {L}}^{(q)}\) on extreme qGibbs measures corresponds to the left shift in the space of parameters:
In [17] a decomposition of the non-extreme q-Gibbs measures \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\) into the extreme ones \({\mathbb {P}}^{(q)}_\mathbf{n }\) is given in terms of a determinantal point process on the set of shifted labels. The shifted labels in our notation are \(n_1-1>n_2-2>\cdots \), and they form a random point configuration on \({\mathbb {Z}}\) whose correlation functions have a determinantal form. The action of \({\mathbb {L}}^{(q)}\) on \(\mathbf{n} \) from Proposition 8.3 removes the largest point in this determinantal process on \({\mathbb {Z}}\), and shifts all its other points by one to the right.
Question 2
How to explicitly link the action of \({\mathbb {L}}^{(q)}\) on \(\mathbf{n} \) with the modification of the parameters (8.5) of the determinantal point process describing \({\mathbb {P}}[(1,q,q^2,\ldots )\mid \rho ]\)? Does this correspondence (between the action on the parameters of the kernel and the action on the underlying random point configuration) survive any limit transition to more familiar determinantal point processes (e.g., random matrix spectra or Airy\(_2\))?
8.2 Limit \(q\rightarrow 1\) and action on Gibbs measures
The \(q\rightarrow 1\) limit of Theorem 8.2 can be obtained similarly to the argument in Sect. 6. Denote by \({\mathbb {L}}_{\tau }\), \(\tau \in {\mathbb {R}}_{\ge 0}\), the continuous-time Markov semigroup under which each particle \(\lambda ^{(k)}_j\) at each kth level of the interlacing array independently jumps to the left into one of the possible locations m, where
at rate k for each of these possible locations.
However, this definition presents an issue since in a generic interlacing array, under \({\mathbb {L}}_\tau \) infinitely many particles jump in finite time. Moreover, because for any \(k\in {\mathbb {Z}}_{\ge 1}\) jumps of \(\lambda ^{(k)}_j\) depend on the \((k+1)\)st level, one cannot simply restrict \({\mathbb {L}}_\tau \) to the first several levels. Therefore, we have to consider a smaller space of interlacing arrays:
Definition 8.4
Let the subset \({\mathcal {S}}^c\subset {\mathcal {S}}\) consist of interlacing arrays \(\{\lambda ^{(N)}_j\}_{1\le j\le N}\) satisfying \(\lambda ^{(N)}_j=0\) for all N and all \(J(N)\le j\le N\), where \(N-J(N)\rightarrow +\infty \) as \(N\rightarrow +\infty \).
For each fixed K, the restriction of \({\mathbb {L}}_\tau \) to
(that is, to the K leftmost diagonals) is a Markov process, in which only finitely many particles jump in finite time. For different K, these Markov processes are compatible. Therefore, \({\mathbb {L}}_\tau \) makes sense on the state space \({\mathcal {S}}^c\). Below we denote by \({\mathbb {L}}_{\tau }\) the Markov semigroup constructed in this manner.
Theorem 8.5
The action of the semigroup \({\mathbb {L}}_\tau \) on extreme Gibbs measures \({\mathbb {P}}[\vec {1}\mid \rho ]\), where \(\rho \) is a specialization as in (8.1)–(8.3) with \(\alpha _i^-=\beta _i^-=0\) for all i, \(\gamma ^-=0\), and \(\beta _1^+<1\), transforms the parameters of \(\rho \) exactly as in (8.5), but with q replaced by \(e^{-\tau }\).
Idea of proof
One can check that the Schur process \({\mathbb {P}}[\vec {1}\mid \rho ]\) with \(\alpha _i^-=\beta _i^-=0\) for all i, \(\gamma ^-=0\), and \(\beta _1^+<1\) is supported on the subset \({\mathcal {S}}^c\) described in Definition 8.4. Similarly to Sect. 6, we see that under the scaling \(q=e^{-\varepsilon }\), \(T=\lfloor \tau /\varepsilon \rfloor \), \(\varepsilon \rightarrow 0\), we have \(({\mathbb {L}}^{(q)})^T\rightarrow {\mathbb {L}}_{\tau }\). Next, the modification of the parameters (8.5) is a one-parameter semigroup. That is, applying \({\mathbb {L}}^{(q)}\) one more time replaces q everywhere in (8.5) by \(q^2\). Because \(q^T\sim e^{-\tau }\), we get the result. \(\square \)
In particular, \({\mathbb {L}}_{\tau }\) maps the push-block process of [16] (see Definition 8.8) backwards in time in the same sense as Theorem 1.
8.3 Iterated R maps
Consider the maps \(R^{(j)}_\alpha \) defined in Sect. 4. Similarly to Sect. 5.2, we can define the iterated R map \({\mathbb {R}}^{(q)}\) by
(this definition has the same formal meaning as for the map \({\mathbb {L}}^{(q)}\), see Sect. 5.2). The map \({\mathbb {R}}^{(q)}\) acts nicely on \(q^{-1}\)-Gibbs measures (i.e., corresponding to \(\vec c=(1,q^{-1},q^{-2},\ldots )\)). Namely, one can check that an analogue of Theorem 5.3 holds, with q replaced by \(q^{-1}\) in the definition of the harmonic functions and in (5.4). The \(q\rightarrow 1\) continuous-time limit \({\mathbb {R}}_{\tau }\) of \({\mathbb {R}}^{(q)}\) is also readily defined with the help of Definition 8.4—this is just the mirror image of \({\mathbb {L}}_{\tau }\) from Sect. 8.2, in which all particles jump to the right. One can obtain the following analogue of Theorems 8.2 and 8.5 for the action of \({\mathbb {R}}^{(q)}\) and \({\mathbb {R}}_{\tau }\) on \(q^{-1}\)-Gibbs Schur processes:
Theorem 8.6
Let \(\rho \) be a specialization as in (8.1)–(8.3) such that \(\alpha _i^{+}=0\) for all i. We have for all \(0<q<1\):
where \(\rho ^{(1/q)}\) has modified parameters as in (8.5), but with q replaced by 1/q. Moreover, if \(\alpha _i^+=\beta _i^+=0\) for all i, \(\gamma ^+=0\), and \(\beta _1^-<1\), then \({\mathbb {P}}[\vec {1}\mid \rho ]\, {\mathbb {R}}_{\tau } = {\mathbb {P}}[\vec {1}\mid \rho ^{(e^\tau )}]\), where \(\rho ^{(e^\tau )}\) is defined in a similar way.
Question 3
Is it possible to extend the definition of \({\mathbb {R}}_\tau \) to Schur processes with \(\gamma ^+>0\)? (This is equivalent to extending \({\mathbb {L}}_\tau \) to the case \(\gamma ^->0\).)
If such an extension is possible, then \({\mathbb {R}}_\tau \) would turn the time t in the Schur process \({\mathbb {P}}[\vec {1}\mid \rho _t]\) (with the Plancherel specialization \(\gamma ^+=t\) and all other parameters zero) into \(e^\tau t\), that is, forward. Note that this process would move infinitely many particles in finite time and move individual particles very far, too.
Recall that \({\mathbb {P}}[\vec {1}\mid \rho _t]\) can be generated by the push-block dynamics (Definition 8.8). Under this dynamics, the rightmost components \(\{\lambda ^{(N)}_1\}\) of the interlacing array evolve as a PushTASEP, a close relative of TASEP, but with a pushing mechanism [15, 16]. Therefore, a positive answer to Question 3 would lead to a continuous-time semigroup which maps PushTASEP forward in time.
8.4 Arrays of finite depth
Fix \(N\in {\mathbb {Z}}_{\ge 1}\) and let \({\mathcal {S}}^{\lambda ,N}\) be the space of interlacing arrays \(\lambda ^{(1)}\prec \ldots \prec \lambda ^{(N-1)}\prec \lambda ^{(N)} \) with fixed top row \(\lambda ^{(N)}=\lambda \), where \(\lambda =(\lambda _1\ge \cdots \ge \lambda _N\ge 0 )\), \(\lambda _i\in {\mathbb {Z}}\). Fix pairwise distinct spectral parameters \(c_1,\ldots ,c_N>0 \).
Recall the single level Markov maps \(L^{(j)}_{\alpha }\), \(R^{(j)}_{\alpha }\), \(j=1,\ldots ,N1\), defined in Sect. 4. Consider the product space \(\widetilde{{\mathcal {S}}}^{\lambda ,N}:= {\mathcal {S}}^{\lambda ,N}\times \mathfrak {S}_N\), where \(\mathfrak {S}_N\) is the symmetric group. For each elementary permutation \(s_i=(i,i+1)\), \(1\le i\le N1\), define the Markov map \(T(s_i)\) on \(\widetilde{{\mathcal {S}}}^{\lambda ,N}\) as follows. On the \(\mathfrak {S}_N\) part it deterministically acts by \(\sigma \mapsto s_i \sigma \). On each fiber \({\mathcal {S}}^{\lambda ,N}\times \{\sigma \}\) it acts as the Markov map
Note that the maps \(T(s_i)\) do not satisfy the symmetric group relations when acting on \(\widetilde{{\mathcal {S}}}^{\lambda ,N}\) (in particular, \(T(s_i)^2\) is not the identity).
Let \({\mathbb {M}}^{\lambda }_{\vec c}\) denote the \(\vec c\)-Gibbs measure on \({\mathcal {S}}^{\lambda ,N}\):
Note that in contrast with arrays of infinite depth (cf. Sect. 2.7), here the \(\vec c\)-Gibbs property determines the measure \({\mathbb {M}}_{\vec c}^{\lambda }\) uniquely.
Let \(w_N=(N,N-1,\ldots ,2,1 )\) be the longest element in the symmetric group \(\mathfrak {S}_N\), and \(w_N=s_{i_1}s_{i_2}\ldots s_{i_{N(N-1)/2}} \), \(1\le i_k\le N-1\), be its reduced word decomposition, which is also assumed fixed. Define
(in this notation we do not indicate the dependence on the choice of a particular reduced word). Clearly, \({\mathbb {T}}^2\) acts as the identity on the \(\mathfrak {S}_N\) part of \(\widetilde{{\mathcal {S}}}^{\lambda ,N}\). Moreover, \({\mathbb {T}}^2\) preserves the measure \({\mathbb {M}}_{\vec c}^{\lambda }\) viewed as the measure on \({\mathcal {S}}^{\lambda ,N}\times \{ e \}\). Indeed, this is because by Proposition 4.7 each \(T(s_i)\) maps \({\mathbb {M}}_{\sigma \vec c}^{\lambda }\) to \({\mathbb {M}}_{s_i \sigma \vec c}^{\lambda }\). The map \({\mathbb {T}}^2\) can be viewed as a sampling algorithm for the measure \({\mathbb {M}}_{\vec c}^{\lambda }\):
Proposition 8.7
Start with any (nonrandom) interlacing array \((\lambda ^{(1)}\prec \cdots \prec \lambda ^{(N-1)}\prec \lambda )\) and apply the Markov map \({\mathbb {T}}^{2k}\) to it. The distribution of the resulting random interlacing array converges, as \(k\rightarrow +\infty \), to \({\mathbb {M}}_{\vec c}^{\lambda }\) in the total variation norm.
Proof
This follows from the standard convergence theorem for Markov chains on finite state spaces. Indeed, the Markov chain corresponding to \({\mathbb {T}}^{2k}\) is:
- aperiodic, since \({\mathbb {T}}^2\) assigns positive probability to the trivial move;
- irreducible, because \({\mathbb {T}}^2\) assigns positive probability to changing only one entry \(\lambda ^{(k)}_j\), \(1\le j\le k\le N-1\), in the interlacing array (the set \({\mathcal {S}}^{\lambda ,N}\) is connected by such individual changes).
This completes the proof. \(\square \)
Question 4
How fast is the convergence in Proposition 8.7, depending on the system size (which is \(\sim N \lambda _1\))? What is the mixing time of \({\mathbb {T}}\)?
8.5 qdistributed lozenge tilings
Let us now consider a concrete case of the setup outlined in the previous Sect. 8.4. Fix N and the top row \(\lambda =(b,b,\ldots ,b,0,0\ldots ,0)\), where b repeats a times and 0 repeats c times, with \(a+c=N\). Then interlacing arrays of depth N and top row \(\lambda \) are in bijection with lozenge tilings of a hexagon with sides a, b, c, a, b, c, or, equivalently, with boxed plane partitions (see Fig. 10 for an illustration and, e.g., [25] for more details).
Let \({\mathbb {M}}_{q^{-1}}\) and \({\mathbb {M}}_{q}\) denote the measures under which the probability weight of a lozenge tiling is proportional to \(q^{-\mathrm {vol}}\) or \(q^{\mathrm {vol}}\), respectively, where the volume is defined in (5.1). These two measures are \(\vec c\)-Gibbs with \(\vec c=(1,q,q^2,\ldots ,q^{N-1} )\) and \(\vec c=(q^{N-1},\ldots ,q,1 )\), respectively (recall that multiplying \(\vec c\) by a scalar does not change the \(\vec c\)-Gibbs property).
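As a quick sanity check of this encoding, one can enumerate the interlacing arrays directly for small side lengths. The sketch below is our own illustration; it assumes, per (5.1), that \(\mathrm {vol}\) is the sum of all entries of the array below the fixed top row (the usual convention up to an additive constant), and it checks the tiling count at \(q=1\) against MacMahon's box formula.

```python
from itertools import product

def arrays_with_top_row(top):
    """Enumerate interlacing arrays (Gelfand-Tsetlin patterns) with fixed
    top row `top` (a weakly decreasing list of nonnegative integers).
    Each array is returned as a list of rows, bottom row first."""
    if len(top) == 1:
        yield [list(top)]
        return
    # Row below `top`: entries x_j with top[j] >= x_j >= top[j+1]; the
    # interlacing inequalities make x automatically weakly decreasing.
    for row in product(*[range(top[j + 1], top[j] + 1)
                         for j in range(len(top) - 1)]):
        for rest in arrays_with_top_row(list(row)):
            yield rest + [list(top)]

def partition_function(a, b, c, q):
    """Sum of q^vol over lozenge tilings of the (a, b, c) hexagon, encoded
    as interlacing arrays of depth a + c with top row (b,...,b,0,...,0);
    vol is taken as the sum of all entries below the fixed top row."""
    top = [b] * a + [0] * c
    total = 0.0
    for arr in arrays_with_top_row(top):
        vol = sum(sum(r) for r in arr[:-1])  # top row is fixed; exclude it
        total += q ** vol
    return total

# At q = 1 the partition function counts tilings; MacMahon's box formula
# gives 20 tilings of the 2 x 2 x 2 hexagon.
count = partition_function(2, 2, 2, 1.0)
z_vol = partition_function(2, 2, 2, 0.5)      # weights q^{vol} at q = 1/2
z_inv_vol = partition_function(2, 2, 2, 2.0)  # weights q^{-vol} at q = 1/2
print(count, z_vol, z_inv_vol)
```

The brute-force enumeration is exponential in the hexagon size and is only meant for such toy checks.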
Take the reduced word
and let \({\mathbb {T}}\) be the corresponding Markov map (8.7). One readily sees that the action of \({\mathbb {T}}\) on \({\mathbb {M}}_{q^{-1}}\):
- turns \({\mathbb {M}}_{q^{-1}}\) into \({\mathbb {M}}_q\);
- almost surely moves vertical lozenges (see Fig. 10) to the left, because in (8.6) we always choose the option L;
- changes the \((N-1)\)st row of the tiling only once, the \((N-2)\)nd row only twice, and so on.
An exact sampling algorithm for \({\mathbb {M}}_{q^{-1}}\) was presented in [19]. Starting with the exact sample of \({\mathbb {M}}_{q^{-1}}\) (Fig. 10, left) and applying \({\mathbb {T}}\), we obtain an exact sample of \({\mathbb {M}}_{q}\) (Fig. 10, right), while randomly moving the vertical lozenges to the left. An implementation of this mapping \({\mathbb {M}}_{q^{-1}}\,{\mathbb {T}}={\mathbb {M}}_q\) with all the intermediate steps can be found online [70].
The map \({\mathbb {T}}\) works in the same way for an arbitrary top row \(\lambda \) (when the polygon being tiled is not necessarily a hexagon, but can be a general sawtooth domain as in, e.g., [69]). The advantage of the hexagon case is the presence of the exact sampling algorithm [19].
Question 5
Consider lozenge tilings of growing sawtooth domains with top rows \(\lambda =\lambda (N)\) which depend on N in some way. Can the symmetry of the \(q^{\pm \mathrm {vol}}\) measures manifested by the map \({\mathbb {T}}\) be utilized to obtain the limit shape and fluctuations of the leftmost piece of the frozen boundary as \(N\rightarrow +\infty \)?
Here by the leftmost piece we mean the part of the curve separating the leftmost region occupied by only vertical lozenges from the liquid region. Existence and characterization of limit shapes for the \(q^{\pm \mathrm {vol}}\) measures are due to [32, 54], and some explicit formulas were obtained recently in [37].
8.6 Dynamics in the bulk
Consider the Schur process \({\mathbb {P}}[\vec {1}\mid \rho _t]\) (also sometimes known as the Plancherel measure for the infinite-dimensional unitary group). It is convenient to use the lozenge tiling interpretation of interlacing arrays as in Sect. 8.5 above. From [16, 20] it is known that as N, k, and t go to infinity proportionally to each other, the local lattice configuration of lozenges around each \(\lambda ^{(N)}_k\) converges to an ergodic translation invariant Gibbs measure on lozenge tilings of the whole plane (see Fig. 11 for an illustration). Such ergodic measures form a two-parameter family [79]. As parameters one can take the densities of two of the three types of lozenges. We remark that the ergodic Gibbs measures are far from being independent Bernoulli ones. In particular, the joint correlations of lozenges possess a determinantal structure [67].
We say that (N, k, t) correspond to the bulk of the system if the limiting density of each of the types of lozenges around \(\lambda ^{(N)}_{k}(t)\) is positive. One can also consider the bulk limit of the dynamics \({\mathbb {L}}_{\tau }\). Because \({\mathbb {L}}_{\tau }\) maps the Schur process \({\mathbb {P}}[\vec {1}\mid \rho _t]\) to \({\mathbb {P}}[\vec {1}\mid \rho _{e^{-\tau }t}]\) and \(t\rightarrow +\infty \), we need to scale \(\tau \) as \(\tau =\uptau /t\) (here \(\uptau \in {\mathbb {R}}_{>0}\) is the new scaled time which stays fixed). Then \(e^{-\uptau /t}\,t\sim \left( 1-\frac{\uptau }{t} \right) t=t-\uptau \). Considering \({\mathbb {L}}_{\uptau /t}\) is equivalent to slowing down all the jump rates in \({\mathbb {L}}\) by the factor of t. Since we are looking around level N and N grows proportionally to t, the slowed-down dynamics in the bulk will have equal jump rates on all levels at finite distance from the Nth one.
Therefore, under the bulk limit of \({\mathbb {L}}_{\tau }\), each vertical lozenge can move into one of the holes to the left of it (with the requirement that the interlacing is preserved), at a constant rate per hole (for simplicity, we can assume that this rate is equal to 1). See Fig. 11 for an illustration.
Consider the combination of the dynamics \({\mathbb {L}}_{\uptau l/t}\) and \({\mathbb {R}}_{\uptau r/t}\) running in parallel,^{Footnote 8} where \(l,r>0\) are parameters. In the bulk limit of this combination, we readily obtain the Hammersley-type process in the bulk with two-sided jumps. This two-sided dynamics was introduced and studied in [83], where it was shown that it preserves the ergodic Gibbs measures on tilings of the whole plane. We see that our Markov maps \({\mathbb {L}}_{\tau }\) and \({\mathbb {R}}_\tau \) can be viewed as pre-bulk-limit versions of the two-sided Hammersley-type processes of [83].
Let us now discuss connections to the push-block dynamics of [16]. For completeness, let us recall its definition:
Definition 8.8
(Push-block dynamics) Each vertical lozenge has an independent exponential clock of rate 1. When the clock rings, the lozenge tries to move to the right by one. If it is blocked by a vertical lozenge from below (see the square mark in Fig. 11), then the jump is suppressed. If there are vertical lozenges above the one moving, then they also get pushed to the right by one.
A one-sided particular case of the Hammersley-type processes is the push-block dynamics, up to rotating the picture by \(\pi /3\) and focusing on the yellow lozenges in Fig. 11 instead of the vertical (gray) ones.
Thus, in the bulk limit Theorem 8.5 informally turns into the statement that one can run the one-sided Hammersley-type process and the push-block dynamics in parallel (both in terms of the vertical lozenges), and the resulting process preserves the ergodic Gibbs measures. This statement follows from [83], as well as from its rather straightforward generalization given next:
Proposition 8.9
Running six one-sided Hammersley-type processes in parallel, where each individual process moves one type of lozenges in one of the directions \(e^{\mathbf{i} \pi k/3}\), \(0\le k\le 5\), at a specified rate \(\upalpha _k \ge 0\), preserves the ergodic Gibbs measures on tilings of the whole plane.
8.7 Branching graph perspective
Recall that by \({\mathcal {S}}^c\) we denote the set of all interlacing arrays of infinite depth which have many zeroes along the left border (Definition 8.4). Let us explain how the Markov maps \({\mathbb {L}}_{\tau }\) can be utilized to equip \({\mathcal {S}}^c\times {\mathbb {R}}\) with the structure of an \({\mathbb {R}}\)-graded projective system in the sense of [23]. Projective systems generalize branching graphs such as the Young graph, and the latter play a fundamental role in Asymptotic Representation Theory [24, 84]. The definitions and questions in this subsection are motivated by the connection to branching graphs.
Remark 8.10
The set \({\mathcal {S}}^c\times {\mathbb {R}}\) is “larger” than the more well-studied branching graphs. Namely, in the Young and Gelfand–Tsetlin graphs the vertices are indexed by Young diagrams and signatures, respectively (a signature is a tuple \((\nu _1\ge \cdots \ge \nu _N )\), \(\nu _i\in {\mathbb {Z}}\)), while in \({\mathcal {S}}^c\times {\mathbb {R}}\) the vertices are whole infinite collections of interlacing diagrams \(\lambda ^{(1)}\prec \lambda ^{(2)}\prec \cdots \). This makes it hard to predict which properties of the Young and Gelfand–Tsetlin graphs could translate to \({\mathcal {S}}^c\times {\mathbb {R}}\).
Let \(M_s\), \(s\in {\mathbb {R}}\), be probability measures on \({\mathcal {S}}\) supported by \({\mathcal {S}}^c\) (examples include the one-sided Schur measures as in Theorem 8.5). We call the family \(\{M_s\}_{s\in {\mathbb {R}}}\) coherent if for any \(\tau \ge 0\) and \(s\in {\mathbb {R}}\) we have \(M_s\,{\mathbb {L}}_{\tau }=M_{s-\tau }\).
Coherent families are sometimes known as entrance laws, cf. [38]. Clearly, coherent families form a convex set. Its extreme elements are, by definition, those which cannot be represented as nontrivial convex combinations of other coherent families.
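As a toy illustration of coherence (and of the entrance law interpretation), consider the following hypothetical two-state sketch in Python; the family and map below are stand-ins chosen for illustration, not the actual maps \({\mathbb {L}}_\tau \) on interlacing arrays. Here \(M_s=(1-e^s,e^s)\) for \(s\le 0\), and the map \(L_\tau \) sends state 1 to state 0 with probability \(1-e^{-\tau }\), so that applying the map moves the parameter backwards.

```python
import numpy as np

def L(tau):
    """Toy Markov map (stochastic matrix) on {0, 1}: from state 1,
    fall to state 0 with probability 1 - e^{-tau}; state 0 is absorbing."""
    q = 1.0 - np.exp(-tau)
    return np.array([[1.0, 0.0],
                     [q, 1.0 - q]])

def M(s):
    """Toy family of probability distributions on {0, 1}, defined for s <= 0."""
    return np.array([1.0 - np.exp(s), np.exp(s)])

# Coherence: applying the map moves the parameter backwards in s.
s, tau = -0.5, 0.3
assert np.allclose(M(s) @ L(tau), M(s - tau))
```

The check works for all \(s\le 0\) and \(\tau \ge 0\) simultaneously, which is the defining feature of a coherent family (entrance law) as opposed to a single fixed point.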
Question 6
How to characterize extreme coherent families? Can every coherent family be represented in a unique way as a (continual) convex combination of the extremes?
Let us present an example of a coherent family based on Schur processes. Take \(M_s^{\mathrm {Schur}}={\mathbb {P}}[\vec {1}\mid \rho (s)]\), where \(\rho (s)\) is a specialization with \(\alpha _i^{\pm }(s)=\beta _i^{-}(s)=0\) for all i, \(\gamma ^{-}(s)=0\), and the other parameters given by \(\beta _i^{+}(s)=e^{s}\beta _i^{+}/(1+e^{s}\beta _i^{+})\) and \(\gamma ^{+}(s)=e^{s}\gamma ^{+}\),
where \(\beta _i^{+}\) and \(\gamma ^{+}\) are fixed and satisfy (8.1). The fact that the family \(\{ M_s^{\mathrm {Schur}} \}\) is indeed coherent follows from Theorem 8.5.
Let us discuss two particular examples.

When \(\gamma ^+=1\) and all other parameters are zero, \(M_s^{\mathrm {Schur}}\) is the family of single-time distributions of the push-block dynamics under the logarithmic time change \(s=\log t\).

When \(\beta _1^+=\beta \in (0,1)\) and all other parameters are zero, the random interlacing array corresponding to \(M_s^{\mathrm {Schur}}\) has the form \(\lambda ^{(N)}=(1^{X_N}0^{N-X_N})\), where \((X_1,X_2,\ldots )\) is the trajectory of the simple random walk with steps 0 and 1 taken with probabilities \(1-\beta _1^+(s)\) and \(\beta _1^+(s)\), respectively. The parameter \(\beta _1^+(s)\) interpolates between 0 and 1 at \(s=-\infty \) and \(s=+\infty \), respectively. The map \({\mathbb {L}}_\tau \) thus provides a coupling between these simple random walk trajectories with varying probability of the up step. The concrete action of \({\mathbb {L}}_\tau \) in this example leads to Proposition 2 formulated in the Introduction.
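For a fixed value of the up-step probability, the random interlacing array in this example is easy to sample directly. The following Python sketch (the function name is ours, for illustration only) draws a trajectory \((X_1,\ldots ,X_N)\) and assembles the rows \(\lambda ^{(n)}=(1^{X_n}0^{n-X_n})\); the coupling between different up-step probabilities provided by \({\mathbb {L}}_\tau \) is not reproduced here.

```python
import random

def sample_array(beta, N, seed=0):
    """Sample X_1, ..., X_N, a walk with i.i.d. steps 0 or 1
    (an up step has probability beta), and assemble the rows
    lambda^(n) = (1^{X_n} 0^{n - X_n}) of the interlacing array."""
    rng = random.Random(seed)
    x, rows = 0, []
    for n in range(1, N + 1):
        x += 1 if rng.random() < beta else 0   # X_n - X_{n-1} is 0 or 1
        rows.append([1] * x + [0] * (n - x))
    return rows

rows = sample_array(beta=0.5, N=6)
# Row n has length n, and interlacing holds automatically since
# consecutive values X_n differ by at most one.
assert all(len(r) == n + 1 for n, r in enumerate(rows))
assert all(sum(rows[n + 1]) - sum(rows[n]) in (0, 1) for n in range(len(rows) - 1))
```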
Question 7
Are the coherent families \(\{ M_s^{\mathrm {Schur}}\}\) extreme? Are there other interesting (extreme or nonextreme) coherent families?
Let us focus on the case \(\{M_s^{\mathrm {Schur}}\}\) with \(\gamma ^+=1\) and all other parameters zero. The structure of a projective family allows one to define, for each \(s\in {\mathbb {R}}\), the up-down Markov process on \({\mathcal {S}}^c\) which preserves each \(M_s^{\mathrm {Schur}}\) (see [21]). In more detail, the forward Markov generator is defined as
One can check that this is not the same forward evolution as the push-block generator (under any time change). In particular, \({\mathbb {L}}_{s,s+ds}^{up}\) is time-inhomogeneous. Therefore, the up-down Markov process arising from the branching graph formalism does not reduce (in restriction to the leftmost particles \(\lambda ^{(N)}_{N}\)) to the stationary dynamics from Sect. 7.
Question 8
The up-down Markov chains associated with distinguished non-extreme coherent families on well-studied branching graphs converge to infinite-dimensional diffusions on the boundary (e.g., [21, 68]). Is there such a limiting procedure for the up-down processes associated with \(\{M_s^{\mathrm {Schur}}\}\) or other coherent families on \({\mathcal {S}}^c\times {\mathbb {R}}\)?
Viewing \({\mathcal {C}}\) as a subset of \({\mathcal {S}}^c\), one can similarly define the projective system structure on \({\mathcal {C}}\times {\mathbb {R}}\) associated with the Markov maps \(\mathbf{L} _\tau \) (Definition 6.2). The restrictions of \(\{M_s^{\mathrm {Schur}}\}\) form coherent families on \({\mathcal {C}}\times {\mathbb {R}}\), and all the problems formulated in this subsection also make sense for the smaller object \({\mathcal {C}}\times {\mathbb {R}}\). Note that the up-down Markov chain on each floor \({\mathcal {C}}\times \{s\}\) with \(\gamma ^+=1\) (and all other parameters zero) preserves the TASEP distribution \(\upmu _{e^s}\), but it is not the same as the stationary dynamics discussed in Sect. 7.
8.8 Lifting to additional parameters
The definition of the local Markov maps \(L^{(j)}_{\alpha }\) and \(R^{(j)}_{\alpha }\) which randomly change a single level of an interlacing array is inspired by the bijectivization of a degenerate case of the Yang–Baxter equation. Beyond this degenerate case associated with the Schur symmetric polynomials, the bijectivization can be developed to include models associated with spin Hall–Littlewood or spin \(q\)-Whittaker symmetric functions [29, 30]. A scheme of symmetric functions is given in Fig. 12.
Let us consider three setups. First, in the spin Hall–Littlewood case, the maps \(L^{(j)}_\alpha \) and \(R^{(j)}_\alpha \) can be obtained by considering sequences of local transitions given in Figures 4 and 5 in [30] (see Sect. 4.2 for more details). Therefore, one can potentially define Markov maps preserving the class of probability measures on interlacing arrays satisfying a version of the Gibbs property associated with the spin Hall–Littlewood functions. These Gibbs measures include the subclass of spin Hall–Littlewood processes. The Markov maps on the spin Hall–Littlewood processes could project (in a way similar to how \({\mathbb {L}}_\tau \) leads to \(\mathbf{L} _\tau \)) into maps acting nicely on distributions of the stochastic six-vertex model and the ASEP with step initial data.
Second, on the spin \(q\)-Whittaker side the TASEP is generalized to the \(q\)-TASEP [14, 77] and further to the \(q\)-Hahn TASEP [34, 71]. A continuous-time version of the \(q\)-Hahn TASEP can be found in [6, 82].
Question 9
Do there exist Markov maps on (spin) \(q\)-Whittaker processes mapping the time parameter in the \(q\)-TASEP or the (continuous-time) \(q\)-Hahn TASEP backwards?
Finally, let us discuss a setting which does not immediately fit into the scheme of Fig. 12 but is also of interest. Configurations of the (not necessarily stochastic) six-vertex model with domain wall boundary conditions (e.g., see [74]) can be encoded as finite-depth interlacing arrays of strict partitions with a fixed top row. The Yang–Baxter equation swapping spectral parameters in this model can potentially be bijectivized in the same way as in [30], which should lead to Markov maps acting nicely on the distribution of the six-vertex model. (In the Schur case this is described in Sect. 8.4.)
Question 10
Can these Markov maps be taken to the continuous-time limit similarly to the \(q\rightarrow 1\) limit described in Sect. 6? If so, this would lead to a new nonlocal sampling algorithm for the distribution of the homogeneous (i.e., with equal spectral parameters) six-vertex model with domain wall boundary conditions. The bulk limit of this latter algorithm should presumably coincide with the Markov process from [13] preserving the distribution of the six-vertex model on a torus.
Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Notes
In other words, \({\mathcal {C}}\) consists of configurations \(\{x_1>x_2>x_3>\cdots \}\subset {\mathbb {Z}}\) which possess a rightmost particle \(x_1\), and such that \(x_N=-N\) for all N large enough.
Throughout the paper \(\mathbb {1}_{E}\) stands for the indicator function of the event E.
A Markov map is the same as a stochastic matrix or a onestep transition operator of a Markov chain (it is also sometimes called “link” in the literature). An application of a Markov map is a random update of the underlying configuration. At the same time, each Markov map is a deterministic linear operator in the space of probability distributions on configurations. When applying a map M to a probability measure \(\pi \), we write this as \(\pi \mapsto \pi M\).
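As a minimal numerical illustration of this footnote (using a generic stochastic matrix, unrelated to the specific maps in the paper), the deterministic action \(\pi \mapsto \pi M\) is just a vector–matrix product:

```python
import numpy as np

# A Markov map on a 3-point space: row i is the jump distribution from state i.
M = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
assert np.allclose(M.sum(axis=1), 1.0)  # stochastic: each row sums to 1

pi = np.array([0.2, 0.3, 0.5])  # a probability measure on {0, 1, 2}
pi_new = pi @ M                 # the deterministic action pi -> pi M
assert np.allclose(pi_new.sum(), 1.0)
# pi_new = [0.1, 0.25, 0.65]
```

Applying the map to a random configuration means sampling a jump from row \(i\) when the current state is \(i\); applying it to a measure means the linear update above.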
The mechanism of jumping into a hole selected uniformly at random is similar to the Hammersley process [2, 49]. Therefore, we will sometimes refer to \(\mathbf{L} _\tau \) (as well as its two-dimensional version \({\mathbb {L}}_\tau \) discussed in Sect. 8.1 below) as the backwards Hammersley-type process (BHP, for short).
We need to make sure that the BHP evolution is well-defined, so the initial configuration must be in \({\mathcal {C}}\subset \overline{{\mathcal {C}}}\).
References
Aggarwal, A., Borodin, A., Bufetov, A.: Stochasticization of solutions to the Yang–Baxter equation. Ann. Henri Poincare 20(8), 2495–2554 (2019). arXiv:1810.04299 [math.PR]
Aldous, D., Diaconis, P.: Hammersley’s interacting particle process and longest increasing subsequences. Probab. Theory Relat. Fields 103(2), 199–213 (1995)
Andjel, E., Guiol, H.: Longrange exclusion processes, generator and invariant measures. Ann. Probab. 33(6), 2314–2354 (2005). arXiv:math/0411655 [math.PR]
Baik, J., Liu, Z.: Multipoint distribution of periodic TASEP. J. AMS (2019). arXiv:1710.03284 [math.PR]
Balazs, M., Bowen, R.: Product blocking measures and a particle system proof of the Jacobi triple product. Ann. Inst. Henri Poincaré B 54(1), 514–528 (2018). arXiv:1606.00639 [math.PR]
Barraquand, G., Corwin, I.: The \(q\)-Hahn asymmetric exclusion process. Ann. Appl. Probab. 26(4), 2304–2356 (2016). arXiv:1501.03445 [math.PR]
Basu, R., Ganguly, S.: Time correlation exponents in last passage percolation (2018). arXiv preprint arXiv:1807.09260 [math.PR]
Basu, R., Ganguly, S., Hammond, A.: Fractal geometry of Airy\(_2\) processes coupled via the Airy sheet (2019). arXiv preprint arXiv:1904.01717 [math.PR]
Belitsky, V., Schütz, G.: Selfduality and shock dynamics in the \(n\)component priority ASEP. Stoch. Process. Appl. 128(4), 1165–1207 (2018). arXiv:1606.04587 [math.PR]
Benassi, A., Fouque, J.P.: Hydrodynamical limit for the asymmetric simple exclusion process. Ann. Probab. 15(2), 546–560 (1987)
Borodin, A.: Schur dynamics of the Schur processes. Adv. Math. 228(4), 2268–2291 (2011). arXiv:1001.3442 [math.CO]
Borodin, A.: On a family of symmetric rational functions. Adv. Math. 306, 973–1018 (2017). arXiv:1410.0976 [math.CO]
Borodin, A., Bufetov, A.: An irreversible local Markov chain that preserves the six vertex model on a torus. Ann. Inst. Henri Poincaré B 53(1), 451–463 (2017). arXiv:1509.05070 [math-ph]
Borodin, A., Corwin, I.: Macdonald processes. Probab. Theory Relat. Fields 158, 225–400 (2014). arXiv:1111.4408 [math.PR]
Borodin, A., Ferrari, P.: Large time asymptotics of growth models on space-like paths I: PushASEP. Electron. J. Probab. 13, 1380–1418 (2008). arXiv:0707.2813 [math-ph]
Borodin, A., Ferrari, P.: Anisotropic growth of random surfaces in \(2+1\) dimensions. Commun. Math. Phys. 325, 603–684 (2014). arXiv:0804.3035 [math-ph]
Borodin, A., Gorin, V.: Markov processes of infinitely many nonintersecting random walks. Probab. Theory Relat. Fields 155(3–4), 935–997 (2013). arXiv:1106.1299 [math.PR]
Borodin, A., Gorin, V.: Lectures on integrable probability. In: Probability and Statistical Physics in St. Petersburg, vol. 91. Proceedings of Symposia in Pure Mathematics, pp. 155–214. AMS (2016). arXiv:1212.3351 [math.PR]
Borodin, A., Gorin, V., Rains, E.: \(q\)-Distributions on boxed plane partitions. Sel. Math. 16(4), 731–789 (2010). arXiv:0905.0679 [math-ph]
Borodin, A., Kuan, J.: Asymptotics of Plancherel measures for the infinitedimensional unitary group. Adv. Math. 219(3), 894–931 (2008). arXiv:0712.1848 [math.RT]
Borodin, A., Olshanski, G.: Infinitedimensional diffusions as limits of random walks on partitions. Probab. Theory Relat. Fields 144(1), 281–318 (2009). arXiv:0706.1034 [math.PR]
Borodin, A., Olshanski, G.: The boundary of the Gelfand–Tsetlin graph: a new approach. Adv. Math. 230, 1738–1779 (2012). arXiv:1109.1412 [math.CO]
Borodin, A., Olshanski, G.: The Young bouquet and its boundary. Mosc. Math. J. 13(2), 193–232 (2013). arXiv:1110.4458 [math.RT]
Borodin, A., Olshanski, G.: Representations of the Infinite Symmetric Group, vol. 160. Cambridge University Press, Cambridge (2016)
Borodin, A., Petrov, L.: Integrable probability: from representation theory to Macdonald processes. Probab. Surv. 11, 1–58 (2014). arXiv:1310.8007 [math.PR]
Borodin, A., Petrov, L.: Nearest neighbor Markov dynamics on Macdonald processes. Adv. Math. 300, 71–155 (2016). arXiv:1305.5501 [math.PR]
Borodin, A., Wheeler, M.: Spin \(q\)-Whittaker polynomials (2017). arXiv preprint arXiv:1701.06292 [math.CO]
Brankov, J., Priezzhev, V., Schadschneider, A., Schreckenberg, M.: The Kasteleyn model and a cellular automaton approach to traffic flow. J. Phys. A Math. Gen. 29(10), L229–L235 (1996). arXiv:cond-mat/9512062
Bufetov, A., Mucciconi, M., Petrov, L.: Yang–Baxter random fields and stochastic vertex models. Adv. Math. (2019). arXiv preprint arXiv:1905.06815 [math.PR]
Bufetov, A., Petrov, L.: Yang–Baxter field for spin Hall–Littlewood symmetric functions. Forum Math. Sigma 7, e39 (2019). arXiv:1712.04584 [math.PR]
Chhita, S., Ferrari, P., Spohn, H.: Limit distributions for KPZ growth models with spatially homogeneous random initial conditions. Ann. Appl. Probab. 28(3), 1573–1603 (2018). arXiv:1611.06690 [math.PR]
Cohn, H., Kenyon, R., Propp, J.: A variational principle for domino tilings. J. AMS 14(2), 297–346 (2001). arXiv:math/0008220 [math.CO]
Corwin, I.: The Kardar–Parisi–Zhang equation and universality class. Random Matrices Theory Appl. 1, 1130001 (2012). arXiv:1106.1596 [math.PR]
Corwin, I.: The \(q\)-Hahn Boson process and \(q\)-Hahn TASEP. Int. Math. Res. Notices rnu094 (2014). arXiv:1401.3321 [math.PR]
Corwin, I., Hammond, A.: Brownian Gibbs property for Airy line ensembles. Invent. Math. 195(2), 441–508 (2014). arXiv:1108.2291 [math.PR]
Dauvergne, D., Ortmann, J., Virag, B.: The directed landscape (2018). arXiv preprint arXiv:1812.00309 [math.PR]
Di Francesco, P., Guitter, E.: A tangent method derivation of the arctic curve for \(q\)-weighted paths with arbitrary starting points. J. Phys. A 52(11), 115205 (2019). arXiv:1810.07936 [math-ph]
Dynkin, E.: Sufficient statistics and extreme points. Ann. Probab. 6, 705–730 (1978)
Ferrari, P.: The universal Airy\(_1\) and Airy\(_2\) processes in the Totally Asymmetric Simple Exclusion Process. In: Baik, J., Kriecherbauer, T., Li, L.C., McLaughlin, K.T.R., Tomei, C. (eds.) Integrable Systems and Random Matrices: In Honor of Percy Deift. Contemporary Math., pp. 321–332. AMS (2008). arXiv:math-ph/0701021
Ferrari, P., Occelli, A.: Time-time covariance for last passage percolation with generic initial profile. Math. Phys. Anal. Geom. 22(22), 1 (2019). arXiv:1807.02982 [math-ph]
Ferrari, P., Spohn, H.: On time correlations for KPZ growth in one dimension. SIGMA 12, 74 (2016). arXiv:1602.00486 [math-ph]
Ferrari, P.A.: Limit theorems for tagged particles. Markov Process. Relat. Fields 2(1), 17–40 (1996)
Ferrari, P.A.: TASEP hydrodynamics using microscopic characteristics. Probab. Surv. 15, 1–27 (2018)
Ferrari, P.A., Martin, J.: Multiclass processes, dual points and M/M/1 queues (2005). arXiv preprint arXiv:mathph/0509045
Fulton, W.: Young Tableaux with Applications to Representation Theory and Geometry. Cambridge University Press. ISBN: 0521567246 (1997)
Gorin, V.: The \(q\)-Gelfand–Tsetlin graph, Gibbs measures and \(q\)-Toeplitz matrices. Adv. Math. 229(1), 201–266 (2012). arXiv:1011.1769 [math.RT]
Gorin, V., Olshanski, G.: A quantization of the harmonic analysis on the infinitedimensional unitary group. J. Funct. Anal. 270(1), 375–418 (2016). arXiv:1504.06832 [math.RT]
Guiol, H.: Un résultat pour le processus d’exclusion à longue portée [A result for the longrange exclusion process]. Ann. l’Institut Henri Poincare (B) Probab. Stat. 33(4), 387–405 (1997)
Hammersley, J.M.: A few seedlings of research. In: Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 345–394 (1972)
Johansson, K.: Shape fluctuations and random matrices. Commun. Math. Phys. 209(2), 437–476 (2000). arXiv:math/9903134 [math.CO]
Johansson, K.: The twotime distribution in geometric lastpassage percolation (2018). arXiv preprint arXiv:1802.00729 [math.PR]
Johansson, K.: The long and short time asymptotics of the twotime distribution in local random growth (2019). arXiv preprint arXiv:1904.08195 [math.PR]
Johansson, K., Rahman, M.: Multitime distribution in discrete polynuclear growth. Commun. Pure Appl. Math. (2020). arXiv:1906.01053 [math.PR]
Kenyon, R., Okounkov, A.: Limit shapes and the complex Burgers equation. Acta Math. 199(2), 263–302 (2007). arXiv:math-ph/0507007
Kerov, S., Okounkov, A., Olshanski, G.: The boundary of the Young graph with Jack edge multiplicities. Int. Math. Res. Notices 4, 173–199 (1998). arXiv:q-alg/9703037
Li, H., Petrov, L.: Computer simulation of the backwards TASEP evolution. https://lpetrov.cc/simulations/20190626backtasep/ (2019)
Liggett, T.: Interacting Particle Systems. Springer, Berlin (2005)
Liggett, T.M.: Ergodic theorems for the asymmetric simple exclusion process. Trans. Am. Math. Soc. 213, 237–261 (1975)
MacDonald, C., Gibbs, J.: Concerning the kinetics of polypeptide synthesis on polyribosomes. Biopolymers 7(5), 707–725 (1969)
MacDonald, C., Gibbs, J., Pipkin, A.: Kinetics of biopolymerization on nucleic acid templates. Biopolymers 6(1), 1–25 (1968)
Macdonald, I.G.: Symmetric Functions and Hall Polynomials, 2nd edn. Oxford University Press, Oxford (1995)
Matetski, K., Quastel, J., Remenik, D.: The KPZ fixed point (2017). arXiv preprint arXiv:1701.00018 [math.PR]
Mkrtchyan, S., Petrov, L.: GUE corners limit of \(q\)-distributed lozenge tilings. Electron. J. Probab. 22(101), 24 (2017). https://doi.org/10.1214/17-EJP112. arXiv:1703.07503 [math.PR]
O’Connell, N.: A pathtransformation for random walks and the Robinson–Schensted correspondence. Trans. AMS 355(9), 3669–3697 (2003)
O’Connell, N.: Conditioned random walks and the RSK correspondence. J. Phys. A 36(12), 3049–3066 (2003)
Okounkov, A.: Infinite wedge and random partitions. Sel. Math. 7(1), 57–81 (2001). arXiv:math/9907127 [math.RT]
Okounkov, A., Reshetikhin, N.: Correlation function of Schur process with application to local geometry of a random 3dimensional Young diagram. J. AMS 16(3), 581–603 (2003). arXiv:math/0107056 [math.CO]
Petrov, L.: A twoparameter family of infinitedimensional diffusions in the Kingman simplex. Funct. Anal. Appl. 43(4), 279–296 (2009). arXiv:0708.1930 [math.PR]
Petrov, L.: The boundary of the Gelfand–Tsetlin graph: new proof of Borodin–Olshanski’s formula, and its \(q\)-analogue. Mosc. Math. J. 14(1), 121–160 (2014). arXiv:1208.3443 [math.CO]
Petrov, L., Zhang, E.: Computer simulations of dynamics on \(q\)-vol lozenge tilings inverting the parameter \(q\). https://lpetrov.cc/simulations/20190430qvol/ (2019)
Povolotsky, A.: On integrability of zero-range chipping models with factorized steady state. J. Phys. A 46, 465205 (2013). arXiv:1308.3250 [math-ph]
Quastel, J., Spohn, H.: The one-dimensional KPZ equation and its universality class. J. Stat. Phys. 160(4), 965–984 (2015). arXiv:1503.06185 [math-ph]
Rákos, A., Schütz, G.: Current distribution and random matrix ensembles for an integrable asymmetric fragmentation process. J. Stat. Phys. 118(3–4), 511–530 (2005). arXiv:cond-mat/0405464 [cond-mat.stat-mech]
Reshetikhin, N.: Lectures on the integrability of the 6-vertex model. In: Exact Methods in Low-Dimensional Statistical Physics and Quantum Computing, pp. 197–266. Oxford Univ. Press (2010). arXiv:1010.5031 [math-ph]
Romik, D.: The Surprising Mathematics of Longest Increasing Subsequences. Cambridge University Press, Cambridge (2015)
Rost, H.: Nonequilibrium behaviour of a many particle process: density profile and local equilibria. Z. Wahrsch. Verw. Gebiete 58(1), 41–53 (1981). https://doi.org/10.1007/BF00536194
Sasamoto, T., Wadati, M.: Exact results for onedimensional totally asymmetric diffusion models. J. Phys. A 31, 6057–6071 (1998)
Seppäläinen, T.: Existence of hydrodynamics for the totally asymmetric simple Kexclusion process. Ann. Probab. 27(1), 361–415 (1999)
Sheffield, S.: Random surfaces. Astérisque 304 (2005). arXiv:math/0304049 [math.PR]
Spitzer, F.: Interaction of Markov processes. Adv. Math. 5(2), 246–290 (1970)
Spohn, H.: KPZ scaling theory and the semidiscrete directed polymer model (2012). arXiv:1201.0645 [cond-mat.stat-mech]
Takeyama, Y.: A deformation of affine Hecke algebra and integrable stochastic particle system. J. Phys. A 47(46), 465203 (2014). arXiv:1407.1960 [mathph]
Toninelli, F.: A \((2 + 1)\)dimensional growth process with explicit stationary measures. Ann. Probab. 45(5), 2899–2940 (2017). arXiv:1503.05339 [math.PR]
Vershik, A., Kerov, S.: Asymptotic theory of the characters of the symmetric group. Funktsional. Anal. i Prilozhen. 15(4), 15–27, 96 (1981)
Vershik, A., Kerov, S.: The characters of the infinite symmetric group and probability properties of the Robinson–Schensted–Knuth algorithm. SIAM J. Algebr. Discrete Math. 7(1), 116–124 (1986)
Acknowledgements
We are grateful to Alexei Borodin, Evgeni Dimitrov, Patrik Ferrari, Vadim Gorin, Matthew Nicoletti, Grigori Olshanski, Dan Romik, Tomohiro Sasamoto, Mykhaylo Shkolnikov, and Fabio Toninelli for helpful remarks. LP is grateful to the organizers of the workshop “Asymptotic Algebraic Combinatorics” and the support of the Banff International Research Station where a part of this work was done. Both authors were partially supported by the National Science Foundation grant DMS-1664617.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Petrov, L., Saenz, A. Mapping TASEP back in time. Probab. Theory Relat. Fields 182, 481–530 (2022). https://doi.org/10.1007/s00440-021-01074-0
Keywords
 Totally asymmetric simple exclusion process
 Hammersley process
 Schur processes
 Stationary dynamics
Mathematics Subject Classification
 60K35
 05E05
 60J60
 82C22
 82B23