1 Introduction

1.1 Background

The Robinson-Schensted-Knuth (RSK) correspondence [33], the Burge correspondence [20], and the Schützenberger involution [41] are celebrated combinatorial bijections, classically described in terms of operations on (generalized) permutations, integer matrices, words, and Young tableaux. These correspondences play a fundamental role in algebraic combinatorics, especially in the theory of symmetric functions. See [27, 45] for more details on these correspondences.

The RSK correspondence is classically described as a map between non-negative integer matrices and pairs of semistandard Young tableaux of the same shape, through a row insertion algorithm. It can be used to prove various Cauchy-Littlewood identities, thus connecting to Schur functions [45]. The RSK map and Schur functions underpin the solvability of various interconnected probabilistic models such as longest increasing subsequences in random permutations, directed last passage percolation, the corner growth model, and the totally asymmetric simple exclusion process – see the seminal works [1, 2, 31].

Fomin [26] and Roby [39] first expressed the RSK correspondence in terms of local growth rules. Berenstein and Kirillov [9] described it explicitly in terms of piecewise linear functions, i.e. operations in the \((\max ,+)\)-semiring, thus allowing the extension to input matrices with real (not necessarily integer) entries. Furthermore, such a piecewise linear description naturally lends itself to an extension to generic Young-diagram-shaped input arrays (not necessarily rectangular). The latter aspect proved useful in the study of last passage percolation models with point-to-line and point-to-half-line path geometries and/or various symmetries on the input weights. In particular, Bisi and Zygouras [11, 14, 15] found new exact formulas for such models in terms of all the irreducible characters of the classical groups (e.g. symplectic and orthogonal characters), which complemented the Schur measures of Baik and Rains [2].

The Burge correspondence is, in the classical combinatorial description, a variant of the RSK mapping that bijectively transforms a non-negative integer matrix into a pair of semistandard Young tableaux of the same shape, through a column insertion algorithm. Its description in terms of piecewise linear functions is due to van Leeuwen [46] – see also [10, 34]. The Burge correspondence, analogously to RSK, can be used to study last passage percolation models; for recent applications, see [10, 12].

The Schützenberger involution is classically described as an evacuation algorithm, or equivalently a sequence of jeu de taquin operations on a (semi)standard Young tableau. It can be shown to be indeed an involution and to preserve the shape of the tableau. It can be alternatively described as a piecewise linear function on Gelfand-Tsetlin patterns [8] and, as such, extended to input patterns with real entries. The Schützenberger involution and jeu de taquin have also proven to be useful tools in combinatorial probability – see e.g. [40].

Considering the piecewise linear description of the above bijections, one may formally replace the operations \((\max ,+)\) of the “tropical” semiring with the operations \((+,\times )\) of the usual algebra. Following [38], we call such a procedure geometric lifting and the resulting bijections geometric, from the theory of geometric crystals [7]. In particular, the geometric RSK correspondence is a birational mapping on matrices with positive real entries, while the geometric Schützenberger involution is a birational involution on triangular (more generally, trapezoidal) arrays with positive real entries. They were both introduced by Kirillov [32] and further studied by Noumi and Yamada [37].

The geometric RSK correspondence and Whittaker functions, together, explain the solvability of certain \((1+1)\)-dimensional models of random directed polymers in a random environment. These statistical physics models were introduced in [30] and have been the object of intense research over the past thirty years – see [22] for a recent review. By directed lattice path of length \(n-1\) we mean any sequence \(\varvec{\pi }=(\pi (1),\dots ,\pi (n))\in ({\mathbb {N}}^2)^n\) such that \(\Vert \pi (i+1) - \pi (i) \Vert _1 =1\) for \(1\le i\le n-1\) and \(\Vert \pi (n) - \pi (1) \Vert _1 = n-1\); only two consecutive directions are thus allowed for the whole path, for example south and east (see Fig. 1). Denote by \(\Pi _{m,n}\) the set of all directed lattice paths from (1, 1) to \((m,n)\in {\mathbb {N}}^2\), and let \((W_{i,j})_{(i,j)\in {\mathbb {N}}^2}\) be a field of positive independent random weights, known as random environment. We define on \(\Pi _{m,n}\) the quenched probability, known as point-to-point polymer measure,

$$\begin{aligned} {\mathcal {P}}_{m,n}(\varvec{\pi }) := \frac{1}{Z_{m,n}} \prod _{(i,j)\in \varvec{\pi }} W_{i,j}&\text {for } \varvec{\pi } \in \Pi _{m,n} \, , \end{aligned}$$
(1.1)

where the normalizing point-to-point polymer partition function is

$$\begin{aligned} Z_{m,n} := \sum _{\varvec{\pi } \in \Pi _{m,n}} \prod _{(i,j)\in \varvec{\pi }} W_{i,j} \, . \end{aligned}$$
(1.2)
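
As a computational aside, the partition function (1.2) satisfies the recursion \(Z_{i,j} = W_{i,j}\,(Z_{i-1,j} + Z_{i,j-1})\) with \(Z_{1,1}=W_{1,1}\). The following minimal Python sketch (function names and the test environment are ours, not from the text) cross-checks the recursion against brute-force enumeration of all directed paths:

```python
from itertools import combinations

def Z_point_to_point(W):
    """Compute Z_{m,n} of (1.2) via the recursion
    Z_{i,j} = W_{i,j} * (Z_{i-1,j} + Z_{i,j-1}), with Z_{1,1} = W_{1,1}."""
    m, n = len(W), len(W[0])
    Z = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            if i == 0 and j == 0:
                Z[i][j] = W[0][0]
            else:
                Z[i][j] = W[i][j] * ((Z[i-1][j] if i else 0.0)
                                     + (Z[i][j-1] if j else 0.0))
    return Z[m-1][n-1]

def Z_brute_force(W):
    """Sum over all directed lattice paths from (1,1) to (m,n)."""
    m, n = len(W), len(W[0])
    total = 0.0
    # a path is determined by which of its m+n-2 steps go south
    for downs in combinations(range(m + n - 2), m - 1):
        i = j = 0
        weight = W[0][0]
        for s in range(m + n - 2):
            if s in downs:
                i += 1
            else:
                j += 1
            weight *= W[i][j]
        total += weight
    return total

W = [[1.0, 2.0, 0.5],
     [0.5, 1.0, 2.0],
     [2.0, 0.5, 1.0]]
```

The recursion evaluates \(Z_{m,n}\) in \(O(mn)\) operations, whereas the number of paths grows as \(\binom{m+n-2}{m-1}\).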

We remark that, in the statistical physics literature, the directed path \(\varvec{\pi }\) is often viewed as the trajectory of a \((1+1)\)-dimensional simple walk and the random variables \(W_{i,j}\) are viewed as Gibbs weights, i.e. exponential functions of a random energy divided by the temperature of the system.

If one applies the geometric RSK map to the (random) input matrix \((W_{i,j})_{1\le i\le m, \, 1\le j\le n}\), it turns out that the output matrix contains the partition function (1.2) as its \((m,n)\) entry. This is a general fact. Furthermore, for some specific choices of the random environment, the properties of the geometric RSK map also permit finding exact formulas for the distribution of the partition function. The most studied exactly solvable polymer model is known as the log-gamma polymer and was introduced in [42]. In this model, all the weights follow an inverse-gamma law (so that the corresponding Gibbs weights are log-gamma distributed).

Fig. 1

On the left-hand figure, a directed lattice path from (1, 1) to \((n,n)\), with \(n=10\). The dotted antidiagonal line \(\{i+j = n+1\}\) divides the path into two parts, drawn as a solid red line and a dashed blue line respectively. Any such path can be bijectively mapped to a pair of paths of length \(n-1\) constrained to have the same endpoint. This is illustrated on the right-hand figure, where the blue path is simply reflected about the antidiagonal (color figure online)

Corwin, O’Connell, Seppäläinen, and Zygouras [23] linked the distribution of the log-gamma polymer partition function to Whittaker functions, in their integral formulation given by Givental [29]. Their argument was based on the connection between the geometric RSK correspondence and \({{\,\mathrm{GL}\,}}_n({\mathbb {R}})\)-Whittaker functions, analogous to the well-known relationship between the RSK map and Schur functions. The analog of the Cauchy-Littlewood identity in this setting turned out to be a certain Whittaker integral identity due to Bump [18] and Stade [44].

Subsequently, O’Connell, Seppäläinen, and Zygouras [38] provided a new description of the geometric RSK as a composition of several local birational maps on the entries of the input matrix, deducing that the geometric RSK is volume preserving in log-log variables. Moreover, they initiated the study of the geometric RSK map in the presence of symmetry constraints, analyzing the corresponding polymer models and Whittaker measures. This program drew inspiration from the work of Baik and Rains [2] on RSK with symmetries, last passage percolation, and Schur measures, and aimed at studying their non-determinantal analogs in the polymer setting. In particular, [38] focused on symmetric input matrices, i.e. such that \(W_{i,j}=W_{j,i}\) for all i, j, proving that the volume preserving property still holds in this setting. This allowed studying the log-gamma polymer in a symmetric environment and obtaining the distribution of its partition function as a (different) Whittaker measure. The corresponding Whittaker integral identity is equivalent to a formula for the Mellin transform of a \({{\,\mathrm{GL}\,}}_n({\mathbb {R}})\)-Whittaker function due to Stade [43].

The log-gamma polymer with point-to-line or point-to-half-line path geometries (with or without symmetry), where the polymer path has a fixed length but the endpoint is not fixed, has been analyzed by Bisi and Zygouras [13]. Denote by \(\Pi _{n}\) the set of all directed lattice paths of length \(n-1\) starting at (1, 1). Analogously to the point-to-point case, one can then define the point-to-line polymer measure

$$\begin{aligned} {\mathcal {P}}_{n}(\varvec{\pi }) := \frac{1}{Z_{n}} \prod _{(i,j)\in \varvec{\pi }} W_{i,j}&\text {for } \varvec{\pi } \in \Pi _{n} \, , \end{aligned}$$
(1.3)

where the normalizing point-to-line polymer partition function is

$$\begin{aligned} Z_{n} := \sum _{\varvec{\pi } \in \Pi _{n}} \prod _{(i,j)\in \varvec{\pi }} W_{i,j} \, . \end{aligned}$$
(1.4)
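
The point-to-line partition function (1.4) decomposes as \(Z_n = \sum _{a+b=n+1} Z_{a,b}\), since each path in \(\Pi _n\) is a point-to-point path to some endpoint on the antidiagonal \(\{a+b=n+1\}\). A short Python sketch (names and the test environment are ours) verifying this numerically:

```python
from itertools import product

def point_to_line_Z(W, n):
    """Z_n of (1.4): sum the point-to-point partition functions Z_{a,b}
    over all endpoints on the antidiagonal a + b = n + 1 (0-based: i + j = n - 1)."""
    Z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            prev = (Z[i-1][j] if i else 0.0) + (Z[i][j-1] if j else 0.0)
            Z[i][j] = W[i][j] * (prev if (i or j) else 1.0)
    return sum(Z[i][n-1-i] for i in range(n))

def point_to_line_brute_force(W, n):
    """Sum over all 2^(n-1) directed paths of length n-1 starting at (1,1)."""
    total = 0.0
    for steps in product((0, 1), repeat=n - 1):   # 1 = south step, 0 = east step
        i = j = 0
        weight = W[0][0]
        for s in steps:
            i += s
            j += 1 - s
            weight *= W[i][j]
        total += weight
    return total

n = 4
W = [[((3*i + 5*j) % 7 + 1) / 4.0 for j in range(n)] for i in range(n)]
```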

The main contribution of [13] was to express the law of \(Z_n\) in terms of \(\mathrm {SO}_{2n+1}({\mathbb {R}})\)-Whittaker functions. Their primary tool was the geometric RSK map extended to generic polygonal (not necessarily rectangular) input arrays, already used in [36].

1.2 Contributions of this work

In this work, we continue the program initiated in [38] of studying polymer models in symmetric environments and we focus on a persymmetric environment, a case beyond the scope of the approach of [38]. Namely, we consider a weight matrix \((W_{i,j})_{1\le i,j\le n}\) that is symmetric about the antidiagonal, i.e. such that \(W_{i,j} = W_{n-j+1, n-i+1}\) for all i, j; a matrix with this property is usually called persymmetric. Notice that the point-to-point persymmetric polymer partition function can be written as

$$\begin{aligned} Z_{n,n} = \sum _{\varvec{\pi } \in \Pi _{n,n}} \prod _{(i,j)\in \varvec{\pi }} W_{i,j} = \sum _{\begin{array}{c} (a,b):\\ a+b=n+1 \end{array}} \underbrace{\left( \sum _{\varvec{\pi } \in \Pi _{a,b}} \prod _{(i,j)\in \varvec{\pi }} W'_{i,j} \right) }_{=Z'_{a,b}} \underbrace{\left( \sum _{\widetilde{\varvec{\pi }} \in \Pi _{a,b}} \prod _{(k,l)\in \widetilde{\varvec{\pi }}} W'_{k,l} \right) }_{=Z'_{a,b}} \, , \end{aligned}$$
(1.5)

where \(W'_{i,j}:= \sqrt{W_{i,j}}\) for \(i+j=n+1\) and \(W'_{i,j}:= W_{i,j}\) for \(i+j<n+1\), so that \(Z'_{a,b}\) is the point-to-point partition function with endpoint (ab) on the line \(\{a+b=n+1\}\) and associated with the modified environment \((W_{i,j}')\). The “path transformation” that justifies the identity above is illustrated in Fig. 1. Thus, remarkably, from the physical point of view, the point-to-point persymmetric polymer partition function can be interpreted as the replica partition function for two polymer paths of length \(n-1\) in the same environment \((W_{i,j}')\), constrained to coincide at the endpoint:

$$\begin{aligned} { Z^{\mathsf {repl} }_{n} } := \sum _{\begin{array}{c} (a,b):\\ a+b=n+1 \end{array}} \left( Z'_{a,b}\right) ^2 \, . \end{aligned}$$
(1.6)
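
The identity (1.5)-(1.6) can be checked numerically. The following Python sketch (names and the particular test environment are ours) builds a persymmetric matrix, applies the square-root modification on the antidiagonal, and compares \(Z_{n,n}\) with the replica sum:

```python
import math

def Z_table(W):
    """Table of point-to-point partition functions for environment W."""
    n = len(W)
    Z = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            prev = (Z[i-1][j] if i else 0.0) + (Z[i][j-1] if j else 0.0)
            Z[i][j] = W[i][j] * (prev if (i or j) else 1.0)
    return Z

n = 4
# a persymmetric environment: W[i][j] == W[n-1-j][n-1-i] (0-based indices)
base = [[(3*i + 2*j) % 5 + 1.0 for j in range(n)] for i in range(n)]
W = [[base[i][j] if i + j <= n - 1 else base[n-1-j][n-1-i] for j in range(n)]
     for i in range(n)]

Z_nn = Z_table(W)[n-1][n-1]

# modified environment W': square roots on the antidiagonal i + j = n + 1
# (0-based: i + j = n - 1); entries below the antidiagonal are unchanged
Wp = [[math.sqrt(W[i][j]) if i + j == n - 1 else W[i][j] for j in range(n)]
      for i in range(n)]
# replica partition function (1.6): sum of squares of Z'_{a,b} along the line
Z_repl = sum(Z_table(Wp)[i][n-1-i] ** 2 for i in range(n))
```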

Replicas are important observables in statistical mechanics, as they can provide insights into the properties of the models. For polymer models, replicas can shed light on localization phenomena [4,5,6, 21, 22]. As a consequence of the connection with the persymmetric polymer, the present work leads to the computation of the Laplace transform of the replica partition function (1.6), which will be expressed as an integral of special functions called Whittaker functions (more in-depth explanations will be given later on and the detailed formulas can be found in Sect. 6).

A key step in studying the distribution of the persymmetric polymer partition function is proving a volume preserving property for the geometric RSK correspondence restricted to persymmetric matrices, as we now explain. The image of a persymmetric matrix \(\varvec{w} = (w_{i,j})_{1\le i,j\le n}\) under the geometric RSK map is a matrix \(\varvec{t}= (t_{i,j})_{1\le i,j\le n}\) whose lower and upper triangular parts are Schützenberger dual of each other. This is a consequence of the fact that the geometric RSK map commutes with matrix transposition, together with Theorem 3.1. Therefore, the map \((w_{i,j})_{i+j \le n+1} \mapsto (t_{i,j})_{i\le j}\) is a bijection. Proving a volume preserving property for such a bijection would permit obtaining the distribution of the persymmetric polymer partition function as a Whittaker measure, using similar techniques as in [38]. However, as the persymmetric constraints are “non-local” with respect to the order of composition of the local maps, it is not possible to prove the desired property from the geometric RSK construction as given in [38].

Our alternative approach to the analysis of the persymmetric polymer, instead, will consist in constructing and studying what we call the geometric Burge correspondence. We define it as a sequence of local birational maps, as done in [38] for geometric RSK, via geometric lifting of the piecewise linear description of the combinatorial Burge correspondence presented (with minor differences) in [10, 34, 46]. Notice that, wherever possible, we work with generic Young-diagram-shaped arrays instead of matrices. One of our main contributions is Theorem 3.2, which links together the three geometric correspondences (RSK, Burge, and Schützenberger) via column/row mirror reflection of the input matrix. Its combinatorial version, i.e. Theorem 2.2, is a classical result. However, the approach required to prove Theorem 3.2 differs significantly from the known combinatorial proofs; in particular, we will use the description of the three geometric correspondences in terms of local maps and apply induction several times. Interestingly, our proof reduces to certain local ‘commutation relations’ – see Proposition 3.3 – that, to our knowledge, were not even known in the combinatorial setting.

Let us now connect this construction to polymer models. Define \(\Pi ^{*}_{m,n}\) to be the set of all directed lattice paths from (m, 1) to (1, n); notice that this set is “dual” to \(\Pi _{m,n}\), in the sense that its paths connect the other pair of opposite vertices of the rectangle \([1,m]\times [1,n]\). Define also the dual partition function as

$$\begin{aligned} Z^*_{m,n} := \sum _{\varvec{\pi } \in \Pi ^{*}_{m,n}} \prod _{(i,j)\in \varvec{\pi }} W_{i,j} \, . \end{aligned}$$
(1.7)

We then have that the image \(\varvec{T} = (T_{i,j})_{1\le i\le m, \, 1\le j\le n}\) of the matrix \(\varvec{W} = (W_{i,j})_{1\le i\le m, \, 1\le j\le n}\) under the geometric Burge correspondence contains the dual partition function \(Z^*_{m,n}\) as the entry \(T_{m,n}\). This will be an immediate consequence of Theorem 3.2 and of the aforementioned fact that the geometric RSK output matrix contains the “usual” partition function \(Z_{m,n}\) defined in (1.2) as its \((m,n)\) entry.

It is clear that the partition function on a persymmetric input matrix \(\varvec{W}\) coincides with the dual partition function on the symmetric input matrix obtained by reversing the rows of \(\varvec{W}\). This, along with the observations above, explains why we can use the geometric Burge correspondence to study the persymmetric polymer partition function. This approach turns out to be far more convenient, as we need to deal with symmetric (instead of persymmetric) input matrices. As all the local maps of interest are volume preserving in log-log variables, so is the geometric Burge correspondence. Furthermore, the geometric Burge correspondence, like the geometric RSK, behaves nicely when restricted to symmetric matrices: the image of a symmetric matrix is also symmetric, and the volume preserving property continues to hold almost trivially. Using also other properties of the geometric Burge correspondence that we establish along the way (either via Theorem 3.2 or via the local maps definition), we will be able to obtain the distribution of the persymmetric polymer partition function as a \({{\,\mathrm{GL}\,}}_n({\mathbb {R}})\)-Whittaker measure.
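
This row-reversal observation is easy to test numerically. In the Python sketch below (names and the test environment are ours), a persymmetric \(\varvec{W}\) is built, its rows are reversed to obtain \(\varvec{V}\), and one checks that \(\varvec{V}\) is symmetric and that the point-to-point partition function of \(\varvec{W}\) equals the dual partition function of \(\varvec{V}\):

```python
def Z_paths(W):
    """Point-to-point partition function Z_{m,n}: paths (1,1) -> (m,n)."""
    m, n = len(W), len(W[0])
    Z = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            prev = (Z[i-1][j] if i else 0.0) + (Z[i][j-1] if j else 0.0)
            Z[i][j] = W[i][j] * (prev if (i or j) else 1.0)
    return Z[m-1][n-1]

def Z_dual(W):
    """Dual partition function Z*_{m,n} of (1.7): paths (m,1) -> (1,n),
    taking north or east steps."""
    m, n = len(W), len(W[0])
    D = [[0.0] * n for _ in range(m)]
    for i in range(m - 1, -1, -1):          # rows from bottom to top
        for j in range(n):
            prev = ((D[i+1][j] if i + 1 < m else 0.0)
                    + (D[i][j-1] if j else 0.0))
            D[i][j] = W[i][j] * (prev if not (i == m - 1 and j == 0) else 1.0)
    return D[0][n-1]

n = 4
base = [[(2*i + 3*j) % 5 + 1.0 for j in range(n)] for i in range(n)]
W = [[base[i][j] if i + j <= n - 1 else base[n-1-j][n-1-i] for j in range(n)]
     for i in range(n)]       # persymmetric environment
V = W[::-1]                   # reverse the rows of W
```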

The Whittaker measure that we find for the persymmetric polymer coincides, for a certain choice of parameters, with the one for the symmetric polymer obtained in [38]. This seems to be a highly non-trivial fact: we are not aware of a direct proof based on the definition of the polymer models. We also mention that a number of other very interesting distributional identities in integrable polymer models have been observed in recent papers: [17, 28] are based on six vertex models and Yang-Baxter equations (see also [16] for related work), whereas [24] relies on RSK methods. However, the distributional equality we have observed in this work, for polymers on symmetric input matrices, does not appear in the above works.

As we have described above, our construction of the geometric Burge correspondence allows us to connect to polymer models. We should mention that the combinatorial Burge correspondence has already been used to deal with last passage percolation models, which (in the statistical physics terminology) are ‘zero temperature degenerations’ of polymer partition functions. This is the case, e.g., in [10], where the main focus is on last passage percolation with point-to-half-line path geometry. Actually, one could argue that the Burge correspondence had also been implicitly used in [2] to provide a combinatorial bijection for the RSK map restricted to persymmetric matrices and ultimately study persymmetric last passage percolation. Our approach can thus be considered as a geometric lifting of Baik and Rains’s construction. However, in the combinatorial setting everything can be phrased in terms of the Schützenberger involution and there is no reason to give too much attention to the Burge map itself. On the other hand, in the geometric setting, in order to obtain the required volume preserving property for the geometric RSK map restricted to persymmetric matrices, one requires a much deeper understanding of the geometric Burge map itself and its relation to the geometric versions of the RSK and Schützenberger maps. This is contained in our construction via local maps together with Proposition 3.3 and Theorem 3.2. Proposition 3.3, which is one of the key ingredients used in the proof of Theorem 3.2, gives a remarkable (and seemingly non-trivial) relation satisfied by the local maps involved.

In terms of asymptotic analysis, the partition functions of several polymer models are expected to be in the KPZ universality class. Although there has recently been important progress in this regard [25, 47], this does not cover the setting of polymers with symmetries. For the latter, formal asymptotics have been achieved in [3]. We do not address such issues in the present work.

Organization of the article In Sect. 2 we introduce some notation and recap known piecewise linear descriptions of the classical combinatorial RSK, Burge, and Schützenberger bijections, to prepare the reader for the geometric lifting. In Sect. 3 we introduce the geometric Burge correspondence as a composition of local birational maps, recall analogous (known) definitions of the geometric RSK and Schützenberger maps, and explain their interconnection. In Sect. 4 we prove that the geometric Burge correspondence is volume preserving in log-log variables, as well as other useful properties. Section 5 deals with the restriction of the geometric Burge correspondence to symmetric input arrays and the specialization of its properties in this setting. In Sect. 6 we consider the persymmetric polymer (or equivalently the replica) partition function, proving that its distribution is given by a \({{\,\mathrm{GL}\,}}_n({\mathbb {R}})\)-Whittaker measure and deducing the corresponding Whittaker integral identity; we also discuss the relation to the symmetric polymer studied in [38].

2 Combinatorial bijections and their piecewise linear formulation

The RSK, Burge and Schützenberger correspondences are combinatorial bijections, classically constructed via row insertion, column insertion, and jeu de taquin operations, respectively. In this section, we give a brief and expository reminder of these bijections and their reformulation in terms of piecewise linear transformations. This will motivate their geometric lifting that we perform in Sect. 3. Besides the classical references, e.g. [27], we refer to [48] for several combinatorial aspects and interesting relations between these correspondences. We also refer to [8, 34, 37, 38, 49] for more details on the RSK correspondence and its piecewise linear formulations. Piecewise linear descriptions of the Burge correspondence (often exposed in the formalism of Fomin growth diagrams) can be found in [10, 34, 46]; in particular, the version that we present here is closer to [10]. Finally, we refer to [8] for the piecewise linear formulation of the Schützenberger involution.

Let \({\mathbb {N}}:=\{1,2,3,\dots \}\). We will view any Young diagram \(\lambda \) as the partition \((\lambda _1,\lambda _2,\dots )\) such that \(\lambda _i\) is the number of boxes in the i-th row of \(\lambda \) or, equivalently, as the index set \(\{(i,j)\in {\mathbb {N}}^2 :j\le \lambda _i\}\) of its boxes. We will say that \((m,n)\) is a border box of a Young diagram \(\lambda \) if it is the last box of the corresponding diagonal, i.e. if \((m+1,n+1) \notin \lambda \). In particular, we will call \((m,n)\) a corner box if \(\lambda \setminus \{(m,n)\}\) is a Young diagram, i.e. if none of the three boxes \((m,n+1)\), \((m+1,n)\), \((m+1,n+1)\) belongs to \(\lambda \). For example, for the partition \( (2,2,1) \equiv \{(1,1), (1,2), (2,1), (2,2), (3,1)\}\), all boxes except (1, 1) are border boxes, but only (2, 2) and (3, 1) are corner boxes. We will also denote a rectangular Young diagram by \(m\times n := \{1,\dots ,m\} \times \{1,\dots ,n\}\).
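
These definitions can be made concrete in a few lines of Python (function names are ours); running them on the partition (2, 2, 1) reproduces the border and corner boxes listed above:

```python
def boxes(shape):
    """Index set of the Young diagram given as a partition (lambda_1, lambda_2, ...)."""
    return {(i + 1, j + 1) for i, r in enumerate(shape) for j in range(r)}

def is_border_box(shape, m, n):
    """(m,n) is a border box iff (m+1, n+1) is not in the diagram."""
    return (m + 1, n + 1) not in boxes(shape)

def is_corner_box(shape, m, n):
    """(m,n) is a corner box iff none of (m,n+1), (m+1,n), (m+1,n+1) is in the diagram."""
    b = boxes(shape)
    return all(c not in b for c in [(m, n + 1), (m + 1, n), (m + 1, n + 1)])

shape = (2, 2, 1)
border = sorted(b for b in boxes(shape) if is_border_box(shape, *b))
corner = sorted(b for b in boxes(shape) if is_corner_box(shape, *b))
```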

For a given Young diagram \(\lambda \), let \({\mathbb {R}}^{\lambda }\) be the set of \(\lambda \)-shaped arrays \(P = (p_{i,j})_{(i,j)\in \lambda }\) of real numbers. If the values \(p_{i,j}\) are restricted to be positive integers and, moreover, are weakly increasing in j for any fixed i and strictly increasing in i for any fixed j, then P is called a semistandard Young tableau. A useful reparametrization of Young tableaux goes under the name of Gelfand-Tsetlin patterns. Given a Young tableau \(P=(p_{i,j})_{(i,j)\in \lambda }\), its corresponding Gelfand-Tsetlin pattern \(\varvec{u} = ( u_{i,j})_{i,j\ge 1}\) is given by

$$\begin{aligned} u_{i,j} := \#\{1\le k\le \lambda _j :p_{j,k}\le i\} \, . \end{aligned}$$
(2.1)

In words, \(u_{i,j}\) is the number of entries in the j-th row of P that are less than or equal to i. Assuming that the shape \(\lambda \) of P is of length at most m and the entries \(p_{i,j}\) are in the alphabet \(\{1,\dots ,n\}\), one can view \(\varvec{u}\) as a trapezoidal array \((u_{i,j})_{1\le i\le n, \, 1\le j\le i \wedge m}\) – the \(u_{i,j}\)’s not in this range of indices being redundant. By construction, the shape \(\lambda \) of P corresponds to the bottom row \((u_{n,1},u_{n,2},\dots )\) of \(\varvec{u}\). Moreover, it is easy to verify that the Gelfand-Tsetlin variables, as defined in (2.1), satisfy the interlacing conditions

$$\begin{aligned} u_{i+1,j+1}\le u_{i,j}\le u_{i+1,j} \, . \end{aligned}$$
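
The map (2.1) and the interlacing conditions can be illustrated in Python (names are ours). For a tableau of shape (4, 2, 1) in the alphabet \(\{1,2,3\}\), the bottom row of the pattern recovers the shape:

```python
def gt_pattern(P, n):
    """Gelfand-Tsetlin entries of (2.1): u[(i,j)] = #{entries <= i in row j of P},
    for 1 <= i <= n and 1 <= j <= min(i, number of rows of P)."""
    u = {}
    for i in range(1, n + 1):
        for j in range(1, min(i, len(P)) + 1):
            u[(i, j)] = sum(1 for x in P[j - 1] if x <= i)
    return u

# a semistandard tableau of shape (4, 2, 1) in the alphabet {1, 2, 3}
P = [[1, 1, 2, 3], [2, 3], [3]]
u = gt_pattern(P, 3)
shape = [u[(3, j)] for j in (1, 2, 3)]   # bottom row of the pattern
```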

Piecewise linear maps We collect here, for convenience, all the piecewise linear maps that represent the building blocks of the piecewise linear formulation of all our combinatorial maps (RSK, Burge, and Schützenberger). Such a formulation, though, will be introduced for each combinatorial map separately, later in this section. We will denote all the piecewise linear maps by letters with a “vee accent”, to distinguish them from the corresponding maps in the geometric setting of the next sections.

For a Young diagram \(\lambda \) and \((i,j)\in \lambda \), we define as the local maps that act on \(\varvec{w}\in {\mathbb {R}}^{\lambda }\) by only modifying \(w_{i,j}\) according to the following rules:

(2.2)
(2.3)
(2.4)

For two distinct indices \((i,j),(k,l)\in \lambda \), we also define as the local map that acts on an array \(\varvec{w}\in {\mathbb {R}}^{\lambda }\) by only modifying \(w_{i,j}\) and \(w_{k,l}\) as follows:

(2.5)

For \(i=1\) and/or \(j=1\), the values of \(w_{i-1,j}\) and \(w_{i,j-1}\) are determined by the following convention: \(w_{0,1}=w_{1,0}=0\) and \(w_{0,k}=w_{k,0}=-\infty \) for all \(k>1\). Notice that and also involve entries \(w_{i+1,j}\) and \(w_{i,j+1}\), so for these maps to be well defined we assume that \((i+1,j),(i,j+1) \in \lambda \); likewise, for we assume that \((i,j+1) \in \lambda \). It is clear that all the above local maps are bijective, but only and are involutions. Furthermore, they all satisfy several trivial commutative properties, as each modifies only one or two entries of the input array.

From now on, for any \(n\in {\mathbb {Z}}\) we will refer to the n-th diagonal of an array \(\varvec{w}\in {\mathbb {R}}^{\lambda }\) as the sequence of its entries \(w_{i,j}\) such that \(j-i=n\). Let us now define, for all \((k,l)\in \lambda \), the following diagonal maps:

(2.6)
(2.7)
(2.8)

where \(h:=k \wedge l\). The terminology “diagonal map” comes from the fact that any of these maps indexed by \((k,l)\) only modifies the \((l-k)\)-th diagonal of the input array. It is likewise clear that any two diagonal maps commute if they act on diagonals that are neither the same nor neighboring. As compositions of bijections, diagonal maps are all bijective. Furthermore, notice that is a composition of commuting involutions, hence it is itself an involution. Diagonal maps (2.6), (2.7), and (2.8) will be involved in the construction of the RSK, Burge, and Schützenberger correspondences, respectively.

The Robinson-Schensted-Knuth (RSK) correspondence This correspondence is based on an algorithm called row insertion, which we will now describe. Row inserting a positive integer i into a given semistandard Young tableau works as follows: if i is larger than or equal to all the entries of the first row of the tableau, then a new box containing i is added at the end of the first row and the procedure stops. Otherwise, i replaces the first number of the first row that is strictly larger than i. The replaced number, call it j, is now “bumped” and inserted into the second row of the tableau in the same way. The procedure continues until one of the bumped numbers is placed at the end of a row of the tableau, yielding a new semistandard Young tableau with one extra box.

Now, any word w in the alphabet \(\{1,\dots ,n\}\) can be decomposed into a sequence of m increasing words \(w_1\), ..., \(w_m\) (for some m):

$$\begin{aligned} w = \underbrace{1^{w_{1,1}} 2^{w_{1,2}} \cdots n^{w_{1,n}}}_{w_1} \underbrace{1^{w_{2,1}} 2^{w_{2,2}} \cdots n^{w_{2,n}}}_{w_2} \cdots \underbrace{1^{w_{m,1}} 2^{w_{m,2}} \cdots n^{w_{m,n}}}_{w_m} \, , \end{aligned}$$
(2.9)

where all \(w_{i,j}\)’s are non-negative integers and \(i^r\equiv i\cdots i\) denotes a sequence of r consecutive letters i. The RSK algorithm acts on a word by successively row inserting all its letters. More precisely, one starts by inserting the first letter of w into the empty tableau \(P_0=\varnothing \), thus obtaining a tableau \(P_1\); then one inserts the second letter of w into \(P_1\), obtaining a new tableau \(P_2\). The process continues in the same way until all letters of w have been inserted, thus yielding a tableau P with as many boxes as the length of w. In parallel with the P-tableau, one can construct a Q-tableau, which records the shapes of the successive sequence of (intermediate) P-tableaux after the row insertion of each increasing word \(w_k\). Namely, every time a letter of \(w_k\) is inserted into the P-tableau, thus yielding a new P-tableau with one extra box, a box containing k is also added to the Q-tableau in the same position. The RSK algorithm thus yields a bijection between a word w in the alphabet \(\{1,\dots ,n\}\) with m increasing subwords and a pair of semistandard Young tableaux \((P,Q)\) of the same shape \(\lambda \) of length at most \(m\wedge n\), such that the entries of P are in \(\{1,\dots ,n\}\) and the entries of Q are in \(\{1,\dots ,m\}\).
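
The row insertion algorithm and the construction of the pair (P, Q) can be sketched in Python (function names are ours). The sketch records Q row by row, using the fact that each insertion terminates by appending a box at the end of some row:

```python
import bisect

def row_insert(P, x):
    """Row-insert x into tableau P (a list of weakly increasing rows), in place.
    Returns the index of the row where the new box was created."""
    r = 0
    while True:
        if r == len(P):
            P.append([x])
            return r
        row = P[r]
        k = bisect.bisect_right(row, x)   # first entry strictly larger than x
        if k == len(row):
            row.append(x)
            return r
        x, row[k] = row[k], x             # bump and continue in the next row
        r += 1

def rsk(w):
    """RSK of a non-negative integer matrix w: P is built by row inserting the
    word (2.9) letter by letter; Q records, for each new box, the index k of the
    increasing word w_k that produced it."""
    P, Q = [], []
    for k, row in enumerate(w, start=1):
        for letter, mult in enumerate(row, start=1):
            for _ in range(mult):
                r = row_insert(P, letter)
                if r == len(Q):
                    Q.append([])
                Q[r].append(k)   # the new box always sits at the end of row r
    return P, Q

P, Q = rsk([[1, 0, 2], [0, 2, 0]])   # the word 1 3 3 2 2
```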

Notice that the word w is naturally encoded by a matrix \(\varvec{w}=(w_{i,j})_{1\le i\le m, \, 1\le j \le n} \in {\mathbb {Z}}_{\ge 0}^{m\times n}\). Let now \(\varvec{u}\) and \(\varvec{v}\) be the Gelfand-Tsetlin patterns that bijectively correspond to the output tableaux P and Q, respectively:

$$\begin{aligned} \varvec{u} = ( u_{i,j})_{1\le i\le n, \, 1\le j\le i \wedge m} \, , \qquad \qquad \varvec{v} = ( v_{i,j})_{1\le i\le m, \, 1\le j\le i\wedge n } \, . \end{aligned}$$
(2.10)

As P and Q are of the same shape, we can “glue” together \(\varvec{u}\) and \(\varvec{v}\) along their identical bottom rows \((u_{n,1}, u_{n,2}, \dots ) = (v_{m,1}, v_{m,2}, \dots ) = \lambda \). The result of this “gluing” is an \(m\times n\) matrix, which we denote by \(\varvec{t} = (\varvec{u} \backslash \varvec{v})\), defined through the following relations:

$$\begin{aligned} u_{i,j} := t_{m-j+1,i-j+1} \, , \qquad \qquad v_{i,j} := t_{i-j+1,n-j+1} \, . \end{aligned}$$
(2.11)

We will also say that \(\varvec{u}\) and \(\varvec{v}\) are the lower and upper triangular/trapezoidal parts of \(\varvec{t}\), respectively. For instance, when \(m<n\), we have that

[displayed array illustrating the gluing of \(\varvec{u}\) and \(\varvec{v}\) into the matrix \(\varvec{t}\)]

In this sense, the RSK correspondence can be also viewed as a map \(\varvec{w} \mapsto \varvec{t}\) between \(m\times n\) non-negative integer matrices. Notice that the pattern \(\varvec{v} := ( v_{i,j})_{1\le i\le m, \, 1\le j\le i\wedge n }\) corresponding to Q has the property that the diagonal \((v_{k,1},\dots ,v_{k,k\wedge n })\) is the shape of the tableau obtained after the row insertion of \(w_k\).

Recall that entry \(u_{i,j}\) of the Gelfand-Tsetlin pattern \(\varvec{u}\), which corresponds to tableau P, encodes the number of entries in the j-th row of P that are smaller than or equal to i. Using this fact, we will now briefly describe how the combinatorially defined RSK correspondence translates to piecewise linear transformations on Gelfand-Tsetlin patterns. Suppose that, after inserting the first \(k-1\) words, we have obtained an (intermediate) P-tableau corresponding to the Gelfand-Tsetlin pattern \(\varvec{u}=(u_{i,j})_{1\le i\le n, \, 1\le j\le i \wedge (k-1)}\). Next, row inserting the word \(w_k\) has the following effect: the number of ones in the P-tableau will increase to \(u_{1,1}'= u_{1,1}+w_{k,1}\), after the insertion of \(1^{w_{k,1}}\). The inserted ones will bump a number of twos from the first row, which will then be row inserted into the second row. The number of twos that are bumped equals \((u_{1,1}'-u_{1,1}) \wedge ( u_{2,1}-u_{1,1})= u_{1,1}' \wedge u_{2,1}-u_{1,1}\); as a result, the number of twos in the second row will become \(u_{2,2}' = u_{2,2} + (u_{1,1}'\wedge u_{2,1} - u_{1,1})\). After that, the row insertion of \(2^{w_{k,2}}\) will change \(u_{2,1}\) to \(u_{2,1}'=w_{k,2} + u_{2,1}\vee u_{1,1}'\). The row insertion of the twos leads to a bumping of threes, and the process continues in the same fashion through analogous piecewise linear transformations. Consider now the transformations defined in (2.6). The change of \(u_{1,1}\) to \(u_{1,1}'\) is encoded through the application of , while the change of \(u_{2,1}\) and \(u_{2,2}\) to \(u_{2,1}'\) and \(u_{2,2}'\), respectively, is encoded through the application of . Similarly, transformation will encode the changes of \(u_{3,1}\), \(u_{3,2}\), and \(u_{3,3}\) after the insertion of the threes and the corresponding bumping process, and so on.
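
The piecewise linear updates stated above can be tested against direct row insertion, at least for the first two diagonals and a two-letter alphabet. In the Python sketch below (function names and test parameters are ours), `insert_word_and_read_gt` performs the actual insertions, while `predicted` applies the formulas \(u_{1,1}'= u_{1,1}+w_{k,1}\), \(u_{2,2}' = u_{2,2} + (u_{1,1}'\wedge u_{2,1} - u_{1,1})\), and \(u_{2,1}'=w_{k,2} + u_{2,1}\vee u_{1,1}'\):

```python
import bisect

def row_insert(P, x):
    """Row-insert x into tableau P (list of weakly increasing rows), in place."""
    r = 0
    while True:
        if r == len(P):
            P.append([x])
            return
        row = P[r]
        k = bisect.bisect_right(row, x)
        if k == len(row):
            row.append(x)
            return
        x, row[k] = row[k], x
        r += 1

def insert_word_and_read_gt(u11, u21, u22, wk1, wk2):
    """Build the two-row tableau with Gelfand-Tsetlin coordinates u22 <= u11 <= u21
    (row 1: u11 ones then u21 - u11 twos; row 2: u22 twos), row-insert the word
    1^wk1 2^wk2, and read off the new coordinates (u11', u21', u22')."""
    P = [[1] * u11 + [2] * (u21 - u11)]
    if u22 > 0:
        P.append([2] * u22)
    for x in [1] * wk1 + [2] * wk2:
        row_insert(P, x)
    new_u11 = sum(1 for x in P[0] if x == 1)
    new_u21 = len(P[0])
    new_u22 = len(P[1]) if len(P) > 1 else 0
    return new_u11, new_u21, new_u22

def predicted(u11, u21, u22, wk1, wk2):
    """The piecewise linear update rules stated in the text."""
    v11 = u11 + wk1
    v22 = u22 + min(v11, u21) - u11
    v21 = wk2 + max(u21, v11)
    return v11, v21, v22
```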

As the expository description above suggests, the Gelfand-Tsetlin pattern \(\varvec{u}\) can be constructed by repeatedly applying maps of this type. Actually, the same happens for the pattern \(\varvec{v}\) that corresponds to the Q-tableau, as we now briefly argue. A remarkable symmetry property of the RSK algorithm is that transposing the input matrix \(\varvec{w}\) amounts to swapping the roles of the resulting Gelfand-Tsetlin patterns \(\varvec{u}\) and \(\varvec{v}\) (or, equivalently, of the P- and Q-tableaux). Therefore, \(\varvec{v}\) can be constructed via the same diagonal maps applied to the transposed matrix. We conclude that the whole output matrix \(\varvec{t} = (\varvec{u} \backslash \varvec{v})\) is obtained via repeated applications of these diagonal maps.

At this point, RSK can be extended in two natural ways: firstly, it can be seen as a bijective map between arrays indexed by a Young diagram (the case of matrices thus corresponding to rectangular Young diagrams); secondly, it can be seen as a map between arrays with real entries instead of non-negative integer entries (as the piecewise linear maps are still well defined). In this general framework, the RSK correspondence on real \(\lambda \)-shaped arrays is defined by

(2.13)

where \(((i_1,j_1),\dots ,(i_n,j_n))\) is any sequence of distinct boxes of \(\lambda \) such that, for all \(1\le k\le n\), \(\lambda ^{(k)} := \{(i_1,j_1),\dots , (i_k,j_k)\}\) is a Young diagram, and \(\lambda ^{(n)} = \lambda \). Notice that there may be several choices of such a sequence, which all lead to equivalent definitions of the map, thanks to the commutative properties of the diagonal maps.

Even though the map is defined for \(\lambda \)-shaped arrays, it may also be applied to \(\mu \)-shaped arrays, for any \(\mu \supseteq \lambda \), by acting on the \(\lambda \)-part of the diagram and leaving all entries indexed by \(\mu / \lambda \) unchanged; in such a case, we do not use the simplified notation, to avoid ambiguity.

Finally, let us highlight that the RSK correspondence features a simple recursive definition in terms of corner boxes: if \((m,n)\) is a corner box of \(\lambda \), then

where by convention \(\mathsf {K} ^{\varnothing }={{\,\mathrm{id}\,}}\).

The Burge correspondence This correspondence is based on an algorithm called column insertion, in a sense “dual” to row insertion. To column insert a positive integer i into a given semistandard Young tableau, one proceeds as follows. If i is strictly larger than all the entries of the first column of the tableau, then a new box containing i is added at the end of the first column and the procedure stops. Otherwise, i replaces the first number of the first column that is larger than or equal to i. The replaced number, call it j, is now “bumped” and inserted into the second column of the tableau in the same way. The procedure continues until one of the bumped numbers is placed at the end of a column of the tableau, yielding a new semistandard Young tableau with one extra box (it is fairly easy to check that the insertion preserves the strict monotonicity of columns and the weak monotonicity of rows).

The Burge correspondence uses column insertion (instead of the row insertion in the RSK correspondence) to map a word w, as in (2.9), onto a pair of Young tableaux \((P, Q)\) of the same shape. A second difference from the RSK correspondence is that the letters of each increasing word \(w_k\) are read in reverse order; namely, the Burge correspondence successively column inserts the letters of w, reading them as follows:

$$\begin{aligned} \underbrace{n^{w_{1,n}} \cdots 2^{w_{1,2}} 1^{w_{1,1}}}_{w_1 \text { reverse}} \underbrace{n^{w_{2,n}}\cdots 2^{w_{2,2}} 1^{w_{2,1}}}_{w_2 \text { reverse}} \cdots \underbrace{n^{w_{m,n}}\cdots 2^{w_{m,2}} 1^{w_{m,1}}}_{w_m \text { reverse}} \, . \end{aligned}$$
(2.14)

To obtain the P-tableau, one constructs a sequence \((\varnothing =P_0, P_1, P_2, \dots , P_L = P)\) of L intermediate tableaux (where \(L=\sum _{i,j} w_{i,j}\) is the length of the word w) that starts from the empty tableau and ends at the final P-tableau: for \(1\le l\le L\), \(P_l\) is obtained from \(P_{l-1}\) by column inserting the l-th letter of (2.14) into \(P_{l-1}\). Finally, similarly to RSK, the Q-tableau in the Burge correspondence records the sequence of shapes of the intermediate P-tableaux after the column insertion of each increasing word \(w_k\) (read in reverse order).
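The column insertion algorithm just described can be implemented directly. In the sketch below (our own encoding), a tableau is stored as the list of its columns, each strictly increasing from top to bottom:

```python
def column_insert(cols, x):
    """Column insert the positive integer x into a semistandard tableau,
    given as a list of strictly increasing columns; modifies cols in place."""
    j = 0
    while True:
        if j == len(cols):          # ran past the last column: start a new one
            cols.append([x])
            return
        col = cols[j]
        k = next((idx for idx, v in enumerate(col) if v >= x), None)
        if k is None:               # x larger than every entry: extend the column
            col.append(x)
            return
        col[k], x = x, col[k]       # x replaces the first entry >= x; bump it
        j += 1
```

For instance, column inserting the letters 2, 1, 2, 1 (the reading (2.14) of the \(2\times 2\) matrix with all entries equal to 1) produces the tableau with rows (1, 1, 2) and (2).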

Similarly to RSK, the Burge correspondence can be equivalently viewed as a bijection between a matrix \(\varvec{w}=(w_{i,j})_{1\le i \le m,\, 1\le j\le n}\in {\mathbb {Z}}_{\ge 0}^{m\times n}\) and a pair \((\varvec{u},\varvec{v})\) of Gelfand-Tsetlin patterns (2.10) with the same bottom row. Again, the two patterns can be glued together into a matrix \(\varvec{t} = (\varvec{u} \backslash \varvec{v})\) as in (2.11)-(2.12), in order to view the Burge correspondence as a map between \(m\times n\) non-negative integer matrices.

Let us now briefly describe how the combinatorial description of the Burge correspondence can be viewed as a sequence of piecewise linear transformations on Gelfand-Tsetlin patterns. Suppose that we have inserted the first \(k-1\) words, thus obtaining an intermediate P-tableau corresponding to a Gelfand-Tsetlin pattern \(\varvec{u}=(u_{i,j})_{1\le i\le n, \, 1\le j\le i\wedge (k-1)}\). We now column insert word \(w_k\), reading it in reverse order, i.e. as \(n^{w_{k,n}}\cdots 2^{w_{k,2}}1^{w_{k,1}}\). To convey the main idea in the simplest case, let us assume that \(n=2\), so that the intermediate P-tableau has at most two rows, filled with ones and twos only. When we column insert the first few twos, we start filling with them the \(u_{1,1} - u_{2,2}\) “free spaces” in the second row of P. However, we cannot insert more than \(u_{1,1} - u_{2,2}\) twos in the second row of P, otherwise the strict monotonicity of the columns would be violated; hence, if at some point we run out of such “free spaces”, the extra \(w_{k,2} - w_{k,2} \wedge (u_{1,1} - u_{2,2})\) twos will end up in the first row. As a result, on the one hand the number \(u_{2,2}\) of twos in the second row changes to \(u_{2,2}'= (w_{k,2} + u_{2,2}) \wedge u_{1,1}\); on the other hand, the total length \(u_{2,1}\) of the first row of P increases to \(u'_{2,1} = u_{2,1} + (w_{k,2} - w_{k,2} \wedge (u_{1,1} - u_{2,2}))\). After the twos have been inserted, we column insert the \(w_{k,1}\) ones, placing them all in the first row. This has the effect of increasing by \(w_{k,1}\) both the number of ones in the first row and the total length of the first row. Namely, \(u_{1,1}\) changes to \(u_{1,1}' = u_{1,1} + w_{k,1}\) and \(u'_{2,1}\) changes to the final value \(u_{2,1}'' = u'_{2,1}+ w_{k,1} = u_{2,1} + w_{k,2} - w_{k,2} \wedge (u_{1,1} - u_{2,2}) + w_{k,1}\). 
The change of \(u_{1,1}\) to \(u_{1,1}'\) represents the action of the transformation from (2.4) and (2.8), whereas the change of \((u_{2,1}, u_{2,2})\) to \((u_{2,1}'', u_{2,2}')\) can be viewed as the action of the transformation from (2.4), (2.5), and (2.8) (the index k refers to the fact that the k-th increasing word \(w_k\) is being inserted).
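For \(n=2\), these update rules read as follows in code (a sketch with our own naming, mirroring the piecewise linear formulas above):

```python
def burge_insert_word(u, w):
    """Piecewise linear effect of column inserting 2^{w2} 1^{w1} (the word
    w_k read in reverse) on a two-row Gelfand-Tsetlin pattern
    u = (u11, u21, u22), following the update rules described above."""
    u11, u21, u22 = u
    w1, w2 = w
    u22_new = min(w2 + u22, u11)                  # twos fill the u11 - u22 free spaces
    u21_new = u21 + w2 - min(w2, u11 - u22) + w1  # overflowing twos + ones extend row 1
    u11_new = u11 + w1                            # finally insert the w1 ones
    return (u11_new, u21_new, u22_new)
```

Starting from the empty pattern and inserting the words \((2,1)\) and \((1,2)\) reproduces the patterns of the tableaux obtained by direct column insertion.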

The construction of \(\varvec{u}\) can thus be achieved via repeated applications of such diagonal maps. As for RSK, a symmetry property holds for the Burge correspondence as well: if \(\varvec{w}\) is mapped onto \((\varvec{u},\varvec{v})\), then the transpose of \(\varvec{w}\) is mapped onto \((\varvec{v},\varvec{u})\). Therefore, \(\varvec{v}\) can be constructed by applying the same diagonal maps to the transposed matrix. We conclude that the whole output matrix \(\varvec{t} = (\varvec{u} \backslash \varvec{v})\) is obtained via repeated applications of these diagonal maps.

Thanks to its piecewise linear description, the Burge correspondence can also be extended to a bijection between Young-diagram-shaped arrays with real entries. In this general framework, it is defined as the map given by

(2.15)

where \(((i_1,j_1),\dots ,(i_n,j_n))\) is any sequence of distinct boxes of \(\lambda \) such that, for all \(1\le k\le n\), \(\lambda ^{(k)} := \{(i_1,j_1),\dots , (i_k,j_k)\}\) is a Young diagram, and \(\lambda ^{(n)} = \lambda \). As in RSK, even though the map is defined for \(\lambda \)-shaped arrays, it may also be applied to \(\mu \)-shaped arrays, for any \(\mu \supseteq \lambda \), by acting on the \(\lambda \)-part of the diagram and leaving all entries indexed by \(\mu / \lambda \) unchanged; in such a case, we do not use the simplified notation, to avoid ambiguity.

Finally, we may rephrase the definition (2.15) of the Burge correspondence in a recursive fashion: if \((m,n)\) is a corner box of \(\lambda \), then

where by convention \( \mathsf {B} ^{\varnothing } = {{\,\mathrm{id}\,}}\).

The Schützenberger involution For the sake of conciseness, we do not discuss the classical combinatorial construction of this correspondence, but rather provide its piecewise linear description straightaway. Recalling the definition of the diagonal maps from (2.7), let us define the map, acting on \(m\times n\) matrices, given by

(2.16)

It is easy to see that this map is an involution, using the commutative properties of the diagonal maps and the fact that each of them is an involution.

Denoting by \(\lambda '\) the conjugate partition of a partition \(\lambda \), define the transposition map \(\mathsf {T} : {\mathbb {R}}^{\lambda } \rightarrow {\mathbb {R}}^{\lambda '}\), \(\varvec{w} \mapsto \varvec{w}^{\mathsf {T} }\), by setting \(w^{\mathsf {T} }_{i,j} := w_{j,i}\) for all \((i,j)\in \lambda '\). By definition of the diagonal maps, the involution above only acts on the triangular/trapezoidal lower part \(\varvec{u}\) of the input matrix \(\varvec{t} = (\varvec{u} \backslash \varvec{v})\), preserving the \((n-m)\)-th diagonal; it thus replaces the lower part \(\varvec{u}\) with another triangular/trapezoidal array, leaving \(\varvec{v}\) unchanged. Likewise, its conjugation by the transposition \(\mathsf {T} \) only acts on the upper part \(\varvec{v}\), replacing it with another triangular/trapezoidal array and leaving \(\varvec{u}\) unchanged.

In the case of \(\varvec{u}\) being a Gelfand-Tsetlin pattern (equivalently, its corresponding Young tableau), these maps coincide with the classical Schützenberger involution defined via jeu de taquin operations – see e.g. [27, 45]. By extension, we will therefore refer to these maps on real matrices as the Schützenberger involution on the lower and upper part, respectively.

Let us now define the involutions that reverse, respectively, the rows and the columns of an \(m\times n\) matrix:

$$\begin{aligned}&\mathsf {R} :{\mathbb {R}}^{m\times n} \rightarrow {\mathbb {R}}^{m\times n} \, ,&\varvec{w} \mapsto \varvec{w}^{\mathsf {R} } \, ,&w^\mathsf {R} _{i,j} = w_{m-i+1,j} \, , \end{aligned}$$
(2.17)
$$\begin{aligned}&\mathsf {C} :{\mathbb {R}}^{m\times n} \rightarrow {\mathbb {R}}^{m\times n} \, ,&\varvec{w} \mapsto \varvec{w}^{\mathsf {C} } \, ,&w^\mathsf {C} _{i,j} = w_{i, n-j+1} \, , \end{aligned}$$
(2.18)

for \(1\le i\le m\) and \(1\le j\le n\).

The following theorems relate the RSK, Burge, and Schützenberger correspondences through row and/or column reversion of the input matrix. As we will see in the next section, they all admit a geometric lifting.

Theorem 2.1

([27, Appendix A.1])

Let \(\varvec{w}\in {\mathbb {R}}^{m\times n}\). The following diagram commutes:

figure b

Theorem 2.2

( [27, Appendix A.4.1])

Let \(\varvec{w}\in {\mathbb {R}}^{m\times n}\). The following diagrams commute:

figure c

The above theorems are usually stated in the classical combinatorial context of input matrices with non-negative integer entries [27]. The even more special case of permutation matrices corresponds to the following fact: column inserting the elements \(\sigma (n),\dots ,\sigma (1)\) of a permutation \(\sigma \) in reverse order gives the same P-tableau as row inserting \(\sigma (1),\dots ,\sigma (n)\) in the standard order, and, as recording tableau, the Schützenberger dual of the Q-tableau.

3 Geometric Burge, RSK, and Schützenberger correspondences

In this section we perform the geometric lifting of the piecewise linear bijections introduced in Sect. 2: namely, we formally replace the “tropical operations” \((\vee , \wedge , +, -)\) with the “usual four operations” \((+,-,\times , \div )\). This will lead to the definition of the corresponding birational maps on polygonal arrays with positive real entries, in terms of local maps on the entries. The geometric RSK correspondence was first studied in [32, 37], but our description in terms of local maps is due to [38] (in the case of rectangular input arrays); the geometric Schützenberger involution has been discussed in [32, 37]; finally, to the best of our knowledge, the geometric Burge correspondence has not been considered before. Notice that some confusion might arise from the fact that, in some of the above references, the geometric lifting is called tropical image instead. We will also prove the geometric analogs of Theorems 2.1 and 2.2, a result that links together all three geometric correspondences.

For a given Young diagram \(\lambda \), let \({\mathbb {R}}_{>0}^{\lambda }\) be the set of \(\lambda \)-shaped arrays of positive real numbers. For \((i,j)\in \lambda \), we define \(\mathsf {a} _{i,j}, \mathsf {b} _{i,j}, \mathsf {c} _{i,j} :{\mathbb {R}}_{>0}^{\lambda } \rightarrow {\mathbb {R}}_{>0}^{\lambda }\) as the local maps that act on \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\) by only modifying \(w_{i,j}\) according to the following rules:

$$\begin{aligned}&\mathsf {a} _{i,j}:w_{i,j} \longmapsto \frac{1}{w_{i,j}}(w_{i-1,j} + w_{i,j-1}) \left( \frac{1}{w_{i+1,j}} + \frac{1}{w_{i,j+1}} \right) ^{-1} \, , \end{aligned}$$
(2.1)
$$\begin{aligned}&\mathsf {b} _{i,j}:w_{i,j} \longmapsto \frac{1}{w_{i,j}}(w_{i-1,j} + w_{i,j-1}) w_{i,j+1} \, , \end{aligned}$$
(2.2)
$$\begin{aligned}&\mathsf {c} _{i,j}:w_{i,j} \longmapsto w_{i,j} (w_{i-1,j} + w_{i,j-1}) \, . \end{aligned}$$
(2.3)

For two distinct indices \((i,j),(k,l)\in \lambda \), we also define \(\mathsf {d} ^{k,l}_{i,j} :{\mathbb {R}}_{>0}^{\lambda } \rightarrow {\mathbb {R}}_{>0}^{\lambda }\) as the local map that acts on an array \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\) by only modifying \(w_{i,j}\) and \(w_{k,l}\) as follows:

$$\begin{aligned} \mathsf {d} ^{k,l}_{i,j}:{\left\{ \begin{array}{ll} w_{i,j} \longmapsto \left( \dfrac{1}{w_{i,j}} + \dfrac{1}{w_{k,l} (w_{i-1,j} + w_{i,j-1})} \right) ^{-1} \, , \\ w_{k,l} \longmapsto \left( \dfrac{w_{k,l}(w_{i-1,j} + w_{i,j-1})}{w_{i,j}^2} + \dfrac{1}{w_{i,j}} \right) \left( \dfrac{1}{w_{i+1,j}} + \dfrac{1}{w_{i,j+1}} \right) ^{-1} \, . \end{array}\right. } \end{aligned}$$
(2.4)

For \(i=1\) and/or \(j=1\), the values of \(w_{i-1,j}\) and \(w_{i,j-1}\) are determined by the following convention: \(w_{0,1}=w_{1,0}=1/2\) and \(w_{0,k}=w_{k,0}=0\) for all \(k>1\). For \(\mathsf {a} _{i,j}\), \(\mathsf {b} _{i,j}\), and \(\mathsf {d} _{i,j}^{k,l}\) to be well-defined, \((i+1,j)\) and/or \((i,j+1)\) must be boxes of \(\lambda \). It will also be useful to define the map \(\mathsf {e} _{i,j}^{k,l}\), which acts on a \(\lambda \)-shaped array \(\varvec{w}\), with \((i,j),(k,l)\in \lambda \), by exchanging \(w_{i,j}\) with \(w_{k,l}\):

$$\begin{aligned} \mathsf {e} ^{k,l}_{i,j}:{\left\{ \begin{array}{ll} w_{i,j} \longmapsto w_{k,l} \, , \\ w_{k,l} \longmapsto w_{i,j} \, . \end{array}\right. } \end{aligned}$$
(2.5)

All these local maps are bijective, but only \(\mathsf {a} _{i,j}\), \(\mathsf {b} _{i,j}\), and \(\mathsf {e} _{i,j}^{k,l}\) are involutions. As in the “tropical” case, they all satisfy obvious commutative properties, due to their local action on the entries of the input array. For example, local maps of type (2.1)–(2.3) commute whenever the subscripts are not nearest neighbors in \({\mathbb {N}}^2\).
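The local maps (2.1)–(2.4) translate directly into code. In the following sketch (our own encoding: an array is a dictionary keyed by 1-based box coordinates), the boundary convention is built into a helper:

```python
def bnd(w, i, j):
    """Entry of the array w, with the boundary convention
    w[0,1] = w[1,0] = 1/2 and all other boundary entries equal to 0."""
    if (i, j) in w:
        return w[(i, j)]
    return 0.5 if (i, j) in ((0, 1), (1, 0)) else 0.0

def amap(w, i, j):  # local map (2.1)
    w[(i, j)] = (bnd(w, i-1, j) + bnd(w, i, j-1)) / (
        w[(i, j)] * (1 / w[(i+1, j)] + 1 / w[(i, j+1)]))

def bmap(w, i, j):  # local map (2.2)
    w[(i, j)] = (bnd(w, i-1, j) + bnd(w, i, j-1)) * w[(i, j+1)] / w[(i, j)]

def cmap(w, i, j):  # local map (2.3)
    w[(i, j)] *= bnd(w, i-1, j) + bnd(w, i, j-1)

def dmap(w, i, j, k, l):  # local map (2.4): updates w[i,j] and w[k,l] together
    s = bnd(w, i-1, j) + bnd(w, i, j-1)
    x, y = w[(i, j)], w[(k, l)]
    w[(i, j)] = 1 / (1 / x + 1 / (y * s))
    w[(k, l)] = (y * s / x**2 + 1 / x) / (1 / w[(i+1, j)] + 1 / w[(i, j+1)])
```

Applying \(\mathsf {a} _{1,1}\) or \(\mathsf {b} _{2,1}\) twice to a \(2\times 2\) array returns the original entries, in line with the involution property stated above.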

Let us now define, for all \((k,l)\in \lambda \), the following diagonal maps:

$$\begin{aligned} \varrho _{k,l}&:= \mathsf {a} _{k-h+1,l-h+1} \circ \mathsf {a} _{k-h+2,l-h+2} \circ \dots \circ \mathsf {a} _{k-1,l-1} \circ \mathsf {c} _{k,l} \, , \end{aligned}$$
(2.6)
$$\begin{aligned} \sigma _{k,l}&:= \mathsf {a} _{k-h+1,l-h+1} \circ \mathsf {a} _{k-h+2,l-h+2} \circ \dots \circ \mathsf {a} _{k-1,l-1} \circ \mathsf {b} _{k,l} \, , \end{aligned}$$
(2.7)
$$\begin{aligned} \tau _{k,l}&:= \mathsf {c} _{k,l} \circ \mathsf {d} ^{k,l}_{k-1,l-1} \circ \dots \circ \mathsf {d} ^{k,l}_{k-h+2,l-h+2} \circ \mathsf {d} ^{k,l}_{k-h+1,l-h+1} \, , \end{aligned}$$
(2.8)

where \(h:=k \wedge l\). All these maps are bijective, and \(\sigma _{k,l}\) is also an involution. Any two diagonal maps commute if they act on diagonals that are neither the same nor neighboring.

We can now define the geometric Robinson-Schensted-Knuth (RSK) correspondence \(\mathsf {K} = \mathsf {K} ^{\lambda }\) and the geometric Burge correspondence \(\mathsf {B} = \mathsf {B} ^{\lambda }\) as the bijections \({\mathbb {R}}_{>0}^{\lambda } \rightarrow {\mathbb {R}}_{>0}^{\lambda }\) given by

$$\begin{aligned} \mathsf {K}&:= \varrho _{i_n,j_n} \circ \varrho _{i_{n-1},j_{n-1}} \circ \dots \circ \varrho _{i_1,j_1} \, , \end{aligned}$$
(2.9)
$$\begin{aligned} \mathsf {B}&:= \tau _{i_n,j_n} \circ \tau _{i_{n-1},j_{n-1}} \circ \dots \circ \tau _{i_1,j_1} \, . \end{aligned}$$
(2.10)

As in (2.13) and (2.15), here \(((i_1,j_1),\dots ,(i_n,j_n))\) is any sequence of distinct boxes such that, for all \(1\le k\le n\), \(\lambda ^{(k)} := \{(i_1,j_1),\dots , (i_k,j_k)\}\) is a Young diagram, and \(\lambda ^{(n)} = \lambda \). We will be mostly using the following equivalent recursive definition of \(\mathsf {K} \) and \(\mathsf {B} \):

$$\begin{aligned} \mathsf {K} ^{\lambda } = \varrho _{m,n} \circ \mathsf {K} ^{\lambda \setminus \{(m,n)\}} \quad \qquad \text {and}\qquad \quad \mathsf {B} ^{\lambda } = \tau _{m,n} \circ \mathsf {B} ^{\lambda \setminus \{(m,n)\}} \, , \end{aligned}$$
(2.11)

for any corner box \((m,n)\) of \(\lambda \), where by convention \(\mathsf {K} ^{\varnothing } = \mathsf {B} ^{\varnothing } = {{\,\mathrm{id}\,}}\).

Recall that we denote by \(\mathsf {T} \) the map that acts on a Young-diagram-shaped array by transposing it in the usual way. As in the “tropical” case, it is easy to see that the geometric RSK and Burge correspondences satisfy a symmetry property: \(\mathsf {K} (\varvec{w}^{\mathsf {T} }) = \mathsf {K} (\varvec{w})^{\mathsf {T} }\) and \(\mathsf {B} (\varvec{w}^{\mathsf {T} }) = \mathsf {B} (\varvec{w})^{\mathsf {T} }\) for all \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\) – see Proposition 5.1 for a formal statement and proof in the Burge case.
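To make definitions (2.9)–(2.11) concrete, here is a minimal implementation of \(\mathsf {K} \) and \(\mathsf {B} \) on rectangular arrays, using the row-by-row admissible ordering of boxes (the encoding and function names are ours):

```python
def bnd(w, i, j):
    if (i, j) in w:
        return w[(i, j)]
    return 0.5 if (i, j) in ((0, 1), (1, 0)) else 0.0  # boundary convention

def amap(w, i, j):  # local map (2.1)
    w[(i, j)] = (bnd(w, i-1, j) + bnd(w, i, j-1)) / (
        w[(i, j)] * (1 / w[(i+1, j)] + 1 / w[(i, j+1)]))

def cmap(w, i, j):  # local map (2.3)
    w[(i, j)] *= bnd(w, i-1, j) + bnd(w, i, j-1)

def dmap(w, i, j, k, l):  # local map (2.4)
    s = bnd(w, i-1, j) + bnd(w, i, j-1)
    x, y = w[(i, j)], w[(k, l)]
    w[(i, j)] = 1 / (1 / x + 1 / (y * s))
    w[(k, l)] = (y * s / x**2 + 1 / x) / (1 / w[(i+1, j)] + 1 / w[(i, j+1)])

def rho(w, k, l):  # diagonal map (2.6): c first, then a's up the diagonal
    cmap(w, k, l)
    for t in range(1, min(k, l)):
        amap(w, k - t, l - t)

def tau(w, k, l):  # diagonal map (2.8): d's from the top of the diagonal, then c
    for t in range(min(k, l) - 1, 0, -1):
        dmap(w, k - t, l - t, k, l)
    cmap(w, k, l)

def as_dict(mat):
    return {(i + 1, j + 1): x for i, row in enumerate(mat) for j, x in enumerate(row)}

def gRSK(mat):   # geometric RSK (2.9), boxes taken row by row
    w = as_dict(mat)
    for i, row in enumerate(mat):
        for j, _ in enumerate(row):
            rho(w, i + 1, j + 1)
    return w

def gBurge(mat):  # geometric Burge correspondence (2.10)
    w = as_dict(mat)
    for i, row in enumerate(mat):
        for j, _ in enumerate(row):
            tau(w, i + 1, j + 1)
    return w
```

On a \(2\times 2\) array with entries a, b, c, d this reproduces, e.g., the corner entries \(t_{2,2} = d(ab+ac)\) for \(\mathsf {K} \) (a sum over down-right paths) and \(t_{2,2} = bc(a+d)\) for \(\mathsf {B} \) (a sum over north-east paths), as well as the symmetry \(\mathsf {K} (\varvec{w}^{\mathsf {T} }) = \mathsf {K} (\varvec{w})^{\mathsf {T} }\).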

We next define the geometric Schützenberger involution \(\mathsf {S} = \mathsf {S} ^{m\times n} :{\mathbb {R}}_{>0}^{m\times n} \rightarrow {\mathbb {R}}_{>0}^{m\times n}\) by

$$\begin{aligned} \mathsf {S} ^{m\times n} := \sigma _{m,1} \circ (\sigma _{m,2} \circ \sigma _{m,1}) \circ (\sigma _{m,3} \circ \sigma _{m,2} \circ \sigma _{m,1}) \circ \dots \circ (\sigma _{m,n-1} \circ \dots \circ \sigma _{m,1}) \, .\nonumber \\ \end{aligned}$$
(2.12)

Similarly to Sect. 2, we write \(\varvec{t}=(\varvec{u}\backslash \varvec{v})\) for a matrix \(\varvec{t}\) with “lower part” \(\varvec{u}\) and “upper part” \(\varvec{v}\) (i.e. the parts that \(\varvec{t}\) is divided into by the diagonal that contains the bottom-right corner of the matrix) – see (2.10)-(2.11)-(2.12). We have that \(\mathsf {S} \) (respectively, \(\mathsf {T} \mathsf {S} \mathsf {T} \)) acts on \(\varvec{t}\) by modifying the lower part \(\varvec{u}\) (respectively, the upper part \(\varvec{v}\)) only, and preserving the \((n-m)\)-th diagonal. Therefore, if \(\varvec{u}^{\mathsf {S} }\) is the lower part of \(\mathsf {S} (\varvec{u} \backslash \varvec{v})\) and \(\varvec{v}^{\mathsf {S} }\) is the upper part of \(\mathsf {T} \mathsf {S} \mathsf {T} (\varvec{u} \backslash \varvec{v})\), then we have that \(\mathsf {S} (\varvec{u} \backslash \varvec{v}) = (\varvec{u}^{\mathsf {S} } \backslash \varvec{v})\) and \(\mathsf {T} \mathsf {S} \mathsf {T} (\varvec{u} \backslash \varvec{v}) = (\varvec{u} \backslash \varvec{v}^{\mathsf {S} })\). The maps \(\varvec{u} \mapsto \varvec{u}^{\mathsf {S} }\) and \(\varvec{v} \mapsto \varvec{v}^{\mathsf {S} }\) can be regarded as the geometric lifting of (the generalization of) the Schützenberger involution studied in [8].
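Definition (2.12) can likewise be sketched in code (names ours). For a \(1\times n\) input it reproduces the closed form appearing later in the proof of Theorem 3.2, and applying it twice returns the input, consistently with the map being an involution:

```python
def bnd(w, i, j):
    if (i, j) in w:
        return w[(i, j)]
    return 0.5 if (i, j) in ((0, 1), (1, 0)) else 0.0  # boundary convention

def amap(w, i, j):  # local map (2.1)
    w[(i, j)] = (bnd(w, i-1, j) + bnd(w, i, j-1)) / (
        w[(i, j)] * (1 / w[(i+1, j)] + 1 / w[(i, j+1)]))

def bmap(w, i, j):  # local map (2.2)
    w[(i, j)] = (bnd(w, i-1, j) + bnd(w, i, j-1)) * w[(i, j+1)] / w[(i, j)]

def sigma(w, k, l):  # diagonal map (2.7): b first, then a's up the diagonal
    bmap(w, k, l)
    for t in range(1, min(k, l)):
        amap(w, k - t, l - t)

def gSchuetz(mat):  # geometric Schuetzenberger involution (2.12)
    m, n = len(mat), len(mat[0])
    w = {(i + 1, j + 1): x for i, row in enumerate(mat) for j, x in enumerate(row)}
    for top in range(n - 1, 0, -1):   # rightmost block of (2.12) acts first
        for l in range(1, top + 1):
            sigma(w, m, l)
    return [[w[(i + 1, j + 1)] for j in range(n)] for i in range(m)]
```

For \(n=1\) the loop is empty, in agreement with \(\mathsf {S} ^{k\times 1} = {{\,\mathrm{id}\,}}\).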

The relation between geometric RSK and geometric Schützenberger involution goes through both row and column reversion of the input matrix, as stated in the following theorem, which is the geometric analog of Theorem 2.1. As in (2.17)-(2.18), \(\mathsf {R} \) and \(\mathsf {C} \) denote the maps that reverse, respectively, the rows and the columns of a matrix.

Theorem 3.1

([32, Section 4.5], [37, Sections 2.4, 3.1]) Let \(\varvec{w} \in {\mathbb {R}}_{>0}^{m\times n}\). The following diagram commutes:

figure d

We will now prove a stronger and fundamental result that represents the geometric lifting of Theorem 2.2. It connects the geometric RSK, Burge, and Schützenberger correspondences through either column reversion or row reversion of the input matrix.

Theorem 3.2

Let \(\varvec{w}\in {\mathbb {R}}_{>0}^{m\times n}\). The following diagrams commute:

figure e

Notice that Theorem 3.1 can indeed be derived as a straightforward consequence of Theorem 3.2. To prove the latter result we need the following proposition, whose proof is quite involved and is postponed to the appendix.

Proposition 3.3

If \((p,q),(p,q+1),(p-1,q) \in \lambda \), then the following relation between maps acting on \(\lambda \)-shaped arrays holds:

$$\begin{aligned} \sigma _{p,q} \varrho _{p,q+1} \tau _{p,q} = \tau _{p,q+1} \varrho _{p,q} \sigma _{p-1,q} \mathsf {e} _{p,q}^{p,q+1} \, . \end{aligned}$$
(2.13)

The latter can be seen as a structural ‘commutation relation’ between the diagonal maps involved in the geometric RSK, Burge, and Schützenberger bijections. Following the same lines as our proofs (or, alternatively, using a tropical limit procedure as in [11, Sect. 1.1.3]), one can argue that an analogous commutation relation holds for the corresponding piecewise linear maps discussed in Sect. 2. However, we are not aware of any combinatorial version of Proposition 3.3 in the literature; moreover, this appears to be a somewhat non-trivial identity, even in the combinatorial setting. Notice that the combinatorial analog of Theorem 3.2, i.e. Theorem 2.2, is classically proven without resorting to the piecewise linear formulation of the combinatorial correspondences.
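Since the proof of Proposition 3.3 is involved, a numerical sanity check may be helpful. The sketch below (our own implementation of the local and diagonal maps) verifies identity (2.13) on a \(2\times 2\) array with \((p,q)=(2,1)\):

```python
def bnd(w, i, j):
    if (i, j) in w:
        return w[(i, j)]
    return 0.5 if (i, j) in ((0, 1), (1, 0)) else 0.0  # boundary convention

def amap(w, i, j):  # local map (2.1)
    w[(i, j)] = (bnd(w, i-1, j) + bnd(w, i, j-1)) / (
        w[(i, j)] * (1 / w[(i+1, j)] + 1 / w[(i, j+1)]))

def bmap(w, i, j):  # local map (2.2)
    w[(i, j)] = (bnd(w, i-1, j) + bnd(w, i, j-1)) * w[(i, j+1)] / w[(i, j)]

def cmap(w, i, j):  # local map (2.3)
    w[(i, j)] *= bnd(w, i-1, j) + bnd(w, i, j-1)

def dmap(w, i, j, k, l):  # local map (2.4)
    s = bnd(w, i-1, j) + bnd(w, i, j-1)
    x, y = w[(i, j)], w[(k, l)]
    w[(i, j)] = 1 / (1 / x + 1 / (y * s))
    w[(k, l)] = (y * s / x**2 + 1 / x) / (1 / w[(i+1, j)] + 1 / w[(i, j+1)])

def emap(w, i, j, k, l):  # exchange map (2.5)
    w[(i, j)], w[(k, l)] = w[(k, l)], w[(i, j)]

def rho(w, k, l):    # diagonal map (2.6)
    cmap(w, k, l)
    for t in range(1, min(k, l)):
        amap(w, k - t, l - t)

def sigma(w, k, l):  # diagonal map (2.7)
    bmap(w, k, l)
    for t in range(1, min(k, l)):
        amap(w, k - t, l - t)

def tau(w, k, l):    # diagonal map (2.8)
    for t in range(min(k, l) - 1, 0, -1):
        dmap(w, k - t, l - t, k, l)
    cmap(w, k, l)

def check_prop33(w0, p, q):
    """Both sides of (2.13), each applied to a copy of the array w0;
    compositions are applied right to left."""
    lhs, rhs = dict(w0), dict(w0)
    tau(lhs, p, q); rho(lhs, p, q + 1); sigma(lhs, p, q)
    emap(rhs, p, q, p, q + 1); sigma(rhs, p - 1, q); rho(rhs, p, q); tau(rhs, p, q + 1)
    return lhs, rhs
```

On the array \(\bigl(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\bigr)\) both sides evaluate to \(w_{1,1} = bc/(b+ac)\), \(w_{1,2} = b\), \(w_{2,1} = bd/a\), \(w_{2,2} = d(b+ac)\).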

Proof of Theorem 3.2

The second relation \(\mathsf {T} \mathsf {S} \mathsf {T} \mathsf {K} = \mathsf {B} \mathsf {R} \) follows from the first relation \(\mathsf {B} \mathsf {C} = \mathsf {S} \mathsf {K} \) together with the basic properties of the maps involved. Namely, assuming that \(\mathsf {B} \mathsf {C} = \mathsf {S} \mathsf {K} \) holds true, recalling that both \(\mathsf {K} \) and \(\mathsf {B} \) commute with the transposition, and using the trivial fact that \(\mathsf {T} \mathsf {C} \mathsf {T} = \mathsf {R} \), we obtain:

$$\begin{aligned} \mathsf {T} \mathsf {S} \mathsf {T} \mathsf {K} = \mathsf {T} \mathsf {S} \mathsf {K} \mathsf {T} = \mathsf {T} \mathsf {B} \mathsf {C} \mathsf {T} = \mathsf {B} \mathsf {T} \mathsf {C} \mathsf {T} = \mathsf {B} \mathsf {R} \, . \end{aligned}$$

We are then left to prove that \(\mathsf {B} \mathsf {C} = \mathsf {S} \mathsf {K} \) as maps \({\mathbb {R}}_{>0}^{m\times n} \rightarrow {\mathbb {R}}_{>0}^{m\times n}\); to do so, we will apply the induction principle several times. Let us first fix any \(n\ge 1\) and proceed by induction on m, i.e. the number of rows of the matrices. For \(m=1\) and a \(1\times n\) matrix \(\varvec{w} = \begin{pmatrix} w_1&w_2&\dots&w_{n-1}&w_n \end{pmatrix}\), by definition we have that

$$\begin{aligned} \mathsf {K} (\varvec{w}) = \mathsf {B} (\varvec{w}) = \begin{pmatrix} w_1&w_1 w_2&\dots&\prod _{k=1}^{n-1} w_k&\prod _{k=1}^n w_k \end{pmatrix} \, . \end{aligned}$$

It is also easy to check that

$$\begin{aligned} \mathsf {S} (\varvec{w}) = \begin{pmatrix} \dfrac{w_n}{w_{n-1}}&\dfrac{w_n}{w_{n-2}}&\dots&\dfrac{w_n}{w_1}&w_n \end{pmatrix} \, . \end{aligned}$$

It follows that

$$\begin{aligned} \begin{aligned} \mathsf {B} \mathsf {C} (\varvec{w})&= \mathsf {B} \begin{pmatrix} w_n&w_{n-1}&\dots&w_2&w_1 \end{pmatrix} \\&= \begin{pmatrix} w_n&w_{n-1} w_n&\dots&\prod _{k=2}^n w_k&\prod _{k=1}^n w_k \end{pmatrix} \, , \\&= \mathsf {S} \begin{pmatrix} w_1&w_1 w_2&\dots&\prod _{k=1}^{n-1} w_k&\prod _{k=1}^n w_k \end{pmatrix} = \mathsf {S} \mathsf {K} (\varvec{w}) \, , \end{aligned} \end{aligned}$$

thus proving that \(\mathsf {B} \mathsf {C} = \mathsf {S} \mathsf {K} \) for \(1\times n\) matrices.
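The three displays above can be cross-checked numerically; the following short sketch (variable names ours) does so for a \(1\times 4\) matrix:

```python
from math import prod

w = [2.0, 3.0, 5.0, 7.0]
n = len(w)

K = [prod(w[:k + 1]) for k in range(n)]         # K(w) = B(w): partial products
C = list(reversed(w))                           # column reversion of a 1 x n matrix
BC = [prod(C[:k + 1]) for k in range(n)]        # B applied to C(w)
SK = [K[-1] / K[n - 2 - i] for i in range(n - 1)] + [K[-1]]  # S applied to K(w)

# B C (w) = S K (w), as claimed for 1 x n matrices
assert all(abs(x - y) < 1e-12 for x, y in zip(BC, SK))
```

Here \(\mathsf {B} \mathsf {C} (\varvec{w}) = (7, 35, 105, 210) = \mathsf {S} \mathsf {K} (\varvec{w})\).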

Let us now suppose by induction that for a given \(m>1\) the statement is true in the case of \((m-1)\times n\) matrices for all \(n\ge 1\), and prove the statement in the case of \(m\times n\) matrices for all \(n\ge 1\). Let \(\varvec{x} = (x_{i,j})_{1\le i\le m-1, 1\le j\le n} \in {\mathbb {R}}_{>0}^{(m-1)\times n}\), \(\varvec{y} = (y_1,\dots ,y_n) \in {\mathbb {R}}_{>0}^{1\times n}\), and

$$\begin{aligned} \varvec{w} := \begin{pmatrix} \varvec{x} \\ \varvec{y} \end{pmatrix} = \begin{pmatrix} x_{1,1} &{}\dots &{}x_{1,n} \\ \vdots &{}\ddots &{}\vdots \\ x_{m-1,1} &{}\dots &{}x_{m-1,n} \\ y_1 &{}\dots &{}y_n \end{pmatrix} \, . \end{aligned}$$

By definition, the geometric Burge correspondence on \(\varvec{w}\) can be obtained by applying maps \(\tau _{k,l}\)’s first for all \(1\le k\le m-1\) and \(1\le l\le n\), and subsequently for \(k=m\) and \(1\le l\le n\). Therefore,

$$\begin{aligned} \mathsf {B} \mathsf {C} (\varvec{w}) = \tau _{m,n} \cdots \tau _{m,1} \begin{pmatrix} \mathsf {B} \mathsf {C} (\varvec{x}) \\ \mathsf {C} (\varvec{y}) \end{pmatrix} = \tau _{m,n} \cdots \tau _{m,1} \begin{pmatrix} \mathsf {S} \mathsf {K} (\varvec{x}) \\ \mathsf {C} (\varvec{y}) \end{pmatrix} \, , \end{aligned}$$

where in the latter equality we have applied the induction hypothesis on \(\varvec{x}\). The same reasoning holds for the geometric RSK as a composition of maps \(\varrho _{k,l}\)’s:

$$\begin{aligned} \mathsf {S} \mathsf {K} (\varvec{w}) = \mathsf {S} \, \varrho _{m,n} \cdots \varrho _{m,1} \begin{pmatrix} \mathsf {K} (\varvec{x}) \\ \varvec{y} \end{pmatrix} \, . \end{aligned}$$

Since \(\mathsf {K} \) is invertible, to conclude that \(\mathsf {B} \mathsf {C} = \mathsf {S} \mathsf {K} \) on \(m\times n\) matrices it suffices to show that

$$\begin{aligned} \tau _{m,n} \cdots \tau _{m,1} \begin{pmatrix} \mathsf {S} (\varvec{x}) \\ \mathsf {C} (\varvec{y}) \end{pmatrix} = \mathsf {S} \, \varrho _{m,n} \cdots \varrho _{m,1} \begin{pmatrix} \varvec{x} \\ \varvec{y} \end{pmatrix} \, . \end{aligned}$$
(2.14)

We are then left to prove (2.14) for all \(m> 1\) and \(n\ge 1\). We will now fix m and proceed by induction on n. The statement for \(n=1\) follows from the fact that \(\tau _{m,1} = \mathsf {c} _{m,1} = \varrho _{m,1}\) and \(\mathsf {S} ^{k\times 1} = \mathsf {C} ^{k\times 1} = {{\,\mathrm{id}\,}}^{k\times 1}\) for any \(k\ge 1\). We will now show that, for any given \(N>1\), if (2.14) holds for \(n=N-1\), then it also holds for \(n=N\). For \(n=N\), the left-hand side of (2.14) reads as

$$\begin{aligned}&\tau _{m,N} \cdots \tau _{m,1} \begin{pmatrix} \mathsf {S} (\varvec{x}) \\ \mathsf {C} (\varvec{y}) \end{pmatrix}\\&\quad = \tau _{m,N} (\tau _{m,N-1} \cdots \tau _{m,1}) \begin{pmatrix} \mathsf {S} ^{(m-1)\times (N-1)} (\sigma _{m-1,N-1} \cdots \sigma _{m-1,1}) (\varvec{x}) \\ \begin{matrix} y_N &{}y_{N-1} &{}\cdots &{}y_2 &{}y_1 \end{matrix} \end{pmatrix} \\&\quad = \tau _{m,N} \mathsf {S} ^{m \times (N-1)} (\varrho _{m,N-1} \cdots \varrho _{m,1}) \begin{pmatrix} (\sigma _{m-1,N-1} \cdots \sigma _{m-1,1}) (\varvec{x}) \\ \begin{matrix} y_2 &{}\cdots &{}y_{N-1} &{}y_N &{}y_1 \end{matrix} \end{pmatrix} \\&\quad = \tau _{m,N} \mathsf {S} ^{m \times (N-1)} (\varrho _{m,N-1} \cdots \varrho _{m,1}) (\mathsf {e} _{m,N-1}^{m,N} \cdots \mathsf {e} _{m,1}^{m,2}) \begin{pmatrix} (\sigma _{m-1,N-1} \cdots \sigma _{m-1,1}) (\varvec{x}) \\ \begin{matrix} y_1 &{}y_2 &{}\cdots &{}y_{N-1} &{}y_N \end{matrix} \end{pmatrix} \\&\quad = \tau _{m,N} \mathsf {S} ^{m \times (N-1)} (\varrho _{m,N-1} \cdots \varrho _{m,1}) (\sigma _{m-1,N-1} \cdots \sigma _{m-1,1}) (\mathsf {e} _{m,N-1}^{m,N} \cdots \mathsf {e} _{m,1}^{m,2}) \begin{pmatrix} \varvec{x} \\ \varvec{y} \end{pmatrix} \\&\quad = \mathsf {S} ^{m \times (N-1)} \tau _{m,N} (\varrho _{m,N-1} \sigma _{m-1,N-1} \mathsf {e} _{m,N-1}^{m,N}) \cdots (\varrho _{m,1} \sigma _{m-1,1} \mathsf {e} _{m,1}^{m,2}) \begin{pmatrix} \varvec{x} \\ \varvec{y} \end{pmatrix} \, . \end{aligned}$$

For the above equalities we have used, in order: the recursive definition of \(\mathsf {S} \); the induction hypothesis, i.e. (2.14) for \(n=N-1\); the definition of the exchange operator \(\mathsf {e} _{i,j}^{k,l}\); finally, the commutative properties of local and diagonal maps. On the other hand, for \(n=N\), the right-hand side of (2.14) reads as

$$\begin{aligned} \begin{aligned} \mathsf {S} \, \varrho _{m,N} \cdots \varrho _{m,1} \begin{pmatrix} \varvec{x} \\ \varvec{y} \end{pmatrix}&= \mathsf {S} ^{m\times (N-1)} (\sigma _{m,N-1} \cdots \sigma _{m,1}) (\varrho _{m,N} \cdots \varrho _{m,2} \varrho _{m,1}) \begin{pmatrix} \varvec{x} \\ \varvec{y} \end{pmatrix} \\&= \mathsf {S} ^{m\times (N-1)} (\sigma _{m,N-1} \varrho _{m,N}) \cdots (\sigma _{m,1} \varrho _{m,2}) \varrho _{m,1} \begin{pmatrix} \varvec{x} \\ \varvec{y} \end{pmatrix} \, , \end{aligned} \end{aligned}$$

again by definition of \(\mathsf {S} \) and the commutative properties. To conclude that (2.14) holds for \(n=N\), it thus remains to show that

$$\begin{aligned} \begin{aligned}&\tau _{m,N} (\varrho _{m,N-1} \sigma _{m-1,N-1} \mathsf {e} _{m,N-1}^{m,N}) \cdots (\varrho _{m,1} \sigma _{m-1,1} \mathsf {e} _{m,1}^{m,2}) \\&\qquad = (\sigma _{m,N-1} \varrho _{m,N}) \cdots (\sigma _{m,1} \varrho _{m,2}) \varrho _{m,1} \, . \end{aligned} \end{aligned}$$
(2.15)

In turn, the latter readily follows from \(N-1\) iterative applications of Proposition 3.3 together with the already noticed fact that \(\tau _{m,1} = \varrho _{m,1}\). \(\square \)

4 Properties of the geometric Burge correspondence

In this section we prove the volume preserving property and other useful properties of the geometric Burge correspondence. Such properties will follow either directly from the definition via local maps given in Sect. 3 or from Theorem 3.2, as a consequence of the analogous properties of the geometric RSK correspondence [38].

For the geometric RSK correspondence, it is known that the product of the last k entries of a diagonal in the output array can be expressed in terms of the input array as a “partition function” on k non-intersecting directed lattice paths.

Proposition 4.1

([37, 38]) Let \(\varvec{w}\in {\mathbb {R}}_{>0}^{m\times n}\) and \(\varvec{t}:= \mathsf {K} (\varvec{w})\). For all \(1\le k\le m \wedge n\) we have that

$$\begin{aligned} t_{m,n} t_{m-1,n-1} \cdots t_{m-k+1, n-k+1} = \sum _{(\pi _1,\dots ,\pi _k) \in \Pi ^{(k)}_{m,n}} \prod _{(i,j)\in \pi _1 \cup \cdots \cup \pi _k} w_{i,j} \, , \end{aligned}$$
(2.1)

where \(\Pi ^{(k)}_{m,n}\) is the set of k-tuples of non-intersecting directed lattice paths in \({\mathbb {N}}^2\) starting at (1, 1), (1, 2), ..., (1, k) and ending at \((m,n-k+1)\), \((m,n-k+2)\), ..., \((m,n)\) respectively.
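For \(k=1\), the right-hand side in Proposition 4.1 is a partition function satisfying the recursion \(Z_{i,j} = w_{i,j}(Z_{i-1,j} + Z_{i,j-1})\), which has the same shape as the local map \(\mathsf {c} _{i,j}\). The following sketch (implementation choices ours) compares the two sides of the \(k=1\) identity on a \(2\times 3\) array:

```python
def bnd(w, i, j):
    if (i, j) in w:
        return w[(i, j)]
    return 0.5 if (i, j) in ((0, 1), (1, 0)) else 0.0  # boundary convention

def amap(w, i, j):  # local map (2.1)
    w[(i, j)] = (bnd(w, i-1, j) + bnd(w, i, j-1)) / (
        w[(i, j)] * (1 / w[(i+1, j)] + 1 / w[(i, j+1)]))

def cmap(w, i, j):  # local map (2.3)
    w[(i, j)] *= bnd(w, i-1, j) + bnd(w, i, j-1)

def rho(w, k, l):   # diagonal map (2.6)
    cmap(w, k, l)
    for t in range(1, min(k, l)):
        amap(w, k - t, l - t)

def gRSK(mat):      # geometric RSK (2.9), boxes taken row by row
    w = {(i + 1, j + 1): x for i, row in enumerate(mat) for j, x in enumerate(row)}
    for i, row in enumerate(mat):
        for j, _ in enumerate(row):
            rho(w, i + 1, j + 1)
    return w

def path_sum(mat):
    """Sum over directed (down-right) lattice paths from (1,1) to (m,n)
    of the product of the entries along the path."""
    m, n = len(mat), len(mat[0])
    Z = {}
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            up_left = Z.get((i - 1, j), 0.0) + Z.get((i, j - 1), 0.0)
            Z[(i, j)] = mat[i - 1][j - 1] * (up_left if (i, j) != (1, 1) else 1.0)
    return Z[(m, n)]
```

On the array below, both sides equal \(2\cdot7\cdot11\cdot13 + 2\cdot3\cdot11\cdot13 + 2\cdot3\cdot5\cdot13 = 3250\).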

The geometric Burge correspondence has a similar property, where the non-intersecting paths go in the north-east direction instead of south-east. This fact is proven in the next proposition. We state the result for generic Young-diagram-shaped arrays, and specialize it to the “extreme” cases, which are of particular interest. First, denote the product of all the entries on the k-th diagonal of \(\varvec{t}\in {\mathbb {R}}_{>0}^{\lambda }\) by

$$\begin{aligned} P_k(\varvec{t}) := \prod _{j-i = k} t_{i,j} \, . \end{aligned}$$
(2.2)

Proposition 4.2

Let \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\) and \(\varvec{t}:= \mathsf {B} (\varvec{w})\). If \((m,n)\) is a border box of \(\lambda \), then for all \(1\le k\le m \wedge n\) we have that

$$\begin{aligned} t_{m,n} t_{m-1,n-1} \cdots t_{m-k+1, n-k+1} = \sum _{(\pi _1,\dots ,\pi _k) \in \Pi ^{*(k)}_{m,n}} \prod _{(i,j)\in \pi _1 \cup \cdots \cup \pi _k} w_{i,j} \, , \end{aligned}$$
(2.3)

where \(\Pi ^{*(k)}_{m,n}\) is the set of k-tuples of non-intersecting directed lattice paths in \({\mathbb {N}}^2\) starting at (m, 1), (m, 2), ..., \((m,k)\) and ending at \((1,n-k+1)\), \((1,n-k+2)\), ..., (1, n) respectively. In particular, for \(k=1\),

$$\begin{aligned} t_{m,n} = \sum _{\pi \in \Pi ^{(1)}_{m,n}} \prod _{(i,j)\in \pi } w_{i,j} \, , \end{aligned}$$
(2.4)

and, for \(k=m\wedge n\),

$$\begin{aligned} P_{n-m}(\varvec{t}) = \prod _{i=1}^m \prod _{j=1}^n w_{i,j} \, . \end{aligned}$$
(2.5)

Proof

Identities (2.4) and (2.5) are straightforward consequences of (2.3), hence we only need to prove the latter. It is clear from (2.10) that

$$\begin{aligned} \varvec{t} = \mathsf {B} ^{\lambda }(\varvec{w}) =\tau _{i_l, j_l} \circ \cdots \circ \tau _{i_1, j_1} \circ \mathsf {B} ^{m\times n} (\varvec{w}) \, , \end{aligned}$$
(2.6)

where \(l= |\lambda | - mn\) and \((i_1,j_1), \dots , (i_l,j_l)\) are chosen so that \((m\times n) \cup \{(i_1,j_1), \dots , (i_h, j_h)\}\) is a Young diagram for all \(1\le h\le l\). Since by hypothesis \((m,n)\) is the last box of \(\lambda \) on the corresponding diagonal, the application of \(\tau _{i_1, j_1}\), ..., \(\tau _{i_l, j_l}\) in (2.6) does not modify the \((n-m)\)-th diagonal of \(\mathsf {B} ^{m\times n} (\varvec{w})\). It then suffices to prove (2.3) when \(\lambda = m\times n\) is a rectangular partition, so we now restrict to this case. By Theorem 3.2 we have that \(\varvec{t} = \mathsf {B} (\varvec{w}) = \widetilde{\varvec{t}}^{\mathsf {S} }\), where \(\widetilde{\varvec{t}} := \mathsf {K} (\varvec{w}^{\mathsf {C} })\). By Proposition 4.1, we have that

$$\begin{aligned} \begin{aligned} {\widetilde{t}}_{m,n} {\widetilde{t}}_{m-1,n-1} \cdots {\widetilde{t}}_{m-k+1, n-k+1}&= \sum _{(\pi _1,\dots ,\pi _k) \in \Pi ^{(k)}_{m,n}} \prod _{(i,j)\in \pi _1 \cup \cdots \cup \pi _k} w_{i,n-j+1} \\&= \sum _{(\pi _1,\dots ,\pi _k) \in \Pi ^{*(k)}_{m,n}} \prod _{(i,j)\in \pi _1 \cup \cdots \cup \pi _k} w_{i,j} \, . \end{aligned} \end{aligned}$$

The geometric Schützenberger involution does not modify the \((n-m)\)-th diagonal of \(\widetilde{\varvec{t}}\), hence \({\widetilde{t}}_{m,n} = t_{m,n}\), \({\widetilde{t}}_{m-1,n-1} = t_{m-1,n-1}\), ..., \({\widetilde{t}}_{m-k+1,n-k+1} = t_{m-k+1,n-k+1}\). The above display then proves (2.3) in the case \(\lambda = m\times n\). \(\square \)
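The local-map structure behind this proof makes the statement easy to test numerically. In the following Python sketch (our reconstruction, not the authors' code), the maps \(\mathsf {d} ^{k,l}_{i,j}\) and \(\mathsf {c} _{k,l}\) are read off from formulas (2.11)–(2.12) and the display preceding (2.13); \(\mathsf {B} \) is built box by box in row-by-row order (so that every prefix is a Young diagram), \(\tau _{k,l}\) for \(k>l\) is taken to be the transposed version (consistently with Proposition 5.1), and the boundary conventions \(w_{0,1}=w_{1,0}=1/2\), other boundary entries zero, are those stated with Proposition 4.3. The checks cover (2.4), (2.5) and (2.7)–(2.8):

```python
import random
from itertools import combinations

def get(a, i, j):
    # boundary conventions: entries (0,1) and (1,0) equal 1/2, the
    # remaining boundary entries vanish
    if i == 0:
        return 0.5 if j == 1 else 0.0
    if j == 0:
        return 0.5 if i == 1 else 0.0
    return a[(i, j)]

def burge(w, n):
    """Geometric Burge map B on an n x n array, built box by box: each
    tau_{k,l} applies the maps d^{k,l}_{i,j} along the diagonal of (k,l)
    (from the border inward) and then c_{k,l}."""
    t = {}
    for k in range(1, n + 1):
        for l in range(1, n + 1):  # row by row: every prefix is a Young diagram
            t[(k, l)] = w[(k, l)]  # place the new input entry
            p = abs(l - k)
            diag = ([(i, i + p) for i in range(1, min(k, l))] if l >= k
                    else [(i + p, i) for i in range(1, min(k, l))])
            for (i, j) in diag:
                a, b = t[(i, j)], t[(k, l)]
                s = get(t, i - 1, j) + get(t, i, j - 1)
                # d^{k,l}_{i,j}, cf. (2.11) and the display before (2.13)
                t[(i, j)] = 1.0 / (1.0 / a + 1.0 / (b * s))
                t[(k, l)] = ((b * s / a ** 2 + 1.0 / a)
                             / (1.0 / t[(i + 1, j)] + 1.0 / t[(i, j + 1)]))
            # c_{k,l}: multiply by the sum of the two previous neighbours, cf. (2.12)
            t[(k, l)] *= get(t, k - 1, l) + get(t, k, l - 1)
    return t

def ne_path_sum(w, n):
    # right-hand side of (2.4): directed paths (n,1) -> (1,n) with unit
    # steps north (i -> i-1) and east (j -> j+1)
    total = 0.0
    for north in combinations(range(2 * n - 2), n - 1):
        i, j, prod = n, 1, w[(n, 1)]
        for s in range(2 * n - 2):
            if s in north:
                i -= 1
            else:
                j += 1
            prod *= w[(i, j)]
        total += prod
    return total

random.seed(1)
n = 4
w = {(i, j): random.uniform(0.5, 2.0) for i in range(1, n + 1) for j in range(1, n + 1)}
t = burge(w, n)

prod_w = 1.0
for v in w.values():
    prod_w *= v
diag_prod = 1.0
for i in range(1, n + 1):
    diag_prod *= t[(i, i)]

assert abs(t[(n, n)] - ne_path_sum(w, n)) < 1e-9 * t[(n, n)]            # (2.4)
assert abs(diag_prod - prod_w) < 1e-9 * prod_w                          # (2.5)
assert abs(1 / t[(1, 1)] - sum(1 / w[(i, i)] for i in range(1, n + 1))) < 1e-9  # (2.7)
lhs = sum((get(t, i - 1, j) + get(t, i, j - 1)) / t[(i, j)]
          for i in range(1, n + 1) for j in range(1, n + 1))
assert abs(lhs - sum(1 / v for v in w.values())) < 1e-9 * lhs           # (2.8)
```

For \(n=2\) one can verify by hand that this composition gives \(t_{1,1} = (1/w_{1,1}+1/w_{2,2})^{-1}\) and \(t_{2,2} = (w_{1,1}+w_{2,2})\,w_{1,2}w_{2,1}\), in agreement with the two north-east paths from (2, 1) to (1, 2).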

We now state another property of the geometric Burge correspondence, which expresses the sum of certain ratios of output entries as a sum of inverse input entries.

Proposition 4.3

Let \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\) and \(\varvec{t}:= \mathsf {B} (\varvec{w})\). We have that

$$\begin{aligned} \frac{1}{t_{1,1}}&= \sum _{i:(i,i)\in \lambda } \frac{1}{w_{i,i}} \, , \end{aligned}$$
(2.7)
$$\begin{aligned} \sum _{(i,j)\in \lambda } \frac{t_{i-1,j} + t_{i,j-1}}{t_{i,j}}&= \sum _{(i,j)\in \lambda } \frac{1}{w_{i,j}} \, , \end{aligned}$$
(2.8)

with the convention that \(t_{0,1}=t_{1,0}=1/2\) and \(t_{0,k}=t_{k,0}=0\) for all \(k>1\).

Proof

Let \((n,n)\) be the (only) border box of \(\lambda \) on the main diagonal. Then (2.7) can be obtained by setting \(m=n\) in Proposition 4.2 and dividing both sides of equation (2.3), taken with \(k=n-1\), by the corresponding sides of (2.5).

To prove (2.8), we proceed by induction on \(|\lambda |\). If \(\lambda \) consists of the single box (1, 1), then (2.8) follows for example from (2.7). Assume now that \(|\lambda | >1\), and pick any corner box \((m,n)\) of \(\lambda \). Let \(\varvec{w}\in {\mathbb {R}}^{\lambda }_{>0}\) and \(\varvec{t}:= \mathsf {B} (\varvec{w})\). Set \({\widetilde{\lambda }} := \lambda \setminus \{(m,n)\}\) and \(\widetilde{\varvec{t}}:= \mathsf {B} ^{{\widetilde{\lambda }}}(\varvec{w})\), so that \(\varvec{t} = \tau _{m,n}(\widetilde{\varvec{t}})\) by (2.11). By induction, the statement holds for the input array \(\varvec{w}\) and the output array \(\widetilde{\varvec{t}}\) when they are both restricted to the partition \({\widetilde{\lambda }}\). It then suffices to prove that

$$\begin{aligned} \sum _{(i,j)\in \lambda } \frac{t_{i-1,j} + t_{i,j-1}}{t_{i,j}} = \sum _{(i,j)\in {\widetilde{\lambda }}} \frac{{\widetilde{t}}_{i-1,j} + {\widetilde{t}}_{i,j-1}}{{\widetilde{t}}_{i,j}} + \frac{1}{w_{m,n}} \, . \end{aligned}$$
(2.9)

To fix ideas, let us suppose \(m\le n\) and set \(p:= n-m\ge 0\). By (2.8), we then have that

$$\begin{aligned} \tau _{m,n} = \mathsf {c} _{m,n} \circ \mathsf {d} ^{m,n}_{m-1,n-1} \circ \dots \circ \mathsf {d} ^{m,n}_{2,p+2} \circ \mathsf {d} ^{m,n}_{1,p+1} \, . \end{aligned}$$

In particular, the arrays \(\varvec{t}\) and \(\widetilde{\varvec{t}}\) only differ at the entries on the p-th diagonal. Therefore, we may restrict the sums on both sides of (2.9) to the terms that involve entries on the p-th diagonal of \(\varvec{t}\) and \(\widetilde{\varvec{t}}\). We are thus left to prove that

$$\begin{aligned} \begin{aligned}&\sum _{i=1}^{m-1} \left[ \frac{t_{i-1,p+i} + t_{i,p+i-1}}{t_{i,p+i}} + t_{i,p+i} \left( \frac{1}{t_{i+1,p+i}} + \frac{1}{t_{i,p+i+1}}\right) \right] + \frac{t_{m-1,n} + t_{m,n-1}}{t_{m,n}} \\ = \,&\sum _{i=1}^{m-1} \left[ \frac{t_{i-1,p+i} + t_{i,p+i-1}}{{\widetilde{t}}_{i,p+i}} + {\widetilde{t}}_{i,p+i} \left( \frac{1}{t_{i+1,p+i}} + \frac{1}{t_{i,p+i+1}}\right) \right] +\frac{1}{w_{m,n}} \, . \end{aligned} \end{aligned}$$
(2.10)

To prove the latter, set \(\varvec{t}^{(0)} := \widetilde{\varvec{t}}\) and \(\varvec{t}^{(i)} := \mathsf {d} ^{m,n}_{i,p+i}(\varvec{t}^{(i-1)})\) for \(1\le i\le m-1\), so that \(\varvec{t} = \mathsf {c} _{m,n}(\varvec{t}^{(m-1)})\). By definitions (2.4) and (2.3), we thus have that

$$\begin{aligned} t_{i,p+i}&= \left( \frac{1}{{\widetilde{t}}_{i,p+i}} + \frac{1}{t^{(i-1)}_{m,n} (t_{i-1,p+i} + t_{i,p+i-1})} \right) ^{-1}&\text {for }1\le i\le m-1 \, , \end{aligned}$$
(2.11)
$$\begin{aligned} t_{m,n}&= t^{(m-1)}_{m,n}(t_{m-1,n} + t_{m,n-1}) \, . \end{aligned}$$
(2.12)

Again from (2.4) it also follows that

$$\begin{aligned} t^{(i)}_{m,n} = \left( \frac{t^{(i-1)}_{m,n} (t_{i-1,p+i} + t_{i,p+i-1})}{{\widetilde{t}}_{i,p+i}^2} + \frac{1}{{\widetilde{t}}_{i,p+i}} \right) \left( \frac{1}{t_{i+1,p+i}} + \frac{1}{t_{i,p+i+1}} \right) ^{-1} \end{aligned}$$

for \(1\le i\le m-1\), which is equivalent to

$$\begin{aligned} t^{(i-1)}_{m,n} = \left[ t^{(i)}_{m,n} \left( \frac{1}{t_{i+1,p+i}} + \frac{1}{t_{i,p+i+1}} \right) - \frac{1}{{\widetilde{t}}_{i,p+i}} \right] \frac{{\widetilde{t}}_{i,p+i}^2}{t_{i-1,p+i} + t_{i,p+i-1}} \, . \end{aligned}$$
(2.13)

From (2.11) and (2.13) we obtain another expression for \(t_{i,p+i}\) that involves \(t^{(i)}_{m,n}\) instead of \(t^{(i-1)}_{m,n}\):

$$\begin{aligned} t_{i,p+i} = {\widetilde{t}}_{i,p+i} - \frac{1}{t^{(i)}_{m,n}} \left( \frac{1}{t_{i+1,p+i}} + \frac{1}{t_{i,p+i+1}} \right) ^{-1} \, . \end{aligned}$$
(2.14)

We now compute the left-hand side of (2.10) by using (2.11) for the first occurrence of \(t_{i,p+i}\) (\(1\le i\le m-1\)), (2.14) for the second occurrence of \(t_{i,p+i}\) (\(1\le i\le m-1\)), and (2.12) for the only occurrence of \(t_{m,n}\):

$$\begin{aligned} \begin{aligned}&\sum _{i=1}^{m-1} \left[ \frac{t_{i-1,p+i} + t_{i,p+i-1}}{t_{i,p+i}} + t_{i,p+i} \left( \frac{1}{t_{i+1,p+i}} + \frac{1}{t_{i,p+i+1}}\right) \right] + \frac{t_{m-1,n} + t_{m,n-1}}{t_{m,n}} \\&\quad = \sum _{i=1}^{m-1} \left[ \frac{t_{i-1,p+i} + t_{i,p+i-1}}{{\widetilde{t}}_{i,p+i}} + \frac{1}{t^{(i-1)}_{m,n}} + {\widetilde{t}}_{i,p+i} \left( \frac{1}{t_{i+1,p+i}} + \frac{1}{t_{i,p+i+1}}\right) - \frac{1}{t^{(i)}_{m,n}} \right] \\&\qquad + \frac{1}{t^{(m-1)}_{m,n}} \, . \end{aligned} \end{aligned}$$

Noticing that

$$\begin{aligned} \sum _{i=1}^{m-1} \left[ \frac{1}{t^{(i-1)}_{m,n}} - \frac{1}{t^{(i)}_{m,n}} \right] + \frac{1}{t^{(m-1)}_{m,n}} = \frac{1}{w_{m,n}} \, , \end{aligned}$$

as the summation is telescopic and \(t^{(0)}_{m,n} = {\widetilde{t}}_{m,n} = w_{m,n}\), we conclude that (2.10) holds. \(\square \)

Remark 4.4

All the geometric correspondences of Sect. 3 have been defined via composition of local maps and hence, essentially, via a recursive procedure. Furthermore, the geometric RSK correspondence possesses another recursive structure on the border output entries, as we now explain. Let \((m,n)\) be a border box of a partition \(\lambda \), \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\), \(\varvec{s} := \mathsf {K} (\varvec{w})\), and \(\varvec{t}:= \mathsf {B} (\varvec{w})\). It is immediate to see from the definition (2.9) of the geometric RSK correspondence that \(s_{m,n} = (s_{m-1,n} + s_{m,n-1})w_{m,n}\). However, the geometric Burge correspondence lacks such an obvious recursive structure, in the sense that \(t_{m,n}\) cannot be written as a function of the entries of \(\varvec{t}\) and \(\varvec{w}\) in the boxes neighboring \((m,n)\). This is also reflected by the fact that the right-hand side of (2.4), which expresses \(t_{m,n}\) in terms of directed paths on the input array \(\varvec{w}\), also lacks a recursive structure if viewed as a function of \((m,n)\). Therefore, it is not clear how to prove (2.4) inductively, i.e. similarly to the proof strategy of Proposition 4.3. On the other hand, it is easy to see that both sides of (2.5) do have an inductive structure, so formula (2.5) could also be proven inductively from the definition of the geometric Burge correspondence.

We now state and prove the volume-preserving property for the geometric Burge correspondence in log-log variables.

Theorem 4.5

Let \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\) and \(\varvec{t}:= \mathsf {B} (\varvec{w})\). Then, the map

$$\begin{aligned} (\log w_{i,j})_{(i,j)\in \lambda } \mapsto (\log t_{i,j})_{(i,j)\in \lambda } \end{aligned}$$

has Jacobian \(\pm 1\).

Proof

As \(\mathsf {B} \) can be written as a composition of \(c_{k,l}\)’s and \(d^{k,l}_{i,j}\)’s (see (2.10) and (2.8)), it suffices to show that both these types of local maps are volume preserving in log-log variables. This property is immediate for \(c_{k,l}\), so we will prove it for \(d^{k,l}_{i,j}\) only. We will also suppose that \(i,j>1\), as the proof simplifies when \(i=1\) or \(j=1\). Set

$$\begin{aligned} x_1&:= \log w_{i,j} \, ,&x_3&:= \log w_{i-1,j} \, ,&x_5&:= \log w_{i+1,j} \, , \\ x_2&:= \log w_{k,l} \, ,&x_4&:= \log w_{i,j-1} \, ,&x_6&:= \log w_{i,j+1} \, . \end{aligned}$$

Looking at the definition (2.4) of \(d^{k,l}_{i,j}\), we define the transformation \(F:{\mathbb {R}}^6 \rightarrow {\mathbb {R}}^6\) with i-th component

$$\begin{aligned} F_i(x_1,\dots ,x_6) = {\left\{ \begin{array}{ll} -\log \left[ {{\,\mathrm{\mathrm {e}}\,}}^{-x_1} + {{\,\mathrm{\mathrm {e}}\,}}^{-x_2}({{\,\mathrm{\mathrm {e}}\,}}^{x_3} + {{\,\mathrm{\mathrm {e}}\,}}^{x_4})^{-1} \right] &{}i=1 \, , \\ \log \left[ {{\,\mathrm{\mathrm {e}}\,}}^{x_2 - 2x_1} ({{\,\mathrm{\mathrm {e}}\,}}^{x_3} + {{\,\mathrm{\mathrm {e}}\,}}^{x_4}) + {{\,\mathrm{\mathrm {e}}\,}}^{-x_1} \right] - \log \left[ {{\,\mathrm{\mathrm {e}}\,}}^{-x_5} + {{\,\mathrm{\mathrm {e}}\,}}^{-x_6}\right] &{}i=2 \, , \\ x_i &{}3\le i\le 6 \, . \end{array}\right. } \end{aligned}$$

To obtain the Jacobian of F, it suffices to compute the partial derivatives of \(F_1\) and \(F_2\) with respect to \(x_1\) and \(x_2\). Setting

$$\begin{aligned} g(x_1,x_2,x_3,x_4) := \frac{{{\,\mathrm{\mathrm {e}}\,}}^{-x_1}}{ {{\,\mathrm{\mathrm {e}}\,}}^{-x_1} + {{\,\mathrm{\mathrm {e}}\,}}^{-x_2}({{\,\mathrm{\mathrm {e}}\,}}^{x_3} + {{\,\mathrm{\mathrm {e}}\,}}^{x_4})^{-1} } \, , \end{aligned}$$

one can easily obtain:

$$\begin{aligned} \frac{\partial F_1}{\partial x_1} = g \, , \qquad \quad \frac{\partial F_1}{\partial x_2} = 1-g \, , \qquad \quad \frac{\partial F_2}{\partial x_1} = -1-g \, , \qquad \quad \frac{\partial F_2}{\partial x_2} = g \, . \end{aligned}$$

Therefore, the modulus of the Jacobian of F is given by

$$\begin{aligned} |\frac{\partial F_1}{\partial x_1} \frac{\partial F_2}{\partial x_2} - \frac{\partial F_1}{\partial x_2} \frac{\partial F_2}{\partial x_1} | = |g^2 - (1-g)(-1-g) | = 1 \, , \end{aligned}$$

as desired. \(\square \)
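Since F fixes \(x_3,\dots ,x_6\), the full \(6\times 6\) Jacobian determinant reduces to the \(2\times 2\) determinant computed above, which equals 1 identically. As a numerical sanity check (our sketch, not part of the proof), one can evaluate the \(2\times 2\) block \(\partial (F_1,F_2)/\partial (x_1,x_2)\) by central finite differences at a random point:

```python
import math, random

def F12(x):
    # first two components of the map F from the proof of Theorem 4.5
    x1, x2, x3, x4, x5, x6 = x
    F1 = -math.log(math.exp(-x1) + math.exp(-x2) / (math.exp(x3) + math.exp(x4)))
    F2 = (math.log(math.exp(x2 - 2 * x1) * (math.exp(x3) + math.exp(x4)) + math.exp(-x1))
          - math.log(math.exp(-x5) + math.exp(-x6)))
    return F1, F2

random.seed(2)
x = [random.uniform(-1, 1) for _ in range(6)]
h = 1e-6
J = [[0.0, 0.0], [0.0, 0.0]]
for col in range(2):
    xp, xm = list(x), list(x)
    xp[col] += h
    xm[col] -= h
    fp, fm = F12(xp), F12(xm)
    for row in range(2):
        J[row][col] = (fp[row] - fm[row]) / (2 * h)

# g^2 - (1-g)(-1-g) = 1, so the determinant should be 1 at every point
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert abs(det - 1.0) < 1e-6
```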

5 The geometric Burge correspondence on symmetric arrays

A self-conjugate partition, or equivalently a symmetric Young diagram, is a partition \(\lambda \) such that \((i,j)\in \lambda \) if and only if \((j,i)\in \lambda \). If \(\lambda \) is a self-conjugate partition, an array \(\varvec{w}\) of shape \(\lambda \) is called symmetric if \(\varvec{w}^{\mathsf {T} } = \varvec{w}\), i.e. \(w_{i,j}=w_{j,i}\) for all \((i,j)\in \lambda \).

The aim of this section is to explain how the geometric Burge correspondence behaves when restricted to symmetric arrays. This will be a key ingredient for the polymer analysis in Sect. 6.

Proposition 5.1

For any \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\), we have that \(\mathsf {B} (\varvec{w}^{\mathsf {T} }) = \mathsf {B} (\varvec{w})^{\mathsf {T} }\). In particular, if \(\varvec{w}\) is a symmetric array, then \(\mathsf {B} (\varvec{w})\) also is.

Proof

The equality \(\mathsf {B} (\varvec{w}^{\mathsf {T} }) = \mathsf {B} (\varvec{w})^{\mathsf {T} }\) follows from the fact that \(\mathsf {B} \) is a composition of local maps \(c_{k,l}\)’s and \(d^{k,l}_{i,j}\)’s, which trivially commute with the transposition map (see definitions (2.3) and (2.4)). In particular, if \(\varvec{w}\) is symmetric, i.e. \(\varvec{w}=\varvec{w}^{\mathsf {T} }\), then \(\mathsf {B} (\varvec{w}) = \mathsf {B} (\varvec{w})^{\mathsf {T} }\), which means that \(\mathsf {B} (\varvec{w})\) is also symmetric. \(\square \)

The latter proposition implies that the geometric Burge correspondence on symmetric arrays of shape \(\lambda \) can be restricted to a bijection on arrays indexed by the “upper part” of \(\lambda \), i.e. \(\lambda ^\mathrm{{up}}:= \{(i,j)\in \lambda :i\le j\}\) (notice that, in general, \(\lambda ^\mathrm{{up}}\) is not a partition). Namely, there exists a bijection

$$\begin{aligned} \mathsf {B} ^\mathrm{{up}}:{\mathbb {R}}_{>0}^{\lambda ^\mathrm{{up}}} \rightarrow {\mathbb {R}}_{>0}^{\lambda ^\mathrm{{up}}} \, , \qquad \quad (w_{i,j})_{(i,j)\in \lambda ^\mathrm{{up}}} \mapsto (t_{i,j})_{(i,j)\in \lambda ^\mathrm{{up}}} \end{aligned}$$

such that \(\mathsf {B} (\varvec{w})|_{\lambda ^\mathrm{{up}}} = \mathsf {B} ^\mathrm{{up}}(\varvec{w}|_{\lambda ^\mathrm{{up}}})\) for all \(\varvec{w}\in {\mathbb {R}}_{>0}^{\lambda }\). One obvious way to obtain the output of \(\mathsf {B} ^\mathrm{{up}}\) is to take the input array indexed by \(\lambda ^\mathrm{{up}}\), symmetrize it about the diagonal, apply the geometric Burge correspondence (thus obtaining, via Proposition 5.1, another symmetric array), and restrict the result back to \(\lambda ^\mathrm{{up}}\). An equivalent way is to define new local maps, by slightly modifying the original definitions, and apply them directly to the restricted array \((w_{i,j})_{(i,j)\in \lambda ^\mathrm{{up}}}\). More precisely, the new local maps \(\mathsf {c} ^\mathrm{{up}}_{k,l}\) and \(\mathsf {d} ^{k,l,\mathrm {up}}_{i,j}\) are defined as follows:

  • If \(i<j\) and \(k<l\), then \(\mathsf {c} ^\mathrm{{up}}_{i,j} := \mathsf {c} _{i,j}\) and \(\mathsf {d} ^{k,l,\mathrm {up}}_{i,j} := \mathsf {d} ^{k,l}_{i,j}\).

  • If \(i=j\) and \(k=l\), then \(\mathsf {c} ^\mathrm{{up}}_{k,l}\) and \(\mathsf {d} ^{k,l,\mathrm {up}}_{i,j}\) only modify the following entries:

    $$\begin{aligned} \mathsf {c} _{i,i}^\mathrm{{up}}&:w_{i,i} \longmapsto 2 w_{i-1,i} w_{i,i} \, , \end{aligned}$$
    (2.1)
    $$\begin{aligned} \mathsf {d} ^{k,k,\mathrm {up}}_{i,i}&:{\left\{ \begin{array}{ll} w_{i,i} \longmapsto \left( \dfrac{1}{w_{i,i}} + \dfrac{1}{2 w_{i-1,i} w_{k,k} } \right) ^{-1} \, , \\ w_{k,k} \longmapsto \left( \dfrac{2 w_{i-1,i} w_{k,k}}{w_{i,i}^2} + \dfrac{1}{w_{i,i}} \right) \dfrac{w_{i,i+1}}{2} \, , \end{array}\right. } \end{aligned}$$
    (2.2)

    with the usual conventions that \(w_{0,1}=1/2\) and \(w_{0,k}=0\) for all \(k>1\).

These new local maps are obtained by specializing (2.3)-(2.4) to the symmetric case. We then define \(\tau ^\mathrm{{up}}_{k,l}\), for all \(k\le l\), by replacing \(\mathsf {c} _{k,l}\) with \(\mathsf {c} ^\mathrm{{up}}_{k,l}\) and each \(\mathsf {d} ^{k,l}_{i,j}\) with \(\mathsf {d} ^{k,l,\mathrm {up}}_{i,j}\) in the definition (2.8) of \(\tau _{k,l}\). We can finally construct \(\mathsf {B} ^\mathrm{{up}}\) by setting

$$\begin{aligned} \mathsf {B} ^\mathrm{{up}} := \tau _{i_n,j_n}^\mathrm{{up}} \circ \tau _{i_{n-1},j_{n-1}}^\mathrm{{up}} \circ \dots \circ \tau _{i_1,j_1}^\mathrm{{up}} \, , \end{aligned}$$
(2.3)

where \(((i_1,j_1),\dots ,(i_n,j_n))\) is any sequence of distinct boxes such that, for each \(1\le k\le n\),

$$\begin{aligned} \lambda ^{(k)} := \{(i_1,j_1),\dots ,(i_k,j_k)\} \cup \{(j_1,i_1),\dots ,(j_k,i_k)\} \end{aligned}$$

is a Young diagram, and \(\lambda ^{(n)} = \lambda \).
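For \(n=2\) the construction of \(\mathsf {B} ^\mathrm{{up}}\) can be carried out by hand: \(\tau ^\mathrm{{up}}_{1,1}\) and \(\tau ^\mathrm{{up}}_{1,2}\) reduce to the ordinary \(\mathsf {c} \)-maps, while \(\tau ^\mathrm{{up}}_{2,2}\) applies \(\mathsf {d} ^{2,2,\mathrm {up}}_{1,1}\) from (2.2) and then \(\mathsf {c} ^\mathrm{{up}}_{2,2}\) from (2.1). The following short Python sketch (ours) carries this out and checks the output against the path formula (2.4) of Proposition 4.2 in the symmetrized environment (with \(w_{2,1}=w_{1,2}\)) and against identity (2.7):

```python
def burge_up_2x2(w11, w12, w22):
    """B^up on the upper part of a symmetric 2x2 array, via the modified
    local maps (boundary conventions w_{0,1} = 1/2, w_{0,k} = 0)."""
    t11 = (0.5 + 0.5) * w11           # c_{1,1}: (w_{0,1} + w_{1,0}) w_{1,1}
    t12 = (0.0 + t11) * w12           # c_{1,2}: (w_{0,2} + t_{1,1}) w_{1,2}
    # d^{2,2,up}_{1,1}, with 2 w_{0,1} = 1:
    new_t11 = 1.0 / (1.0 / t11 + 1.0 / (2 * 0.5 * w22))
    inter = (2 * 0.5 * w22 / t11 ** 2 + 1.0 / t11) * t12 / 2
    # c^up_{2,2}: w_{2,2} -> 2 w_{1,2} w_{2,2}
    t22 = 2 * t12 * inter
    return new_t11, t12, t22

w11, w12, w22 = 1.3, 0.7, 2.1
t11, t12, t22 = burge_up_2x2(w11, w12, w22)
# (2.4): t_{2,2} is the sum over the two north-east paths (2,1) -> (1,2)
# in the symmetrized array, i.e. w12*w11*w12 + w12*w22*w12
assert abs(t22 - w12 * w12 * (w11 + w22)) < 1e-12
# (2.7): 1/t_{1,1} = 1/w_{1,1} + 1/w_{2,2}
assert abs(1 / t11 - (1 / w11 + 1 / w22)) < 1e-12
```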

The properties stated in Propositions 4.2 and 4.3 automatically hold when the geometric Burge correspondence is restricted to symmetric arrays. On the other hand, the volume-preserving property does not follow immediately and is addressed in the next theorem.

Theorem 5.2

Let \(\varvec{w}\in {\mathbb {R}}^{\lambda }_{>0}\) be a symmetric array and let \(\varvec{t} := \mathsf {B} (\varvec{w})\). Then, the map

$$\begin{aligned} (\log w_{i,j})_{(i,j)\in \lambda , \, i\le j} \mapsto (\log t_{i,j})_{(i,j)\in \lambda , \, i\le j} \end{aligned}$$

has Jacobian \(\pm 1\).

Proof

Set \(\lambda ^\mathrm{{up}}:= \{(i,j)\in \lambda :i\le j\}\). As explained above, we have that \(\varvec{t}|_{\lambda ^\mathrm{{up}}} = \mathsf {B} ^\mathrm{{up}}(\varvec{w}|_{\lambda ^\mathrm{{up}}})\). Since \(\mathsf {B} ^\mathrm{{up}}\) can be defined by (2.3), it is a composition of \(\mathsf {c} ^\mathrm{{up}}_{i,j}\)’s and \(\mathsf {d} ^{k,l,\mathrm {up}}_{i,j}\)’s. Therefore, it suffices to prove that these modified local maps are volume preserving in log-log variables. If \(i<j\) and \(k<l\), the new local maps coincide with the old ones, which possess this property as shown in the proof of Theorem 4.5. If \(i=j\) and \(k=l\), the new local maps are given by (2.1) and (2.2): in this case, the check is entirely analogous to the one done for Theorem 4.5, so we omit it. \(\square \)

6 Polymer replicas and Whittaker functions

In this section we study a polymer model in a persymmetric environment, as discussed in Sect. 1.2. In particular, thanks to the direct connection between the point-to-point partition function in a persymmetric environment and the replica partition function (see (1.5)-(1.6)), we determine the distribution of the latter as a Whittaker measure.

Let us first introduce Whittaker functions. For any triangular array \(\varvec{z} = (z_{i,j})_{1\le j\le i\le n}\), define the energy of \(\varvec{z}\) to be the functional

$$\begin{aligned} {\mathcal {E}}(\varvec{z}) := \sum _{i=1}^{n-1} \sum _{j=1}^i \left( \frac{z_{i+1,j+1}}{z_{i,j}} + \frac{z_{i,j}}{z_{i+1,j}} \right) \, . \end{aligned}$$

Define also the (geometric) type of \(\varvec{z}\) to be the n-vector, denoted by \({{\,\mathrm{type}\,}}(\varvec{z})\), whose i-th component is the ratio between the product of the i-th row of \(\varvec{z}\) and the product of its \((i-1)\)-th row:

$$\begin{aligned} {{\,\mathrm{type}\,}}(\varvec{z})_i := \frac{\prod _{j=1}^i z_{i,j}}{\prod _{j=1}^{i-1} z_{i-1,j}} \, , \qquad \qquad 1\le i\le n \, . \end{aligned}$$
(6.1)

We then define the \({{\,\mathrm{GL}\,}}_n({\mathbb {R}})\)-Whittaker function with parameter \(\varvec{\alpha }=(\alpha _1,\dots ,\alpha _n)\in {\mathbb {C}}^n\) and argument \(\varvec{x}=(x_1,\dots ,x_n)\in {\mathbb {R}}_{>0}^n\) as

$$\begin{aligned} \Psi ^{n}_{\varvec{\alpha }}(\varvec{x}) := \int _{{\mathcal {T}}^n(\varvec{x})} \prod _{i=1}^n {{\,\mathrm{type}\,}}(\varvec{z})_i^{\alpha _i} \cdot {{\,\mathrm{\mathrm {e}}\,}}^{-{\mathcal {E}}(\varvec{z})} \prod _{\begin{array}{c} 1\le i <n \\ 1\le j\le i \end{array}} \frac{\mathop {}\!\mathrm {d}z_{i,j}}{z_{i,j}} \, , \end{aligned}$$
(6.2)

where \({\mathcal {T}}^n(\varvec{x})\) is the set of all triangular arrays \(\varvec{z} = (z_{i,j})_{1\le j\le i\le n}\) with positive entries and bottom row \((z_{n,1},\dots ,z_{n,n}) = (x_1,\dots ,x_n) = \varvec{x}\).
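For \(n=2\), definition (6.2) involves a single integration variable \(z = z_{1,1}\): the energy is \(x_2/z + z/x_1\) and the type is \((z,\, x_1x_2/z)\). A crude quadrature sketch (ours, purely illustrative) of this one-dimensional integral, which also illustrates the known invariance of Whittaker functions under permutations of the parameter \(\varvec{\alpha }\):

```python
import math

def psi2(a1, a2, x1, x2, zmax=60.0, m=100000):
    """GL_2 Whittaker function (6.2): integral over z = z_{1,1} of
    type_1^{a1} type_2^{a2} exp(-energy) dz/z, by a plain midpoint rule."""
    dz = zmax / m
    total = 0.0
    for k in range(m):
        z = (k + 0.5) * dz
        total += (z ** a1 * (x1 * x2 / z) ** a2
                  * math.exp(-x2 / z - z / x1)) / z * dz
    return total

# Whittaker functions are symmetric in the parameter vector:
v1 = psi2(0.8, 1.7, 1.0, 2.0)
v2 = psi2(1.7, 0.8, 1.0, 2.0)
assert v1 > 0
assert abs(v1 - v2) < 1e-4 * v1
```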

Recall that a random variable Y follows an inverse gamma distribution with parameters \(\alpha >0\) and \(\beta >0\) if

$$\begin{aligned} {\mathbb {P}}(Y\in \mathop {}\!\mathrm {d}y) = \frac{\beta ^{\alpha }}{\Gamma (\alpha )} y^{-\alpha } \exp \left( -\frac{\beta }{y}\right) \mathbb {1}_{\{y>0\}} \frac{\mathop {}\!\mathrm {d}y}{y} \, , \end{aligned}$$
(6.3)

in which case we write \(Y \sim {{\,\mathrm{invGamma}\,}}(\alpha ,\beta )\).
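Equivalently (a standard fact; the sketch below is ours), \(Y \sim {{\,\mathrm{invGamma}\,}}(\alpha ,\beta )\) exactly when 1/Y has the Gamma distribution with shape \(\alpha \) and rate \(\beta \), which yields both a sampler and a quick normalization check of (6.3):

```python
import math, random

alpha, beta = 2.5, 1.5

def inv_gamma_density(y):
    # density of invGamma(alpha, beta) with respect to dy, as in (6.3)
    return beta ** alpha / math.gamma(alpha) * y ** (-alpha - 1) * math.exp(-beta / y)

# Normalization check: substituting u = 1/y turns the integral of the
# density into the Gamma(shape alpha, rate beta) normalization.
m, umax = 100000, 50.0
du = umax / m
total = sum(inv_gamma_density(1.0 / u) / u ** 2 * du
            for u in ((k + 0.5) * du for k in range(m)))
assert abs(total - 1.0) < 1e-4

# Sampler: Y = 1/X with X ~ Gamma(shape alpha, scale 1/beta);
# E[Y] = beta/(alpha - 1) = 1 here, checked loosely by Monte Carlo.
random.seed(3)
samples = [1.0 / random.gammavariate(alpha, 1.0 / beta) for _ in range(200000)]
mean = sum(samples) / len(samples)
assert abs(mean - beta / (alpha - 1)) < 0.05
```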

Fix now parameters \(\varvec{\alpha } = (\alpha _1,\dots , \alpha _n)\in {\mathbb {R}}_{>0}^n\) and \(\beta \in {\mathbb {R}}_{>0}\). Consider a random \(n\times n\) symmetric matrix \(\varvec{W} = (W_{i,j})_{1\le i,j\le n}\) with entries \((W_{i,j})_{i\le j}\) independent and inverse gamma distributed as follows:

$$\begin{aligned} W_{i,j} \sim {\left\{ \begin{array}{ll} {{\,\mathrm{invGamma}\,}}(\alpha _i, \beta ) &{}\text {if } 1\le i=j\le n \, , \\ {{\,\mathrm{invGamma}\,}}(\alpha _i + \alpha _j,1) &{}\text {if } 1\le i<j\le n \, . \end{array}\right. } \end{aligned}$$

Namely, the joint distribution of the upper triangular entries of \(\varvec{W}\) is

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}(W_{i,j} \in \mathop {}\!\mathrm {d}w_{i,j}:i\le j) \\&\quad = \, \frac{1}{c_{\varvec{\alpha },\beta }} \left[ \prod _{i=1}^n w_{i,i}^{-\alpha _i} \prod _{i<j} w_{i,j}^{-\alpha _i - \alpha _j} \right] \exp \left\{ - \sum _{i=1}^n \frac{\beta }{w_{i,i}} -\sum _{i<j} \frac{1}{w_{i,j}} \right\} \\&\qquad \prod _{i\le j} \mathbb {1}_{\{w_{i,j}>0\}} \frac{\mathop {}\!\mathrm {d}w_{i,j}}{w_{i,j}} \, , \end{aligned} \end{aligned}$$
(6.4)

with normalization constant

$$\begin{aligned} c_{\varvec{\alpha },\beta } := \beta ^{-\sum _{i=1}^n \alpha _i} \prod _{i=1}^n \Gamma (\alpha _i) \prod _{i<j} \Gamma (\alpha _i + \alpha _j) \, . \end{aligned}$$
(6.5)

The main theorem of this section states that the diagonal entries of the image of \(\varvec{W}\) under the geometric Burge correspondence have a joint density proportional to a \({{\,\mathrm{GL}\,}}_n({\mathbb {R}})\)-Whittaker function (with an exponential prefactor).

Theorem 6.1

If \(\varvec{W}\) is an \(n\times n\) symmetric matrix distributed according to (6.4) and \(\varvec{T} := \mathsf {B} (\varvec{W})\), then

$$\begin{aligned} {\mathbb {P}}(T_{i,i} \in \mathop {}\!\mathrm {d}x_i :1\le i\le n) = \frac{1}{c_{\varvec{\alpha },\beta }} {{\,\mathrm{\mathrm {e}}\,}}^{-\beta /x_n} \Psi ^n_{-\varvec{\alpha }}(x_1,\dots ,x_n) \prod _{i=1}^n \mathbb {1}_{\{x_i>0\}} \frac{\mathop {}\!\mathrm {d}x_i}{x_i} \, , \end{aligned}$$
(6.6)

where the constant \(c_{\varvec{\alpha },\beta }\) is defined by (6.5).

Proof

Our strategy consists in computing the push-forward measure that the distribution of \(\varvec{W}\) induces on \(\varvec{T}\), using the properties of the geometric Burge correspondence obtained in Sects. 4 and 5.

Let \(\varvec{w}\in {\mathbb {R}}_{>0}^{n\times n}\) be a symmetric matrix; then, \(\varvec{t}:= \mathsf {B} (\varvec{w})\) is also symmetric by Proposition 5.1. Moreover, by (2.5), we have that

$$\begin{aligned} \prod _{j=1}^n w_{i,j} = \frac{P_{n-i}(\varvec{t})}{P_{n-i+1}(\varvec{t})} \, , \end{aligned}$$

where \(P_k(\varvec{t})\) is the product of the k-th diagonal of \(\varvec{t}\), as defined in (2.2). It follows that

$$\begin{aligned} \begin{aligned} \prod _{i=1}^n w_{i,i}^{-\alpha _i} \prod _{i<j} w_{i,j}^{-\alpha _i - \alpha _j} = \prod _{i,j=1}^n w_{i,j}^{-\alpha _i} = \prod _{i=1}^n \left( \prod _{j=1}^n w_{i,j} \right) ^{-\alpha _i} = \prod _{i=1}^n \left( \frac{P_{n-i}(\varvec{t})}{P_{n-i+1}(\varvec{t})} \right) ^{-\alpha _i} \, . \end{aligned} \end{aligned}$$

On the other hand, Proposition 4.3 implies that

$$\begin{aligned} \sum _{i=1}^n \frac{\beta }{w_{i,i}}&= \frac{\beta }{t_{1,1}} \, , \\ \sum _{i<j} \frac{1}{w_{i,j}}&= \frac{1}{2} \sum _{i\ne j} \frac{1}{w_{i,j}} = \frac{1}{2} \sum _{\begin{array}{c} 1\le i,j\le n \\ (i,j)\ne (1,1) \end{array}} \frac{t_{i-1,j} + t_{i,j-1}}{t_{i,j}}\\&= \sum _{1<i\le j\le n} \frac{t_{i-1,j}}{t_{i,j}} + \sum _{1\le i\le j<n} \frac{t_{i,j-1}}{t_{i,j}} \, . \end{aligned}$$

By Theorem 5.2 the map \((\log w_{i,j})_{i\le j} \mapsto (\log t_{i,j})_{i\le j}\) has Jacobian \(\pm 1\), hence the push-forward of (6.4) is

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}(T_{i,j} \in \mathop {}\!\mathrm {d}t_{i,j} :1\le i\le j\le n) = \frac{1}{c_{\varvec{\alpha },\beta }} \prod _{i=1}^n \left( \frac{P_{n-i}(\varvec{t})}{P_{n-i+1}(\varvec{t})} \right) ^{-\alpha _i} \\&\qquad \qquad \times \exp \left\{ - \left[ \frac{\beta }{t_{1,1}} + \sum _{1<i\le j\le n} \frac{t_{i-1,j}}{t_{i,j}} + \sum _{1\le i\le j<n} \frac{t_{i,j-1}}{t_{i,j}} \right] \right\} \prod _{i\le j} \mathbb {1}_{\{t_{i,j}>0\}} \frac{\mathop {}\!\mathrm {d}t_{i,j}}{t_{i,j}} \, . \end{aligned} \end{aligned}$$

To obtain the joint density of \((T_{1,1},\dots ,T_{n,n})\), one has to integrate out all \(t_{i,j}\)’s with \(i<j\) in the latter expression. If we then reindex the variables by setting \(t_{i,j} = z_{n-j+i,n-j+1}\) for all \(1\le i\le j\le n\), we obtain the right-hand side of (6.6), where the Whittaker function \(\Psi ^n_{-\varvec{\alpha }}\) is defined by (6.2). \(\square \)

Since the right-hand side of (6.6) is a probability density, we obtain as a consequence an explicit integral formula for Whittaker functions in terms of gamma functions.

Corollary 6.2

For \(\varvec{\alpha }\in {\mathbb {R}}_{>0}^n\) and \(\beta \in {\mathbb {R}}_{>0}\), we have that

$$\begin{aligned} \int _{{\mathbb {R}}_{>0}^n} {{\,\mathrm{\mathrm {e}}\,}}^{-\beta /x_n} \Psi ^n_{-\varvec{\alpha }}(x_1,\dots ,x_n) \prod _{i=1}^n \frac{\mathop {}\!\mathrm {d}x_i}{x_i} = \beta ^{-\sum _{i=1}^n \alpha _i} \prod _{i=1}^n \Gamma (\alpha _i) \prod _{1\le i<j\le n} \Gamma (\alpha _i+\alpha _j) \, . \end{aligned}$$

The latter can be seen as the analog of a Cauchy-Littlewood identity in our setting. It is equivalent to an integral identity for the Mellin transform of a Whittaker function, conjectured by Bump and Friedberg [19] and proved by Stade [43] – see [38, Sect. 7].

Let us now link Theorem 6.1 to the polymer models introduced in Sect. 1. Consider again the symmetric log-gamma random environment \(\varvec{W}\) of (6.4). For \(1\le k\le n\), define

$$\begin{aligned} Z^{*(k)}_{n,n} := \sum _{(\pi _1,\dots ,\pi _k) \in \Pi ^{*(k)}_{n,n}} \prod _{(i,j)\in \pi _1 \cup \cdots \cup \pi _k} W_{i,j} \, , \end{aligned}$$
(6.7)

where \(\Pi ^{*(k)}_{n,n}\) is the set of k-tuples of non-intersecting directed lattice paths in \({\mathbb {N}}^2\) starting at (n, 1), (n, 2), ..., \((n,k)\) and ending at \((1,n-k+1)\), \((1,n-k+2)\), ..., (1, n) respectively. In particular, \(Z^*_{n,n} = Z^{*(1)}_{n,n}\) is the dual point-to-point polymer partition function in a symmetric environment. By (2.3), each \(Z^{*(k)}_{n,n}\) can be expressed as the product of some diagonal entries of the image \(\varvec{T}\) of \(\varvec{W}\) under the geometric Burge correspondence:

$$\begin{aligned} Z^{*(k)}_{n,n} = T_{1,1} \cdots T_{k,k} \, , \qquad \qquad 1\le k\le n \, . \end{aligned}$$

Thus, the right-hand side of (6.6) is precisely the joint density of the random vector

$$\begin{aligned} \left( T_{1,1},T_{2,2},\dots ,T_{n,n}\right) = \left( Z^{*(1)}_{n,n}, \frac{Z^{*(2)}_{n,n}}{Z^{*(1)}_{n,n}}, \dots , \frac{Z^{*(n)}_{n,n}}{Z^{*(n-1)}_{n,n}}\right) \, . \end{aligned}$$
(6.8)

Obviously, the dual point-to-point polymer partition function in the symmetric environment \(\varvec{W}\) can be alternatively viewed as the point-to-point polymer partition function in the persymmetric environment obtained by reversing the columns (or the rows) of \(\varvec{W}\). Furthermore, the latter can also be reinterpreted as a replica partition function of two polymer paths constrained to coincide at the endpoint – see explanations in Sect. 1, around (1.5)-(1.6). In particular, let \({ Z^{\mathsf {repl} }_{n} }\) be the replica partition function (1.6) on the modified log-gamma environment \((W'_{i,j})_{i+j\le n+1}\) given by

$$\begin{aligned} \begin{aligned}&{\mathbb {P}}(W_{i,j}' \in \mathop {}\!\mathrm {d}w_{i,j}:i+ j\le n+1) = \frac{2^n}{c_{\varvec{\alpha },\beta }} \left[ \prod _{i=1}^n w_{i,n-i+1}^{-2 \alpha _{i}} \prod _{i+j\le n} w_{i,j}^{-\alpha _i - \alpha _{n-j+1}} \right] \\&\qquad \qquad \qquad \times \exp \left\{ - \sum _{i=1}^n \frac{\beta }{w^2_{i,n-i+1}} -\sum _{i+j\le n} \frac{1}{w_{i,j}} \right\} \prod _{i+j\le n+1} \mathbb {1}_{\{w_{i,j}>0\}} \frac{\mathop {}\!\mathrm {d}w_{i,j}}{w_{i,j}} \, , \end{aligned} \end{aligned}$$
(6.9)

with normalization constant \(c_{\varvec{\alpha },\beta }\) as in (6.5). It then follows that

$$\begin{aligned} { Z^{\mathsf {repl} }_{n} } \text { is equal in distribution to } Z^*_{n,n} \, . \end{aligned}$$

Thanks to Theorem 6.1 and (6.8), we thus arrive at the following theorem:

Theorem 6.3

The Laplace transform of the replica partition function \({ Z^{\mathsf {repl} }_{n} }\) defined in (1.6) on the log-gamma environment (6.9) is given by the integral formula

$$\begin{aligned} {\mathbb {E}}\left[ {{\,\mathrm{\mathrm {e}}\,}}^{-r { Z^{\mathsf {repl} }_{n} }} \right] = \frac{1}{c_{\varvec{\alpha },\beta }} \int _{{\mathbb {R}}_{>0}^n} {{\,\mathrm{\mathrm {e}}\,}}^{-rx_1 - \beta /x_n} \, \Psi ^n_{-\varvec{\alpha }}(x_1,\dots ,x_n) \prod _{i=1}^n \frac{\mathop {}\!\mathrm {d}x_i}{x_i} \, . \end{aligned}$$
(6.10)

We close by discussing a non-trivial identity in distribution, in the context of symmetrized polymers, between the dual and the usual partition functions, which have been analyzed in the present work and in [38] respectively. Let us consider a symmetric matrix \(\varvec{W}\) distributed as in (6.4) with \(\beta =1/2\). Define, for \(1\le k\le n\),

$$\begin{aligned} Z^{(k)}_{n,n} := \sum _{(\pi _1,\dots ,\pi _k) \in \Pi ^{(k)}_{n,n}} \prod _{(i,j)\in \pi _1 \cup \cdots \cup \pi _k} W_{i,j} \, , \end{aligned}$$
(6.11)

where \(\Pi ^{(k)}_{n,n}\) is the set of k-tuples of non-intersecting directed lattice paths in \({\mathbb {N}}^2\) starting at (1, 1), (1, 2), ..., (1, k) and ending at \((n,n-k+1)\), \((n,n-k+2)\), ..., \((n,n)\) respectively. In particular, \(Z_{n,n} = Z^{(1)}_{n,n}\) is the usual partition function from (1, 1) to \((n,n)\) on \(\varvec{W}\). It was shown in [38, Sect. 5] that the vector

$$\begin{aligned} \left( Z^{(1)}_{n,n}, \frac{Z^{(2)}_{n,n}}{Z^{(1)}_{n,n}}, \dots , \frac{Z^{(n)}_{n,n}}{Z^{(n-1)}_{n,n}}\right) \end{aligned}$$

has exactly the density given by the right-hand side of (6.6) for \(\beta =1/2\). In particular, for this specific symmetric environment, \(Z_{n,n}\) and \(Z^*_{n,n}\) turn out to be identically distributed.

When \(n=2\), this reduces to the following identity in law: if X, Y and Z are independent inverse gamma random variables with respective parameters a, b and \(a+b\), then the random variables \((X+Y)Z^2\) and XYZ have the same law. This can be seen as a consequence of Lukacs’ theorem [35, Sect. 1], as follows. Let us write \(U:=X^{-1}+Y^{-1}\) and \(V:=X^{-1} Y\). Since \(U^{-1}\) and Z are independent and both inverse gamma distributed with parameter \(a+b\), we have that \(U^{-2} Z\) and \(U^{-1} Z^2\) are equally distributed. Moreover, by Lukacs’ theorem, U and V are independent. It follows that \(XYZ = U^{-2} Z (1+V)^2 V^{-1}\) has the same law as \((X+Y)Z^2 = U^{-1} Z^2 (1+V)^2 V^{-1}\), as required.
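As a sanity check of this identity in law (our sketch), one can compare moments: for \(Y \sim {{\,\mathrm{invGamma}\,}}(\alpha ,1)\) and \(s<\alpha \), \({\mathbb {E}}[Y^s] = \Gamma (\alpha -s)/\Gamma (\alpha )\), so both \({\mathbb {E}}[XYZ]\) and \({\mathbb {E}}[(X+Y)Z^2]\), as well as the corresponding second moments, can be evaluated in closed form and compared:

```python
import math

def ig_moment(alpha, s):
    # E[Y^s] for Y ~ invGamma(alpha, 1), valid for s < alpha
    return math.gamma(alpha - s) / math.gamma(alpha)

for (a, b) in [(3.0, 4.0), (5.5, 2.5)]:
    c = a + b
    # first moments: E[XYZ] vs E[(X+Y)Z^2], by independence
    m1_left = ig_moment(a, 1) * ig_moment(b, 1) * ig_moment(c, 1)
    m1_right = (ig_moment(a, 1) + ig_moment(b, 1)) * ig_moment(c, 2)
    assert abs(m1_left - m1_right) < 1e-12 * m1_left
    # second moments: E[(XYZ)^2] vs E[((X+Y)Z^2)^2]
    m2_left = ig_moment(a, 2) * ig_moment(b, 2) * ig_moment(c, 2)
    m2_right = ((ig_moment(a, 2) + 2 * ig_moment(a, 1) * ig_moment(b, 1)
                 + ig_moment(b, 2)) * ig_moment(c, 4))
    assert abs(m2_left - m2_right) < 1e-12 * m2_left
```

(The parameter pairs above require \(a,b>2\) and \(a+b>4\) so that all the moments used are finite.)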