Journal of Algebraic Combinatorics, Volume 42, Issue 2, pp 555–603

# Refined Cauchy/Littlewood identities and six-vertex model partition functions: II. Proofs and new conjectures


## Abstract

We prove two identities of Hall–Littlewood polynomials, which appeared recently in Betea and Wheeler (2014). We also conjecture, and in some cases prove, new identities which relate infinite sums of symmetric polynomials and partition functions associated with symmetry classes of alternating sign matrices. These identities generalize those already found in Betea and Wheeler (2014), via the introduction of additional parameters. The left-hand side of each of our identities is a simple refinement of a relevant Cauchy or Littlewood identity. The right-hand side of each identity is (one of the two factors present in) the partition function of the six-vertex model on a relevant domain.

### Keywords

Cauchy and Littlewood identities · Symmetric functions · Alternating sign matrices · Six-vertex model

## 1 Introduction

In this paper, we continue our study of Cauchy- and Littlewood-type identities, initiated in [2], and their relationship with partition functions of the six-vertex model. The principal results studied in [2] were three infinite sum identities for Hall–Littlewood polynomials:
\begin{aligned}&\sum _{\lambda } \prod _{i=0}^{\infty } \prod _{j=1}^{m_i(\lambda )} (1-t^j) P_{\lambda }(x_1,\ldots ,x_n;t) P_{\lambda }(y_1,\ldots ,y_n;t) \nonumber \\&\quad = \frac{\prod _{i,j=1}^{n} (1-t x_i y_j)}{\prod _{1 \leqslant i<j \leqslant n} (x_i - x_j) (y_i - y_j)} \det _{1\leqslant i,j \leqslant n} \left[ \frac{(1-t)}{(1-x_i y_j)(1-t x_i y_j)} \right] , \end{aligned}
(1)
\begin{aligned}&\sum _{\begin{array}{c} \lambda \ \text {with} \\ \text {even columns} \end{array}} \quad \prod _{i=0}^{\infty }\ \prod _{j=2,4,6,\ldots }^{m_i(\lambda )} (1-t^{j-1}) P_{\lambda }(x_1,\ldots ,x_{2n};t) \nonumber \\&\quad = \prod _{1 \leqslant i<j \leqslant 2n} \frac{(1-t x_i x_j)}{(x_i-x_j)} \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \frac{(x_i-x_j)(1-t)}{(1-x_i x_j) (1-t x_i x_j)} \right] , \end{aligned}
(2)
\begin{aligned}&\sum _{\lambda } \prod _{i=0}^{\infty } \prod _{j=1}^{m_i(\lambda )} (1-t^j) P_{\lambda }(x_1,\ldots ,x_n;t) K_{\lambda }(y_1,\ldots ,y_n;t) \nonumber \\&\quad = \frac{\prod _{i,j=1}^{n} (1-t x_i y_j) (1-\frac{t x_i}{y_j}) }{\prod _{1\leqslant i<j \leqslant n} (x_i-x_j) (y_i-y_j) (1 - t x_i x_j) (1 - \frac{1}{y_i y_j}) } \nonumber \\&\quad \quad \times \det _{1\leqslant i,j \leqslant n} \left[ \frac{(1-t)}{(1- x_i y_j)(1-t x_i y_j)(1- \frac{x_i}{y_j})(1-\frac{t x_i}{y_j})} \right] , \end{aligned}
(3)
where $$P_{\lambda }$$ and $$K_{\lambda }$$ denote Hall–Littlewood polynomials of type $$A_n$$ [9] and $$BC_n$$ [15], respectively. In (1) and (3), the sum is taken over all partitions of maximal length $$n$$, while in (2), the sum is over all partitions of maximal length $$2n$$ whose Young diagrams have even-length columns. In all equations, $$m_i(\lambda )$$ is the multiplicity of the part $$i$$ in $$\lambda$$, where all partitions are padded with $$m_0(\lambda )$$ zeros to bring them to their maximal length.
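As a quick sanity check (ours, not part of the original argument), identity (1) can be verified directly at $$n=1$$. There $$P_{(k)}(x_1;t) = x_1^k$$, and every partition $$\lambda = (k)$$, including $$k=0$$ via the $$m_0(\lambda )$$ convention, carries the weight $$1-t$$, so the left-hand side is a geometric series:
\begin{aligned} \sum _{k \geqslant 0} (1-t)\, x_1^k y_1^k = \frac{1-t}{1-x_1 y_1} = (1-t x_1 y_1) \cdot \frac{1-t}{(1-x_1 y_1)(1-t x_1 y_1)}, \end{aligned}
which is precisely the right-hand side of (1): the prefactor $$(1-t x_1 y_1)$$ multiplying the $$1 \times 1$$ determinant.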

Equation (1) is due to Warnaar [17], based on earlier results of Kirillov and Noumi [4]. In [2], we exposed a combinatorial interpretation of (1): The left-hand side can be viewed as a generating series of path-weighted plane partitions [16], while the right-hand side is the partition function of the six-vertex model under domain wall boundary conditions [3, 6], and thus a generating series of alternating sign matrices (ASMs) [7, 8].

Equations (2) and (3), both conjectured in [2], are further examples of such a relationship. In both of these equations, the right-hand side is the partition function of the six-vertex model on a certain lattice (in (2), the underlying lattice has off-diagonal symmetry [8]; the partition function in (3) arises from reflecting domain wall boundary conditions [14]) and may be viewed as a multi-parameter generating series of a symmetry class of ASMs [off-diagonally symmetric ASMs in the case of (2); U-turn ASMs in the case of (3)]. Although we are able to view the left-hand side of (2) as a generating series of path-weighted symmetric plane partitions [2], for the moment there is no known combinatorial interpretation of the left-hand side of (3) in terms of plane partitions or other tableaux-related objects.

The goals of the present work are as follows. Firstly, we provide a new proof of (1) by applying the Izergin–Korepin technique [3, 6] to the left-hand side of the equation, before adapting this method to prove (2). We remain unable to prove (3) by such methods, due to the absence of a simple combinatorial (tableau) formula for the $$BC_n$$-symmetric Hall–Littlewood polynomials.

Secondly, we shall generalize all three identities by the introduction of additional parameters. It was already demonstrated in [17] that (1) may be refined by two extra parameters $$q$$ and $$u$$, with the original identity being recovered when $$q=0$$ and $$u=t$$. The introduction of the $$q$$ parameter elevates the participating symmetric functions to Macdonald polynomials, and the equation itself comes from acting on the Cauchy identity with a generating series of Macdonald difference operators [9] (where $$u$$ is the indeterminate of the generating series). We prove that even in the presence of the two extra parameters, the right-hand side of the identity remains a determinant (a fact which was not explicit in either [17] or [4]). In a similar vein, we find that it is possible to refine (2) by the introduction of the parameters $$q$$ and $$u$$. To round off, we conjecture a deformed version of (3) involving $$u$$ and four parameters $$t_0,t_1,t_2,t_3$$ which elevates it to the level of lifted Koornwinder polynomials [11].

Finally, we investigate the meaning of the deformation parameters thus introduced in the setting of the six-vertex model. Surprisingly, the inclusion of the indeterminate $$u$$ in our equations does not break the correspondence with partition functions of the six-vertex model: The $$u$$-deformed versions of (1)–(3) all lead to determinants/Pfaffians which have appeared in [8] in the context of further symmetry classes of ASMs. We will not comment on the role of $$q$$ in this scheme, since it appears to play only a trivial role.

The paper is organized as follows. In Sect. 2, we give proofs of two results: identity (1) (using a method independent of that of [17]) and (2) (conjectured in [2]). In Sect. 3, we discuss the generalization of (1) to Macdonald polynomials (obtained in [17]), and conjecture a companion generalization of (2) to this level. $$u$$-deformations of the Cauchy-, Littlewood- and $$BC$$-type Cauchy identities are discussed in Sects. 4, 5 and 6, and their connection with partition functions of ASM symmetry classes is exposed. The main result in Sect. 4 is that a $$u$$-deformed version of (1) is closely related to the partition function of half-turn symmetric ASMs (for a particular value of $$u$$). The main result in Sect. 5 is Theorem 7, a $$u$$-generalization of Eq. (2). In this case, for an appropriate value of $$u$$, we obtain a close connection with the partition function of off-diagonally/off-anti-diagonally symmetric ASMs. In Sect. 6, we conjecture a $$u$$-generalization of (3) (Conjecture 2). We prove a simpler, companion identity involving symplectic Schur polynomials (Theorem 9) but are unable to prove the conjecture (due to the lack of a suitable branching rule for the lifted Koornwinder polynomials which participate). The conjecture has been verified for small partitions using Mathematica and Sage. Once again, a certain value of $$u$$ leads to a correspondence with a six-vertex model partition function (in this case, the partition function of double U-turn ASMs). Finally, following Rains [11], in the Appendix, we present a few results on $$BC$$-type interpolation and Koornwinder polynomials (and their symmetric function analogs) that we use.

Throughout the paper, $$\bar{x} := \frac{1}{x}$$. An $$n$$-tuple of variables $$(x_1,\ldots ,x_n)$$ will sometimes be denoted $$\vec {x}_n$$. We reserve letters $$\lambda , \mu , \ldots$$ for partitions. A partition $$\lambda$$ is either the empty partition ($$0$$) or a sequence of strictly positive integers listed in weakly decreasing order: $$\lambda _1 \geqslant \lambda _2 \geqslant \cdots \geqslant \lambda _k > 0$$. We call each $$\lambda _i$$ a part and $$\ell (\lambda ):=k$$ the length (number of nonzero parts) of $$\lambda$$. If all parts of $$\lambda$$ are even, we call the partition even. $$m_i(\lambda )$$ stands for the number of parts in $$\lambda$$ equal to $$i$$. If for some prespecified $$n$$ we have $$\ell (\lambda ) \leqslant n$$, we abuse notation and define $$m_0(\lambda ) = n-\ell (\lambda )$$ to be the number of zeros we need to append to $$\lambda$$ to get a vector of length $$n$$. Moreover, we call $$|\lambda |:=\sum _{i=1}^{\ell (\lambda )} \lambda _i$$ the weight of the partition. For any $$\lambda$$, we have a conjugate partition $$\lambda '$$ whose parts are defined as $$\lambda '_i := |\{j: \lambda _j \geqslant i\}|$$. We finally define the notion of interlacing partitions. Let $$\lambda$$ and $$\mu$$ be two partitions with $$|\lambda | \geqslant |\mu |$$. They are said to be interlacing, and we write $$\lambda \succ \mu$$, if and only if
\begin{aligned} \lambda _1 \geqslant \mu _1 \geqslant \lambda _2 \geqslant \mu _2 \geqslant \lambda _3 \geqslant \cdots \end{aligned}
In the language of [9], the interlacing property is equivalent to saying that the skew diagram $$\lambda -\mu$$ forms a horizontal strip, meaning that $$\lambda '_i-\mu '_i \leqslant 1$$ for all $$i \geqslant 1$$.
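The equivalence between interlacing and the horizontal-strip condition is easy to check by machine. The following sketch (our own helper names, partitions encoded as tuples of positive parts) compares the two definitions directly:

```python
def conjugate(lam):
    """Conjugate partition: lam'_i = #{j : lam_j >= i}."""
    width = lam[0] if lam else 0
    return tuple(sum(1 for p in lam if p >= i) for i in range(1, width + 1))

def interlaces(lam, mu):
    """Direct check of the chain lam_1 >= mu_1 >= lam_2 >= mu_2 >= ..."""
    n = max(len(lam), len(mu)) + 1
    l = list(lam) + [0] * (n - len(lam))
    m = list(mu) + [0] * (n - len(mu))
    return all(l[i] >= m[i] >= l[i + 1] for i in range(n - 1))

def horizontal_strip(lam, mu):
    """mu contained in lam, with lam'_i - mu'_i <= 1 in every column i."""
    lc, mc = conjugate(lam), conjugate(mu)
    if len(mc) > len(lc):
        return False
    mc = mc + (0,) * (len(lc) - len(mc))
    return all(0 <= lc[i] - mc[i] <= 1 for i in range(len(lc)))

# e.g. (3,1)/(2,1) is a horizontal strip, since 3 >= 2 >= 1 >= 1
assert interlaces((3, 1), (2, 1)) and horizontal_strip((3, 1), (2, 1))
```

An exhaustive comparison over all pairs of partitions in a small box confirms that the two conditions agree.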

## 2 Proofs

The primary goal of this section is to prove Eq. (2), effectively by using the Izergin–Korepin technique familiar from quantum integrable models. As a warm-up, we begin by providing a new proof of (1) along such lines. This approach to proving (1) and (2) may not be the most elegant (indeed, in the case of (1), there is a much simpler proof using Macdonald difference operators—see Sect. 3), but it is powerful since it only assumes two standard properties of Hall–Littlewood polynomials: their branching rule and a Pieri identity.

### 2.1 Branching rule for Hall–Littlewood polynomials

The branching rule allows a Hall–Littlewood polynomial $$P_{\lambda }(x_1,\ldots ,x_n;t)$$ to be written as a sum over Hall–Littlewood polynomials $$P_{\mu }(x_1,\ldots ,x_{n-1};t)$$ in a smaller alphabet. From the definition of skew Hall–Littlewood polynomials (see Sect. 5, Chapter III of [9]), one has
\begin{aligned} P_{\lambda }(x_1,\ldots ,x_n;t) = \sum _{\mu } P_{\lambda /\mu }(x_n;t) P_{\mu }(x_1,\ldots ,x_{n-1};t). \end{aligned}
Since the skew Hall–Littlewood polynomial $$P_{\lambda /\mu }(x_n;t)$$ in a single variable satisfies
\begin{aligned} P_{\lambda /\mu }(x_n;t) = \psi _{\lambda /\mu }(t) x_n^{|\lambda -\mu |}, \end{aligned}
the branching rule can be expressed as
\begin{aligned} P_{\lambda }(x_1,\ldots ,x_n;t) = \sum _{\mu } \psi _{\lambda /\mu }(t) x_n^{|\lambda -\mu |} P_{\mu }(x_1,\ldots ,x_{n-1};t), \end{aligned}
(4)
where the function $$\psi _{\lambda / \mu }(t)$$ is given by
\begin{aligned} \psi _{\lambda /\mu }(t) = \delta _{\lambda \succ \mu } \left( \prod _{\begin{array}{c} i \geqslant 1 \\ m_i(\mu ) = m_i(\lambda )+1 \end{array}} (1-t^{m_i(\mu )}) \right) \end{aligned}
with $$\psi _{\lambda /\mu }(t) = 0$$ unless $$\lambda \succ \mu$$. In the sequel we will often find it convenient to rephrase (4) in terms of horizontal strips, by writing
\begin{aligned} P_{\lambda }(x_1,\ldots ,x_n;t) = \sum _{\mu :\mu ' = \lambda ' - \epsilon } \prod _{\begin{array}{c} i \geqslant 1 \\ \epsilon _i=0 \\ \epsilon _{i+1}=1 \end{array}} (1-t^{m_i(\mu )}) x_n^{|\lambda -\mu |} P_{\mu }(x_1,\ldots ,x_{n-1};t), \end{aligned}
(5)
where the notation $$\mu ' = \lambda ' - \epsilon$$ means that $$\mu '_i = \lambda '_i - \epsilon _i$$ for all $$i \geqslant 1$$, for some $$\epsilon _i \in \{0,1\}$$.
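The branching rule (4) yields a straightforward (if inefficient) recursive algorithm for computing Hall–Littlewood polynomials, convenient for checking small cases of the identities in this paper. Below is a minimal sketch in Python/sympy (the names `psi` and `hall_littlewood` are ours, not from any library); partitions $$\mu$$ with $$\lambda \succ \mu$$ are enumerated via $$\lambda _{i+1} \leqslant \mu _i \leqslant \lambda _i$$:

```python
from itertools import product
import sympy as sp

t = sp.symbols('t')

def mult(lam, i):
    """Multiplicity m_i(lam) of the part i."""
    return sum(1 for p in lam if p == i)

def psi(lam, mu):
    """Branching coefficient psi_{lam/mu}(t); interlacing is guaranteed
    by the enumeration in hall_littlewood below."""
    coeff = sp.Integer(1)
    for i in range(1, (lam[0] if lam else 0) + 1):
        if mult(mu, i) == mult(lam, i) + 1:
            coeff *= 1 - t**mult(mu, i)
    return coeff

def hall_littlewood(lam, xs):
    """P_lam(xs; t), computed recursively via the branching rule (4)."""
    lam = tuple(p for p in lam if p > 0)
    if not xs:
        return sp.Integer(1) if not lam else sp.Integer(0)
    if len(lam) > len(xs):
        return sp.Integer(0)
    xn, rest = xs[-1], xs[:-1]
    # mu interlaced by lam: lam_{i+1} <= mu_i <= lam_i (automatically a partition)
    bounds = [range((lam[i + 1] if i + 1 < len(lam) else 0), lam[i] + 1)
              for i in range(len(lam))]
    return sp.expand(sum(
        psi(lam, tuple(p for p in mu if p > 0))
        * xn**(sum(lam) - sum(mu))
        * hall_littlewood(mu, rest)
        for mu in product(*bounds)))

x1, x2 = sp.symbols('x1 x2')
```

For example, `hall_littlewood((2,), (x1, x2))` reproduces the classical $$P_{(2)}(x_1,x_2;t) = x_1^2 + (1-t)x_1 x_2 + x_2^2$$.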

### 2.2 A Pieri identity for Hall–Littlewood polynomials

Pieri rules allow one to compute the product of a fundamental symmetric function (such as a complete or an elementary symmetric function) and a more advanced symmetric function (such as a Schur, Hall–Littlewood, or Macdonald polynomial). Several types of Pieri rules exist for Hall–Littlewood polynomials, but in this section, we will only make use of the identity
\begin{aligned} e_k(x_1,\ldots ,x_n) P_{\mu }(x_1,\ldots ,x_n;t) = \sum _{\lambda :|\lambda - \mu | = k} \psi '_{\lambda /\mu }(t) P_{\lambda }(x_1,\ldots ,x_n;t), \end{aligned}
(6)
where $$e_k(x_1,\ldots ,x_n)$$ is an elementary symmetric function (see Sect. 2, Chapter I of [9]) and $$\psi '_{\lambda /\mu }(t)$$ is given by
\begin{aligned} \psi '_{\lambda /\mu }(t) = \prod _{i = 1}^{\infty } \genfrac[]{0.0pt}{}{\lambda '_i - \lambda '_{i+1}}{\lambda '_i - \mu '_i}_t = \prod _{i = 1}^{\infty } \genfrac[]{0.0pt}{}{m_i(\lambda )}{\lambda '_i - \mu '_i}_t \end{aligned}
with the $$t$$-binomial coefficient being defined as
\begin{aligned} \genfrac[]{0.0pt}{}{a}{b}_t = \frac{(1-t) \cdots (1-t^a)}{(1-t) \cdots (1-t^b) \cdot (1-t) \cdots (1-t^{a-b})}, \quad \forall \ 0 \leqslant b \leqslant a, \quad a,b \in \mathbb {Z}, \end{aligned}
and $$\genfrac[]{0.0pt}{}{a}{b}_t = 0$$ otherwise. We remark that the sum in (6) can be considered to be taken over $$\lambda$$ such that $$\lambda - \mu$$ is a vertical strip (or equivalently, such that $$\lambda ' \succ \mu '$$), since the coefficients $$\psi '_{\lambda /\mu }(t)$$ vanish when this is not the case.
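As a small consistency check of (6) (our own sketch; the two-variable Hall–Littlewood polynomials below are hard-coded from their classical closed forms), one can verify $$e_1 \cdot P_{(1)} = \psi '_{(2)/(1)} P_{(2)} + \psi '_{(1,1)/(1)} P_{(1,1)}$$ in two variables:

```python
import sympy as sp

t, x1, x2 = sp.symbols('t x1 x2')

def t_binomial(a, b):
    """The t-binomial [a, b]_t defined above; zero unless 0 <= b <= a."""
    if not 0 <= b <= a:
        return sp.Integer(0)
    f = lambda k: sp.prod([1 - t**j for j in range(1, k + 1)])
    return sp.cancel(f(a) / (f(b) * f(a - b)))

def conjugate(lam):
    width = lam[0] if lam else 0
    return tuple(sum(1 for p in lam if p >= i) for i in range(1, width + 1))

def psi_prime(lam, mu):
    """psi'_{lam/mu}(t) = prod_i [lam'_i - lam'_{i+1}, lam'_i - mu'_i]_t."""
    lc, mc = conjugate(lam), conjugate(mu)
    if len(mc) > len(lc):
        return sp.Integer(0)
    mc = mc + (0,) * (len(lc) - len(mc))
    res = sp.Integer(1)
    for i in range(len(lc)):
        top = lc[i] - (lc[i + 1] if i + 1 < len(lc) else 0)
        res *= t_binomial(top, lc[i] - mc[i])
    return res

# classical closed forms in two variables
P1, P2, P11 = x1 + x2, x1**2 + (1 - t)*x1*x2 + x2**2, x1*x2
e1 = x1 + x2

lhs = sp.expand(e1 * P1)
rhs = sp.expand(psi_prime((2,), (1,)) * P2 + psi_prime((1, 1), (1,)) * P11)
```

Here $$\psi '_{(2)/(1)} = 1$$ and $$\psi '_{(1,1)/(1)} = 1+t$$, and both sides equal $$(x_1+x_2)^2$$.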

### 2.3 Proof of Equation (1)

In this subsection, we prove Theorem 1, which is an alternative statement of Eq. (1). Our strategy is to show that the left-hand side of (1) satisfies four conditions, which are obvious properties of the right-hand side (they are the usual four properties in the Izergin–Korepin approach to calculating the domain wall partition function). Since these conditions are uniquely determining, it follows that the two sides of (1) must be equal.

### Theorem 1

Define the function
\begin{aligned} \mathcal {F}_n(x_1,\ldots ,x_n) = \sum _{\lambda } w_{\lambda }(n,t) P_{\lambda }(x_1,\ldots ,x_n;t) P_{\lambda }(y_1,\ldots ,y_n;t), \end{aligned}
(7)
where for convenience, we have set
\begin{aligned} w_{\lambda }(n,t) = \prod _{i=0}^{\infty } \prod _{j=1}^{m_i(\lambda )} (1-t^j), \end{aligned}
(8)
and where the dependence on the variables $$\{y_1,\ldots ,y_n\}$$ and $$t$$ has been intentionally suppressed. $$\mathcal {F}_n(x_1,\ldots ,x_n)$$ satisfies the following properties:
1.

$$\mathcal {F}_n(x_1,\ldots ,x_n)$$ is symmetric in $$\{x_1,\ldots ,x_n\}$$.

2.

The renormalized function $$\prod _{i,j=1}^{n} (1-x_i y_j) \mathcal {F}_n(x_1,\ldots ,x_n)$$ is a polynomial in $$x_n$$ of degree $$n-1$$.

3.
Setting $$x_n = 1/(t y_n)$$, one obtains the recursion
\begin{aligned} \mathcal {F}_n \Big |_{x_n = 1/(t y_n)} = -t^n \mathcal {F}_{n-1}(x_1,\ldots ,x_{n-1}). \end{aligned}
4.

$$\mathcal {F}_1(x_1) = (1-t)/(1-x_1 y_1)$$.

Since $$\mathcal {F}_n(x_1,\ldots ,x_n)$$ is a sum of Hall–Littlewood polynomials, each being symmetric in $$\{x_1,\ldots ,x_n\}$$, property 1 is immediate. The remaining properties 2–4 will be proved in Sects. 2.3.2–2.3.4, after we make a preliminary observation about the function $$\mathcal {F}_n(x_1,\ldots ,x_n)$$ in Sect. 2.3.1.

#### 2.3.1 Alternative form for $$\mathcal {F}_n(x_1,\ldots ,x_n)$$

Let $$\lambda$$ be a length $$n$$ partition and denote by $$\lambda ^{*}$$ the partition obtained by removing the entire first column from the Young diagram of $$\lambda$$, i.e., $$\lambda ^{*} = (\lambda _1-1,\ldots ,\lambda _n-1)$$. Then, one has the following identity between Hall–Littlewood polynomials:
\begin{aligned} P_{\lambda }(x_1,\ldots ,x_n;t) = (x_1 \cdots x_n) P_{\lambda ^{*}}(x_1,\ldots ,x_n;t). \end{aligned}
(9)
Since the function $$w_{\lambda }(n,t)$$ is effectively the same as the function $$b_{\lambda }(t) := \prod _{i=1}^{\infty } \prod _{j=1}^{m_i(\lambda )} (1-t^j)$$ (except that it treats parts of size zero as though they were of nonzero size), by using (9) it follows immediately that
\begin{aligned}&(x_1 \cdots x_n) (y_1 \cdots y_n) \mathcal {F}_n(x_1,\ldots ,x_n) \!= \sum _{\lambda : \ell (\lambda ) = n} b_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_n;t) P_{\lambda }(y_1,\ldots ,y_n;t). \end{aligned}
(10)
This alternative way of writing $$\mathcal {F}_n(x_1,\ldots ,x_n)$$ will prove to be helpful in establishing the polynomiality property 2 and the recursive property 3.
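As a simple illustration of (9) (ours, not from the paper), take $$n=2$$ and $$\lambda =(2,1)$$, so that $$\lambda ^{*}=(1,0)$$. Then
\begin{aligned} P_{(2,1)}(x_1,x_2;t) = x_1 x_2 \, P_{(1)}(x_1,x_2;t) = x_1 x_2 (x_1+x_2), \end{aligned}
in agreement with the fact that $$P_{(2,1)}$$ reduces to the monomial symmetric function $$m_{(2,1)} = x_1^2 x_2 + x_1 x_2^2$$ in two variables.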

#### 2.3.2 Polynomiality

We begin by considering the Cauchy identity for Hall–Littlewood polynomials (see Sect. 4, Chapter III of [9]). While it is standard to write this identity so that the right-hand side is a rational function in $$\{x_1,\ldots ,x_m\}$$ and $$\{y_1,\ldots ,y_n\}$$, here we adopt a polynomial normalization:
\begin{aligned} \prod _{i=1}^{m} \prod _{j=1}^{n} (1-x_i y_j) \sum _{\lambda } b_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_m;t) P_{\lambda }(y_1,\ldots ,y_n;t) \!=\! \prod _{i=1}^{m} \prod _{j=1}^{n} (1-t x_i y_j). \end{aligned}
(11)
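A quick single-variable sanity check of (11): at $$m=n=1$$, one has $$b_{(k)}(t) = 1-t$$ for $$k \geqslant 1$$ and $$P_{(k)}(x;t) = x^k$$, so truncating the sum at degree $$N$$ should reproduce $$1-txy$$ up to an error of order $$(xy)^{N+1}$$. In sympy (a sketch, with our own truncation level):

```python
import sympy as sp

t, x, y = sp.symbols('t x y')
N = 8

# truncated left-hand side of (11) with m = n = 1
partial_sum = 1 + sum((1 - t) * (x*y)**k for k in range(1, N + 1))
lhs = sp.expand((1 - x*y) * partial_sum)

# agrees with 1 - t*x*y up to the truncation error -(1 - t)*(x*y)**(N + 1)
error = sp.expand(lhs - (1 - t*x*y))
```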
Thanks to this identity, we deduce that the sum on the left-hand side is in fact a polynomial in $$x_m$$ of degree $$n$$. To make full use of this fact, we now seek to rearrange the left-hand side so that the dependence on $$x_m$$ is fully extracted. By applying both the branching rule (4) and the Pieri identity (6), we find that
\begin{aligned}&\prod _{i=1}^{m} \prod _{j=1}^{n} (1-x_i y_j) \sum _{\lambda } b_{\lambda }(t) P_{\lambda }(\vec {x}_m;t) P_{\lambda }(\vec {y}_n;t) \\&\quad = \prod _{i=1}^{m-1} \prod _{j=1}^{n} (1-x_i y_j) \sum _{k=0}^{n} e_{k}(\vec {y}_n) (-x_m)^k \\&\quad \quad \times \, \sum _{\lambda } \ \sum _{\mu } b_{\lambda }(t) \psi _{\lambda /\mu }(t) x_m^{|\lambda -\mu |} P_{\mu }(\vec {x}_{m-1};t) P_{\lambda }(\vec {y}_n;t) \\&\quad = \prod _{i=1}^{m-1} \prod _{j=1}^{n} (1-x_i y_j) \sum _{\lambda } \ \sum _{\mu } \ \sum _{\nu } b_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\lambda }(t) (-1)^{|\nu -\lambda |} x_m^{|\nu -\mu |}\\&\quad \quad \times \, P_{\mu }(\vec {x}_{m-1};t) P_{\nu }(\vec {y}_n;t), \end{aligned}
where we have used the generating series $$\prod _{i=1}^{n} (1+y_i z) = \sum _{k=0}^{n} e_k(y_1,\ldots ,y_n) z^k$$ for the elementary symmetric polynomials to produce the first equality. From the linear independence of the Hall–Littlewood polynomials, the fact that the previous expression is polynomial in $$x_m$$ of degree $$n$$ means that
\begin{aligned} \sum _{\lambda } (-1)^{|\lambda |} b_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\lambda }(t) = 0, \quad \text {whenever}\ |\nu - \mu | > n, \end{aligned}
(12)
for all partitions $$\mu$$ with $$\ell (\mu ) \leqslant m-1$$ and $$\nu$$ with $$\ell (\nu ) \leqslant n$$.

Of course the value of $$m$$ is arbitrary, so one can state (12) with no constraint imposed on $$\mu$$.

### Remark 1

Equation (12) is a special case of the following more general formula4:
\begin{aligned} \sum _{\lambda :\mu \subseteq \lambda \subseteq \nu } (-1)^{|\lambda |} b_{\lambda }(t) \psi _{\lambda / \mu }(t) \psi '_{\nu / \lambda }(t) = (-1)^{|\mu |} t^{|\nu -\mu |} b_{\mu }(t) \psi '_{\nu / \mu } (t). \end{aligned}
(13)
Indeed, if $$\ell (\nu ) \leqslant n$$ then when $$|\nu - \mu | > n$$ it is not possible for $$\nu - \mu$$ to be a vertical strip, meaning that the right-hand side of (13) vanishes. Equation (13) follows from the (multivariate) $$q$$-Pfaff–Saalschütz summation formula of Rains for Macdonald polynomials (Corollary 4.9 of [11], with $$P$$ replaced by $$Q$$) with (in the notation of that paper):
\begin{aligned} b=a/q,\ \ c=bt,\ \ a \rightarrow 0,\ \ q \rightarrow 0. \end{aligned}
The limits are taken in the prescribed order after making the substitutions and using the homogeneity of Macdonald polynomials to cancel all powers of $$a$$ and $$q$$ appearing. For details about the simplifications that occur to obtain (13), see [18].
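Identity (13) is also easy to test symbolically. The following self-contained sketch (our own helper names, sympy for the algebra) implements $$b_{\lambda }$$, $$\psi$$ and $$\psi '$$ from the formulas above and verifies (13) for small pairs $$\mu \subseteq \nu$$:

```python
from itertools import product
import sympy as sp

t = sp.symbols('t')

def conjugate(lam):
    width = lam[0] if lam else 0
    return tuple(sum(1 for p in lam if p >= i) for i in range(1, width + 1))

def mult(lam, i):
    return sum(1 for p in lam if p == i)

def b(lam):
    return sp.prod([1 - t**j for i in set(lam) for j in range(1, mult(lam, i) + 1)])

def t_binomial(a, k):
    if not 0 <= k <= a:
        return sp.Integer(0)
    f = lambda m: sp.prod([1 - t**j for j in range(1, m + 1)])
    return sp.cancel(f(a) / (f(k) * f(a - k)))

def psi(lam, mu):
    # psi_{lam/mu}(t); zero unless lam/mu is a horizontal strip
    lc, mc = conjugate(lam), conjugate(mu)
    if len(mc) > len(lc):
        return sp.Integer(0)
    mc = mc + (0,) * (len(lc) - len(mc))
    if any(not 0 <= lc[i] - mc[i] <= 1 for i in range(len(lc))):
        return sp.Integer(0)
    return sp.prod([1 - t**mult(mu, i)
                    for i in range(1, (lam[0] if lam else 0) + 1)
                    if mult(mu, i) == mult(lam, i) + 1])

def psi_prime(lam, mu):
    lc, mc = conjugate(lam), conjugate(mu)
    if len(mc) > len(lc):
        return sp.Integer(0)
    mc = mc + (0,) * (len(lc) - len(mc))
    return sp.prod([t_binomial(lc[i] - (lc[i + 1] if i + 1 < len(lc) else 0),
                               lc[i] - mc[i]) for i in range(len(lc))])

def check_13(mu, nu):
    # enumerate all partitions lam contained in nu; terms with lam not
    # containing mu vanish via psi
    lams = {tuple(p for p in lam if p > 0)
            for lam in product(*[range(nu[i] + 1) for i in range(len(nu))])
            if all(lam[i] >= lam[i + 1] for i in range(len(nu) - 1))}
    lhs = sum((-1)**sum(lam) * b(lam) * psi(lam, mu) * psi_prime(nu, lam)
              for lam in lams)
    rhs = (-1)**sum(mu) * t**(sum(nu) - sum(mu)) * b(mu) * psi_prime(nu, mu)
    return sp.simplify(lhs - rhs) == 0
```

For instance, for $$\mu = (1)$$ and $$\nu = (2,1)$$ both sides of (13) equal $$-t^2(1-t)$$.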
Returning to the proof of the polynomiality property 2, using the identity (10), it is sufficient to show that
\begin{aligned} \prod _{i,j=1}^{n} (1-x_i y_j) \sum _{\lambda : \ell (\lambda ) = n} b_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_n;t) P_{\lambda }(y_1,\ldots ,y_n;t) \end{aligned}
is a polynomial in $$x_n$$ of degree $$n$$. The similarity between this sum and the sum appearing on the left-hand side of the Cauchy identity (11) is apparent: Indeed, the only difference is the constraint $$\ell (\lambda )=n$$ and the fact that we now consider the case $$m=n$$. Hence, by following the same steps as those outlined above (modulo length constraints which are now imposed on the partitions being summed over), we find that property 2 is equivalent to proving that
\begin{aligned} \sum _{\lambda : \ell (\lambda ) = n} (-1)^{|\lambda |} b_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\lambda }(t) = 0, \quad \text {whenever}\ |\nu - \mu | > n, \end{aligned}
(14)
for all partitions $$\mu$$, $$\nu$$ with $$\ell (\mu ) = n-1$$ and $$\ell (\nu ) = n$$.
Let us define the sums
\begin{aligned}&\mathcal {S}_{\leqslant n}(\mu ,\nu ) = \sum _{\lambda : \ell (\lambda ) \leqslant n} (-1)^{|\lambda |} b_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\lambda }(t), \\&\mathcal {S}_{=n}(\mu ,\nu ) = \sum _{\lambda : \ell (\lambda ) = n} (-1)^{|\lambda |} b_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\lambda }(t), \end{aligned}
where we fix partitions $$\mu$$, $$\nu$$ that satisfy $$\ell (\mu )=n-1$$, $$\ell (\nu )=n$$, and $$|\nu -\mu | > n$$. We can clearly write
\begin{aligned} \mathcal {S}_{=n}(\mu ,\nu ) = \mathcal {S}_{\leqslant n}(\mu ,\nu ) - \mathcal {S}_{\leqslant n-1}(\mu ,\nu ), \end{aligned}
(15)
where we already know that $$\mathcal {S}_{\leqslant n}(\mu ,\nu ) = 0$$ using equation (12). It remains only to show that
\begin{aligned} \mathcal {S}_{\leqslant n-1}(\mu ,\nu ) = \sum _{\lambda : \ell (\lambda ) \leqslant n-1} (-1)^{|\lambda |} b_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\lambda }(t) \end{aligned}
(16)
vanishes. If the final part of $$\nu$$ satisfies $$\nu _n > 1$$, then (16) is zero (since $$\nu -\lambda$$ will never be a vertical strip). Hence, we restrict our attention to the case $$\nu = \tilde{\nu } \cup 1$$, where $$\ell (\tilde{\nu }) = n-1$$. Furthermore, since $$\ell (\mu ) = n-1$$, the only partitions $$\lambda$$ which give a nonzero contribution are those such that $$\ell (\lambda ) = n-1$$ (otherwise $$\lambda - \mu$$ is not a horizontal strip). Hence, all nonzero $$\psi '_{\nu /\lambda }(t)$$ in (16) satisfy
\begin{aligned} \psi '_{\nu /\lambda }(t) = \genfrac[]{0.0pt}{}{m_1(\nu )}{1}_t \psi '_{\tilde{\nu }/\lambda }(t), \end{aligned}
and we obtain
\begin{aligned} \mathcal {S}_{\leqslant n-1}(\mu ,\nu ) = \genfrac[]{0.0pt}{}{m_1(\nu )}{1}_t \sum _{\lambda : \ell (\lambda ) \leqslant n-1} (-1)^{|\lambda |} b_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\tilde{\nu }/\lambda }(t). \end{aligned}
But this final sum is zero, using (12) (since $$|\tilde{\nu } - \mu | > n-1$$). So everything on the right-hand side of (15) is zero, which proves (14).
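The vanishing just established can be confirmed symbolically on the smallest nontrivial case $$n=2$$, $$\mu =(1)$$, $$\nu =(3,1)$$ (so that $$|\nu -\mu | = 3 > n$$). The sketch below (our own helper names, sympy) sums over the length-2 partitions $$\lambda \subseteq \nu$$, all other $$\lambda$$ contributing zero via $$\psi$$ or $$\psi '$$:

```python
import sympy as sp

t = sp.symbols('t')

def conjugate(lam):
    width = lam[0] if lam else 0
    return tuple(sum(1 for p in lam if p >= i) for i in range(1, width + 1))

def mult(lam, i):
    return sum(1 for p in lam if p == i)

def b(lam):
    return sp.prod([1 - t**j for i in set(lam) for j in range(1, mult(lam, i) + 1)])

def t_binomial(a, k):
    if not 0 <= k <= a:
        return sp.Integer(0)
    f = lambda m: sp.prod([1 - t**j for j in range(1, m + 1)])
    return sp.cancel(f(a) / (f(k) * f(a - k)))

def psi(lam, mu):
    # psi_{lam/mu}(t); zero unless lam/mu is a horizontal strip
    lc, mc = conjugate(lam), conjugate(mu)
    if len(mc) > len(lc):
        return sp.Integer(0)
    mc = mc + (0,) * (len(lc) - len(mc))
    if any(not 0 <= lc[i] - mc[i] <= 1 for i in range(len(lc))):
        return sp.Integer(0)
    return sp.prod([1 - t**mult(mu, i)
                    for i in range(1, (lam[0] if lam else 0) + 1)
                    if mult(mu, i) == mult(lam, i) + 1])

def psi_prime(lam, mu):
    lc, mc = conjugate(lam), conjugate(mu)
    if len(mc) > len(lc):
        return sp.Integer(0)
    mc = mc + (0,) * (len(lc) - len(mc))
    return sp.prod([t_binomial(lc[i] - (lc[i + 1] if i + 1 < len(lc) else 0),
                               lc[i] - mc[i]) for i in range(len(lc))])

mu, nu = (1,), (3, 1)
# length-2 partitions inside nu = (3,1): (1,1), (2,1), (3,1)
lams = [(a, 1) for a in range(1, 4)]
S = sum((-1)**sum(lam) * b(lam) * psi(lam, mu) * psi_prime(nu, lam) for lam in lams)
```

Here $$\lambda =(1,1)$$ contributes zero (not a vertical strip below $$\nu$$), while $$\lambda =(2,1)$$ and $$\lambda =(3,1)$$ contribute $$\mp (1-t)^2$$, so the sum vanishes as claimed.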

### Remark 2

As before, we comment that (14) is a special case of a more general equation:
\begin{aligned}&\sum _{\begin{array}{c} \lambda :\ell (\lambda ) = \ell (\nu ) \\ \mu \subseteq \lambda \subseteq \nu \end{array}} (-1)^{|\lambda |} b_{\lambda }(t) \psi _{\lambda / \mu }(t) \psi '_{\nu / \lambda }(t) \nonumber \\&\qquad = \left\{ \begin{array}{ll} (-1)^{|\mu |} t^{|\nu -\mu |} b_{\mu }(t) \psi '_{\nu / \mu } (t), &{} \quad \ell (\mu ) = \ell (\nu ), \\ \\ (-1)^{|\mu |} t^{|\nu -\mu |} (1-t^{-1}) b_{\mu }(t) \psi '_{\nu / \mu } (t), &{} \quad \ell (\mu ) = \ell (\nu )-1, \end{array} \right. \end{aligned}
(17)
with all other cases being trivially zero.

#### 2.3.3 Recursion relation

Applying the branching rule (5) to both $$P_{\lambda }(x_1,\ldots ,x_n;t)$$ and $$P_{\lambda }(y_1,\ldots ,y_n;t)$$, Eq. (10) becomes
\begin{aligned}&\prod _{i=1}^{n} (x_i y_i) \mathcal {F}_n(x_1,\ldots ,x_n) \\&\quad =\sum _{\lambda : \ell (\lambda ) = n} \ \sum _{\begin{array}{c} \mu : \ell (\mu ) = n-1 \\ \mu ' = \lambda ' - \delta \end{array}} \ \sum _{\begin{array}{c} \nu : \ell (\nu ) = n-1 \\ \nu ' = \lambda ' - \epsilon \end{array}} b_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi _{\lambda /\nu }(t) x_n^{|\lambda -\mu |} y_n^{|\lambda -\nu |} \\&\quad \quad \times P_{\mu }(\vec {x}_{n-1};t) P_{\nu }(\vec {y}_{n-1};t) \\&\quad = \sum _{\lambda : \ell (\lambda ) = n} \ \sum _{\begin{array}{c} \mu : \ell (\mu ) = n-1 \\ \mu ' = \lambda ' - \delta \end{array}} \ \sum _{\begin{array}{c} \nu : \ell (\nu ) = n-1 \\ \nu ' = \lambda ' - \epsilon \end{array}} b_{\lambda }(t) \prod _{\begin{array}{c} \delta _i = 0 \\ \delta _{i+1} = 1 \end{array}} (1-t^{m_i(\mu )}) \\&\quad \quad \times \, \prod _{\begin{array}{c} \epsilon _j = 0 \\ \epsilon _{j+1} = 1 \end{array}} (1-t^{m_j(\nu )}) x_n^{|\lambda -\mu |} y_n^{|\lambda -\nu |} P_{\mu }(\vec {x}_{n-1};t) P_{\nu }(\vec {y}_{n-1};t). \end{aligned}
Setting $$x_n = 1/(t y_n)$$, we obtain
\begin{aligned} \prod _{i=1}^{n-1} (x_i y_i) t^{-1} \mathcal {F}_n \Big |_{x_n = 1/(t y_n)}= & {} \sum _{\lambda : \ell (\lambda ) = n} \ \sum _{\begin{array}{c} \mu : \ell (\mu ) = n-1 \\ \mu ' = \lambda ' - \delta \end{array}} \ \sum _{\begin{array}{c} \nu : \ell (\nu ) = n-1 \\ \nu ' = \lambda ' - \epsilon \end{array}} b_{\lambda }(t) \prod _{\begin{array}{c} \delta _i = 0 \\ \delta _{i+1} = 1 \end{array}} (1-t^{m_i(\mu )}) \\&\!\times \prod _{\begin{array}{c} \epsilon _j = 0 \\ \epsilon _{j+1} = 1 \end{array}} (1-t^{m_j(\nu )}) t^{|\mu -\lambda |} y_n^{|\mu -\nu |} P_{\mu }(\vec {x}_{n-1};t) P_{\nu }(\vec {y}_{n-1};t). \end{aligned}
We isolate the coefficient of $$P_{\mu }(x_1,\ldots ,x_{n-1};t) P_{\nu }(y_1,\ldots ,y_{n-1};t) y_n^{|\mu -\nu |}$$ in the preceding expression and denote it by $$\mathcal {C}(\mu ,\nu )$$:
\begin{aligned} \mathcal {C}(\mu ,\nu ) = \sum _{\begin{array}{c} \lambda : \ell (\lambda ) = n \\ \lambda ' = \mu ' + \delta \\ \lambda ' = \nu ' + \epsilon \end{array}} t^{-|\delta |} b_{\lambda }(t) \prod _{\begin{array}{c} \delta _i = 0 \\ \delta _{i+1} = 1 \end{array}} (1-t^{m_i(\mu )}) \prod _{\begin{array}{c} \epsilon _j = 0 \\ \epsilon _{j+1} = 1 \end{array}} (1-t^{m_j(\nu )}). \end{aligned}
To prove the required recursion relation, we wish to show that
\begin{aligned} \mathcal {C}(\mu ,\nu ) = \left\{ \begin{array}{ll} -t^{n-1} b_{\mu }(t), &{}\quad \mu = \nu , \\ 0, &{}\quad \mu \not = \nu . \end{array} \right. \end{aligned}
We start by considering the diagonal case $$\mu = \nu$$, for which
\begin{aligned} \mathcal {C}(\mu ,\mu ) = \sum _{j=1}^{\infty } \sum _{\delta _j \in \{0,1\}} t^{-|\delta |} b_{\lambda }(t) \prod _{\begin{array}{c} \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\mu )})^2 \end{aligned}
where we now sum over all $$\delta _j \in \{0,1\}$$, with $$\lambda$$ given by $$\lambda ' = \mu ' + \delta$$. At first, it seems that we must exclude the possibility $$\delta _j = 0, \delta _{j+1} = 1$$ when $$\mu '_j = \mu '_{j+1}$$, since this gives rise to $$\lambda$$ which is not a partition. In fact, we can ignore this constraint entirely, since $$\mu '_j = \mu '_{j+1}$$ implies that $$m_j(\mu ) = 0$$ and the term $$(1-t^{m_j(\mu )})$$ vanishes, meaning $$\delta _j=0,\delta _{j+1}=1$$ gives no contribution to the summation in any case. We define the partial coefficients
\begin{aligned} \mathcal {C}_{i,\delta _i}(\mu ) = \sum _{j=1}^{i-1} \sum _{\delta _j \in \{0,1\}} t^{-\sum _{k=1}^{i} \delta _k} \prod _{k=1}^{i-1} \prod _{l=1}^{m_k(\lambda )} (1-t^l) \prod _{\begin{array}{c} 1 \leqslant k \leqslant i-1 \\ \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\mu )})^2, \end{aligned}
(18)
where $$\delta _i$$ can be either 0 or 1 and $$\lambda$$ is the length $$n$$ partition formed by taking $$\lambda '_j=\mu '_j+\delta _j$$ for all $$1 \leqslant j \leqslant i$$, $$\lambda '_j = \mu '_j$$ for all $$j > i$$. We proceed to establish some recurrence relations which these coefficients obey. Consider the coefficient $$\mathcal {C}_{i+1,\delta _{i+1}}(\mu )$$ in the case $$\delta _{i+1} = 0$$ and perform the summation on $$\delta _i$$ explicitly. This produces the recurrence
\begin{aligned} \mathcal {C}_{i+1,0}(\mu )&= \prod _{j=1}^{m_i(\mu )} (1-t^j) \left[ \mathcal {C}_{i,0}(\mu ) + (1-t^{m_i(\mu )+1}) \mathcal {C}_{i,1}(\mu ) \right] \end{aligned}
(19)
valid for all $$i \geqslant 1$$, with initial values $$\mathcal {C}_{1,0}(\mu ) = 0$$ ($$\delta _1 = 0$$ is forbidden, because it would lead to $$\ell (\lambda ) = n-1$$) and $$\mathcal {C}_{1,1}(\mu ) = t^{-1}$$. Similarly, the $$\delta _{i+1} = 1$$ case of $$\mathcal {C}_{i+1,\delta _{i+1}}(\mu )$$ gives rise to the recurrence
\begin{aligned} t \mathcal {C}_{i+1,1}(\mu )&= \prod _{j=1}^{m_i(\mu )} (1-t^j) \left[ (1-t^{m_i(\mu )}) \mathcal {C}_{i,0}(\mu ) + \mathcal {C}_{i,1}(\mu ) \right] \end{aligned}
(20)
valid for all $$i \geqslant 1$$, where we have again summed over both possible values of $$\delta _i$$ to obtain the result. Since $$m_i(\mu ) = 0$$ for all $$i > \mu _1$$, the recurrence relations (19) and (20) eventually stabilize:
\begin{aligned} \mathcal {C}_{j,0}(\mu ) = \mathcal {C}_{i,0}(\mu ) + (1-t) \sum _{k=i}^{j-1} \mathcal {C}_{k,1}(\mu ), \quad \mathcal {C}_{j,1}(\mu ) = t^{i-j} \mathcal {C}_{i,1}(\mu ), \quad \forall \ j > i > \mu _1. \end{aligned}
It follows immediately that the quantity that we wish to compute, $$\mathcal {C}(\mu ,\mu )$$, is given by
\begin{aligned} \mathcal {C}(\mu ,\mu ) = \lim _{i \rightarrow \infty } \mathcal {C}_{i,0}(\mu ) = \mathcal {C}_{\mu _1+1,0}(\mu ) - t \mathcal {C}_{\mu _1+1,1}(\mu ). \end{aligned}
For this reason, we now consider the linear combination of coefficients $$\mathcal {C}_{i,0}(\mu ) - t \mathcal {C}_{i,1}(\mu )$$. Subtracting Eq. (20) from (19), we find that this linear combination satisfies the elementary recurrence
\begin{aligned} \mathcal {C}_{i+1,0}(\mu ) - t \mathcal {C}_{i+1,1}(\mu ) = t^{m_i(\mu )} \prod _{j=1}^{m_i(\mu )} (1-t^j) \Big [ \mathcal {C}_{i,0}(\mu ) - t \mathcal {C}_{i,1}(\mu ) \Big ] \end{aligned}
(21)
with initial condition $$\mathcal {C}_{1,0}(\mu ) - t \mathcal {C}_{1,1}(\mu ) = -1$$. Solving the recurrence (21), we find that
\begin{aligned} \mathcal {C}_{\mu _1+1,0}(\mu ) - t \mathcal {C}_{\mu _1+1,1}(\mu ) = -t^{\sum _{i=1}^{\infty } m_i(\mu )} \prod _{i=1}^{\infty } \prod _{j=1}^{m_i(\mu )} (1-t^j) = -t^{n-1} b_{\mu }(t), \end{aligned}
where we have used the fact that $$m_i(\mu ) = 0$$ for all $$i > \mu _1$$ to produce the first equality and the fact that the sum of all the multiplicities in $$\mu$$ is equal to $$n-1$$ to produce the second. This completes the proof of the diagonal case $$\mu = \nu$$.
In the non-diagonal case $$\mu \not = \nu$$, we follow a similar procedure to that outlined above. We wish to calculate
\begin{aligned} \mathcal {C}(\mu ,\nu ) = \sum _{j=1}^{\infty } \sum _{\begin{array}{c} \delta _j \in \{0,1\} \\ \epsilon _j \in \{0,1\} \end{array}} t^{-|\delta |} b_{\lambda }(t) \prod _{\begin{array}{c} \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\mu )}) \prod _{\begin{array}{c} \epsilon _k = 0 \\ \epsilon _{k+1} = 1 \end{array}} (1-t^{m_k(\nu )}), \end{aligned}
where $$\lambda$$ is the length $$n$$ partition given by $$\lambda ' = \mu ' + \delta = \nu ' + \epsilon$$. Since $$\delta _i, \epsilon _i \in \{0,1\}$$ for all $$i$$, it follows that $$|\mu '_i-\nu '_i| \leqslant 1$$ for all $$i$$, or else $$\mathcal {C}(\mu ,\nu )$$ is necessarily zero. We define a sequence of partial coefficients
\begin{aligned}&\mathcal {C}_{i,\delta _i,\epsilon _i}(\mu ,\nu ) \\&\quad = \sum _{j=1}^{i-1} \sum _{\begin{array}{c} \delta _j \in \{0,1\} \\ \epsilon _j \in \{0,1\} \end{array}} t^{-\sum _{k=1}^{i} \delta _k} \prod _{k=1}^{i-1} \prod _{l=1}^{m_k(\lambda )} (1-t^l) \prod _{\begin{array}{c} 1 \leqslant k \leqslant i-1 \\ \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\mu )}) \prod _{\begin{array}{c} 1 \leqslant k \leqslant i-1 \\ \epsilon _k = 0 \\ \epsilon _{k+1} = 1 \end{array}} (1-t^{m_k(\nu )}), \end{aligned}
where both $$\delta _i$$ and $$\epsilon _i$$ take some value in $$\{0,1\}$$. We let $$I$$ denote the largest $$i$$ such that $$|\mu '_i-\nu '_i| = 1$$, i.e.,$$\mu '_i = \nu '_i$$ for all $$i > I$$. Then, either $$\delta _{I} = 1, \epsilon _{I} = 0$$ or $$\delta _{I} = 0, \epsilon _{I} = 1$$, and the summation is constrained by $$\delta _i = \epsilon _i$$ for all $$i > I$$. Given that the summation indices are forced in this way, it is easy to deduce the recurrences
\begin{aligned}&\mathcal {C}_{I+1,0,0}(\mu ,\nu ) = \prod _{j=1}^{m_{I}(\nu )} (1-t^j) \mathcal {C}_{I,1,0}(\mu ,\nu ), \nonumber \\&t \mathcal {C}_{I+1,1,1}(\mu ,\nu ) = \prod _{j=1}^{m_{I}(\nu )} (1-t^j) \mathcal {C}_{I,1,0}(\mu ,\nu ), \quad \text {when}\ \delta _{I} = 1,\quad \epsilon _{I} = 0, \end{aligned}
(22)
\begin{aligned}&\mathcal {C}_{I+1,0,0}(\mu ,\nu ) = \prod _{j=1}^{m_{I}(\mu )} (1-t^j) \mathcal {C}_{I,0,1}(\mu ,\nu ), \nonumber \\&t \mathcal {C}_{I+1,1,1}(\mu ,\nu ) = \prod _{j=1}^{m_{I}(\mu )} (1-t^j) \mathcal {C}_{I,0,1}(\mu ,\nu ), \quad \text {when}\ \delta _{I} = 0,\quad \epsilon _{I} = 1, \end{aligned}
(23)
while for $$i > I$$, we recover the same recurrences already obtained when considering the diagonal case $$\mu =\nu$$:
\begin{aligned} \mathcal {C}_{i+1,0,0}(\mu ,\nu )&= \prod _{j=1}^{m_i(\mu )} (1-t^j) \left[ \mathcal {C}_{i,0,0}(\mu ,\nu ) + (1-t^{m_i(\mu )+1}) \mathcal {C}_{i,1,1}(\mu ,\nu ) \right] , \\ t \mathcal {C}_{i+1,1,1}(\mu ,\nu )&= \prod _{j=1}^{m_i(\mu )} (1-t^j) \left[ (1-t^{m_i(\mu )}) \mathcal {C}_{i,0,0}(\mu ,\nu ) + \mathcal {C}_{i,1,1}(\mu ,\nu ) \right] . \end{aligned}
Hence, by applying precisely the same reasoning as above, we conclude that
\begin{aligned} \mathcal {C}(\mu ,\nu ) = \lim _{i \rightarrow \infty } \mathcal {C}_{i,0,0}(\mu ,\nu ) = \mathcal {C}_{M+1,0,0}(\mu ,\nu ) - t \mathcal {C}_{M+1,1,1}(\mu ,\nu ), \end{aligned}
(24)
where $$M = \max (\mu _1,\nu _1)$$, to cater for the case where these may be different. Hence, we are again required to compute $$\mathcal {C}_{i,0,0}(\mu ,\nu ) - t \mathcal {C}_{i,1,1}(\mu ,\nu )$$, which can be done via a recurrence of the form (21). However, in contrast to the above, the initial condition of this recurrence is $$\mathcal {C}_{I+1,0,0}(\mu ,\nu ) - t \mathcal {C}_{I+1,1,1}(\mu ,\nu ) = 0$$ (by virtue of (22) and (23)). Because of this trivial initial condition, it follows that $$\mathcal {C}_{i,0,0}(\mu ,\nu ) - t \mathcal {C}_{i,1,1}(\mu ,\nu ) = 0$$ for all $$i>I$$, which is what we aimed to show.

#### 2.3.4 Initial condition

In the case $$n=1$$, the Hall–Littlewood polynomials appearing in the sum (7) depend on a single variable. Hence, the partitions in the sum are constrained by $$\ell (\lambda ) \leqslant 1$$. It follows that
\begin{aligned} \mathcal {F}_1(x_1) = \sum _{k=0}^{\infty } (1-t) P_{(k)}(x_1;t) P_{(k)}(y_1;t) = (1-t) \sum _{k=0}^{\infty } x_1^k y_1^k = \frac{1-t}{1-x_1 y_1}. \end{aligned}
(25)
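The geometric series evaluation in (25) is easy to confirm numerically; a minimal sketch, with sample values satisfying $$|x_1 y_1| < 1$$ chosen purely for illustration:

```python
# Numerical check of Eq. (25): (1-t) * sum_k (x1*y1)^k == (1-t)/(1 - x1*y1).
x1, y1, t = 0.31, 0.47, 0.58          # sample values with |x1*y1| < 1
lhs = (1 - t) * sum((x1 * y1) ** k for k in range(200))   # truncated series
rhs = (1 - t) / (1 - x1 * y1)
assert abs(lhs - rhs) < 1e-12
```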

### 2.4 Proof of Equation (2)

In this subsection, we prove Theorem 2, which is equivalent to proving Eq. (2). As in the previous proof, we show that the left-hand side of (2) satisfies four conditions, which are basic properties of the right-hand side. Since these conditions admit a unique solution, it follows that the two sides of (2) must be equal.

### Theorem 2

Let $$N=2n$$. Define the function
\begin{aligned} \mathcal {G}_{N}(x_1,\ldots ,x_N) = \sum _{\begin{array}{c} \lambda \ \mathrm{with} \\ \mathrm{even\ columns} \end{array}} \ \ w_{\lambda }^\mathrm{el}(N,t) P_{\lambda }(x_1,\ldots ,x_N;t), \end{aligned}
(26)
where for convenience, we have set
\begin{aligned} w_{\lambda }^\mathrm{el}(N,t) = \prod _{i=0}^{\infty }\ \prod _{j=2,4,6,\ldots }^{m_i(\lambda )} (1-t^{j-1}). \end{aligned}
(27)
Then, $$\mathcal {G}_{N}(x_1,\ldots ,x_N)$$ satisfies the following list of properties:
1.

$$\mathcal {G}_{N}(x_1,\ldots ,x_N)$$ is symmetric in $$\{x_1,\ldots ,x_N\}$$.

2.

The renormalized function $$\prod _{1 \leqslant i<j \leqslant N} (1-x_i x_j) \mathcal {G}_{N}(x_1,\ldots ,x_N)$$ is a polynomial in $$x_N$$ of degree $$N-2$$.

3.
Setting $$x_N = 1/(t x_{N-1})$$, one obtains the recursion
\begin{aligned} \mathcal {G}_N \Big |_{x_N = 1/(t x_{N-1})} = -t^{N-1} \mathcal {G}_{N-2}(x_1,\ldots ,x_{N-2}). \end{aligned}
4.

$$\mathcal {G}_2(x_1,x_2) = (1-t)/(1-x_1 x_2)$$.

Property 1 is obvious, since $$\mathcal {G}_{N}(x_1,\ldots ,x_N)$$ is a sum of Hall–Littlewood polynomials and therefore manifestly symmetric in $$\{x_1,\ldots ,x_N\}$$. As we did in the proof of Theorem 1, we begin with an alternative expression for $$\mathcal {G}_{N}(x_1,\ldots ,x_N)$$ in Sect. 2.4.1, before proving the remaining properties 2–4 in Sects. 2.4.2–2.4.4.

#### 2.4.1 Alternative form for $$\mathcal {G}_N(x_1,\ldots ,x_N)$$

The function $$w_{\lambda }^\mathrm{el}(N,t)$$ bears close resemblance to the function $$b^\mathrm{el}_{\lambda }(t)$$, which usually appears in the Littlewood identity for Hall–Littlewood polynomials:
\begin{aligned} b^\mathrm{el}_{\lambda }(t) = \prod _{i=1}^{\infty }\ \prod _{j=2,4,6,\ldots }^{m_i(\lambda )} (1-t^{j-1}). \end{aligned}
The only difference between the two functions is that $$w_{\lambda }^\mathrm{el}(N,t)$$ considers parts of the partition $$\lambda$$ of size zero, whereas $$b^\mathrm{el}_{\lambda }(t)$$ does not. By using the identity (9) once again, it follows that $$\mathcal {G}_N(x_1,\ldots ,x_N)$$ can be expressed alternatively as
\begin{aligned} (x_1 \cdots x_N) \mathcal {G}_N(x_1,\ldots ,x_N) = \sum _{\begin{array}{c} \lambda : \ell (\lambda ) = N \\ \lambda '\ \mathrm{even} \end{array}} \ b^\mathrm{el}_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_N;t). \end{aligned}
(28)
This new way of writing $$\mathcal {G}_N(x_1,\ldots ,x_N)$$ is useful in establishing the polynomiality property 2, as we will see below.

#### 2.4.2 Polynomiality property

We start by considering a renormalized version of the Littlewood identity for Hall–Littlewood polynomials (see Sect. 5, Chapter III of [9]):
\begin{aligned} \prod _{1 \leqslant i<j \leqslant N} (1-x_i x_j) \sum _{\lambda '\ \mathrm{even}} \ b^\mathrm{el}_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_N;t) = \prod _{1 \leqslant i<j \leqslant N} (1-t x_i x_j). \end{aligned}
(29)
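For $$N=2$$, the identity (29) can be checked by hand: the only even-columned partitions with at most two rows are $$\lambda = (k,k)$$, for which $$P_{(k,k)}(x_1,x_2;t) = (x_1 x_2)^k$$ and $$b^\mathrm{el}_{(k,k)}(t) = 1-t$$ when $$k \geqslant 1$$. A numerical sketch of this case, with sample values chosen for illustration:

```python
# N = 2 check of the Littlewood identity (29):
# (1 - x1*x2) * [ 1 + (1-t) * sum_{k>=1} (x1*x2)^k ] == 1 - t*x1*x2.
x1, x2, t = 0.29, 0.41, 0.66          # sample values with |x1*x2| < 1
lhs = (1 - x1 * x2) * (1 + (1 - t) * sum((x1 * x2) ** k for k in range(1, 200)))
rhs = 1 - t * x1 * x2
assert abs(lhs - rhs) < 1e-12
```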
We deduce that the left-hand side of (29) is a polynomial in $$x_N$$ of degree $$N-1$$, a fact which is obvious only in view of its equality with the right-hand side. In what follows, it will be useful to reformulate this fact, which we do by isolating the $$x_N$$ dependence:
\begin{aligned}&\prod _{1 \leqslant i<j \leqslant N} (1-x_i x_j) \sum _{\lambda '\ \mathrm{even}} \ b^\mathrm{el}_{\lambda }(t) P_{\lambda }(\vec {x}_N;t) \\&\quad = \prod _{1 \leqslant i<j \leqslant N-1} (1-x_i x_j) \sum _{k=0}^{N-1} e_k(\vec {x}_{N-1}) (-x_N)^k \\&\quad \quad \times \, \sum _{\lambda '\ \mathrm{even}}\sum _{\mu } b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) x_N^{|\lambda -\mu |} P_{\mu }(\vec {x}_{N-1};t) \\&\quad = \prod _{1 \leqslant i<j \leqslant N-1} (1-x_i x_j) \sum _{\lambda '\ \mathrm{even}} \ \sum _{\mu } \ \sum _{\nu } (-1)^{|\nu -\mu |} b^\mathrm{el}_{\lambda }(t) \\&\quad \quad \times \, \psi _{\lambda /\mu }(t) \psi '_{\nu /\mu }(t) x_N^{|\lambda -\mu |+|\nu -\mu |} P_{\nu }(\vec {x}_{N-1};t), \end{aligned}
where the first equality follows from the definition of the elementary symmetric functions and the branching rule (4) and the second equality is obtained from the Pieri identity (6). Now we notice that the summation over $$\lambda$$ is constrained trivially, since it imposes that $$\lambda '$$ is even and that $$\lambda \succ \mu$$. Indeed, given any partition $$\mu$$, there is a unique way of adding to it a horizontal strip such that the resulting partition has even-length columns. Hence, we can write
\begin{aligned}&\prod _{1 \leqslant i<j \leqslant N} (1-x_i x_j) \sum _{\lambda '\ \mathrm{even}} \ b^\mathrm{el}_{\lambda }(t) P_{\lambda }(\vec {x}_N;t) = \prod _{1 \leqslant i<j \leqslant N-1} (1-x_i x_j) \\&\quad \quad \times \sum _{\mu } \ \sum _{\nu } (-1)^{|\nu -\mu |} b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\mu }(t) x_N^{|\nu -\mu |+n_\mathrm{\tiny oc}(\mu )} P_{\nu }(\vec {x}_{N-1};t), \end{aligned}
where $$n_\mathrm{\tiny oc}(\mu )$$ denotes the number of odd-length columns in the partition $$\mu$$, and $$\lambda$$ is hereinafter understood as the even-columned partition obtained by adding a horizontal strip to $$\mu$$. From the linear independence of the Hall–Littlewood polynomials (and eliminating any overall factors which play no role), we obtain at last our reformulation of the polynomiality statement:
\begin{aligned} \sum _{\mu } \ (-1)^{|\mu |} b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\mu }(t) x^{|\nu -\mu |+n_\mathrm{\tiny oc}(\mu )}\ \ \text {is a polynomial in}\ x\ \text {of degree}\ N-1. \end{aligned}
(30)
Coming back to the proof of property 2, because of the identity (28), it suffices to show that
\begin{aligned} \prod _{1 \leqslant i<j \leqslant N} (1-x_i x_j) \sum _{\begin{array}{c} \lambda :\ell (\lambda ) = N \\ \lambda '\ \mathrm{even} \end{array}} \ b^\mathrm{el}_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_N;t) \end{aligned}
is a polynomial in $$x_N$$ of degree $$N-1$$. The strong similarity between the preceding quantity and the left-hand side of the Littlewood identity (29) means that we can inherit information from the latter. Indeed, by following precisely the same arguments already outlined above (but paying attention to the new restriction $$\ell (\lambda ) = N$$), property 2 is equivalent to the statement
\begin{aligned} \sum _{\mu :\, \ell (\mu ) = N-1} \ (-1)^{|\mu |} b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\mu }(t) x^{|\nu -\mu |+n_\mathrm{\tiny oc}(\mu )}\ \ \text {is a polynomial in}\ x\ \text {of degree}\ N-1. \end{aligned}
(31)
Letting $$\mathcal {P}_{N}$$ denote the proposition (31), we prove it by induction on $$N$$. Although we are only interested in the case where $$N$$ is even, one can prove it for generic $$N$$. The base case $$N=1$$ is trivial:
\begin{aligned} \sum _{\mu : \ell (\mu ) = 0} \ (-1)^{|\mu |} b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\mu }(t) x^{|\nu -\mu |+n_\mathrm{\tiny oc}(\mu )} = 1 \end{aligned}
since the only possibility is for the partition $$\nu$$ to be empty. Now let $$N > 1$$ and assume that $$\mathcal {P}_1,\ldots ,\mathcal {P}_{N-1}$$ are all true. It is clearly possible to write
\begin{aligned}&\sum _{\mu : \ell (\mu ) = N-1} \ (-1)^{|\mu |} b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\mu }(t) x^{|\nu -\mu |+n_\mathrm{\tiny oc}(\mu )} \\&\quad \quad = \left( \sum _{\mu } - \sum _{k=0}^{N-2} \sum _{\mu : \ell (\mu ) = k} \right) \ (-1)^{|\mu |} b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\mu }(t) x^{|\nu -\mu |+n_\mathrm{\tiny oc}(\mu )}, \end{aligned}
where the first sum is already known to give a polynomial in $$x$$ of degree $$N-1$$, thanks to (30). As for the remaining sums, they only give a nonzero result when $$\nu = \kappa \cup 1^{N-k-1}$$, where $$\kappa$$ is a partition with length $$\ell (\kappa )=k$$. Under such circumstances, and with $$\ell (\mu ) = k$$, we have
\begin{aligned} \psi '_{\nu /\mu }(t) = \genfrac[]{0.0pt}{}{m_1(\nu )}{N-k-1}_t \psi '_{\kappa /\mu }(t). \end{aligned}
This allows us to conclude that
\begin{aligned}&\sum _{\mu : \ell (\mu ) = N-1} \ (-1)^{|\mu |} b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\nu /\mu }(t) x^{|\nu -\mu |+n_\mathrm{\tiny oc}(\mu )} \\&\quad = \genfrac[]{0.0pt}{}{m_1(\nu )}{N-k-1}_t x^{N-k-1} \sum _{\mu : \ell (\mu ) = k} (-1)^{|\mu |} b^\mathrm{el}_{\lambda }(t) \psi _{\lambda /\mu }(t) \psi '_{\kappa /\mu }(t) x^{|\kappa -\mu |+n_\mathrm{\tiny oc}(\mu )}, \end{aligned}
which is also a polynomial in $$x$$ of degree $$N-1$$ from the inductive assumption. Hence, $$\mathcal {P}_N$$ is true, completing the proof.

#### 2.4.3 Recursion relation

Applying the branching rule (5) twice to $$P_{\lambda }(x_1,\ldots ,x_N;t)$$, Eq. (28) becomes
\begin{aligned}&\prod _{i=1}^{N} (x_i) \mathcal {G}_N(x_1,\ldots ,x_N) \\&\quad = \sum _{\begin{array}{c} \lambda : \ell (\lambda ) = N \\ \lambda '\ \text {even} \end{array}} \ \sum _{\begin{array}{c} \mu : \ell (\mu ) = N-1 \\ \mu ' = \lambda ' - \delta \end{array}} \ \sum _{\begin{array}{c} \nu : \ell (\nu ) = N-2 \\ \nu ' = \mu ' - \epsilon \end{array}} b_{\lambda }^\mathrm{el}(t) \psi _{\lambda /\mu }(t) \psi _{\mu /\nu }(t) x_N^{|\lambda -\mu |} x_{N-1}^{|\mu -\nu |} P_{\nu }(\vec {x}_{N-2};t) \\&\quad = \sum _{\begin{array}{c} \lambda : \ell (\lambda ) = N \\ \lambda '\ \text {even} \end{array}} \ \sum _{\begin{array}{c} \mu : \ell (\mu ) = N-1 \\ \mu ' = \lambda ' - \delta \end{array}} \ \sum _{\begin{array}{c} \nu : \ell (\nu ) = N-2 \\ \nu ' = \mu ' - \epsilon \end{array}} b_{\lambda }^\mathrm{el}(t) \prod _{\begin{array}{c} \delta _i = 0 \\ \delta _{i+1} = 1 \end{array}} (1-t^{m_i(\mu )}) \\&\quad \quad \times \, \prod _{\begin{array}{c} \epsilon _j = 0 \\ \epsilon _{j+1} = 1 \end{array}} (1-t^{m_j(\nu )}) x_N^{|\lambda -\mu |} x_{N-1}^{|\mu -\nu |} P_{\nu }(\vec {x}_{N-2};t). \end{aligned}
Setting $$x_N = 1/(t x_{N-1})$$, we find that
\begin{aligned}&\prod _{i=1}^{N-2} (x_i) t^{-1} \mathcal {G}_N \Big |_{x_N = 1/(t x_{N-1})} = \sum _{\begin{array}{c} \lambda : \ell (\lambda ) = N \\ \lambda '\ \text {even} \end{array}} \ \sum _{\begin{array}{c} \mu : \ell (\mu ) = N-1 \\ \mu ' = \lambda ' - \delta \end{array}} \ \sum _{\begin{array}{c} \nu : \ell (\nu ) = N-2 \\ \nu ' = \mu ' - \epsilon \end{array}} b_{\lambda }^\mathrm{el}(t) \\&\quad \times \, \prod _{\begin{array}{c} \delta _i = 0 \\ \delta _{i+1} = 1 \end{array}} (1-t^{m_i(\mu )})\prod _{\begin{array}{c} \epsilon _j = 0 \\ \epsilon _{j+1} = 1 \end{array}} (1-t^{m_j(\nu )}) t^{|\mu -\lambda |} x_{N-1}^{|\mu -\lambda |+|\mu -\nu |} P_{\nu }(\vec {x}_{N-2};t). \end{aligned}
We isolate the coefficient of $$P_{\nu }(x_1,\ldots ,x_{N-2};t)$$ in the previous expression and denote it by $$\mathcal {D}(\nu )$$:
\begin{aligned} \mathcal {D}(\nu ) = \sum _{\begin{array}{c} \mu :\ell (\mu ) = N-1 \\ \mu ' = \nu ' + \epsilon \end{array}} \ \sum _{\begin{array}{c} \lambda :\ell (\lambda ) = N \\ \lambda ' = \mu ' + \delta \\ \lambda '\ \text {even} \end{array}} b_{\lambda }^\mathrm{el}(t) \prod _{\begin{array}{c} \delta _i = 0 \\ \delta _{i+1} = 1 \end{array}} (1-t^{m_i(\mu )}) \prod _{\begin{array}{c} \epsilon _j = 0 \\ \epsilon _{j+1} = 1 \end{array}} (1-t^{m_j(\nu )}) t^{-|\delta |} x^{|\epsilon |-|\delta |}, \end{aligned}
where we write $$x_{N-1} \equiv x$$, since the subscript is irrelevant in what follows. To prove the required recursion relation, we need to show that
\begin{aligned} \mathcal {D}(\nu ) = \left\{ \begin{array}{ll} -t^{N-2}\, b^\mathrm{el}_{\nu }(t), &{}\quad \nu '\ \text {even}, \\ 0, &{}\quad \text {otherwise}. \end{array} \right. \end{aligned}
We start by considering the case $$\nu '\ \text {even}$$. Since $$\lambda$$ has even-length columns, it follows by parity that $$\delta _i = \epsilon _i$$ for all $$i \geqslant 1$$, which simplifies the summation as follows:
\begin{aligned} \mathcal {D}(\nu )&= \sum _{j=1}^{\infty } \sum _{\delta _j \in \{0,1\}} t^{-|\delta |} b_{\lambda }^\mathrm{el}(t) \prod _{\begin{array}{c} \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\mu )}) (1-t^{m_k(\nu )}) \\&= \sum _{j=1}^{\infty } \sum _{\delta _j \in \{0,1\}} t^{-|\delta |} b_{\lambda }^\mathrm{el}(t) \prod _{\begin{array}{c} \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\nu )-1}) (1-t^{m_k(\nu )}), \end{aligned}
where in the final line we have used the fact that if $$\delta _k = 0, \delta _{k+1} = 1$$, then $$m_k(\mu ) = m_k(\nu )-1$$, and where $$\lambda$$ is the partition satisfying $$\lambda ' = \nu ' + 2 \delta$$. Proceeding in direct analogy with Sect. 2.3.3, we define the sequence of partial coefficients
\begin{aligned} \mathcal {D}_{i,\delta _i}(\nu )&= \sum _{j=1}^{i-1} \sum _{\delta _j \in \{0,1\}} t^{-\sum _{k=1}^{i} \delta _k} \prod _{k=1}^{i-1} \prod _{l\ \mathrm{even}}^{m_k(\lambda )} (1-t^{l-1})\nonumber \\&\quad \times \prod _{\begin{array}{c} 1 \leqslant k \leqslant i-1 \\ \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\nu )-1}) (1-t^{m_k(\nu )}), \end{aligned}
(32)
where $$\lambda$$ is the partition formed by taking $$\lambda '_j = \nu '_j + 2 \delta _j$$ for all $$1 \leqslant j \leqslant i$$, $$\lambda '_j = \nu '_j$$ for all $$j > i$$. Given the similarity of these coefficients to those defined in Eq. (18), we expect that they will satisfy recurrence relations of an analogous form to (19) and (20). Indeed, by taking $$\mathcal {D}_{i+1,\delta _{i+1}}(\nu )$$ and performing its summation over $$\delta _i$$ explicitly, we find that
\begin{aligned} \mathcal {D}_{i+1,0}(\nu )&= \prod _{j\ \mathrm{even}}^{m_i(\nu )} (1-t^{j-1}) \left[ \mathcal {D}_{i,0}(\nu ) + (1-t^{m_i(\nu )+1}) \mathcal {D}_{i,1}(\nu ) \right] , \end{aligned}
(33)
\begin{aligned} t \mathcal {D}_{i+1,1}(\nu )&= \prod _{j\ \mathrm{even}}^{m_i(\nu )} (1-t^{j-1}) \left[ (1-t^{m_i(\nu )}) \mathcal {D}_{i,0}(\nu ) + \mathcal {D}_{i,1}(\nu ) \right] , \end{aligned}
(34)
valid for all $$i \geqslant 1$$, with initial values $$\mathcal {D}_{1,0}(\nu )=0$$ ($$\delta _1= 0$$ is not allowed, since this would mean that $$\ell (\lambda ) = N-2$$) and $$\mathcal {D}_{1,1}(\nu )=t^{-1}$$. Since the recursion relations (33) and (34) are virtually identical to (19) and (20), all of the reasoning presented in Sect. 2.3.3 also goes through in the present instance. In particular, we find that
\begin{aligned} \mathcal {D}(\nu ) = \lim _{i \rightarrow \infty } \mathcal {D}_{i,0}(\nu ) = \mathcal {D}_{\nu _1+1,0}(\nu ) - t \mathcal {D}_{\nu _1+1,1}(\nu ), \end{aligned}
where $$\mathcal {D}_{i,0}(\nu ) - t\mathcal {D}_{i,1}(\nu )$$ obeys the recurrence
\begin{aligned} \mathcal {D}_{i+1,0}(\nu ) - t \mathcal {D}_{i+1,1}(\nu ) = t^{m_i(\nu )} \prod _{j\ \mathrm{even}}^{m_i(\nu )} (1-t^{j-1}) \Big [ \mathcal {D}_{i,0}(\nu ) - t \mathcal {D}_{i,1}(\nu ) \Big ] \end{aligned}
(35)
with initial condition $$\mathcal {D}_{1,0}(\nu )-t\mathcal {D}_{1,1}(\nu ) = -1$$. Solving this recurrence, we obtain
\begin{aligned} \mathcal {D}_{\nu _1+1,0}(\nu ) - t \mathcal {D}_{\nu _1+1,1}(\nu ) = -t^{\sum _{i=1}^{\infty } m_i(\nu )} \prod _{i=1}^{\infty } \prod _{j\ \mathrm{even}}^{m_i(\nu )} (1-t^{j-1}) = -t^{N-2} b_{\nu }^\mathrm{el}(t), \end{aligned}
completing the proof in the case $$\nu '\ \text {even}$$.
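The telescoping solution of the recurrence (35) can be confirmed numerically; a minimal sketch for the sample even-columned partition $$\nu = (2,2,1,1)$$, chosen purely for illustration:

```python
from collections import Counter

nu, t = (2, 2, 1, 1), 0.44              # nu' = (4, 2) is even; len(nu) = N - 2
mult = Counter(nu)                       # part multiplicities m_i(nu)

d = -1.0                                 # initial condition D_{1,0} - t*D_{1,1} = -1
for i in range(1, nu[0] + 1):            # iterate recurrence (35) up to i = nu_1
    m = mult.get(i, 0)
    step = t ** m                        # prefactor t^{m_i(nu)}
    for j in range(2, m + 1, 2):         # prod over even j <= m_i(nu) of (1 - t^(j-1))
        step *= 1 - t ** (j - 1)
    d *= step

b_el = 1.0                               # b^el_nu(t) from the multiplicities
for m in mult.values():
    for j in range(2, m + 1, 2):
        b_el *= 1 - t ** (j - 1)
assert abs(d - (-t ** len(nu) * b_el)) < 1e-12   # matches -t^(N-2) * b^el_nu(t)
```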
Turning to the case where $$\nu$$ has at least one column of odd length, our task is to calculate
\begin{aligned} \mathcal {D}(\nu ) = \sum _{j=1}^{\infty } \sum _{\begin{array}{c} \delta _j \in \{0,1\} \\ \epsilon _j \in \{0,1\} \end{array}} t^{-|\delta |} x^{|\epsilon |-|\delta |} b_{\lambda }^\mathrm{el}(t) \prod _{\begin{array}{c} \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\mu )}) \prod _{\begin{array}{c} \epsilon _k = 0 \\ \epsilon _{k+1} = 1 \end{array}} (1-t^{m_k(\nu )}), \end{aligned}
where $$\mu$$ is the length $$N-1$$ partition given by $$\mu ' = \nu ' + \epsilon$$, and $$\lambda$$ the length $$N$$ partition given by $$\lambda ' = \mu ' + \delta$$. We define partial coefficients
\begin{aligned}&\mathcal {D}_{i,\delta _i,\epsilon _i}(\nu ) = \sum _{j=1}^{i-1} \sum _{\begin{array}{c} \delta _j \in \{0,1\} \\ \epsilon _j \in \{0,1\} \end{array}} \left[ t^{-\sum _{k=1}^{i} \delta _k} \right] \left[ x^{\sum _{k=1}^{i} (\epsilon _k - \delta _k)} \right] \\&\quad \times \, \prod _{k=1}^{i-1} \prod _{l\ \mathrm{even}}^{m_k(\lambda )} (1-t^{l-1}) \prod _{\begin{array}{c} 1 \leqslant k \leqslant i-1 \\ \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\mu )}) \prod _{\begin{array}{c} 1 \leqslant k \leqslant i-1 \\ \epsilon _k = 0 \\ \epsilon _{k+1} = 1 \end{array}} (1-t^{m_k(\nu )}) \end{aligned}
and let $$I$$ denote the largest $$i$$ such that $$\nu '_i$$ is odd (meaning that $$\nu '_i$$ is even for all $$i > I$$). Since $$\lambda '_{I}$$ is necessarily even, it follows that either $$\delta _{I} = 1, \epsilon _{I} = 0$$ or $$\delta _{I} = 0, \epsilon _{I} = 1$$. Summing over these possibilities, we obtain the recurrences
\begin{aligned} \mathcal {D}_{I+1,0,0}(\nu )&= \prod _{j\ \mathrm{even}}^{m_{I}(\nu )+1} (1-t^{j-1}) \Big [ \mathcal {D}_{I,1,0}(\nu ) + \mathcal {D}_{I,0,1}(\nu ) \Big ], \end{aligned}
(36)
\begin{aligned} t \mathcal {D}_{I+1,1,1}(\nu )&= \prod _{j\ \mathrm{even}}^{m_{I}(\nu )+1} (1-t^{j-1}) \Big [ \mathcal {D}_{I,1,0}(\nu ) + \mathcal {D}_{I,0,1}(\nu ) \Big ], \end{aligned}
(37)
where we are only obliged to consider $$\delta _{I+1} = \epsilon _{I+1}$$, since by the definition of $$I$$ and using parity, $$\delta _i = \epsilon _i$$ for all $$i > I$$. This ensures that for $$i > I$$,
\begin{aligned} \mathcal {D}_{i+1,0,0}(\nu )&= \prod _{j\ \mathrm{even}}^{m_i(\nu )} (1-t^{j-1}) \left[ \mathcal {D}_{i,0,0}(\nu ) + (1-t^{m_i(\nu )+1}) \mathcal {D}_{i,1,1}(\nu ) \right] , \\ t \mathcal {D}_{i+1,1,1}(\nu )&= \prod _{j\ \mathrm{even}}^{m_i(\nu )} (1-t^{j-1}) \left[ (1-t^{m_i(\nu )}) \mathcal {D}_{i,0,0}(\nu ) + \mathcal {D}_{i,1,1}(\nu ) \right] , \end{aligned}
which are the same recursion relations as (33) and (34). We are thus in the same situation as in the non-diagonal part ($$\mu \not = \nu$$) of Sect. 2.3.3. The quantity that we wish to calculate is
\begin{aligned} \mathcal {D}(\nu ) = \lim _{i \rightarrow \infty } \mathcal {D}_{i,0,0}(\nu ) = \mathcal {D}_{\nu _1+1,0,0}(\nu ) - t \mathcal {D}_{\nu _1+1,1,1}(\nu ), \end{aligned}
where $$\mathcal {D}_{i,0,0}(\nu ) - t \mathcal {D}_{i,1,1}(\nu )$$ is given by a recurrence of the form (35), but with the trivial initial condition $$\mathcal {D}_{I+1,0,0}(\nu ) - t\mathcal {D}_{I+1,1,1}(\nu ) = 0$$ [by subtracting Eq. (37) from (36)]. Since its initial value is zero, this recurrence has the solution $$\mathcal {D}_{i,0,0}(\nu ) - t \mathcal {D}_{i,1,1}(\nu ) = 0$$ for all $$i > I$$, as we were required to show.

#### 2.4.4 Initial condition

The case $$N=2$$ can be computed explicitly without difficulty. Indeed, we find that
\begin{aligned} \mathcal {G}_2 (x_1,x_2) = (1-t) \sum _{k=0}^{\infty } P_{(k,k)}(x_1,x_2;t) = (1-t) \sum _{k=0}^{\infty } x_1^k x_2^k = \frac{1-t}{1-x_1 x_2}. \end{aligned}

## 3 Identities at Macdonald level

The identities (1)–(3) listed at the start of this paper apply at the level of Hall–Littlewood polynomials. Since Hall–Littlewood polynomials are the $$q=0$$ specialization of Macdonald polynomials, it is natural to suggest that these equations are special cases of yet more general identities involving extra parameters.

In this section, we show that this is indeed the case, by presenting Macdonald analogs of both equations (1) and (2). It turns out that these equations can be generalized by the introduction of two additional parameters, one being the $$q$$ from Macdonald theory. The Macdonald generalization of (1) has been known since the work of Warnaar in [17] and can be proved using Macdonald difference operators. Although we obtain a completely analogous generalization of (2) to Macdonald level, it remains conjectural, since we lack an appropriate family of difference operators to expedite its proof.

As an aside, we remark that we do not know of an appropriate generalization of (3) to Macdonald level, even conjecturally. We are, however, able to deform it by the introduction of certain additional parameters, but we defer this result to Sect. 6 since it does not pertain directly to symmetric polynomials at Macdonald level.

### 3.1 $$u$$-Deformed Macdonald Cauchy identity

Consider the Cauchy identity for Macdonald polynomials [9]:
\begin{aligned} \sum _{\lambda } b_{\lambda }(q,t) P_{\lambda }(x_1,\ldots ,x_n;q,t) P_{\lambda }(y_1,\ldots ,y_n;q,t) = \prod _{i,j=1}^{n} \frac{(t x_i y_j; q)}{(x_i y_j; q)}, \end{aligned}
(38)
where the coefficients $$b_{\lambda }(q,t)$$ are given by
\begin{aligned} b_{\lambda }(q,t) := \prod _{s \in \lambda } b_{\lambda }(s;q,t), \quad \quad b_{\lambda }(s;q,t) := \frac{1-q^{a_{\lambda }(s)} t^{l_{\lambda }(s)+1}}{1-q^{a_{\lambda }(s)+1} t^{l_{\lambda }(s)}}, \end{aligned}
(39)
with the product taken over all boxes $$s$$ in the Young diagram $$\lambda$$, and where $$a_{\lambda }(s)$$ and $$l_{\lambda }(s)$$ denote the arm and leg length of the box $$s$$, respectively. The following theorem can be deduced by acting on the Macdonald Cauchy identity with the generating series
\begin{aligned} D_n(u) = \sum _{r=0}^{n} (-u)^r \sum _{ \begin{array}{c} S \subseteq [n] \\ |S| = r \end{array} } t^{r(r-1)/2} \prod _{\begin{array}{c} i \in S \\ j \not \in S \end{array}} \frac{tx_i - x_j}{x_i - x_j} \prod _{i \in S} T_{q,x_i} \end{aligned}
(40)
of Macdonald difference operators [9]. It was partially discovered in [4] and discussed again in [17] (see Equations (3.2) and (3.3) therein). The fact that the right-hand side is a determinant for all values of the parameter $$u$$ was not made explicit in [17], but the procedure presented therein (for the case $$u=t$$) can be applied mutatis mutandis when $$u$$ is generic. For that reason, we attribute this theorem to Warnaar.
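The coefficient (39) is straightforward to compute directly from the Young diagram. The following sketch does so and verifies, for a sample $$\lambda$$, that the $$q=0$$ specialization reproduces the classical Hall–Littlewood coefficient $$b_{\lambda }(t) = \prod _{i \geqslant 1} \prod _{j=1}^{m_i(\lambda )} (1-t^j)$$:

```python
from collections import Counter

def b_qt(lam, q, t):
    """b_lambda(q,t) of Eq. (39): product over boxes s of
    (1 - q^arm(s) * t^(leg(s)+1)) / (1 - q^(arm(s)+1) * t^leg(s))."""
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])] if lam else []
    prod = 1.0
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1                 # boxes to the right of s = (i, j)
            leg = conj[j] - i - 1             # boxes below s
            prod *= (1 - q**arm * t**(leg + 1)) / (1 - q**(arm + 1) * t**leg)
    return prod

lam, t = (4, 4, 2, 1), 0.37
hl = 1.0
for m in Counter(lam).values():               # b_lambda(t) = prod_i prod_{j<=m_i} (1-t^j)
    for j in range(1, m + 1):
        hl *= 1 - t**j
assert abs(b_qt(lam, 0.0, t) - hl) < 1e-12
```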

### Theorem 3

(Warnaar)
\begin{aligned}&\sum _{\lambda } \prod _{i=1}^{n} (1- u q^{\lambda _i} t^{n-i}) b_{\lambda }(q,t) P_{\lambda }(x_1,\ldots ,x_n;q,t) P_{\lambda }(y_1,\ldots ,y_n;q,t) \\&\quad = \prod _{i,j=1}^{n} \frac{(t x_i y_j; q)}{(x_i y_j; q)} \frac{ \prod _{i,j=1}^{n} (1 \!-\! x_i y_j) }{\prod _{1 \leqslant i<j \leqslant n} (x_i - x_j) (y_i \!-\! y_j)} \det _{1\leqslant i,j \leqslant n} \left[ \frac{1-u + (u-t) x_i y_j}{(1-x_i y_j) (1-t x_i y_j)} \right] . \nonumber \end{aligned}
(41)

### Proof

Acting on both sides of the Cauchy identity (38) with the operator (40), one obtains
\begin{aligned}&\sum _{\lambda } \prod _{i=1}^{n} (1- u q^{\lambda _i} t^{n-i}) b_{\lambda }(q,t) P_{\lambda }(x_1,\ldots ,x_n;q,t) P_{\lambda }(y_1,\ldots ,y_n;q,t) \nonumber \\&\quad = \prod _{i,j=1}^{n} \frac{(t x_i y_j; q)}{(x_i y_j; q)} \sum _{r=0}^{n} \sum _{\begin{array}{c} S \subseteq [n] \\ |S| = r \end{array}} (-u)^r t^{r(r-1)/2} \prod _{\begin{array}{c} i \in S \\ j \not \in S \end{array}} \frac{t x_i - x_j}{x_i - x_j} \prod _{i \in S} \prod _{j=1}^{n} \frac{1 - x_i y_j}{1- t x_i y_j}.\quad \quad \quad \end{aligned}
(42)
It thus suffices to show that
\begin{aligned}&\sum _{r=0}^{n} \sum _{\begin{array}{c} S \subseteq [n] \\ |S| = r \end{array}} (-u)^r t^{r(r-1)/2} \prod _{\begin{array}{c} i \in S \\ j \not \in S \end{array}} \frac{t x_i - x_j}{x_i - x_j} \prod _{i \in S} \prod _{j=1}^{n} \frac{1 - x_i y_j}{1- t x_i y_j} \nonumber \\&\quad = \frac{ \prod _{i,j=1}^{n} (1 - x_i y_j) }{\prod _{1 \leqslant i<j \leqslant n} (x_i - x_j) (y_i - y_j)} \det _{1\leqslant i,j \leqslant n} \left[ \frac{1-u + (u-t) x_i y_j}{(1-x_i y_j) (1-t x_i y_j)} \right] . \end{aligned}
(43)
This can be done using Lagrange interpolation. We let $$\mathcal {L}_n$$ and $$\mathcal {R}_n$$ denote the left- and right-hand sides of (43), having first multiplied this equation by $$\prod _{i,j=1}^{n} (1-tx_i y_j)$$. Both $$\mathcal {L}_n$$ and $$\mathcal {R}_n$$ are polynomials in $$x_n$$ of degree $$n$$, and manifestly symmetric in the variables $$\{y_1,\ldots ,y_n\}$$. We find that $$\mathcal {L}_n$$ satisfies two simple recursion relations:
\begin{aligned} \mathcal {L}_n \Big |_{x_n = 1/y_n}= & {} (1-t) \prod _{i=1}^{n-1} (1-t x_i y_n) \prod _{j=1}^{n-1} (1-t y_j/y_n) \times \prod _{i,j=1}^{n-1} (1-t x_i y_j) \\&\times \sum _{r=0}^{n-1} \sum _{\begin{array}{c} S \subseteq [n-1]\\ |S| = r \end{array}} (-u)^r t^{r(r-1)/2} \prod _{\begin{array}{c} i \in S \\ j \not \in S \end{array}} \left( \frac{t x_i - x_j}{x_i - x_j} \right) \prod _{i \in S} \left( \frac{t x_i - 1/y_n}{x_i - 1/y_n} \right) \\&\times \prod _{i \in S} \prod _{j=1}^{n-1} \left( \frac{1 - x_i y_j}{1- t x_i y_j} \right) \prod _{i \in S} \left( \frac{1 - x_i y_n}{1- t x_i y_n} \right) \\= & {} (1-t) \prod _{i=1}^{n-1} (1-t x_i y_n) \prod _{j=1}^{n-1} (1-t y_j/y_n) \mathcal {L}_{n-1}, \\ \mathcal {L}_n \Big |_{x_n = 1/(t y_n)}= & {} (1-1/t) \prod _{i=1}^{n-1} (1-t x_i y_n) \prod _{j=1}^{n-1} (1-y_j/y_n) \times \prod _{i,j=1}^{n-1} (1-t x_i y_j) \\&\times \sum _{r=1}^{n} \sum _{\begin{array}{c} S \subseteq [n-1] \\ |S| = r-1 \end{array}} (-u)^{r} t^{r(r-1)/2} \prod _{\begin{array}{c} i \in S \\ j \not \in S \end{array}} \left( \frac{t x_i - x_j}{x_i - x_j} \right) \prod _{j \not \in S} \left( \frac{1/y_n - x_j}{1/(ty_n) - x_j} \right) \\&\times \prod _{i \in S} \prod _{j=1}^{n} \left( \frac{1 - x_i y_j}{1- t x_i y_j} \right) \prod _{j=1}^{n-1} \left( \frac{1 - y_j/(t y_n)}{1-y_j/y_n} \right) \\= & {} u t^{n-1} (1/t-1) \prod _{i=1}^{n-1} (1- x_i y_n) \prod _{j=1}^{n-1} (1-y_j/(t y_n)) \mathcal {L}_{n-1}. \end{aligned}
Identical recursion relations are satisfied by $$\mathcal {R}_n$$. Thanks to the symmetry in $$\{y_1,\ldots ,y_n\}$$, these recursion relations prove that $$\mathcal {L}_n = \mathcal {R}_n$$ at $$2n$$ values of $$x_n$$ (more than sufficient for Lagrange interpolation), provided that they agree for $$n=1$$. It is immediate from their definitions that $$\mathcal {L}_1 = 1-u+(u-t) x_1 y_1 = \mathcal {R}_1$$. $$\square$$
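The interpolation argument can be supplemented by a direct numerical check of (43) at $$n=2$$; a sketch, with sample values chosen for illustration and the determinant written out explicitly for the $$2\times 2$$ case:

```python
from itertools import combinations

def lhs43(x, y, u, t):
    # Sum over subsets S of [n], as on the left-hand side of (43).
    n = len(x)
    total = 0.0
    for r in range(n + 1):
        for S in combinations(range(n), r):
            term = (-u) ** r * t ** (r * (r - 1) // 2)
            for i in S:
                for j in range(n):
                    if j not in S:
                        term *= (t * x[i] - x[j]) / (x[i] - x[j])
            for i in S:
                for j in range(n):
                    term *= (1 - x[i] * y[j]) / (1 - t * x[i] * y[j])
            total += term
    return total

def rhs43(x, y, u, t):
    # Determinant expression on the right-hand side of (43), for n = 2.
    n = len(x)
    a = [[(1 - u + (u - t) * x[i] * y[j]) / ((1 - x[i] * y[j]) * (1 - t * x[i] * y[j]))
          for j in range(n)] for i in range(n)]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    pref = 1.0
    for i in range(n):
        for j in range(n):
            pref *= 1 - x[i] * y[j]
    return pref / ((x[0] - x[1]) * (y[0] - y[1])) * det

x, y, u, t = [0.23, 0.55], [0.34, 0.72], 0.45, 0.61
assert abs(lhs43(x, y, u, t) - rhs43(x, y, u, t)) < 1e-9
```

At $$u=0$$ both sides reduce to $$1$$, via the Cauchy determinant evaluation, which gives a quick independent consistency check.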

### 3.2 $$u$$-Deformed Macdonald Littlewood identity

Throughout this subsection, let $$N=2n$$. Consider the “even columns” Littlewood identity for Macdonald polynomials (see Example 4, Section 7, Chapter VI of [9]):
\begin{aligned} \sum _{\lambda '\ \mathrm{even}} b^\mathrm{el}_{\lambda }(q,t) P_{\lambda }(x_1,\ldots ,x_N;q,t) = \prod _{1 \leqslant i<j \leqslant N} \frac{(t x_i x_j;q)}{(x_i x_j;q)}, \end{aligned}
(44)
where the coefficients $$b^\mathrm{el}_{\lambda }(q,t)$$ are given by
\begin{aligned} b^\mathrm{el}_{\lambda }(q,t) := \prod _{\begin{array}{c} s \in \lambda \\ l_{\lambda }(s)\ \text {even} \end{array}} b_{\lambda }(s;q,t), \end{aligned}
(45)
with the product of the functions $$b_{\lambda }(s;q,t)$$ taken only over boxes $$s$$ which have an even leg length. The following conjecture is motivated by (44). Although we do not have a proof of this conjecture, it is tempting to suggest that it can be deduced by acting on the Littlewood identity (44) with an appropriate family of difference operators, in much the same way that Theorem 3 follows from the Cauchy identity (38). A preliminary step in this direction is given in the remark following the conjecture.
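As with (39), the coefficient (45) can be computed directly from the diagram; the following sketch checks, for a sample even-columned $$\lambda$$, that the $$q=0$$ specialization reproduces $$b^\mathrm{el}_{\lambda }(t) = \prod _{i \geqslant 1} \prod _{j=2,4,\ldots }^{m_i(\lambda )} (1-t^{j-1})$$ from Sect. 2.4.1:

```python
from collections import Counter

def b_el_qt(lam, q, t):
    """b^el_lambda(q,t) of Eq. (45): product of b_lambda(s;q,t) over
    boxes s with even leg length."""
    conj = [sum(1 for p in lam if p > j) for j in range(lam[0])] if lam else []
    prod = 1.0
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1
            leg = conj[j] - i - 1
            if leg % 2 == 0:
                prod *= (1 - q**arm * t**(leg + 1)) / (1 - q**(arm + 1) * t**leg)
    return prod

lam, t = (3, 3, 1, 1), 0.42              # lam' = (4, 2, 2) is even
hl = 1.0
for m in Counter(lam).values():          # prod_i prod_{j=2,4,...}^{m_i} (1 - t^(j-1))
    for j in range(2, m + 1, 2):
        hl *= 1 - t**(j - 1)
assert abs(b_el_qt(lam, 0.0, t) - hl) < 1e-12
```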

### Conjecture 1

\begin{aligned}&\sum _{\lambda ' \ \mathrm{even}} \ \prod _{i\ \mathrm{even}}^{N} (1-u q^{\lambda _i} t^{N-i} ) b^\mathrm{el}_{\lambda }(q,t) P_{\lambda }(x_1,\ldots ,x_{N};q,t) \nonumber \\&\quad = \prod _{1 \leqslant i<j \leqslant N} \frac{(t x_i x_j;q)}{(x_i x_j;q)} \prod _{1 \leqslant i<j \leqslant N} \frac{(1-x_i x_j)}{(x_i - x_j)} \nonumber \\&\quad \quad \times \, \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant N} \left[ \frac{(x_i - x_j) (1 - u + (u-t) x_i x_j)}{(1-x_i x_j) (1-t x_i x_j)} \right] . \end{aligned}
(46)

### Remark 3

One can express the Pfaffian on the right-hand side of (46) as a sum over subsets of $$\{1,\ldots ,N\}$$, as in the following lemma.

### Lemma 1

\begin{aligned}&\sum _{r=0}^{n} (-u)^r \sum _{ \begin{array}{c} S \subseteq [N] \\ |S| = 2r \end{array} } t^{r(r-1)} \prod _{\begin{array}{c} i \in S \\ j \not \in S \end{array}} \frac{1 - x_i x_j}{x_i - x_j} \prod _{\begin{array}{c} i<j \\ i,j \in S \end{array}} \frac{1-x_i x_j}{1-t x_i x_j} \\&\quad = \prod _{1 \leqslant i<j \leqslant N} \frac{(1-x_i x_j)}{(x_i - x_j)} \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant N} \left[ \frac{(x_i - x_j) (1 - u + (u-t) x_i x_j)}{(1-x_i x_j) (1-t x_i x_j)} \right] . \nonumber \end{aligned}
(47)

### Proof

The proof proceeds along analogous lines to the proof of Theorem 3. We let $$\mathcal {L}_N$$ and $$\mathcal {R}_N$$ denote the left- and right-hand sides of (47), after it is multiplied by $$\prod _{1\leqslant i<j \leqslant N}(1-t x_i x_j)$$. Both $$\mathcal {L}_N$$ and $$\mathcal {R}_N$$ are polynomials in $$x_{N}$$ of degree $$N-1$$, and symmetric in the set of variables $$\{x_1,\ldots ,x_{N}\}$$. $$\mathcal {L}_{N}$$ satisfies the following two recursion relations:
\begin{aligned} \mathcal {L}_{N} \Big |_{x_{N} = \bar{x}_{N-1}}= & {} (1-t) \prod _{i=1}^{N-2} (1-tx_i x_{N-1}) (1-tx_i \bar{x}_{N-1})\\&\sum _{r=0}^{n-1}\sum _{\begin{array}{c} S \subseteq [N-2] \\ |S| = 2r \end{array}} (-u)^r t^{r(r-1)} \prod _{\begin{array}{c} i \in S \\ j \not \in S \end{array}} \left( \frac{1 - x_i x_j}{x_i - x_j} \right) \prod _{\begin{array}{c} i<j \\ i,j \in S \end{array}} \left( \frac{1-x_i x_j}{1-t x_i x_j} \right) \\= & {} (1-t) \prod _{i=1}^{N-2} (1-tx_i x_{N-1}) (1-tx_i \bar{x}_{N-1}) \mathcal {L}_{N-2}, \\&\mathcal {L}_{N} \Big |_{x_{N} = \bar{x}_{N-1}/t} = (1-1/t) \prod _{i=1}^{N-2} (1-tx_i x_{N-1}) (1-x_i \bar{x}_{N-1}) \\&\times \sum _{r=1}^{n} \sum _{\begin{array}{c} S \subseteq [N-2] \\ |S| = 2r-2 \end{array}} (-u)^r t^{r(r-1)} \prod _{\begin{array}{c} i \in S \\ j \not \in S \end{array}} \left( \frac{1-x_i x_j}{x_i-x_j} \right) \\&\times \prod _{j\not \in S} \left( \frac{1-x_{N-1} x_j}{x_{N-1}-x_j} \right) \left( \frac{1-\bar{x}_{N-1} x_j/t}{\bar{x}_{N-1}/t-x_j} \right) \\&\times \, \prod _{1\leqslant i<j \leqslant N-2} \left( \frac{1-x_i x_j}{1-t x_i x_j} \right) \prod _{i \in S} \left( \frac{1-x_i x_{N-1}}{1-t x_i x_{N-1}} \right) \left( \frac{1-x_i \bar{x}_{N-1}/t}{1-x_i \bar{x}_{N-1}} \right) \\= & {} -u t^{N-2} (1-1/t) \prod _{i=1}^{N-2} (1- x_i x_{N-1}) (1- x_i \bar{x}_{N-1}/t) \mathcal {L}_{N-2}. \end{aligned}
It is straightforward to show that both of these recursion relations are satisfied by $$\mathcal {R}_{N}$$. Due to the symmetry in $$\{x_1,\ldots ,x_{N}\}$$, the recursion relations fix the values of $$\mathcal {L}_{N}$$ and $$\mathcal {R}_{N}$$ at $$2N-2$$ points, which exceeds their degree $$N-1$$ as polynomials in $$x_{N}$$; hence, by induction on $$N$$, we conclude that $$\mathcal {L}_{N} = \mathcal {R}_{N}$$, provided that they agree for $$N=2$$. It is clear from their definitions that $$\mathcal {L}_2 = (1-t x_1 x_2) - u (1-x_1 x_2) = \mathcal {R}_2$$. $$\square$$
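Since (47) is an identity of rational functions, it can also be verified independently for small $$N$$ by direct computation. The following sketch, not part of the original argument, checks the $$N=4$$ case in exact rational arithmetic; the Pfaffian is evaluated by recursive expansion along its first row.

```python
from fractions import Fraction
from itertools import combinations

def pfaffian(A):
    # Pfaffian of an even-dimensional antisymmetric matrix,
    # via recursive expansion along the first row.
    n = len(A)
    if n == 0:
        return Fraction(1)
    total = Fraction(0)
    for j in range(1, n):
        sign = (-1) ** (j - 1)
        idx = [k for k in range(1, n) if k != j]
        minor = [[A[r][c] for c in idx] for r in idx]
        total += sign * A[0][j] * pfaffian(minor)
    return total

def lhs(x, t, u):
    # left-hand side of (47): sum over even subsets S of [N]
    N = len(x)
    total = Fraction(0)
    for r in range(N // 2 + 1):
        for S in combinations(range(N), 2 * r):
            Sset = set(S)
            term = (-u) ** r * t ** (r * (r - 1))
            for i in S:
                for j in range(N):
                    if j not in Sset:
                        term *= (1 - x[i] * x[j]) / (x[i] - x[j])
            for a, b in combinations(S, 2):
                term *= (1 - x[a] * x[b]) / (1 - t * x[a] * x[b])
            total += term
    return total

def rhs(x, t, u):
    # right-hand side of (47): prefactor times a Pfaffian
    N = len(x)
    pref = Fraction(1)
    for i, j in combinations(range(N), 2):
        pref *= (1 - x[i] * x[j]) / (x[i] - x[j])
    A = [[Fraction(0)] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j:
                A[i][j] = ((x[i] - x[j]) * (1 - u + (u - t) * x[i] * x[j])
                           / ((1 - x[i] * x[j]) * (1 - t * x[i] * x[j])))
    return pref * pfaffian(A)

x = [Fraction(1, 3), Fraction(1, 5), Fraction(2, 7), Fraction(3, 11)]
t, u = Fraction(1, 2), Fraction(2, 3)
assert lhs(x, t, u) == rhs(x, t, u)
```

Because the arithmetic is exact, agreement here is an identity of rational numbers, not a floating-point coincidence.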

The expression of the Pfaffian appearing on the right-hand side of (46) as a sum over subsets of $$\{1,\ldots ,N\}$$, as achieved by Eq. (47), would seem to be an important step toward the proof of (46). Indeed, the analogous result (42) was crucial in the proof of (41), since the type of sum arising in that case was manifestly related to the generating series (40) of difference operators.

Nevertheless, we do not yet know of a family of operators whose action on the right-hand side of the Littlewood identity (44) produces the sum in (47). The discovery of such operators would not only complete the proof of (46), but would also constitute an important development in the theory of Macdonald polynomials in its own right.

## 4 $$u$$-Deformed Cauchy identity and half-turn symmetric alternating sign matrices

The aim of this section is to study the $$q=0$$ specialization of Eq. (41) and its relation with the six-vertex model. In particular, we will study the six-vertex model on a lattice with half-turn symmetry and calculate its partition function as a product of two determinants following Kuperberg [8]. One of the determinants is precisely the domain wall partition function (the right-hand side of (1)), while the remaining determinant is equal to the right-hand side of (41) with $$q=0$$ and $$u=-\sqrt{t}$$.

### 4.1 Six-vertex model in the bulk

We begin with some preliminary material on the six-vertex model. We consider lattices formed by the intersection of horizontal and vertical lines. The points of intersection form vertices, and each of the four edges surrounding a vertex is assigned an arrow, such that exactly two arrows point toward, and two away from, the point of intersection. This gives rise to six legal vertices, which are shown in Fig. 1.
To each horizontal (respectively, vertical) line of the lattice one associates an orientation and a variable $$x_i$$ (respectively, $$y_j$$), called a rapidity. The six types of vertex are assigned Boltzmann weights, which are rational functions depending on the ratio $$x/y$$ of the rapidities incident on the vertex:
\begin{aligned}&a_{\pm }(x,y) = \frac{1-t x/y}{1-x/y}, \quad \quad b_{\pm }(x,y) = \sqrt{t}, \nonumber \\&c_{+}(x,y) = \frac{(1-t)}{1-x/y}, \quad \quad c_{-}(x,y) = \frac{(1-t) x/y}{1-x/y}. \end{aligned}
(48)
Note that in order to correctly determine the Boltzmann weight of a vertex, it is necessary to place the vertex in some canonical orientation, which means rotating the vertex such that its lines are oriented from left to right and from bottom to top. The crucial feature of the Boltzmann weights thus chosen is that they satisfy the Yang–Baxter equation, shown in Fig. 2. The Yang–Baxter equation plays an essential role in the partition functions to be studied, since it ensures that these functions are symmetric in their rapidity variables. Without this fact, it would clearly not be possible to expand these objects with respect to Hall–Littlewood polynomials.
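As an illustration, the Yang–Baxter equation for the weights (48) can be checked numerically. The sketch below assumes a standard matrix convention not fixed by the text: the $$R$$-matrix acts on $$\mathbb {C}^2 \otimes \mathbb {C}^2$$ in the basis $$|00\rangle ,|01\rangle ,|10\rangle ,|11\rangle$$, with the $$a_{\pm }$$ weights on the diagonal and $$c_{\pm }$$ in the off-diagonal block.

```python
import numpy as np

def R(x, y, t):
    # Six-vertex R-matrix built from the weights (48); the placement of
    # c_+ / c_- within the matrix is an assumed convention.
    z = x / y
    a = (1 - t * z) / (1 - z)
    b = np.sqrt(t)
    cp = (1 - t) / (1 - z)
    cm = (1 - t) * z / (1 - z)
    return np.array([[a, 0, 0, 0],
                     [0, b, cp, 0],
                     [0, cm, b, 0],
                     [0, 0, 0, a]])

I2 = np.eye(2)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

def R12(M):
    return np.kron(M, I2)          # M acting on spaces 1 and 2

def R23(M):
    return np.kron(I2, M)          # M acting on spaces 2 and 3

def R13(M):
    P23 = np.kron(I2, SWAP)        # embed M on spaces 1 and 3
    return P23 @ np.kron(M, I2) @ P23

# generic positive rapidities and crossing parameter
x, y, w, t = 0.3, 0.7, 1.9, 0.4
lhs = R12(R(x, y, t)) @ R13(R(x, w, t)) @ R23(R(y, w, t))
rhs = R23(R(y, w, t)) @ R13(R(x, w, t)) @ R12(R(x, y, t))
assert np.allclose(lhs, rhs)
```

The asymmetric splitting of the $$c_{\pm }$$ weights in (48) is a diagonal gauge transformation of the symmetric six-vertex $$R$$-matrix, which is why the check passes for any generic choice of rapidities.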

### 4.2 Partition function on half-turn symmetric lattice

We turn our attention to the six-vertex model under domain wall boundary conditions, with half-turn (or $$180^{\circ }$$ rotational) symmetry imposed—see Fig. 3. Configurations on this lattice are in one-to-one correspondence with half-turn symmetric alternating sign matrices [8].

### Lemma 2

The partition function $$Z^{(n)}_\mathrm{HT} = Z_\mathrm{HT}(x_1,\ldots ,x_n;y_1,\ldots ,y_n;t)$$ as defined in Fig. 3 satisfies four properties:
1.

Multiplying by $$\prod _{i,j=1}^{n} (1-x_i y_j)^2$$, it is a polynomial in $$x_n$$ of degree $$2n-1$$.

2.

It is symmetric in $$\{y_1,\ldots ,y_n\}$$.

3.
It obeys the recursion relations
\begin{aligned} Z^{(n)}_\mathrm{HT} \Big |_{x_n = \bar{y}_n/t}&= -t^{2n-1/2} Z^{(n-1)}_\mathrm{HT}, \end{aligned}
(49)
\begin{aligned} \lim _{x_n \rightarrow \bar{y}_n} \Big ( (1-x_n y_n)^2 Z^{(n)}_\mathrm{HT} \Big )&= (1-t)^2 \prod _{i=1}^{n-1} \frac{(1-ty_i \bar{y}_n)^2}{(1-y_i \bar{y}_n)^2} \frac{(1-tx_i y_n)^2}{(1-x_i y_n)^2} Z^{(n-1)}_\mathrm{HT}. \end{aligned}
(50)
4.
When $$n=1$$, it is given explicitly by
\begin{aligned} Z^{(1)}_\mathrm{HT} = \frac{(1-t)(1+\sqrt{t})(1-\sqrt{t} x_1 y_1)}{(1-x_1 y_1)^2}. \end{aligned}

### Proof

Working directly from the lattice definition in Fig. 3, we demonstrate these properties one by one.
1.

Multiplying the partition function by $$\prod _{i,j=1}^{n} (1-x_i y_j)^2$$ is equivalent to multiplying each individual Boltzmann weight by $$(1-x_i y_j)$$. After this renormalization, it is clear that every Boltzmann weight is a degree-1 polynomial in $$x_i$$, with the sole exception of the $$c_{+}$$ vertex (which is a constant). Focusing attention on the top and bottom rows of the lattice in Fig. 3, which are the only places with dependence on $$x_n$$, one can easily deduce that exactly one $$c_{+}$$ vertex occurs in these two rows. It follows that the renormalized partition function is a polynomial in $$x_n$$ of degree $$2n-1$$.

2.

Symmetry in the $$y$$ variables is deduced using a standard argument involving the Yang–Baxter equation (see, for example, [5]). Indeed, any two adjacent vertical lines can be exchanged using this procedure.

3.
Setting $$x_n = \bar{y}_n/t$$ eliminates the possibility that the top-left vertex of the lattice is an $$a_{+}$$ vertex. It follows that it must be a $$c_{+}$$ vertex. This forces a subset of the vertices into a frozen configuration, shown on the left of Fig. 4. Studying the Boltzmann weights of the frozen region, we find that they contribute the total factor $$-t(\sqrt{t})^{4n-3} = -t^{2n-1/2}$$. The remaining (unfrozen) region is just $$Z^{(n-1)}_\mathrm{HT}$$. Hence, we recover the first recursion relation (49). Multiplying by $$(1-x_n y_n)^2$$ and taking $$x_n \rightarrow \bar{y}_n$$ eliminates the possibility that the bottom-left vertex of the lattice is a $$b_{+}$$ vertex. It must therefore be a $$c_{+}$$ vertex. Once again, this forces a subset of the vertices to freeze out, as is shown on the right of Fig. 4. The Boltzmann weights of these frozen vertices contribute the total factor
\begin{aligned} (1-t)^2 \prod _{i=1}^{n-1} \frac{(1-ty_i \bar{y}_n)^2}{(1-y_i \bar{y}_n)^2} \frac{(1-tx_i y_n)^2}{(1-x_i y_n)^2}, \end{aligned}
while the unfrozen region again represents $$Z^{(n-1)}_\mathrm{HT}$$. This yields the second recursion relation (50).
4.
The $$n=1$$ case is small enough to be calculated explicitly: summing over the two possible configurations of the lattice and substituting the Boltzmann weights, we obtain
\begin{aligned} Z^{(1)}_\mathrm{HT} = \frac{(1-t) \sqrt{t}}{1-x_1 y_1} + \frac{1-tx_1 y_1}{1-x_1 y_1} \frac{(1-t)}{1-x_1 y_1} = \frac{(1-t)(1+\sqrt{t})(1-\sqrt{t} x_1 y_1)}{(1-x_1 y_1)^2}. \end{aligned}
$$\square$$
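The $$n=1$$ evaluation can be confirmed in exact arithmetic by choosing $$t$$ to be the square of a rational number, so that $$\sqrt{t}$$ is itself rational. A minimal sketch, with the two-term sum copied from the displayed equation:

```python
from fractions import Fraction

def z_ht_1(s, x, y):
    # sum over the two configurations of the n = 1 half-turn lattice;
    # s stands for sqrt(t), so t = s^2 is an exact rational
    t = s * s
    return ((1 - t) * s / (1 - x * y)
            + (1 - t * x * y) * (1 - t) / (1 - x * y) ** 2)

def closed_form(s, x, y):
    # the claimed factorized form of Z^{(1)}_HT
    t = s * s
    return (1 - t) * (1 + s) * (1 - s * x * y) / (1 - x * y) ** 2

s, x, y = Fraction(1, 2), Fraction(1, 3), Fraction(2, 5)
assert z_ht_1(s, x, y) == closed_form(s, x, y)
```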

### Theorem 4

(Kuperberg) The partition function on the half-turn symmetric lattice is given by a product of determinants:
\begin{aligned}&Z_\mathrm{HT}(x_1,\ldots ,x_n; y_1,\ldots ,y_n;t) = \frac{\prod _{i,j=1}^{n} (1-t x_i y_j)^2}{\prod _{1\leqslant i<j \leqslant n} (x_i-x_j)^2 (y_i-y_j)^2} \nonumber \\&\quad \times \, \det _{1 \leqslant i,j \leqslant n} \left[ \frac{(1-t)}{(1-x_i y_j)(1-t x_i y_j)}\right] \det _{1 \leqslant i,j \leqslant n}\left[ \frac{1+\sqrt{t}-(\sqrt{t}+t) x_i y_j}{(1-x_i y_j)(1-t x_i y_j)} \right] .\quad \quad \quad \end{aligned}
(51)

### Proof

Taking the expression (51) as an Ansatz for the partition function, it is straightforward to show that it satisfies the four properties of Lemma 2. Furthermore, these properties uniquely determine the partition function, by the usual arguments of Lagrange interpolation. $$\square$$

### Corollary 1

The Cauchy identity for Schur polynomials can be doubly refined, by the introduction of two deformation parameters $$t$$ and $$u$$:
\begin{aligned}&\sum _{\lambda } \prod _{i=1}^{n} (1 - u t^{\lambda _i - i + n}) s_{\lambda }(x_1,\ldots ,x_n) s_{\lambda }(y_1,\ldots ,y_n) \nonumber \\&\quad = \frac{1}{\Delta (x) \Delta (y)} \det _{1 \leqslant i,j \leqslant n} \left[ \frac{1-u + (u-t) x_i y_j}{(1-x_i y_j) (1-t x_i y_j)} \right] . \end{aligned}
(52)
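Identity (52) can be spot-checked numerically: for rapidities inside the unit disk, the left-hand side converges rapidly, so truncating the sum over partitions suffices. The sketch below, not part of the original argument, treats the case $$n=2$$, computing $$s_{\lambda }$$ from the bialternant formula in two variables.

```python
import numpy as np

def schur_2(lam1, lam2, x1, x2):
    # bialternant formula for s_(lam1, lam2)(x1, x2)
    return (x1**(lam1 + 1) * x2**lam2 - x2**(lam1 + 1) * x1**lam2) / (x1 - x2)

def lhs(x, y, t, u, K=80):
    # truncated sum over partitions lam1 >= lam2 >= 0, with the
    # deformation factor (1 - u t^(lam_i - i + n)) for n = 2
    total = 0.0
    for l1 in range(K + 1):
        for l2 in range(l1 + 1):
            coeff = (1 - u * t**(l1 + 1)) * (1 - u * t**l2)
            total += coeff * schur_2(l1, l2, *x) * schur_2(l1, l2, *y)
    return total

def rhs(x, y, t, u):
    M = np.array([[(1 - u + (u - t) * xi * yj)
                   / ((1 - xi * yj) * (1 - t * xi * yj))
                   for yj in y] for xi in x])
    return np.linalg.det(M) / ((x[0] - x[1]) * (y[0] - y[1]))

x, y, t, u = (0.2, 0.1), (0.15, 0.05), 0.3, 0.7
assert abs(lhs(x, y, t, u) - rhs(x, y, t, u)) < 1e-10
```

The truncation error is of order $$(x_1 y_1)^{K}$$, far below the stated tolerance.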

### Corollary 2

The Cauchy identity for Hall–Littlewood polynomials can be refined by the introduction of a single deformation parameter $$u$$:
\begin{aligned}&\sum _{\lambda } \prod _{i=1}^{m_0(\lambda )} (1 - u t^{i-1}) b_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_n;t) P_{\lambda }(y_1,\ldots ,y_n;t) \nonumber \\&\quad = \frac{\prod _{i,j=1}^{n} (1- t x_i y_j)}{\Delta (x) \Delta (y)} \det _{1 \leqslant i,j \leqslant n} \left[ \frac{1-u + (u-t) x_i y_j}{(1-x_i y_j) (1-t x_i y_j)} \right] . \end{aligned}
(53)

These identities are recovered as the special cases $$q=t$$ and $$q=0$$ of Theorem 3. In view of the fact that the Schur polynomials on the left-hand side of (52) are determinants, it is possible to prove (52) by completely elementary means, via the Cauchy–Binet identity. On the other hand, (53) remains a highly non-trivial identity, admitting no simple proof (that we know of) outside of the use of Macdonald difference operators, or a Lagrange interpolation style of proof similar to that of Sect. 2.3.

One can consider further specializations of (53), by setting the free parameter $$u$$ to various values. Setting $$u=0$$ produces the Cauchy identity for Hall–Littlewood polynomials, while setting $$u=t$$ reproduces Eq. (1). Finally, in the case $$u=-\sqrt{t}$$, we recover one half of the factors in Eq. (51) for $$Z_\mathrm{HT}$$ (the remaining factors being those of the domain wall partition function).
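Identity (53) admits the same kind of numerical check at $$n=2$$, using the symmetrization formula $$P_{\lambda }(x_1,x_2;t) = v_{\lambda }(t)^{-1} \sum _{w \in S_2} w\big ( x_1^{\lambda _1} x_2^{\lambda _2} (x_1-t x_2)/(x_1-x_2) \big )$$ for two-variable Hall–Littlewood polynomials, where $$v_{\lambda }(t) = 1+t$$ for equal parts and $$1$$ otherwise. A sketch:

```python
import numpy as np

def hl_2(a, b, x1, x2, t):
    # Hall-Littlewood P_(a,b)(x1, x2; t) via the symmetrization formula;
    # the normalization v = 1 + t applies only when the parts coincide
    p = (x1**a * x2**b * (x1 - t * x2)
         - x2**a * x1**b * (x2 - t * x1)) / (x1 - x2)
    return p / (1 + t) if a == b else p

def b_coeff(a, b, t):
    # b_lambda(t) = prod_{i >= 1} prod_{j=1}^{m_i} (1 - t^j) for lambda = (a, b)
    if a == b:
        return 1.0 if a == 0 else (1 - t) * (1 - t**2)
    return (1 - t) if b == 0 else (1 - t)**2

def u_coeff(a, b, t, u):
    # prod_{i=1}^{m_0} (1 - u t^(i-1)) with m_0(lambda) = 2 - ell(lambda)
    ell = (a > 0) + (b > 0)
    return {0: (1 - u) * (1 - u * t), 1: 1 - u, 2: 1.0}[ell]

def lhs(x, y, t, u, K=80):
    total = 0.0
    for a in range(K + 1):
        for b in range(a + 1):
            total += (u_coeff(a, b, t, u) * b_coeff(a, b, t)
                      * hl_2(a, b, x[0], x[1], t) * hl_2(a, b, y[0], y[1], t))
    return total

def rhs(x, y, t, u):
    pref = np.prod([1 - t * xi * yj for xi in x for yj in y])
    M = np.array([[(1 - u + (u - t) * xi * yj)
                   / ((1 - xi * yj) * (1 - t * xi * yj))
                   for yj in y] for xi in x])
    return pref * np.linalg.det(M) / ((x[0] - x[1]) * (y[0] - y[1]))

x, y, t, u = (0.2, 0.1), (0.15, 0.05), 0.3, 0.7
assert abs(lhs(x, y, t, u) - rhs(x, y, t, u)) < 1e-10
```

Setting `u = 0` or `u = t` in this check reproduces the two specializations discussed above.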

## 5 $$u$$-Deformed Littlewood identity and doubly off-diagonally symmetric alternating sign matrices

This section proceeds largely in parallel with the previous one. The goal here is to study the $$q=0$$ specialization of the conjecture (46) and its connection with the six-vertex model. The relevant domain in this case is the doubly off-diagonally symmetric lattice, whose configurations are in one-to-one correspondence with off-diagonally/off-anti-diagonally symmetric alternating sign matrices [8].

### 5.1 Corner vertices

In this section, it is necessary to introduce boundary vertices, which consist of a single lattice line making a turn through a node. The type of boundary vertices of interest to us are the corner vertices, shown in Fig. 5.
Together with the bulk vertices of Fig. 1, the corner vertices satisfy the corner reflection equations shown in Fig. 6. The corner reflection equations, in conjunction with the regular Yang–Baxter equation, ensure that the $$Z_\mathrm{OO}$$ partition function that we subsequently study is symmetric in its rapidities.

### 5.2 Partition function on doubly off-diagonally symmetric lattice

We now study the six-vertex model on a doubly off-diagonally symmetric domain, as shown in Fig. 7. The corresponding alternating sign matrices have off-diagonal/off-anti-diagonal symmetry [8].

### Lemma 3

The partition function $$Z^{(2n)}_\mathrm{OO} = Z_\mathrm{OO}(x_1,\ldots ,x_{2n};t)$$ as defined in Fig. 7 satisfies four properties:
1.

Multiplying by $$\prod _{1\leqslant i<j \leqslant 2n} (1-x_i x_j)^2$$, it is a polynomial in $$x_{2n}$$ of degree $$4n-3$$.

2.

It is symmetric in $$\{x_1,\ldots ,x_{2n}\}$$.

3.
It obeys the recursion relations
\begin{aligned}&Z^{(2n)}_\mathrm{OO} \Big |_{x_{2n} = \bar{x}_{2n-1}/t} = - t^{4n-5/2} Z^{(2n-2)}_\mathrm{OO}, \end{aligned}
(54)
\begin{aligned}&\lim _{x_{2n} \rightarrow \bar{x}_{2n-1}} \Big ( (1-x_{2n-1} x_{2n})^2 Z^{(2n)}_\mathrm{OO} \Big ) \nonumber \\&\quad = (1-t)^2 \prod _{i=1}^{2n-2} \frac{(1-tx_i \bar{x}_{2n-1})^2}{(1-x_i \bar{x}_{2n-1})^2} \frac{(1-tx_i x_{2n-1})^2}{(1-x_i x_{2n-1})^2} Z^{(2n-2)}_\mathrm{OO}. \end{aligned}
(55)
4.
When $$n=1$$, it is given explicitly by
\begin{aligned} Z^{(2)}_\mathrm{OO} = \frac{(1-t)(1+\sqrt{t})(1 - \sqrt{t} x_1 x_2)}{(1-x_1 x_2)^2}. \end{aligned}

### Proof

The proof is again based closely on the lattice definition of $$Z_\mathrm{OO}$$.
1.

Multiplying the partition function by $$\prod _{1\leqslant i<j \leqslant 2n} (1-x_i x_j)^2$$ is equivalent to renormalizing every vertex by $$(1-x_i x_j)$$. This makes all Boltzmann weights degree-1 polynomials in $$x_i$$, with the sole exception of the $$c_{+}$$ vertex (which is a constant). Examining the left-most vertical line of Fig. 7, which gives rise to all $$x_{2n}$$ dependence, we see that exactly one $$c_{+}$$ vertex will occur on this line. Hence, the renormalized partition function is a polynomial in $$x_{2n}$$ of degree $$4n-3$$.

2.

Symmetry in the $$x$$ variables can be deduced using both the Yang–Baxter equation and the two corner reflection equations in Fig. 6. These equations, in combination, allow for any two lattice lines bearing the labels $$x_i$$ and $$x_j$$ to be interchanged.

3.
Consider the top-most bulk vertex in Fig. 7. Setting $$x_{2n} = \bar{x}_{2n-1}/t$$ rules out the possibility that this is an $$a_{+}$$ vertex. It must therefore be a $$c_{+}$$ vertex, and this causes a subset of the vertices to be in a frozen configuration, as shown on the left of Fig. 8. The total contribution from these frozen vertices is the weight $$-t (\sqrt{t})^{8n-7} = -t^{4n-5/2}$$, while the surviving region is simply $$Z^{(2n-2)}_\mathrm{OO}$$. Hence, we obtain Eq. (54). A similar argument applies to the bottom-most bulk vertex in Fig. 7. After multiplying by $$(1-x_{2n-1} x_{2n})^2$$ and sending $$x_{2n} \rightarrow \bar{x}_{2n-1}$$, this cannot be a $$b_{+}$$ vertex, meaning that it must be a $$c_{+}$$ vertex. This causes some of the vertices to freeze, as shown on the right of Fig. 8, and they contribute a total weight of
\begin{aligned} (1-t)^2 \prod _{i=1}^{2n-2} \frac{(1-tx_i \bar{x}_{2n-1})^2}{(1-x_i \bar{x}_{2n-1})^2} \frac{(1-tx_i x_{2n-1})^2}{(1-x_i x_{2n-1})^2}, \end{aligned}
with the non-frozen part of the lattice representing $$Z^{(2n-2)}_\mathrm{OO}$$. Hence, we recover Eq. (55).
4.
Calculating the $$n=1$$ case explicitly, by substituting the expressions for the Boltzmann weights, we obtain
\begin{aligned} Z^{(2)}_\mathrm{OO} = \frac{(1-t)\sqrt{t}}{1-x_1 x_2} + \frac{1-tx_1 x_2}{1-x_1 x_2} \frac{(1-t)}{1-x_1 x_2} = \frac{(1-t)(1+\sqrt{t})(1 - \sqrt{t} x_1 x_2)}{(1-x_1 x_2)^2}. \end{aligned}
$$\square$$

### Theorem 5

(Kuperberg) The partition function on the doubly off-diagonally symmetric lattice is given by a product of Pfaffians:
\begin{aligned} Z_\mathrm{OO}(x_1,\ldots ,x_{2n};t)= & {} \prod _{1 \leqslant i<j \leqslant 2n} \frac{(1-t x_i x_j)^2}{(x_i-x_j)^2} \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \frac{(1-t)(x_i-x_j)}{(1-x_i x_j)(1-t x_i x_j)} \right] \nonumber \\&\times \, \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \frac{(1+\sqrt{t} - (\sqrt{t} + t) x_i x_j)(x_i-x_j)}{(1-x_i x_j)(1-t x_i x_j)} \right] . \end{aligned}
(56)

### Proof

One needs only to check that (56) satisfies the four properties of Lemma 3, since these properties uniquely determine $$Z_\mathrm{OO}$$. $$\square$$

### 5.3 $$u$$-Deformed Littlewood identity at Schur and Hall–Littlewood level

In this subsection, we will refer to the following Littlewood identities for Schur and Hall–Littlewood polynomials [9]:
\begin{aligned} \sum _{\lambda '\ \text {even}} s_{\lambda }(x_1,\ldots ,x_N)&= \prod _{1 \leqslant i<j \leqslant N} \left( \frac{1}{1-x_i x_j} \right) , \end{aligned}
(57)
\begin{aligned} \sum _{\lambda '\ \text {even}} b^\mathrm{el}_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_N;t)&= \prod _{1 \leqslant i<j \leqslant N} \left( \frac{1-t x_i x_j}{1-x_i x_j} \right) , \end{aligned}
(58)
where as always we take $$N=2n$$.

### Theorem 6

The Littlewood identity (57) for Schur polynomials can be doubly refined, by the introduction of two deformation parameters $$t$$ and $$u$$:
\begin{aligned}&\sum _{\lambda '\ \mathrm{even}} \ \prod _{i=1}^{n} (1 - u t^{\lambda _{2i} - 2i + 2n}) s_{\lambda }(x_1,\ldots ,x_{2n}) \nonumber \\&\quad = \prod _{1 \leqslant i<j \leqslant 2n} \frac{1}{(x_i - x_j)} \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \frac{(x_i - x_j) (1-u + (u-t) x_i x_j)}{(1-x_i x_j) (1-t x_i x_j)} \right] . \end{aligned}
(59)

### Proof

Using the Weyl determinant formula for $$s_{\lambda }$$ and multiplying Eq. (59) by the Vandermonde $$\prod _{1 \leqslant i<j \leqslant 2n} (x_i - x_j)$$, the left-hand side may be written as
\begin{aligned}&\sum _{\lambda '\ \mathrm{even}} \ \prod _{i=1}^{n} (1-ut^{\lambda _{2i} - 2i + 2n}) \det _{1\leqslant i,j \leqslant 2n} \left[ x_j^{\lambda _i-i+2n} \right] \\&\quad = \sum _{\begin{array}{c} k_1 > \cdots > k_{2n} \geqslant 0 \\ k_{2i-1} = k_{2i}+1 \end{array}} \ \prod _{i=1}^{n} (1-ut^{k_{2i}}) \det _{1\leqslant i,j \leqslant 2n} \left[ x_j^{k_i} \right] , \end{aligned}
where we have made the change in summation indices $$k_i = \lambda _i - i + 2n$$. Owing to the Pfaffian factorization
\begin{aligned} \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \delta _{k_i,k_j+1} (1-u t^{k_j}) \right] = \prod _{i = 2,4,6,\ldots }^{2n} \left( \delta _{k_{i-1},k_i+1} (1-u t^{k_i}) \right) \end{aligned}
we find that
\begin{aligned}&\sum _{\lambda '\ \mathrm{even}} \ \prod _{i=1}^{n} (1-ut^{\lambda _{2i} - 2i + 2n}) \det _{1\leqslant i,j \leqslant 2n} \left[ x_j^{\lambda _i-i+2n} \right] \\&\quad = \sum _{k_1 > \cdots > k_{2n} \geqslant 0} \ \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \delta _{k_i,k_j+1} (1-u t^{k_j}) \right] \det _{1\leqslant i,j \leqslant 2n} \left[ x_j^{k_i} \right] \\&\quad = \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \sum _{0 \leqslant k < l} \delta _{k,l+1} (1-ut^l) (x_i^k x_j^l - x_i^l x_j^k) \right] \\&\quad = \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \sum _{l=0}^{\infty } (1-ut^l) (x_i^{l+1} x_j^l - x_i^l x_j^{l+1}) \right] , \end{aligned}
where we have used the Pfaffian analog of the Cauchy–Binet identity to produce the second equality. Taking the formal power series expansion of $$(x_i-x_j)(1-u+(u-t)x_i x_j)/((1-x_i x_j)(1-t x_i x_j))$$, we obtain precisely the entries of the final Pfaffian. $$\square$$
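Theorem 6 can also be tested numerically beyond the trivially checkable $$2n=2$$ case. The following sketch, not part of the proof, verifies (59) for $$2n=4$$, where the Pfaffian reduces to the explicit combination $$a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$$:

```python
import numpy as np
from itertools import combinations

def schur(lam, x):
    # bialternant formula: det[x_j^(lam_i + n - i)] / Vandermonde
    n = len(x)
    num = np.linalg.det(np.array([[xj**(lam[i] + n - 1 - i) for xj in x]
                                  for i in range(n)]))
    vdm = np.prod([x[i] - x[j] for i, j in combinations(range(n), 2)])
    return num / vdm

def lhs(x, t, u, K=60):
    # partitions with even columns: lam = (k1, k1, k2, k2), k1 >= k2 >= 0;
    # the deformation factor is (1 - u t^(k1+2)) (1 - u t^k2)
    total = 0.0
    for k1 in range(K + 1):
        for k2 in range(k1 + 1):
            coeff = (1 - u * t**(k1 + 2)) * (1 - u * t**k2)
            total += coeff * schur((k1, k1, k2, k2), x)
    return total

def entry(xi, xj, t, u):
    return ((xi - xj) * (1 - u + (u - t) * xi * xj)
            / ((1 - xi * xj) * (1 - t * xi * xj)))

def rhs(x, t, u):
    a = {(i, j): entry(x[i], x[j], t, u)
         for i, j in combinations(range(4), 2)}
    pf = a[0, 1] * a[2, 3] - a[0, 2] * a[1, 3] + a[0, 3] * a[1, 2]
    pref = np.prod([1.0 / (x[i] - x[j])
                    for i, j in combinations(range(4), 2)])
    return pref * pf

x, t, u = (0.6, 0.4, 0.25, 0.1), 0.4, 0.6
assert abs(lhs(x, t, u) - rhs(x, t, u)) < 1e-9
```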

### Theorem 7

The Littlewood identity (58) for Hall–Littlewood polynomials can be refined by the introduction of a single deformation parameter $$u$$:
\begin{aligned}&\sum _{\lambda '\ \mathrm{even}} \ \prod _{j\ \mathrm{even}}^{m_0(\lambda )} (1-u t^{j-2}) b^\mathrm{el}_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_{2n};t) \nonumber \\&\quad = \prod _{1 \leqslant i<j \leqslant 2n} \left( \frac{1-t x_i x_j}{x_i - x_j} \right) \mathop {\mathrm{Pf}}\limits _{1\leqslant i < j \leqslant 2n} \left[ \frac{(x_i - x_j) (1-u + (u-t) x_i x_j)}{(1-x_i x_j) (1-t x_i x_j)} \right] .\quad \quad \end{aligned}
(60)

### Proof

The idea of the proof is similar to that of Theorem 2. For the sake of brevity, we simply point out the places where the proof deviates from the scheme set out in Sect. 2.4.

We denote the left-hand side of (60) by $$\mathcal {G}_{2n}(x_1,\ldots ,x_{2n};u)$$, and by comparing it with the proposed right-hand side we find that necessarily:
1.

$$\mathcal {G}_{2n}$$ is symmetric in $$\{x_1,\ldots ,x_{2n}\}$$.

2.

$$\mathcal {G}_{2n} \times \prod _{1 \leqslant i<j \leqslant 2n} (1-x_i x_j)$$ is a polynomial in $$x_{2n}$$ of degree $$2n-1$$.

3.

$$\mathcal {G}_{2n}|_{x_{2n} = 1/(t x_{2n-1})} = -ut^{2n-2} \mathcal {G}_{2n-2}$$.

4.

$$\mathcal {G}_{2n}(0,\ldots ,0;u) = \prod _{i=1}^{n} (1-ut^{2i-2})$$.

5.

$$\mathcal {G}_{2} = (1-u+(u-t)x_1 x_2)/(1-x_1 x_2)$$.

Each of these is an immediate property of the right-hand side of (60), with the exception of 4, which at first glance seems to require taking a delicate limit. In fact, property 4 can be quickly deduced by setting all $$x_i = 0$$ in Eq. (59), multiplied by $$\prod _{1 \leqslant i<j \leqslant 2n} (1-t x_i x_j)$$. It is straightforward to show that these five properties uniquely determine $$\mathcal {G}_{2n}$$. We remark that the additional property 4 is needed here, because properties 1 and 3 determine $$\mathcal {G}_{2n}$$ at $$2n-1$$ values of $$x_{2n}$$, which only specifies it up to a constant. The value of the constant is fixed by property 4.
Hence, it is sufficient to show that the left-hand side of (60) satisfies properties 1–5. Properties 1 and 4 are trivial, while 5 follows from
\begin{aligned} \mathcal {G}_2(x_1,x_2;u)= & {} (1-u) P_{(0,0)} + \sum _{k=1}^{\infty } (1-t) P_{(k,k)}(x_1,x_2;t) \\= & {} (1-u) + (1-t) \sum _{k=1}^{\infty } x_1^k x_2^k \\= & {} \frac{1-u+(u-t)x_1 x_2}{1-x_1 x_2}. \end{aligned}
Turning to property 2, it suffices to show that $$\prod _{1 \leqslant i<j \leqslant 2n} (1- x_i x_j) \mathcal {G}_{2n}$$ is a degree $$2n-1$$ polynomial in $$x_{2n}$$ at $$n+1$$ different values of $$u$$ (since $$\mathcal {G}_{2n}$$ is a degree $$n$$ polynomial in $$u$$). The $$n+1$$ points that we choose are $$u=0$$ (for which the claim is trivial, since in that case, we obtain the left-hand side of the Littlewood identity (29)) and $$u=t^{2k-2n}$$ for $$1 \leqslant k \leqslant n$$. For these latter values of $$u$$, we find that
\begin{aligned} \mathcal {G}_{2n}(x_1,\ldots ,x_{2n};t^{2k-2n}) = \sum _{i=k}^{n} \prod _{j=i+1}^{n} (1-t^{2k-2j}) \sum _{\begin{array}{c} \lambda :\ell (\lambda ) = 2i \\ \lambda '\ \mathrm{even} \end{array}} b_{\lambda }^\mathrm{el}(t) P_{\lambda }(x_1,\ldots ,x_{2n};t), \end{aligned}
so it is sufficient to show that for all $$1 \leqslant i \leqslant n$$,
\begin{aligned} \prod _{1 \leqslant i<j \leqslant 2n} (1 - x_i x_j) \sum _{\begin{array}{c} \lambda :\ell (\lambda ) = 2i \\ \lambda '\ \mathrm{even} \end{array}} b_{\lambda }^\mathrm{el}(t) P_{\lambda }(x_1,\ldots ,x_{2n};t) \end{aligned}
is a degree $$2n-1$$ polynomial in $$x_{2n}$$. We treat this as a proposition and denote it by $$\mathcal {P}_{=2i}$$. Similarly, we let $$\mathcal {P}_{\leqslant 2i}$$ denote this proposition in the case where the sum is taken over partitions $$\lambda$$ satisfying $$\ell (\lambda ) \leqslant 2i$$. As we showed in Sect. 2.4.2,
\begin{aligned} \mathcal {P}_{\leqslant 2n}\ \mathrm{true} \implies \mathcal {P}_{= 2n}\ \mathrm{true}. \end{aligned}
In fact, the arguments presented therein can be repeated (almost verbatim) to deduce that
\begin{aligned} \mathcal {P}_{\leqslant 2i}\ \mathrm{true} \implies \mathcal {P}_{= 2i}\ \mathrm{true} \end{aligned}
for general $$i$$. Moreover, if $$\mathcal {P}_{\leqslant 2i}$$ and $$\mathcal {P}_{= 2i}$$ are true, then
\begin{aligned}&\prod _{1 \leqslant i<j \leqslant 2n} (1 - x_i x_j) \left( \sum _{\begin{array}{c} \lambda :\ell (\lambda ) \leqslant 2i \\ \lambda '\ \mathrm{even} \end{array}} - \sum _{\begin{array}{c} \lambda :\ell (\lambda ) = 2i \\ \lambda '\ \mathrm{even} \end{array}} \right) b_{\lambda }^\mathrm{el}(t) P_{\lambda }(x_1,\ldots ,x_{2n};t) \\&\quad = \prod _{1 \leqslant i<j \leqslant 2n} (1 - x_i x_j) \sum _{\begin{array}{c} \lambda :\ell (\lambda ) \leqslant 2i-2 \\ \lambda '\ \mathrm{even} \end{array}} b_{\lambda }^\mathrm{el}(t) P_{\lambda }(x_1,\ldots ,x_{2n};t) \end{aligned}
is a degree $$2n-1$$ polynomial in $$x_{2n}$$, proving $$\mathcal {P}_{\leqslant 2i-2}$$ is true. Hence, we are able to iterate this string of implications to deduce that $$\mathcal {P}_{= 2i}$$ holds for all $$1\leqslant i \leqslant n$$.
For the recursive property 3, one repeats the procedure outlined in Sect. 2.4.3, but with obvious modifications to the formulae to cater for the more general coefficients appearing in the sum (60). The strategy is to expand the left-hand side of (60) (evaluated at $$x_{2n} = 1/(t x_{2n-1})$$) using two applications of the branching rule and to isolate the coefficients $$\mathcal {D}(\nu )$$ of $$P_{\nu }(x_1,\ldots ,x_{2n-2};t)$$ which arise from this expansion. In the case where $$\nu$$ has only even columns, one finds that
\begin{aligned} \mathcal {D}(\nu ) = \sum _{j=1}^{\infty } \sum _{\delta _j \in \{0,1\}} t^{-|\delta |} \prod _{k\ \mathrm{even}}^{m_0(\lambda )} (1-u t^{k-2}) b_{\lambda }^\mathrm{el}(t) \prod _{\begin{array}{c} \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\nu )-1}) (1-t^{m_k(\nu )}), \end{aligned}
where $$\lambda$$ is also a partition with even columns, given by $$\lambda ' = \nu ' + 2 \delta$$ and by definition $$m_0(\lambda ) = 2n - \ell (\lambda )$$. Calculating $$\mathcal {D}(\nu )$$ can be done recursively, via the partial coefficients
\begin{aligned} \mathcal {D}_{i,\delta _i}(\nu )= & {} \sum _{j=1}^{i-1} \sum _{\delta _j \in \{0,1\}} t^{-\sum _{k=1}^{i} \delta _k} \prod _{k\ \mathrm{even}}^{m_0(\lambda )} (1-u t^{k-2}) \\&\times \, \prod _{k=1}^{i-1} \prod _{l\ \mathrm{even}}^{m_k(\lambda )} (1-t^{l-1}) \prod _{\begin{array}{c} 1 \leqslant k \leqslant i-1 \\ \delta _k = 0 \\ \delta _{k+1} = 1 \end{array}} (1-t^{m_k(\nu )-1}) (1-t^{m_k(\nu )}) \end{aligned}
which differ from those in Eq. (32) only by the additional factor $$\prod _{k\ \mathrm{even}}^{m_0(\lambda )} (1-ut^{k-2})$$. One finds that these coefficients satisfy the recurrences (33) and (34) without any alteration, but with the new initial condition
\begin{aligned} \mathcal {D}_{1,0}(\nu ) = \prod _{k\ \mathrm{even}}^{2n-\ell (\nu )} (1-ut^{k-2}), \quad \quad t \mathcal {D}_{1,1}(\nu ) = \prod _{k\ \mathrm{even}}^{2n-2-\ell (\nu )} (1-ut^{k-2}). \end{aligned}
The remaining steps in Sect. 2.4.3 then go through in the same way. In particular, it is still true that $$\mathcal {D}(\nu ) = \mathcal {D}_{\nu _1+1,0}(\nu ) - t \mathcal {D}_{\nu _1+1,1}(\nu )$$, and the recurrence (35) remains valid. The initial condition is now $$\mathcal {D}_{1,0}(\nu ) - t \mathcal {D}_{1,1}(\nu ) = -u t^{m_0(\nu )} \prod _{k\ \mathrm{even}}^{m_0(\nu )} (1-u t^{k-2})$$, where $$m_0(\nu ) = 2n-2-\ell (\nu )$$, so in solving (35) one obtains
\begin{aligned} \mathcal {D}_{\nu _1+1,0}(\nu ) - t \mathcal {D}_{\nu _1+1,1}(\nu )= & {} -u t^{\sum _{i=0}^{\infty } m_i(\nu )} \prod _{j\ \mathrm{even}}^{m_0(\nu )} (1-ut^{j-2}) \prod _{i=1}^{\infty } \prod _{j\ \mathrm{even}}^{m_i(\nu )} (1-t^{j-1}), \\= & {} -u t^{2n-2} \prod _{j\ \mathrm{even}}^{m_0(\nu )} (1-ut^{j-2}) b_{\nu }^\mathrm{el}(t), \end{aligned}
which is the required result. In the case where $$\nu$$ has a column of odd length, the arguments in Sect. 2.4.3 apply (without any change) to prove that $$\mathcal {D}(\nu ) = 0$$. These two evaluations of $$\mathcal {D}(\nu )$$ complete the proof of property 3. $$\square$$

Theorems 6 and 7 are important results, since they serve as checks of Conjecture 1 at the particular values $$q=t$$ and $$q=0$$, respectively. Further specialization (of the parameter $$u$$) leads to various known results. For example, in the case of (60), setting $$u=0$$ yields the Littlewood identity for Hall–Littlewood polynomials, whereas setting $$u=t$$ gives rise to Eq. (2). In complete analogy with the previous section, when we set $$u=-\sqrt{t}$$, we recover one half of the factors in Eq. (56) for $$Z_\mathrm{OO}$$. The remaining factors in (56) are precisely those of the OSASM partition function, on the right-hand side of (2).

## 6 $$u$$-Deformed $$BC$$-type Cauchy identity and double U-turn alternating sign matrices

In this section, we conclude our study of the relationship between refined Cauchy/Littlewood identities and partition functions of the six-vertex model. We present one final example, conjecturing a $$u$$-deformed version of Eq. (3) and showing that its right-hand side contains half of the factors present in the partition function with U-turn boundaries on two sides of the lattice. The remaining factors are those of the UASM partition function, as given by the right-hand side of (3). The explicit formula for this partition function, as a product of two determinants, is again due to Kuperberg [8].

### 6.1 Redefinition of Boltzmann weights for bulk vertices

Throughout this section, it turns out to be most convenient to adopt a more symmetric form for the Boltzmann weights:
\begin{aligned} a_{\pm }(x,y) = \frac{1-t x/y}{1-x/y}, \quad \quad b_{\pm }(x,y) = \sqrt{t}, \quad \quad c_{\pm }(x,y) = \frac{(1-t)\sqrt{x/y}}{1-x/y}. \end{aligned}
(61)
The only difference between this choice and the previous one (48) is that the $$c_{\pm }$$ vertices are now equal. The Yang–Baxter equation remains satisfied, since the two sets of Boltzmann weights (48) and (61) are related by a diagonal conjugation of the corresponding $$R$$-matrix.

### 6.2 U-turn vertices, reflection, and fish equations

We introduce a new set of boundary vertices, the U-turn vertices, as shown in Fig. 9. The U-turn Boltzmann weights (denoted $$r_{\pm }$$ and $$t_{\pm }$$, since they are situated on the right and top edges of the partition function that we subsequently study) are given explicitly by
\begin{aligned}&r_{+}(x) = \sqrt{p x} - \frac{1}{\sqrt{p x}}, \quad \quad r_{-}(x) = \frac{\sqrt{p}}{\sqrt{x}} - \frac{\sqrt{x}}{\sqrt{p}}, \nonumber \\&t_{+}(y) = \frac{\sqrt{p t}}{\sqrt{y}} + \frac{\sqrt{y}}{\sqrt{p t}}, \quad \quad t_{-}(y) = - \sqrt{p y} - \frac{1}{\sqrt{p y}}. \end{aligned}
(62)
Together with the ordinary Boltzmann weights (61), the U-turn weights satisfy the Sklyanin reflection equation [12]. Since we have both $$r$$- and $$t$$-type boundary vertices, two types of reflection equation occur in this work. These are shown in Fig. 10. The reflection equations allow us to establish the symmetry of the partition function $$Z_\mathrm{UU}$$ in its rapidity variables.
We will make use of two further relations satisfied by the bulk and U-turn vertices. Following [8], we refer to these as fish equations, and they are given in Fig. 11.

### 6.3 Partition function on double U-turn lattice

Following [8], we now consider the partition function of the six-vertex model on a lattice with two reflecting boundaries, as shown in Fig. 12. The corresponding alternating sign matrices are the so-called UUASMs [8].

### Lemma 4

The partition function $$Z^{(n)}_\mathrm{UU} = Z_\mathrm{UU}(x_1,\ldots ,x_n;y_1,\ldots ,y_n;t)$$ as defined in Fig. 12 satisfies six properties:
1.

Multiplying by $$\prod _{i,j=1}^{n} (1-x_i y_j)^2 (1-x_i \bar{y}_j)^2$$, it is a polynomial in $$x_n$$ of degree $$4n$$.

2.

It has zeros at $$x_n = \pm 1$$.

3.

It is symmetric in $$\{y_1,\ldots ,y_n\}$$.

4.
It is quasi-symmetric under $$y_n \longleftrightarrow \bar{y}_n$$:
\begin{aligned}&\bar{y}_n (1-t y_n^2) Z_\mathrm{UU} (x_1,\ldots ,x_n; y_1,\ldots ,y_{n-1},y_n;t) \nonumber \\&\quad = y_n (1-t\bar{y}_n^2) Z_\mathrm{UU} (x_1,\ldots ,x_n; y_1,\ldots ,y_{n-1},\bar{y}_n;t). \end{aligned}
(63)
5.
It obeys the recursion relations
\begin{aligned}&\lim _{x_n \rightarrow \bar{y}_n} \Big ( (1-x_n y_n)^2 Z^{(n)}_\mathrm{UU} \Big ) = (1-t)^2 t^{n-1/2} (\bar{p} \bar{y}_n-p y_n) \frac{(1-t\bar{y}_n^2)}{(1-\bar{y}_n^2)} \nonumber \\&\qquad \times \, \prod _{i=1}^{n-1} \left[ \frac{(1-t x_i y_n)}{(1-x_i y_n)} \frac{(1-ty_i \bar{y}_n)}{(1-y_i \bar{y}_n)} \frac{(1-t\bar{y}_i \bar{y}_n)}{(1-\bar{y}_i \bar{y}_n)} \right] ^2 Z^{(n-1)}_\mathrm{UU}, \end{aligned}
(64)
\begin{aligned}&Z^{(n)}_\mathrm{UU} \Big |_{x_n = y_n/t} = t^{3n-1/2} (p y_n - \bar{p} \bar{y}_n) \frac{(1-y_n^2/t^2)}{(1-y_n^2/t)} \prod _{i=1}^{n-1} \frac{(1-tx_i y_n)^2}{(1-x_i y_n)^2} Z^{(n-1)}_\mathrm{UU}.\nonumber \\ \end{aligned}
(65)
6.
When $$n=1$$, it is given explicitly by
\begin{aligned} Z^{(1)}_\mathrm{UU} = \frac{ \sqrt{t}(1\!-\!t)(1\!-\!x_1^2)(y_1-t\bar{y}_1) \left[ (pt+\bar{p}) (x_1 y_1 \!+\! x_1 \bar{y}_1) \!-\! (p\!+\!\bar{p}) (1+tx_1^2) \right] }{(1-x_1 y_1)^2 (1-x_1 \bar{y}_1)^2}. \end{aligned}

### Proof

The proof revolves around the lattice definition of $$Z^{(n)}_\mathrm{UU}$$, as well as using the Yang–Baxter and fish equations to prove various symmetries.
1.

Multiplying the entire partition function by $$\prod _{i,j=1}^{n} (1-x_i y_j)^2 (1-x_i \bar{y}_j)^2$$ is equivalent to renormalizing the individual Boltzmann weights, such that each is a degree-1 polynomial in $$x_i$$ (except the $$c_{\pm }$$ weights, which go as $$\sqrt{x_i}$$). We focus our attention on the bottom two rows of $$Z^{(n)}_\mathrm{UU}$$, which are the only places with $$x_n$$ dependence. In every legal configuration, there must be an odd total number of $$c_{\pm }$$ vertices in these final two rows. Combining this with the explicit parametrization of the right U-turn vertices ensures that $$Z^{(n)}_\mathrm{UU}$$ is indeed a polynomial in $$x_n$$. Furthermore, since there is at least one $$c_{\pm }$$ vertex in these two rows, the degree of the polynomial is $$4n$$.

2.

Starting from the U-turn vertex associated with the final two rows of $$Z^{(n)}_\mathrm{UU}$$, we can immediately use the fish equation on the left of Fig. 11 to introduce an extra vertex, internal to the lattice. This also produces an overall multiplicative factor of $$\sqrt{t}(1-x_n^2)/(1-t^2 x_n^2)$$. Using the Yang–Baxter equation repeatedly, it is possible to slide the new vertex horizontally through the lattice until it ultimately emerges from the left as a $$b_{+}$$ vertex, with Boltzmann weight $$\sqrt{t}$$. This process is shown in Fig. 13. The two zeros at $$x_n = \pm 1$$ are due to the factor $$(1-x_n^2)$$ introduced at the start of this procedure.

3.

Using both the Yang–Baxter and reflection equations, it is possible to interchange the order of any two double columns bearing the rapidities $$\{\bar{y}_i,y_i\}$$ and $$\{\bar{y}_j,y_j\}$$. This is a standard argument in models with a double-row transfer matrix; see [8] and references therein for more details.

4.
One can attach a single $$a_{-}$$ vertex at the base of the first two columns in $$Z^{(n)}_\mathrm{UU}$$, which is equivalent to multiplying the partition function by $$(1-t y_n^2)/(1-y_n^2)$$. The inserted vertex can then be moved vertically through the lattice (using the Yang–Baxter equation) until it reaches the U-turn vertex at the top of the double column. Applying the fish equation on the right of Fig. 11, the internal crossing is removed and the order of the first two columns is interchanged, up to an overall factor of $$-(t-y_n^2)/(1-y_n^2)$$. Hence, we find that
\begin{aligned}&\frac{(1-t y_n^2)}{(1-y_n^2)} Z_\mathrm{UU} (x_1,\ldots ,x_n; y_1,\ldots ,y_{n-1},y_n;t) \nonumber \\&\qquad = - \frac{(t-y_n^2)}{(1-y_n^2)} Z_\mathrm{UU} (x_1,\ldots ,x_n; y_1,\ldots ,y_{n-1},\bar{y}_n;t). \end{aligned}
Trivial rearrangement of the prefactors gives the result (63).
5.

The recursion relation (64) follows from the original lattice representation of $$Z^{(n)}_\mathrm{UU}$$ in Fig. 12. Multiplying the partition function by $$(1-x_n y_n)^2$$ and taking $$x_n \rightarrow \bar{y}_n$$ forces the bottom-left vertex of the lattice to be a $$c_{+}$$ vertex. This restriction causes a large subset of the vertices in $$Z^{(n)}_\mathrm{UU}$$ to freeze, as shown on the left of Fig. 14. The second recursion relation (65) can be deduced from the lattice representation on the right-hand side of Fig. 13, obtained by a single application of the horizontal fish equation and repeated use of the Yang–Baxter equation. One starts by removing the frozen $$b_{+}$$ vertex from the left side of the lattice; setting $$x_n=y_n/t$$ then forces the bottom-left vertex to be of type $$c_{-}$$. A subset of the vertices then freezes, as shown on the right of Fig. 14. In both cases, reading off the Boltzmann weights of the frozen vertices yields the prefactors in the recursion relations (64) and (65). One must also take into account the overall multiplicative factors introduced in the derivation of Fig. 13 when arriving at Eq. (65). In either case, the surviving (unfrozen) region represents the partition function one size smaller, $$Z^{(n-1)}_\mathrm{UU}$$.

6.
The $$n=1$$ case of the partition function can be computed directly as a sum of five terms. Using the expressions (61) and (62) for the Boltzmann weights, we obtain the explicit sum
\begin{aligned} Z^{(1)}_\mathrm{UU}= & {} - \frac{ \sqrt{t}(1-t)^3 x_1 (p - x_1) (1+\bar{p}\bar{y}_1) }{ (1-x_1 y_1) (1-x_1 \bar{y}_1)^2 } - \frac{ t^{3/2}(1-t) (p x_1 - 1) (1 + \bar{p} \bar{y}_1) }{ (1-x_1 \bar{y}_1)} \\&-\, \frac{ \sqrt{t} (1-t) (1-t x_1 y_1) (1-t x_1 \bar{y}_1) (p - x_1) (y_1 + \bar{p}) }{ (1-x_1 y_1)^2 (1-x_1 \bar{y}_1) } \\&+\, \frac{ \sqrt{t}(1-t)(1-tx_1 \bar{y}_1) (p - x_1) (t \bar{y}_1 + \bar{p}) }{(1-x_1 \bar{y}_1)^2 } \\&+\, \frac{ \sqrt{t} (1-t) (1-t x_1 \bar{y}_1) (p x_1 - 1) (t + \bar{p} y_1) }{ (1-x_1 y_1)(1-x_1 \bar{y}_1) }, \end{aligned}
which can be simplified to
\begin{aligned} Z^{(1)}_\mathrm{UU} \!=\! \frac{ \sqrt{t}(1-t)(1-x_1^2)(y_1-t\bar{y}_1) \left[ (pt\!+\!\bar{p}) (x_1 y_1 \!+\! x_1 \bar{y}_1) \!-\! (p\!+\!\bar{p}) (1\!+\!tx_1^2) \right] }{(1\!-\!x_1 y_1)^2 (1\!-\!x_1 \bar{y}_1)^2}. \end{aligned}
$$\square$$

### Theorem 8

(Kuperberg) The partition function on the double U-turn lattice is given by a product of determinants:
\begin{aligned}&Z_\mathrm{UU}(x_1,\ldots ,x_n;y_1,\ldots ,y_n;t) \nonumber \\&\quad = \frac{ (\sqrt{t})^{n^2} \prod _{i=1}^{n} (1-x_i^2) (y_i-t \bar{y}_i) \prod _{i,j=1}^{n} (1- t x_i y_j)^2 (1- t x_i \bar{y}_j)^2 }{\prod _{1 \leqslant i<j \leqslant n} (x_i - x_j)^2 (y_i - y_j)^2 (1 - t x_i x_j)^2 (1-\bar{y}_i \bar{y}_j)^2} \nonumber \\&\quad \quad \times \, \det _{1\leqslant i,j \leqslant n} \left[ \frac{(1-t)}{(1-x_i y_j) (1-t x_i y_j) (1-x_i \bar{y}_j) (1-t x_i \bar{y}_j)} \right] \nonumber \\&\quad \quad \times \, \det _{1\leqslant i,j \leqslant n} \left[ \frac{(pt+\bar{p})(x_i y_j + x_i \bar{y}_j) - (p+\bar{p}) (1+t x_i^2)}{(1-x_i y_j) (1-t x_i y_j) (1-x_i \bar{y}_j) (1-t x_i \bar{y}_j)} \right] . \end{aligned}
(66)

### Proof

It is a simple matter to verify that (66) satisfies the six properties of Lemma 4. The fact that these properties uniquely determine $$Z_\mathrm{UU}$$ is again a consequence of Lagrange interpolation. Indeed, the recursion relations (64) and (65) (together with the symmetry property 3 and quasi-symmetry property 4) determine the polynomial $$Z_\mathrm{UU}$$ at $$4n$$ values of $$x_n$$. Combined with the two known zeros at $$x_n = \pm 1$$, these are sufficiently many points to fully determine $$Z_\mathrm{UU}$$. $$\square$$
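The interpolation step can be made concrete: a polynomial of degree $$d$$ is uniquely determined by its values at $$d+1$$ distinct points. The following sketch (a generic illustration in sympy, using a hypothetical stand-in polynomial rather than the actual partition function) recovers a degree-4 polynomial from five sampled values, in the same way that the recursion relations and known zeros pin down $$Z_\mathrm{UU}$$ as a function of $$x_n$$:

```python
import sympy as sp

x = sp.symbols('x')

# A "hidden" degree-4 polynomial, standing in for the x_n-dependence of Z_UU.
hidden = 3*x**4 - 2*x**3 + x - 7

# Sample it at deg + 1 = 5 distinct points, playing the role of the values
# supplied by the recursion relations and the known zeros at x_n = +1, -1.
points = [(xi, hidden.subs(x, xi)) for xi in [0, 1, -1, 2, 3]]

# Lagrange interpolation recovers the polynomial uniquely.
recovered = sp.interpolate(points, x)
assert sp.expand(recovered - hidden) == 0
```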

### 6.4 $$u$$-Deformed $$BC_n$$ Cauchy identity at Schur and Hall–Littlewood level

In this subsection, we present a (conjectural) $$u$$-deformation of Eq. (3), involving lifted Koornwinder polynomials [11]. We build up to this via a simpler result at the level of symplectic Schur polynomials, which we are able to prove using the Cauchy–Binet identity. In that sense, the two results presented here are direct analogs of equations (52) and (53) in Sect. 4.3, and (59) and (60) in Sect. 5.3. In contrast with those other equations, we are currently unable to obtain (67) and (68) as the $$q \rightarrow 0$$ case of some identity at Macdonald level.

### Theorem 9

The Cauchy identity for symplectic Schur polynomials can be doubly refined by the introduction of two deformation parameters $$t$$ and $$u$$:
\begin{aligned}&\sum _{\lambda } \prod _{i=1}^{n} (1 - u t^{\lambda _i - i + n}) s_{\lambda }(x_1,\ldots ,x_n) sp_{\lambda }(y_1^{\pm 1},\ldots ,y_n^{\pm 1}) \nonumber \\&\quad = \frac{1}{\prod _{1 \leqslant i<j \leqslant n} (x_i-x_j) (y_i-y_j) (1-\bar{y}_i \bar{y}_j)} \nonumber \\&\quad \quad \times \, \det _{1\leqslant i,j \leqslant n} \left[ \frac{1-u + (u-t) (x_i y_j + x_i \bar{y}_j) + (t^2 - u) x_i^2}{(1-x_i y_j) (1-t x_i y_j)(1-x_i \bar{y}_j) (1-t x_i \bar{y}_j)} \right] . \end{aligned}
(67)

### Proof

Using the Weyl-type determinant expressions for both $$s_{\lambda }$$ and $$sp_{\lambda }$$, and multiplying Eq. (67) by $$\prod _{1 \leqslant i<j \leqslant n} (x_i-x_j) (y_i-y_j) (1-\bar{y}_i \bar{y}_j) \prod _{i=1}^{n} (y_i-\bar{y}_i)$$, the left-hand side may be written as
\begin{aligned}&\sum _{\lambda } \prod _{i=1}^{n} (1 - u t^{\lambda _i - i + n}) \det _{1\leqslant i,j \leqslant n} \left[ x_i^{\lambda _j-j+n} \right] \det _{1\leqslant i,j \leqslant n} \left[ y_j^{\lambda _i-i+n+1} - \bar{y}_j^{\lambda _i-i+n+1} \right] \\&\quad = \sum _{k_1 > \cdots > k_n \geqslant 0}\ \prod _{i=1}^{n} (1 - u t^{k_i}) \det _{1\leqslant i,j \leqslant n} \left[ x_i^{k_j} \right] \det _{1\leqslant i,j \leqslant n} \left[ y_j^{k_i+1} - \bar{y}_j^{k_i+1} \right] , \end{aligned}
where we have made the change of summation indices $$\lambda _i - i + n = k_i$$. Applying the Cauchy–Binet identity, we obtain
\begin{aligned}&\sum _{\lambda } \prod _{i=1}^{n} (1 - u t^{\lambda _i - i + n}) \det _{1\leqslant i,j \leqslant n} \left[ x_i^{\lambda _j-j+n} \right] \det _{1\leqslant i,j \leqslant n} \left[ y_j^{\lambda _i-i+n+1} - \bar{y}_j^{\lambda _i-i+n+1} \right] \\&\quad = \det _{1\leqslant i,j \leqslant n} \left[ \sum _{k=0}^{\infty } (1-ut^k) x_i^k (y_j^{k+1} - \bar{y}_j^{k+1}) \right] \nonumber \\&\quad = \prod _{i=1}^{n} (y_i - \bar{y}_i) \det _{1\leqslant i,j \leqslant n} \left[ \frac{1-u + (u-t) (x_i y_j + x_i \bar{y}_j) + (t^2 - u) x_i^2}{(1-x_i y_j) (1-t x_i y_j)(1-x_i \bar{y}_j) (1-t x_i \bar{y}_j)} \right] , \end{aligned}
where the final equality follows from the formal power series expansion of the entries of the determinant. Keeping track of the multiplicative factors that we introduced at the outset, we recover the result (67). $$\square$$
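The summation underlying the final equality can be checked independently. A short sympy computation (our own verification sketch, with $$\bar{y}=1/y$$) confirms that the four geometric series sum to the claimed determinant entry:

```python
import sympy as sp

x, y, t, u = sp.symbols('x y t u')
ybar = 1/y

# sum_{k>=0} (1 - u t^k) x^k (y^{k+1} - ybar^{k+1}), evaluated by summing
# the four constituent geometric series in closed form.
closed = (y/(1 - x*y) - ybar/(1 - x*ybar)
          - u*y/(1 - t*x*y) + u*ybar/(1 - t*x*ybar))

# (y - ybar) times the determinant entry on the right-hand side of (67).
entry = ((y - ybar)
         * (1 - u + (u - t)*(x*y + x*ybar) + (t**2 - u)*x**2)
         / ((1 - x*y)*(1 - t*x*y)*(1 - x*ybar)*(1 - t*x*ybar)))

assert sp.cancel(closed - entry) == 0
```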

### Conjecture 2

The Cauchy identity for lifted Koornwinder polynomials at $$q=0$$, $$T=0$$ can be refined by the introduction of a single deformation parameter $$u$$:
\begin{aligned}&\sum _{\lambda } \prod _{i=1}^{m_0(\lambda )} (1 - u t^{i-1}) b_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_n;t)\nonumber \\&\quad \quad \times \tilde{K}_{\lambda }(y_1^{\pm 1},\ldots ,y_n^{\pm 1}; 0, t, u t^{n-1}; t_0, t_1, t_2, t_3) \nonumber \\&\quad = \prod _{i=1}^{n} \frac{(1- t_0 x_i )(1-t_1 x_i)(1-t_2 x_i)(1-t_3 x_i)}{(1-t x_i^2)} \nonumber \\&\quad \quad \times \, \frac{\prod _{i,j=1}^{n} (1- t x_i y_j) (1- t x_i \bar{y}_j)}{\prod _{1 \leqslant i<j \leqslant n} (x_i-x_j) (y_i-y_j) (1 - t x_i x_j) (1-\bar{y}_i \bar{y}_j)} \nonumber \\&\quad \quad \times \, \det _{1\leqslant i,j \leqslant n} \left[ \frac{1-u + (u-t) (x_i y_j + x_i \bar{y}_j) + (t^2 - u) x_i^2}{(1-x_i y_j) (1-t x_i y_j)(1-x_i \bar{y}_j) (1-t x_i \bar{y}_j)} \right] , \end{aligned}
(68)
where $$\tilde{K}_{\lambda }(y_1^{\pm 1},\ldots ,y_n^{\pm 1}; 0, t, u t^{n-1}; t_0, t_1, t_2, t_3)$$ is a lifted Koornwinder polynomial with $$q=0$$, $$T= u t^{n-1}$$ (see Section 7 of [11] and the Appendix for more details).
We briefly discuss some important specializations of equations (67) and (68). The $$u=0$$ specialization of (67) gives rise to the equation
\begin{aligned} \sum _{\lambda } s_{\lambda }(x_1,\ldots ,x_n) sp_{\lambda }(y_1^{\pm 1},\ldots ,y_n^{\pm 1})= & {} \frac{ \det _{1\leqslant i,j \leqslant n} \left[ \frac{1}{(1-x_i y_j)(1-x_i \bar{y}_j)} \right] }{\prod _{1 \leqslant i<j \leqslant n} (x_i-x_j) (y_i-y_j) (1-\bar{y}_i \bar{y}_j)} \\= & {} \frac{\prod _{1\leqslant i<j \leqslant n} (1-x_i x_j)}{\prod _{i,j=1}^{n} (1-x_i y_j) (1-x_i \bar{y}_j)}, \end{aligned}
which is the Cauchy identity for symplectic Schur polynomials [13]. Setting $$u=t$$, (67) reduces to Theorem 3 of [2], which is a simple $$t$$-refinement of the Cauchy identity.
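Both specializations are already visible at the level of the determinant entry in (67): at $$u=0$$ its numerator factorizes as $$(1-tx_iy_j)(1-tx_i\bar{y}_j)$$, cancelling the $$t$$-dependent denominator factors, while at $$u=t$$ it factorizes as $$(1-t)(1-tx_i^2)$$. A quick sympy check of ours, with $$\bar{y}=1/y$$:

```python
import sympy as sp

x, y, t, u = sp.symbols('x y t u')

# Numerator of the determinant entry in (67), with ybar = 1/y.
num = 1 - u + (u - t)*(x*y + x/y) + (t**2 - u)*x**2

# u = 0: the numerator cancels against (1 - t x y)(1 - t x/y), leaving the
# undeformed symplectic Cauchy kernel 1/((1 - x y)(1 - x/y)).
assert sp.expand(num.subs(u, 0) - (1 - t*x*y)*(1 - t*x/y)) == 0

# u = t: the numerator factorizes as (1 - t)(1 - t x^2).
assert sp.expand(num.subs(u, t) - (1 - t)*(1 - t*x**2)) == 0
```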
In a similar way, (68) interpolates between two identities which appeared previously in [2]. When $$u=0$$, the lifted Koornwinder polynomial appearing in (68) has its $$T$$ argument equal to zero. As explained in [11] and the Appendix, the lifted Koornwinder polynomials at $$T=0$$ have many nice properties, including the Cauchy-type identity (71). Setting $$u=0$$ in (68), we obtain
\begin{aligned}&\sum _{\lambda } b_{\lambda }(t) P_{\lambda }(x_1,\ldots ,x_n;t) \tilde{K}_{\lambda }(y_1^{\pm 1},\ldots ,y_n^{\pm 1}; 0, t, 0; t_0, t_1, t_2, t_3) \\&\quad = \prod _{i=1}^{n} \frac{(1- t_0 x_i )(1-t_1 x_i)(1-t_2 x_i)(1-t_3 x_i)}{(1-t x_i^2)} \\&\quad \quad \times \, \prod _{1\leqslant i<j \leqslant n} \frac{(1-x_i x_j)}{(1-t x_i x_j)} \prod _{i,j=1}^{n} \frac{(1-tx_i y_j)(1-tx_i \bar{y}_j)}{(1-x_i y_j) (1-x_i \bar{y}_j)} \end{aligned}
as expected; this is the $$q=0$$ specialization of (71). On the other hand, when $$u=t$$, the lifted Koornwinder polynomial in (68) has its $$T$$ argument equal to $$t^n$$. In this case, it is equal to a Koornwinder polynomial with $$q=0$$ [see Eq. (70)]. Since the Koornwinder polynomials at $$q=0$$ are type $$BC_n$$ Hall–Littlewood polynomials [15], we expect to recover Eq. (3) at this value of $$u$$. We find that this is indeed the case, after additionally setting $$t_i=0$$ for $$0 \leqslant i \leqslant 3$$, since all of these parameters were absent from the original statement of (3) in [2].

In analogy with previous sections, we wish to point out a further specialization of $$u$$ which leads to a connection with the partition function (66). By choosing the boundary parameter in (66) to be $$\bar{p}=-\sqrt{t}$$, and setting $$u=-t$$ in (68), we obtain agreement between the determinants appearing in (66) and (68), up to an overall factor of $$(\sqrt{t})^n$$. Furthermore, by specializing $$t_0 = 1$$, $$t_1 = -1$$ and $$t_2 = \sqrt{t}$$, $$t_3 = -\sqrt{t}$$, we find that all of the prefactors present in (68) are also present in (66). The leftover factors in (66) are those of the UASM partition function, given by the right-hand side of Eq. (3). Hence, this is yet another example of a Cauchy-type identity that is closely related to a partition function appearing in [8].
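The matching of determinant entries at this specialization is elementary to verify. Assuming, as the bar notation elsewhere suggests (this reading is our own), that $$\bar{p}=1/p$$, setting $$\bar{p}=-\sqrt{t}$$ in the entry of (66) and $$u=-t$$ in the entry of (68) makes the two numerators proportional, with a factor $$\sqrt{t}$$ per entry accounting for the overall $$(\sqrt{t})^n$$:

```python
import sympy as sp

x, y, t = sp.symbols('x y t', positive=True)

# Boundary specialization: pbar = -sqrt(t); we take p = 1/pbar, reading
# the bar as inversion (as for y and ybar) -- an assumption on our part.
pbar = -sp.sqrt(t)
p = 1/pbar

# Numerator of the second determinant in (66), with ybar = 1/y.
num66 = (p*t + pbar)*(x*y + x/y) - (p + pbar)*(1 + t*x**2)

# Numerator of the determinant entry in (68), at u = -t.
u = -t
num68 = 1 - u + (u - t)*(x*y + x/y) + (t**2 - u)*x**2

# The two entries agree up to a factor sqrt(t) per matrix entry.
assert sp.simplify(sp.sqrt(t)*num66 - num68) == 0
```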

## Footnotes

1.

We use this as an umbrella term for any proof that involves (1) writing down a set of properties, one of which is a simple recursion relation, that uniquely determine an object, and (2) showing that a certain determinant or Pfaffian Ansatz satisfies these properties.

2.

On the right-hand side of the $$q$$-deformed version of (1) and (2), the dependence on $$q$$ is completely factorized. In particular, $$q$$ does not appear within the determinant/Pfaffian, which rules out the possibility that it plays any non-trivial role within the six-vertex model itself.

3.

We have departed slightly from the convention of [9] for the function $$\psi _{\lambda /\mu }(t)$$, by incorporating the Kronecker delta into its definition (so that it is defined for all partitions $$\lambda$$, $$\mu$$). This turns out to be convenient in many of the equations that follow, since it avoids keeping track of interlacing conditions when writing sums.

4.

We thank O. Warnaar for bringing equations (13) and (17) to our attention.

5.

The superscript in $$w_{\lambda }^\mathrm{el}(N,t)$$ and $$b^\mathrm{el}_{\lambda }(t)$$ is for even legs, since in the Macdonald case $$b^\mathrm{el}_{\lambda }(q,t)$$ is defined as a product over all boxes in $$\lambda$$ with even leg-length [9].

6.

If $$s$$ is a box with coordinates $$(i,j)$$ in some Young diagram $$\lambda$$, then $$a_{\lambda }(s) =\lambda _i-j$$ and $$l_{\lambda }(s) =\lambda '_j-i$$, see Sect. 6, Chapter VI of [9].

7.

We are grateful to E. Rains for a comprehensive numerical test of the factorized dependence on $$q$$ and $$u$$ in (46), and to O. Warnaar for independently suggesting this conjecture to us while the manuscript was in preparation.

8.
Dividing Eq. (52) by $$(1-t)^n$$, letting $$u = t^{-z}$$ and taking the limit $$t \rightarrow 1$$, the left-hand side of (52) becomes
\begin{aligned} \sum _{\lambda } \prod _{i=1}^{n} (\lambda _i - i + n - z) s_{\lambda }(x_1,\ldots ,x_n) s_{\lambda }(y_1,\ldots ,y_n). \end{aligned}
This limiting form was already investigated in [1], although there the right-hand side was not expressed in determinant form. We thank F. Jouhet for showing us this reference.
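The limit used above, $$\lim_{t\to1}(1-t^{k-z})/(1-t)=k-z$$ applied factor by factor with $$k=\lambda_i-i+n$$, can be checked for representative values (a sketch of ours; the choices $$k=5$$, $$z=1/2$$ are arbitrary):

```python
import sympy as sp

t = sp.symbols('t')

# Representative (hypothetical) values of k = lambda_i - i + n and z.
k, z = 5, sp.Rational(1, 2)

# lim_{t -> 1} (1 - t^(k - z)) / (1 - t) = k - z
limit_value = sp.limit((1 - t**(k - z))/(1 - t), t, 1)
assert limit_value == k - z
```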
9.

Equation (60) was originally conjectured by O. Warnaar in a private communication, after our first paper [2] appeared. This communication motivated much of the work that was performed in the current paper.

## Notes

### Acknowledgments

We express our sincere thanks to Ole Warnaar, for many valuable insights and suggestions which motivated this work, and in particular for suggesting the idea of $$u$$-deformations of the original identities (1)–(3), and to Eric Rains, for helping us in arriving at Conjecture 1. M.W. would like to thank Frédéric Jouhet for his interest in this work and for pointing out the reference [1], and Jean-Christophe Aval, Philippe Nadeau and Eric Ragoucy for invitations to present related work at LaBRI, ICJ and LAPTh, respectively. We finally wish to acknowledge the open-source package Sage, whose built-in functions for Hall–Littlewood and Macdonald polynomials were indispensable in verifying some of the conjectures. This work was done under the support of the ERC Grant 278124, “Loop models, integrability and combinatorics”.

### References

1.
Andrews, G.E., Goulden, I.P., Jackson, D.M.: Generalizations of Cauchy’s summation theorem for Schur functions. Trans. Am. Math. Soc. 310(2), 805–820 (1988)
2.
Betea, D., Wheeler, M.: Refined Cauchy and Littlewood identities, plane partitions, and symmetry classes of alternating sign matrices (2014). arXiv:1402.0229
3.
Izergin, A.G.: Partition function of the six-vertex model in a finite volume. Dokl. Akad. Nauk SSSR 297(2), 331–333 (1987)
4.
Kirillov, A.N., Noumi, M.: $$q$$-difference raising operators for Macdonald polynomials and the integrality of transition coefficients. In: Algebraic Methods and $$q$$-Special Functions (Montréal, QC, 1996), CRM Proc. Lecture Notes, vol. 22, pp. 227–243. Am. Math. Soc., Providence, RI (1999)
5.
Korepin, V., Zinn-Justin, P.: Thermodynamic limit of the six-vertex model with domain wall boundary conditions. J. Phys. A 33(40), 7053–7066 (2000)
6.
Korepin, V.E.: Calculation of norms of Bethe wave functions. Commun. Math. Phys. 86(3), 391–418 (1982)
7.
Kuperberg, G.: Another proof of the alternating-sign matrix conjecture. Int. Math. Res. Not. 3, 139–150 (1996)
8.
Kuperberg, G.: Symmetry classes of alternating-sign matrices under one roof. Ann. Math. 156(3), 835–866 (2002)
9.
Macdonald, I.G.: Symmetric Functions and Hall Polynomials. Oxford Mathematical Monographs, 2nd edn. With contributions by A. Zelevinsky. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York (1995)
10.
Okounkov, A.: BC-type interpolation Macdonald polynomials and binomial formula for Koornwinder polynomials. Transform. Groups 3(2), 181–207 (1998)
11.
Rains, E.M.: $$BC_n$$-symmetric polynomials. Transform. Groups 10(1), 63–132 (2005)
12.
Sklyanin, E.K.: Boundary conditions for integrable quantum systems. J. Phys. A 21(10), 2375–2389 (1988)
13.
Sundaram, S.: Tableaux in the representation theory of the classical Lie groups. In: Invariant Theory and Tableaux (Minneapolis, MN, 1988), IMA Vol. Math. Appl., vol. 19, pp. 191–225. Springer, New York (1990)
14.
Tsuchiya, O.: Determinant formula for the six-vertex model with reflecting end. J. Math. Phys. 39(11), 5946–5951 (1998)
15.
Venkateswaran, V.: Symmetric and nonsymmetric Hall–Littlewood polynomials of type BC (2013). arXiv:1209.2933v2
16.
Vuletić, M.: A generalization of MacMahon’s formula. Trans. Am. Math. Soc. 361(5), 2789–2804 (2009)
17.
Warnaar, S.O.: Bisymmetric functions, Macdonald polynomials and $$sl_{3}$$ basic hypergeometric series. Compos. Math. 144(2), 271–303 (2008)
18.
Warnaar, S.O.: Remarks on the paper “Skew Pieri rules for Hall–Littlewood functions” by Konvalinka and Lauve. J. Algebr. Comb. 38(3), 519–526 (2013)