1 Introduction

Throughout the paper, let \(G=GL(m|n)\) be the general linear supergroup defined over an algebraically closed field K of characteristic zero and \(G_{ev}=GL(m)\times GL(n)\) be its even subsupergroup. Induced (super)modules \(H^0_G(\lambda )\) are fundamental building blocks of the representation theory of G. For the definition and properties of induced modules of algebraic groups, see Chapters 3 and 4 of [10] and Sect. 1.2. For the properties of the induced supermodule \(H^0_G(\lambda )\) used in this paper, see [28] and Sect. 1.3.

In our earlier paper [14], we gave explicit formulae for certain \(G_{ev}\)-primitive (or, for short, even-primitive) vectors \(\pi _{I|J}\). These vectors form a basis of even-primitive vectors of \(H^0_G(\lambda )\) in some special weight spaces (which are described using the concept of robustness)—see Theorem 4.4 of [14].

The main obstacle is that vectors \(\pi _{I|J}\) do not belong to \(H^0_G(\lambda )\) in general. In the comments following Theorem 4.4 of [14], we have discussed the possibility that all even-primitive vectors in \(H^0_G(\lambda )\) could be written as specific linear combinations of the vectors \(\pi _{I|J}\). One of the purposes of this paper is to confirm this speculation for even-primitive vectors in the largest polynomial subsupermodule \(\nabla (\lambda )\) of \(H^0_G(\lambda )\). We also give explicit formulae for specific even-primitive vectors in induced G-supermodules \(H^0_G(\lambda )\) and describe a particular basis of even-primitive vectors of \(\nabla (\lambda )\). Since for polynomial weights \(\lambda \), the multiplicity of even-primitive vectors is given by certain Littlewood–Richardson coefficients, one can view the results of this paper as an “algebraization” of these combinatorial quantities.

The combinatorial techniques we use are related to Young tableaux and are appropriate for the description of \(\nabla (\lambda )\). The supermodules \(\nabla (\lambda )\) are costandard supermodules over a Schur superalgebra S(m|n, r) of the appropriate degree r; they are of independent interest, and our results bring an understanding of their \(G_{ev}\)-structure. Results about even-primitive vectors were previously known only for the individual cases of the Schur superalgebras S(1|1), S(2|1), S(3|1), and S(2|2), and were given in the papers [8, 9, 15, 18]. Once all even-primitive vectors in \(\nabla (\lambda )\) are known, we can tensor with appropriate powers of “even” determinants and obtain a description of even-primitive vectors for an arbitrary induced supermodule \(H^0_G(\lambda )\).

An understanding of certain even-primitive vectors in induced supermodules \(H^0_G(\lambda )\) was one of the ingredients used in the proof of the linkage principle for GL(m|n) in arbitrary characteristic p different from 2, recently obtained in [20]. The description of even-primitive vectors in the modules \(\nabla (\lambda )\) is likely connected to the linkage principle for Schur superalgebras. In a forthcoming paper [17], we apply this description of even-primitive vectors to derive results related to the odd linkage for G. This brings a combinatorial perspective and provides additional valuable insight into the representation theory of GL(m|n).

The structure of the paper is as follows. In Sect. 1 we fix notation related to the general linear supergroup G and induced supermodules \(H^0_G(\lambda )\) and \(H^0_{G_{ev}}(\lambda )\). In particular, we work with the induced supermodule \(H^0_G(\lambda )\) represented as \(H^0_{G_{ev}}(\lambda )\otimes \Lambda (Y)\), where \(Y=V_m^*\otimes V_n\) is the tensor product of the dual of the natural GL(m)-module \(V_m\) and the natural GL(n)-module \(V_n\), and \(\Lambda (Y)\) is the exterior algebra of Y. Also, we describe even-primitive elements \(\pi _{I|J}\) and their building blocks \(\rho _{I|J}\). In Sect. 2 we derive congruences for some transposition operators involving \(\rho _{I|J}\) modulo certain bideterminants. In Sect. 3 we prove many auxiliary determinantal identities used in the following sections. In Sect. 4 we explain the general setup of diagrams and tableaux and define the operators \(\sigma ^+\), \(\sigma ^-\), \(\sigma \) and positioning maps. Using these tools, we construct \(G_{ev}\)-primitive vectors in \(H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\) and in the floors \(F_k=H^0_{G_{ev}}(\lambda )\otimes \wedge ^k Y\) of \(H^0_G(\lambda )\). In Sect. 5 we define the operators \(\tau ^+\), \(\tau ^-\) and \(\tau \) on a tableau T. We also discuss properties of these operators and repositioning maps. Using these operators, we can describe certain even-primitive vectors of \(H^0_G(\lambda )\). In Sect. 6 we discuss Clausen preorders, linear independence of even-primitive vectors, and the action of the operator \(\tau \) on Littlewood–Richardson tableaux. Also, for a hook partition \(\lambda \) and irreducible \(H^0_G(\lambda )\), we obtain a basis of even-primitive vectors in \(H^0_G(\lambda )\). In Sect. 7 we consider the Schur superalgebra S(m|n) and determine a basis of all even-primitive vectors in \(\nabla (\lambda )\), the largest polynomial subsupermodule of \(H^0_G(\lambda )\), and the corresponding costandard supermodule for S(m|n). We also provide a connection of our combinatorial construction to pictures in the sense of Zelevinsky [27].

2 Background and notation

For more information about the general linear supergroup \(G=GL(m|n)\), its even subsupergroup \(G_{ev}\) and their coordinate and distribution algebras, superderivations, and simple and induced supermodules within the context relevant to this paper, see [3, 14, 28].

2.1 General linear supergroups

Let the parity of an index \(1\le i\le m\) be \(|i|=0\) and the parity of an index \(m+1\le j\le m+n\) be \(|j|=1\). Let A(m|n) be the superalgebra freely generated by elements \(c_{ij}\) for \(1\le i,j \le m+n\) subject to the supercommutativity relation

$$\begin{aligned} c_{ij}c_{kl}=(-1)^{|c_{ij}||c_{kl}|} c_{kl}c_{ij}, \end{aligned}$$

where \(|c_{ij}|\equiv |i|+|j| \pmod 2\) is the parity of the element \(c_{ij}\). Denote by \(A(m|n)_0\) and \(A(m|n)_1\) the subsets of A(m|n) consisting of elements of even and odd parity, respectively. There is a natural grading on A(m|n) given by the degree r. The homogeneous component of A(m|n) corresponding to degree \(r\ge 0\) is denoted by A(m|n, r). The dual of A(m|n, r) is the Schur superalgebra denoted by S(m|n, r).

Denote by \(A_{ev}(m|n)\) the subsuperalgebra of A(m|n) generated by the elements \(c_{ij}\) such that \(|i|=|j|\). The set \(A(m|n)_0\) is a subsuperalgebra of A(m|n), but it is not a domain. On the other hand, the superalgebra \(A_{ev}(m|n)\) is a domain. We work inside the localization \(A(m|n)(A_{ev}(m|n)\setminus 0)^{-1}\), which is denoted by K(m|n).

The superalgebra A(m|n) also has a structure of a superbialgebra given by the comultiplication \(\Delta (c_{ij})=\sum _{k=1}^{m+n} c_{ik}\otimes c_{kj}\) and the counit \(\epsilon \) given by \(\epsilon (c_{ij})=\delta _{ij}\).

Write the \((m+n)\times (m+n)\)-matrix \(C=(c_{ij})\) as a block matrix

$$\begin{aligned} C= \begin{pmatrix} C_{11} &{}\quad C_{12} \\ C_{21} &{}\quad C_{22} \end{pmatrix}, \end{aligned}$$

where \(C_{11}, C_{12}, C_{21},\) and \(C_{22}\) are matrices of sizes \(m\times m\), \(m\times n\), \(n\times m\), and \(n\times n\), respectively. The localization of A(m|n) at the element \(\mathrm{det}(C_{11})\, \mathrm{det}(C_{22})\) is a Hopf superalgebra K[G], where the antipode \(s:X\rightarrow X'\) is given by

$$\begin{aligned} \begin{aligned}&X'_{11}=(X_{11}-X_{12}X_{22}^{-1}X_{21})^{-1},&X'_{22}&=(X_{22}-X_{21}X_{11}^{-1}X_{12})^{-1},\\&X'_{12}=-X_{11}^{-1}X_{12}X'_{22},&X'_{21}&=-X_{22}^{-1}X_{21}X'_{11}. \end{aligned} \end{aligned}$$

The Hopf superalgebra K[G] is the coordinate algebra of the general linear supergroup \(G=GL(m|n)\). The general linear supergroup G is the functor from commutative superalgebras to groups given by \(A\mapsto \text {Hom}_{\text {superalg}}(K[G],A)\). The category of G-supermodules is identical to the category of K[G]-supercomodules.

General linear groups GL(m) and GL(n) are embedded in GL(m|n) as its even subsupergroups. There is a standard maximal even subsupergroup \(G_{ev}\) of G such that \(G_{ev}\simeq GL(m)\times GL(n)\), corresponding to matrices X where blocks \(X_{12}\) and \(X_{21}\) vanish.

The supergroups G and \(G_{ev}\) have the same standard maximal torus \(T=T(m|n)\simeq (K^*)^{m+n}\) corresponding to diagonal matrices. Therefore, weights of G and \(G_{ev}\) are the same, and a weight \(\lambda \) is denoted by \((\lambda _1, \ldots , \lambda _m|\lambda _{m+1}, \ldots , \lambda _{m+n})\) and \(|\lambda |=\sum _{k=1}^{m+n} \lambda _k\) denotes its degree.

Simple G- and \(G_{ev}\)-supermodules are in one-to-one correspondence (up to a parity shift) with dominant weights \(\lambda \), where \(\lambda _1\ge \cdots \ge \lambda _m\) and \(\lambda _{m+1} \ge \cdots \ge \lambda _{m+n}\). We disregard the parity shift and denote the simple G-supermodule of the highest weight \(\lambda \) by \(L(\lambda )\), and the simple \(G_{ev}\)-module of the highest weight \(\lambda \) by \(L_{ev}(\lambda )\).

Since the characteristic of the field K is zero, the G-module structure is described by the action of Lie superalgebra \({\mathfrak {g}}{\mathfrak {l}}(m|n)\). In the paper [14], we have used the language of superderivations. The action of \(e_{ji}\in {\mathfrak {g}}{\mathfrak {l}}(m|n)\) corresponds to the action of the right superderivation \(_{ij}D\) so that \(e_{ji}.v = (v)_{ij}D\).

The superderivation \(_{ij}D\) is determined by the property

$$\begin{aligned} (uv)_{ij}D=(-1)^{(|i|+|j|)|v|}(u)_{ij}Dv + u (v)_{ij}D \end{aligned}$$

and its action on elements of A(m|n) given by \((c_{kl})_{ij}D=\delta _{li} c_{kj}\). The action of \(_{ij}D\) extends to K(m|n) using the quotient rule

$$\begin{aligned} \left( \frac{u}{v}\right) _{ij}D=\frac{(u)_{ij}Dv-u(v)_{ij}D}{v^2} \end{aligned}$$

for \(u,v\in A(m|n)\) and v even. For more information, consult Sect. 4 of [13].

2.2 Induced GL(m)-modules

We need to review some classical results about induced modules for the general linear group GL(m).

The group GL(m) has a standard torus \(T^+\simeq (K^*)^{m}\) corresponding to diagonal matrices and a standard Borel subgroup \(B^+\) corresponding to lower triangular matrices of size \(m\times m\). Let \(\lambda ^+=(\lambda _1, \ldots , \lambda _m)\) be a weight of GL(m) and let \(K_{\lambda ^+}\) be the one-dimensional \(T^+\)-module of the highest weight \(\lambda ^+\). We can consider \(K_{\lambda ^+}\) as a \(B^+\)-module via extending the action from \(T^+\) to \(B^+\) trivially. The induced module \(H^0_{GL(m)}(\lambda ^+)\), which is the zeroth cohomology \(H^0(GL(m)/B^+, K_{\lambda ^+})\), is defined to be \(Ind_{B^+}^{GL(m)}(K_{\lambda ^+})\). Since \(\mathrm{det}(C_{11})\) is invertible, each \(H^0_{GL(m)}(\lambda ^+)\) is isomorphic to the tensor product

$$\begin{aligned} H^0_{GL(m)}(\lambda ')\otimes \mathrm{det}(C_{11})^{\lambda ^+_m}, \end{aligned}$$

where

$$\begin{aligned} \lambda '=(\lambda ^+_1-\lambda ^+_m, \ldots , \lambda ^+_{m-1}-\lambda ^+_m,0) \end{aligned}$$

is a weight with nonnegative entries.

Assume now that \(\lambda ^+\) is dominant and all entries in \(\lambda ^+\) are nonnegative. We can realize \(H^0_{GL(m)}(\lambda ^+)\) as follows. For indices \(i_1, \ldots , i_s\) that are distinct elements of the set \(\{1, \ldots , m\}\), denote the determinant

$$\begin{aligned}D^+(i_1, \ldots , i_s)= \begin{array}{|ccc|} c_{1,i_1} &{} \ldots &{} c_{1,i_s} \\ c_{2,i_1} &{} \ldots &{} c_{2,i_s} \\ \ldots &{} \ldots &{} \ldots \\ c_{s,i_1} &{} \ldots &{} c_{s,i_s} \end{array}. \end{aligned}$$

Denote by \(D^+(s)\) any determinant \(D^+(i_1, \ldots , i_s)\) of size s. We say that any expression of type

$$\begin{aligned} \prod _{a=1}^{m-1} D^+(a)^{\lambda ^+_a-\lambda ^+_{a+1}}D^+(m)^{\lambda ^+_m} \end{aligned}$$

is a bideterminant of shape \(\lambda ^+\).

Then any bideterminant of shape \(\lambda ^+\) is an element of \(H^0_{GL(m)}(\lambda ^+)\), and \(H^0_{GL(m)}(\lambda ^+)\) has a basis given by standard bideterminants of the shape \(\lambda ^+\)—see Sect. 4 of [6] or [21].

The action of Lie algebra \({\mathfrak {g}}{\mathfrak {l}}(m)\) on the basis elements of \(H^0_{GL(m)}(\lambda ^+)\) is expressed in terms of the action of derivations \(_{ij}D\), where \(1\le i,j\le m\), as follows.

$$\begin{aligned} (D^+(i_1,\ldots , i_s))_{ij}D= D^+(i_1, \ldots , \widehat{i_t}, j, \ldots , i_s) \end{aligned}$$

if \(i=i_t\) for some \(t=1,\ldots , s\) and \((D^+(i_1,\ldots , i_s))_{ij}D=0\) otherwise.
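This rule can be spot-checked symbolically for small m. The following sympy sketch (an illustration, not part of the paper) realizes \(_{ij}D\) on polynomials in the \(c_{kl}\) as the operator \(f\mapsto \sum _k c_{kj}\,\partial f/\partial c_{ki}\), which reproduces \((c_{kl})_{ij}D=\delta _{li} c_{kj}\), and verifies the displayed action on two small determinants.

```python
from sympy import symbols, Matrix, simplify

m = 4
c = [[symbols(f'c{k}{l}') for l in range(1, m + 1)] for k in range(1, m + 1)]

def D_plus(cols):
    """D^+(i_1,...,i_s): determinant on rows 1,...,s and columns i_1,...,i_s."""
    return Matrix([[c[r][i - 1] for i in cols] for r in range(len(cols))]).det()

def apply_ijD(f, i, j):
    """The derivation _{ij}D on polynomials in the c_{kl}: f -> sum_k c_{kj} df/dc_{ki}."""
    return sum(c[k][j - 1] * f.diff(c[k][i - 1]) for k in range(m))

# (D^+(1,3))_{3,2}D = D^+(1,2): the column index 3 is replaced by 2.
print(simplify(apply_ijD(D_plus([1, 3]), 3, 2) - D_plus([1, 2])) == 0)   # True
# (D^+(1,3))_{2,4}D = 0: the index 2 does not occur among the columns.
print(simplify(apply_ijD(D_plus([1, 3]), 2, 4)) == 0)                    # True
```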

An analogous description applies to GL(n), its weights \(\lambda ^-\) and induced GL(n)-modules \(H^0_{GL(n)}(\lambda ^-)\).

2.3 The \(G_{ev}\)-structure of induced G-supermodules

Combining the above descriptions of induced GL(m)- and GL(n)-modules, we can get a description of induced \(G_{ev}\)-modules.

The group \(G_{ev}\) has a standard Borel subgroup \(B_{ev}\) corresponding to lower triangular matrices X, where blocks \(X_{12}\) and \(X_{21}\) vanish.

Denote by \(K_{\lambda }\) the one-dimensional T-supermodule of the highest weight \(\lambda \). We can consider \(K_{\lambda }\) as a \(B_{ev}\)-supermodule via extending the action from T to \(B_{ev}\) trivially. The induced supermodule \(H^0_{G_{ev}}(\lambda )\), which is the zeroth cohomology \(H^0(G_{ev}/B_{ev}, K_{\lambda })\), is defined to be \(Ind_{B_{ev}}^{G_{ev}}(K_{\lambda })\).

Analogously, the group G has a standard Borel subgroup B corresponding to lower triangular matrices. Let \(\lambda \) be a weight of G and let \(K_{\lambda }\) be the one-dimensional T-module of the highest weight \(\lambda \). We can consider \(K_{\lambda }\) as a B-module via extending the action from T to B trivially. The induced supermodule \(H^0_{G}(\lambda )\), which is the zeroth cohomology \(H^0(G/B, K_{\lambda })\), is defined to be \(Ind_{B}^{G}(K_{\lambda })\).

For an understanding of the \(G_{ev}\)-structure of the induced supermodule \(H^0_G(\lambda )\), the following result that presents \(H^0_G(\lambda )\) as a supermodule embedded inside K[G] is significant.

The G-supermodule \(H^0_G(\lambda )\) is described explicitly using the isomorphism \({\tilde{\phi }} :H^0_{G_{ev}}(\lambda )\otimes S(C_{12})\rightarrow H^0_G(\lambda )\) of superspaces defined in Lemma 5.1 of [28]. This map is a restriction of the multiplicative morphism \(\phi :K[G]\rightarrow K[G]\) given on generators as follows:

$$\begin{aligned} C_{11}\mapsto C_{11}, C_{21}\mapsto C_{21}, C_{12}\mapsto C_{11}^{-1}C_{12}, C_{22}\mapsto C_{22}-C_{21}C_{11}^{-1}C_{12}. \end{aligned}$$

The map \({\tilde{\phi }}\) is an isomorphism of \(G_{ev}\)-supermodules and its image is a G-subsupermodule of K[G]. Using this map, we consider \(H^0_{G_{ev}}(\lambda )\) embedded into \(H^0_G(\lambda )\) and its highest vector v is represented as a product of bideterminants of type

$$\begin{aligned}D^+(i_1, \ldots , i_s)= \begin{array}{|ccc|} c_{1,i_1} &{} \ldots &{} c_{1,i_s} \\ c_{2,i_1} &{} \ldots &{} c_{2,i_s} \\ \ldots &{} \ldots &{} \ldots \\ c_{s,i_1} &{} \ldots &{} c_{s,i_s} \end{array} \end{aligned}$$

and

$$\begin{aligned}D^-(j_1, \ldots , j_t)= \begin{array}{|ccc|} \phi (c_{m+1,j_1}) &{} \ldots &{} \phi (c_{m+1,j_t}) \\ \phi (c_{m+2,j_1}) &{} \ldots &{} \phi (c_{m+2,j_t}) \\ \ldots &{} \ldots &{} \ldots \\ \phi (c_{m+t,j_1}) &{} \ldots &{} \phi (c_{m+t,j_t}) \end{array}, \end{aligned}$$

where indices \(i_1, \ldots , i_s\) are distinct elements of the set \(\{1, \ldots , m\}\), and \(j_1, \ldots , j_t\) are distinct elements of the set \(\{m+1, \ldots , m+n\}\). Namely,

$$\begin{aligned} v=\prod _{a=1}^m D^+(1,\ldots , a)^{\lambda _a-\lambda _{a+1}}\prod _{b=1}^n D^-(m+1, \ldots , m+b)^{\lambda _{m+b} - \lambda _{m+b+1}}. \end{aligned}$$

Denote by \(D^+(s)\) any determinant \(D^+(i_1, \ldots , i_s)\) of size s and by \(D^-(t)\) any determinant \(D^-(j_1, \ldots , j_t)\) of size t. Then any bideterminant of type

$$\begin{aligned} \prod _{a=1}^m D^+(a)^{\lambda _a-\lambda _{a+1}}\prod _{b=1}^n D^-(b)^{\lambda _{m+b} - \lambda _{m+b+1}} \end{aligned}$$

is an element of \(H^0_{G_{ev}}(\lambda )\). It is well known that the induced \(G_{ev}\)-modules have a basis given by semistandard bideterminants of the shape \(\lambda \)—for their description within our context consult [14]. The \(G_{ev}\)-module structure of \(H^0_{G_{ev}}(\lambda )\) is completely described by the action of even superderivations \(_{ij}D\) on the above defined bideterminants.

To describe the basis and the G-supermodule structure of \(H^0_G(\lambda )\), the action of odd superderivations \(_{ij}D\) on bideterminants must be computed—this was done in Sect. 2 of [14]. This description also involves products of elements \(y_{kl}\) given as

$$\begin{aligned} y_{kl}=\phi (c_{kl})=\frac{A_{k1}c_{1l}+A_{k2}c_{2l}+\ldots +A_{km}c_{ml}}{D} \end{aligned}$$

for \(1\le k\le m\) and \(m+1\le l \le m+n\), where the matrix \(A=(A_{ij})\) is the adjoint of the matrix \(C_{11}\) and \(D=\mathrm{Det}(C_{11})\).
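For small m and n one can confirm symbolically that the adjugate formula above reproduces the entries of \(C_{11}^{-1}C_{12}\). The following sympy sketch (an illustration, not code from the paper) does this for \(m=n=2\); the odd entries of \(C_{12}\) are modeled by commuting symbols, which is harmless here since the formula is linear in them.

```python
from sympy import symbols, Matrix, simplify

m, n = 2, 2
C11 = Matrix(m, m, lambda i, j: symbols(f'c{i+1}{j+1}'))
C12 = Matrix(m, n, lambda i, j: symbols(f'c{i+1}{m+j+1}'))

A = C11.adjugate()            # the matrix (A_{ij}) of the displayed formula
D = C11.det()                 # D = det(C_{11})
Y_formula = (A * C12) / D     # entries y_{kl} computed from the displayed formula
Y_direct = C11.inv() * C12    # y_{kl} = phi(c_{kl}), the (k,l)-entry of C11^{-1} C12

print(all(simplify(e) == 0 for e in (Y_formula - Y_direct)))   # True
```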

The span of all elements \(y_{kl}\) for \(1\le k\le m\) and \(m+1\le l\le m+n\) is a \(G_{ev}\)-supermodule that is denoted by Y.

2.4 The largest polynomial subsupermodule \(\nabla (\lambda )\) of \(H^0_G(\lambda )\)

A G-supermodule V is polynomial if and only if its coefficient space cf(V) belongs to A(m|n), that is, if every \(g\in G\) acts on V in such a way that all matrix coefficients of g (with respect to any basis of V) are polynomial functions in the entries \(g_{ij}\). It follows that all components of every weight of a polynomial supermodule V are nonnegative. If \(\lambda \) is the highest weight of a polynomial G-supermodule, it is called a polynomial weight of G. Since \(char(K)=0\), the polynomial weights of G correspond to (m|n)-hook partitions—see [1]. A complete description of polynomial weights in the case \(char(K)=p\ne 2\) was obtained in [3]. The largest polynomial subsupermodule of \(H^0_G(\lambda )\) is denoted by \(\nabla (\lambda )\).

The category of polynomial G-supermodules of degree \(r\ge 0\) is equivalent to the category of supermodules over the Schur superalgebra S(m|n, r). Under this equivalence, the supermodule \(\nabla (\lambda )\) corresponds to the costandard module of the highest weight \(\lambda \) for the Schur superalgebra S(m|n, r). It has been proved in [28] that the category of G-supermodules is a highest weight category. On the other hand, according to [19], the category of supermodules over the Schur superalgebra S(m|n, r) is a highest weight category if and only if it is semisimple. Consequently, the structure of S(m|n, r) is much more elusive than that of G. For more information on \(\nabla (\lambda )\), see Sect. 6 of [28] and Sect. 5 of [3].

In the classical case of the general linear group GL(m), any induced module \(H^0_{GL(m)}(\mu )\), where \(\mu =(\mu _1, \ldots , \mu _m)\) is dominant, is isomorphic to the tensor product of the induced module \(H^0_{GL(m)}(\mu _1-\mu _m, \ldots , \mu _{m-1}-\mu _m,0)\), which is a polynomial GL(m)-module, with the \(\mu _m\)th power of the determinant representation of GL(m). The structure of the induced modules \(H^0_{GL(m)}(\mu )\) for polynomial \(\mu \) is given by bideterminants and is well understood. Therefore, the GL(m)-structure of all induced modules \(H^0_{GL(m)}(\mu )\) is known. By extension, the \(G_{ev}\)-structure of \(H^0_{G_{ev}}(\lambda )\), for every dominant \(\lambda \) as before, is determined, and we use it later.

The connection between the \(G_{ev}\)-module structure of \(H^0_G(\lambda )\) and \(\nabla (\lambda )\) is not as satisfactory as in the case of GL(m)-modules. The element \(\mathrm{Ber}(C) = \mathrm {det}(C_{11}-C_{12}C_{22}^{-1}C_{21})\mathrm {det}(C_{22})^{-1}\) generates a one-dimensional G-supermodule \(\mathrm {Ber}\) of the weight \((1,\ldots , 1|-1,\ldots , -1)\). Since the Berezinian \(\mathrm {Ber}(C)\) is a group-like element, tensoring with powers of \(\mathrm {Ber}\) gives an isomorphism between the corresponding induced supermodules. If \(\lambda \) is polynomial, then the image of \(\nabla (\lambda )\) is again a subsupermodule of the corresponding induced supermodule, although it need not be a polynomial subsupermodule. This way, by tensoring with powers of \(\mathrm {Ber}\), we can extend results derived for \(\nabla (\lambda )\) to subsupermodules of certain, but not all, induced supermodules.

Assume now that a polynomial weight of G corresponds to an (m|n)-hook partition \(\lambda \). For every (m|n)-hook partition \(\lambda =(\lambda _1, \ldots , \lambda _t)\), denote by \(\lambda ^+=(\lambda _1, \ldots , \lambda _{\min \{m,t\}})\) and by \(\lambda ^-\) the transpose of the partition \((\lambda _{m+1}, \ldots , \lambda _{t})\). Note that \(\lambda ^-\) is nonzero only if \(t>m\). The corresponding polynomial weight \(\lambda \) of G is identified with \((\lambda ^+|\lambda ^-)\).

Denote by \(S_{\mu }(x_1, \ldots , x_m)\) the Schur function and by \(S_{\lambda /\mu }(y_1, \ldots , y_n)\) the skew Schur function. It is well known (see [1]) that the character of the induced G-supermodule of the highest weight \(\lambda \) is given by the hook Schur function \(HS_{\lambda }\), described as follows:

$$\begin{aligned} HS_{\lambda }(x_1, \ldots , x_m;y_1, \ldots y_n)=\sum _{\mu <\lambda ^+} S_{\mu }(x_1, \ldots , x_m) S_{\lambda '/ \mu '}(y_1, \ldots , y_n), \end{aligned}$$

where \({\lambda '/ \mu '}\) is the conjugate of the skew partition \(\lambda /\mu \). The hook Schur function can also be given as \(\sum _{T_{\lambda }(m,n) \,\, \mathrm {semistandard}} T_{\lambda }(x_1, \ldots , x_m;y_1, \ldots , y_n)\), where the summands \(T_{\lambda }(x_1, \ldots , x_m;y_1, \ldots , y_n)\) are monomials whose exponents count the numbers of appearances of the symbols 1 through \(m+n\) in the tableau T.

According to Theorem 6.11 of [1], the dimension of the space of even-primitive vectors of weight \((\mu |\nu )\) in \(H^0_G(\lambda )\) is given by the Littlewood–Richardson coefficient \(C^{\lambda '}_{\mu '\nu }\) from the decomposition of the skew Schur function \(S_{\lambda '/ \mu '}=\sum C^{\lambda '}_{\mu '\nu } S_{\nu }\) determined by the Littlewood–Richardson rule. This multiplicity equals the number of Littlewood–Richardson tableaux of the shape \({\lambda '/ \mu '}\) and the content \(\nu \). For more information on the above, consult [1, 5] and [25].
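These Littlewood–Richardson coefficients can be computed by directly enumerating Littlewood–Richardson skew tableaux. The following brute-force Python sketch (practical only for very small shapes, and not the construction developed in this paper) counts semistandard skew fillings of a given content whose reverse reading word is a lattice word.

```python
from itertools import product

def lr_coefficient(lam, mu, nu):
    """Count Littlewood-Richardson skew tableaux of shape lam/mu and content nu."""
    mu = list(mu) + [0] * (len(lam) - len(mu))
    cells = [(r, c) for r, row in enumerate(lam) for c in range(mu[r], row)]
    count = 0
    for filling in product(range(1, len(nu) + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        if any(filling.count(i + 1) != nu[i] for i in range(len(nu))):
            continue  # wrong content
        if any(T[(r, c)] > T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T):
            continue  # rows must weakly increase
        if any(T[(r, c)] >= T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T):
            continue  # columns must strictly increase
        # the reverse reading word (right to left, top to bottom) must be a lattice word
        word = [T[(r, c)] for r in range(len(lam))
                for c in sorted(range(mu[r], lam[r]), reverse=True)]
        counts = [0] * (len(nu) + 1)
        ok = True
        for x in word:
            counts[x - 1] += 1
            if x > 1 and counts[x - 1] > counts[x - 2]:
                ok = False
                break
        count += ok
    return count

# c^{(3,2,1)}_{(2,1),(2,1)} = 2 and c^{(2,1)}_{(1),(1,1)} = 1:
print(lr_coefficient([3, 2, 1], [2, 1], [2, 1]))   # 2
print(lr_coefficient([2, 1], [1], [1, 1]))         # 1
```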

2.5 The primitive vectors \(\pi _{I|J}\)

Primitive vectors play a crucial role in the representation theory of algebraic (groups and) supergroups because they generate subsupermodules within a given supermodule. Their description is essential for the understanding of the structure of induced supermodules.

To define what a G-primitive vector is, we need to consider a unipotent subsupergroup \(U^-\) of G corresponding to lower triangular matrices with all diagonal entries equal to 1, and a unipotent subsupergroup \(U^+\) of G corresponding to upper triangular matrices with all diagonal entries equal to 1. There is a Cartan decomposition \(G=U^+TU^-\). A vector v of the weight \(\lambda \) of a supermodule M is called a primitive vector if any element of \(U^-\) annihilates it.

Analogously, we have a Cartan decomposition \(G_{ev}=U^+_{ev}TU^-_{ev}\), where \(U^+_{ev}\) is the unipotent subgroup of \(G_{ev}\) corresponding to upper triangular matrices from \(G_{ev}\) with all diagonal entries equal to 1, and \(U^-_{ev}\) is the unipotent subgroup of \(G_{ev}\) corresponding to lower triangular matrices from \(G_{ev}\) with all diagonal entries equal to 1. A vector v of the weight \(\lambda \) of a module M is called \(G_{ev}\)-primitive (or an even-primitive vector) if any element of \(U^-_{ev}\) annihilates it.

Since we are interested in even-primitive vectors inside of an induced supermodule \(H^0_G(\lambda )\) considered as a subsupermodule of K[G], we can describe its even-primitive vectors using superderivations \(_{ij}D\) as follows. A vector \(v\in K[G]\) (or more generally \(v\in K(m|n)\)) is an even-primitive vector if and only if \((v)_{ij}D=0\) whenever \( i>j\) and either \(1\le i,j \le m\) or \(m+1\le i,j \le m+n\).

Fix a dominant weight \(\lambda \) of G. Consider the multi-index

$$\begin{aligned} (I|J)=(i_1\ldots i_k|j_1\ldots j_k) \end{aligned}$$

such that \(1\le i_1, \ldots , i_k \le m\) and \(1\le j_1, \ldots , j_k\le n\), and call k the length of I and J. Denote by \(cont(I|J)=(cont(I)|cont(J))\) the content of (I|J), the vector counting the numbers of appearances of the symbols 1 through \(m+n\) in (I|J). Denote by \(\delta ^+_i\) the weight of G with all components equal to zero except for the ith component, which is equal to 1, and denote by \(\delta ^-_j\) the weight of G with all components equal to zero except for the \((m+j)\)th component, which is equal to 1. Then define the weight \(\lambda _{I|J}\) corresponding to \(\lambda \) and the multi-index (I|J) by

$$\begin{aligned} \lambda _{I|J}=\lambda -\sum _{s=1}^k \delta ^+_{i_s}+\sum _{s=1}^k \delta ^-_{j_s}. \end{aligned}$$

Assume \((I|J)=(i_1\ldots i_k|j_1\ldots j_k)\) is such that \(\lambda _{I|J}\) is dominant. If \(i_1\le i_2 \le \ldots \le i_k\) and \(i_r=i_{r+1}\) implies \(j_r<j_{r+1}\), then (I|J) is called left-admissible. Any (I|J) that is left-admissible is called simply admissible. If \(j_1\le j_2 \le \ldots \le j_k\) and \(j_s=j_{s+1}\) implies \(i_s<i_{s+1}\), then (I|J) is called right-admissible.
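As a quick illustration (a sketch only: it checks the ordering conditions and leaves the dominance of \(\lambda _{I|J}\) aside), the two admissibility conditions can be tested as follows, with I and J given as Python lists.

```python
def is_left_admissible(I, J):
    """i's weakly increase, and a repeated i forces strictly increasing j's."""
    pairs = list(zip(I, J))
    return all((i1 < i2) or (i1 == i2 and j1 < j2)
               for (i1, j1), (i2, j2) in zip(pairs, pairs[1:]))

def is_right_admissible(I, J):
    """j's weakly increase, and a repeated j forces strictly increasing i's."""
    pairs = list(zip(I, J))
    return all((j1 < j2) or (j1 == j2 and i1 < i2)
               for (i1, j1), (i2, j2) in zip(pairs, pairs[1:]))

# (I|J) = (1 1 2 | 1 2 1) is left-admissible but not right-admissible.
print(is_left_admissible([1, 1, 2], [1, 2, 1]),
      is_right_admissible([1, 1, 2], [1, 2, 1]))   # True False
```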

For each \(1\le i \le m\) and \(1\le j\le n\) denote by \(\rho _{i|j}\) the following element:

$$\begin{aligned} \sum _{r=i}^m D^+(1, \ldots , i-1,r)\sum _{s=1}^{j} (-1)^{s+j} D^-(m+1, \ldots , \widehat{m+s},\ldots , m+j)y_{r,m+s} \end{aligned}$$

of \(A_{ev}(m|n)\otimes Y\). As customary, we define \(D^-(\emptyset )=1\); this is used for \(s=j=1\) when \(D^-(m+1, \ldots , \widehat{m+s},\ldots , m+j)=1\). For each \((I|J)=(i_1\ldots i_k|j_1\ldots j_k)\) as above denote \(\rho _{I|J}=\otimes _{s=1}^k \rho _{i_s|j_s}\). We abuse the notation and consider \(\rho _{I|J}\) as an element of \(A_{ev}(m|n)\otimes Y^{\otimes k}\) via the map that sends \((f_1\otimes y_1)\otimes \ldots \otimes (f_k\otimes y_k)\) to \((f_1\ldots f_k)\otimes (y_1\otimes \ldots \otimes y_k)\), where \(f_1\ldots f_k\) is the product in \(A_{ev}(m|n)\).
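To make the definition concrete, here is a small sympy sketch (a hypothetical representation, not code from the paper) that records \(\rho _{i|j}\) as a dictionary sending each tensor factor \(y_{r,m+s}\) to its coefficient in \(A_{ev}(m|n)\); the entries entering the \(D^-\) determinants are modeled by independent symbols \(d_{kl}\).

```python
from sympy import symbols, Matrix

m, n = 3, 2
cp = [[symbols(f'c{k}{l}') for l in range(1, m + 1)] for k in range(1, m + 1)]
dm = [[symbols(f'd{k}{l}') for l in range(1, n + 1)] for k in range(1, n + 1)]

def D_plus(cols):
    """D^+(i_1,...,i_s) built from the c-block."""
    return Matrix([[cp[r][i - 1] for i in cols] for r in range(len(cols))]).det()

def D_minus(cols):
    """D^-(m+j_1,...,m+j_t) built from the d-block; D^-(empty) = 1."""
    if not cols:
        return 1
    return Matrix([[dm[r][j - 1] for j in cols] for r in range(len(cols))]).det()

def rho(i, j):
    """rho_{i|j} as a dict sending the tensor factor y_{r,m+s} to its coefficient."""
    result = {}
    for r in range(i, m + 1):
        for s in range(1, j + 1):
            hatted = [t for t in range(1, j + 1) if t != s]   # m+1,...,^(m+s),...,m+j
            result[(r, m + s)] = ((-1) ** (s + j)
                                  * D_plus(list(range(1, i)) + [r]) * D_minus(hatted))
    return result

for factor, coeff in rho(2, 2).items():
    print(factor, coeff)
```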

Consider the following element:

$$\begin{aligned} v_{I|J}=\frac{v}{\prod _{s=1}^k D^+(1, \ldots , i_s)\prod _{s=1}^k D^-(m+1,\ldots , m+j_s-1)}, \end{aligned}$$

where, in particular, \(D^-(m+1,\ldots , m+j_s-1)=1\) if \(j_s=1\).

The expression \(v_{I|J}\) depends only on the content \(cont(I|J)=(\iota |\kappa )\). By definition, \(v_{I|J}\) belongs to K(m|n).

Define

$$\begin{aligned} \pi _{I|J}=v_{I|J}\rho _{I|J}. \end{aligned}$$

It follows from Proposition 3.4 of [14], using the argument in the proof of Lemma 4.1 of [14], that every \(\pi _{I|J}\) is an even-primitive vector in K(m|n). Therefore, any linear combination of vectors \(\pi _{I|J}\), where the content of (I|J) is the same, is an even-primitive vector in K(m|n). It is a crucial observation, which follows from the definition of \(v_{I|J}\), that if such a linear combination belongs to A(m|n) (which can be checked by verifying certain congruences modulo \(D^+(1, \ldots , i_s)\) and \(D^-(m+1,\ldots , m+j_s-1)\)), then it also belongs to the subsupermodule \(\nabla (\lambda )\) of \(H^0_G(\lambda )\). This is the way we construct even-primitive vectors of \(\nabla (\lambda )\) and then of \(H^0_G(\lambda )\).

The weight of \(\pi _{I|J}\), which equals \((\lambda ^+-\iota |\lambda ^-+\kappa )\), is denoted by \((\mu |\nu )\), and \(v_{I|J}\) is denoted by \(v_{\mu |\nu }\). The components \(\iota \) and \(\kappa \) of cont(I|J) correspond to the skew partitions \(\lambda ^+/ \mu \) and \(\nu / \lambda ^-\), respectively.

In [14], the weight \(\lambda \) is called (I|J)-robust if \(v_{I|J}\) belongs to A(m|n). It is clear that this happens if and only if the symbol \(i_s<m\) appears at most \(\lambda ^+_{i_s}-\lambda ^+_{i_s+1}\) times in I, symbol m appears at most \(\lambda ^+_m\) times in I, and symbol \(j_t>1\) appears at most \(\lambda ^-_{j_t-1}-\lambda ^-_{j_t}\) times in J.
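The robustness criterion can be checked mechanically from the contents of I and J. The following Python sketch (an illustration, with \(\lambda ^+\) and \(\lambda ^-\) passed as lists of parts) mirrors the three conditions just stated.

```python
from collections import Counter

def is_robust(lam_plus, lam_minus, I, J, m):
    """Check the (I|J)-robustness criterion stated above."""
    lp = list(lam_plus) + [0] * (m + 1)
    lm = list(lam_minus) + [0] * (max(J, default=0) + 1)
    for i, cnt in Counter(I).items():
        bound = lp[m - 1] if i == m else lp[i - 1] - lp[i]
        if cnt > bound:        # symbol i occurs too often in I
            return False
    for j, cnt in Counter(J).items():
        if j > 1 and cnt > lm[j - 2] - lm[j - 1]:
            return False       # symbol j occurs too often in J
    return True

# lambda = (3,1|1,0) with m = 2: (1 1|1 2) is robust while (1 1 1|1 2 2) is not.
print(is_robust([3, 1], [1, 0], [1, 1], [1, 2], 2))        # True
print(is_robust([3, 1], [1, 0], [1, 1, 1], [1, 2, 2], 2))  # False
```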

From now on, the image of an element x under the natural map \(K(m|n) \otimes \otimes ^k Y \rightarrow K(m|n)\otimes \wedge ^k Y\) is denoted by \({\overline{x}}\). In particular, \({\overline{\rho }}_{I|J}\) and \({\overline{\pi }}_{I|J}\), respectively, are images of \(\rho _{I|J}\) and \(\pi _{I|J}\), respectively.

Theorem 4.4 of [14] states that if \(char(K)=0\), \((I|J)=(i_1\ldots i_k|j_1\ldots j_k)\) is admissible, \(\lambda \) is (I|J)-robust, \(\lambda _{I|J}=\tau \), and \(\tau _m\ge n\), then the set of all vectors \({\overline{\pi }}_{K|L}\) for admissible (K|L) such that \(cont(K|L)=cont(I|J)\) forms a basis of the space of even-primitive vectors of weight \(\tau \) in \(H^0_G(\lambda )\).

It is clear that each even-primitive vector in \(H^0_G(\lambda )\) has a weight \(\tau =\lambda _{I|J}\) for some admissible (I|J). In this paper, in the case \(char(K)=0\), we describe explicitly a basis of even-primitive vectors of weight \(\tau \) in \(H^0_G(\lambda )\) as linear combinations of vectors \({\overline{\pi }}_{K|L}\), where \(cont(K|L)=cont(I|J)\). Each coefficient in this linear combination is either zero, or plus or minus one.

3 Congruences modulo \(D^+(1, \ldots , i)\) and \(D^-(m+1, \ldots , m+j)\)

We start with the following lemma.

Lemma 2.1

Let \(1\le i<m\) and \(1\le j_1, j_2 \le n\). Then \(\rho _{i,j_1}\otimes \rho _{i+1,j_2}-\rho _{i+1,j_1}\otimes \rho _{i,j_2}\) equals

$$\begin{aligned} \begin{aligned}&\sum _{i\le r_1<r_2\le m}\sum _{s_1=1}^{j_1}\sum _{s_2=1}^{j_2} D^+(1,\ldots ,i-1,i)D^+(1,\ldots ,i-1,r_1,r_2)(-1)^{s_1+s_2+j_1+j_2} \\&\quad \times D^-(m+1,\ldots , \widehat{m+s_1}, \ldots , m+j_1)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j_2)\\&\quad \times [y_{r_1,m+s_1} \otimes y_{r_2,m+s_2}-y_{r_2,m+s_1}\otimes y_{r_1,m+s_2}]. \end{aligned} \end{aligned}$$

In particular, \(\rho _{i,j_1}\otimes \rho _{i+1,j_2}-\rho _{i+1,j_1}\otimes \rho _{i,j_2} \equiv 0 \pmod {D^+(1, \ldots , i)}\).

Proof

Write

$$\begin{aligned} \begin{aligned}&\rho _{i,j_1}\otimes \rho _{i+1,j_2}-\rho _{i+1,j_1}\otimes \rho _{i,j_2}\\&\quad =\left[ \sum _{r_1=i}^m D^+(1, \ldots , i-1, r_1)\sum _{s_1=1}^{j_1}(-1)^{s_1+j_1}D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , m+j_1) y_{r_1,m+s_1}\right] \\&\qquad \otimes \left[ \sum _{r_2=i+1}^m D^+(1, \ldots , i, r_2)\sum _{s_2=1}^{j_2}(-1)^{s_2+j_2}D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j_2) y_{r_2,m+s_2}\right] \\&\qquad -\left[ \sum _{r_2=i+1}^m D^+(1, \ldots , i, r_2)\sum _{s_1=1}^{j_1}(-1)^{s_1+j_1}D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , m+j_1) y_{r_2,m+s_1}\right] \\&\qquad \otimes \left[ \sum _{r_1=i}^m D^+(1, \ldots , i-1, r_1)\sum _{s_2=1}^{j_2}(-1)^{s_2+j_2}D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j_2) y_{r_1,m+s_2}\right] \\&\quad =\sum _{r_1=i}^m\sum _{r_2=i+1}^m\sum _{s_1=1}^{j_1}\sum _{s_2=1}^{j_2} D^+(1,\ldots ,i-1,r_1)D^+(1,\ldots ,i,r_2)(-1)^{s_1+s_2+j_1+j_2}\\&\qquad \times D^-(m+1,\ldots , \widehat{m+s_1}, \ldots , m+j_1)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j_2)\\&\qquad \times [y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}-y_{r_2,m+s_1}\otimes y_{r_1,m+s_2}]\\ \end{aligned} \end{aligned}$$

and break it up into two sums

$$\begin{aligned} \begin{aligned}&\sum _{r_1=i+1}^m\sum _{r_2=i+1}^m\sum _{s_1=1}^{j_1}\sum _{s_2=1}^{j_2} D^+(1,\ldots ,i-1,r_1)D^+(1,\ldots ,i,r_2)(-1)^{s_1+s_2+j_1+j_2}\\&\quad \times D^-(m+1,\ldots , \widehat{m+s_1}, \ldots , m+j_1)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j_2)\\&\quad \times [y_{r_1,m+s_1} \otimes y_{r_2,m+s_2}-y_{r_2,m+s_1}\otimes y_{r_1,m+s_2}] \end{aligned} \end{aligned}$$
(*)

and

$$\begin{aligned} \begin{aligned}&\sum _{r_2=i+1}^m\sum _{s_1=1}^{j_1}\sum _{s_2=1}^{j_2} D^+(1,\ldots ,i-1,i)D^+(1,\ldots ,i,r_2)(-1)^{s_1+s_2+j_1+j_2}\\&\quad \times D^-(m+1,\ldots , \widehat{m+s_1}, \ldots , m+j_1)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j_2)\\&\quad \times [y_{i,m+s_1} \otimes y_{r_2,m+s_2}-y_{r_2,m+s_1}\otimes y_{i,m+s_2}]. \end{aligned} \end{aligned}$$

If \(r_1=r_2\), then the corresponding contribution in the first sum equals zero. Consider now \(i+1\le r_1\ne r_2\le m\).

The determinantal identity

$$\begin{aligned} \begin{aligned}&D^+(1, \ldots , i-1,r_1)D^+(1, \ldots , i,r_2)-D^+(1, \ldots , i-1, r_2)D^+(1, \ldots , i,r_1)\\&\quad =D^+(1, \ldots , i-1,r_1,r_2)D^+(1, \ldots , i-1,i) \end{aligned} \end{aligned}$$

follows from the identity

$$\begin{aligned} c_{i,r_1}\,\begin{array}{|cc|}c_{i,i}&{}\quad c_{i,r_2}\\ c_{i+1,i}&{}\quad c_{i+1,r_2}\end{array}- c_{i,r_2}\,\begin{array}{|cc|}c_{i,i}&{}\quad c_{i,r_1}\\ c_{i+1,i}&{}\quad c_{i+1,r_1}\end{array}= \begin{array}{|cc|}c_{i,r_1}&{}\quad c_{i,r_2}\\ c_{i+1,r_1}&{}\quad c_{i+1,r_2}\end{array}\,c_{i,i} \end{aligned}$$

using the law of extensible minors (see [2] and [14]), by adding new rows \(1,\ldots , i-1\) and new columns \(1, \ldots , i-1\).

Therefore

$$\begin{aligned} \begin{aligned}&D^+(1, \ldots , i-1,r_1)D^+(1, \ldots , i,r_2)[y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}-y_{r_2,m+s_1}\otimes y_{r_1,m+s_2}]\\&\qquad +D^+(1, \ldots , i-1,r_2)D^+(1, \ldots , i,r_1)[y_{r_2,m+s_1} \otimes y_{r_1,m+s_2} - y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}]\\&\quad =[D^+(1, \ldots , i-1,r_1)D^+(1, \ldots , i,r_2)-D^+(1, \ldots , i-1,r_2)D^+(1, \ldots , i,r_1)] \\&\qquad \times [y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}-y_{r_2,m+s_1}\otimes y_{r_1,m+s_2}]\\&\quad =D^+(1, \ldots , i-1,r_1,r_2)D^+(1, \ldots , i-1,i)[y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}\\&\qquad -y_{r_2,m+s_1}\otimes y_{r_1,m+s_2}]. \end{aligned} \end{aligned}$$

Using this, we can rewrite the expression \((*)\) as

$$\begin{aligned} \begin{aligned}&\sum _{i<r_1<r_2\le m}\sum _{s_1=1}^{j_1}\sum _{s_2=1}^{j_2} D^+(1,\ldots ,i-1,i)D^+(1,\ldots ,i-1,r_1,r_2)(-1)^{s_1+s_2+j_1+j_2}\\&\quad \times D^-(m+1,\ldots , \widehat{m+s_1}, \ldots , m+j_1)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j_2)\\&\quad \times [y_{r_1,m+s_1} \otimes y_{r_2,m+s_2}-y_{r_2,m+s_1}\otimes y_{r_1,m+s_2}] \end{aligned} \end{aligned}$$

and \(\rho _{i,j_1}\otimes \rho _{i+1,j_2}-\rho _{i+1,j_1}\otimes \rho _{i,j_2}\) as

$$\begin{aligned} \begin{aligned}&\sum _{i\le r_1<r_2\le m}\sum _{s_1=1}^{j_1}\sum _{s_2=1}^{j_2} D^+(1,\ldots ,i-1,i)D^+(1,\ldots ,i-1,r_1,r_2)(-1)^{s_1+s_2+j_1+j_2}\\&\quad \times D^-(m+1,\ldots , \widehat{m+s_1}, \ldots , m+j_1)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j_2)\\&\quad \times [y_{r_1,m+s_1} \otimes y_{r_2,m+s_2}-y_{r_2,m+s_1}\otimes y_{r_1,m+s_2}]. \end{aligned} \end{aligned}$$

\(\square \)
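The extended determinantal identity used in the proof above can also be confirmed directly with a computer algebra system for small parameters. The following sympy sketch (an illustration only, not part of the argument) checks it for \(m=4\), \(i=2\), \(r_1=3\), \(r_2=4\).

```python
from sympy import symbols, Matrix, expand

m = 4
c = [[symbols(f'c{k}{l}') for l in range(1, m + 1)] for k in range(1, m + 1)]

def D_plus(cols):
    """D^+(i_1,...,i_s): determinant on rows 1,...,s and columns i_1,...,i_s."""
    return Matrix([[c[r][j - 1] for j in cols] for r in range(len(cols))]).det()

i, r1, r2 = 2, 3, 4
lhs = D_plus([1, r1]) * D_plus([1, 2, r2]) - D_plus([1, r2]) * D_plus([1, 2, r1])
rhs = D_plus([1, r1, r2]) * D_plus([1, 2])
print(expand(lhs - rhs) == 0)   # True
```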

Lemma 2.2

Let \(1\le i_1,i_2\le m\) and \(1\le j < n\). Then \(\rho _{i_1,j}\otimes \rho _{i_2,j+1}-\rho _{i_1,j+1}\otimes \rho _{i_2,j}\) equals

$$\begin{aligned} \begin{aligned}&\sum _{r_1=i_1}^m\sum _{r_2=i_2}^m\sum _{1\le s_1<s_2\le j+1} D^+(1, \ldots , i_1-1,r_1)D^+(1, \ldots , i_2-1,r_2)(-1)^{s_1+s_2+1} \\&\quad \times D^-(m+1, \ldots , m+j)D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , \widehat{m+s_2}, \ldots , m+j+1) \\&\quad \times [y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}-y_{r_1,m+s_2}\otimes y_{r_2,m+s_1}]. \end{aligned} \end{aligned}$$

In particular, \(\rho _{i_1,j}\otimes \rho _{i_2,j+1}-\rho _{i_1,j+1}\otimes \rho _{i_2,j} \equiv 0 \pmod {D^-(m+1, \ldots , m+j)}\).

Proof

The proof of this lemma is similar to the proof of Lemma 2.1, and we only provide a brief outline. Write \(\rho _{i_1,j}\otimes \rho _{i_2,j+1}-\rho _{i_1,j+1}\otimes \rho _{i_2,j}\) as a sum of

$$\begin{aligned} \begin{aligned}&\sum _{r_1=i_1}^m\sum _{r_2=i_2}^m\sum _{s_1=1}^j\sum _{s_2=1}^j D^+(1, \ldots , i_1-1,r_1)D^+(1, \ldots , i_2-1,r_2)(-1)^{s_1+s_2+1}\\&\quad \times D^-(m+1, \ldots , \widehat{m+s_1},\ldots , m+j)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j+1)\\&\quad \times [y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}-y_{r_1,m+s_2}\otimes y_{r_2,m+s_1}] \end{aligned} \end{aligned}$$
(**)

and

$$\begin{aligned} \begin{aligned}&\sum _{r_1=i_1}^m\sum _{r_2=i_2}^m\sum _{s_1=1}^j D^+(1, \ldots , i_1-1,r_1)D^+(1, \ldots , i_2-1,r_2)(-1)^{s_1+j}\\&\quad \times D^-(m+1, \ldots , \widehat{m+s_1},\ldots , m+j)D^-(m+1, \ldots , m+j)\\&\quad \times [y_{r_1,m+s_1}\otimes y_{r_2,m+j+1}-y_{r_1,m+j+1}\otimes y_{r_2,m+s_1}]. \end{aligned} \end{aligned}$$

If \(s_1=s_2\), then the corresponding contribution in the first sum equals zero. Consider now \(1\le s_1\ne s_2\le j\).

The determinantal identity

$$\begin{aligned} \begin{aligned}&D^-(m+1, \ldots , \widehat{m+s_1},\ldots , m+j)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j+1)\\&\qquad -D^-(m+1, \ldots , \widehat{m+s_2},\ldots , m+j)D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , m+j+1)\\&\quad =D^-(m+1, \ldots , m+j)D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , \widehat{m+s_2}, \ldots , m+j+1) \end{aligned} \end{aligned}$$

follows from the identity

$$\begin{aligned} \begin{aligned}&c_{m+s_1,m+s_2}\,\begin{array}{|cc|}c_{m+s_1,m+s_1}&{}c_{m+s_1,m+j+1}\\ c_{m+s_2,m+s_1}&{}c_{m+s_2,m+j+1}\end{array}- c_{m+s_1,m+s_1}\,\begin{array}{|cc|}c_{m+s_1,m+s_2}&{}c_{m+s_1,m+j+1}\\ c_{m+s_2,m+s_2}&{}c_{m+s_2,m+j+1}\end{array}\\&\quad =\,\begin{array}{|cc|}c_{m+s_1,m+s_1}&{}c_{m+s_1,m+s_2}\\ c_{m+s_2,m+s_1}&{}c_{m+s_2,m+s_2}\end{array}\,c_{m+s_1,m+j+1} \end{aligned} \end{aligned}$$

using the law of extensible minors (see [2] and [14]), by adding rows

$$\begin{aligned} m+1,\ldots , \widehat{m+s_1}, \ldots , \widehat{m+s_2}, \ldots , m+j \end{aligned}$$

and columns

$$\begin{aligned} m+1,\ldots , \widehat{m+s_1}, \ldots , \widehat{m+s_2}, \ldots , m+j. \end{aligned}$$

Therefore

$$\begin{aligned} \begin{aligned}&D^-(m+1, \ldots , \widehat{m+s_1},\ldots , m+j)D^-(m+1, \ldots , \widehat{m+s_2}, \ldots , m+j+1)\\&\qquad \times [y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}-y_{r_1,m+s_2}\otimes y_{r_2,m+s_1}]\\&\qquad +D^-(m+1, \ldots , \widehat{m+s_2},\ldots , m+j)D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , m+j+1)\\&\qquad \times [y_{r_1,m+s_2}\otimes y_{r_2,m+s_1}-y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}]\\&\quad =D^-(m+1, \ldots , m+j)D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , \widehat{m+s_2}, \ldots , m+j+1) \\&\qquad \times [y_{r_1,m+s_1}\otimes y_{r_2,m+s_2}-y_{r_1,m+s_2}\otimes y_{r_2,m+s_1}]. \end{aligned} \end{aligned}$$

Using this, we rewrite \(\rho _{i_1,j}\otimes \rho _{i_2,j+1}-\rho _{i_1,j+1}\otimes \rho _{i_2,j}\) in the stated form. \(\square \)

4 Determinantal identities

In the previous section, we have used determinantal identities to derive congruences modulo \(D^+(1, \ldots , i)\) and \(D^-(m+1, \ldots , m+j)\). Before we can get a description of \(G_{ev}\)-primitive vectors in \(H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\), we need to derive new determinantal identities.

Lemma 3.1

Let \(2\le s \le m\), \(x_1, \ldots , x_{s-1}\) and \(a_1, \ldots , a_s\) be integers from the set \(\{1, \ldots , m\}\). Then there is the following determinantal identity

$$\begin{aligned} \begin{aligned}&\sum _{t=1}^s (-1)^{s-t} D^+(x_1, \ldots , x_{s-1}, a_t)D^+(a_1, \ldots , \widehat{a_t}, \ldots , a_s)\\&\quad =D^+(x_1, \ldots , x_{s-1})D^+(a_1, \ldots , a_s). \end{aligned} \end{aligned}$$

Proof

For each t, use the Laplace expansion by the last column to express

$$\begin{aligned} D^+(x_1, \ldots , x_{s-1}, a_t)=\sum _{u=1}^{s} (-1)^{s-u} c_{ua_t}M_u, \end{aligned}$$

where \(M_u\) is the minor corresponding to the position (u, s). Since each \(M_u\) does not depend on t, we can rewrite the left-hand side of the above identity as

$$\begin{aligned} \sum _{u=1}^{s} (-1)^{s-u} M_u \left[ \sum _{t=1}^s (-1)^{s-t} c_{ua_t} D^+(a_1, \ldots , \widehat{a_t}, \ldots , a_s)\right] . \end{aligned}$$

The expression \(\sum _{t=1}^s (-1)^{s-t} c_{ua_t} D^+(a_1, \ldots , \widehat{a_t}, \ldots , a_s)\) is the Laplace expansion by the last row of the determinant \(\begin{array}{|ccc|} c_{1,a_1} &{} \ldots &{} c_{1,a_s} \\ c_{2,a_1} &{} \ldots &{} c_{2,a_s} \\ \ldots &{} \ldots &{} \ldots \\ c_{s-1,a_1} &{} \ldots &{} c_{s-1,a_s}\\ c_{u,a_1} &{} \ldots &{} c_{u,a_s} \end{array}\). This determinant vanishes if \(u\ne s\), since its last row then repeats one of the rows \(1, \ldots , s-1\), and it equals \(D^+(a_1, \ldots , a_s)\) if \(u=s\). Since \(M_s=D^+(x_1, \ldots , x_{s-1})\), the claim follows. \(\square \)

Note that the last identity applied to \(s=2\) and \(x_1=1\)

$$\begin{aligned} D^+(1, a_2) D^+(a_1) -D^+(1, a_1)D^+(a_2) =D^+(1)D^+(a_1,a_2) \end{aligned}$$

is essentially the relationship we have used earlier (after relabeling and applying the law of extensible minors).
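For small parameters, Lemma 3.1 can also be spot-checked symbolically. The sketch below (an illustration, not a substitute for the proof) verifies the identity for \(m=5\), \(s=3\), \((x_1,x_2)=(1,2)\) and \((a_1,a_2,a_3)=(3,4,5)\).

```python
from sympy import symbols, Matrix, expand

m = 5
c = [[symbols(f'c{k}{l}') for l in range(1, m + 1)] for k in range(1, m + 1)]

def D_plus(cols):
    """D^+(i_1,...,i_s): determinant on rows 1,...,s and columns i_1,...,i_s."""
    return Matrix([[c[r][i - 1] for i in cols] for r in range(len(cols))]).det()

x = [1, 2]
a = [3, 4, 5]
s = len(a)

lhs = sum((-1) ** (s - t) * D_plus(x + [a[t - 1]]) * D_plus(a[:t - 1] + a[t:])
          for t in range(1, s + 1))
rhs = D_plus(x) * D_plus(a)
print(expand(lhs - rhs) == 0)   # True
```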

Combining multiple identities from Lemma 3.1, we obtain the following statement.

Lemma 3.2

Let \(2\le s \le m\), \(x_1, \ldots , x_{s-1}\) and \(a_1, \ldots , a_s\) be integers from the set \(\{1, \ldots , m\}\). Let \(\sigma \) run through all permutations of \((a_1, \ldots , a_s)\). Then

$$\begin{aligned} \sum _{\sigma } (-1)^{\sigma } \prod _{t=1}^s D^+(x_1, \ldots , x_{t-1}, \sigma (a_t))=D^+(a_1, \ldots , a_s)\prod _{t=1}^{s-1} D^+(x_1, \ldots , x_t). \end{aligned}$$

Proof

We proceed by induction on s. The base case \(s=2\) is settled in Lemma 3.1. Assuming the formula is valid for \(s=u\), consider \(s=u+1\). Fix \((b_1, \ldots , b_{u+1})\) with entries in the set \(\{1, \ldots , m\}\) and consider a permutation \(\tau \) of \((b_1, \ldots , b_{u+1})\). There is a unique cyclic permutation \(\gamma \) such that \(\gamma (b_{u+1})=\tau (b_{u+1})\), and \(\tau \circ \gamma ^{-1}\) can be identified with its restriction \(\sigma \) to the set \(\{b_1, \ldots , b_{u+1}\}\setminus \{\tau (b_{u+1})\}\), which is viewed as \((a_1, \ldots , a_u)=(\gamma (b_1), \ldots , \gamma (b_u))\). Thus \(\tau =\sigma \gamma \).

Then

$$\begin{aligned} \begin{aligned}&\sum _{\tau } (-1)^{\tau } \prod _{t=1}^{u+1} D^+(x_1, \ldots , x_{t-1}, \tau (b_t)) \\&\quad =\sum _{\gamma } \left[ \sum _{\sigma } (-1)^{\sigma } \prod _{t=1}^u D^+(x_1, \ldots , x_{t-1}, \sigma (a_t))\right] (-1)^{\gamma } D^+(x_1, \ldots , x_u, \gamma (b_{u+1})) \\&\quad =\prod _{t=1}^{u-1} D^+(x_1, \ldots , x_t)\sum _{\gamma } (-1)^{\gamma } D^+(x_1, \ldots , x_u, \gamma (b_{u+1}))D^+(a_1, \ldots , a_u) \end{aligned} \end{aligned}$$

by the inductive assumption, which equals

$$\begin{aligned} \prod _{t=1}^{u-1} D^+(x_1, \ldots , x_t)D^+(x_1, \ldots , x_u)D^+(b_1, \ldots , b_u, b_{u+1}) \end{aligned}$$

by Lemma 3.1. \(\square \)
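The same kind of symbolic spot-check applies to Lemma 3.2. The sketch below (an illustration only) verifies the identity for \(m=5\), \(s=3\), \((x_1,x_2)=(1,2)\) and \((a_1,a_2,a_3)=(3,4,5)\).

```python
from itertools import permutations
from math import prod
from sympy import symbols, Matrix, expand

m = 5
c = [[symbols(f'c{k}{l}') for l in range(1, m + 1)] for k in range(1, m + 1)]

def D_plus(cols):
    """D^+(i_1,...,i_s): determinant on rows 1,...,s and columns i_1,...,i_s."""
    return Matrix([[c[r][i - 1] for i in cols] for r in range(len(cols))]).det()

def sign(p):
    """Sign of a rearrangement p of an increasing tuple, via inversion count."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return (-1) ** inv

x = [1, 2]
a = [3, 4, 5]
s = len(a)

lhs = sum(sign(p) * prod(D_plus(x[:t - 1] + [p[t - 1]]) for t in range(1, s + 1))
          for p in permutations(a))
rhs = D_plus(a) * prod(D_plus(x[:t]) for t in range(1, s))
print(expand(lhs - rhs) == 0)   # True
```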

The following proposition is applied in the proof of Theorem 4.3 to the case when \((a_1, \ldots , a_s)\) are entries in a row of a skew tableau \(T^+\), where the first entry is positioned in its uth column (and the last entry in its \((u+s-1)\)st column).

Proposition 3.3

Let \(1\le u \) and \(2\le s \) be such that \(u+s-1\le m\) and \(a_1, \ldots , a_s\) be integers from the set \(\{1, \ldots , m\}\). Then

$$\begin{aligned} \begin{aligned}&\sum _{\sigma } (-1)^{\sigma } \prod _{t=1}^s D^+(1, \ldots , u+t-2, \sigma (a_t))\\&\quad =D^+(1, \ldots , u-1,a_1, \ldots , a_s)\prod _{t=1}^{s-1} D^+(1, \ldots , u+t-1), \end{aligned} \end{aligned}$$

where \(\sigma \) runs through all permutations of \((a_1, \ldots , a_s)\).

Proof

Setting \(x_{1}=u, \ldots , x_{s-1}=u+s-2\) in Lemma 3.2 yields

$$\begin{aligned} \sum _{\sigma } (-1)^{\sigma } \prod _{t=1}^s D^+(u, \ldots , u+t-2, \sigma (a_t)) =D^+(a_1, \ldots , a_s)\prod _{t=1}^{s-1} D^+(u, \ldots , u+t-1) \end{aligned}$$

and the statement follows using the law of extensible minors by inserting indices \(1, \ldots , u-1\). \(\square \)

Remark 3.4

If there is an index t such that \(\sigma (a_t)\le u+t-2\), then the product \(\prod _{t=1}^s D^+(1, \ldots , u+t-2, \sigma (a_t))\) vanishes. Therefore, in the sum

$$\begin{aligned} \sum _{\sigma } (-1)^{\sigma } \prod _{t=1}^s D^+(1, \ldots , u+t-2, \sigma (a_t)) \end{aligned}$$

of the above proposition, we could consider only permutations \(\sigma \) that satisfy \(u+t-1\le \sigma (a_t)\) for each index t.

The next lemma is an analog of Lemma 3.1.

Lemma 3.5

Let \(2\le s \le n\), \(x_1, \ldots , x_{s-1}\) and \(a_1, \ldots , a_s\) be integers from the set \(\{m+1, \ldots , m+n\}\). Then there is the following determinantal identity

$$\begin{aligned} \begin{aligned}&\sum _{t=1}^s (-1)^{s-t} D^-(x_1, \ldots , x_{s-1}, a_t)D^-(a_1, \ldots , \widehat{a_t}, \ldots , a_s)\\&\quad =D^-(x_1, \ldots , x_{s-1})D^-(a_1, \ldots , a_s). \end{aligned} \end{aligned}$$

Assume \(1\le u\) and \(2\le s\) are such that \(u+s-1\le n\). Fix \((a_1, \ldots , a_s)\) such that \(m+1\le a_1< \cdots <a_s\le m+n\) and denote by \({\mathcal {O}}={\mathcal {O}}_u(a_1, \ldots , a_s)\) the set of those permutations \(\sigma \) of \((a_1, \ldots , a_s)\) that satisfy \(\sigma (a_t)\le m+u+t-1\) for each \(t=1, \ldots , s\).

Our next goal is to prove the following proposition.

Proposition 3.6

Let \(1\le u \) and \(2\le s\) be such that \(u+s-1\le n\) and \(m+1\le a_1< \cdots <a_s\le m+u+s-1\). Then

$$\begin{aligned} \begin{aligned}&\sum _{\sigma \in {\mathcal {O}}} (-1)^{\sigma } \prod _{t=1}^s D^-(m+1, \ldots , \widehat{\sigma (a_t)}, \dots , m+u+t-1)\\&\quad =D^-(m+1, \ldots , \widehat{a_1}, \ldots , \widehat{a_t}, \ldots , \widehat{a_s}, \ldots , m+u+s-1)\\&\qquad \times \prod _{t=1}^{s-1} D^-(m+1, \ldots , m+u+t-1). \end{aligned} \end{aligned}$$

Proof

For simplicity, rewrite \(D^-(m+1, \ldots , \widehat{a_1}, \ldots , \widehat{a_t}, \ldots , \widehat{a_s}, \ldots , m+u+s-1)\) as \(D^-(b_1, \ldots , b_{u-1})\), where \(m+1\le b_1<\ldots <b_{u-1}\le m+u+s-1\).

In the first step, using Lemma 3.5 we rewrite

$$\begin{aligned} \begin{aligned}&D^-(b_1, \ldots , b_{u-1})D^-(m+1, \ldots , m+u)\\&\quad =\sum _{d_1=m+1}^{m+u} (-1)^{m+u-d_1} D^-(b_1, \ldots , b_{u-1}, d_1) D^-(m+1, \ldots , \widehat{d_1}, \ldots , m+u). \end{aligned} \end{aligned}$$

Proceeding by induction, in the tth step we rewrite

$$\begin{aligned} \begin{aligned}&D^-(b_1, \ldots , b_{u-1}, d_1, \ldots , d_{t-1})D^-(m+1, \ldots , m+u+t-1)\\&\quad =\sum _{d_t=m+1}^{m+u+t-1} (-1)^{m+u+t-1-d_t} D^-(b_1, \ldots , b_{u-1}, d_1, \ldots , d_t)\\&\qquad \times D^-(m+1, \ldots , \widehat{d_t}, \ldots , m+u+t-1). \end{aligned} \end{aligned}$$

Combining the above expressions, after \(s-1\) steps we obtain

$$\begin{aligned} \begin{aligned}&D^-(b_1, \ldots , b_{u-1})\prod _{t=1}^{s-1} D^-(m+1, \ldots , m+u+t-1)\\&\quad =\sum _{d_1=m+1}^{m+u}\ldots \sum _{d_{s-1}=m+1}^{m+u+s-2} (-1)^{(m+u)+\ldots + (m+u+s-2) -d_1-\ldots -d_{s-1}}\\&\qquad \times D^-(b_1, \ldots , b_{u-1}, d_1, \ldots , d_{s-1})\prod _{t=1}^{s-1} D^-(m+1, \ldots , \widehat{d_t}, \ldots , m+u+t-1). \end{aligned} \end{aligned}$$

Applying Lemma 3.5 one more time we obtain

$$\begin{aligned} \begin{aligned}&D^-(b_1, \ldots , b_{u-1}, d_1, \ldots , d_{s-1})D^-(m+1, \ldots , m+u+s-1)\\&\quad =(-1)^{m+u+s-1-d_s} D^-(b_1, \ldots , b_{u-1}, d_1, \ldots , d_{s-1}, d_s)D^-(m+1, \ldots , \widehat{d_s}, \ldots , m+u+s-1)\\&\quad =(-1)^{m+u+s-1-d_s}\epsilon D^-(m+1, \ldots , m+u+s-1) D^-(m+1, \ldots , \widehat{d_s}, \ldots , m+u+s-1) \end{aligned} \end{aligned}$$

for a unique \(d_s\). Here

$$\begin{aligned} D^-(b_1, \ldots , b_{u-1}, d_1, \ldots , d_{s-1})=(-1)^{m+u+s-1-d_s}\epsilon D^-(m+1, \ldots , \widehat{d_s}, \ldots , m+u+s-1), \end{aligned}$$

where \(\epsilon =\pm 1\) is determined by

$$\begin{aligned} D^-(b_1, \ldots , b_{u-1}, d_1, \ldots , d_{s-1}, d_s)=\epsilon D^-(m+1, \ldots , m+u+s-1). \end{aligned}$$

Additionally, the only nonzero summands in the above sum for

$$\begin{aligned} D^-(b_1, \ldots , b_{u-1})\prod _{t=1}^{s-1} D^-(m+1, \ldots , m+u+t-1) \end{aligned}$$

are given by those \((d_1, \ldots , d_s)\) that are permutations of \((a_1, \ldots , a_s)\) satisfying \(m+1 \le d_t\le m+u+t-1\) for each \(t=1, \ldots , s\). Hence \((d_1, \ldots , d_s)=\sigma (a_1, \ldots , a_s)\) for \(\sigma \in {\mathcal {O}}\).

Thus

$$\begin{aligned} \begin{aligned}&D^-(b_1, \ldots , b_{u-1})\prod _{t=1}^{s-1} D^-(m+1, \ldots , m+u+t-1)\\&\quad =\sum _{d_1=m+1}^{m+u}\ldots \sum _{d_{s}=m+1}^{m+u+s-1} (-1)^{(m+u)+\ldots + (m+u+s-1) -d_1-\ldots -d_s}\epsilon \\&\qquad \times \prod _{t=1}^{s} D^-(m+1, \ldots , \widehat{d_t}, \ldots , m+u+t-1). \end{aligned} \end{aligned}$$

To determine the sign \(\epsilon \), consider \(D^-(m+1, \ldots , m+u+s-1)\), shift the symbols \(a_s+1\) through \(m+u+s-1\) one place to the left and move the symbol \(a_s\) to the position \(m+u+s-1\). This is accomplished by applying \(m+u+s-1-a_s\) transpositions. Then proceed by induction from \(t=s\) to \(t=1\): in the tth step, shift the symbols \(a_t+1\) through \(m+u+t-1\) one place to the left and move the symbol \(a_t\) to the position \(m+u+t-1\), by applying \(m+u+t-1-a_t\) transpositions. After s steps we arrive at \(D^-(b_1, \ldots , b_{u-1},a_1, \ldots , a_s)\). This shows that

$$\begin{aligned} \begin{aligned}&D^-(b_1, \ldots , b_{u-1},d_1, \ldots , d_s)=(-1)^{\sigma } D^-(b_1, \ldots , b_{u-1},a_1, \ldots , a_s)\\&\quad = (-1)^{(m+u)+\ldots + (m+u+s-1) -d_1-\ldots -d_s} (-1)^{\sigma } D^-(m+1, \ldots , m+u+s-1). \end{aligned} \end{aligned}$$

Consequently,

$$\begin{aligned} \begin{aligned}&D^-(b_1, \ldots , b_{u-1})\prod _{t=1}^{s-1} D^-(m+1, \ldots , m+u+t-1)\\&\quad =\sum _{\sigma \in {\mathcal {O}}} (-1)^{\sigma } \prod _{t=1}^{s} D^-(m+1, \ldots , \widehat{\sigma (a_t)}, \ldots , m+u+t-1). \end{aligned} \end{aligned}$$

\(\square \)

Remark 3.7

We define the expression \(D^-(m+1, \ldots , {\widehat{a}}, \ldots , m+j)\) such that \(m+j<a\) to be equal to zero. Therefore, in Proposition 3.6, we can replace the summation over the set \({\mathcal {O}}\) by the summation over the set of all permutations \(\sigma \) of \((a_1, \ldots , a_s)\).

5 The operators \(\sigma ^+\), \(\sigma ^-,\) and \(\sigma \) on \(\rho _{I|J}\)

From now on, assume that \(\lambda =(\lambda ^+|\lambda ^-)\) is an (m|n)-hook partition unless stated otherwise.

To combine various previously defined operators \(\sigma ^+_i\) and \(\sigma ^-_j\), we need to introduce the terminology of skew diagrams and tableaux.

5.1 The general setups for diagrams and tableaux

Let \(\alpha \) and \(\beta \) be partitions. We define the partial order < on partitions by requiring that \(\beta <\alpha \) if and only if \(\beta _i\le \alpha _i\) for every i.

Let \(\beta <\alpha \) and \({\mathcal {D}}\) be the diagram corresponding to the skew partition \(\alpha / \beta \). The canonical column skew tableau \(D^+_\mathrm{can}\) of the shape \(\alpha / \beta \) is defined in such a way that its jth column is filled with entries j for each j. Analogously, the canonical row skew tableau \(D^-_\mathrm{can}\) of the shape \(\alpha / \beta \) is defined so that its ith row is filled with entries \(m+i\) for each i.
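For concreteness, here is a minimal sketch (an illustration; tableaux are represented as lists of rows containing only the cells of the skew diagram) of the two canonical fillings just defined.

```python
def canonical_column_tableau(alpha, beta):
    """D^+_can of shape alpha/beta: the cell in column j carries the entry j."""
    beta = list(beta) + [0] * (len(alpha) - len(beta))
    return [[j + 1 for j in range(beta[r], alpha[r])] for r in range(len(alpha))]

def canonical_row_tableau(alpha, beta, m):
    """D^-_can of shape alpha/beta: every cell in row i carries the entry m+i."""
    beta = list(beta) + [0] * (len(alpha) - len(beta))
    return [[m + r + 1] * (alpha[r] - beta[r]) for r in range(len(alpha))]

# Shape (4,3,1)/(2,1) with m = 3:
print(canonical_column_tableau([4, 3, 1], [2, 1]))  # [[3, 4], [2, 3], [1]]
print(canonical_row_tableau([4, 3, 1], [2, 1], 3))  # [[4, 4], [5, 5], [6]]
```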

Let \(\lambda =(\lambda ^+|\lambda ^-)\) be an (m|n)-hook partition, and let \(\mu \) be a partition such that \(\mu <\lambda ^+\). Let \([\lambda ]\) be the diagram corresponding to \(\lambda =(\lambda ^+|\lambda ^-)\) and \([\lambda '/ \mu ']\) be the skew diagram corresponding to the skew partition \(\lambda '/ \mu '\). We denote by T a skew tableau of shape \(\lambda '/ \mu '\) that is filled with (possibly repeated) entries from the set \(\{m+1, \ldots , m+n\}\) such that its content equals \((0|\nu )\), where \(\nu \) is a partition. Additionally, we assume that for each \(j>m\) and \(1\le i\le n\), the entry at the position [i, j] of the tableau T is equal to \(m+i\). These assumptions imply that \(\lambda ^-<\nu \). Such a tableau T consists of two parts. The first part \(T^+\) is a tableau of the shape \((\lambda ^+/ \mu )'\) and the content \((0|\nu / \omega )\). The second part is the canonical row tableau \(L^-_\mathrm{can}\) corresponding to the diagram \([\lambda ^-]\). Therefore, \(\omega =\lambda ^-\) as partitions.

We denote by \(T^{\mathrm {opp}}\) a tableau of the shape \(\nu \) and the content \((\lambda ^+ /\mu |\lambda ^-)\). Additionally, we assume that for each \(1\le i\le n\) and each \(1\le j\le \lambda ^-_i\), the entry at the position [i, j] of the tableau \(T^{\mathrm {opp}}\) is equal to \(m+i\). Such a tableau \(T^{\mathrm {opp}}\) consists of two parts. The first part is the canonical row tableau \(L^-_\mathrm{can}\) corresponding to the diagram \([\omega ]\). The second part is a skew tableau \(T^-\) of the shape \(\nu / \omega \) and the content \((\lambda ^+ /\mu |0)\). Although \(\omega =\lambda ^-\) as partitions, we distinguish them to differentiate between parts of the tableaux T and \(T^{\mathrm {opp}}\).

Denote by \({\mathcal {D}}^+\) the diagram \([\lambda ^{+'}/ \mu ']\) corresponding to \(T^+\) and by \({\mathcal {D}}^-\) the diagram \([\nu / \omega ]\) corresponding to \(T^-\).

Example 4.1

To illustrate the above notation, consider \(m=n=3\), \(\lambda =(6,4,4,3,1)\), \(\lambda ^+=(6,4,4)\), \(\lambda ^-=(3,1)\), \((\lambda ^-)'=(2,1,1)\), \(\mu =(4,2,1)\) and \(\nu =(5,4,2)\). Then the tableau

is such that the entries a represent the shape \(\mu \); entries b represent the shape \(\lambda ^+/\mu \), and entries c represent the shape \((\lambda ^-)'\).

Transpose of this tableau is

and its entries a represent the shape \(\mu '\); entries b represent the shape \((\lambda ^+)'/\mu '\), and the entries c represent the shape \(\lambda ^-\).

The tableau

is such that the entries c represent the shape \(\lambda ^-=\omega \), and the entries b represent the shape \(\nu /\omega \).

One example of the tableau T is

and the corresponding tableau \(T^+\) is

An example of the tableau \(T^{\mathrm {opp}}\) is

and the corresponding tableau \(T^-\) is

5.2 The definition of operators and positioning maps

To define the operator \(\sigma ^+\) on \(\rho _{I|J}\), we need to consider a positioning map \(P^{+}\) and tableaux \(T^+_\mathrm{can}\) and \(T^{+}\) of shape \((\lambda ^+/ \mu )'\). Fix a multi-index (I|J) of length k. Choose a positioning map \(P^{+}:\{1, \ldots , k\} \rightarrow {\mathcal {D}}^+\) that is a bijection and satisfies the property that \(P^+(l)=[s,r]\) implies \(i_l=r\). The tableau \(T^+_\mathrm{can}\) is the canonical tableau defined by the property that for each \(1\le i\le m\) its ith column consists of entries equal to i. The tableau \(T^{+}\) is given by J and \(P^+\) in such a way that its entry at the position \(P^+(l)=[s,r]\) equals \(j_l\) (and \(r=i_l\)). Let \(X^+\) be the subgroup of the symmetric group \(\Sigma _k\) consisting of row permutations of \({\mathcal {D}}^+\). For \(\sigma \in X^+\) denote by \(\sigma (T^+_\mathrm{can})\) the tableau obtained by applying permutation \(\sigma \) to the entries of \(T^+_\mathrm{can}\).

The action of \(\sigma \in X^+\) on \(\rho _{I|J}\) is given as \(\sigma .\rho _{I|J}=\rho _{K|J}\), where for each \(1\le l\le k\) the index \(k_l\) is the entry at the position \(P^{+}(l)\) in \(\sigma (T^+_\mathrm{can})\). Finally, the operator \(\sigma ^+\) is given as

$$\begin{aligned} \sigma ^+=\sum _{\sigma \in X^+} (-1)^\sigma \sigma . \end{aligned}$$

Analogously, to define the operator \(\sigma ^-\) on \(\rho _{I|J}\), we need to consider a positioning map \(P^{-}\) and tableaux \(T^-_\mathrm{can}\) and \(T^{-}\) of shape \(\nu / \omega \). Choose a positioning map \(P^{-}:\{1, \ldots , k\} \rightarrow {\mathcal {D}}^-\) that is a bijection and satisfies the property that \(P^-(l)=[r,s]\) implies \(j_l=m+r\). The tableau \(T^-_\mathrm{can}\) is the canonical tableau that is defined by the property that for each \(1\le j\le n\) its jth row consists of entries equal to \(m+j\). The tableau \(T^{-}\) is given by I and \(P^-\) in such a way that its entry at the position \(P^-(l)=[r,s]\) equals \(i_l\) (and \(m+r=j_l\)). Let \(X^-\) be the subgroup of the symmetric group \(\Sigma _k\) consisting of column permutations of \({\mathcal {D}}^-\). For \(\sigma \in X^-\) denote by \(\sigma (T^-_\mathrm{can})\) the tableau obtained by applying permutation \(\sigma \) to the entries of \(T^-_\mathrm{can}\).

The action of \(\sigma \in X^-\) on \(\rho _{I|J}\) is given as \(\sigma .\rho _{I|J}=\rho _{I|L}\), where for each \(1\le a\le k\) the index \(l_a\) is the entry at the position \(P^{-}(a)\) in \(\sigma (T^-_\mathrm{can})\). Finally, the operator \(\sigma ^-\) is given as

$$\begin{aligned} \sigma ^-=\sum _{\sigma \in X^-} (-1)^\sigma \sigma . \end{aligned}$$

Note the difference between definitions of \(\sigma ^+\) and \(\sigma ^-\). It is due to the presence of the transpose in the shape of \(T^+\). Therefore, the canonical tableau \(T^+_\mathrm{can}\) is filled by columns and \(\sigma ^+\) involves row permutations, while the canonical tableau \(T^-_\mathrm{can}\) is filled by rows and \(\sigma ^-\) involves column permutations.
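The operator \(\sigma ^+\) is thus a signed sum over products of row permutations (and \(\sigma ^-\) an analogous signed sum over column permutations). The sketch below (a generic illustration, detached from the positioning maps) enumerates the signed rearrangements of the rows of a small tableau, which is the combinatorial shell of \(\sigma ^+(T^+_\mathrm{can})\).

```python
from itertools import permutations, product

def perm_sign(p):
    """Sign of a permutation of positions, via inversion count."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return (-1) ** inv

def signed_row_permutations(tableau):
    """Yield (sign, rearranged tableau) for each product of row permutations."""
    row_perms = [list(permutations(range(len(row)))) for row in tableau]
    for choice in product(*row_perms):
        sign, new_rows = 1, []
        for row, p in zip(tableau, choice):
            sign *= perm_sign(p)
            new_rows.append([row[i] for i in p])
        yield sign, new_rows

# T^+_can for a two-row diagram with rows of lengths 2 and 1, filled by columns:
for sgn, tab in signed_row_permutations([[1, 2], [1]]):
    print(sgn, tab)
# 1 [[1, 2], [1]]
# -1 [[2, 1], [1]]
```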

Example 4.2

If , then . If , then

Denote by \(S^+_s=\{[s, p_s], \ldots , [s,p_s+\ell ^+_s-1]\}\) the set of entries in the sth row of \({\mathcal {D}}^+\), by \(SP^+_s\) the set \((P^+)^{-1}(S^+_s)\) and by \(\widehat{SP^+_s}\) the complement of \(SP^+_s\) in \(\{1, \ldots , k\}\). List elements of \(SP^+_s\) in the fixed order \(\{q_1, \ldots , q_{\ell ^+_s}\}\), where \(q_t=(P^+)^{-1}([s, p_s+t-1])\). Then \(i_{q_t}=p_s+t-1\) for each \(1\le t \le \ell ^+_s\).

Denote by \(E^+_s\) the embedding of \(\otimes _{q\in SP^+_s} Y\) to \(\otimes ^k Y\) such that the part Y corresponding to \(q_{t}\in SP^+_s\) is mapped identically to the tth part of \(\otimes ^k Y\) while other parts of \(\otimes ^k Y\) corresponding to \(q_t\in \widehat{SP^+_s}\) are all equal to 1. Then the map \(\otimes _s E_s\) is an isomorphism between \(\otimes _s (\otimes _{q\in SP^+_s} Y)\) and \(\otimes ^k Y\). Using this isomorphism, we identify \(\rho _{I|J}\) with \(\otimes _s \rho ^+_s\), where \(\rho ^+_s=\otimes _{q\in SP^+_s} \rho _{i_t,j_t}\) (each \(i_t=s\) here). Denote by \(X^+_s\) a subset of \(X^+\) consisting of such \(\sigma _s\) that only permute entries in the sth row of \({\mathcal {D}}^+\). Then the action of \(\sigma _s\) on \(\otimes ^k Y\) induces the corresponding action on \(\otimes _{q\in SP^+_s} Y\) and the action of \(\sigma _s\) on \(\rho _{I|J}\) restricts to the action on \(\rho ^+_s\). Moreover, every \(\sigma \in X^+\) can be written as a product \(\sigma =\prod _s \sigma _s\) and the action of \(\sigma \) on \(\otimes ^k Y\) and \(\rho _{I|J}\) breaks down to the products of the commuting actions of \(\sigma _s\). Therefore, it is enough to analyze the actions of \(\sigma _s\in X^+_s\) separately.

Let \({R}=(R_1, \ldots , R_{\ell ^+_s})\) be an arbitrary \(\ell ^+_s\)-tuple of integers such that \(i_{q_t}=p_s+t-1\le R_t\) for each \(1\le t \le \ell ^+_s\). If \(\sigma _s\in X^+_s\), then \(\sigma _s.(R_1, \ldots , R_{\ell ^+_s})\) is the \(\ell ^+_s\)-tuple obtained by permuting entries of R by \(\sigma _s\). We call R ordered provided \(R_1<R_2<\ldots <R_{\ell ^+_s}\). Denote by \({\mathcal {O}}(R_1, \ldots , R_{\ell ^+_s})\) the set of all \({\ell ^+_s}\)-tuples \((r_1, \ldots , r_{\ell ^+_s})\) that are permutations of \((R_1, \ldots , R_{\ell ^+_s})\).

Denote

$$\begin{aligned} DI^+(R_1, \ldots , R_{\ell ^+_s})=D^+(1, \ldots , i_{q_1}-1, R_1)\ldots D^+(1, \ldots , i_{q_{\ell ^+_s}}-1, R_{\ell ^+_s}) \end{aligned}$$

and define the action of \(\sigma _s\in X^+_s\) on \(DI^+(R_1, \ldots , R_{\ell ^+_s})\) by

$$\begin{aligned} \sigma _s.DI^+(R_1, \ldots , R_{\ell ^+_s})=DI^+(\sigma _s.(R_1, \ldots , R_{\ell ^+_s})). \end{aligned}$$

Finally, denote \(\sigma ^+_s=\sum _{\sigma _s\in X^+_s} (-1)^{\sigma _s} \sigma _s\).

Analogously, denote by \(S^-_r=\{[p_r, r], \ldots , [p_r+\ell ^-_r-1,r]\}\) the set of entries in the rth column of \({\mathcal {D}}^-\), by \(SP^-_r\) the set \((P^-)^{-1}(S^-_r)\) and by \(\widehat{SP^-_r}\) the complement of \(SP^-_r\) in \(\{1, \ldots , k\}\). List elements of \(SP^-_r\) in the fixed order \(\{q_1, \ldots , q_{\ell ^-_r}\}\), where \(q_t=(P^-)^{-1}([p_r+t-1,r])\). Then \(j_{q_t}=p_r+t-1\) for each \(1\le t \le \ell ^-_r\).

Denote by \(E^-_r\) the embedding of \(\otimes _{q\in SP^-_r} Y\) to \(\otimes ^k Y\) such that the component Y corresponding to \(q_t\in SP^-_r\) is mapped identically to the \(q_t\)th component of \(\otimes ^k Y\) while the other components of \(\otimes ^k Y\), corresponding to \(q\in \widehat{SP^-_r}\), are all equal to 1. Then the map \(\otimes _r E^-_r\) is an isomorphism between \(\otimes _r (\otimes _{q\in SP^-_r} Y)\) and \(\otimes ^k Y\). Using this isomorphism we identify \(\rho _{I|J}\) with \(\otimes _r \rho ^-_r\), where \(\rho ^-_r=\otimes _{q\in SP^-_r} \rho _{i_t,j_t}\) (each \(j_t=r\) here). Denote by \(X^-_r\) the subgroup of \(X^-\) consisting of those \(\sigma _r\) that permute only the entries in the rth column of \({\mathcal {D}}^-\). Then the action of \(\sigma _r\) on \(\otimes ^k Y\) induces the corresponding action on \(\otimes _{q\in SP^-_r} Y\), and the action of \(\sigma _r\) on \(\rho _{I|J}\) restricts to the action on \(\rho ^-_r\). Moreover, every \(\sigma \in X^-\) can be written as a product \(\sigma =\prod _r \sigma _r\), and the action of \(\sigma \) on \(\otimes ^k Y\) and \(\rho _{I|J}\) breaks down into the product of the commuting actions of \(\sigma _r\). Therefore, it is enough to analyze the actions of \(\sigma _r\in X^-_r\) separately.

Let \({S}=(S_1, \ldots , S_{\ell ^-_r})\) be an arbitrary \(\ell ^-_r\)-tuple of integers such that \(S_t\le j_{q_t}=p_r+t-1\) for each \(1\le t \le \ell ^-_r\). If \(\sigma _r\in X^-_r\), then \(\sigma _r.(S_1, \ldots , S_{\ell ^-_r})\) is the \(\ell ^-_r\)-tuple obtained by permuting entries of S by \(\sigma _r\). We call S ordered provided \(S_1<S_2<\ldots <S_{\ell ^-_r}\). Denote by \({\mathcal {O}}(S_1, \ldots , S_{\ell ^-_r})\) the set of all \({\ell ^-_r}\)-tuples \((s_1, \ldots , s_{\ell ^-_r})\) that are permutations of \((S_1, \ldots , S_{\ell ^-_r})\).

Denote

$$\begin{aligned} \begin{aligned}&DJ^-(S_1, \ldots , S_{\ell ^-_r})=(-1)^{S_1+\ldots + S_{\ell ^-_r}+j_{q_1}+\ldots + j_{q_{\ell ^-_r}}}D^-(m+1, \ldots , \widehat{m+S_1}, \ldots , m+j_{q_1}) \\&\quad \ldots D^-(m+1, \ldots , \widehat{m+S_{\ell ^-_r}}, \ldots , m+j_{q_{\ell ^-_r}}) \end{aligned} \end{aligned}$$

and define the action of \(\sigma _r\in X^-_r\) on \(DJ^-(S_1, \ldots , S_{\ell ^-_r})\) by

$$\begin{aligned} \sigma _r.DJ^-(S_1, \ldots , S_{\ell ^-_r})=DJ^-(\sigma _r.(S_1, \ldots , S_{\ell ^-_r})). \end{aligned}$$

Finally, denote \(\sigma ^-_r=\sum _{\sigma _r\in X^-_r} (-1)^{\sigma _r} \sigma _r\), and define the operator \(\sigma _{\pm }\) as the composition \(\sigma ^-\circ \sigma ^+\). In what follows, we write \(\sigma \) for \(\sigma _{\pm }\); it will be clear from the context whether \(\sigma \) denotes this operator or an element of \(X^+\) or \(X^-\).

4.3 Even-primitive vectors in \(H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\)

Theorem 4.3

Let \(\lambda \) be an (m|n)-hook partition and (I|J) be an admissible multi-index of length k. Then, for any choice of positioning maps \(P^{+}\) and \(P^{-}\), the vector \(v_{I|J}\sigma .\rho _{I|J}\) (which is an integral linear combination of \(\pi _{K|L}\), where \(cont(K|L)=cont(I|J)\)) is a \(G_{ev}\)-primitive vector in \(H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\).

Proof

Write

$$\begin{aligned} \begin{aligned} v_{I|J}\sigma .\rho _{I|J}=&\frac{\sigma .\rho _{I|J}}{\prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}\prod _{j=1}^{n-1} D^-(m+1,\ldots , m+j)^{\nu _{j+1}-\lambda ^-_{j}}}\\&\times D^+(1, \ldots , m)^{\mu _m}D^-(m+1,\ldots , m+n)^{\lambda ^-_{n}}. \end{aligned} \end{aligned}$$

We show that \(\sigma .\rho _{I|J}\) is a multiple of

$$\begin{aligned} \prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}\prod _{j=1}^{n-1} D^-(m+1,\ldots , m+j)^{\nu _{j+1}-\lambda ^-_{j}} \end{aligned}$$

and therefore \(v_{I|J}\sigma .\rho _{I|J}\) is a linear combination of products of (polynomial) determinants and elements from \(\bigotimes ^k Y\). Moreover, we see that the products of determinants, appearing in this description, are bideterminants of the type

$$\begin{aligned} \prod _{a=1}^m D^+(a)^{\lambda _a-\lambda _{a+1}}\prod _{b=1}^n D^-(b)^{\lambda _{m+b} - \lambda _{m+b+1}}, \end{aligned}$$

and are therefore elements of \(H^0_{G_{ev}}(\lambda )\). We infer from there that \(v_{I|J}\sigma .\rho _{I|J}\in H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\).

We use the determinantal identities of Propositions 3.3 and 3.6 to replace determinants. These identities preserve the sizes and multiplicities of determinants, and therefore preserve the shape of the bideterminant that is the product of such determinants. To illustrate, using an identity of the type \(D^+(i)^{\lambda _i-\mu _i}=D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}D^+(i)^{\lambda _i-\lambda _{i+1}}\), we can rewrite \(\frac{D^+(i)^{\lambda _i-\mu _i}}{D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}}\) as \(D^+(i)^{\lambda _i-\lambda _{i+1}}\).

Using the definition of \(\rho \) we express

$$\begin{aligned} \rho ^+_s&=\sum _{r_1=i_{q_1}}^m \ldots \sum _{r_{\ell ^+_s}=i_{q_{\ell ^+_s}}}^m \sum _{s_1=1}^{j_{q_1}} \ldots \sum _{s_{\ell ^+_s}=1}^{j_{q_{\ell ^+_s}}} D^+(1, \ldots , i_{q_1}-1, r_1)\ldots D^+(1, \ldots , i_{q_{\ell ^+_s}}-1, r_{\ell ^+_s})\\&\quad \times (-1)^{s_1+\ldots + s_{\ell ^+_s}+j_{q_1}+\ldots + j_{q_{\ell ^+_s}}} D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , m+j_{q_1})\ldots D^-(m+1, \ldots , \widehat{m+s_{\ell ^+_s}}, \ldots , m+j_{q_{\ell ^+_s}})\\&\quad \times E^+_s(y_{r_1,m+s_1}\otimes \ldots \otimes y_{r_{\ell ^+_s},m+s_{\ell ^+_s}}), \end{aligned}$$

and rewrite

$$\begin{aligned} \rho ^+_s&= \sum _{s_1=1}^{j_{q_1}} \ldots \sum _{s_{\ell ^+_s}=1}^{j_{q_{\ell ^+_s}}} (-1)^{s_1+\ldots + s_{\ell ^+_s}+j_{q_1}+\ldots + j_{q_{\ell ^+_s}}} D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , m+j_{q_1})\ldots D^-(m+1, \ldots , \widehat{m+s_{\ell ^+_s}}, \ldots , m+j_{q_{\ell ^+_s}})\\&\quad \times \left[ \sum _{r_1=i_{q_1}}^m \ldots \sum _{r_{\ell ^+_s}=i_{q_{\ell ^+_s}}}^m DI^+(r_1, \ldots , r_{\ell ^+_s})E^+_s(y_{r_1,m+s_1}\otimes \ldots \otimes y_{r_{\ell ^+_s},m+s_{\ell ^+_s}})\right] . \end{aligned}$$

Therefore

$$\begin{aligned} \sigma ^+_s .\rho ^+_s&= \sum _{s_1=1}^{j_{q_1}} \ldots \sum _{s_{\ell ^+_s}=1}^{j_{q_{\ell ^+_s}}} (-1)^{s_1+\ldots + s_{\ell ^+_s}+j_{q_1}+\ldots + j_{q_{\ell ^+_s}}} D^-(m+1, \ldots , \widehat{m+s_1}, \ldots , m+j_{q_1})\ldots D^-(m+1, \ldots , \widehat{m+s_{\ell ^+_s}}, \ldots , m+j_{q_{\ell ^+_s}})\\&\quad \times \left[ \sum _{r_1=i_{q_1}}^m \ldots \sum _{r_{\ell ^+_s}=i_{q_{\ell ^+_s}}}^m DI^+(r_1, \ldots , r_{\ell ^+_s})\sigma ^+_s.E^+_s(y_{r_1,m+s_1}\otimes \ldots \otimes y_{r_{\ell ^+_s},m+s_{\ell ^+_s}})\right] . \end{aligned}$$

Fix indices \(s_1, \ldots , s_{\ell ^+_s}\), and \(R=(R_1, \ldots , R_{\ell ^+_s})\) such that \(i_{q_t}=p_s+t-1\le R_t\) for each \(1\le t \le \ell ^+_s\) and consider \((r_1, \ldots , r_{\ell ^+_s})\in {\mathcal {O}}(R_1, \ldots , R_{\ell ^+_s})\).

The sum

$$\begin{aligned} \sum _{(r_1, \ldots , r_{\ell ^+_s})\in {\mathcal {O}}(R_1, \ldots , R_{\ell ^+_s})}DI^+(r_1,\ldots , r_{\ell ^+_s}) \sigma ^+_s.E^+_s(y_{r_1,m+s_1}\otimes \ldots \otimes y_{r_{\ell ^+_s},m+s_{\ell ^+_s}}) \end{aligned}$$

can be rearranged as

$$\begin{aligned} \begin{aligned}&\left[ \sum _{\sigma _s\in X^+_s} (-1)^{\sigma _s} \sigma _s.DI^+(R_1,\ldots , R_{\ell ^+_s})\right] \\&\qquad \times \left[ \sum _{\sigma _s\in X^+_s} (-1)^{\sigma _s} \sigma _s.E^+_s(y_{R_1,m+s_1}\otimes \ldots \otimes y_{R_{\ell ^+_s},m+s_{\ell ^+_s}})\right] \\&\quad =[\sigma ^+_s. DI^+(R_1,\ldots , R_{\ell ^+_s})][\sigma ^+_s. E^+_s(y_{R_1,m+s_1}\otimes \ldots \otimes y_{R_{\ell ^+_s},m+s_{\ell ^+_s}})] \end{aligned} \end{aligned}$$

because

$$\begin{aligned} \sigma ^+_s.E^+_s(y_{r_1,m+s_1}\otimes \ldots \otimes y_{r_{\ell ^+_s},m+s_{\ell ^+_s}})= \sigma ^+_s.E^+_s(y_{R_1,m+s_1}\otimes \ldots \otimes y_{R_{\ell ^+_s},m+s_{\ell ^+_s}}) \end{aligned}$$

for each \((r_1, \ldots , r_{\ell ^+_s})\in {\mathcal {O}}(R_1, \ldots , R_{\ell ^+_s})\). Since

$$\begin{aligned} DI^+(R_1,\ldots , R_{\ell ^+_s})= \prod _{t=1}^{\ell ^+_s} D^+(1,\ldots , p_s+t-2, R_t), \end{aligned}$$

we get

$$\begin{aligned} \sigma _s.DI^+(R_1,\ldots , R_{\ell ^+_s})=\prod _{t=1}^{\ell ^+_s} D^+(1,\ldots , p_s+t-2, \sigma _s(R_t)) \end{aligned}$$

and

$$\begin{aligned} \sigma ^+_s.DI^+(R_1,\ldots , R_{\ell ^+_s})=\sum _{\sigma _s\in X^+_s} \prod _{t=1}^{\ell ^+_s} D^+(1,\ldots , p_s+t-2, \sigma _s(R_t)) \end{aligned}$$

which by Proposition 3.3 equals

$$\begin{aligned} D^+(1, \ldots , p_s-1,R_1, \ldots , R_{\ell ^+_s})\prod _{t=1}^{\ell ^+_s-1} D^+(1, \ldots , p_s+t-1). \end{aligned}$$

Note that the last expression vanishes if entries of R are not pairwise different.

This implies that the term

$$\begin{aligned} \sum _{r_1=i_{q_1}}^m \ldots \sum _{r_{\ell ^+_s}=i_{q_{\ell ^+_s}}}^m DI^+(r_1, \ldots r_{\ell ^+_s})\sigma ^+_s.E^+_s(y_{r_1,m+s_1}\otimes \ldots \otimes y_{r_{\ell ^+_s},m+s_{\ell ^+_s}}), \end{aligned}$$

appearing in the expression for \(\sigma ^+_s \rho ^+_s\), equals the sum of the expressions

$$\begin{aligned} \begin{aligned}&\sigma ^+_s.DI^+(R_1, \ldots R_{\ell ^+_s}) \sigma ^+_s.E^+_s(y_{R_1,m+s_1}\otimes \ldots \otimes y_{R_{\ell ^+_s},m+s_{\ell ^+_s}})\\&\quad =D^+(1, \ldots , p_s-1,R_1, \ldots , R_{\ell ^+_s})\prod _{t=1}^{\ell ^+_s-1} D^+(1, \ldots , p_s+t-1) \\&\qquad \times \sigma ^+_s.E^+_s(y_{R_1,m+s_1}\otimes \ldots \otimes y_{R_{\ell ^+_s},m+s_{\ell ^+_s}}) \end{aligned} \end{aligned}$$

over all ordered \(R=(R_1, \ldots R_{\ell ^+_s})\) such that \(i_{q_t}=p_s+t-1\le R_t\) for each \(1\le t \le \ell ^+_s\).

Combining contributions from all \(\sigma ^+_s\) we derive that \(\sigma ^+\rho _{I|J}\) is a multiple of \(\prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}\).

Analogously, we write \(\sigma ^-_r\rho ^-_r\) as

$$\begin{aligned} \sigma ^-_r.\rho ^-_r&= \sum _{r_1=i_{q_1}}^m \ldots \sum _{r_{\ell ^-_r}=i_{q_{\ell ^-_r}}}^m D^+(1, \ldots , i_{q_1}-1, r_1)\ldots D^+(1, \ldots , i_{q_{\ell ^-_r}}-1, r_{\ell ^-_r})\\&\quad \times \left[ \sum _{s_1=1}^{j_{q_1}} \ldots \sum _{s_{\ell ^-_r}=1}^{j_{q_{\ell ^-_r}}} DJ^-(s_1, \ldots , s_{\ell ^-_r})\sigma ^-_r.E^-_r(y_{r_1,m+s_1}\otimes \ldots \otimes y_{r_{\ell ^-_r},m+s_{\ell ^-_r}})\right] . \end{aligned}$$

We can use Proposition 3.6 to rewrite the expression

$$\begin{aligned} \sum _{s_1=1}^{j_{q_1}} \ldots \sum _{s_{\ell ^-_r}=1}^{j_{q_{\ell ^-_r}}} DJ^-(s_1, \ldots s_{\ell ^-_r})\sigma ^-_r.E^-_r(y_{r_1,m+s_1}\otimes \ldots \otimes y_{r_{\ell ^-_r},m+s_{\ell ^-_r}}) \end{aligned}$$

as the sum of the expressions

$$\begin{aligned} \begin{aligned}&\sigma ^-_r.DJ^-(S_1, \ldots , S_{\ell ^-_r}) \sigma ^-_r.E^-_r(y_{r_1,m+S_1}\otimes \ldots \otimes y_{r_{\ell ^-_r},m+S_{\ell ^-_r}})\\&\quad =D^-(m+1, \ldots , m+p_r-1,m+S_1, \ldots , m+S_{\ell ^-_r})\\&\qquad \times \prod _{t=1}^{\ell ^-_r-1} D^-(m+1, \ldots , m+p_r+t-1) \times \sigma ^-_r.E^-_r(y_{r_1,m+S_1}\otimes \ldots \otimes y_{r_{\ell ^-_r},m+S_{\ell ^-_r}}) \end{aligned} \end{aligned}$$

over all ordered \(S=(S_1, \ldots S_{\ell ^-_r})\) such that \(S_t\le j_{q_t}=p_r+t-1\) for each \(1\le t \le \ell ^-_r\).

Combining contributions from all \(\sigma ^-_r\) we derive that \(\sigma ^-\rho _{I|J}\) is a multiple of \(\prod _{j=1}^{n-1} D^-(m+1, \ldots , m+j)^{\nu _{j+1}-\lambda ^-_j}\).

Finally, we can combine the actions \(\sigma ^+\) and \(\sigma ^-\) into \(\sigma \). Since the maps \(E^+_s\) and \(E^-_r\) commute (because they act on I- and J- components of \(\rho _{I|J}\), respectively), and determinantal identities involving \(DI^+(r_1, \ldots , r_{\ell ^+_s})\) and \(DJ^-(s_1, \ldots , s_{\ell ^-_r})\) are independent of each other (they involve determinants of type \(D^+(i_1, \ldots , i_t)\) and \(D^-(j_1, \ldots , j_t)\), respectively), we can combine arguments used for \(\sigma ^+\) and \(\sigma ^-\) to conclude that \(\sigma \rho _{I|J}\) is a multiple of

$$\begin{aligned} \prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}\prod _{j=1}^{n-1} D^-(m+1,\ldots , m+j)^{\nu _{j+1}-\lambda ^-_{j}} \end{aligned}$$

which cancels all the terms in the denominator of \(v_{I|J}\). During this process, we group those expressions that correspond to simultaneous row permutations of a tableau of the shape \(\lambda '/ \mu '\) with entries strictly increasing in its rows from left to right (corresponding to \((R_1, \ldots , R_{\ell ^+_s})\) above) and column permutations of a tableau of the shape \(\nu / \omega \) with entries strictly increasing in its columns from top to bottom (corresponding to \((S_1, \ldots , S_{\ell ^-_r})\) above).

Alternatively, we can observe that the actions of \(\sigma ^+\) and \(\sigma ^-\) commute since they act on the I- and J-component, respectively. Since \(\sigma ^+(\sigma ^-\rho _{I|J})\) is a linear combination of expressions of type \(\sigma ^+\rho _{I|L}\), we derive that \(\sigma \rho _{I|J}\) is a multiple of \(\prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}\). Since \(\sigma ^-(\sigma ^+\rho _{I|J})\) is a linear combination of expressions of type \(\sigma ^-\rho _{K|J}\), we derive that \(\sigma \rho _{I|J}\) is a multiple of \(\prod _{j=1}^{n-1} D^-(m+1,\ldots , m+j)^{\nu _{j+1}-\lambda ^-_{j}}\). Since the variables in \(D^+(1, \ldots , i)\) and \(D^-(m+1,\ldots , m+j)\) are distinct, we conclude that \(\sigma \rho _{I|J}\) is a multiple of

$$\begin{aligned} \prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}\prod _{j=1}^{n-1} D^-(m+1,\ldots , m+j)^{\nu _{j+1}-\lambda ^-_{j}}. \end{aligned}$$

Moreover, the above formulas also show that \(\sigma .\pi _{I|J}=v_{I|J}\sigma \rho _{I|J}\) is a linear combination of tensor products of bideterminants of the shape \(\lambda \) and elements from \(\otimes ^k Y\), which means that it belongs to \(H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\). \(\square \)

The above theorem provides \(G_{ev}\)-primitive vectors of all possible weights in \(H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\). It seems plausible that these vectors span all \(G_{ev}\)-primitive vectors in \(H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\), but we do not need this result and will not pursue it further in this paper.

Corollary 4.4

The image \(v_{I|J}\sigma .{\overline{\rho }}_{I|J}\) of \(v_{I|J}\sigma .\rho _{I|J}\) in \(H^0_{G_{ev}}(\lambda )\otimes \wedge ^k Y\) is a \(G_{ev}\)-primitive vector in \(H^0_G(\lambda )\).

For the understanding of the \(G_{ev}\)-structure of \(H^0_G(\lambda )=H^0_{G_{ev}}(\lambda )\otimes \wedge (Y)\), we need to find a basis of \(G_{ev}\)-primitive vectors in \(H^0_G(\lambda )\). Since, already in the simplest case when \(H^0_G(\lambda )\) is irreducible, the dimension of the space of such vectors of a given weight is given by the number of certain Littlewood–Richardson tableaux, it is clear that finding such a basis is not a trivial problem. In the second half of this paper, we search for a specific basis of \(G_{ev}\)-primitive vectors of a given weight in \(H^0_G(\lambda )\) and \(\nabla (\lambda )\), consisting of the vectors of the above type \(v_{I|J}\sigma .{\overline{\rho }}_{I|J}\).

5 Operators on tableaux and even-primitive vectors in \(H^0_G(\lambda )\)

While Theorem 4.3 gives even-primitive vectors in \(H^0_{G_{ev}}(\lambda )\otimes \otimes ^k Y\), Corollary 4.4 gives even-primitive vectors in \(H^0_G(\lambda )\). From now on, we concentrate our attention on a more detailed description of the even-primitive vectors in \(H^0_G(\lambda ) = H^0_{G_{ev}}(\lambda )\otimes \wedge (Y)\). We follow the general setup from Sect. 4.1.

The starting point of our construction of the operator \(\sigma ^+\) was the congruence

$$\begin{aligned} \rho _{i,j_1}\otimes \rho _{i+1,j_2}-\rho _{i+1,j_1}\otimes \rho _{i,j_2} \equiv 0 \pmod {D^+(1, \ldots , i)} \end{aligned}$$

that led to the definition of the positioning map \(P^+\) and the operator \(\sigma ^+\), which is permuting entries in the I-component of \(\rho _{I|J}\) while keeping the J-component unchanged.

Since

$$\begin{aligned} \begin{aligned}&\rho _{i,j_1}\wedge \rho _{i+1,j_2}-\rho _{i+1,j_1}\wedge \rho _{i,j_2}= \rho _{i,j_1}\wedge \rho _{i+1,j_2}+\rho _{i,j_2}\wedge \rho _{i+1,j_1}\\&\quad \equiv 0 \pmod {D^+(1, \ldots , i)} \end{aligned} \end{aligned}$$

by Lemma 2.1, when working over the exterior algebra, we can adjust the definition of the operator \(\sigma ^+\) (the new operator is denoted by \(\tau ^+\)) in such a way that the I-component of \(\rho _{I|J}\) is unchanged and the entries in the J-component are permuted. Also, in this case, we have to remove the negative sign in the corresponding sum.

Analogously, the starting point of our construction of the operator \(\sigma ^-\) was the congruence

$$\begin{aligned} \rho _{i_1,j}\otimes \rho _{i_2,j+1}-\rho _{i_1,j+1}\otimes \rho _{i_2,j} \equiv 0 \pmod {D^-(m+1, \ldots , m+j)} \end{aligned}$$

that led to the definition of the positioning map \(P^-\) and the operator \(\sigma ^-\), which is permuting entries in the J-component of \(\rho _{I|J}\) while keeping the I-component unchanged.

Since

$$\begin{aligned} \begin{aligned}&\rho _{i_1,j}\wedge \rho _{i_2,j+1}-\rho _{i_1,j+1}\wedge \rho _{i_2,j}= \rho _{i_1,j}\wedge \rho _{i_2,j+1}+\rho _{i_2,j}\wedge \rho _{i_1,j+1} \\&\quad \equiv 0 \pmod {D^-(m+1, \ldots , m+j)} \end{aligned} \end{aligned}$$

by Lemma 2.2, when working over the exterior algebra, we can adjust the definition of the induced operator \(\sigma ^-\) (the new operator is denoted by \(\tau ^-\)) in such a way that the J-component of \(\rho _{I|J}\) is unchanged and the entries in the I-component are permuted. Also, in this case, we have to remove the negative sign in the corresponding sum.

6.1 The operators \(\tau ^+\) and \(\tau ^-\)

In Sect. 4.2, using a positioning map \(P^+\), we have assigned to a multi-index (K|L) a tableau \(T^+\) of the shape \((\lambda ^+/ \mu )'\) and the content \((0|\nu / \omega )\) corresponding to (K|L).

Conversely, to a tableau \(T^+\) of the shape \((\lambda ^+/ \mu )'\) and the content \((0|\nu / \omega )\) we assign the multi-index (I|J), in the following way. The entries in I are symbols \(1\le i\le m\), they are weakly increasing, and for each i there are exactly \(\lambda ^+_i-\mu _i\) entries in I that are equal to i. The entries in J are symbols \(1\le j\le n\), and they are obtained by subtracting m from entries in \(T^+\) listed by columns from left to right, in each column ordered from top to bottom. Denote by \(Q^+\) the map \(T^+ \mapsto (I|J)\) defined this way. In the particular case, when the entries in rows of \(T^+\) are strictly increasing from left to right, and \(\lambda _{I|J}\) is dominant, we obtain that (I|J) is left-admissible.

If \(Q^+(T^+)=(I|J)\), then the entries in I are weakly increasing. Hence we do not obtain all possible multi-indices (K|L) as images under \(Q^+\). However, since we are working inside the exterior algebra \(\wedge (Y)\) instead of the tensor algebra over Y, after reordering of the terms in K we get \({\overline{\rho }}_{K|L}=\epsilon {\overline{\rho }}_{I|J}\), where \(\epsilon = \pm 1\) and \((I|J)=Q^+(T^+)\) for some \(T^+\). Additionally, define the vector \({\overline{\rho }}(T^+)={\overline{\rho }}_{I|J}\).

If \(T'^+\) is obtained from \(T^+\) by column permutations, then \({\overline{\rho }}(T'^+)=\pm {\overline{\rho }}(T^+)=\pm {\overline{\rho }}_{I|J}\). Using the correspondence between \(T^+\) and (I|J), we can replace the operator \(\sigma ^+\) acting on multi-indices (I|J) by an operator \(\tau ^+\) acting on the tableau \(T^+\).

For \(\sigma \in X^+\) write \(\sigma {\overline{\rho }}_{I|J}=\overline{\sigma \rho _{I|J}}\) and denote by \(\sigma T^+\) the tableau obtained by applying permutation \(\sigma \) to the entries of \(T^+\). The tableau \(\sigma T^+\) corresponds to a multi-index (I|L), where L has the same content as J. It is clear that \({\overline{\rho }}(\sigma T^+)=\sigma {\overline{\rho }}(T^+)\).

The operator \(\tau ^+\) acting on \(T^+\) is defined as

$$\begin{aligned} \tau ^+ T^+=\sum _{\sigma \in X^+} \sigma T^+. \end{aligned}$$

The map \({\overline{\rho }}\) can be extended to linear combinations of tableaux naturally. Then the operator \({\overline{\rho }}\tau ^+\) applied to \(T^+\) is given as

$$\begin{aligned} {\overline{\rho }}(\tau ^+ T^+)=\sum _{\sigma \in X^+} {\overline{\rho }}(\sigma T^+). \end{aligned}$$

The expressions \({\overline{\rho }}\tau ^+ T^+\) can be considered as row bipermanents corresponding to the pair of tableaux \(T^+_\mathrm{can}\) and \(T^+\) based on formal symbols \(\rho _{ij}\), see [7].

The operators \(\tau ^+\) and \(\sigma ^+\) are compatible in the sense that

$$\begin{aligned} {\overline{\rho }}(\tau ^+ T^+)=\sigma ^+{\overline{\rho }}(T^+). \end{aligned}$$
(1)

Therefore, \(v_{I|J}{\overline{\rho }}(\tau ^+ T^+)\) is an even-primitive vector that is a linear combination of vectors \({\overline{\pi }}_{I|L}=v_{I|J}{\overline{\rho }}_{I|L}\) with L as above.

In a similar vein, in Sect. 4.2, using positioning map \(P^-\), we have assigned to a multi-index (K|L) a tableau \(T^-\) of the shape \(\nu / \omega \) and the content \((\lambda ^+/ \mu |0)\) corresponding to (K|L).

Conversely, to a tableau \(T^-\) of the shape \(\nu / \omega \) and the content \((\lambda ^+/ \mu |0)\) we assign the multi-index (I|J), in the following way. The entries in J are symbols \(1\le j\le n\), they are weakly increasing, and for each j there are exactly \(\nu _j-\omega _j\) entries in J that are equal to j. The entries in I are symbols \(1\le i\le m\) that are entries in \(T^-\) listed by rows from top to bottom, in each row, ordered from left to right. Denote by \(Q^-\) the map \(T^- \mapsto (I|J)\) defined this way. In the particular case, when the entries in rows of \(T^-\) are strictly increasing from left to right, and \(\lambda _{I|J}\) is dominant, we obtain that (I|J) is right-admissible.

If \(Q^-(T^-)=(I|J)\), then the entries in J are weakly increasing, hence we do not obtain all possible multi-indices (K|L) as images under \(Q^-\). However, since we are working inside the exterior algebra \(\wedge (Y)\) instead of the tensor algebra over Y, after reordering of the terms in L we get \({\overline{\rho }}_{K|L}=\epsilon {\overline{\rho }}_{I|J}\), where \(\epsilon = \pm 1\) and \((I|J)=Q^-(T^-)\) for some \(T^-\). Additionally, define the vector \({\overline{\rho }}(T^-)={\overline{\rho }}_{I|J}\).

If \(T'^-\) is obtained from \(T^-\) by row permutations, then \({\overline{\rho }}(T'^-)=\pm {\overline{\rho }}(T^-)=\pm {\overline{\rho }}_{I|J}\). Using the correspondence between \(T^-\) and (I|J), we can replace the operator \(\sigma ^-\) acting on multi-indices (I|J) by an operator \(\tau ^-\) acting on tableaux \(T^-\).

For \(\sigma \in X^-\) write \(\sigma {\overline{\rho }}_{I|J}=\overline{\sigma \rho _{I|J}}\) and denote by \(\sigma T^-\) the tableau obtained by applying permutation \(\sigma \) to the entries of \(T^-\). The tableau \(\sigma T^-\) corresponds to a multi-index (K|J), where K has the same content as I. It is clear that \({\overline{\rho }}(\sigma T^-)=\sigma {\overline{\rho }}(T^-)\).

The operator \(\tau ^-\) acting on \(T^-\) is defined as

$$\begin{aligned} \tau ^- T^-=\sum _{\sigma \in X^-} \sigma T^- \end{aligned}$$

and the operator \({\overline{\rho }}\tau ^-\) applied to \(T^-\) is given as

$$\begin{aligned} {\overline{\rho }}(\tau ^- T^-)=\sum _{\sigma \in X^-} {\overline{\rho }}(\sigma T^-). \end{aligned}$$

The expressions \({\overline{\rho }}\tau ^- T^-\) can be considered as row bipermanents corresponding to the pair of tableaux \(T^-_\mathrm{can}\) and \(T^-\) based on formal symbols \(\rho _{ij}\), see [7].

The operators \(\tau ^-\) and \(\sigma ^-\) are compatible in the sense that

$$\begin{aligned} {\overline{\rho }}(\tau ^- T^-)=\sigma ^-{\overline{\rho }}(T^-). \end{aligned}$$
(2)

Therefore, \(v_{I|J}{\overline{\rho }}(\tau ^- T^-)\) is an even-primitive vector that is a linear combination of vectors \({\overline{\pi }}_{K|J}=v_{I|J}{\overline{\rho }}_{K|J}\) with K as above.

We illustrate the above definitions in the following example.

Example 5.1

Let \(m=2\), \(n=6\), \(\lambda =(3,3)\) and \(\mu =\emptyset \). Then the diagram \([\lambda '/\mu ']\) is of type (2, 2, 2). Consider the tableau \(T^+=\begin{matrix}3&{}4\\ 5&{}6\\ 7&{}8\end{matrix}\). Then

$$\begin{aligned} \tau ^+ T^+=\begin{matrix}3&{}4\\ 5&{}6\\ 7&{}8\end{matrix}+\begin{matrix}4&{}3\\ 5&{}6\\ 7&{}8\end{matrix}+\begin{matrix}3&{}4\\ 6&{}5\\ 7&{}8\end{matrix}+ \begin{matrix}3&{}4\\ 5&{}6\\ 8&{}7\end{matrix}+\begin{matrix}4&{}3\\ 6&{}5\\ 7&{}8\end{matrix}+\begin{matrix}4&{}3\\ 5&{}6\\ 8&{}7\end{matrix}+\begin{matrix}3&{}4\\ 6&{}5\\ 8&{}7\end{matrix} +\begin{matrix}4&{}3\\ 6&{}5\\ 8&{}7\end{matrix} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} {\overline{\rho }}(\tau ^+ T^+)=&\quad \,\rho _{13}\wedge \rho _{15}\wedge \rho _{17}\wedge \rho _{24}\wedge \rho _{26}\wedge \rho _{28}\\&+\rho _{14}\wedge \rho _{15}\wedge \rho _{17}\wedge \rho _{23}\wedge \rho _{26}\wedge \rho _{28}\\&+\rho _{13}\wedge \rho _{16}\wedge \rho _{17}\wedge \rho _{24}\wedge \rho _{25}\wedge \rho _{28}\\&+\rho _{13}\wedge \rho _{15}\wedge \rho _{18}\wedge \rho _{24}\wedge \rho _{26}\wedge \rho _{27}\\&+\rho _{14}\wedge \rho _{16}\wedge \rho _{17}\wedge \rho _{23}\wedge \rho _{25}\wedge \rho _{28}\\&+\rho _{14}\wedge \rho _{15}\wedge \rho _{18}\wedge \rho _{23}\wedge \rho _{26}\wedge \rho _{27}\\&+\rho _{13}\wedge \rho _{16}\wedge \rho _{18}\wedge \rho _{24}\wedge \rho _{25}\wedge \rho _{27}\\&+\rho _{14}\wedge \rho _{16}\wedge \rho _{18}\wedge \rho _{23}\wedge \rho _{25}\wedge \rho _{27}. \end{aligned} \end{aligned}$$
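
To make the action of \(\tau ^+\) computationally concrete, the following minimal Python sketch (not part of the paper) enumerates the summands \(\sigma T^+\) for \(\sigma \in X^+\), one summand for each \(\sigma \); it assumes the tableau is stored as a dictionary mapping a cell (row, column) to its entry, and the function name tau_plus_summands is ours. Applied to the tableau of Example 5.1, it returns the eight summands displayed above.

from itertools import permutations, product

def tau_plus_summands(t_plus):
    # All tableaux obtained from t_plus by permuting the entries within each
    # row independently; one summand is produced for every element of X^+.
    rows = sorted({i for (i, _) in t_plus})
    row_cells = {i: sorted(c for c in t_plus if c[0] == i) for i in rows}
    per_row = [[dict(zip(row_cells[i], perm))
                for perm in permutations([t_plus[c] for c in row_cells[i]])]
               for i in rows]
    summands = []
    for choice in product(*per_row):
        tableau = {}
        for row_part in choice:
            tableau.update(row_part)
        summands.append(tableau)
    return summands

# The tableau T^+ of Example 5.1; its three rows of length two give 2*2*2 = 8 summands.
T_plus = {(1, 1): 3, (1, 2): 4, (2, 1): 5, (2, 2): 6, (3, 1): 7, (3, 2): 8}
assert len(tau_plus_summands(T_plus)) == 8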

5.2 The operator \(\tau \)

Let (I|J), (K|L) and (M|N) be multi-indices such that (K|L) is left-admissible, (M|N) is right-admissible and \({\overline{\rho }}_{I|J}=\epsilon _1 {\overline{\rho }}_{K|L}=\epsilon _2 {\overline{\rho }}_{M|N}\), where \(\epsilon _1, \epsilon _2 \in \{\pm 1\}\).

There is a unique tableau \(R^+\) such that \(Q^+(R^+)=(K|L)\) and a unique positioning map \(P^+\) such that \(P^+(K|L)=R^+\). Other tableaux \(T^+\) corresponding to (K|L), for different positioning maps \(P^+\), differ from \(R^+\) only by permutations of entries within their rows. Then \({\overline{\rho }}(T^+)=\pm {\overline{\rho }}(R^+)=\pm {\overline{\rho }}_{K|L}\).

There is a unique tableau \(R^-\) such that \(Q^-(R^-)=(M|N)\) and a unique positioning map \(P^-\) such that \(P^-(M|N)=R^-\). Other tableaux \(T^-\) corresponding to (M|N), for different positioning maps \(P^-\), differ from \(R^-\) only by permutations of entries within their columns. Then \({\overline{\rho }}(T^-)=\pm {\overline{\rho }}(R^-)=\pm {\overline{\rho }}_{M|N}\).

Therefore, any tableaux \(T^+\) and \(T^-\) constructed as above are closely related to \({\overline{\rho }}_{I|J}\). We define a repositioning map Rpos that, together with its inverse \(Rpos^{-1}\), fixes a correspondence between \(T^+\) and \(T^-\) and allows us to move from one representation to the other and back while preserving the correspondence to the expression \({\overline{\rho }}_{I|J}\).

Fix a tableau \(T^+:{\mathcal {D}}^+ \rightarrow \{m+1, \ldots , m+n\}\). We want to assign to \(T^+\) a tableau \(T^-:{\mathcal {D}}^-\rightarrow \{1, \ldots , m\}\) that has the property \({\overline{\rho }}(T^-) =\pm {\overline{\rho }}(T^+)\). This property is satisfied if and only if for each r the rth row of \(T^-\) contains exactly those indices i such that the ith column of \(T^+\) contains an entry equal to \(m+r\).

Let a repositioning map \(Rpos: {\mathcal {D}}^+ \rightarrow {\mathcal {D}}^-\) be a bijection that maps each entry \([k,i]\in {\mathcal {D}}^+\) such that \(t^+_{k,i}=m+j\) to some \([j,l]\in {\mathcal {D}}^-\). Then the tableau \(T^-=Rp(T^+)\), corresponding to the map Rpos, is given by \(t^-_{j,l}=i\), where \(t^+_{k,i}=m+j\) as above. Tableaux \(T^-\) like these are in one-to-one correspondence with the maps Rpos satisfying the above property. Then the basic compatibility requirement

$$\begin{aligned} {\overline{\rho }}(T^+)=\epsilon {\overline{\rho }}(T^-)=\epsilon {\overline{\rho }}(Rp(T^+)), \end{aligned}$$

where \(\epsilon =\pm 1\), is satisfied.

Our next goal is to define an analog of the operator \(\sigma \), defined earlier, acting on (I|J) and \(\rho _{I|J}\). We define an operator \(\tau \) acting on a tableau \(T^+\) that combines the actions of the operators \(\tau ^+\) on \(T^+\) and \(\tau ^-\) on \(T^-\).

Fix a tableau \(T^+\), a repositioning map Rpos and \(T^-=Rp(T^+)\). If \(\sigma \in X^-\), then the tableau \(\sigma T^-\) is given as \(T^-\circ \sigma \), a composition of \(\sigma \) and \(T^-: {\mathcal {D}}^- \rightarrow \{1, \ldots , m\}\). The map \(Rp(\sigma ):{\mathcal {D}}^+\rightarrow {\mathcal {D}}^+\) is defined by the requirement that \(Rpos\circ Rp(\sigma )=\sigma \circ Rpos\), that is, \(Rp(\sigma )=Rpos^{-1}\circ \sigma \circ Rpos\).

We define \(\sigma T^+=T^+\circ Rp(\sigma )\). Then there is the following compatibility condition

$$\begin{aligned} {\overline{\rho }}(\sigma T^+)=\epsilon {\overline{\rho }}(\sigma T^-). \end{aligned}$$

Extending this naturally to linear combinations, we define \(\tau ^- T^+\) and obtain the following compatibility condition

$$\begin{aligned} {\overline{\rho }}(\tau ^- T^+)=\epsilon {\overline{\rho }}(\tau ^- T^-). \end{aligned}$$
(3)

Finally, we define \(\tau .T^+= \tau ^+\tau ^- T^+\).

The following theorem is related to Theorem 4.3 and Corollary 4.4.

Theorem 5.2

For any tableau \(T^+\) of the shape \((\lambda ^+/ \mu )'\), the content \((0|\nu /\omega )\), and a repositioning map \(Rpos:{\mathcal {D}}^+\rightarrow {\mathcal {D}}^-\) as above, the expression \(v_{I|J}{\overline{\rho }}(\tau T^+)\) is a \(G_{ev}\)-primitive vector of \(H^0_G(\lambda )\).

Proof

Denote \(T^-= Rp(T^+)\) given by the map Rpos. It follows from the proof of Theorem 4.3 that

$$\begin{aligned} \sigma ^-{\overline{\rho }}(T^-) \equiv 0 \left( \mathrm{mod} {\prod _{j=1}^{n-1} D^-(m+1, \ldots , m+j)^{\nu _{j+1}-\lambda ^-_j}}\right) . \end{aligned}$$

Equations (3) and (2) imply that

$$\begin{aligned}&\epsilon {\overline{\rho }}(\tau ^- T^+)= {\overline{\rho }}(\tau ^- T^-) = \sigma ^-{\overline{\rho }}(T^-) \\&\quad \equiv 0 \left( \mathrm{mod}{\prod _{j=1}^{n-1} D^-(m+1, \ldots , m+j)^{\nu _{j+1}-\lambda ^-_j}}\right) . \end{aligned}$$

The expression \(\tau ^- T^+\) is a linear combination of tableaux \(U^+\), and for each such \(U^+\) the proof of Theorem 4.3 implies

$$\begin{aligned} \sigma ^+ {\overline{\rho }}(U^+) \equiv 0 \left( \mathrm{mod}{\prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}}\right) . \end{aligned}$$

Equation (1) implies

$$\begin{aligned} {\overline{\rho }}(\tau ^+ U^+)=\sigma ^+ {\overline{\rho }}(U^+) \equiv 0 \left( \mathrm{mod}{\prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}}\right) . \end{aligned}$$

Therefore

$$\begin{aligned} {\overline{\rho }}(\tau ^+ \tau ^- T^+) \equiv 0 \left( \mathrm{mod}{\prod _{i=1}^{m-1} D^+(1, \ldots , i)^{\lambda _{i+1}-\mu _i}\prod _{j=1}^{n-1} D^-(m+1, \ldots , m+j)^{\nu _{j+1}-\lambda ^-_j}}\right) , \end{aligned}$$

showing that \(v_{I|J}{\overline{\rho }}(\tau T^+)\) belongs to \(H^0_G(\lambda )\). It is evident that \(v_{I|J}{\overline{\rho }}(\tau T^+)\) is a \(G_{ev}\)-primitive vector. \(\square \)

5.3 The repositioning map

For a given tableau \(T^+\) there are some choices for the map \(Rpos:{\mathcal {D}}^+ \rightarrow {\mathcal {D}}^-\) and the related tableau \(T^-=Rp(T^+)\) for which the above theorem gives a \(G_{ev}\)-primitive vector \(v_{I|J}{\overline{\rho }}(\tau T^+)\) of \(H^0_G(\lambda )\). We want to fix for every \(T^+\) a specific map Rpos and tableau \(T^-\) in a way that relates to Yamanouchi words and Littlewood–Richardson tableaux—see Sect. 5.2 of [5].

Let Q be a skew tableau of the shape \(\alpha / \beta \) and \(Q^+_\mathrm{can}\) be the canonical column skew tableau corresponding to the diagram \([\alpha / \beta ]\). To Q we assign a word \(w=w(Q)\), obtained by reading and concatenating entries in its rows from right to left starting in the top row and proceeding to the bottom row. The word w is a lattice word if in every initial part of the word w, the symbol i appears at least as many times as the symbol \(i+1\). The tableau Q is called Yamanouchi if w(Q) is a lattice word. Recall that Q is called semistandard if all entries in each row are weakly increasing from left to right and all entries in each column are strictly increasing from top to bottom. A Littlewood–Richardson tableau Q is a tableau that is semistandard and Yamanouchi.
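
The following minimal Python sketch (not part of the paper) implements these notions; it assumes a skew tableau is stored as a dictionary mapping a cell (row, column) to its entry, and the parameter base denotes the smallest symbol of the alphabet (here \(m+1\)). The function names are ours.

from collections import Counter

def reading_word(tableau):
    # w(Q): each row read from right to left, rows taken from top to bottom.
    rows = sorted({i for (i, _) in tableau})
    return [tableau[(i, j)]
            for i in rows
            for j in sorted((c[1] for c in tableau if c[0] == i), reverse=True)]

def is_lattice_word(word, base):
    # Every initial part must contain the symbol s at least as often as s + 1.
    counts = Counter()
    for s in word:
        counts[s] += 1
        if s > base and counts[s] > counts[s - 1]:
            return False
    return True

def is_semistandard(tableau):
    # Rows weakly increase left to right, columns strictly increase top to bottom.
    return (all(tableau[(i, j)] <= tableau[(i, j + 1)]
                for (i, j) in tableau if (i, j + 1) in tableau)
            and all(tableau[(i, j)] < tableau[(i + 1, j)]
                    for (i, j) in tableau if (i + 1, j) in tableau))

def is_littlewood_richardson(tableau, base):
    return is_semistandard(tableau) and is_lattice_word(reading_word(tableau), base)

assert is_lattice_word([1, 2, 1, 2, 3], base=1)
assert not is_lattice_word([1, 2, 2], base=1)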

Also, define a word \(z=z(Q)=w(Q^+_\mathrm{can})\), which instead of the entries in Q records their corresponding columns. We can think of z(Q) as recording the places of the corresponding letters in Q. There is a connection of our setup to letter-place algebras defined in [7], but we neither need it nor pursue it here.

Now we define the map Rp that sends each tableau \(T^+\) of the shape \((\lambda ^+/ \mu )'\) and the content \((0|\nu / \omega )\) to a tableau \(Rp(T^+)=T^-\) of the shape \(\nu / \omega \) and the content \((\lambda ^+ / \mu |0)\). For a tableau \(T^+\) as above, the word \(w(T^+)\) records the letters appearing in \(T^+\), which correspond to the multi-index J. The word \(z(T^+)\) records the corresponding places in \(T^+\) and is related to the multi-index I.

Definition 5.3

The tableau \(T^-=Rp(T^+)\) is obtained in the following way. When reading the word \(w(T^+)\), if the symbol \(w_s=m+i\) appears for the jth time, then \(t^-_{i,\lambda ^-_i+j}=z_s\), where \(z=z(T^+)\). The map \(Rpos:{\mathcal {D}}^+\rightarrow {\mathcal {D}}^-\) is defined correspondingly: it sends the cell of \({\mathcal {D}}^+\) contributing the symbol \(w_s\) to the cell \([i,\lambda ^-_i+j]\) of \({\mathcal {D}}^-\).
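
The next minimal Python sketch (not part of the paper) implements the words \(w(T^+)\), \(z(T^+)\) and the construction of \(T^-=Rp(T^+)\) from Definition 5.3, again storing tableaux as dictionaries mapping a cell (row, column) to its entry; the argument lam_minus is the sequence \(\lambda ^-\) padded with zeros to n parts, and the function names are ours. Applied to the tableau \(T^+\) of Example 5.5 below, it reproduces the words and the tableau \(T^-\) stated there.

def words_w_and_z(t_plus):
    # w(T^+) and z(T^+): entries and their column indices, rows read from top
    # to bottom, each row from right to left.
    cells = sorted(t_plus, key=lambda c: (c[0], -c[1]))
    return [t_plus[c] for c in cells], [c[1] for c in cells]

def reposition(t_plus, m, lam_minus):
    # Build T^- = Rp(T^+): the j-th occurrence of the symbol m+i in w(T^+)
    # places the corresponding column index z_s into the cell (i, lam_minus[i-1]+j).
    w, z = words_w_and_z(t_plus)
    seen = {}
    t_minus = {}
    for symbol, column in zip(w, z):
        i = symbol - m
        seen[i] = seen.get(i, 0) + 1
        t_minus[(i, lam_minus[i - 1] + seen[i])] = column
    return t_minus

# The tableau T^+ of Example 5.5: m = 2, lambda^- = (1, 1, 0).
T_plus = {(1, 2): 4, (2, 1): 3, (2, 2): 5}
assert words_w_and_z(T_plus) == ([4, 5, 3], [2, 2, 1])
assert reposition(T_plus, 2, [1, 1, 0]) == {(2, 2): 2, (3, 1): 2, (1, 2): 1}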

From this definition, it is immediate that Rpos is a bijection. It is interesting that we can define the counting tableau \(C^-\) corresponding to \(T^+\) by putting \(c^-_{i,\lambda ^-_i+j}=s\) if the symbol \(w_s=m+i\) appears for the jth time in \(w(T^+)\). While it might be useful for other purposes, we do not need this in what follows.

Definition 5.4

Let \(T^+\) be a tableau as above. Denote by \(w'\) an initial part of the word \(w(T^+)\), and for each i, denote by \(a_{w'}(i)\) the number of appearances of the symbol \(m+i\) in \(w'\). The tableau \(T^+\) is called shifted Yamanouchi if for every \(w'\) and i we have \(\lambda ^-_i+a_{w'}(i)\ge \lambda ^-_{i+1}+a_{w'}(i+1)\).
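
A minimal Python sketch (not part of the paper) of this prefix test follows; the word \(w(T^+)\) is assumed to be given as a list of symbols \(m+1,\ldots ,m+n\), lam_minus is \(\lambda ^-\) padded with zeros to n parts, and the function name is ours. The check reproduces the conclusion of Example 5.6 below.

def is_shifted_yamanouchi(word, m, n, lam_minus):
    # a_{w'}(i) is accumulated in counts[i]; after each symbol we test the
    # inequality of Definition 5.4 for every pair of consecutive indices i, i+1.
    counts = [0] * (n + 1)
    for s in word:
        counts[s - m] += 1
        for i in range(1, n):
            if lam_minus[i - 1] + counts[i] < lam_minus[i] + counts[i + 1]:
                return False
    return True

# Example 5.6: m = n = 3, lambda^- = (3, 2, 1), w(T^+) = 45546656.
assert is_shifted_yamanouchi([4, 5, 5, 4, 6, 6, 5, 6], 3, 3, [3, 2, 1])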

An equivalent description of the shifted Yamanouchi condition is the following. Let T be the tableau corresponding to \(T^+\). Define the new word \(w_{sh}(T)\) obtained by concatenating the word \(w(L^-_\mathrm{can})\) first and \(w(T^+)\) second. Then \(T^+\) is shifted Yamanouchi if and only if the word \(w_{sh}(T)\) is a lattice word.

Since \(w(L^-_\mathrm{can})\) is a lattice word, the condition that \(w_{sh}(T)\) is a lattice word can be expressed as follows. If \([j,l],[j+1,l]\in {\mathcal {D}}^-\) correspond to the lth appearance of the symbol \(m+j\) and \(m+j+1\) in \(w_{sh}(T)\), respectively, then the lth appearance of \(m+j\) in \(w_{sh}(T)\) is before the lth appearance of \(m+j+1\) in \(w_{sh}(T)\).

Example 5.5

Let \(m=2\), \(n=3\), \(\lambda =(\lambda ^+|\lambda ^-)=(2,2|1,1)\), \(\mu =(1,0)\), \(\nu =(2,2,1)\) and \(T^+=\begin{matrix} &{}4\\ 3&{}5 \end{matrix}\). Then \(w(T^+)=453\), \(z(T^+)=221\) and \(T^-= \begin{matrix} &{}1\\ &{}2\\ 2 \end{matrix}\). Also, \(w_{sh}(T)=45453\) and \(T^+\) is shifted Yamanouchi.

Example 5.6

Let \(m=n=3\), \(\lambda =(5,4,4|3,2,1)\) and \(\mu =(2,2,1)\) which implies \(\lambda '=(6,5,4,3,1)\), \(\mu '=(3,2)\). Let \(\nu =(5,5,4)\) and \(T^+\) be given as \(\begin{matrix}&{}&{}\\ &{}&{}4\\ 4&{}5&{}5\\ 5&{}6&{}6\\ 6\end{matrix}\). Then \(w(T^+)=45546656\) and the tableau \(T^-\) is \(\begin{matrix}&{}&{}&{}3&{}1\\ &{}&{}3&{}2&{}1\\ {} &{}3&{}2&{}1\end{matrix}\). Also, \(w_{sh}(T)=44455645546656\) and \(T^+\) is shifted Yamanouchi.

5.4 Further properties of the operator \(\tau \)

Let \(X^-_j\) be the subgroup of \(X^-\) consisting of all elements that permute only jth column of diagram \({\mathcal {D}}^-\), and \(X^+_i\) be the subgroup of \(X^+\) consisting of all elements that permute only the ith row of diagram \({\mathcal {D}}^+\). Then \(\tau ^-\) decomposes as a product of commuting operators \(\tau ^-_j\), where \(\tau ^-_j = \sum _{\sigma \in X^-_j} \sigma \), and \(\tau ^+\) decomposes as a product of commuting operators \(\tau ^+_i\), where \(\tau ^+_i = \sum _{\sigma \in X^+_i} \sigma \).

The following lemma shows a particular case when \({\overline{\rho }}(\tau T^+)\) vanishes.

Lemma 5.7

If there are two different entries in the same column of \(T^-\) such that the corresponding entries in \(T^+\) lie in the same row of \(T^+\), then \({\overline{\rho }}(\tau T^+)=0\).

Proof

Assume that the symbols \(i_1\) and \(i_2\) appear in positions \((j_1, l)\) and \((j_2, l)\) of \(T^-\), and the corresponding entries \(m+j_1\) and \(m+j_2\) appear in positions \((k,i_1)\) and \((k,i_2)\) of \(T^+\).

Denote by \(\nu ^-_l\) the transposition of positions \((j_1,l)\) and \((j_2,l)\) in \({\mathcal {D}}^-\) and by \(\nu ^+_k\) the transposition of positions \((k,i_1)\) and \((k,i_2)\) of \({\mathcal {D}}^+\). Let \(X^+_k={\widetilde{X}}^+_k \nu ^+_k\) be a decomposition of \(X^+_k\) as products of \({\widetilde{X}}^+_k\) and \(\nu ^+_k\), where \({\widetilde{X}}^+_k\) are representatives of left coset classes of \(X^+_k\) by \(\nu ^+_k\). Analogously, let \(X^-_l=\nu ^-_l {\widetilde{X}}^-_l\) be a decomposition of \(X^-_l\) as products of \(\nu ^-_l\) and \({\widetilde{X}}^-_l\), which consist of representatives of right coset classes of \(X^-_l\) by \(\nu ^-_l\).

Write

$$\begin{aligned} \tau =\prod _{i\ne k} \tau ^+_i \left( \sum _{{\widetilde{\sigma }}^+_k\in {\widetilde{X}}^+_k} {\widetilde{\sigma }}^+_k\right) (1+\nu ^+_k)(1+ \nu ^-_l) \left( \sum _{{\widetilde{\sigma }}^-_l\in {\widetilde{X}}^-_l} {\widetilde{\sigma }}^-_l\right) \prod _{j\ne l} \tau ^-_j \end{aligned}$$

and denote by \(Q^-\) a summand in \((\sum _{{\widetilde{\sigma }}^-_l\in {\widetilde{X}}^-_l} {\widetilde{\sigma }}^-_l) \prod _{j\ne l} \tau ^-_j T^-\). We show that \({\overline{\rho }}((1+\nu ^+_k)(1+ \nu ^-_l) Q^-)=0\).

We have \((1+\nu ^-_l) Q^- = Q^- + {Q'}^-\), where \((Q')^-\) is obtained from \(Q^-\) by switching the entries \(i_1\) and \(i_2\) at the positions \((j_1,l)\) and \((j_2,l)\). Let \(Q^+\) be such that \(Rp(Q^+)=Q^-\). The identity \(\rho _{i_1,j_2}\wedge \rho _{i_2,j_1}=-\rho _{i_2,j_1}\wedge \rho _{i_1,j_2}\) shows that \({\overline{\rho }}((1+\nu ^-_l) Q^+) = {\overline{\rho }}(Q^+) - {\overline{\rho }}((Q')^+)\), where \((Q')^+\) is obtained from \(Q^+\) by switching the entries \(m+j_1\) and \(m+j_2\) at the positions \((k,i_1)\) and \((k,i_2)\).

Since \({\overline{\rho }}((1+\nu ^+_k) Q^+)={\overline{\rho }}((1+\nu ^+_k) (Q')^+)={\overline{\rho }}(Q^+)+{\overline{\rho }}((Q')^+)\), we obtain \({\overline{\rho }}((1+\nu ^+_k)(1+\nu ^-_l) Q^-)=0\). Therefore \({\overline{\rho }}((1+\nu ^+_k)(1+\nu ^-_l)(\sum _{{\widetilde{\sigma }}^-_l\in {\widetilde{X}}^-_l} {\widetilde{\sigma }}^-_l) \prod _{j\ne l} \tau ^-_j T^+)=0\) and we conclude that \({\overline{\rho }}(\tau T^+)=0\). \(\square \)

If there are two different entries in the same column of \(T^-\) such that the corresponding entries in \(T^+\) lie in the same row of \(T^+\), then \(T^+\) and \(T^-\) are called insignificant. If \(T^+\) and \(T^-\) are not insignificant, we call them significant.

Lemma 5.8

If \(S^+\) appears as a term in the expression \(\tau T^+\), then \(\tau S^+=\tau T^+\).

Proof

We consider the following sequence of tableaux: starting from \(T^+\) and \(T^-=Rp(T^+)\), let \(R^-=\sigma _-T^-\) with the corresponding tableau \(R^+\), and let \(S^+=\sigma _+R^+\) with the corresponding tableau \(S^-\), where \(\sigma _-\in X^-\) and \(\sigma _+\in X^+\). First, assume that \(\sigma _-\in X^-_l\), and then combine such permutations into a general \(\sigma _-\in X^-\) later.

Keeping in mind the compatibility of the maps Rp, \(Rp^{-1}\) and the actions of \(\sigma _-\) and \(\sigma _+\), we describe the entries in the lth column of each tableau \(T^-, R^-, S^-\), and the corresponding entries in \(T^+, R^+, S^+\) as follows.

Let \([j_1, l], \ldots , [j_u,l] \in {\mathcal {D}}^-\) be the entries in the lth column of \({\mathcal {D}}^-\), and let \([k_1,i_1], \ldots , [k_u,i_u] \in {\mathcal {D}}^+\) be the corresponding entries such that \(t^+_{k_t,i_t}=m+j_t\) and \(t^-_{j_t,l}=i_t\) for each \(t=1, \ldots , u\).

The action of \(\sigma _-\) is given as \(\sigma _-[j_t,l]=[\sigma _-(j_t),l]=[j_{\sigma _-(t)},l]\) for each \(t=1, \ldots , u\), where the action on indices \(j_1, \ldots , j_u\) (also denoted by \(\sigma _-\)) is induced by \(\sigma _-\). Then \(r^-_{\sigma _-(j_t),l}=t^-_{j_t,l}=i_t\) and \(r^+_{k_t,i_t}=m+\sigma _-(j_t)\) for each \(t=1, \ldots , u\).

The permutation \(\sigma _+\) sends \([k_t,i_t]\) to \([k_t, \sigma _+(i_t)]=[k_t,i_{\sigma _+(t)}]\) for each \(t=1, \ldots , u\), where the action on indices \(i_1, \ldots , i_u\) (also denoted by \(\sigma _+\)) is induced by \(\sigma _+\). Then \(s^+_{k_t,\sigma _+(i_t)}=r^+_{k_t,i_t}=m+\sigma _-(j_t)\) and \(s^-_{\sigma _-(j_t),l}=\sigma _+(i_t)\) for each \(t=1, \ldots , u\). This compares to \(t^+_{k_t,i_t}=m+j_t\) and \(t^-_{j_t,l}=i_t\) for each \(t=1, \ldots , u\).

Now consider the sequence of tableaux obtained from \(S^+\) and \(S^-\) by applying first \(\sigma _-^{-1}\in X^-\) and then \(\sigma _+^{-1}\in X^+\), and denote the resulting tableaux by \(Q^+\) and \(Q^-\).

Using the above formulae, we obtain \(q^+_{k_t,i_t}=m+j_t\) and \(q^-_{j_t,l}=i_t\) for each \(t=1, \ldots , u\), showing that \(Q^+=T^+\) and \(Q^-=T^-\).

This implies that \(S^+\) appears as a term in the expression \(\tau T^+\) if and only if \(T^+\) appears as a term in the expression \(\tau S^+\). Therefore, in this case, \(\tau S^+=\tau T^+\). \(\square \)

6 The operator \(\tau \) and its action on Littlewood–Richardson tableaux

Let us recall that the Littlewood–Richardson tableaux T of the shape \(\lambda '/ \mu '\) and the content \((0|\nu )\) play an important role later because their number gives the number of even-primitive vectors in the simple supermodule \(L_G(\lambda )\).

Instead of tableaux T, we work with tableaux \(T^+\) of shape \((\lambda ^+/ \mu )'\).

6.1 Clausen column and row preorders

For a tableau \(T^+\), every index j of a column of \(T^+\), and every number \(1\le k\le n\), we define \(c_{jk}\) to be the number of occurrences of the symbols \(\{m+1, \ldots , m+k\}\) in the columns of \(T^+\) of index j or higher. We organize these entries in the form of the Clausen column matrix \(C(T^+)=\begin{pmatrix}c_{jk}\end{pmatrix}\).

Additionally, for every index i of a row of \(T^+\) and every number \(1\le k\le n\), we define \(r_{ik}\) to be the number of occurrences of the symbols \(\{m+1, \ldots , m+k\}\) in the rows of \(T^+\) of index i or lower. We organize these entries in the form of the Clausen row matrix \(R(T^+)=\begin{pmatrix}r_{ik}\end{pmatrix}\).

Example 6.1

Let \(m=n=3\), \(\lambda =(5,4,4|3,2,1)\), \(\mu =(2,2,1)\) and \(T^+\) be a tableau \(\begin{matrix}&{}&{}\\ &{}&{}4\\ 4&{}5&{}5\\ 5&{}6&{}6\\ 6\end{matrix}\) as in Example 5.6. Then the Clausen column matrix \(C(T^+)=\begin{pmatrix}2&{}\quad 1&{}\quad 1\\ 5&{}\quad 3&{}\quad 2\\ 8&{}\quad 5&{}\quad 3\end{pmatrix}\) and Clausen row matrix \(R(T^+)=\begin{pmatrix}1&{}\quad 1&{}\quad 1\\ 2&{}\quad 4&{}\quad 4\\ 2&{}\quad 5&{}\quad 7\\ 2&{}\quad 5&{}\quad 8\end{pmatrix}\).
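
A minimal Python sketch (not part of the paper) of both Clausen matrices follows; it assumes the skew tableau is stored as a dictionary mapping a cell (row, column) to its entry, with the bound k running over \(1,\ldots ,n\), and the function names are ours. Applied to the tableau of Example 6.1, it reproduces the two matrices displayed above.

def clausen_column_matrix(tableau, m, n):
    # Row k, column j of the output counts the entries <= m+k lying in columns of index j or higher.
    cols = sorted({j for (_, j) in tableau})
    return [[sum(1 for (_, j), e in tableau.items() if j >= col and e <= m + k)
             for col in cols]
            for k in range(1, n + 1)]

def clausen_row_matrix(tableau, m, n):
    # Row i, column k of the output counts the entries <= m+k lying in rows of index i or lower.
    rows = sorted({i for (i, _) in tableau})
    return [[sum(1 for (i, _), e in tableau.items() if i <= row and e <= m + k)
             for k in range(1, n + 1)]
            for row in rows]

# The tableau T^+ of Example 6.1 (m = n = 3).
T_plus = {(2, 3): 4,
          (3, 1): 4, (3, 2): 5, (3, 3): 5,
          (4, 1): 5, (4, 2): 6, (4, 3): 6,
          (5, 1): 6}
assert clausen_column_matrix(T_plus, 3, 3) == [[2, 1, 1], [5, 3, 2], [8, 5, 3]]
assert clausen_row_matrix(T_plus, 3, 3) == [[1, 1, 1], [2, 4, 4], [2, 5, 7], [2, 5, 8]]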

It is clear that if \(T'^+\) is obtained from \(T^+\) by permuting entries in the same column, then \(C(T'^+)=C(T^+)\), hence the Clausen column matrix can be defined for the column tabloids, the equivalence classes of tableaux for permutations of entries within columns. Analogously, if \(T'^+\) is obtained from \(T^+\) by permuting entries in the same row, then \(R(T'^+)=R(T^+)\), and hence the Clausen row matrix can be defined for the row tabloids, the equivalence classes of tableaux for permutations of entries within rows.

The Clausen column preorder \(\prec _c\) on the set of tableaux of the same skew shape is defined as follows. Let \(T^+\) and \(T'^+\) be of the shape \((\lambda ^+/ \mu )'\), \(C(T^+)=(c_{jk})\) and \(C(T'^+)=(c'_{jk})\). Then \(T^+\prec _c T'^+\) if and only if \(C(T^+)=C(T'^+)\), or for some j and k we have \(c_{il}=c'_{il}\) for all \(i>j\) and all \(l=1, \ldots ,n\); \(c_{jl}=c'_{jl}\) for all \(l<k\) and \(c_{jk}<c'_{jk}\). If \(T^+\prec _c T'^+\) and \(C(T^+)\ne C(T'^+)\), then we write \(T^+<_c T'^+\).

The Clausen row preorder \(\prec _r\) on the set of tableaux of the same skew shape is defined as follows. Let \(T^+\) and \(T'^+\) be of the shape \((\lambda ^+/ \mu )'\), \(R(T^+)=(r_{ik})\) and \(R(T'^+)=(r'_{ik})\). Then \(T^+\prec _r T'^+\) if and only if \(R(T^+)=R(T'^+)\), or for some i and k we have \(r_{jl}=r'_{jl}\) for all \(j<i\) and all \(l=1, \ldots n\); \(r_{il}=r'_{il}\) for all \(l<k\) and \(r_{ik}<r'_{ik}\). If \(T^+\prec _r T'^+\) and \(R(T^+)\ne R(T'^+)\), then we write \(T^+<_r T'^+\).

Lemma 6.2

The restrictions of the Clausen preorders \(\prec _c\) and \(\prec _r\) to the set of semistandard tableaux of the skew shape \((\lambda ^+/ \mu )'\) are linear orders. Consequently, the restrictions of \(\prec _c\) and \(\prec _r\) to the set of Littlewood–Richardson tableaux of the same shape are linear orders.

Proof

Let \(T^+\) and \(T'^+\) be semistandard tableaux of the shape \((\lambda ^+/ \mu )'\) with entries in the set \(\{m+1, \ldots , m+n\}\) such that \(T^+\prec _c T'^+\) and \(T'^+\prec _c T^+\). Then \(C(T^+)=C(T'^+)\), and since the entries in all columns are strictly increasing, we infer that \(T^+=T'^+\) and obtain an order on semistandard tableaux. Since \(\prec _c\) is a linear order on tableaux with different Clausen matrices, the claim for semistandard tableaux and Littlewood–Richardson tableaux follows. Analogous arguments work for the Clausen preorder \(\prec _r\). \(\square \)

6.2 Linear independence of even-primitive vectors

Assume that \(\lambda \) is a hook partition and \(\mu <\lambda \). Denote by \(LR((\lambda ^+)'/ \mu ',\nu )\) the set of all Littlewood–Richardson tableaux \(T^+\) of the shape \((\lambda ^+)'/ \mu '\) and the content \((0|\nu )\), where \(\nu \) is a partition containing \(\omega \).

Lemma 6.3

Assume that \(T^+\) is semistandard and shifted Yamanouchi. Let \(i_1\) and \(i_2\) be symbols appearing in the same column of \(T^-\), say in its \(j_1\)th and \(j_2\)th rows, respectively, where \(j_1<j_2\). If the corresponding symbol \(m+j_1\) appears in the \(k_1\)th row of \(T^+\) and the corresponding symbol \(m+j_2\) appears in the \(k_2\)th row of \(T^+\), then \(k_1<k_2\).

Proof

Assume that the symbols \(i_1\) and \(i_2\) appear in positions \((j_1, l)\) and \((j_2, l)\) of \(T^-\), respectively. Corresponding to this, symbols \(m+j_1\) and \(m+j_2\) appear at positions \((k_1,i_1)\) and \((k_2,i_2)\) in \(T^+\). Since \(T^+\) is shifted Yamanouchi, we infer that \(k_1\le k_2\), and if \(k_1=k_2\), then \(i_2<i_1\). If \(k_1=k_2\), then \(T^+\) semistandard implies \(m+j_2\le m+j_1\), which is a contradiction. Therefore, \(k_1<k_2\). \(\square \)

In the representation theory of Schur algebras, see for example Sect. 2.4 and 2.5 of [21], an important role is played by bideterminants and their straightening formula stating that every bideterminant is a linear combination of bideterminants based on a pair of semistandard tableaux of the same shape.

An analogous result is valid in the setting of the fourfold (or letter-place) algebra of [7]. A particular case of the straightening formula (Theorem 8 of [7]) states that every bipermanent is a linear combination of bipermanents based on a pair of standard tableaux.

In our case, instead of a pair of (semi)standard tableaux, we deal with a pair of related tableaux \((T^+, T^-)\) and the basis we construct corresponds to the case when \(T^+\) is semistandard, and \(T^-\) is anti-semistandard—see below. As a particular case of these pairs, we obtain Littlewood–Richardson tableaux.

Definition 6.4

A tableau is called anti-semistandard if the entries in its rows are strictly decreasing from left to right and entries in its columns are weakly decreasing from top to bottom.
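
A minimal Python sketch (not part of the paper) of this condition follows, for a skew tableau stored as a dictionary mapping a cell (row, column) to its entry; as a check, the tableau \(T^-\) of Example 5.6 passes it, in agreement with Proposition 6.7 below.

def is_anti_semistandard(tableau):
    # Rows strictly decrease from left to right, columns weakly decrease from top to bottom.
    rows_ok = all(tableau[(i, j)] > tableau[(i, j + 1)]
                  for (i, j) in tableau if (i, j + 1) in tableau)
    cols_ok = all(tableau[(i, j)] >= tableau[(i + 1, j)]
                  for (i, j) in tableau if (i + 1, j) in tableau)
    return rows_ok and cols_ok

# The tableau T^- of Example 5.6.
T_minus = {(1, 4): 3, (1, 5): 1,
           (2, 3): 3, (2, 4): 2, (2, 5): 1,
           (3, 2): 3, (3, 3): 2, (3, 4): 1}
assert is_anti_semistandard(T_minus)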

Lemma 6.5

If \(T^+\) is semistandard, then the entries in each row of \(T^-\) are strictly decreasing from left to right.

Proof

Let \([j,l]\) and \([j,l+1]\) be two consecutive entries in the jth row of \({\mathcal {D}}^-\), and \([k_1,i_1]\) and \([k_2,i_2]\) be the corresponding entries in \({\mathcal {D}}^+\), such that \(t^+_{k_1,i_1}=m+j\) is the lth appearance and \(t^+_{k_2, i_2}=m+j\) is the \((l+1)\)st appearance of \(m+j\) in \(w_{sh}(T)\). This implies \(k_1\le k_2\), and if \(k_1=k_2\), then \(i_1>i_2\).

Assume now that \(k_1<k_2\) and \(i_1\le i_2\). Then the position \([k_2,i_1]\) belongs to \({\mathcal {D}}^+\), and \(T^+\) semistandard implies \(t^+_{k_1,i_1}=m+j < t^+_{k_2,i_1} \le t^+_{k_2,i_2} =m+j\), which is a contradiction. Therefore, \(i_1>i_2\) and this shows that entries in the rows of \(T^-\) are strictly decreasing from left to right. \(\square \)

Lemma 6.6

If \(T^+\) is semistandard and shifted Yamanouchi, then \(T^-\) is anti-semistandard.

Proof

By Lemma 6.5, the entries in the rows of \(T^-\) are strictly decreasing from left to right.

Next, let [jl] and \([j+1,l]\) be two consecutive entries in the lth column of \({\mathcal {D}}^-\) and \([k_1,i_1]\) and \([k_2,i_2]\) be the corresponding entries in \({\mathcal {D}}^+\) such that \(t^+_{k_1,i_1}=m+j\) is the lth appearance of \(m+j\) and \(t^+_{k_2, i_2}=m+j+1\) is the lth appearance of \(m+j+1\) in \(w_{sh}(T)\). Then \(k_1< k_2\) by Lemma 6.3.

We claim that \(i_1\ge i_2\). Assume to the contrary that \(i_1<i_2\) and assume that the index \(i_{2}'\) is maximal such that \([k_2,i_2']\in {\mathcal {D}}^+\) and \(t^+_{k_2, i_2'}=m+j+1\). Then the position \([k_1, i_2']\) belongs to the diagram \({\mathcal {D}}^+\). Since \(T^+\) is semistandard, we must have \(t^+_{k_2,a}=m+j+1\) for \(a=i_2, \ldots , i_2'\); \(k_2=k_1+1\) and \(t^+_{k_1, b}=m+j\) for each \(b=i_1, \ldots , i_2'\). The initial part of \(w_{sh}(T)\) that ends at the position \([k_1,i_1+(i_2'-i_2+1)]\in {\mathcal {D}}^+\) has the last symbol \(m+j\), and it contains the same number \(l-(i_2'-i_2+1)\) of symbols \(m+j\) as \(m+j+1\). Removing this last symbol \(m+j\) therefore yields an initial part containing more symbols \(m+j+1\) than \(m+j\), which contradicts the assumption that \(w_{sh}(T)\) is a lattice word.

Therefore, entries in the columns of \(T^-\) are weakly decreasing from top to bottom and \(T^-\) is anti-semistandard. \(\square \)

Proposition 6.7

Assume \(T^+\) is semistandard. Then \(T^+\) is shifted Yamanouchi if and only if \(T^-\) is anti-semistandard.

Proof

The necessary condition is established in Lemma 6.6.

For the sufficient condition, assume that \(T^+\) is semistandard and \(T^-\) is anti-semistandard. Let \([j_1,l]\) and \([j_2,l]\) belong to \({\mathcal {D}}^-\) and \(j_1<j_2\). Then \(t^-_{j_1,l}=i_1\ge i_2=t^-_{j_2,l}\) since \(T^-\) is anti-semistandard. Denote by \([k_1,i_1]\) and \([k_2,i_2]\) the corresponding elements of \({\mathcal {D}}^+\) such that \(t^+_{k_1,i_1}=m+j_1\) and \(t^+_{k_2,i_2}=m+j_2\).

If \(k_1=k_2\), then \(i_1>i_2\) and \(m+j_1\le m+j_2\) because \(T^+\) is semistandard. This means that the smaller entry \(m+j_1\) appears to the right of the bigger entry \(m+j_2\) in the same row of \(T^+\). Hence the lattice condition in \(w_{sh}(T)\) is satisfied for this pair.

If \(k_1>k_2\), then \([k_1,i_2]\in {\mathcal {D}}^+\) and \(T^+\) semistandard implies \(m+j_2=t^+_{k_2,i_2}<t^+_{k_1,i_2}\le t^+_{k_1,i_1}=m+j_1\), which is a contradiction.

If \(k_1<k_2\), then the smaller entry \(m+j_1\) appears in the higher row of \(T^+\) than the bigger entry \(m+j_2\), hence the lattice condition in \(w_{sh}(T)\) is satisfied for this pair.

Therefore, \(T^+\) is shifted Yamanouchi. \(\square \)

Definition 6.8

If \(T^+\) is semistandard and the corresponding \(T^-\) is anti-semistandard, then \(T^+\) is called marked. The set of all marked tableaux \(T^+\) of the shape \((\lambda ^+/ \mu )'\) and the content \((0|\nu /\omega )\) is denoted by \(M((\lambda ^+)'/ \mu ',\nu /\omega )\).

We analyze tableaux \(T'\) appearing in the expression \(\tau T^+\) for \(T^+\) marked.

Lemma 6.9

Assume \(T^+\in M((\lambda ^+)'/ \mu ',\nu /\omega )\). If \(T'^+\) appears in the expression for \(\tau ^- T^+\), then \(T'^+\prec _c T^+\) and \(T'^+\prec _r T^+\). The tableau \(T^+\) appears in \(\tau ^- T^+\) with the coefficient one.

Proof

Denote the entries in the lth column of \({\mathcal {D}}^-\), listed from top to bottom by \([j_1,l], \ldots , [j_s,l]\) (so that \(j_t=j_1+t-1\) for each \(1\le t \le s\)), and by \(\sigma \in X^-\) a permutation of these entries. Also, denote by \([k_1, i_1], \ldots , [k_s,i_s]\) the corresponding entries in \({\mathcal {D}}^+\). Then the entry at the position \([k_t,i_t]\) in the tableau \(T^+\) is the lth appearance of \(m+j_t\) in \(w_{sh}(T)\), for each \(t=1, \cdots , s\). Then \(i_1\ge i_2\ge \cdots \ge i_s\) since \(T^-\) is anti-semistandard. Also, Lemma 6.3 shows \(k_1<k_2< \cdots <k_s\).

Let \(\sigma \in X^-\) be arbitrary such that \(\sigma \ne 1\), and let k be the smallest index such that there is a position \([k,i]\in {\mathcal {D}}^+\) that is moved by \(\sigma \). Then those entries in the kth row of \(T^+\) that are replaced in \(T'^+=\sigma T^+\) (and there is at least one) are replaced by entries that are higher than the original entries (because \(k_t<k_u\) implies \(t^+_{k_t,i_t}=m+j_t<m+j_u=t^+_{k_u,i_u}\)). Therefore, \(T'^+<_r T^+\) and the tableau \(T^+\) appears in \(\tau ^- T^+\) with the coefficient one.

If \(\sigma \in X^-\) only permutes entries in the same column of \(T^+\), then \(C(T^+)=C(T'^+)\), where \(T'^+=\sigma T^+\). In this case \(T'^+\prec _c T^+\). Otherwise, let i be the highest index such that there is an entry in ith column of \(T^+\) that is moved by \(\sigma \) to a different column. Then all entries in ith column of \(T^+\) either remain the same or (on at least one occasion) are replaced in \(T'^+\) by entries that are higher than the original entries (because \(i_t > i_u\) implies \(t^+_{k_t,i_t}=m+j_t<m+j_u=t^+_{k_u,i_u}\)). Therefore, \(T'^+<_c T^+\). \(\square \)

Because of the requirements we have imposed on tableaux T and \(T^{\mathrm {opp}}\), it is clear that there is a one-to-one correspondence between T and \(T^+\), and between \(T^{\mathrm {opp}}\) and \(T^-\). We denote \({\overline{\rho }}(T)={\overline{\rho }}(T^+)\), \({\overline{\rho }}(T^{\mathrm {opp}})={\overline{\rho }}(T^-)\), \({\overline{\rho }}(\tau T)={\overline{\rho }}(\tau T^+)\) and so on. To a tableau \(T^+\), we have assigned a pair of multi-indices (I|J) and vectors \(v_{I|J}\), \(\rho _{I|J}\), and \({\overline{\rho }}_{I|J}\). Let us denote \(cont(T^+)=cont(I|J)\) and \(v^+_T=v_{I|J}\). It is clear that \(cont(I|J)=cont(K|L)\) implies \(v_{I|J}=v_{K|L}\). Since all tableaux \(T'^+\) appearing in the expression for \(\tau T^+\) have the same content as \(T^+\), we have \(v^+_{T'}=v^+_T\) and the vector \(v^+_T\rho (\tau T^+)\), which is a linear combination of primitive vectors \(\pi _{I|L}\), is an even-primitive vector. Analogously, \(v^+_T{\overline{\rho }}(\tau T^+)\), which is a linear combination of primitive vectors \({\overline{\pi }}_{I|L}\), is an even-primitive vector.

Theorem 6.10

The vectors \({\overline{v}}(T^+)=v^+_T{\overline{\rho }}(\tau T^+)\) for \(T^+\in M((\lambda ^+)'/ \mu ',\nu /\omega )\) are linearly independent over the ground field K.

Proof

It is enough to show that the vectors \({\overline{\rho }}(\tau T^+)\) for \(T^+\in M((\lambda ^+)'/ \mu ',\nu /\omega )\) are linearly independent. We show that \(\tau T^+\) has \(T^+\) as its leading element for the preorder \(\prec _r\) and that all other semistandard tableaux \(Q^+\) appearing in the expression for \(\tau T^+\) satisfy \(Q^+<_r T^+\).

Consider \(\sigma \in X^+\). If \(\sigma \) interchanges only identical entries in \(T^+\), then \(\sigma T^+ = T^+\). If \(\sigma T^+ \ne T^+\), then \(\sigma T^+\) is not semistandard and \(\sigma T^+<_r T^+\). It follows from Lemma 6.9 that all nontrivial summands \(T'^+\) in \(\tau ^- T^+\) (those \(T'^+=\sigma T^+\) for \(\sigma \in X^-\) such that \(\sigma \ne 1\)) satisfy \(T'^+<_r T^+\). Since \(\sigma \in X^+\) permutes the rows of \(T^+\) and \(T'^+\), all tableaux \(T''^+\) appearing as summands in \(\tau ^+ T'^+\) for \(T'^+<_r T^+\) also satisfy \(T''^+ <_r T^+\). Therefore, all summands \(T''^+\) of \(\tau T^+\) are either equal to \(T^+\) or satisfy \(T''^+<_r T^+\) (in which case \(T''^+\) is not semistandard). In any case, \(T''^+\prec _r T^+\).

According to Theorem 4.4 of [24] (see also Sect. 5 of [23] or Sect. 5.7 of [4]), there is a basis for bipermanents of a fixed skew shape and content given by bipermanents corresponding to semistandard tableaux. This statement is a generalization of the classical result of [7].

We can express \({\overline{\rho }}(\tau T^+)\) as

$$\begin{aligned} {\overline{\rho }}(\tau T^+)=\sum _{Q^+ \mathrm {semistandard}} \alpha _{Q^+} {\overline{\rho }}(\tau ^+ Q^+), \end{aligned}$$

a linear combination of the basis elements \({\overline{\rho }}(\tau ^+ Q^+)\), where \(Q^+\) are semistandard tableaux of the shape \((\lambda ^+/ \mu )'\). Due to the above observations, \(\alpha _{T^+}=1\) and \(\alpha _{Q^+}\ne 0\) implies \(Q^+<_r T^+\). Therefore, the terms in this expression for \({\overline{\rho }}(\tau T^+)\) are ordered with respect to \(\prec _r\), and the highest term is \({\overline{\rho }}(\tau ^+ T^+)\).

Since \(\prec _r\) restricted to semistandard tableaux is a linear order by Lemma 6.2, we conclude that the expressions \({\overline{\rho }}(\tau T^+)\) for \(T^+\in M((\lambda ^+)'/ \mu ',\nu /\omega )\) are linearly independent over the ground field K of characteristic zero. \(\square \)

Remark 6.11

The above theorem could be proved analogously using the ordering \(\prec _c\) instead of \(\prec _r\).

6.3 The case of irreducible \(H^0_G(\lambda )\)

The indecomposable universal highest weight modules \(V(\lambda )\) over classical Lie superalgebras were investigated by Kac in [11, 12]. He proved that \(V(\lambda )\) is irreducible if and only if the weight \(\lambda \) is typical. Using the contravariant duality given in Proposition 5.8 of [28] that is induced by the supertransposition \(\tau : c_{ij}\mapsto (-1)^{|i|(|j|+1)} c_{ji}\), we infer that the induced module \(H^0_G(\lambda )\) is irreducible if and only if \(\lambda \) is typical.

By Theorem 3.20 of [1], a weight \((\mu |\nu )\) is polynomial if and only if it is given by a hook partition \(\lambda \). If \(\lambda \) is a hook partition, then by Theorem 4.1 of [16], \(H^0_G(\lambda )\) is irreducible if and only if \(\lambda ^+_m\ge n\). In this case, the even-primitive vectors in \(H^0_G(\lambda )=L_G(\lambda )\) are in a bijective correspondence with Littlewood–Richardson tableaux \(LR(\lambda '/\mu ',\nu )\) as was mentioned earlier.

We show that if \(\lambda ^+_m\ge n\), then there is a bijective correspondence between \(M((\lambda ^+)'/ \mu ',\nu /\omega )\) and \(LR(\lambda '/ \mu ',\nu )\).

Lemma 6.12

If \(\lambda ^+_m \ge n\), then \(T^+\) semistandard implies T semistandard.

Proof

If \(T^+\) is semistandard, then the entries in T are increasing down each column. Assume that the last column of \(T^+\) has index \(i\le m\). If \(i<m\), then the second part of the tableau T (\(L^-_\mathrm{can}\) of shape \(\omega \)) splits off from the first part \(T^+\) of the tableau T, and T is automatically semistandard. Hence, assume \(i=m\). Then the lowest entry of this column appears in the \(\lambda ^+_i\)th row, and its value is at most \(m+n\). Since \(\lambda ^+_i\ge \lambda ^+_m\ge n\) and the entries in the ith column of \(T^+\) are increasing, any entry of \(T^+\) in the kth row and the ith column is at most \(m+k\). Since the entry in the kth row and the \((m+1)\)st column of T (if any) equals \(m+k\), we conclude that T is semistandard. \(\square \)

The following is an example in which \(T^+\) is semistandard but T is not.

Example 6.13

Let \(\lambda =(2,1|1,0)\), \(\mu =(1,0)\) and \(\nu =(2,1)\) in GL(2|2). Then \(T=\begin{matrix} &{}4&{}3\\ 3 \end{matrix}\) is not semistandard while \(T^+=\begin{matrix}&{}4\\ 3 \end{matrix}\) is semistandard.
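For concreteness, the semistandardness test used throughout this section can be made explicit. The following is a minimal sketch (ours, not part of the paper), assuming the usual convention that rows weakly increase from left to right and columns strictly increase from top to bottom (repeated entries in a row are allowed, cf. the proof of Lemma 6.17); this convention is consistent with Examples 6.13, 6.19 and 6.20. Tableaux are encoded as dictionaries {(row, column): entry} with 1-based indices, and the identifiers below are ours.

```python
# A minimal sketch (ours) of the semistandardness test, assuming the usual
# convention: rows weakly increase left to right, columns strictly increase
# top to bottom.  A tableau is a dictionary {(row, col): entry}.

def is_semistandard(T):
    for (r, c), val in T.items():
        right = T.get((r, c + 1))
        below = T.get((r + 1, c))
        if right is not None and right < val:    # row condition violated
            return False
        if below is not None and below <= val:   # column condition violated
            return False
    return True

# The tableaux of Example 6.13 (cells listed with their (row, column) positions):
T      = {(1, 2): 4, (1, 3): 3, (2, 1): 3}   # row 1 contains 4 followed by 3
T_plus = {(1, 2): 4, (2, 1): 3}
assert not is_semistandard(T)                # T is not semistandard
assert is_semistandard(T_plus)               # T^+ is semistandard
```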

Analogously to the map Rp defined in Sect. 5.3, we define a map Opp which sends each tableau T of shape \((\lambda / \mu )'\) to a tableau \(T^{\mathrm {opp}}\) of shape \(\nu \) as follows.

Definition 6.14

The tableau \(T^{\mathrm {opp}}=Opp(T)\) is obtained in the following way. When reading the word w(T), if the symbol \(w_s=m+i\) appears for the jth time, then \(t^{\mathrm {opp}}_{i,\lambda ^-_i+j}=z_s\), where \(z=z(T)\). The map \(Oppos:{\mathcal {D}}\rightarrow {\mathcal {D}}^{\mathrm {opp}}\) is defined to correspond to this setup.

Example 6.15

In the setup of Example 5.5, we have \(T=\begin{matrix} &{}4&{}3\\ 3&{}5&{}4 \end{matrix}\), \(w(T)=34453\), \(z(T)=32321\), and \(T^{\mathrm {opp}}= \begin{matrix} 3&{}1\\ 2&{}3\\ 2 \end{matrix}\).
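Definition 6.14 can be implemented directly. The following is a minimal sketch (ours), assuming the reading conventions suggested by Examples 6.15 and 6.18: the word w(T) reads the rows of T from top to bottom, each row from right to left, and z(T) records the column index of each letter read. We also assume that in the setup of Example 5.5 one has \(m=2\) and \(\lambda ^-=(0,0,0)\), so that the offsets \(\lambda ^-_i\) of Definition 6.14 vanish; under these assumptions the sketch reproduces \(T^{\mathrm {opp}}\) of Example 6.15. The identifiers are ours.

```python
# A sketch (ours) of the map Opp from Definition 6.14.  Tableaux are encoded as
# dictionaries {(row, col): entry} with 1-based indices; w(T) is read row by row
# from top to bottom, each row from right to left, and z(T) records the column
# of each letter (a convention inferred from Examples 6.15 and 6.18).

def reading_word(T):
    """Return the list of pairs (entry, column) in the reading order of w(T)."""
    pairs = []
    for r in sorted({row for (row, _) in T}):
        for c in sorted((col for (row, col) in T if row == r), reverse=True):
            pairs.append((T[(r, c)], c))
    return pairs

def opp(T, m, lam_minus):
    """The j-th appearance of the symbol m+i in w(T) is placed at the position
    (i, lam_minus[i-1] + j) of T^opp and receives the value z_s (its column)."""
    appearances = {}
    T_opp = {}
    for entry, col in reading_word(T):
        i = entry - m
        appearances[i] = appearances.get(i, 0) + 1
        T_opp[(i, lam_minus[i - 1] + appearances[i])] = col
    return T_opp

# Example 6.15 (m = 2; lambda^- assumed to be (0,0,0)):
T = {(1, 2): 4, (1, 3): 3, (2, 1): 3, (2, 2): 5, (2, 3): 4}
assert [e for e, _ in reading_word(T)] == [3, 4, 4, 5, 3]      # w(T)
assert [c for _, c in reading_word(T)] == [3, 2, 3, 2, 1]      # z(T)
assert opp(T, m=2, lam_minus=[0, 0, 0]) == {
    (1, 1): 3, (1, 2): 1,    # first  row of T^opp:  3 1
    (2, 1): 2, (2, 2): 3,    # second row of T^opp:  2 3
    (3, 1): 2,               # third  row of T^opp:  2
}
```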

The previous example shows that we cannot expect in general that the tableau \(T^-\) is a subtableau of \(T^{\mathrm {opp}}\). However, this property is important in connection with Littlewood–Richardson tableaux of shape \((\lambda / \mu )'\).

Definition 6.16

Tableaux T and \(T^{\mathrm {opp}}\) are called behaved if the following equivalent diagrams are commutative, where the vertical arrows are the natural inclusions; in particular, for behaved tableaux the tableau \(T^-\) is a subtableau of \(T^{\mathrm {opp}}\).

The next lemma shows that there is a large class of tableaux that are behaved.

Lemma 6.17

If a tableau T as above is semistandard, then it is behaved.

Proof

Assume T is semistandard and k is such that \(\lambda ^-_k>0\). Since the first k entries in the \((m+1)\)st column of T are \(m+1, \ldots , m+k\), the semistandardness of T implies that the first \(\lambda ^-_k\) appearances of the symbol \(m+k\) are in the kth row of the second part of T, corresponding to \(\lambda ^-\). These appearances then transfer to the kth row of the second part of \(T^{\mathrm {opp}}\) corresponding to \(\omega \). Since this is true for all rows of \(\lambda ^-\), the tableau T is behaved. \(\square \)

Example 6.18

In the setup of Example 5.6, \(T= \begin{matrix}&{}&{}&{}4&{}4&{}4\\ &{}&{}4&{}5&{}5\\ 4&{}5&{}5&{}6\\ 5&{}6&{}6\\ 6\end{matrix}\), \(w(T)=44455465546656\) and \(T^{\mathrm {opp}}= \begin{matrix}6&{}5&{}4&{}3&{}1\\ 5&{}4&{}3&{}2&{}1\\ 4&{}3&{}2&{}1\end{matrix}\). Hence T is behaved.

The following examples show that there is no particular correlation between the properties of \(T^+\) being semistandard and of \(T^+\) being shifted Yamanouchi (even for behaved tableaux).

Example 6.19

Let \(\lambda =(3,2,2|1,1,0)\), \(\mu =(2,1)\) and \(\nu =(3,2,1)\) in GL(3|3). Let \(T=\begin{matrix} &{}&{}6&{}4\\ &{}4&{}4&{}5\\ 5&{} \end{matrix}\). Then \(T^+=\begin{matrix} &{}&{}6\\ &{}4&{}4\\ 5&{} \end{matrix}\), \(T^{\mathrm {opp}}= \begin{matrix} 4&{}3&{}2\\ 4&{}1\\ 3 \end{matrix}\) and \(T^-= \begin{matrix} &{}3&{}2\\ &{}1\\ 3 \end{matrix}\), showing that T is behaved. Then \(T^+\) is not semistandard, but \(T^+\) is shifted Yamanouchi since the second appearance of symbol 4 is before the second appearance of 5. However, the tableau T is not Yamanouchi because symbol 6 appears before symbol 5.

Example 6.20

Let \(\lambda =(2,2|1,0,0)\), \(\mu =(0,0)\) and \(\nu =(2,1,1,1)\) in GL(2|4). Let \(T=\begin{matrix} 4&{}6&{}4\\ 5&{}7&{} \end{matrix}\). Then \(T^+=\begin{matrix} 4&{}6\\ 5&{}7 \end{matrix}\), \(T^{\mathrm {opp}}= \begin{matrix} 3&{}1\\ 1&{}\\ 2&{}\\ 2&{} \end{matrix}\) and \(T^-= \begin{matrix} &{}1\\ 1&{}\\ 2&{}\\ 2&{} \end{matrix}\) showing that T is behaved. Then \(T^+\) is semistandard, and \(T^+\) is not shifted Yamanouchi because 6 appears before 5.

Nevertheless, there is the following relationship.

Lemma 6.21

If T is semistandard and \(T^+\) is shifted Yamanouchi, then T is Yamanouchi.

Proof

To show that T is Yamanouchi, we need to verify the following. If the entries \(i_1\) and \(i_2\) appear at the positions \((j_1, l)\) and \((j_2, l)\) of \(T^{\mathrm {opp}}\) such that \(j_1<j_2\) and \(l\le \lambda ^-_{j_1}\) (meaning that the position \((j_1,l)\) belongs to the \(\lambda ^-\)-part of \(T^{\mathrm {opp}}\)), then the lth appearance of \(m+j_1\) in w(T) comes before the lth appearance of \(m+j_2\) in w(T).

Since T is semistandard, its first \(j-1\) rows can only contain entries that do not exceed \(m+j-1\). Since \(l\le \lambda ^-_j\), the first l entries in the jth row of T, read from right to left, are equal to \(m+j\). Therefore, all the entries that are larger than \(m+j\) appear only after the lth appearance of the symbol \(m+j\) in w(T). Applying this with \(j=j_1\), the lth appearance of \(m+j_2>m+j_1\) comes after the lth appearance of \(m+j_1\), as required. \(\square \)

Proposition 6.22

If \(\lambda ^+_m\ge n\), then \(T\in LR(\lambda '/ \mu ',\nu )\) if and only if \(T^+\in M((\lambda ^+)'/ \mu ',\nu /\omega )\).

Proof

It is clear that T semistandard implies \(T^+\) semistandard. Since T is behaved by Lemma 6.17, the tableau \(T^-\) is a subtableau of \(T^{\mathrm {opp}}\). Therefore, T Yamanouchi implies \(T^+\) shifted Yamanouchi. This shows that \(T\in LR(\lambda '/ \mu ',\nu )\) implies \(T^+\in M((\lambda ^+)'/ \mu ',\nu /\omega )\).

Conversely, if \(T^+\) is semistandard, then T is semistandard by Lemma 6.12. Then Lemma 6.21 shows that \(T^+\) shifted Yamanouchi implies T Yamanouchi. Therefore, \(T^+\in M((\lambda ^+)'/ \mu ',\nu /\omega )\) implies \(T\in LR(\lambda '/ \mu ',\nu )\). \(\square \)

The following example shows that the assumption \(\lambda ^+_m\ge n\) cannot be removed from Proposition 6.22.

Example 6.23

Let \(\lambda =(2,2|1,1,0)\), \(\mu =(0,0)\) and \(\nu =(2,2,1,1)\) in GL(2|4). Let \(T=\begin{matrix} 4&{}6&{}4\\ 5&{}7&{}5 \end{matrix}\). Then \(T^+=\begin{matrix} 4&{}6\\ 5&{}7 \end{matrix}\), \(T^{\mathrm {opp}}= \begin{matrix} 3&{}1\\ 3&{}1\\ 2&{}\\ 2&{} \end{matrix}\) and \(T^-= \begin{matrix} &{}1\\ &{}1\\ 2&{}\\ 2&{} \end{matrix}\) showing that T is behaved. Then \(T^+\) is semistandard and shifted Yamanouchi, but T is neither semistandard nor Yamanouchi (because 6 appears before 5).

If \(\lambda \) is a hook partition such that \(\lambda ^+_m\ge n\), then a complete description of \(G_{ev}\)-primitive vectors in \(H^0_G(\lambda )\) is given in the following theorem.

Theorem 6.24

Assume \(\lambda \) is a hook partition and \(H^0_G(\lambda )\) is irreducible. The elements \({\overline{v}}(T^+)=v^+_T{\overline{\rho }}(\tau T^+)\), for \(T^+\in M((\lambda ^+)'/ \mu ',\nu /\omega )\), form a basis of the space of even-primitive vectors of weight \((\mu |\nu )\) in \(H^0_G(\lambda )\).

Proof

If \(H^0_G(\lambda )\) is irreducible, then, by Theorem 6.11 of [1], the multiplicity of even-primitive vectors in \(H^0_G(\lambda )=L_G(\lambda )\) of weight \((\mu |\nu )\) equals the Littlewood–Richardson coefficient \(C^{\lambda '}_{\mu ',\nu }\). This coefficient equals the cardinality of the set \(LR(\lambda '/ \mu ',\nu )\) by Proposition 5.3 of [5]. By Proposition 6.22, this is the same as the cardinality of the set \(M((\lambda ^+)'/ \mu ',\nu /\omega )\).

Therefore, using Theorem 6.10, we conclude that vectors \({\overline{v}}(T^+)\) for \(T^+\) from \( M((\lambda ^+)'/ \mu ',\nu /\omega )\) form a basis of all even-primitive vectors of \(H^0_G(\lambda )\) of the weight \((\mu |\nu )\). \(\square \)

Theorem 6.24 gives a satisfactory description of even-primitive vectors in \(H^0_G(\lambda )\) in the case when \(\lambda \) is a hook partition and \(\lambda _m\ge n\). In the next section, we remove the condition \(\lambda _m \ge n\) and, together with the induced supermodule \(H^0_G(\lambda )\), we also consider its subsupermodule \(\nabla _G(\lambda )\).

According to [29], over a ground field of characteristic different from 2, we have \(\nabla _G(\lambda )=H^0_G(\lambda )\) if and only if \(\lambda _m\ge n\) and \(\lambda _{m+n}\ge 0\). If \(char(K)=0\), then this implies \(H^0_G(\lambda )=L(\lambda )\) is irreducible, the case investigated above.

Finally, we can describe even-primitive vectors in an arbitrary induced supermodule \(H^0_G(\lambda )\) using the following observation.

Proposition 6.25

Let \(\lambda =(\lambda _1, \ldots , \lambda _m; \lambda _{m+1}, \ldots , \lambda _{m+n})\) be a dominant weight. Denote \(\alpha ^+=(1, \ldots , 1; 0, \ldots , 0)\) and \(\alpha ^-=(0, \ldots , 0;1, \ldots , 1)\). Then there are nonnegative integers \(a^+\), \(a^-\) such that for the weight \(\mu =\lambda +a^+ \alpha ^+ + a^- \alpha ^-\) we have \(\nabla (\mu )=H^0_G(\mu )\) and \(H^0_G(\lambda )\simeq H^0_G(\mu ) \otimes D^+(1, \ldots , m)^{-a^+} \otimes D^-(m+1, \ldots , m+n)^{-a^-}\) as \(G_{ev}\)-supermodules.

Proof

Choose \(a^+\) and \(a^-\) such that \(\mu _m\ge n\) and \(\mu _{m+n}\ge 0\). Then \(\nabla (\mu )=H^0_G(\mu )\) by [29] or by direct observation that \(D^+(1, \ldots ,m)^n \Lambda (Y)\) is polynomial. We have \(H^0_{G_{ev}}(\mu )\simeq H^0_{G_{ev}}(\lambda )\otimes D^+(1, \ldots , m)^{a^+} \otimes D^-(m+1, \ldots , m+n)^{a^-}\) as \(G_{ev}\)-supermodules. Since \(H^0_G(\mu )\simeq H^0_{G_{ev}}(\mu )\otimes \Lambda (Y)\) and \(H^0_G(\lambda )\simeq H^0_{G_{ev}}(\lambda )\otimes \Lambda (Y)\) as \(G_{ev}\)-supermodules, the claim follows. \(\square \)

Therefore, even-primitive vectors in \(H^0_G(\lambda )\) are obtained by tensoring even-primitive vectors in \(H^0_G(\mu )\) (described in Theorem 6.24) with \(D^+(1, \ldots , m)^{-a^+} \otimes D^-(m+1, \ldots , m+n)^{-a^-}\).
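For illustration, consider the following hypothetical small case in GL(2|2) (the particular weight \(\lambda \) below is ours and is chosen only to make the shift explicit):

$$\begin{aligned} \lambda =(1,0\,|\,-2,-3), \qquad a^+=2, \quad a^-=3, \qquad \mu =\lambda +2\alpha ^+ +3\alpha ^- =(3,2\,|\,1,0), \end{aligned}$$

so that \(\mu _2=2\ge n=2\) and \(\mu _4=0\ge 0\). Hence \(\nabla (\mu )=H^0_G(\mu )\), and \(H^0_G(\lambda )\simeq H^0_G(\mu )\otimes D^+(1,2)^{-2}\otimes D^-(3,4)^{-3}\) as \(G_{ev}\)-supermodules.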

7 The supermodule \(\nabla (\lambda )\)

7.1 Schur superalgebras

The category of polynomial supermodules over GL(m|n) is equivalent to the category of supermodules over the corresponding Schur superalgebra S(m|n), see [1] or Sect. 2.2 of [22]. Under this equivalence, the supermodule \(\nabla _G(\lambda )\) is identified with the costandard module \(\nabla (\lambda )\) of S(m|n). Moreover, the category of S(m|n)-supermodules is semisimple; its simple supermodules \(L_S(\lambda )\) are parametrized by (m|n)-hook partitions \(\lambda \) and their characters are given by Hook Schur functions \(HS_{\lambda }(x,y)\). The function \(HS_{\lambda }(x,y)\) is given by summing certain monomials corresponding to (m|n)-semistandard tableaux. In particular, by Theorem 6.11 of [1], there is

$$\begin{aligned} HS_{\lambda }(x,y)= \sum _{\mu <\lambda } \sum _{\nu } C^{\lambda '}_{\mu ' \nu } S_{\mu }(x)S_{\nu }(y), \end{aligned}$$
(4)

where \(S_{\mu }(x)\) is the Schur function in the variables \(x_1, \ldots , x_m\) corresponding to \(c_{11}, \ldots , c_{mm}\), \(S_{\nu }(y)\) is the Schur function in the variables \(y_1, \ldots , y_n\) corresponding to \(c_{m+1, m+1}, \ldots , c_{m+n,m+n}\), and \(C^{\lambda '}_{\mu ' \nu }\) is the Littlewood–Richardson coefficient.

It follows that the multiplicity of even-primitive vectors of weight \((\mu |\nu )\) in \(\nabla (\lambda )\) is \(C^{\lambda '}_{\mu ' \nu }\). For more details, the reader is asked to consult [1] or [22].
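Since all multiplicities appearing here are Littlewood–Richardson coefficients, it may be useful to recall how such a coefficient can be evaluated in practice. The following brute-force sketch is ours and is meant only as an illustration (it is not the construction of this paper): it counts semistandard skew tableaux of a given shape and content whose reverse reading word (rows read from right to left, from top to bottom) is a lattice word, which is the standard combinatorial description of the sets \(LR(\lambda /\mu ,\nu )\); conventions for the reading word vary slightly in the literature.

```python
# A brute-force sketch (ours, for illustration only) of the Littlewood-Richardson
# coefficient C^{lam}_{mu,nu}: the number of semistandard fillings of the skew
# shape lam/mu with content nu whose reverse reading word is a lattice word.
from itertools import product

def lr_coefficient(lam, mu, nu):
    mu = list(mu) + [0] * (len(lam) - len(mu))
    cells = [(r, c) for r in range(len(lam)) for c in range(mu[r], lam[r])]
    if len(cells) != sum(nu):
        return 0
    count = 0
    for filling in product(range(1, len(nu) + 1), repeat=len(cells)):
        if any(filling.count(i + 1) != nu[i] for i in range(len(nu))):
            continue                                   # wrong content
        T = dict(zip(cells, filling))
        if any(T.get((r, c + 1), 10**9) < v or T.get((r + 1, c), 10**9) <= v
               for (r, c), v in T.items()):
            continue                                   # not semistandard
        # reverse reading word: rows from top to bottom, each row right to left
        word = [T[rc] for rc in sorted(cells, key=lambda rc: (rc[0], -rc[1]))]
        counts = [0] * (len(nu) + 1)
        lattice = True
        for letter in word:
            counts[letter] += 1
            if letter > 1 and counts[letter] > counts[letter - 1]:
                lattice = False
                break
        count += lattice
    return count

# A classical check: the coefficient of s_{(3,2,1)} in s_{(2,1)} * s_{(2,1)} is 2.
assert lr_coefficient((3, 2, 1), (2, 1), (2, 1)) == 2
```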

7.2 \(G_{ev}\)-primitive vectors in \(\nabla (\lambda )\)

From now on, we assume that an (m|n)-hook weight \(\lambda \) is fixed and that (I|J) is admissible of length k and content \(cont(I|J)=(\iota |\kappa )\), such that \(\mu =(\mu _1, \ldots , \mu _m)=\lambda ^+-\iota \) and \(\nu =(\nu _{m+1}, \ldots , \nu _{m+n})=\lambda ^-+\kappa \) are dominant partitions. Denote by \(P(\mu |\nu )=P_k(\mu |\nu )\) the set of even-primitive vectors in \(H^0_{G_{ev}}(\lambda )\otimes \wedge ^k Y\) (viewed as a subspace of \(H^0_G(\lambda )\)) of weight \((\mu |\nu )\) that are linear combinations of elements \({\overline{\pi }}_{K|L}\) for admissible (K|L) of content \((\iota |\kappa )\).

Previously we have constructed the even-primitive vector \({\overline{v}}(T^+)\) in \(H^0_G(\lambda )\) corresponding to a tableau \(T^+\), such that its weight is \((\mu |\nu )\) as above. Denote by k the length of \(T^+\), which is the cardinality of diagram \({\mathcal {D}}^+\). Theorem 5.2 shows that the vector \({\overline{v}}(T^+)=v^+_T{\overline{\rho }}(\tau T^+)\) is an even-primitive vector of weight \((\mu |\nu )\) in \(P_k(\mu |\nu )\). Since this construction is based on the tableau \(T^+\), the weight \((\mu |\nu )\) of the vector \({\overline{v}}(T^+)\) is polynomial. In this section, we show that the vectors \({\overline{v}}(T^+)\) for \(T^+\in M((\lambda ^+)'/\mu ', \nu /\omega )\) form a basis of all \(G_{ev}\)-primitive vectors of weight \((\mu |\nu )\) in the supermodule \(\nabla (\lambda )\).

If \(\lambda _m<n\), then \(H^0_G(\lambda )\) is not a polynomial supermodule, and there is an even-primitive vector in \(H^0_G(\lambda )\) that cannot be of the form \({\overline{v}}(T^+)\). However, every even-primitive vector from \(P(\mu |\nu )\) has polynomial weight \((\mu |\nu )\) and belongs to \(\nabla (\lambda )\).

We have already addressed the question raised in the comments after Theorem 4.4 of [14] and confirmed that every even-primitive vector in \(H^0_G(\lambda )\) is a linear combination of elements \({\overline{\pi }}_{K|L}\). We generalize this statement to even-primitive vectors in \(\nabla (\lambda )\) as follows.

Theorem 7.1

Every even-primitive vector of \(\nabla (\lambda )\) of weight \((\mu |\nu )\) belongs to \(P(\mu |\nu )\). A basis of even-primitive vectors in \(\nabla (\lambda )\) of the weight \((\mu |\nu )\) consists of vectors \({\overline{v}}(T^+)\) for \(T^+\in M((\lambda ^+)'/\mu ', \nu /\omega )\).

Proof

Denote by \({\tilde{\lambda }}\) and \({\tilde{\mu }}\) the partitions such that \({\tilde{\lambda }}^+_i=\lambda ^+_i +n\) and \({\tilde{\mu }}_i=\mu _i+n\) for every \(1\le i\le m\), and \({\tilde{\lambda }}^-_j=\lambda ^-_j\) for every \(1\le j\le n\). Then the skew shape \(({\tilde{\lambda }}^+)'/{\tilde{\mu }}'\) and its diagram \(\tilde{{\mathcal {D}}}^+\) are obtained by shifting the skew shape \((\lambda ^+)'/\mu '\) and its diagram \({\mathcal {D}}^+\) by n rows downwards. Therefore, the cardinalities of the sets \(M(({\tilde{\lambda }}^+)'/{\tilde{\mu }}', \nu /\omega )\) and \(M((\lambda ^+)'/\mu ', \nu /\omega )\) coincide.

Since \({\tilde{\lambda }}_m\ge n\), by Proposition 6.22 the cardinalities of the sets \(LR({\tilde{\lambda }}'/{\tilde{\mu }}',\nu )\) and \(M(({\tilde{\lambda }}^+)'/{\tilde{\mu }}', \nu /\omega )\) are the same, and by Theorem 6.24 and Theorem 6.11 of [1] they are equal to the multiplicity \(C^{{\tilde{\lambda }}'}_{{\tilde{\mu }}',\nu }\) of \(G_{ev}\)-primitive vectors of weight \(({\tilde{\mu }}|\nu )\) in \(H^0_G({\tilde{\lambda }})\).

Using the representation of induced supermodules \(H^0_G(\lambda )\) as \(H^0_{G_{ev}}(\lambda ) \otimes \wedge (Y)\), we derive that tensoring an arbitrary even-primitive vector of weight \((\mu |\nu )\) in \(H^0_G(\lambda )\) with \(D^+(1, \ldots , m)^n\) gives an even-primitive vector of weight \(({\tilde{\mu }}|\nu )\) in \(H^0_G({\tilde{\lambda }})\). Vice versa, tensoring an arbitrary even-primitive vector of weight \(({\tilde{\mu }}|\nu )\) in \(H^0_G({\tilde{\lambda }})\) with \(D^+(1, \ldots , m)^{-n}\) gives an even-primitive vector of weight \((\mu |\nu )\) in \(H^0_G(\lambda )\). Since linear independence of vectors is preserved by tensoring, we conclude that the multiplicity of even-primitive vectors of weight \((\mu |\nu )\) in \(H^0_G(\lambda )\) is the same as the multiplicity of even-primitive vectors of weight \(({\tilde{\mu }}|\nu )\) in \(H^0_G({\tilde{\lambda }})\).

Since the cardinality of \(M((\lambda ^+)'/\mu ', \nu /\omega )\) is the same as the multiplicity of all even-primitive vectors of weight \((\mu |\nu )\) in \(H^0_G(\lambda )\), and elements \({\overline{v}}(T^+)\) for \(T^+\in M((\lambda ^+)'/\mu ', \nu /\omega )\) are linearly independent by Theorem 6.10, we conclude that the vectors \({\overline{v}}(T^+)\) for \(T^+\in M((\lambda ^+)'/\mu ', \nu /\omega )\) form a basis of all even-primitive vectors in \(\nabla (\lambda )\) of the weight \((\mu |\nu )\). \(\square \)

Corollary 7.2

Even-primitive vectors in \(\nabla (\lambda )\) are precisely those even-primitive vectors in \(H^0_G(\lambda )\) of polynomial weight.

7.3 A connection of marked tableaux to pictures

We explain the relationship of marked tableaux to pictures in the sense of Zelevinsky—see [27] and [26]. Our notation is a hybrid of [27] and [26].

We denote the set of partitions by \({\mathcal {P}}\) and for \(\alpha , \beta ,\kappa \in {\mathcal {P}}\) the Littlewood–Richardson coefficients by \(C^{\alpha }_{\beta ,\kappa }\).

Definition 7.3

For a skew partition \(\alpha /\beta \) define the partial orders \(\le _{\nwarrow }\) and \(\le _{\swarrow }\) on the entries (i, j) of the diagram \([\alpha /\beta ]\) as follows.

\((i,j)\le _{\nwarrow } (i',j')\) if and only if \(i\le i'\) and \(j\le j'\);

\((i,j)\le _{\swarrow } (i',j')\) if and only if \(i\le i'\) and \(j\ge j'\).

We also use the total ordering \(\le _r\) that refines \(\le _{\swarrow }\) and is defined as \((i,j)\le _r (i',j')\) if and only if \(i< i'\) or (\(i=i'\) and \(j\ge j'\)).

Definition 7.4

Let \(\alpha /\beta \) and \(\gamma /\delta \) be skew partitions, and \(f:\alpha /\beta \rightarrow \gamma /\delta \) be a map. The map f is called a picture from \(\alpha /\beta \) to \(\gamma /\delta \) if

  • f is a bijection

  • If \(x \le _{\nwarrow } y\) then \(f(x)\le _{\swarrow } f(y)\)

  • If \(f(x)\le _{\nwarrow } f(y)\) then \(x\le _{\swarrow } y\).

The set of all pictures from \(\alpha /\beta \) to \(\gamma /\delta \) is denoted by \(\mathrm {Pict}(\alpha /\beta ,\gamma /\delta )\).

Definition 7.5

Let \(f:\alpha /\beta \rightarrow \gamma /\delta \) be a picture. Define the row reading of f to be the tableau \(E^+\) of shape \(\alpha /\beta \) such that the entry at the position (i, j) is the first coordinate of f(i, j). Define the column reading of f to be the tableau \(E^-\) of shape \(\gamma /\delta \) such that the entry at the position (i, j) is the second coordinate of \(f^{-1}(i,j)\).

It is immediate from the definitions that the row reading \(E^+\) of picture f is a semistandard tableau and the column reading \(E^-\) is an anti-semistandard tableau.

Example 7.6

Let \(\alpha =(3,3,2,1)\), \(\beta =(2,0,0,0)\), \(\gamma =(4,2,2,1)\) and \(\delta =(1,1,0,0)\). Let the picture \(f:\alpha /\beta \rightarrow \gamma /\delta \) be given as

$$\begin{aligned} \begin{array}{ccc} &{}&{}f\\ a&{}d&{}g\\ b&{}e&{}\\ c&{}&{} \end{array}\mapsto \begin{array}{cccc} &{}f&{}d&{}a\\ &{}g&{}&{}\\ e&{}b&{}&{}\\ c&{}&{}&{} \end{array}. \end{aligned}$$

Then the row and column readings of f are given as

$$\begin{aligned} E^+=\begin{array}{ccc} &{}&{}1\\ 1&{}1&{}2\\ 3&{}3&{}\\ 4&{}&{} \end{array}\text { and } E^-=\begin{array}{cccc} &{}3&{}2&{}1\\ &{}3&{}&{}\\ 2&{}1&{}&{}\\ 1&{}&{}&{} \end{array}. \end{aligned}$$
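The conditions of Definition 7.4 and the readings of Definition 7.5 are straightforward to check by machine. The following is a minimal sketch (ours; the function names are not the paper's notation) that encodes the orders \(\le _{\nwarrow }\) and \(\le _{\swarrow }\), tests the three picture axioms, and computes the row and column readings; it reproduces the tableaux \(E^+\) and \(E^-\) of Example 7.6, with cells written as (row, column) pairs.

```python
# A minimal sketch (ours) of Definitions 7.3-7.5, tested on Example 7.6.

def le_nw(x, y):      # (i,j) <=_{nw} (i',j')  iff  i <= i' and j <= j'
    return x[0] <= y[0] and x[1] <= y[1]

def le_sw(x, y):      # (i,j) <=_{sw} (i',j')  iff  i <= i' and j >= j'
    return x[0] <= y[0] and x[1] >= y[1]

def is_picture(f, domain, image):
    if sorted(f) != sorted(domain) or sorted(f.values()) != sorted(image):
        return False                      # f must be a bijection
    return all((not le_nw(x, y) or le_sw(f[x], f[y])) and
               (not le_nw(f[x], f[y]) or le_sw(x, y))
               for x in domain for y in domain)

def row_reading(f):                       # E^+: first coordinate of f(i,j)
    return {x: f[x][0] for x in f}

def column_reading(f):                    # E^-: second coordinate of f^{-1}(i,j)
    return {f[x]: x[1] for x in f}

# The picture of Example 7.6; the keys below are the domain cells labelled
# f, a, d, g, b, e, c in the text, in that order.
f = {(1, 3): (1, 2),
     (2, 1): (1, 4), (2, 2): (1, 3), (2, 3): (2, 2),
     (3, 1): (3, 2), (3, 2): (3, 1),
     (4, 1): (4, 1)}

assert is_picture(f, list(f), list(f.values()))
assert row_reading(f) == {(1, 3): 1, (2, 1): 1, (2, 2): 1, (2, 3): 2,
                          (3, 1): 3, (3, 2): 3, (4, 1): 4}               # E^+
assert column_reading(f) == {(1, 2): 3, (1, 3): 2, (1, 4): 1, (2, 2): 3,
                             (3, 1): 2, (3, 2): 1, (4, 1): 1}            # E^-
```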

Lemma 7.7

A tableau E of shape \(\alpha /\beta \) is a row reading tableau of picture \(f:\alpha /\beta \rightarrow \gamma /\delta \) if and only if E is semistandard and shifted Yamanouchi.

Proof

The proof is obtained by adjusting the arguments in the proof of Proposition 2.6.1 of [26] and the remark following it. Here, we need to keep in mind that the definitions of pictures and of the partial order \(\le _{\swarrow }\) in [26] differ from ours (which conform to those given in [27]) by the transposition of the domain and the image sides—see the footnote on page 325 of [26]. \(\square \)

Proposition 7.8

There is a bijective correspondence between pictures \(f:(\lambda ^+)'/\mu ' \rightarrow \nu /\omega \) and marked tableaux \(T^+\in M((\lambda ^+)'/\mu ',\nu /\omega )\). Under this correspondence the tableau \(T^+\) is the row reading of f, and vice versa, f is given by the repositioning map \(Rpos:{\mathcal {D}}^+ \rightarrow {\mathcal {D}}^-\) corresponding to the tableau \(T^+\).

Proof

By Lemma 7.7 we know that pictures \(f:(\lambda ^+)'/\mu ' \rightarrow \nu /\omega \) correspond to tableaux \(T^+\) of the shape \((\lambda ^+)'/\mu '\) that are semistandard and shifted Yamanouchi.

Using Proposition 6.7, we conclude that such \(T^+\) is the row reading of picture \(f:(\lambda ^+)'/\mu ' \rightarrow \nu /\omega \) if and only if \(T^+\in M((\lambda ^+)'/\mu ',\nu /\omega )\). \(\square \)

Note that Proposition 6.7 allows us to characterize a picture \(f:(\lambda ^+)'/\mu ' \rightarrow \nu /\omega \) in an entirely symmetric way, using the marked tableau \(T^+\). The symmetry is given by the requirement that \(T^+\) is semistandard and \(T^-\) is anti-semistandard. Usually, pictures are characterized using lattice permutations (see Proposition 2.6.1 of [26] and the remark following it), and this description does not seem symmetric. With our approach, we are using a pair of tableaux, a semistandard \(T^+\) and an anti-semistandard \(T^-\), that are connected using the repositioning map Rp. It is the repositioning map Rp that provides the connection between \(T^+\) and \(T^-\), namely \(Rp(T^+)=T^-\), and the specific definition of Rp we are using is related to the lattice condition.

An immediate consequence of Proposition 7.8 is the following result.

Proposition 7.9

In the notation as above, we have

$$\begin{aligned} C^{\lambda '}_{\mu ' \nu }=\sum _{\kappa \in {\mathcal {P}}} C^{(\lambda ^+)'}_{\mu ',\kappa } C^{\nu }_{\omega , \kappa }. \end{aligned}$$

Proof

By Theorem 7.1, the cardinality of \(M((\lambda ^+)'/\mu ',\nu /\omega )\) equals the multiplicity of even-primitive vectors of weight \((\mu |\nu )\) in \(\nabla (\lambda )\). It follows from (4) that this multiplicity equals \(C^{\lambda '}_{\mu ' \nu }\).

On the other hand, the cardinality of the set \(\mathrm {Pict}((\lambda ^+)'/ \mu ',\nu /\omega )\) equals

$$\begin{aligned} \sum _{\kappa \in {\mathcal {P}}} C^{(\lambda ^+)'}_{\mu ',\kappa } C^{\nu }_{\omega , \kappa } \end{aligned}$$

by Theorem 1 of [27]. Proposition 7.8 concludes the proof. \(\square \)