1 Introduction

As anticipated long ago [1,2,3,4,5,6], Schur functions and their various generalizations, like Macdonald [7] and Kerov [8, 9] functions, generalized Macdonald polynomials [10,11,12,13,14,15,16], tensor-model characters [17,18,19,20,21,22,23,24,25,26,27,28,29,30] and the still-hypothetical 3-Schur functions [31, 32], play an increasing role in modern theory, especially in the consideration of essentially non-perturbative phenomena. Technically, they appear in at least three different contexts:

  • in formal representation theory – as characters of \(Sl_N\) representations \(S_R[X]=\mathrm{Tr}_R \mathcal{X}^{(R)}\), and thus as the building blocks for integrable tau-functions through the general construction, reviewed in [33]

  • in decomposition formulas for the integrands of free-field screening correlators like

    $$\begin{aligned} \left<\prod _i e^{\phi (x_i)} \prod _j e^{-\phi (y_j)}\right>_{\phi }\sim & {} \prod _{i,j} (x_i-y_j)^{-1} \nonumber \\\sim & {} \sum _R S_R[X]S_{R^\vee }[Y^{-1}] \end{aligned}$$
    (1)
  • and as preserved quantities in Selberg–Kadell-type integrals [34,35,36,37,38,39,40], which stand behind the basic superintegrability/localization property [41]

    $$\begin{aligned} \Big < S_R[X] \Big >_X \sim S_R[X^*] \end{aligned}$$
    (2)

    actually serving as a selection rule for “good” theories, which provide “matrix-model \(\tau \)-functions” [1,2,3,4,5,6] after (functional) integration over fields.
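These appearances all rest on Schur functions written in power-sum ("time") variables. As an illustration (our own sketch, not part of the original argument), the degree-by-degree equivalence between the sum \(\sum_R S_R\{p\}S_R\{p'\}\) and the bilinear exponential \(\exp\big(\sum_k p_kp'_k/k\big)\) can be verified with sympy for \(|R|\le 3\), with the Schur functions hardcoded via \(S_R=\sum_\Delta \psi_R(\Delta)p_\Delta/z_\Delta\):

```python
# Sketch: verify the power-sum form of the Cauchy identity,
#   sum_R S_R{p} S_R{p'} = exp( sum_k p_k p'_k / k ),
# degree by degree for |R| <= 3.
import sympy as sp

p1, p2, p3, q1, q2, q3, t = sp.symbols("p1 p2 p3 q1 q2 q3 t")

# Schur functions in power sums, from the symmetric-group character tables
S = {
    (1,):      p1,
    (2,):      (p1**2 + p2)/2,
    (1, 1):    (p1**2 - p2)/2,
    (3,):      (p1**3 + 3*p1*p2 + 2*p3)/6,
    (2, 1):    (p1**3 - p3)/3,
    (1, 1, 1): (p1**3 - 3*p1*p2 + 2*p3)/6,
}
Sq = {R: e.subs({p1: q1, p2: q2, p3: q3}, simultaneous=True) for R, e in S.items()}

# grade the kernel by t (deg p_k = deg q_k = k, so p_k q_k / k carries t^k)
kernel = sp.exp(t*p1*q1 + t**2*p2*q2/2 + t**3*p3*q3/3)
series = sp.expand(sp.series(kernel, t, 0, 4).removeO())

def lhs(n):
    # sum over partitions R of n of S_R{p} S_R{q}
    return sp.expand(sum(S[R]*Sq[R] for R in S if sum(R) == n))

def rhs(n):
    # degree-n piece of the bilinear exponential
    return series.coeff(t, n)
```

For \(n=2\) both sides equal \(\frac{1}{2}(p_1^2q_1^2+p_2q_2)\), as expected from the two partitions [2] and [1, 1].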

One of the many widely-known examples of (2) appears when we integrate exactly over x and y in (1) to obtain a correlator of screenings \(\oint e^{\phi (x)} dx\) [34,35,36,37,38,39,40]. In this case, the combination of (1) and (2) immediately provides the AGT-induced [51,52,53] Nekrasov decomposition [54] of conformal blocks, realized in terms of conformal (Dotsenko–Fateev) matrix models [55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73] – and their far-going network-model generalizations [80,81,82,83,84,85,86,87,88,89,90,91]. Further steps in this direction, as well as the related development of matrix \(\longrightarrow \) tensor model generalizations, require an essential extension of the theory of Schur characters in various directions. Some things, however, should supposedly remain intact – and serve as the carrying construction of this future general theory.

In this short note we consider two such properties: the Cauchy decomposition formula, which stands behind (1), and the skew-character decomposition, which plays a central role in technical applications of Schur functions to representation theory. These two properties are in fact intimately related: imposing one implies the other. This is a simple but important remark: it considerably weakens their impact on generalizations, since one restriction (to keep these properties) is much less than two. It also reduces the number of “miracles” and thus the attractiveness of particular generalization attempts. After a brief presentation of the formal relation we present an explicit example – the problems with the project of building 3-Schur functions, encountered at the level of size-four plane partitions, where the (general) relation between Cauchy and skew decompositions shows up in a somewhat unusual way.

2 Cauchy vs skew

Imagine that we have a set of functions \(S_\sigma \{ p\}\), which depend on a multi-component variable \(p_k\), \(k\in K\), and are labeled by elements \(\sigma \in \Sigma \) of some set \(\Sigma \). Let them form a full linear basis in the space of functions of \(\{p\}\). Then they also form a closed ring under ordinary multiplication

$$\begin{aligned} S_{\sigma '}\{ p\}\cdot S_{\sigma ''} \{p\} = \sum _{\sigma \in \Sigma }N_{\sigma '\sigma ''}^\sigma S_\sigma \{p\} \end{aligned}$$
(3)

with some structure constants N (not obligatory integer). In this setting there is an obvious equivalence between two different-looking statements: the Cauchy summation formula and the decomposition rule for skew functions.

Cauchy formula states that

$$\begin{aligned} \sum _{\sigma \in \Sigma } \frac{S_\sigma \{p\}\, S_\sigma \{p'\}}{||S_\sigma ||^2} = \exp \left( \sum _{k\in K} \frac{p_k\, p_k'}{||p_k||^2}\right) \end{aligned}$$
(4)

with a certain norm in the space of \(\{p\}\)-variables. Ideally one can think of a scalar product, with respect to which both \(p_k\) and \(S_\sigma \{p\}\) are orthogonal:

$$\begin{aligned}&\displaystyle \Big <p_k\Big |p_k'\Big > = ||p_k||^2\cdot \delta _{k,k'} \end{aligned}$$
(5)
$$\begin{aligned}&\displaystyle \Big <S_\sigma \{p\}\Big |S_{\sigma '}\{p\}\Big > = ||S_\sigma ||^2\cdot \delta _{\sigma ,\sigma '} \end{aligned}$$
(6)

However, what is really important is the bilinear exponent. As a corollary, multiplying two copies of (4) with the same p but different \(p'\), we get:

$$\begin{aligned} \exp \left( \sum _{k\in K}\frac{p_k\,(p'_k+p''_k)}{||p_k||^2}\right)&= \sum _{\sigma \in \Sigma } \frac{S_\sigma \{p\}\,S_\sigma \{p'+p''\}}{||S_\sigma ||^2} \nonumber \\&= \sum _{\sigma ',\sigma ''\in \Sigma } \frac{S_{\sigma '}\{p\}\,S_{\sigma '}\{p'\}}{||S_{\sigma '}||^2}\cdot \frac{S_{\sigma ''}\{p\}\,S_{\sigma ''}\{p''\}}{||S_{\sigma ''}||^2} \end{aligned}$$
(7)

If we now consider the function of \(p'+p''\) as that of \(p'\), we obtain

$$\begin{aligned} S_\sigma \{p'+p''\} = \sum _{\sigma '\in \Sigma } S_{\sigma /\sigma '}\{p''\}\cdot S_{\sigma '}\{p'\} \end{aligned}$$
(8)

where the \(p''\)-dependent coefficients are known as skew-functions. Then equivalence of the two relations in the second line of (7) implies that

$$\begin{aligned} S_{\sigma /\sigma '}\{p\} = \sum _{\sigma ''\in \Sigma } \mathbf{N}_{\sigma '\sigma ''}^{\sigma }\, S_{\sigma ''}\{p\}, \qquad \mathbf{N}_{\sigma '\sigma ''}^{\sigma } = \frac{||S_\sigma ||^2}{||S_{\sigma '}||^2\,||S_{\sigma ''}||^2}\, N_{\sigma '\sigma ''}^{\sigma } \end{aligned}$$
(9)

with the same structure constants N as in (3). The differently-normalized bold-faced \(\mathbf{N}\) are instead the structure constants in the multiplication of the “dual” (boldfaced) functions:

$$\begin{aligned} \mathbf{S}_{\sigma ' }\{ p\}\cdot \mathbf{S}_{\sigma ''} \{p\}:= & {} \frac{S_{\sigma '}\{ p\}}{||S_{\sigma '}||^2}\cdot \frac{S_{\sigma ''} \{p\}}{||S_{\sigma ''}||^2} \nonumber \\= & {} \sum _{\sigma \in \Sigma } \mathbf{N}_{\sigma ',\sigma ''}^\sigma \frac{S_\sigma \{p\}}{||S_\sigma ||^2} \nonumber \\= & {} \sum _{\sigma \in \Sigma } \mathbf{N}_{\sigma ',\sigma ''}^\sigma \mathbf{S}_\sigma \{p\} \end{aligned}$$
(10)

Thus we see that (4) implies (9).

This statement can be partly inverted: if the skew functions in (8) possess the expansion (9) with the same structure constants as in (3), this implies some version of Cauchy summation formula (4) with a bilinear exponent – but, strictly speaking, with some unspecified coefficients at the place of \(||p_k||^{-2}\).

To avoid possible confusion: (3) and (10) are not statements – they are just the definitions of the structure constants N and \(\mathbf{N}\) for a given set of functions \(S_\sigma \{p\}\). Of course, one can instead use (9) as a definition of \({\bar{N}}\), as was done in [31, 32] – then the statement will be that \({\bar{N}}\) coincides with \(\mathbf{N}\) in (10), which depends on the validity of some version of (4).
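As a concrete illustration of this equivalence (our own sketch, not part of the original text), one can check for ordinary Schur functions, whose norms are all 1 so that N and \(\mathbf{N}\) coincide, that the same Littlewood–Richardson constants govern both the product (3) and the skew decomposition (8); e.g. \(S_{[2,1]/[1]}=S_{[2]}+S_{[1,1]}\), while \(S_{[1]}\cdot S_{[2]}=S_{[3]}+S_{[2,1]}\):

```python
# Sketch: for Schur functions the skew decomposition
#   S_sigma{p'+p''} = sum_{sigma'} S_{sigma/sigma'}{p''} S_{sigma'}{p'}
# uses the same structure constants N as the product rule (3).
import sympy as sp

a1, a2, a3, b1, b2, b3 = sp.symbols("a1 a2 a3 b1 b2 b3")  # p'_k and p''_k

def schur(R, p):
    # Schur functions with |R| <= 3 in power sums p = (p1, p2, p3)
    p1, p2, p3 = p
    table = {
        (): sp.Integer(1),
        (1,): p1,
        (2,): (p1**2 + p2)/2,
        (1, 1): (p1**2 - p2)/2,
        (3,): (p1**3 + 3*p1*p2 + 2*p3)/6,
        (2, 1): (p1**3 - p3)/3,
    }
    return table[R]

A, B = (a1, a2, a3), (b1, b2, b3)
AB = (a1 + b1, a2 + b2, a3 + b3)   # addition of time variables

# skew functions of [2,1], read off from Littlewood-Richardson coefficients:
#   S_{[21]/[1]} = S_[2] + S_[11],   S_{[21]/[2]} = S_{[21]/[11]} = S_[1]
lhs = schur((2, 1), AB)
rhs = (schur((2, 1), A)
       + (schur((2,), B) + schur((1, 1), B))*schur((1,), A)
       + schur((1,), B)*(schur((2,), A) + schur((1, 1), A))
       + schur((2, 1), B))

# the same N's appear in the product rule (3): S_[1] * S_[2] = S_[3] + S_[21]
prod_ok = sp.simplify(schur((1,), A)*schur((2,), A)
                      - schur((3,), A) - schur((2, 1), A)) == 0
```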

3 Particular cases

So far, in most applications in physics the set \(\Sigma \) is that of Young diagrams (partitions of integers) – this is especially natural for applications to representations of the linear and symmetric groups \(Gl_N\) and \(\mathcal{S}_N\). Then the relevant set K is just that of natural numbers: the “time variables” are just \(\{p_1,p_2,\ldots \}\), and these are exactly enough to “enumerate” all Young diagrams by the rule

$$\begin{aligned} R = [r_1\ge r_2\ge \cdots \ge r_{l_R}>0] = [\ldots ,3^{m_3},2^{m_2},1^{m_1} ] \ \ \longleftrightarrow \ \ p^R = \prod _{k\ge 1} p_k^{m_k} = \prod _{a=1}^{l_R} p_{r_a} \end{aligned}$$
(11)

The relation to representation theory and conformal matrix/network models in (1) and (2) appears on the Miwa locus \(p_k=\mathrm{tr}\,X^k\) with an \(N\times N\) matrix X, which in representation R becomes a matrix \(\mathcal{X}^{(R)}\) of size \(\mathrm{dim}_R = \mathrm{Schur}_R[I] = \mathrm{Schur}_R\{p_k=N\}\), made from the N eigenvalues of X. The associated scalar product is usually taken to be

$$\begin{aligned} \Big < p^R\Big |p^{R'}\Big >^{(g)} = \delta _{R,R'}\cdot \overbrace{\prod _{a\ge 1} a^{m_a}\, m_a!}^{z_R}\cdot \prod _{a\ge 1} g_a^{m_a} \end{aligned}$$
(12)

For all \(g_a=1\) we get the Schur functions per se; the corresponding factor \(z_R\) is the one which appears in the orthogonality condition for the symmetric-group characters \(\psi _R(\Delta )\),

$$\begin{aligned} \sum _{\Delta \vdash |R|} \frac{\psi _R(\Delta )\psi _{R'}(\Delta )}{z_\Delta } = \delta _{R,R'} \ \ \ \Longleftrightarrow \ \ \ \sum _{R\,\vdash \,|\Delta |} \psi _R(\Delta )\psi _R(\Delta ') = z_\Delta \,\delta _{\Delta ,\Delta '} \end{aligned}$$
(13)

and the structure constants \(N^R_{R'R''}\) in (3) are the integer-valued Littlewood–Richardson coefficients, counting the multiplicities of the representation R in the product \(R'\otimes R''\). In the deformation to Macdonald polynomials, when

$$\begin{aligned} g_a = \frac{q^a-q^{-a}}{t^a-t^{-a}} \end{aligned}$$
(14)

these N become functions of q and t; still, they vanish whenever \(R\notin R'\otimes R''\).
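For the smallest non-trivial case the orthogonality relations (13) can be verified directly from the character table of \(\mathcal{S}_3\) (a minimal sketch of ours; the tabulated values are the standard ones):

```python
# Sketch: both orthogonality relations (13) for the characters of S_3.
import sympy as sp

classes = [(1, 1, 1), (2, 1), (3,)]                 # conjugacy classes Delta
z = {(1, 1, 1): 6, (2, 1): 2, (3,): 3}              # centralizer orders z_Delta
psi = {                                             # psi_R(Delta)
    (3,):      {(1, 1, 1): 1, (2, 1): 1,  (3,): 1},   # trivial
    (2, 1):    {(1, 1, 1): 2, (2, 1): 0,  (3,): -1},  # standard
    (1, 1, 1): {(1, 1, 1): 1, (2, 1): -1, (3,): 1},   # sign
}

def row_product(R, Rp):
    # first relation in (13): sum over Delta of psi_R psi_R' / z_Delta
    return sum(sp.Rational(psi[R][D]*psi[Rp][D], z[D]) for D in classes)

def col_product(D, Dp):
    # second relation in (13): sum over R of psi_R(D) psi_R(D')
    return sum(psi[R][D]*psi[R][Dp] for R in psi)
```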

For arbitrary parameters \(g_a\) we get the Kerov functions [8, 9], for which the restriction on R is softened to one natural for the Young diagrams per se:

$$\begin{aligned} N^R_{R'R''} \ne 0 \ \ \ \ \Longrightarrow \ \ \ \ R'+R'' \le R \le R'\cup R'' \end{aligned}$$
(15)

and the exact relation to the representation theory of \(SL_\infty \) and \(\mathcal{S}_\infty \) is lost. Still, the vast majority of other properties, including the Cauchy and skew-Kerov decompositions, remain true – and the application of generic Kerov functions to physical theories is just a matter of time (see [92,93,94,95,96] for the first examples).

However, already for the by-now-conventional applications, the restriction to \(\Sigma =\{partitions\}\) is insufficient. Nekrasov calculus for generic \(\Omega \)-backgrounds (for \(c\ne 1\), i.e. \(\epsilon _1\ne -\epsilon _2\)) requires “generalized” Macdonald functions [10,11,12,13,14,15,16], depending on collections (strings) of Young diagrams. This, however, is not a very big problem – it is enough to just consider several copies of the time variables, though the scalar product can require a non-trivial modification [97]. More challenging are the ordered sequences of Young diagrams (forming the plane partitions), which are needed in generic network models and the representation theory of DIM algebras. The corresponding “triple-Macdonald polynomials”, though constructible in terms of the ordinary ones [98], should depend on a very different set K of time variables and be described by a more first-principles theory.

One of the fresh related directions is the basic tangles-calculus [99,100,101,102] relation

$$\begin{aligned} H^\mathrm{Hopf}_{(R'\otimes R'')\times Q} = H^\mathrm{Hopf}_{R'\times Q} \cdot H^\mathrm{Hopf}_{R''\times Q} \end{aligned}$$
(16)

for the properly normalized colored Hopf-link invariants, which provides for them an interpretation as Q-dependent characters (note that this is a manifestation of the rule (2), because these invariants are averages of Wilson loops \(\mathrm{Tr}_R P\exp \left( \oint \mathcal{A}\right) \), which are themselves the gauge-field-dependent characters in Chern–Simons theory). Since Hopf invariants are supposedly related to topological vertices [103,104,105] (DIM-algebra intertwiners), this has direct connection to the still-underdeveloped representation theory of DIM algebras.

A yet further-going challenge is an adequate description of tensor-model characters, where some “non-abelization” looks unavoidable already at the level of (2) – a straightforward lifting of Schur functions to these theories does not seem to provide a full basis in the operator space [17]. In this note we do not go as far as full-fledged tensor-model considerations, but provide just a simple example of the difficulties encountered already at the plane-partition stage. We demonstrate that, contrary to possible expectations, the Cauchy formula is considerably easier to satisfy than it is to build a true collection of 3-Schur functions.

4 The 3-Schur attempt

When we switch from ordinary to plane partitions in the role of the set \(\Sigma \), the first thing to change is the set K of time variables. In order for the space of polynomials in \(p_k\) to have the same dimension as that of the plane partitions, we need variables \(p_{k,i}\) with integer \(1\le i\le k\), where \(p_{k,i}\) has grading (degree) k. Then at “level” (degree) one we have just a single monomial \(p_{1,1}\) and a single plane partition with one box, at level two – three monomials \(p_{2,1}\), \(p_{2,2}\), \(p_{1,1}^2\) and three plane partitions with two boxes, and so on. Since the grading does not depend on i, it is convenient to speak of k-dimensional vector spaces and denote the time variables \(\vec p_k\) – assuming that the number of vector components is k. The 3-Schur functions should be homogeneous functions of these variables and form a full basis – and thus a ring. However, the first naive attempt in [31, 32] to build these functions runs into problems, which we will now try to illustrate. This attempt was built on two postulates: that the scalar product does not depend on i and is given by the same formula (12) with all \(g_k=1\),

$$\begin{aligned} \Big < \prod _{i\le k} p_{k,i}^{m_{k,i}}\Big |\prod _{i\le k} p_{k,i}^{m'_{k,i}}\Big >^{(g)} = \prod _{i\le k}\delta _{m_{k,i},m'_{k,i}} \cdot k^{m_{k,i}}\, m_{k,i}! \, g_k^{m_{k,i}} \nonumber \\ \end{aligned}$$
(17)

and that the multiplication operation (3) is dictated by “natural” composition of plane partitions, see below. Both postulates are not very well justified, but it is instructive to see what is exactly the problem they lead to.
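The dimension count behind the choice of the variables \(p_{k,i}\) can be checked against MacMahon's generating function \(\prod_k (1-q^k)^{-k}\) for plane partitions; a short sympy sketch of ours:

```python
# Sketch: the number of grading-n monomials in the p_{k,i} (1 <= i <= k,
# deg p_{k,i} = k) equals the number of plane partitions of n, generated
# by MacMahon's product prod_k (1 - q^k)^(-k).
import sympy as sp

q = sp.Symbol("q")

def plane_partition_counts(N):
    series = sp.Integer(1)
    for k in range(1, N):
        # truncated expansion (1 - q^k)^(-k) = sum_m binom(k+m-1, m) q^(k*m)
        factor = sum(sp.binomial(k + m - 1, m)*q**(k*m)
                     for m in range((N - 1)//k + 1))
        series = sp.expand(series*factor)
    return [series.coeff(q, n) for n in range(N)]
```

`plane_partition_counts(5)` gives `[1, 1, 3, 6, 13]`: one monomial \(p_{1,1}\) at level one, the three monomials at level two, and 13 objects at level four.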

We denote the three dimensions of the space where the plane partitions lie by x, y, z and use \(\rho =\{x,y,z\}\) as a label. When the number of boxes is small, partitions lie entirely in one of the three coordinate planes and can be labeled by Young diagrams together with an ordered pair of indices from x, y, z. When there is just one column/row, only one index remains. For symmetric Young diagrams the order does not matter, and it is also convenient to use the orthogonal direction z instead of \(xy\cong yx\). Then the 3-Schur functions at the first three levels are:

$$\begin{aligned} \mathcal{S}_{[1]} = p_1, \qquad \mathcal{S}_{[2]}^\rho = \frac{{\vec {\alpha }}_2^\rho {\vec {p}}_2 + p_1^2}{2}, \qquad \mathcal{S}_{[3]}^\rho = \frac{{\vec {\alpha }}_3^\rho {\vec {p}}_3}{3} + \frac{{\vec {\alpha }}_2^\rho {\vec {p}}_2\,p_1}{2} + \frac{p_1^3}{6}, \qquad \mathcal{S}_{[2,1]}^\rho = \frac{{\vec {\beta }}_3^\rho {\vec {p}}_3}{3} - \frac{{\vec {\alpha }}_2^\rho {\vec {p}}_2\,p_1}{2} + \frac{p_1^3}{3} \end{aligned}$$
(18)

They have simple \(\rho \)-independent norms \(||\mathcal{S}_{[1]}||^2=1\), \(||\mathcal{S}_{[2]}||^2 = \frac{3}{2}\), \(||\mathcal{S}_{[3]}||^2 = \frac{9}{2}\), \(||\mathcal{S}_{[2,1]}||^2 = \frac{9}{4}\), and satisfy the relations (9) and (10) in the most natural way:

$$\begin{aligned}&\mathcal{S}_{[1]}^2 = \frac{\sum _{\rho =x,y,z}\mathcal{S}^\rho _{[2]}}{||\mathcal{S}_{[2]}||^2} \ \ \ \nonumber \\&\quad \Longleftrightarrow \ \ \Delta \mathcal{S}_{[2]}^\rho := \mathcal{S}_{[2]}^\rho \{p'+p''\} -\mathcal{S}_{[2]}^\rho \{p'\}-\mathcal{S}_{[2]}^\rho \{p''\} \nonumber \\&\qquad \qquad \qquad \qquad \,\, = \mathcal{S}_{[1]}\{p'\}\,\mathcal{S}_{[1]}\{p''\} =: \mathcal{S}_{[1]}\otimes \mathcal{S}_{[1]} \nonumber \\ \end{aligned}$$
(19)
$$\begin{aligned}&\frac{\mathcal{S}_{[2]}^\rho \cdot \mathcal{S}_{[1]}}{||\mathcal{S}_{[2]}||^2} = \frac{ \mathcal{S}^\rho _{[3]}}{||\mathcal{S}_{[3]}||^2} + \frac{\sum _{\rho '\ne \rho } \mathcal{S}^{\rho '}_{[2,1]}}{||\mathcal{S}_{[2,1]}||^2} \nonumber \\&\quad \Longleftrightarrow \begin{array}{c} \Delta \mathcal{S}_{[3]}^\rho = \mathcal{S}_{[2]}^\rho \otimes \mathcal{S}_{[1]} + \mathcal{S}_{[1]}\otimes \mathcal{S}_{[2]}^\rho \\ \\ \Delta \mathcal{S}_{[2,1]}^\rho = \Big (\mathcal{S}_{[2]}^{\rho '}+\mathcal{S}_{[2]}^{\rho ''}\Big )\otimes \mathcal{S}_{[1]} + \mathcal{S}_{[1]}\otimes \Big (\mathcal{S}_{[2]}^{\rho '}+\mathcal{S}_{[2]}^{\rho ''}\Big ) \end{array} \nonumber \\ \end{aligned}$$
(20)

Although we put the sign \( \Longleftrightarrow \) here, we know from Sect. 2 that such an identical correspondence between multiplication and decomposition should be tied to the validity of the Cauchy formula – and indeed it is true:

$$\begin{aligned}&1+ \mathcal{S}_{[1]}\{p\}\mathcal{S}_{[1]}\{p'\} +\sum _{\rho =x,y,z}\frac{\mathcal{S}_{[2]}^\rho \{p\}\mathcal{S}_{[2]}^\rho \{p'\}}{||S_{[2]}||^2} \nonumber \\&\qquad +\sum _{\rho =x,y,z}\frac{\mathcal{S}_{[3]}^\rho \{p\}\mathcal{S}_{[3]}^\rho \{p'\}}{||S_{[3]}||^2} +\sum _{\rho =x,y,z}\frac{\mathcal{S}_{[2,1]}^\rho \{p\}\mathcal{S}_{[2,1]}^\rho \{p'\}}{||S_{[2,1]}||^2}\nonumber \\&\quad = 1 +p_1p_1' + \frac{\vec p_2\vec p_2 \,\!\!'+(p_1p_1')^2}{2} + \frac{\vec p_3\vec p_3 \,\!\!'}{3}\nonumber \\&\qquad +\frac{(\vec p_2\vec p_2 \,\!\!')(p_1p_1')}{2} + \frac{(p_1p_1')^3}{6}. \end{aligned}$$
(21)

Moreover, in accordance with yet another natural expectation (6), all these \(\mathcal{S}\)-functions are mutually orthogonal:

$$\begin{aligned} \Big<\mathcal{S}^\rho _{[2]}\Big | \mathcal{S}^{\rho '}_{[2]}\Big>= & {} ||\mathcal{S}_{[2]}||^2\,\delta ^{\rho ,\rho '}, \ \ \ \ \Big<\mathcal{S}^\rho _{[3]}\Big | \mathcal{S}^{\rho '}_{[3]}\Big> = ||\mathcal{S}_{[3]}||^2\,\delta ^{\rho ,\rho '}, \nonumber \\ \Big<\mathcal{S}^\rho _{[2,1]}\Big | \mathcal{S}^{\rho '}_{[2,1]}\Big>= & {} ||\mathcal{S}_{[2,1]}||^2\,\delta ^{\rho ,\rho '}, \ \ \Big <\mathcal{S}^\rho _{[3]}\Big | \mathcal{S}^{\rho '}_{[2,1]}\Big > = 0 \end{aligned}$$
(22)

To check all these formulas one needs to substitute explicit expressions for the Mercedes-star vectors [31, 32]:

$$\begin{aligned} \vec \alpha _2^x= & {} \Bigg (-\frac{1}{\sqrt{2}},\sqrt{\frac{3}{2}}\Bigg ), \ \ \ \ \vec \alpha _2^y = \Bigg (-\frac{1}{\sqrt{2}},-\sqrt{\frac{3}{2}}\Bigg ), \ \ \ \ \vec \alpha _2^z = (\sqrt{2},0)\\ \vec \alpha _3^x= & {} \Bigg (- \sqrt{\frac{3}{2}},\frac{3}{\sqrt{2}},-2\Bigg ) =-2(\vec \beta _3^y+\vec \beta _3^z), \\ \vec \alpha _3^y= & {} \Bigg (-\sqrt{\frac{3}{2}},-\frac{3}{\sqrt{2}},-2\Bigg ) =-2(\vec \beta _3^x+\vec \beta _3^z), \\ \vec \alpha _3^z= & {} (\sqrt{6},0,-2)=-2(\vec \beta _3^x+\vec \beta _3^y)\\ \vec \beta _3^x= & {} \Bigg (- \sqrt{\frac{3}{8}},\frac{3}{\sqrt{8}},-\frac{1}{2}\Bigg ), \ \ \ \ \vec \beta _3^y = \Bigg (-\sqrt{\frac{3}{8}},-\frac{3}{\sqrt{8}},-\frac{1}{2}\Bigg ), \\&\quad \vec \beta _3^z = \Bigg ( \sqrt{\frac{3}{2}},0,-\frac{1}{2}\Bigg ). \end{aligned}$$

Note that the relation \(\vec \alpha ^\rho _3\, ||\mathcal{S}_{[3]}||^{-2} +(\vec \beta ^{\rho '}_3+\vec \beta ^{\rho ''}_3)\,||\mathcal{S}_{[2,1]}||^{-2}=0\) between \(\vec \alpha _3\) and \(\vec \beta _3\) is necessary for the first line of (20) to hold, because \(\mathcal{S}_{[2]}^\rho \mathcal{S}_{[1]}\) on its l.h.s. does not depend on \({\vec p_3}\).
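The level-2 part of these checks is easy to automate. The sympy sketch below (our illustration; the variable names are ours) confirms that the \(\vec\alpha_2^\rho\) sum to zero, that (19) holds with \(||\mathcal{S}_{[2]}||^2=\frac{3}{2}\), and that the \(\mathcal{S}^\rho_{[2]}\) are orthogonal with respect to the product (17) with \(g=1\):

```python
# Sketch: level-2 checks of (18), (19), (22) with the Mercedes-star vectors.
import sympy as sp

sq = sp.sqrt
alpha2 = {
    "x": sp.Matrix([-1/sq(2),  sq(sp.Rational(3, 2))]),
    "y": sp.Matrix([-1/sq(2), -sq(sp.Rational(3, 2))]),
    "z": sp.Matrix([sq(2), 0]),
}
sum_alpha = sum(alpha2.values(), sp.zeros(2, 1))   # should vanish

p1 = sp.Symbol("p1")
p2 = sp.Matrix(sp.symbols("p21 p22"))

def S2(rho):
    # eq. (18): S^rho_[2] = (alpha_2^rho . p_2 + p1^2) / 2
    return (alpha2[rho].dot(p2) + p1**2)/2

# eq. (19): S_[1]^2 = sum_rho S^rho_[2] / ||S_[2]||^2 with ||S_[2]||^2 = 3/2
lhs = p1**2
rhs = sum(S2(r) for r in "xyz")/sp.Rational(3, 2)

def pairing(r1, r2):
    # product (17) with g = 1: <p_{2,i}|p_{2,j}> = 2 delta_ij, <p1^2|p1^2> = 2
    return (2*alpha2[r1].dot(alpha2[r2]) + 2)/4
```

The pairing of two distinct \(\mathcal{S}^\rho_{[2]}\) vanishes because \(\vec\alpha_2^\rho\vec\alpha_2^{\rho'}=-1\) for \(\rho\ne\rho'\).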

5 Expectation at level four

The first truly interesting level is four, when one of the 13 plane partitions is essentially 3-dimensional. The “natural” multiplication and decomposition rules in this case seem to be

(23)

and, “accordingly”,

(24)

If true, (24) would imply that

(25)

As usual, only the \(\vec p_4\)-independent parts of these formulas, which we denote by tildes, are prescribed by (24). Likewise, only these pieces are seen in the multiplication formulas – irrespective of their exact shape and relation to the decompositions (24), i.e. irrespective of the literal validity of (23).

If the expectation of [31, 32] were fully correct, both (23) and (24) would hold – which, as we know, would also imply the validity of the Cauchy formula

(26)

and, in the dream case, also the orthogonality conditions

(27)

6 The situation at level four

Given the explicit expressions (18), we can check the \(\vec p_4\)-independent parts of (23) and (26). It turns out that instead of them we have similar, yet quite different relations:

and

(28)

Here

$$\begin{aligned} X\{p,p'\} = -\frac{1}{96} \sum _\rho (\vec \alpha _2^\rho \vec p_2)^2 (\vec \alpha _2^\rho \vec p_2 \!\!\,')^2 + \frac{(\vec p_2\vec p_2 \!\!\,')^2}{16}\ne 0 \nonumber \\ \end{aligned}$$
(29)

is not expressed through the \(\mathcal{S}\)-functions and will be discussed in Sect. 7. It makes no direct sense to consider orthogonality at this stage, because it is expected only when the \(\vec p_4\)-dependent terms are included. However, one can wonder what the orthogonality constraints on these \(\vec p_4\)-dependent terms are and whether they look resolvable. Such an analysis can in the future help to find a substitute for (17) which better reflects the structure of plane, rather than ordinary, partitions.

Coming back to the multiplication rules, the differences between the expected and actual formulas are marked by boxes. The main one is the absence of any contribution from \(S_{[4]}\) – but, according to the argument in Sect. 2, this absence in the multiplication and Cauchy formulas is not independent. Thus it is enough to explain it in just one of these cases. The simplest is the first line in the multiplication list: there it is sufficient to look only at the terms \(p_3p_1\) and \(p_1^4\). The fact is that the ratio of the coefficients in front of these structures is exactly the same in the sum \(\mathcal{S}^{\rho '\rho ''}_{[3,1]}+\mathcal{S}^{\rho ''\rho '}_{[3,1]}\) as in \(\mathcal{S}^\rho _{[3]}\). Indeed, in the latter case the ratio is \(\frac{\vec \alpha ^\rho _3\vec p_3}{3}\left( \frac{p_1^3}{6}\right) ^{-1} = \frac{2\vec \alpha ^\rho _3\vec p_3}{p_1^3}\), while in the former case it is rather \(\ \frac{(2\vec \alpha _3^\rho +\vec \beta _3^{\rho '}+\vec \beta _3^{\rho ''})\vec p_3p_1}{3} \left( 2\frac{p_1^4}{8}\right) ^{-1} = \frac{4(2\vec \alpha _3^\rho +\vec \beta _3^{\rho '}+\vec \beta _3^{\rho ''})\vec p_3}{3p_1^3}\ \) – but since \(\ \vec \beta _3^{\rho '}+\vec \beta _3^{\rho ''}=-\frac{1}{2}\vec \alpha _3^\rho \ \) this is actually the same. At the same time, the same ratio for \(\mathcal{S}_{[4]}^\rho \) is four times bigger: \(\frac{\vec \alpha _3^\rho \vec p_3 p_1}{3}\left( \frac{p_1^4}{24}\right) ^{-1} = \frac{8\vec \alpha _3^\rho \vec p_3}{p_1^3}\); thus already for these two items one has no chance to add \(\mathcal{S}_{[4]}^\rho \) with any non-vanishing coefficient.

Equally interesting are the emerging additional terms in the multiplication rule. Recall that the product of representations is \([2]\otimes [1,1] = [3,1]\oplus [2,1,1]\), i.e. this is the first example where the product does not contain the intermediate diagram [2, 2], which lies between \([2]+[1,1]=[2,1,1]\) and \([2]\cup [1,1]=[3,1]\) in the lexicographical ordering. This is exactly the situation reflected in (15), i.e. the [2, 2] contribution should vanish for Schur and Macdonald functions, but show up in the generic Kerov case. In fact, the Kerov function \(\mathrm{Kerov}_{[2,2]}\) appears in the product \(\mathrm{Kerov}_{[2]}\cdot \mathrm{Kerov}_{[1,1]}\) with a peculiar coefficient \(g_4g_1^5-3g_4g_2^2g_1+2g_4g_3g_1^2+2g_2^3g_1^3-3g_3g_2g_1^4+g_3g_2^3\), which is the simplest combination of g-variables vanishing on the Macdonald locus (14). The fact that a boxed item \((\vec \alpha _2\vec p_2)^2 \in \mathcal{S}_{[2]}^\rho \cdot \mathcal{S}_{[2]}^{\rho ''}\) appears in the product of the corresponding 3-Schur functions can be a signal that they know about the violation of the representation-product selection rule (15) – and have the potential of describing the generic situation, including the Kerov functions.
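The cited decomposition \([2]\otimes[1,1]=[3,1]\oplus[2,1,1]\), with no \([2,2]\) term in the Schur case, is easy to confirm in power-sum variables (a sketch of ours; the Schur expressions come from the standard \(\mathcal{S}_4\) character table):

```python
# Sketch: S_[2] * S_[11] = S_[31] + S_[211] for Schur functions in power
# sums; the intermediate diagram [2,2] does not appear in the product.
import sympy as sp

p1, p2, p3, p4 = sp.symbols("p1 p2 p3 p4")

S = {  # from the S_2 and S_4 character tables
    (2,):      (p1**2 + p2)/2,
    (1, 1):    (p1**2 - p2)/2,
    (3, 1):    (p1**4 + 2*p1**2*p2 - p2**2 - 2*p4)/8,
    (2, 2):    (p1**4 + 3*p2**2 - 4*p1*p3)/12,
    (2, 1, 1): (p1**4 - 2*p1**2*p2 - p2**2 + 2*p4)/8,
}

product = sp.expand(S[(2,)]*S[(1, 1)])
decomposition = sp.expand(S[(3, 1)] + S[(2, 1, 1)])
# adding any multiple of S_[2,2] would spoil the equality:
with_22 = sp.expand(decomposition + S[(2, 2)])
```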

7 Anomaly in the Cauchy formula

Since the true multiplication formulas at level 4 are different from the expectation, i.e. do not fully match the decomposition formulas (24), we should observe a violation of the Cauchy formula. Indeed, this is what one immediately observes in (28). This formula does not contain any reference to \(\mathcal{S}_{[4]}\) – and this is in accordance with the multiplication rule, where this function also does not appear; thus this is not a violation. However, it instead contains an anomalous term \(X\{p,p'\}\), reflecting the true difference between multiplication and decomposition, which we now analyze in a little more detail.

Repeating the argument of Sect. 2, we multiply two copies of (28) at \((p,p')\) and \((p,p'')\) and use the fact that the expressions on the r.h.s. are bilinear exponentials, so they can be substituted by the l.h.s. of yet another copy of (28) at \((p,p'+p'')\). This gives:

(30)

where \(A\otimes B\otimes C\) denotes \(A\{p\}\cdot B\{p'\}\cdot C\{p''\}\). Substituting the products from the “true” table into the l.h.s., we get:

(31)

and additionally on the l.h.s. we have a contribution from the boxed terms in the multiplication formulas:

$$\begin{aligned}&-\sum _\rho \frac{ (\vec \alpha _2^{\rho }\vec p_2)^2 }{18} \Big (\mathcal{S}_{[2]}^{\rho '}-\mathcal{S}_{[2]}^{\rho ''}\Big )\otimes \Big (\mathcal{S}_{[2]}^{\rho '}-\mathcal{S}_{[2]}^{\rho ''}\Big ) \nonumber \\&\quad = -\frac{1}{72}\sum _\rho (\vec \alpha _2^{\rho }\vec p_2)^2 \Big ((\vec \alpha _2^{\rho '}-\vec \alpha _2^{\rho ''})\vec p_2 \!\!\,'\Big ) \Big ((\vec \alpha _2^{\rho '}-\vec \alpha _2^{\rho ''})\vec p_2 \!\!\,''\Big ) \ne 0. \nonumber \\ \end{aligned}$$
(32)

This is exactly the same as the \(\Delta X\) term on the r.h.s.:

$$\begin{aligned} \Delta X:= & {} X\{p,p'+p''\}-X\{p,p'\}-X\{p,p''\} \nonumber \\= & {} \frac{2}{16}(\vec p_2\vec p_2 \!\!\,')(\vec p_2\vec p_2 \!\!\,'')\nonumber \\&-\frac{2}{96}\sum _\rho (\vec \alpha _2^\rho \vec p_2)^2(\vec \alpha _2^{\rho }\vec p_2 \!\!\,') (\vec \alpha _2^{\rho }\vec p_2 \!\!\,''). \end{aligned}$$
(33)

Thus the anomaly in the Cauchy summation formula can indeed be used to measure the deviation of multiplication from skew decomposition.

8 Conclusion

In this note we explained the nearly rigid relation between the Cauchy summation formula (4) and the equivalence of the structure constants in the multiplication and skew-decomposition formulas (9) and (10). We illustrated this fact by an important example of the would-be 3-Schur functions for 4-box plane partitions: mismatches/anomalies are simultaneously present and well correlated in expressions of both kinds. Thus it is sufficient to cure just one of them – the other will be automatically fixed. This, however, remains to be done. More generally, beyond the 3-Schur topic, this paper can help to explain the ubiquity of the Cauchy formula, i.e. why it appears in one and the same form for a broad variety of special functions and why it need not be restricted to the case of Young diagrams.