Cauchy formula and the character ring

The Cauchy summation formula plays a central role in applications of character calculus to many problems, from the AGT-implied Nekrasov decomposition of conformal blocks to topological-vertex decompositions of link invariants. We briefly review the equivalence between the Cauchy formula and the expressibility of skew characters through the Littlewood–Richardson coefficients. As a not-quite-trivial illustration we consider how this equivalence works in the case of plane partitions, at the simplest truly interesting level of just four boxes.

In this short note we consider two such properties: the Cauchy decomposition formula, which stands behind (1), and the skew-character decomposition, which plays the central role in technical applications of Schur functions to representation theory. These two properties are in fact intimately related: imposing one implies the other. This is a simple but important remark, and it considerably weakens their impact on generalizations: one restriction (to keep both properties) is much less restrictive than two. Also, it reduces the number of "miracles" and thus the attractiveness of particular generalization attempts. After a brief presentation of the formal relation we present an explicit example of the problems with the building project of the 3-Schur functions, encountered at the level of size-four plane partitions, where the (general) relation between the Cauchy and skew decompositions shows up in a somewhat unusual way.

Cauchy vs skew
Imagine that we have a set of functions $S_\sigma\{p\}$ which depend on multi-component time variables $p_k$, $k \in K$, and are labeled by elements $\sigma$ of some set $\Sigma$. Let them form a full linear basis in the space of functions of $\{p\}$. Then they also form a ring, closed under ordinary multiplication, with some structure constants $N$ (not necessarily integer):

$$ S_{\sigma'}\{p\}\cdot S_{\sigma''}\{p\} \;=\; \sum_\sigma N^\sigma_{\sigma'\sigma''}\, S_\sigma\{p\} \qquad (3) $$

In this setting there is an obvious equivalence between two different-looking statements: the Cauchy summation formula and the decomposition rule for skew functions. The Cauchy formula states that

$$ \sum_\sigma \frac{S_\sigma\{p\}\, S_\sigma\{p'\}}{||S_\sigma||^2} \;=\; \exp\left(\sum_{k\in K}\frac{p_k\, p'_k}{||p_k||^2}\right) \qquad (4) $$

with a certain norm in the space of $\{p\}$-variables. Ideally one can think of a scalar product with respect to which both $p_k$ and $S_\sigma\{p\}$ are orthogonal:

$$ \langle p_k, p_l\rangle = ||p_k||^2\,\delta_{k,l}, \qquad \langle S_\sigma, S_{\sigma'}\rangle = ||S_\sigma||^2\,\delta_{\sigma,\sigma'} \qquad (5),\ (6) $$

What really matters, however, is the bilinear exponent. As a corollary, multiplying two copies of (4) with the same $p$ but different $p'$ and $p''$, we get:

$$ \exp\left(\sum_{k\in K}\frac{p_k\,(p'_k+p''_k)}{||p_k||^2}\right) \;=\; \sum_{\sigma',\sigma''} S_{\sigma'}\{p\}\,S_{\sigma''}\{p\}\;\mathbf{S}_{\sigma'}\{p'\}\,\mathbf{S}_{\sigma''}\{p''\} \;=\; \sum_\sigma S_\sigma\{p\}\;\mathbf{S}_\sigma\{p'+p''\} \qquad (7) $$

where the boldfaced "dual" functions are $\mathbf{S}_\sigma = S_\sigma/||S_\sigma||^2$. If we now consider the function of $p'+p''$ as that of $p'$, we obtain

$$ \mathbf{S}_\sigma\{p'+p''\} \;=\; \sum_{\sigma'} \mathbf{S}_{\sigma/\sigma'}\{p'\}\,\mathbf{S}_{\sigma'}\{p''\} \qquad (8) $$

where the $p'$-dependent coefficients $\mathbf{S}_{\sigma/\sigma'}$ are known as skew functions. Then the equivalence of the two relations in the second line of (7) implies that

$$ \mathbf{S}_{\sigma/\sigma'}\{p\} \;=\; \sum_{\sigma''} N^\sigma_{\sigma''\sigma'}\,\mathbf{S}_{\sigma''}\{p\} \qquad (9) $$

with the same structure constants $N$ as in (3). The differently-normalized bold-faced $\mathbf{N}$ are instead the structure constants in the multiplication of the "dual" (boldfaced) functions:

$$ \mathbf{S}_{\sigma'}\{p\}\cdot \mathbf{S}_{\sigma''}\{p\} \;=\; \sum_\sigma \mathbf{N}^\sigma_{\sigma'\sigma''}\,\mathbf{S}_\sigma\{p\}, \qquad \mathbf{N}^\sigma_{\sigma'\sigma''} = \frac{||S_\sigma||^2}{||S_{\sigma'}||^2\,||S_{\sigma''}||^2}\; N^\sigma_{\sigma'\sigma''} \qquad (10) $$

Thus we see that (4) implies (9). This statement can be partly inverted: if the skew functions in (8) possess the expansion (9) with the same structure constants as in (3), this implies some version of the Cauchy summation formula (4) with a bilinear exponent, but, strictly speaking, with some unspecified coefficients in place of $||p_k||^{-2}$.
To avoid possible confusion: (3) and (10) are not statements, these are just the definitions of the structure constants $N$ and $\mathbf{N}$ for a given set of functions $S_\sigma\{p\}$. Of course, one can instead use (9) as a definition of $\mathbf{N}$, as was done in [31,32]; then the statement will be (10), which depends on the validity of some version of (4).
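For the ordinary Schur functions all norms $||S_R||^2 = 1$ and $||p_k||^2 = k$, so (4) becomes the classical identity $\sum_\lambda s_\lambda\{p\}\,s_\lambda\{p'\} = \exp(\sum_k p_k p'_k/k)$. A minimal sympy sketch (all helper names are ours; Schur functions are built through the Jacobi–Trudi determinant) checks this truncated at level four:

```python
import sympy as sp

nmax = 4
p = sp.symbols('p1:5')   # power sums p_1..p_4
q = sp.symbols('q1:5')   # a second set of "time variables"
t = sp.symbols('t')      # auxiliary grading variable, deg p_k = k

def h_polys(pv, n):
    # complete homogeneous h_n in power sums: Newton's recursion n h_n = sum_k p_k h_{n-k}
    h = [sp.Integer(1)]
    for m in range(1, n + 1):
        h.append(sp.expand(sum(pv[k-1]*h[m-k] for k in range(1, m+1))/m))
    return h

def schur(lam, pv):
    # Jacobi-Trudi determinant: s_lambda = det(h_{lambda_i - i + j})
    if not lam:
        return sp.Integer(1)
    h = h_polys(pv, sum(lam))
    H = lambda n: sp.Integer(0) if n < 0 else h[n]
    m = len(lam)
    return sp.expand(sp.Matrix(m, m, lambda i, j: H(lam[i]-i+j)).det())

def partitions(n, mx=None):
    mx = n if mx is None else mx
    if n == 0:
        yield ()
        return
    for k in range(min(n, mx), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# l.h.s. of the Cauchy formula, truncated to |lambda| <= 4
lhs = sum(schur(lam, p)*schur(lam, q)
          for n in range(nmax + 1) for lam in partitions(n))

# r.h.s.: the bilinear exponent exp(sum_k p_k q_k / k), expanded far enough
E = sum(p[k-1]*q[k-1]*t**k/k for k in range(1, nmax + 1))
rhs = sp.expand(sum(E**m/sp.factorial(m) for m in range(nmax + 1)))

# grade the l.h.s. by attaching t^k to p_k and compare order by order
graded_lhs = sp.expand(lhs.subs({p[k-1]: p[k-1]*t**k for k in range(1, nmax + 1)}))
diff = sp.expand(graded_lhs - rhs)
ok = all(sp.expand(diff.coeff(t, j)) == 0 for j in range(nmax + 1))
print(ok)  # True
```

The same skeleton works for any basis with known norms: only the list of functions and the weights $||p_k||^2$ in the exponent change.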

Particular cases
So far, in most applications in physics the set $\Sigma$ is that of Young diagrams (partitions of integers); this is especially natural for applications to representations of the linear and symmetric groups $GL_N$ and $S_N$. Then the relevant set $K$ is just that of the natural numbers: the "time variables" are just $\{p_1, p_2, \ldots\}$, and these are exactly enough to "enumerate" all Young diagrams. The relation to representation theory and to conformal matrix/network models in (1) and (2) appears on the Miwa locus $p_k = \mathrm{tr}\, X^k$ with an $N\times N$ matrix $X$, which in representation $R$ becomes a matrix $X(R)$ of the size $\dim R = \mathrm{Schur}_R[I] = \mathrm{Schur}_R\{p_k = N\}$, made from the $N$ eigenvalues of $X$. The associated scalar product is usually taken to be

$$ \langle p_\Delta, p_{\Delta'}\rangle \;=\; z_\Delta \cdot \prod_i g_{\delta_i}\cdot \delta_{\Delta,\Delta'} \qquad (12) $$

For all $g_a = 1$ we get the Schur functions per se; the corresponding factor $z_\Delta$ is the one which appears in the orthogonality condition for the symmetric-group characters $\psi_R(\Delta)$, and the structure constants $N^R_{R'R''}$ in (3) are the integer-valued Littlewood–Richardson coefficients, counting multiplicities of representation $R$ in the product $R'\otimes R''$. In the deformation to Macdonald polynomials, when these $N$ become functions of $q$ and $t$, they still vanish whenever $R \notin R'\otimes R''$. For arbitrary parameters $g_a$ we get the Kerov functions [8,9]; for them the restriction on $R$ is softened to a weaker one, natural for Young diagrams per se (nonvanishing $N^R_{R'R''}$ only for $R'\cup R'' \le R \le R'+R''$ in the lexicographical ordering), and the exact relation to representation theory of $SL_\infty$ and $S_\infty$ is lost. Still, the absolute majority of other properties, including the Cauchy and skew-Kerov decompositions, remains true, and application of generic Kerov functions to physical theories is just a matter of time (see [92-96] for the first examples). However, already for the by-now-conventional applications, the restriction to $\Sigma = \{\text{partitions}\}$ is insufficient. Nekrasov calculus for generic $\Omega$-backgrounds ($c \ne 1$, i.e. $\epsilon_1 \ne -\epsilon_2$) requires "generalized" Macdonald functions [10-16], depending on collections (strings) of Young diagrams.
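The statement $\dim R = \mathrm{Schur}_R\{p_k = N\}$ can be checked directly at the topological locus $p_k = N$: for example, for $R = [2,1]$ the answer must be the familiar $GL_N$ dimension $N(N^2-1)/3$. A small sympy sketch (helper names are ours, Jacobi–Trudi construction as before):

```python
import sympy as sp

N = sp.symbols('N')

def h_at(pvals, nmax):
    # complete homogeneous h_n from power sums: Newton's recursion n h_n = sum_k p_k h_{n-k}
    h = [sp.Integer(1)]
    for n in range(1, nmax + 1):
        h.append(sp.together(sum(pvals[k-1]*h[n-k] for k in range(1, n+1))/n))
    return h

def schur_at(lam, pvals):
    # Jacobi-Trudi determinant s_lambda = det(h_{lambda_i - i + j})
    h = h_at(pvals, sum(lam))
    H = lambda n: sp.Integer(0) if n < 0 else h[n]
    m = len(lam)
    return sp.Matrix(m, m, lambda i, j: H(lam[i] - i + j)).det()

# On the locus p_k = N the Schur function computes dim R for GL_N:
dim_21 = sp.factor(schur_at((2, 1), [N]*3))
print(dim_21)  # equals N(N-1)(N+1)/3, the dimension of the GL_N representation [2,1]
```

The same substitution $p_k = \mathrm{tr}\, X^k$ with an explicit matrix $X$ reproduces the character of $X$ in representation $R$, which is the content of the Miwa-locus remark above.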
This, however, is not a very big problem: it is enough just to consider several copies of the time variables, though the scalar product can require a non-trivial modification [97]. More challenging are the ordered sequences of Young diagrams (forming plane partitions), which are needed in generic network models and in the representation theory of DIM algebras. The corresponding "triple-Macdonald polynomials", though constructible in terms of the ordinary ones [98], should depend on a very different set $K$ of time variables and be described by a more first-principles theory.
One of the fresh related directions is the basic tangles-calculus [99-102] relation for the properly normalized colored Hopf-link invariants, which provides for them an interpretation as $Q$-dependent characters (note that this is a manifestation of the rule (2), because these invariants are averages of Wilson loops $\mathrm{Tr}_R\, P\exp\oint A$, which are themselves the gauge-field-dependent characters in Chern–Simons theory). Since Hopf invariants are supposedly related to topological vertices [103-105] (DIM-algebra intertwiners), this has a direct connection to the still-underdeveloped representation theory of DIM algebras.
An even further-going challenge is an adequate description of tensor-model characters, where some "non-abelization" looks unavoidable already at the level of (2): a straightforward lifting of Schur functions to these theories does not seem to provide a full basis in the operator space [17]. In this note we do not go as far as full-fledged tensor-model considerations, but provide just a simple example of the difficulties encountered at the plane-partition stage. We demonstrate that, contrary to possible expectations, the Cauchy formula is considerably easier to satisfy than building a true collection of 3-Schur functions.

The 3-Schur attempt
When we switch from ordinary to plane partitions in the role of the set $\Sigma$, the first thing to change is the set $K$ of time variables. In order for the space of polynomials in the $p_{k,i}$ to have the same graded dimension as the set of plane partitions, we need variables $p_{k,i}$ with integer $1 \le i \le k$ and the grading $\deg p_{k,i} = k$. Then at "level" (degree) one we have just a single monomial $p_{1,1}$ and a single plane partition with one box; at level two, three monomials $p_{2,1}$, $p_{2,2}$, $p_{1,1}^2$ and three plane partitions with two boxes, and so on. Since the grading does not depend on $i$, it can be convenient to speak of $k$-dimensional vector spaces and denote the time variables $\vec p_k$, assuming that the number of vector components is $k$. The 3-Schur functions should be homogeneous functions of these variables and form a full basis, and thus a ring. However, the first naive attempt in [31,32] to build these functions runs into problems, which we will now try to illustrate. This attempt was built on two postulates: that the scalar product does not depend on $i$ and is given by the same formula (12) with all $g_k = 1$, and that the multiplication operation (3) is dictated by the "natural" composition of plane partitions, see below. Neither postulate is very well justified, but it is instructive to see what exactly is the problem they lead to. We denote the three dimensions of the space where the plane partitions lie by $x, y, z$ and use $\rho = \{x, y, z\}$ as a label. When the number of boxes is small, partitions lie entirely in one of the three planes and can be labeled by Young diagrams together with an ordered pair of the indices $x, y, z$. When there is just one column/row, only one index remains. For symmetric Young diagrams the order does not matter, and it is also convenient to use the orthogonal direction $z$ instead of $xy \cong yx$.
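The counting behind this choice of time variables can be verified directly: $k$ variables of degree $k$ for each $k$ give the graded generating function $\prod_{k\ge 1}(1-q^k)^{-k}$, which is exactly MacMahon's plane-partition counter. A short sympy check (level numbers compared with the known plane-partition counts):

```python
import sympy as sp

q = sp.symbols('q')
nmax = 8

# For each k there are k variables p_{k,i} of degree k, so the graded dimension of
# the polynomial ring is MacMahon's product, which also counts plane partitions.
gf = sp.Mul(*[(1 - q**k)**(-k) for k in range(1, nmax + 1)])
ser = sp.series(gf, q, 0, nmax + 1).removeO()
counts = [ser.coeff(q, n) for n in range(nmax + 1)]
print(counts)  # [1, 1, 3, 6, 13, 24, 48, 86, 160]
```

In particular `counts[1] = 1` and `counts[2] = 3` match the one- and two-box examples in the text, and `counts[4] = 13` is the number of four-box plane partitions used below.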
Then the 3-Schur functions at the first three levels can be written down explicitly [31,32]. They have simple $\rho$-independent norms $||S_{[1]}||^2 = 1$, $||S_{[2]}||^2 = \frac{3}{2}$, $||S_{[3]}||^2 = \frac{9}{2}$, $||S_{[2,1]}||^2 = \frac{9}{4}$ and satisfy the relations (9) and (10) in the most natural way. Although we put here the sign $\Longleftrightarrow$, we know from Sect. 2 that such an identical correspondence between multiplication and decomposition should be tied to the validity of the Cauchy formula, and indeed it is true. Moreover, in accordance with still another natural expectation (6), all these S-functions are mutually orthogonal. To check all these formulas one needs to substitute the explicit expressions for the Mercedes-star vectors [31,32]. Note that the relation $\alpha^\rho_3\,||S_{[3]}||^{-2} + (\beta^\rho_3 + \beta^\rho_{[3]})\,||S_{[2,1]}||^{-2} = 0$ between $\alpha_3$ and $\beta_3$ is necessary for the l.h.s. of (20) to hold, because $S^\rho_{[2]}\, S_{[1]}$ there does not depend on $\vec p_3$.

Expectation at level four
The first truly interesting level is four, when one of the 13 plane partitions is essentially 3-dimensional. The "natural" multiplication and decomposition rules in this case are given by (23) and (24). As usual, only the $\vec p_4$-independent parts of these formulas, which we denote by tildes, are prescribed by (24). Likewise, only these pieces are seen in the multiplication formulas, irrespective of their exact shape and relation to the decompositions (24), i.e. irrespective of the literal validity of (23). If the expectation of [31,32] were fully correct, both (23) and (24) would hold, together with (26) and, in the dream case, also the orthogonality conditions. Here the essentially 3-dimensional partition is not expressed through the S-functions and will be discussed in the next section. It makes no direct sense to consider orthogonality at this stage, because it is expected only when the $\vec p_4$-dependent terms are included. However, one can wonder what the orthogonality constraints on these $\vec p_4$-dependent terms are and whether they look resolvable. Such analysis can in the future help to find a substitute for (17), which better reflects the structure of plane, rather than ordinary, partitions. Coming back to the multiplication rules, the differences between the expected and actual formulas are marked by boxes. The main one is the absence of any contribution from $S_{[4]}$; but, according to the argument in Sect. 2, this absence in the multiplication and Cauchy formulas is not independent. Thus it is enough to explain it in just one of these cases. The simplest is the first line of the multiplication list: there it is sufficient to look only at the terms $\vec p_3 \vec p_1$ and $p_1^4$. The fact is that the ratio of the coefficients in front of these two structures is exactly the same in the sum $S^{\rho\rho'}_{[3,1]} + S^{\rho'\rho}_{[3,1]}$ as in the product $S^\rho_{[3]}\cdot S_{[1]}$; thus already for these two items one has no chance to add $S^\rho_{[4]}$ with any non-vanishing coefficient.
Equally interesting can be the emerging additional terms in the multiplication rule. We remind that the product of representations $[2]\otimes[1,1] = [3,1] + [2,1,1]$ does not contain $[2,2]$, which lies between $[2,1,1]$ and $[3,1]$ in the lexicographical ordering. This is exactly the situation reflected in (15): the $[2,2]$ contribution should vanish for the Schur and Macdonald functions, but show up in the generic Kerov case. In fact, the Kerov function $\mathrm{Kerov}_{[2,2]}$ appears in the product $\mathrm{Kerov}_{[2]}\cdot \mathrm{Kerov}_{[1,1]}$ with a peculiar coefficient proportional to

$$ g_4 g_1^5 - 3 g_4 g_2^2 g_1 + 2 g_4 g_3 g_1^2 + 2 g_2^3 g_1^3 - 3 g_3 g_2 g_1^4 + g_3 g_2^3 $$

which is the simplest combination of $g$-variables vanishing at the Macdonald locus (14). The fact that a boxed item $(\vec\alpha_2 \vec p_2)^2 \in S^\rho_{[2]}\cdot S^{\rho'}_{[2]}$ appears in the product of the corresponding 3-Schur functions can be a signal that they know about the violation (15) of the representation-product selection rule, and have a potential for describing the generic situation, including the Kerov functions.
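The Schur-level selection rule quoted here, $[2]\otimes[1,1] = [3,1] + [2,1,1]$ with no $[2,2]$, is easy to verify symbolically: expand $s_{[2]}\cdot s_{[1,1]}$ in the Schur basis at level four and read off the Littlewood–Richardson coefficients. A minimal sympy sketch (helper names are ours):

```python
import sympy as sp

p = sp.symbols('p1:5')  # power sums p_1..p_4

def h_polys(pv, n):
    # complete homogeneous h_n in power sums: Newton's recursion n h_n = sum_k p_k h_{n-k}
    h = [sp.Integer(1)]
    for m in range(1, n + 1):
        h.append(sp.expand(sum(pv[k-1]*h[m-k] for k in range(1, m+1))/m))
    return h

def schur(lam, pv):
    # Jacobi-Trudi determinant: s_lambda = det(h_{lambda_i - i + j})
    h = h_polys(pv, sum(lam))
    H = lambda n: sp.Integer(0) if n < 0 else h[n]
    m = len(lam)
    return sp.expand(sp.Matrix(m, m, lambda i, j: H(lam[i]-i+j)).det())

prod = sp.expand(schur((2,), p) * schur((1, 1), p))

# Expand the product in the Schur basis at level four and solve for the coefficients
lam4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
cs = sp.symbols('c0:5')
diff = sp.expand(prod - sum(c*schur(l, p) for c, l in zip(cs, lam4)))
sol = sp.solve(sp.Poly(diff, *p).coeffs(), cs, dict=True)[0]
coeffs = [sol.get(c, 0) for c in cs]
print(coeffs)  # [0, 1, 0, 1, 0]: only [3,1] and [2,1,1] appear, [2,2] drops out
```

For Kerov functions the analogous expansion would use the deformed scalar product (12) with generic $g_a$, and the $[2,2]$ coefficient would then be the degree-nine $g$-polynomial displayed above.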

Anomaly in the Cauchy formula
Since the true multiplication formulas at level 4 are different from the expectation, i.e. do not fully match the decomposition formulas (24), we should observe a violation of the Cauchy formula. Indeed, this is what one immediately observes in (28). This formula does not contain any reference to $S_{[4]}$, in accordance with the multiplication rule, where this function also does not appear; thus this is not a violation. However, instead it contains an anomalous term $X\{p, p'\}$, reflecting the true difference between multiplication and decomposition, which we now analyze in a little more detail.
Repeating the argument of Sect. 2, we multiply two copies of (28) at $(p, p')$ and $(p, p'')$ and use the fact that the expressions on the r.h.s. are bilinear exponentials, so they can be substituted by the l.h.s. of still another copy of (28) at $(p, p'+p'')$. This gives a relation between triple products of the form

$$ \sum_\rho \frac{S^\rho_{[3]}\cdot S_{[1]}}{||S_{[3]}||^2}\otimes S^\rho_{[3]}\otimes S_{[1]} + \ldots $$

where $A\otimes B\otimes C$ denotes $A\{p\}\cdot B\{p'\}\cdot C\{p''\}$. Substituting the products from the "true" multiplication formulas, on the l.h.s. we additionally have a contribution from the boxed terms. This is exactly the same as the $X$-term on the r.h.s. Thus the anomaly in the Cauchy summation formula can indeed be used to measure the deviation of multiplication from skew decomposition.

Conclusion
In this note we explained the nearly rigid relation between Cauchy summation formula (4) and the equivalence of the structure constants in multiplication and skew-decomposition formulas (9) and (10). We illustrated this fact by an important example of the would-be 3-Schur functions for 4-box plane partitions: mismatches/anomailes are simultaneously present and well correlated in expressions of both kinds. Thus it is sufficient to cure just one of them -the other will be automatically fixed. This, however, remains to be done. More generally, beyond the 3-Schur topic, this paper can help to understand the abundance of Cauchy formula, i.e. why it appears in one and the same form for a broad variety of special functions and why it may not be obligatory restricted to the case of Young diagrams.