Cauchy formula and the character ring

The Cauchy summation formula plays a central role in applications of character calculus to many problems, from the AGT-implied Nekrasov decomposition of conformal blocks to topological-vertex decompositions of link invariants. We briefly review the equivalence between the Cauchy formula and the expressibility of skew characters through the Littlewood-Richardson coefficients. As a not-quite-trivial illustration we consider how this equivalence works in the case of plane partitions, at the simplest truly interesting level of just four boxes.


Introduction
As anticipated long ago [1], Schur functions and their various generalizations, like Macdonald [2] and Kerov [3] functions, generalized Macdonald polynomials [4], tensor-model characters [5] and the still-hypothetical 3-Schur functions [7], play an increasing role in modern theory, especially in the treatment of essentially non-perturbative phenomena. Technically they appear in at least three different contexts:

• in formal representation theory, as characters of $SL_N$ representations, $S_R[X] = \mathrm{Tr}_R\, X^{(R)}$, and thus as the building blocks for integrable tau-functions through the general construction reviewed in [8];

• in decomposition formulas for the integrands of free-field screening correlators like (1);

• as preserved quantities in Selberg-Kadell-type integrals [9], which stand behind the basic superintegrability/localization property (2) [10], with $S_R[X]$ actually serving as a selection rule for "good" theories, which provide "matrix-model $\tau$-functions" [1] after (functional) integration over fields.
One of the many widely-known examples of (2) appears when we integrate exactly over $x$ and $y$ in (1) to obtain a correlator of screenings $\int e^{\phi(x)}\,dx$ [9]. In this case the combination of (1) and (2) immediately provides the AGT-induced [13] Nekrasov decomposition [14] of conformal blocks, realized in terms of conformal (Dotsenko-Fateev) matrix models [15] and their far-reaching network-model generalizations [17]. Further steps in this direction, as well as the related development of matrix $\longrightarrow$ tensor model generalizations, require an essential extension of the theory of Schur characters in various directions. Some things, however, should supposedly remain intact and serve as the carrying construction of this future general theory.
In this short note we consider two such properties: the Cauchy decomposition formula, which stands behind (1), and the skew-character decomposition, which plays the central role in technical applications of Schur functions to representation theory. These two properties are in fact intimately related: imposing one implies the other. This is a simple but important remark: it considerably weakens their impact on generalizations, since one restriction (to keep these properties) is much less than two. It also reduces the number of "miracles" and thus the attractiveness of particular generalization attempts. After a brief presentation of the formal relation we present an explicit example of the problems with the project of building the 3-Schur functions, encountered at the level of size-four plane partitions, where the (general) relation between the Cauchy and skew decompositions shows up in a somewhat unusual way.

Cauchy vs skew
Imagine that we have a set of functions $S_\sigma\{p\}$, depending on a multi-component variable $p_k$, $k \in K$, and labeled by elements $\sigma \in \Sigma$ of some set $\Sigma$. Let them form a full linear basis in the space of functions of $\{p\}$. Then they also form a closed ring under ordinary multiplication, with some structure constants $N$ (not necessarily integer):

$$ S_{\sigma'}\{p\} \cdot S_{\sigma''}\{p\} \;=\; \sum_{\sigma} N^{\sigma}_{\sigma'\sigma''}\, S_{\sigma}\{p\} \qquad (3) $$

In this setting there is an obvious equivalence between two different-looking statements: the Cauchy summation formula and the decomposition rule for skew-functions. The Cauchy formula states that

$$ \sum_{\sigma\in\Sigma} \frac{S_\sigma\{p\}\, S_\sigma\{p'\}}{||S_\sigma||^2} \;=\; \exp\left( \sum_{k\in K} \frac{p_k\, p'_k}{||p_k||^2} \right) \qquad (4) $$

with a certain norm in the space of $\{p\}$-variables. Ideally one can think of a scalar product with respect to which both $p_k$ and $S_\sigma\{p\}$ are orthogonal; what is really important, however, is the bilinear exponent.

As a corollary, multiplying two copies of (4) with the same $p$ but different $p'$ and $p''$, and then considering the function of $p' + p''$ as a function of $p'$, we obtain (7), where the $p$-dependent coefficients, introduced in (8), are known as skew-functions. The equivalence of the two relations in the second line of (7) then implies the decomposition (9) of the skew-functions, with the same structure constants $N$ as in (3). The differently-normalized bold-faced $\mathbf{N}$ are instead the structure constants in the multiplication (10) of the "dual" functions (bold-faced).

Thus we see that (4) implies (9). This statement can be partly inverted: if the skew functions in (8) possess the expansion (9) with the same structure constants as in (3), this implies some version of the Cauchy summation formula (4) with a bilinear exponent, though, strictly speaking, with some unspecified coefficients in place of $||p_k||^{-2}$.
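To make (4) concrete, one can check its classical special case, the Cauchy identity for ordinary Schur functions, numerically. The following sketch (with sample numerical values of our own choosing, not taken from the text) evaluates both sides in two pairs of Miwa variables, using the two-variable bialternant formula for Schur polynomials:

```python
from itertools import product

# two Miwa variables on each side; arbitrary sample points with |x_i * y_j| < 1
xs, ys = (0.3, 0.2), (0.25, 0.1)

def schur2(l1, l2, a, b):
    """Schur polynomial s_(l1,l2)(a, b) via the two-variable bialternant formula."""
    return (a**(l1 + 1) * b**l2 - b**(l1 + 1) * a**l2) / (a - b)

# l.h.s. of the Cauchy identity: sum over partitions with at most two rows
# (longer partitions vanish in two variables); truncated, but rapidly convergent
lhs = sum(schur2(l1, l2, *xs) * schur2(l1, l2, *ys)
          for l1 in range(41) for l2 in range(l1 + 1))

# r.h.s.: the bilinear-exponential kernel prod_{i,j} (1 - x_i y_j)^{-1}
rhs = 1.0
for x, y in product(xs, ys):
    rhs /= 1.0 - x * y

assert abs(lhs - rhs) < 1e-10
```

Here all the norms in (4) take their standard Schur-case values, for which the kernel exponent is $\sum_k p_k p'_k / k = -\sum_{i,j} \log(1 - x_i y_j)$.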
To avoid possible confusion: (3) and (10) are not statements, they are just the definitions of the structure constants $N$ and $\mathbf{N}$ for a given set of functions $S_\sigma\{p\}$. Of course, one can instead use (9) as a definition of $\mathbf{N}$, as was done in [7]; then the statement will be that (10) depends on the validity of some version of (4).
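For the ordinary Schur case the coincidence of the structure constants in (3) and (9) can be checked directly in a small example. The sketch below (sample values of our own choosing) computes the skew function $S_{[3,1]/[1]}$ through the skew Jacobi-Trudi determinant and compares it with the expansion $S_{[3]} + S_{[2,1]}$, whose coefficients are the Littlewood-Richardson numbers of the product $[1] \cdot [3]$ and $[1] \cdot [2,1]$:

```python
# evaluate all symmetric functions at two variables (a, b); arbitrary sample point
a, b = 0.7, 0.4

def h(k):
    """Complete homogeneous symmetric polynomial h_k(a, b); h_k = 0 for k < 0."""
    if k < 0:
        return 0.0
    return sum(a**i * b**(k - i) for i in range(k + 1))

def schur(l1, l2=0):
    """Schur polynomial s_(l1,l2)(a, b) via the bialternant formula."""
    return (a**(l1 + 1) * b**l2 - b**(l1 + 1) * a**l2) / (a - b)

# skew Schur s_{(3,1)/(1)} from the skew Jacobi-Trudi determinant
# det [[ h_{3-1-1+1}, h_{3-0-1+2} ], [ h_{1-1-2+1}, h_{1-0-2+2} ]]
skew = h(2) * h(1) - h(4) * h(-1)

# decomposition with N^{(3,1)}_{(1),(3)} = N^{(3,1)}_{(1),(2,1)} = 1
assert abs(skew - (schur(3) + schur(2, 1))) < 1e-12
```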

Particular cases
So far, in most applications in physics the set $\Sigma$ is that of Young diagrams (partitions of integers); this is especially natural for applications to representations of the linear and symmetric groups $GL_N$ and $S_N$. Then the relevant set $K$ is just that of the natural numbers: the "time variables" are $\{p_1, p_2, \ldots\}$, and these are exactly enough to "enumerate" all Young diagrams by the rule $\Delta = \{\delta_1 \geq \delta_2 \geq \ldots\} \longleftrightarrow p_\Delta = p_{\delta_1} p_{\delta_2} \cdots$.

The relation to representation theory and to the conformal matrix/network models in (1) and (2) appears on the Miwa locus $p_k = \mathrm{tr}\, X^k$ with an $N \times N$ matrix $X$, which in the representation $R$ becomes a matrix $X^{(R)}$ of size $\dim R = \mathrm{Schur}_R[I] = \mathrm{Schur}_R\{p_k = N\}$; the time variables are thus made from the $N$ eigenvalues of $X$. The associated scalar product is usually taken to be $\langle p_\Delta \,|\, p_{\Delta'} \rangle = \delta_{\Delta,\Delta'}\, z_\Delta \prod_i g_{\delta_i}$, with $z_\Delta$ the standard symmetry factor of the diagram $\Delta$.

For all $g_a = 1$ we get the Schur functions per se; the corresponding factor $z_R$ is the one which appears in the orthogonality condition for the symmetric-group characters $\psi_R(\Delta)$, and the structure constants $N^{R}_{R'R''}$ in (3) are the integer-valued Littlewood-Richardson coefficients, counting the multiplicities of the representation $R$ in the product $R' \otimes R''$. In the deformation to Macdonald polynomials, when these $N$ become functions of $q$ and $t$, they still vanish whenever $R \notin R' \otimes R''$. For arbitrary parameters $g_a$ we get the Kerov functions [3]; for them the restriction on $R$ is softened to one natural for the Young diagrams per se, and the exact relation to the representation theory of $SL_\infty$ and $S_\infty$ is lost. Still, the absolute majority of other properties, including the Cauchy and skew-Kerov decompositions, remain true, and the application of generic Kerov functions to physical theories is just a matter of time (see [18] for the first examples).

However, already for the by-now-conventional applications, the restriction to $\Sigma = \{\text{partitions}\}$ is insufficient. Nekrasov calculus for generic $\Omega$-backgrounds (for $c \neq 1$, i.e. $\epsilon_1 \neq -\epsilon_2$) requires "generalized" Macdonald functions [4], depending on collections (strings) of Young diagrams.
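A minimal numerical illustration of these statements (sample values of our own choosing): on the Miwa locus $p_k = \mathrm{tr}\, X^k$ the Schur functions written in the time variables reproduce the symmetric polynomials of the eigenvalues, and the structure constants in (3) are the Littlewood-Richardson multiplicities:

```python
# eigenvalues of a sample 2x2 matrix X = diag(a, b)
a, b = 0.7, 0.4
p = {k: a**k + b**k for k in (1, 2, 3)}  # Miwa locus: p_k = tr X^k

def schur(l1, l2=0):
    """Schur polynomial of the eigenvalues via the bialternant formula."""
    return (a**(l1 + 1) * b**l2 - b**(l1 + 1) * a**l2) / (a - b)

# character expansions S_R{p} = sum_D psi_R(D) p_D / z_D of the first Schur functions
assert abs(schur(1) - p[1]) < 1e-12
assert abs(schur(2) - (p[1]**2 + p[2]) / 2) < 1e-12
assert abs(schur(1, 1) - (p[1]**2 - p[2]) / 2) < 1e-12
assert abs(schur(2, 1) - (p[1]**3 - p[3]) / 3) < 1e-12

# structure constants of (3) = Littlewood-Richardson coefficients:
# [2] x [1] = [3] + [2,1]
assert abs(schur(2) * schur(1) - (schur(3) + schur(2, 1))) < 1e-12
```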
This, however, is not a very big problem: it is enough just to consider several copies of the time variables, though the scalar product can require a non-trivial modification [19]. More challenging are the ordered sequences of Young diagrams (forming the plane partitions), which are needed in generic network models and in the representation theory of DIM algebras. The corresponding "triple-Macdonald polynomials", though constructible in terms of the ordinary ones [20], should depend on a very different set $K$ of time variables and be described by a more first-principles theory.
One of the fresh related directions is the basic tangles-calculus [21] relation for the properly normalized colored Hopf-link invariants, which provides for them an interpretation as $Q$-dependent characters (note that this is a manifestation of the rule (2), because these invariants are averages of Wilson loops $\mathrm{Tr}_R\, P\exp\oint A$, which are themselves the gauge-field-dependent characters in Chern-Simons theory). Since Hopf invariants are supposedly related to topological vertices [22] (DIM-algebra intertwiners), this has a direct connection to the still-underdeveloped representation theory of DIM algebras. An even further-going challenge is an adequate description of tensor-model characters, where some "non-abelization" looks unavoidable already at the level of (2): straightforward lifting of Schur functions to these theories does not seem to provide a full basis in the operator space [5]. In this note we do not go as far as full-fledged tensor-model considerations, but provide just a simple example of the difficulties encountered at the plane-partition stage. We demonstrate that, contrary to possible expectations, the Cauchy formula is considerably easier to satisfy than building a true collection of 3-Schur functions.

The 3-Schur attempt
When we switch from ordinary to plane partitions in the role of the set $\Sigma$, the first thing to change is the set $K$ of time variables. In order for the space of polynomials of $p_k$ to have the same dimension as that of the plane partitions, we need variables $p_{k,i}$ with integer $1 \leq i \leq k$, each $p_{k,i}$ carrying grading degree $k$. Then at the "level" (degree) one we have just a single monomial $p_{1,1}$ and a single plane partition with one box; at level two, three monomials $p_{2,1}$, $p_{2,2}$, $p_{1,1}^2$ and three plane partitions with two boxes, and so on. Since the grading does not depend on $i$, it can be convenient to speak of $k$-dimensional vector spaces and denote the time variables $\vec p_k$, assuming that the number of vector components is $k$. The 3-Schur functions should be homogeneous functions of these variables and form a full basis, and thus a ring. However, the first naive attempt in [7] to build these functions runs into problems, which we will now try to illustrate. This attempt was built on two postulates: that the scalar product does not depend on $i$ and is given by the same formula (12) with all $g_k = 1$, and that the multiplication operation (3) is dictated by the "natural" composition of plane partitions, see below. Neither postulate is very well justified, but it is instructive to see what exactly is the problem they lead to.

We denote the three dimensions of the space where the plane partitions lie by $x, y, z$, and use $\rho = \{x, y, z\}$ as a label. When the number of boxes is small, partitions lie entirely in one of the three planes and can be labeled by Young diagrams together with an ordered pair of the indices $x, y, z$. When there is just one column/row, only one index remains. For symmetric Young diagrams the order does not matter, and it is also convenient to use the orthogonal direction $z$ instead of $xy \cong yx$.
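The dimension count behind this choice of time variables can be verified directly: with $k$ variables of degree $k$ at each level, the generating function of monomial counts is MacMahon's $\prod_k (1-q^k)^{-k}$, which is precisely the generating function of plane partitions. A brute-force sketch of this check (our own illustration, not part of the original derivation):

```python
def macmahon_coeffs(nmax):
    """Series coefficients of prod_{k>=1} (1 - q^k)^(-k), i.e. the number of
    degree-n monomials in variables p_{k,i}, 1 <= i <= k, with deg p_{k,i} = k."""
    c = [1] + [0] * nmax
    for k in range(1, nmax + 1):
        for _ in range(k):            # multiply k times by the series 1/(1 - q^k)
            for n in range(k, nmax + 1):
                c[n] += c[n - k]
    return c

def count_plane_partitions(n):
    """Count plane partitions of n: stacks of rows, each row a partition lying
    pointwise under the previous one (so columns weakly decrease)."""
    def bounded_partitions(bound, cap):
        # nonempty weakly decreasing tuples mu with mu[i] <= bound[i], sum <= cap
        def rec(i, prev, acc):
            if i < len(bound):
                for v in range(min(bound[i], prev, cap - sum(acc)), 0, -1):
                    yield from rec(i + 1, v, acc + [v])
            if acc:
                yield tuple(acc)
        yield from rec(0, n + 1, [])
    def add_rows(bound, remaining):
        if remaining == 0:
            return 1
        return sum(add_rows(mu, remaining - sum(mu))
                   for mu in bounded_partitions(bound, remaining))
    return add_rows((n,) * max(n, 1), n)

coeffs = macmahon_coeffs(5)
assert coeffs == [1, 1, 3, 6, 13, 24]
assert [count_plane_partitions(n) for n in range(6)] == coeffs
```

In particular, level two gives three monomials and three two-box plane partitions, and level four gives the thirteen configurations relevant for the example below.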
The 3-Schur functions at the first three levels then come out as follows. They have simple $\rho$-independent norms and satisfy the relations (9) and (10) in the most natural way:

$$ \frac{S^{\rho}_{[2]} \cdot S_{[1]}}{||S_{[2]}||^2} \;=\; \frac{S^{\rho}_{[3]}}{||S_{[3]}||^2} + \sum_{\rho' \neq \rho} \frac{S^{\rho'}_{[2,1]}}{||S_{[2,1]}||^2} $$

Although we put the sign $\Longleftrightarrow$ here, we know from sec. 2 that such an identical correspondence between multiplication and decomposition should be tied to the validity of the Cauchy formula, and indeed it is true. Moreover, in accordance with still another natural expectation (6), all these $S$-functions are mutually orthogonal. To check all these formulas one needs to substitute the explicit expressions for the Mercedes-star vectors [7]. Note that the relation $\vec\alpha^{\rho}_3\, ||S_{[3]}||^{-2} + (\vec\beta^{\rho'}_3 + \vec\beta^{\rho''}_3)\, ||S_{[2,1]}||^{-2} = 0$ between the $\vec\alpha_3$ and $\vec\beta_3$ is necessary for the l.h.s. of (20) to hold, because $S^{\rho}_{[2]} S_{[1]}$ there does not depend on $\vec p_3$.

Anomaly in the Cauchy formula
Since the true multiplication formulas at level 4 differ from the expectation, i.e. do not fully match the decomposition formulas (24), we should observe a violation of the Cauchy formula. Indeed, this is what one immediately observes in (28). This formula does not contain any reference to $S_{[4]}$; this is in accordance with the multiplication rule, where this function does not appear either, so by itself this is not a violation. However, it instead contains an anomalous term $X\{p, p'\}$, reflecting the true difference between multiplication and decomposition, which we now analyze in a little more detail.
Repeating the argument of sec. 2, we multiply two copies of (28) at $(p, p')$ and $(p, p'')$ and use the fact that the expressions on the r.h.s. are bilinear exponentials, so that they can be substituted by the l.h.s. of still another copy of (28) at $(p, p'+p'')$. This gives (30), where $A \otimes B \otimes C$ denotes $A\{p\} \cdot B\{p'\} \cdot C\{p''\}$, with an extra term $\Delta X$ on the r.h.s. Substituting the products from the "true" multiplication formulas, we additionally find on the l.h.s. a contribution from the boxed terms in those formulas. This is exactly the same as the $\Delta X$ term on the r.h.s.:

$$ \Delta X \;:=\; X\{p, p'+p''\} - X\{p, p'\} - X\{p, p''\} \;=\; \frac{2}{16}\, (\vec p_2\, \vec p\,'_2)(\vec p_2\, \vec p\,''_2) \;-\; \frac{2}{96} \sum_\rho (\vec\alpha^{\rho}_2\, \vec p_2)^2 (\vec\alpha^{\rho}_2\, \vec p\,'_2)(\vec\alpha^{\rho}_2\, \vec p\,''_2) $$

Thus the anomaly in the Cauchy summation formula can indeed be used to measure the deviation of multiplication from skew decomposition.

Conclusion
In this note we explained the nearly rigid relation between the Cauchy summation formula (4) and the equality of the structure constants in the multiplication and skew-decomposition formulas (9) and (10). We illustrated this fact by an important example of the would-be 3-Schur functions for 4-box plane partitions: mismatches are simultaneously present and well correlated in expressions of both kinds. Thus it is sufficient to cure just one of the anomalies; the other will then be automatically fixed. This, however, remains to be done. More generally, this paper can help to explain the ubiquity of the Cauchy formula, i.e. why it appears in one and the same form for a broad variety of special functions, and why it is actually not restricted to the case of Young diagrams in the role of $\sigma$ and one-parameter sets of time variables in the role of $p$.