JORDAN–KRONECKER INVARIANTS OF LIE ALGEBRA REPRESENTATIONS AND DEGREES OF INVARIANT POLYNOMIALS

For an arbitrary representation ρ of a complex finite-dimensional Lie algebra, we construct a collection of numbers that we call the Jordan–Kronecker invariants of ρ. Among other interesting properties, these numbers provide lower bounds for degrees of polynomial invariants of ρ. Furthermore, we prove that these lower bounds are exact if and only if the invariants are independent outside of a set of large codimension. Finally, we show that under certain additional assumptions our bounds are exact if and only if the algebra of invariants is freely generated.


Introduction
The main idea of this paper has its roots in the theory of bi-Hamiltonian systems, where it was discovered that the algebraic structure of a pair of compatible Poisson brackets governs the dynamical properties of the bi-Hamiltonian systems related to it (see, e.g., [16], [18], [10], [2], [1]). This observation has recently been used in [3] to introduce Jordan-Kronecker invariants for a finite-dimensional Lie algebra g, which are directly related to a natural pencil of compatible Poisson brackets on g*. From the algebraic viewpoint, this construction is based on the simple fact that every element x ∈ g* defines a natural skew-symmetric bilinear form A_x(ξ, η) = ⟨x, [ξ, η]⟩ on g. The Jordan-Kronecker invariant of g is, by definition, the algebraic type of the pencil of forms A_{a+λb} for a generic pair (a, b) ∈ g* × g*. All possible algebraic types are described by the Jordan-Kronecker theorem on a canonical form of a pencil of skew-symmetric matrices (see, e.g., [15], [4]) and, in each dimension, there are only finitely many of them.
In the present paper, this construction is generalized to arbitrary finite-dimensional representations of finite-dimensional Lie algebras. In particular, we show how the classical theorem on the canonical form of a pair of linear maps can be applied in the study of Lie algebra representations and their invariants. In what follows, we assume that all objects are defined over the field C of complex numbers, although everything can be generalized to the case of an arbitrary field of characteristic zero (in which case some of the objects we consider are defined over the algebraic closure of the initial field).
Our construction can be outlined as follows. Let ρ : g → gl (V ) be a linear representation of a finite-dimensional Lie algebra g on a finite-dimensional vector space V . To this representation and an arbitrary element x ∈ V , one can naturally associate an operator Rx : g → V defined by Rx(ξ) = ρ(ξ)x. Consider a pair of such operators Ra, R b and the pencil Ra + λR b = R a+λb generated by them. It is well known that such a pencil can be completely characterized by a collection of quite simple numerical invariants (see Section 2 for details). In the present paper we show that many important and interesting properties of the representation ρ : g → gl (V ) are related to and can be derived from the invariants of such a pencil R a+λb generated by a generic pair (a, b) ∈ V ×V . In particular, one of our main results gives lower bounds for degrees of invariant polynomials of the representation ρ in terms of the numerical invariants of the associated pair of operators Ra, R b (Theorem 26). Furthermore, we show that these lower bounds are exact if and only if the invariant polynomials of ρ are independent outside of a subset of codimension ≥ 2 (Theorem 31). Finally, we prove that under certain additional assumptions our bound is exact if and only if the algebra of invariant polynomials of ρ is freely generated (Theorem 36). This generalizes several previously known results (see, in particular, [8], [5], [9]) which relate polynomiality of the algebra of invariants with the sum of degrees of the invariants (while our statements concern individual invariants). In the last Section 6 we show that our results have quite non-trivial implications already in the case of standard representations of simple Lie groups.

Canonical form of a pair of linear maps
In this section, we recall the normal form theorem for a pair of linear maps. We first state the theorem in the matrix form, and then discuss the invariant meaning of the ingredients involved.
Theorem 1 (On the Jordan-Kronecker normal form [4]). Consider two complex vector spaces U and V. Then for every two linear maps A, B : U → V there are bases in U and V in which the matrices of the pencil P = {A + λB} have the following block-diagonal form:

A + λB = diag(0_{m,n}, A_1 + λB_1, . . . , A_k + λB_k),   (1)

where 0_{m,n} is the zero m × n matrix, and each pair of corresponding blocks A_i and B_i takes one of the following forms:
• Jordan block with eigenvalue λ_0 ∈ C,
• Jordan block with eigenvalue ∞,
• horizontal Kronecker block,
• vertical Kronecker block.
The number and types of blocks in decomposition (1) are uniquely defined up to permutation.

Remark 2. The negative signs in the matrices B_i are not important and are introduced only to simplify some of the formulas to follow.
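For readers who want to experiment, the block structure of (1) is easy to probe symbolically. The following sketch (using sympy; the concrete block shapes are standard choices assumed for this sketch, with the minus signs of Remark 2) assembles a pencil from a Jordan pair and a horizontal Kronecker pair and watches the rank drop at the Jordan eigenvalue:

```python
import sympy as sp

lam = sp.symbols('lambda')

# 2x2 Jordan pair with eigenvalue 3: A_J + lambda*B_J = J(3) - lambda*Id
# (the minus sign follows Remark 2; the block shapes are assumed standard)
A_J = sp.Matrix([[3, 1], [0, 3]])
B_J = -sp.eye(2)

# 1x2 horizontal Kronecker pair: full rank for every lambda
A_K = sp.Matrix([[1, 0]])
B_K = sp.Matrix([[0, -1]])

A = sp.diag(A_J, A_K)         # 3x4 block-diagonal matrices U -> V
B = sp.diag(B_J, B_K)

P = A + lam * B
ranks = {mu: P.subs(lam, mu).rank() for mu in [0, 1, 3, 5]}
print(ranks)                  # {0: 3, 1: 3, 3: 2, 5: 3}
```

The rank is 3 for every λ except the Jordan eigenvalue λ = 3, where it drops to 2; the Kronecker block contributes full rank for all λ.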

Remark 3.
In what follows, we interchangeably use two notations for a pencil: {A + λB} and {αA + βB}. These two notations are equivalent if we allow λ to be infinite (in which case A + λB = B) and do not distinguish between operators that are scalar multiples of each other. Under this convention, (β : α) in {αA + βB} are simply homogeneous coordinates of λ ∈ C ∪ {∞} in {A + λB}.
It is convenient to regard the zero block 0_{m,n} in (1) as a block-diagonal matrix composed of m vertical Kronecker blocks of size 1 × 0 and n horizontal Kronecker blocks of size 0 × 1.

Definition 4. The numbers of columns h_i of the horizontal Kronecker blocks are called the horizontal indices of the pencil P, and the numbers of rows v_i of the vertical Kronecker blocks are called its vertical indices.

In particular, in view of the above interpretation of the block 0_{m,n}, the first n horizontal indices and the first m vertical indices are equal to 1. We will denote the total number p of horizontal indices by n_h, and the total number q of vertical indices by n_v.
Remark 5. There also exist closely related notions of minimal row and column indices. Namely, minimal column indices are equal to the numbers h i − 1, while minimal row indices are equal to v i − 1. However, we find the terms horizontal and vertical indices more intuitive and more suitable for our purposes.
Definition 6. The total number of columns in the horizontal Kronecker blocks, h_tot = Σ_{i=1}^{n_h} h_i, is called the total Kronecker h-index of the pencil P. Similarly, the total number of rows in the vertical Kronecker blocks, v_tot = Σ_{i=1}^{n_v} v_i, is called the total Kronecker v-index of P.
We now give an invariant interpretation for eigenvalues of Jordan blocks, as well as for vertical and horizontal indices. We begin with Jordan blocks.
Definition 7. The rank of the pencil P = {A + λB} is the number rk P = max λ∈C rk (A + λB).
Definition 8. The characteristic polynomial χ(α, β) of the pencil P is defined as the greatest common divisor of all the r × r minors of the matrix αA + βB, where r = rk P.
One can show that the polynomial χ(α, β) does not depend on the choice of bases and is therefore an invariant of the pencil. Furthermore, it is easy to see that χ(α, β) is the product of the characteristic polynomials of all the Jordan blocks. These polynomials, in turn, are called elementary divisors of the pencil and also admit a natural invariant interpretation; see [4] for details.

Proposition 9. The eigenvalues of Jordan blocks can be characterized as those λ ∈ C for which the rank of A + λB drops, i.e., rk(A + λB) < r = rk P. The infinite eigenvalue appears in the case when rk B < r. In other words, the eigenvalues of Jordan blocks, written as λ = (β : α) ∈ CP¹, are the solutions of the characteristic equation χ(α, β) = 0. Moreover, the multiplicity of each eigenvalue coincides with the multiplicity of the corresponding root of the characteristic equation. Jordan blocks are absent if and only if the rank of all non-trivial linear combinations αA + βB is the same.
Proof. This is easily verified when A and B are written in the Jordan-Kronecker form (1). The crucial point is that rk(A_i + λB_i) = const for any pair (A_i, B_i) of Kronecker blocks, so only Jordan blocks affect the dependence of rk(A + λB) on λ.

Proposition 10.
(1) The number n_h of horizontal indices (or, equivalently, the number of horizontal Kronecker blocks) is equal to dim U − rk P.
(2) The number n_v of vertical indices (or, equivalently, the number of vertical Kronecker blocks) is equal to dim V − rk P.
Proof. Indeed, for generic λ the kernel of A i + λB i is one-dimensional if the pair of blocks (A i , B i ) is horizontal Kronecker, and is trivial otherwise. So, dim Ker (A + λB) is equal to the number of horizontal blocks. Similarly, dim Ker (A + λB) * is the number of vertical blocks.
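Definition 8 and Proposition 9 are easy to check mechanically on a toy pencil. The following sympy sketch (the 2 × 2 matrices are an illustrative choice) computes χ as a gcd of minors and locates the rank drops:

```python
import sympy as sp
from itertools import combinations
from functools import reduce

alpha, beta = sp.symbols('alpha beta')

# a toy pencil with two 1x1 Jordan blocks: eigenvalues 0 and infinity
A = sp.Matrix([[1, 0], [0, 0]])
B = sp.Matrix([[0, 0], [0, 1]])

r = (A + 5 * B).rank()        # rank of the pencil (lambda = 5 is generic here)

# characteristic polynomial = gcd of all r x r minors of alpha*A + beta*B
M = alpha * A + beta * B
minors = [M[list(rows), list(cols)].det()
          for rows in combinations(range(M.rows), r)
          for cols in combinations(range(M.cols), r)]
chi = reduce(sp.gcd, minors)
print(sp.factor(chi))         # alpha*beta: roots (beta:alpha) = (0:1), (1:0)

# the rank of A + lambda*B drops exactly at the eigenvalues 0 and infinity
assert A.rank() < r and B.rank() < r
```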
We will also need the following relation between the total indices and the degree of the characteristic polynomial.

Proposition 11. deg χ + h_tot + v_tot = dim U + dim V − rk P.

Proof. The number of columns in the matrices A and B is equal to dim U. On the other hand, this number can be computed as the total number of columns in Jordan blocks, which is equal to deg χ, plus the total number of columns in horizontal Kronecker blocks, which is equal to h_tot, plus the total number of columns in vertical blocks, which is equal to v_tot − n_v = v_tot − dim V + rk P (see Proposition 10). The result follows.
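The counts of Kronecker blocks and the column-count relation above can also be verified symbolically; a small sketch (a toy 3 × 3 pencil assembled from one horizontal and one vertical Kronecker pair, an illustrative choice):

```python
import sympy as sp

lam = sp.symbols('lambda')

# one horizontal Kronecker pair (1x2) plus one vertical pair (2x1):
# A, B are 3x3, and the pencil has rank 2 for every lambda
A = sp.diag(sp.Matrix([[1, 0]]), sp.Matrix([[1], [0]]))
B = sp.diag(sp.Matrix([[0, -1]]), sp.Matrix([[0], [-1]]))

P = (A + lam * B).subs(lam, 7)     # a generic value of lambda
r = P.rank()

n_h = P.cols - r                   # = dim U - rk P, counts horizontal blocks
n_v = P.rows - r                   # = dim V - rk P, counts vertical blocks
print(r, n_h, n_v)                 # 2 1 1

# the relation deg chi + h_tot + v_tot = dim U + dim V - rk P:
# here 0 + 2 + 2 = 3 + 3 - 2
assert 0 + 2 + 2 == P.cols + P.rows - r
```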
Finally, we characterize the horizontal and vertical indices themselves. We begin with the following preliminary statement.
Proposition 12. Let A be regular in a pencil P = {A + λB}, i.e., rk A = rk P. Then for every u_0 ∈ Ker A there exists a sequence of vectors u_0, . . . , u_l ∈ U such that the expression u(λ) = Σ_{j=0}^{l} u_j λ^j is a solution of the equation

(A + λB) u(λ) = 0.   (3)

Similarly, for every u_0 ∈ Ker A* there exists a sequence of vectors u_0, . . . , u_l ∈ V* such that the expression u(λ) = Σ_{j=0}^{l} u_j λ^j is a solution of the dual equation

(A + λB)* u(λ) = 0.   (4)

Proof. We prove only the first statement, as the second one is similar. Notice that since A is regular, its kernel is generated by the basis vectors in U corresponding to the leftmost columns of the horizontal Kronecker blocks. So, by linearity, it suffices to consider the case when u_0 is one of those basis vectors. In this case, as u_1, u_2, . . . , one takes the remaining basis vectors of U generating the given horizontal block (which, in particular, means that the number l is equal to one of the horizontal indices minus one).
Proposition 13. Let P = {A + λB} and let A ∈ P be regular. Let also

u_i(λ) = Σ_j u_{ij} λ^j,   i = 1, . . . , l,

where u_{ij} ∈ U, be polynomial solutions of (3) such that the vectors u_i(0) = u_{i0} are linearly independent. Suppose also that deg u_1 ≤ · · · ≤ deg u_l. Then, for each i = 1, . . . , l, we have

deg u_i ≥ h_i − 1,   (5)

where h_1 ≤ h_2 ≤ . . . is the ordered sequence of horizontal indices of P. Similarly, if the u_i's are polynomial solutions of the dual problem (4), then

deg u_i ≥ v_i − 1,   (6)

where v_1 ≤ v_2 ≤ . . . are the vertical indices of P.
Note that the proof of Proposition 12 provides a way to construct polynomials u i (λ) for which inequalities (5) and (6) become equalities. This, along with Proposition 13, immediately implies the following: Corollary 14. Horizontal indices h 1 , . . . , hp are given by h i = r i +1, where r 1 , . . . , rp are the minimal degrees of independent solutions of (3). Similarly, vertical indices v 1 , . . . , vq are given by v i = r i + 1, where r 1 , . . . , r q are the minimal degrees of independent solutions of the dual problem (4).
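Corollary 14 can be tested computationally: for a single horizontal block of index h, the minimal degree of a polynomial solution of (3) should be h − 1. A sketch in sympy (the block matrices are assumed standard shapes, cf. Remark 2):

```python
import sympy as sp
from functools import reduce

lam = sp.symbols('lambda')

# a single horizontal Kronecker pair with index h = 3 (a 2x3 block;
# the block shapes are an assumption of this sketch)
A = sp.Matrix([[1, 0, 0], [0, 1, 0]])
B = sp.Matrix([[0, -1, 0], [0, 0, -1]])

# nullspace of A + lambda*B over the field of rational functions in lambda
u = (A + lam * B).nullspace()[0]
scale = reduce(sp.lcm, [sp.denom(c) for c in u], sp.Integer(1))
u = sp.expand(u * scale)                     # clear denominators
print(u.T)                                   # Matrix([[lambda**2, lambda, 1]])

# minimal degree of a polynomial solution of (A + lambda*B) u(lambda) = 0
deg = max(sp.degree(c, lam) for c in u if c != 0)
assert deg == 3 - 1                          # = h - 1, matching Corollary 14
```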

Remark 15.
Recall that the numbers h_i − 1, v_i − 1 are called minimal indices (see Remark 5). Corollary 14 explains in which sense these numbers are minimal.
Proof of Proposition 13. We only consider the horizontal case, as the vertical one is analogous. Let ũ_i(λ), i = 1, . . . , n_h, be the polynomial solutions of (3) constructed in the proof of Proposition 12, with deg ũ_i = h_i − 1. For dimension reasons, these solutions form a basis of the kernel of A + λB, when the latter is considered as a matrix over the field of formal Laurent series (we consider Laurent series that are finite in the negative direction). Therefore, we have

u_i(λ) = Σ_{j=1}^{n_h} f_{ij}(λ) ũ_j(λ),

where the f_{ij}'s are formal Laurent series. Furthermore, from the linear independence of the coefficients of the ũ_j's it follows that the f_{ij}'s are in fact power series (otherwise the left-hand side would have terms of negative degree in λ). Now assume that (5) does not hold for i = k, and let k be the minimal number with this property. Then deg u_i ≤ deg u_k < h_k − 1 ≤ deg ũ_j for all i ≤ k and j ≥ k, which, again by the linear independence of the coefficients of the ũ_j's, forces

f_{ij} = 0 for all i ≤ k, j ≥ k.

But this means that the rank of the matrix (f_{ij})_{i≤k} is less than k, which contradicts the linear independence of the vectors u_i(0). So, the proposition is proved.

Jordan-Kronecker invariants of Lie algebra representations
In this section we define the main object of the present paper: Jordan-Kronecker invariants of Lie algebra representations.
Consider a finite-dimensional linear representation ρ : g → gl (V ) of a finite-dimensional Lie algebra g. To each point x ∈ V , the representation ρ assigns a linear operator Rx : g → V , Rx(ξ) = ρ(ξ)x ∈ V . Since the mapping x → Rx is in essence equivalent to ρ, many natural algebraic objects related to ρ can be defined in terms of Rx.
Example 16. The stabilizer of x ∈ V can be defined as St_x = Ker R_x = {ξ ∈ g : ρ(ξ)x = 0}.

Now consider the pencil of such operators generated by a pair of vectors a, b ∈ V. By the algebraic type of a pencil R_a + λR_b = R_{a+λb}, we will understand the following collection of discrete invariants:
• the number of distinct eigenvalues of Jordan blocks,
• the number and sizes of the Jordan blocks associated with each eigenvalue,
• the horizontal and vertical indices.

Proposition 17. The algebraic type of the pencil R_{a+λb} depends only on the two-dimensional subspace of V spanned by a and b. In other words, the algebraic type characterizes two-dimensional subspaces in V or, which is the same, one-dimensional projective subspaces in the projectivization of V.
Proof. It is easy to see that replacing two operators with their independent linear combinations does not change the discrete invariants in the Jordan-Kronecker normal form (only eigenvalues of the Jordan blocks do change).
Since the number of different algebraic types is finite, it is easily seen that in the space V × V there exists a non-empty Zariski open subset of pairs (a, b) for which the algebraic type of the pencil R a+λb will be one and the same (cf. [3, Prop. 1]).
Definition 18. A pair (a, b) ∈ V × V from this subset, as well as the corresponding pencil R_{a+λb}, will be called generic.
Definition 19. The Jordan-Kronecker invariant of ρ is the algebraic type of a generic pencil R a+λb .
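As a toy illustration of Definitions 18 and 19, one can compute the algebraic type of a generic pencil for the coadjoint representation of so(3), using the standard identification of so(3) and its dual with C³ equipped with the cross product (this identification is an assumption of the sketch):

```python
import sympy as sp

lam = sp.symbols('lambda')

def hat(v):
    """Cross-product matrix: hat(v) w = v x w. Under the identification of
    so(3)* with C^3, this is (up to sign) the operator R_v of the
    coadjoint representation."""
    x, y, z = v
    return sp.Matrix([[0, -z, y], [z, 0, -x], [-y, x, 0]])

a = sp.Matrix([1, 0, 0])
b = sp.Matrix([0, 1, 0])          # a fixed pair, sufficiently generic here

P = hat(a + lam * b)
# rank is 2 for every lambda (and for b itself) => no Jordan blocks
assert all(P.subs(lam, mu).rank() == 2 for mu in [0, 1, 2, 5])
assert hat(b).rank() == 2

n_h = 3 - 2    # dim g - rk P: one horizontal index (= dim of the stabilizer)
n_v = 3 - 2    # dim V - rk P: one vertical index (= codim of the orbit)
# Since deg(chi) = 0 and deg(chi) + h_tot + v_tot = dim g + dim V - rk P = 4,
# skew-symmetry gives h_tot = v_tot = 2: the single vertical index is
# v_1 = 2, the degree of the Casimir x^2 + y^2 + z^2.
print(n_h, n_v)   # 1 1
```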

Interpretation of Jordan-Kronecker invariants
In this section we give an interpretation for some of the Jordan-Kronecker invariants. All properties we discuss here are quite elementary but will be useful in the sequel. We begin with the Jordan part.
Definition 20. A point x ∈ V is called regular if rk R_x is maximal possible. Those points which are not regular are called singular.

The set of singular points will be denoted by Sing ⊂ V. In terms of R_x we have

Sing = {x ∈ V : rk R_x < max_{y∈V} rk R_y}.

The dimension of the stabilizer of a regular point is a natural characteristic of ρ, and we will denote it by dim St_reg. Though in this paper we never use the action of the Lie group G associated with the Lie algebra g, it will be convenient to keep in mind this action and its orbits. We will need, however, not the orbits themselves but only their dimensions. In particular, for the dimension of a regular orbit we will use the notation dim O_reg. Notice that T_x O_x = Im R_x and dim O_x = rk R_x.
Proposition 21.
(1) The eigenvalues of Jordan blocks of a pencil R_{a+λb} are those values of λ ∈ C for which the line a + λb intersects the singular set Sing. In particular, R_{a+λb} has a Jordan block with eigenvalue ∞ if and only if b is singular.
(2) A generic pencil R_a + λR_b has no Jordan blocks if and only if the codimension of the singular set Sing is greater than or equal to 2.
Proof. The first statement is an immediate corollary of Proposition 9. Furthermore, it follows from Proposition 9 that a generic pencil Ra + λR b has no Jordan blocks if and only if all these operators are of the same rank, i.e., a generic line a+λb does not intersect the singular set Sing. Clearly, the latter condition is fulfilled if and only if codim Sing ≥ 2.
Let us discuss the case codim Sing = 1 in more detail. Consider the matrix of the operator R_x and take all of its minors of size r × r, where r = dim O_reg, that do not vanish identically (such minors certainly exist). We consider them as polynomials p_1(x), . . . , p_N(x) on V. The singular set Sing ⊂ V is then given by the system of polynomial equations

p_1(x) = · · · = p_N(x) = 0.

This set is of codimension one if and only if these polynomials possess a non-trivial greatest common divisor, which we denote by p_ρ.
Thus, we have p_i(x) = p_ρ(x) h_i(x), which implies that the singular set Sing can be represented as the union of two subsets:

Sing = Sing_0 ∪ Sing_1,  where Sing_0 = {x ∈ V : p_ρ(x) = 0},  Sing_1 = {x ∈ V : h_1(x) = · · · = h_N(x) = 0}.

It is easy to see that p_ρ(x) is a semi-invariant of the representation ρ. This follows from the fact that the action of G leaves the set Sing_0 invariant and therefore may only multiply p_ρ by a character of G. We will refer to the polynomial p_ρ as the fundamental semi-invariant of ρ. The fundamental semi-invariant is closely related to the characteristic polynomial χ_{a,b} of the pencil R_{a+λb}:

Proposition 22. Let a, b ∈ V be such that the set {αa + βb | (α, β) ≠ (0, 0)} of their non-trivial linear combinations does not intersect Sing_1 and is not completely contained in Sing (i.e., contains at least one regular element). Then, up to a constant factor,

χ_{a,b}(α, β) = p_ρ(αa + βb).

Proof. As above, let p_1(x), . . . , p_N(x) be the r × r minors of the matrix R_x. Then p_i(x) = p_ρ(x) h_i(x) for certain polynomials h_i(x). Substituting x = αa + βb, we get

p_i(αa + βb) = p_ρ(αa + βb) h_i(αa + βb).

Further, notice that since the set {αa + βb | (α, β) ≠ (0, 0)} contains a regular element, the rank of the pencil R_{a+λb} is equal to dim O_reg. Therefore, χ_{a,b} is, by definition, the greatest common divisor of the expressions p_i(αa + βb), where the latter are viewed as polynomials in α and β. So,

χ_{a,b}(α, β) = p_ρ(αa + βb) · gcd(h_1(αa + βb), . . . , h_N(αa + βb)).

For the sake of contradiction, assume the latter factor is non-constant. Then there exist α and β, not simultaneously equal to zero, such that h_i(αa + βb) = 0 for every i (here we use that the polynomials h_i(αa + βb) are homogeneous). But this means that the set {αa + βb | (α, β) ≠ (0, 0)} intersects Sing_1, which is not the case. So, χ_{a,b} = p_ρ(αa + βb) up to a constant factor.

We now turn to the Kronecker part. Following Section 2, for an arbitrary pencil R_{a+λb} we define the numbers v_tot(a, b) and h_tot(a, b). These numbers, computed for a generic pair (a, b), are invariants of the representation ρ. We denote them by v_tot(ρ) and h_tot(ρ) and call them the total Kronecker v-index and h-index of ρ.
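Proposition 22 is easy to test on a small example. The sketch below takes the coadjoint representation of the two-dimensional affine algebra aff(1); the matrix of R_p is computed under one common sign convention for ad*, an assumption that does not affect the conclusion:

```python
import sympy as sp
from itertools import combinations
from functools import reduce

p1, p2, alpha, beta = sp.symbols('p1 p2 alpha beta')

# coadjoint representation of aff(1): [e1, e2] = e2; in coordinates
# (p1, p2) on the dual space, R_p has the matrix below (one sign convention)
R = sp.Matrix([[0, p2], [-p2, 0]])

r = 2                                    # = dim O_reg (rank of R_p for p2 != 0)
minors = [R[list(rows), list(cols)].det()
          for rows in combinations(range(2), r)
          for cols in combinations(range(2), r)]
p_rho = reduce(sp.gcd, minors)
print(p_rho)                             # p2**2, the fundamental semi-invariant

# Proposition 22: chi_{a,b}(alpha, beta) = p_rho(alpha*a + beta*b)
a, b = sp.Matrix([1, 1]), sp.Matrix([0, 1])
chi = R.subs(p2, alpha * a[1] + beta * b[1]).det()
assert sp.expand(chi - (alpha + beta)**2) == 0
```

Here Sing = {p2 = 0} has codimension 1, so a generic pencil does acquire a Jordan block, with eigenvalue at the intersection of the line αa + βb with Sing.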
Similarly, let n h (ρ), nv(ρ) be the numbers of horizontal and vertical indices for a generic pencil R a+λb .

Proposition 24.
(1) The number n h (ρ) of horizontal indices of ρ is equal to dim Streg.
(2) The number nv(ρ) of vertical indices of ρ is equal to codim Oreg.
Proof. This follows from rk R_{a+λb} = dim O_{a+λb} and Proposition 10.

Combining Propositions 11, 22 and 24, for a generic pencil we also obtain the relation

deg p_ρ + h_tot(ρ) + v_tot(ρ) = dim g + dim V − dim O_reg.   (7)

Degrees of invariant polynomials and vertical indices
This section contains our main results. The first result gives a bound on the degrees of invariant polynomials in terms of the vertical indices. In the case of the coadjoint representation it was obtained by A. Vorontsov [17].

Theorem 26. Let f_1, . . . , f_k be algebraically independent polynomial invariants of a representation ρ, ordered so that deg f_1 ≤ · · · ≤ deg f_k, and let v_1(ρ) ≤ v_2(ρ) ≤ . . . be the vertical indices of ρ. Then

deg f_i ≥ v_i(ρ).   (8)

Proof. Let (a, b) ∈ V × V be generic. Expanding f_i(a + λx) in powers of λ, we get

f_i(a + λx) = Σ_{j≥0} f_{i,j}(x) λ^j.

Furthermore, since f_i is an invariant, we have

R*_{a+λx} d_x f_i(a + λx) = R*_{a+λx} ( Σ_{j≥0} d f_{i,j}(x) λ^j ) = 0.

Since f_{i,0}(x) = f_i(a) is a constant, the first term in the latter sum is zero, so we can divide the latter equation by λ:

R*_{a+λx} ( Σ_{j≥1} d f_{i,j}(x) λ^{j−1} ) = 0.

Thus, for x = b, the expression u_i(λ) = Σ_{j≥1} d f_{i,j}(b) λ^{j−1} is a polynomial solution of the dual problem (4) for the pencil {R_a + λR_b} of degree at most deg f_i − 1. Also notice that d f_{i,1}(x) = d f_i(a) for any x ∈ V, so if we take a such that the differentials of the f_i's at a are independent, then Proposition 13 gives exactly the desired estimate (8).
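The two ingredients of the proof, the invariance identity R*_x df(x) = 0 and the degree count for df_i(a + λx), can be checked symbolically for the Casimir of so(3) (the cross-product model of the coadjoint representation is an assumption of this sketch):

```python
import sympy as sp

lam = sp.symbols('lambda')
x1, x2, x3 = sp.symbols('x1 x2 x3')

def hat(v):
    # R_v for the coadjoint representation of so(3) ~ (C^3, cross product)
    a, b, c = v
    return sp.Matrix([[0, -c, b], [c, 0, -a], [-b, a, 0]])

f = x1**2 + x2**2 + x3**2        # the Casimir: an invariant of degree 2
grad = sp.Matrix([f.diff(v) for v in (x1, x2, x3)])

# invariance reads R_x^* df(x) = 0, i.e. R_x^T grad f(x) = 0 identically
assert sp.simplify(hat((x1, x2, x3)).T * grad) == sp.zeros(3, 1)

# along the line y = a + lambda*x, df(y) is a polynomial (in lambda)
# solution of the dual problem of degree deg f - 1 = 1, so Proposition 13
# forces deg f >= v_1 = 2, here with equality
a, x = sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0])
u = grad.subs(list(zip((x1, x2, x3), a + lam * x)))
assert max(sp.degree(c, lam) for c in u if c != 0) == 1
```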
Another immediate corollary of Theorem 26 is the following.

Corollary 30. Suppose that there exist algebraically independent invariant polynomials f_1, f_2, . . . , f_q, q = codim O_reg, of a representation ρ satisfying the condition

Σ_{i=1}^{q} deg f_i = v_tot(ρ).   (10)

Then

deg f_i = v_i(ρ) for all i = 1, . . . , q.   (11)

Proof. Indeed, (10) is equivalent to

Σ_{i=1}^{q} (deg f_i − v_i(ρ)) = 0,

which, in view of (8), is equivalent to (11).
We now investigate in more detail the case when the equivalent conditions (10) and (11) hold.
Theorem 31 (On the set where the invariants become dependent). Let ρ : g → gl(V) be a representation of a finite-dimensional Lie algebra g on a finite-dimensional vector space V. Assume that f_1, f_2, . . . , f_q are algebraically independent invariant polynomials of ρ and q = codim O_reg. Let also v_1(ρ), . . . , v_q(ρ) be the vertical indices of ρ. Then the following conditions are equivalent:
(1) The degrees of the f_i's are equal to the vertical indices: deg f_i = v_i(ρ).
(2) The sum of the degrees of the f_i's is equal to the total vertical index of ρ: Σ_{i=1}^{q} deg f_i = v_tot(ρ).
(3) The set where the differentials df_1, . . . , df_q are linearly dependent has codimension ≥ 2 in V.
(4) The set where the differentials df_1, . . . , df_q are linearly dependent is contained in the set Sing_1, i.e., in the codimension ≥ 2 stratum of the set of singular points of ρ in V.
The proof is based on the following lemma, which, in particular, gives an interpretation of the total Kronecker indices.

Lemma 32. Let r = dim O_reg. For x ∈ V, consider the operator Λ^r R_x : Λ^r g → Λ^r V, where Λ^r R_x(ξ_1 ∧ · · · ∧ ξ_r) = R_x(ξ_1) ∧ · · · ∧ R_x(ξ_r). Then

Λ^r R_x = p_ρ(x) · ω_h(x) ⊗ ω_v(x),   (12)

where ω_h, ω_v are homogeneous polynomials in x with values in Λ^r g* and Λ^r V respectively. The polynomials ω_h, ω_v are defined uniquely up to constant factors, are not divisible by any non-trivial scalar polynomial in x, and are of degrees

deg ω_h = h_tot(ρ) − n_h(ρ) = h_tot(ρ) − dim St_reg,
deg ω_v = v_tot(ρ) − n_v(ρ) = v_tot(ρ) − codim O_reg.   (13)

Remark 33. Formulas (13) can also be rewritten as

deg ω_h = Σ_i (h_i(ρ) − 1),   deg ω_v = Σ_i (v_i(ρ) − 1),

i.e., the degrees of ω_h, ω_v are given by the sums of the minimal indices. Also note that the two different formulas for each of the degrees in (13) are equivalent by Proposition 24.
Proof of Lemma 32. The matrix entries of the operator Λ^r R_x are the r × r minors of the matrix R_x. The greatest common divisor of these minors is, by definition, the fundamental semi-invariant p_ρ(x). So, Λ^r R_x = p_ρ(x) S(x), where S(x) is a polynomial in x with values in Hom(Λ^r g, Λ^r V). Further, note that the rank of R_x for regular x is r = dim O_reg, so dim Im R_x = r, and hence the space Im Λ^r R_x = Λ^r Im R_x is generically one-dimensional. Therefore, the image of S(x) is generically one-dimensional too. So, if we regard S(x) as a matrix over the field of rational functions in x, its image has dimension 1, which means that it is decomposable:

S(x) = ω_h(x) ⊗ ω_v(x),

where ω_h, ω_v are rational functions in x with values in Λ^r g* and Λ^r V respectively. Furthermore, multiplying, if necessary, ω_h by a scalar rational function of x and dividing ω_v by the same function, we can arrange that ω_h is polynomial and that the greatest common divisor of its coefficients is equal to 1. But then ω_v must be polynomial too, because the product ω_h ⊗ ω_v = S is polynomial. Hence, the existence of factorization (12) is proved.
To prove that ω h , ωv are not divisible by any non-trivial scalar polynomial, recall that pρ is the greatest common divisor of the matrix entries of Λ r Rx. Therefore, the greatest common divisor of the matrix entries of S = 1 pρ Λ r Rx is 1, and hence the same is true for its factors ω h , ωv, as desired.
To prove uniqueness, we use the fact that the representation of a rank 1 operator as a tensor product is unique up to multiplying the first factor by an element of the base field and dividing the second factor by the same element. Therefore, if we have another factorization

S(x) = ω'_h(x) ⊗ ω'_v(x),

then there exists a scalar rational function μ(x) such that ω'_h(x) = μ(x) ω_h(x) and ω'_v(x) = μ(x)^{−1} ω_v(x).
If, moreover, ω'_h, ω'_v are polynomials, then it follows that ω_h is divisible by the denominator of μ, while ω_v is divisible by the numerator of μ (we assume that μ is written in reduced form). But we know that ω_h, ω_v do not have non-trivial polynomial factors. Therefore, μ(x) must be constant, which proves that factorization (12) is unique. This also shows that the polynomials ω_h, ω_v are homogeneous, because if they were not, then the expression ω_h(λx) ⊗ ω_v(λx), divided by a suitable constant, would provide another factorization of S(x).

It now remains to compute the degrees of ω_h, ω_v. To that end, we restrict (12) to a 2-plane x = αa + βb, where (a, b) ∈ V × V is generic. This gives

Λ^r R_{αa+βb} = χ_{a,b}(α, β) · ω_h(αa + βb) ⊗ ω_v(αa + βb),   (14)

where we used Proposition 22 to rewrite the first factor in the right-hand side. Now, consider the Jordan-Kronecker normal form of the pencil R_{αa+βb}. Take all columns of all vertical Kronecker blocks of non-zero width (these can be naturally viewed as vectors in V). In addition to that, take the basis vectors in V corresponding to the rows of all Jordan blocks and of all horizontal Kronecker blocks of non-zero height. Taken together, all these vectors form a basis of Im R_{αa+βb} for generic α, β. Thus, their wedge product, which we denote by f(α, β), is a non-trivial polynomial in α, β with values in Im Λ^r R_{αa+βb}. At the same time, by (14) the latter space is generated by ω_v(αa + βb), so we must have

f(α, β) = μ(α, β) · ω_v(αa + βb),

where μ is a rational function. Note also that since ω_v(x) is not divisible by a non-trivial scalar polynomial, its vanishing set in PV has codimension at least 2. A generic projective line αa + βb does not intersect this set, which means that ω_v(αa + βb) does not vanish at all (unless α = β = 0), and hence μ(α, β) is actually a polynomial function. This gives

deg ω_v ≤ v_tot(ρ) − n_v(ρ),   (15)

where we used that deg f = v_tot(ρ) − n_v(ρ) by construction.
Furthermore, an analogous argument applied to the dual pencil gives

deg ω_h ≤ h_tot(ρ) − n_h(ρ).   (16)

At the same time, the entries of S(x) = (1/p_ρ(x)) Λ^r R_x are polynomials of degree r − deg p_ρ, so the sum of the left-hand sides in (15) and (16) is

deg ω_h + deg ω_v = r − deg p_ρ,

while the sum of the right-hand sides in (15) and (16) is

h_tot(ρ) + v_tot(ρ) − n_h(ρ) − n_v(ρ) = h_tot(ρ) + v_tot(ρ) − dim St_reg − codim O_reg = r − deg p_ρ,

where the first equality uses Proposition 24, while the last one uses (7). So, the sum of the left-hand sides in (15) and (16) is equal to the sum of the right-hand sides, and thus both inequalities must be equalities, providing the desired formulas for deg ω_v and deg ω_h.
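The rank-one factorization of Λ^r R_x can be watched directly on a small example: the three-dimensional solvable algebra that reappears in Remark 37 below. The matrix of R_p is computed under one sign convention for ad*, an assumption of this sketch:

```python
import sympy as sp
from itertools import combinations
from functools import reduce

p1, p2 = sp.symbols('p1 p2')

# R_p for the coadjoint representation of the 3-dimensional algebra
# [z,x] = x, [z,y] = -2y (one sign convention for ad*)
R = sp.Matrix([[0, 0, -p1], [0, 0, 2*p2], [p1, -2*p2, 0]])

r = 2                                   # dim O_reg
pairs = list(combinations(range(3), r))
# matrix of Lambda^r R_p: its entries are the r x r minors of R_p
L = sp.Matrix(len(pairs), len(pairs),
              lambda i, j: R[list(pairs[i]), list(pairs[j])].det())

# here p_rho = gcd of the entries is trivial, and S = Lambda^r R_p has
# rank 1 over the field of rational functions, i.e. it decomposes as
# omega_h tensor omega_v
p_rho = reduce(sp.gcd, [e for e in L if e != 0])
print(p_rho, L.rank())    # 1 1
```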
Proof of Theorem 31. The equivalence 1 ⇔ 2 follows from Theorem 26 (cf. Corollary 30), and the implication 4 ⇒ 3 is obvious, since codim Sing_1 ≥ 2. It remains to prove 2 ⇒ 4 and 3 ⇒ 2. Fix a volume form on V and let ω(x) ∈ Λ^r V, r = dim O_reg = dim V − q, be the result of contracting this volume form with df_1(x) ∧ · · · ∧ df_q(x). In other words, ω is a multi-vector dual to the form df_1 ∧ · · · ∧ df_q. Note that for generic x the differentials df_1(x), . . . , df_q(x) span the annihilator of the tangent space T_x O_x to the orbit of ρ. Therefore, ω ∈ Λ^r T_x O_x. But the latter space is generically one-dimensional and generated by the form ω_v(x) given by (12). Therefore, we must have

ω(x) = μ(x) ω_v(x),   (17)

where μ(x) is a rational function. But the form ω_v is not divisible by any scalar polynomial (see Lemma 32), so μ(x) is in fact a polynomial. Furthermore, we have

deg ω = Σ_{i=1}^{q} (deg f_i − 1) = Σ_{i=1}^{q} deg f_i − q.   (18)

So, assuming Condition 2 of the theorem, i.e., Σ deg f_i = v_tot(ρ), we get

deg ω = v_tot(ρ) − q = v_tot(ρ) − n_v(ρ).

But the latter number is equal to deg ω_v by Lemma 32, so we must conclude that μ(x) in (17) is a constant, and the zeros of ω(x) are the same as the zeros of ω_v(x). Furthermore, the zeros of ω are exactly those points where df_1, . . . , df_q become linearly dependent, while the zeros of ω_v are contained in the set of zeros of ω_h(x) ⊗ ω_v(x) = (1/p_ρ) Λ^r R_x. But the latter set is exactly Sing_1, which proves the implication 2 ⇒ 4.
We now prove 3 ⇒ 2. Condition 3 says that df_1, . . . , df_q are linearly dependent on a set of codimension ≥ 2, which is equivalent to saying that the zero set of ω(x) has codimension ≥ 2. But this is only possible if μ(x) in (17) is constant, in which case we have deg ω = deg ω_v. In view of (18), this gives Σ_{i=1}^{q} deg f_i = deg ω_v + codim O_reg = v_tot(ρ), as desired.
As can be seen from the proof, in general the set Sing_1 consists of two components: the zeros of ω_v (which, under the conditions of Theorem 31, coincide with the set where the invariants become dependent) and the zeros of ω_h. This immediately gives the following:

Corollary 34. Let g be a finite-dimensional Lie algebra, and let ρ : g → gl(V) be either its coadjoint representation or any finite-dimensional representation such that the stabilizer of a generic element of V is trivial. Assume that q = codim O_reg and f_1, f_2, . . . , f_q are algebraically independent invariant polynomials of ρ whose degrees are equal to the vertical indices: deg f_i = v_i(ρ). Then their differentials df_1, df_2, . . . , df_q are linearly dependent exactly on the set Sing_1.
Remark 35. In the case of the coadjoint representation, this is equivalent to a result of D. Panyushev (Theorem 1.2 in [11]), proved there in the case codim Sing ≥ 2.
Proof of Corollary 34. Indeed, for the coadjoint representation due to skew-symmetry of Rx we have ω h = ωv, while in the trivial stabilizer case we have that ω h is of degree 0 and hence constant. In both cases, the zero set of ω h ⊗ ωv is the same as the zero set of ωv, hence the result.
Theorem 36 (On polynomiality of the algebra of invariants). Let ρ : g → gl(V) be a representation of a finite-dimensional Lie algebra g on a finite-dimensional vector space V. Assume that f_1, f_2, . . . , f_q are algebraically independent invariant polynomials of ρ and q = codim O_reg. Let also v_1(ρ), . . . , v_q(ρ) be the vertical indices of ρ. Then we have the following:
(1) If the degrees of the f_i's are equal to the vertical indices, i.e., deg f_i = v_i(ρ), then the algebra C[V]^g of polynomial invariants of ρ is freely generated by f_1, f_2, . . . , f_q (i.e., it is a polynomial algebra).
(2) Conversely, if the algebra C[V ] g of polynomial invariants of ρ is freely generated by f 1 , f 2 , . . . , fq, and, in addition, ρ has no proper semi-invariants (i.e., any semiinvariant is an invariant), then the degrees of f i 's are equal to the vertical indices.
Remark 37. The condition on semi-invariants holds, for example, when g is perfect, i.e., coincides with its derived subalgebra. This condition cannot be omitted, as the following example shows. Consider the 3-dimensional Lie algebra with relations [z, x] = x, [z, y] = −2y. The algebra of invariants of its coadjoint representation is freely generated by the degree-3 function x²y, while the only vertical index is equal to 2 (and, as predicted by Theorem 31, the gradient of the invariant vanishes on a hypersurface). Theorem 36 does not apply in this case because x and y are proper semi-invariants.
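The example of Remark 37 can be verified directly; the matrix of R_p below is computed under one sign convention for the coadjoint action, which does not affect the conclusions:

```python
import sympy as sp

p1, p2, p3 = sp.symbols('p1 p2 p3')

# coadjoint representation of the algebra with [z,x] = x, [z,y] = -2y
# (basis e1 = x, e2 = y, e3 = z; one sign convention for ad*)
R = sp.Matrix([[0, 0, -p1],
               [0, 0, 2*p2],
               [p1, -2*p2, 0]])

f = p1**2 * p2                       # the degree-3 invariant x^2 y
grad = sp.Matrix([f.diff(v) for v in (p1, p2, p3)])

# f is indeed invariant: grad f annihilates the image of R_p
assert sp.simplify(R.T * grad) == sp.zeros(3, 1)

# but grad f vanishes on the hypersurface {p1 = 0} ...
assert grad.subs(p1, 0) == sp.zeros(3, 1)

# ... while the singular set {rk R_p < 2} = {p1 = p2 = 0} has codimension 2,
# so the generic pencil is Kronecker and the vertical index is 2 < deg f = 3
assert R.subs([(p1, 1), (p2, 1)]).rank() == 2
assert R.subs([(p1, 0), (p2, 0)]).rank() == 0
```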
Remark 38. A similar result is obtained in [5] (see Corollary 5.5): if the algebra of polynomial invariants is freely generated by f_1, f_2, . . . , f_q and, in addition, a certain semi-invariant is an invariant, then Σ deg f_i is equal to the degree of a certain form. A novel feature of our work is that we, first, compute the degree of the latter form (and hence show that Σ deg f_i is equal to the total vertical index), and, second, provide formulas for the individual degrees deg f_i, and not just for their sum.
Proof of Theorem 36. Assume that the degrees of f i 's are equal to the vertical indices. Take any polynomial invariant f ∈ C[V ] g of ρ. Then, since tr.deg. C[V ] g = q, it follows that f is algebraically dependent with f 1 , f 2 , . . . , fq. Given also that f 1 , f 2 , . . . , fq are independent outside of a subset of codimension ≥ 2 (Theorem 31), it follows from Theorem 1.1 of [12] that f is a polynomial function of f 1 , f 2 , . . . , fq. So, the polynomials f 1 , f 2 , . . . , fq generate C[V ] g , and since those polynomials are independent, the algebra is freely generated, as desired.
Conversely, assume that C[V ] g is freely generated by f 1 , f 2 , . . . , fq. Assume, for the sake of contradiction, that df 1 , df 2 , . . . , dfq are linearly dependent on a subset S ⊂ V of codimension 1. Then the union of codimension 1 irreducible components of S is given by a single polynomial equation h f (x) = 0. Since f 1 , f 2 , . . . , fq are invariants, it follows that h f is a semi-invariant. So, under the assumptions of the theorem, h f must be an invariant, and hence a scalar by Proposition 5.2 of [5]. Therefore, f 1 , f 2 , . . . , fq are actually dependent on a subset of codimension ≥ 2, and the result follows by Theorem 31.
Corollary 39. Let ρ : g → gl(V) be a representation of a finite-dimensional Lie algebra g on a finite-dimensional vector space V. Assume that ρ admits no proper semi-invariants, and that the algebra C[V]^g of polynomial invariants of ρ is freely generated by f_1, f_2, . . . , f_q. Then

Σ_{i=1}^{q} deg f_i = v_tot(ρ).

Proof. Indeed, under these assumptions the degrees of the f_i's are equal to the vertical indices (Theorem 36), and the result follows by summation.

Remark 40. When g is simple, this corollary is a particular case of Popov's conjecture [14, p. 350] proved in [6].

JK invariants for standard representations of simple Lie algebras
In this section we consider the classical simple Lie algebras, namely sl(n), so(n), and sp(n). For each of them we compute the JK invariants of the sums of their standard representations. In each case we give the answer first and then show how it agrees with the general results of Sections 4 and 5, most importantly Theorems 26, 31 and 36 about vertical indices and the algebra of invariants.
In all cases we consider a matrix subalgebra g ⊂ gl(n) and the sum of its m standard representations ρ ⊕m : g → gl(V ⊕m ).
Fix a basis of V, so that we can identify all linear maps with matrices. Then an element of V^⊕m is given by a matrix X ∈ Mat_{n×m}, and it defines a linear mapping R_X : g → Mat_{n×m}, R_X(ξ) = ξX.

Lemma 41. Consider the sum of m standard representations for any of the Lie algebras sl(n), so(n), or sp(n). Define n × m matrices X and A as follows:
• If m < n, then where the λ_i's are distinct numbers.
• If m > n, then
Then the pencil R_{X+λA} is generic.
Proof. We need to show that there exists an open dense subset U ⊂ Mat_{n×m} × Mat_{n×m} such that, for any pair (X̃, Ã) ∈ U, the pencil R_{X̃+λÃ} has the same algebraic type as the pencil R_{X+λA}. It is well known that there exists an open dense subset U ⊂ Mat_{n×m} × Mat_{n×m} such that, for any pair (X̃, Ã) ∈ U, its orbit under the natural left-right action of GL(n) × GL(m) on Mat_{n×m} contains a pair (X, A) of the above form; see, e.g., [13, p. 119]. In other words, any generic pair of operators C^m → C^n can be written in the above form (X, A) in a suitable basis. So, to show that the pencil R_{X+λA} is generic, it suffices to prove that for any C ∈ GL(n), D ∈ GL(m) the JK decompositions of the pencils R_X + λR_A and R_{CXD} + λR_{CAD} coincide. The latter is equivalent to the existence of invertible linear mappings ϕ : g → g and ψ : Mat_{n×m} → Mat_{n×m} (where ϕ does not have to be a Lie algebra automorphism) such that the following diagram commutes. For sl(n), a suitable choice of ϕ, ψ is. Similarly, for so(n) and sp(n) we take where C* = C^T for so(n) and C* = Ω^{−1}C^T Ω for sp(n), with Ω being the matrix of the symplectic form given by formula (19) below. In all cases we see that the mappings ϕ, ψ intertwine the pencils R_X + λR_A and R_{CXD} + λR_{CAD}, so these pencils have the same JK type, as desired.
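The operators R_X are easy to work with numerically. The following sketch (a sanity check, assuming R_X is the evaluation map ξ ↦ ξX described above) builds the matrix of R_X : sl(3) → Mat_{3×3} for a random X and computes nm − rank R_X, i.e., the codimension of the generic orbit; the answer 1 matches the single invariant det X.

```python
import numpy as np

def sl_basis(n):
    """Basis of sl(n): off-diagonal E_ij plus traceless diagonals E_ii - E_nn."""
    basis = []
    for i in range(n):
        for j in range(n):
            if i != j:
                E = np.zeros((n, n)); E[i, j] = 1.0
                basis.append(E)
    for i in range(n - 1):
        E = np.zeros((n, n)); E[i, i] = 1.0; E[n - 1, n - 1] = -1.0
        basis.append(E)
    return basis  # n^2 - 1 matrices

def rank_RX(X, basis):
    """Rank of R_X : g -> Mat_{n x m}, xi |-> xi @ X (columns are vec(xi @ X))."""
    cols = [(B @ X).ravel() for B in basis]
    return np.linalg.matrix_rank(np.column_stack(cols))

n = m = 3
rng = np.random.default_rng(0)
X = rng.standard_normal((n, m))   # generic n x m matrix
r = rank_RX(X, sl_basis(n))
print(n * m - r)                  # 1: one independent polynomial invariant
```

The same corank computation for m < n returns 0, consistent with the absence of invariants in that case.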

Standard representations of sl(n)
Proposition 42. Let ρ be the sum of m standard representations of sl(n).
(2) Assume that m = n. Then the JK invariants of ρ consist of one vertical index v_1 = n and n distinct eigenvalues, with n − 1 Jordan 1 × 1 blocks corresponding to each eigenvalue.

Proof. We need to compute the Jordan–Kronecker normal form for the explicitly given operators R_X, R_A, where X and A are defined in Lemma 41. This is a tedious, yet direct, calculation.
Now, let us demonstrate how Theorem 36 works in this case.
Proposition 43. Let ρ : sl(n) → gl(Mat_{n×m}) be the sum of m standard representations of sl(n), and let X ∈ Mat_{n×m}. Then:
(1) If m < n, the algebra of invariants of ρ is trivial.
(2) If m = n, the algebra of invariants is freely generated by the polynomial det X.
(3) If m = n + 1, the algebra of invariants is freely generated by n × n minors of X.
(4) If m > n + 1, the algebra of invariants is not freely generated.
Remark 44. Of course, these results are well known. For m > n, the algebra of invariants of ρ can be identified with the homogeneous coordinate ring of the Grassmannian Gr(n, m). If m = n + 1, then the Grassmannian Gr(n, m) is the projective space P n , whose homogeneous coordinate ring is the ring of polynomials in n + 1 variables, hence freely generated. For m > n + 1, the homogeneous coordinate ring of the Grassmannian Gr(n, m) is generated by Plücker coordinates, subject to Plücker relations.
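The smallest instance of a Plücker relation can be checked numerically. The sketch below takes a random 2 × 4 matrix (so n = 2, m = 4 > n + 1) and verifies the classical three-term relation p_{12}p_{34} − p_{13}p_{24} + p_{14}p_{23} = 0 among its 2 × 2 minors; relations of this kind are exactly what prevents the algebra of invariants from being free.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 4))  # n = 2, m = 4: Pluecker coordinates of Gr(2, 4)

# p[(i, j)] = minor formed by columns i and j (0-based indices)
p = {c: np.linalg.det(X[:, list(c)]) for c in combinations(range(4), 2)}

# the single Pluecker relation for Gr(2, 4)
rel = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
print(abs(rel) < 1e-9)  # True
```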
Proof of Proposition 43. There are clearly no invariants in the case m < n, because in this case the action of the group SL(n) on m-tuples of vectors in C^n has an open orbit. (One could also say that in this case there are no vertical indices and hence no invariants by Theorem 26.) In the case m = n, we have one vertical index equal to n and a polynomial invariant det X of degree n. So, by Theorem 36, the algebra of invariants is freely generated by det X (which is obvious in this case, because the transcendence degree of the algebra of invariants is equal to 1, so the algebra must be freely generated). Further, when m = n + 1, the vertical indices are n, . . . , n (n + 1 times), which again coincide with the degrees of the n × n minors of X. So, these minors generate the algebra of invariants by Theorem 36 (they are clearly independent, as one can find a matrix X with any prescribed values of these minors).
Finally, consider the case m > n + 1. Observe that in this case there are still no invariants of degree k < n. Indeed, assume that f is such an invariant. Then there exist k indices 1 ≤ i_1 < · · · < i_k ≤ m such that f has a non-trivial restriction to the subspace of matrices all of whose columns, except those with indices i_1, . . . , i_k, vanish. But this restriction must be an invariant of the representation sl(n) → gl(Mat_{n×k}) with k < n, and hence trivial, which is a contradiction. So, any non-trivial invariant of ρ has degree at least n. On the other hand, it is easy to see from Proposition 42 that at least one of the vertical indices is less than n. So, by the second part of Theorem 36, the algebra of invariants is not freely generated, as desired (that part of the theorem applies in our case, because sl(n) is simple and thus none of its representations admits proper semi-invariants).
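The invariance of det X used above can be checked at group level without matrix exponentials: SL(n) is generated by elementary matrices I + cE_{ij} (i ≠ j), each of determinant 1, so det(gX) = det X for every g ∈ SL(n). A quick numerical sketch:

```python
import numpy as np

def elem(n, i, j, c):
    """Elementary matrix I + c E_ij (i != j); has determinant 1."""
    g = np.eye(n)
    g[i, j] = c
    return g

n = 3
rng = np.random.default_rng(2)
X = rng.standard_normal((n, n))
g = elem(n, 0, 1, 2.5) @ elem(n, 2, 0, -1.3) @ elem(n, 1, 2, 0.7)  # g in SL(3)

print(np.isclose(np.linalg.det(g @ X), np.linalg.det(X)))  # True
```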

Standard representations of so(n) and sp(n)
Let us now consider the Lie algebras so(n), which is the Lie algebra of skew-symmetric matrices, and sp(n). For sp(n) we always assume that n is even, n = 2k, and we denote by sp(n) what would usually be denoted by sp(2k), that is, the space of n × n matrices X satisfying the equation
X^T Ω + ΩX = 0, where Ω = \begin{pmatrix} 0 & I_{n/2} \\ -I_{n/2} & 0 \end{pmatrix}.
In other words, the elements of sp(n) have the form X = Ω^{−1}S, where S is symmetric. Note that for both so(n) and sp(n) the index n is the dimension of the underlying space. Since these Lie algebras correspond to skew-symmetric and symmetric matrices of the same size, most formulas for them differ only in certain signs. For this reason it is convenient to write the formulas in terms of the number ε, which is equal to +1 for sp(n) and −1 for so(n).
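This description of sp(n) is easy to verify directly: since Ω^T = −Ω and S^T = S, one has (Ω^{−1}S)^T Ω + Ω(Ω^{−1}S) = 0. A minimal numerical check (for n = 4):

```python
import numpy as np

n = 4
k = n // 2
Omega = np.block([[np.zeros((k, k)), np.eye(k)],
                  [-np.eye(k), np.zeros((k, k))]])

rng = np.random.default_rng(3)
A = rng.standard_normal((n, n))
S = A + A.T                       # random symmetric matrix
X = np.linalg.solve(Omega, S)     # X = Omega^{-1} S, an element of sp(n)

print(np.allclose(X.T @ Omega + Omega @ X, 0))  # True
```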
Proposition 45. Let ρ be the sum of m standard representations of so(n) or sp(n).
(1) Assume that m < n. Let q be the integer part of the quotient m/(n − m), and let r be the remainder. Then the JK invariants of ρ are
• (n − m)(n − m + ε)/2 horizontal indices: 2q + 1, . . . , 2q + 1
(2) Assume that m = n. Then, in the so(n) case, the JK invariants of ρ consist of n(n + 1)/2 vertical indices:

The proof of this result is analogous to that of Proposition 42. Now we use this to study the algebra of invariants. As in the sl(n) case, the structure of the algebra of invariants for sums of standard representations of so(n) and sp(n) is well known, and we discuss it here for the sole purpose of demonstrating the power of Theorem 36.
Proposition 46. Let ρ : so(n) → gl(Mat_{n×m}) be the sum of m standard representations of so(n), and let X ∈ Mat_{n×m}. Then:
(1) For m < n, the algebra of invariants of ρ is freely generated by the m(m + 1)/2 pairwise inner products of the columns of X.
(2) For m ≥ n, the algebra of invariants of ρ is not freely generated.
Proof of Proposition 46. The first statement is a direct consequence of Theorem 36: all pairwise inner products of the columns of X have degree 2, and their degrees coincide with the vertical indices. To prove the second statement, observe that the same restriction argument as in the proof of Proposition 43 shows that there can be no invariants of degree 1. On the other hand, some of the vertical indices are equal to 1, so the algebra of invariants is not freely generated by Theorem 36.
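The invariance behind the first statement can be seen concretely: for Q ∈ SO(n), the Gram matrix X^T X of the columns of X, whose entries are exactly the pairwise inner products, satisfies (QX)^T (QX) = X^T (Q^T Q) X = X^T X. A short sketch:

```python
import numpy as np

n, m = 5, 3
rng = np.random.default_rng(4)
X = rng.standard_normal((n, m))

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # Q orthogonal
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]                            # force det Q = +1, so Q in SO(n)

# Gram matrix of columns = all pairwise inner products; invariant under Q
print(np.allclose((Q @ X).T @ (Q @ X), X.T @ X))  # True
```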
Remark 47. For m ≥ n, the algebra of invariants of ρ is known to be generated by the pairwise inner products of the columns of X, supplemented by the n × n minors of X. It is easy to see that the pairwise inner products alone do not generate the algebra of invariants: any n × n minor of X is a square root of the Gram determinant of the corresponding columns, which is not a polynomial in the pairwise inner products.
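The identity behind this remark is det(M)^2 = det(M^T M) for the square matrix M formed by n columns of X, so the minor is indeed a square root of the Gram determinant of those columns. Numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))  # n columns of X, viewed as a square matrix

minor = np.linalg.det(M)
gram = np.linalg.det(M.T @ M)    # Gram determinant of the columns

print(np.isclose(minor**2, gram))  # True
```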
The next statement is proved similarly to Proposition 46.