Symmetric Non-Negative Forms and Sums of Squares

We study symmetric non-negative forms and their relationship with symmetric sums of squares. For a fixed number of variables n and degree 2d, symmetric non-negative forms and symmetric sums of squares form closed, convex cones in the vector space of n-variate symmetric forms of degree 2d. Using representation theory of the symmetric group we characterize both cones in a uniform way. Further, we investigate the asymptotic behavior when the degree 2d is fixed and the number of variables n grows. Here, we show that, in sharp contrast to the general case, the difference between symmetric non-negative forms and sums of squares does not grow arbitrarily large for any fixed degree 2d. We consider the case of symmetric quartic forms in more detail and give a complete characterization of quartic symmetric sums of squares. Furthermore, we show that in degree 4 the cones of non-negative symmetric forms and symmetric sums of squares approach the same limit; thus the two cones become asymptotically closer as the number of variables grows. We conjecture that this is true in arbitrary degree 2d.


Introduction
Throughout the paper let $\mathbb{R}[X_1,\dots,X_n]$ denote the ring of polynomials in $n$ real variables and $H_{n,k}$ the set of homogeneous polynomials (forms) of degree $k$ in $\mathbb{R}[X_1,\dots,X_n]$. Certifying that a form $f \in H_{n,2d}$ assumes only non-negative values is one of the fundamental questions of real algebra. One possible certificate is a decomposition of $f$ as a sum of squares, i.e., finding forms $p_1,\dots,p_m \in H_{n,d}$ such that $f = p_1^2 + \cdots + p_m^2$. In 1888 Hilbert [16] gave a beautiful proof showing that in general not all non-negative forms can be written as a sum of squares. In fact, he showed that the sum of squares property characterizes non-negativity only in the cases of binary forms, of quadratic forms, and of ternary quartics. In all other cases there exist forms that are non-negative but do not admit a decomposition as a sum of squares. Despite its elegance, Hilbert's proof was not constructive. A constructive approach to Hilbert's proof appeared in an article by Terpstra [37] in 1939, but the first explicit example was found by Motzkin in 1965 [22], and an explicit example based on Hilbert's method was constructed by Robinson in 1969 [29]. We refer the interested reader to [24,33] for more background on this topic.
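Motzkin's form $M(x,y,z) = x^4y^2 + x^2y^4 + z^6 - 3x^2y^2z^2$ makes the phenomenon concrete: it is non-negative by the arithmetic mean-geometric mean inequality yet admits no sum of squares decomposition. A minimal numerical sketch (the random sampling scheme is illustrative only, not part of the original text):

```python
import random

def motzkin(x, y, z):
    """Motzkin's form: non-negative (by AM-GM) but not a sum of squares."""
    return x**4 * y**2 + x**2 * y**4 + z**6 - 3 * x**2 * y**2 * z**2

# AM-GM: (a + b + c)/3 >= (abc)^(1/3) with a = x^4 y^2, b = x^2 y^4, c = z^6
# gives motzkin(x, y, z) >= 0 everywhere, with equality e.g. at |x|=|y|=|z|.
random.seed(0)
samples = [tuple(random.uniform(-2, 2) for _ in range(3)) for _ in range(10000)]
assert all(motzkin(*p) >= -1e-9 for p in samples)
assert motzkin(1.0, 1.0, 1.0) == 0.0   # a point where the form vanishes
```

Of course, sampling cannot prove non-negativity; the AM-GM argument in the comment is the actual certificate.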
The sum of squares decomposition of non-negative polynomials has been the cornerstone of recent developments in polynomial optimization. Following ideas of Lasserre and Parrilo, polynomial optimization problems, i.e., the task of finding $f^* = \min f(x)$ for a polynomial $f$, can be relaxed into semidefinite optimization problems. If $f - f^*$ can be written as a sum of squares, these semidefinite relaxations are in fact exact. Hence a better understanding of the difference between sums of squares and non-negative polynomials is highly desirable.
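The semidefinite connection rests on the Gram matrix viewpoint: a form $f$ of degree $2d$ is a sum of squares if and only if $f(x) = m(x)^t Q\, m(x)$ for some positive semidefinite matrix $Q$, where $m(x)$ is the vector of monomials of degree $d$. A toy sketch with a hand-picked form and Gram matrix (both hypothetical, chosen only for illustration):

```python
import random

# Gram-matrix certificate: f = m(x)^T Q m(x) with Q PSD proves f is SOS.
# Toy example: f(x,y) = x^4 + 2 x^2 y^2 + y^4 with m = (x^2, xy, y^2).
def f(x, y):
    return x**4 + 2 * x**2 * y**2 + y**4

Q = [[1, 0, 1],
     [0, 0, 0],
     [1, 0, 1]]          # Q = v v^T with v = (1, 0, 1), hence PSD of rank 1

def gram_value(Q, x, y):
    m = [x * x, x * y, y * y]
    return sum(Q[i][j] * m[i] * m[j] for i in range(3) for j in range(3))

random.seed(1)
for _ in range(100):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    assert abs(gram_value(Q, x, y) - f(x, y)) < 1e-8
# The rank-1 PSD Gram matrix reads off the decomposition f = (x^2 + y^2)^2.
```

In an actual solver one searches over all Gram matrices of $f$ (an affine family) for a PSD member, which is exactly a semidefinite feasibility problem.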
We study the case of forms in $n$ variables of degree $2d$ that are symmetric, i.e., invariant under the action of the symmetric group $S_n$ permuting the variables. Let $\mathbb{R}[X_1,\dots,X_n]^S$ denote the ring of symmetric polynomials and $H^S_{n,2d}$ the real vector space of symmetric forms of degree $2d$ in $n$ variables. Let $\Sigma^S_{n,2d}$ be the cone of forms in $H^S_{n,2d}$ that can be decomposed as sums of squares and $P^S_{n,2d}$ the cone of non-negative symmetric forms. Choi and Lam [7] exhibited a symmetric form of degree $4$ in $4$ variables that is non-negative but cannot be written as a sum of squares. Thus one can conclude that $\Sigma^S_{4,4} \neq P^S_{4,4}$, and therefore even in the case of symmetric polynomials the sum of squares property already fails to characterize non-negativity in the first case covered by Hilbert's classical result. These results have recently been extended by Goel et al. [13] into a full characterization of the equality cases between $\Sigma^S_{n,2d}$ and $P^S_{n,2d}$. Unfortunately, there are no other interesting cases of equality beyond those covered by Hilbert's theorem.
The case of even symmetric forms has also received some attention. Choi et al. [8] fully described the cones of even symmetric sextics in any number of variables, and showed that under a suitable normalization these cones have the same limit as the number of variables grows. Harris [15] showed that even symmetric ternary octics are non-negative only if they are sums of squares, providing a new interesting case of equality between non-negative polynomials and sums of squares. Goel et al. [14] showed that there are no cases of equality for even symmetric forms beyond Harris' and Hilbert's results.
In addition to the qualitative statement of Hilbert's characterization, a quantitative understanding of the gap between sums of squares and non-negative forms has been pursued by several authors. In particular, in [3] the first author added to the work of Hilbert by showing that for fixed degree at least 4 the gap between sums of squares and non-negative forms grows arbitrarily large with the number of variables. This result has recently been refined by Ergür to the multihomogeneous case [10]. In this article we study the relationship between symmetric sums of squares and symmetric non-negative forms. In particular, we are interested in the asymptotic behavior of the cones, which we can realize for example via the symmetric mean inequalities naturally associated to a symmetric polynomial. The study of such symmetric inequalities has a long history (see for example [9]), and it is an interesting question when one can use sum of squares certificates to verify such an inequality. For instance, Hurwitz [17] showed that a sum of squares decomposition can be used to verify the arithmetic mean-geometric mean inequality. Recently, Frenkel and Horváth [11] studied the connection of Minkowski's inequality to sums of squares. Our results imply that a positive fraction of such inequalities can be certified by symmetric sums of squares. Furthermore, in degree 4 we show that a family of symmetric power mean inequalities is valid for all $n$ if and only if each member can be written as a sum of squares. We conjecture that this holds in all degrees.

Symmetric Sums of Squares
Symmetric polynomials are classical objects in algebra. In order to represent symmetric polynomials, we will make use of the power sum polynomials.

Definition 2.1 For $i \in \mathbb{N}$ define $P_i^{(n)} := X_1^i + \cdots + X_n^i$ to be the $i$-th power sum polynomial. We will also work with the power means $p_i^{(n)} := \frac{1}{n}\,P_i^{(n)}$.

It is known (for example [20, 2.11]) that $\mathbb{R}[X_1,\dots,X_n]^S$ is freely generated by the algebraically independent polynomials $P_1^{(n)},\dots,P_n^{(n)}$. In particular, every symmetric form $f$ of degree $2d$ in $n \ge 2d$ variables can be written as $f = g\bigl(P_1^{(n)},\dots,P_{2d}^{(n)}\bigr)$ for some polynomial $g \in \mathbb{R}[z_1,\dots,z_{2d}]$, with $\deg_w g = \deg f$, where $\deg_w$ denotes the weighted degree corresponding to the weight $(1,\dots,2d)$. Recall that for a natural number $k$ a partition $\lambda$ of $k$ (written $\lambda \vdash k$) is a sequence of weakly decreasing positive integers $\lambda = (\lambda_1,\lambda_2,\dots,\lambda_l)$ with $\sum_{i=1}^{l} \lambda_i = k$. For $n \ge k$ and a partition $\lambda = (\lambda_1,\dots,\lambda_l) \vdash k$ we associate the polynomials $P_\lambda^{(n)} := P_{\lambda_1}^{(n)} \cdots P_{\lambda_l}^{(n)}$ and $p_\lambda^{(n)} := p_{\lambda_1}^{(n)} \cdots p_{\lambda_l}^{(n)}$. It now follows that for every $n \ge k$ the families of polynomials $\{P_\lambda^{(n)} : \lambda \vdash k\}$ as well as $\{p_\lambda^{(n)} : \lambda \vdash k\}$ form a basis of $H^S_{n,k}$. In particular, if $n \ge k$ then the dimension of $H^S_{n,k}$ is equal to $\pi(k)$, the number of partitions of $k$. Thus the dimension of $H^S_{n,k}$ is constant for fixed $k$ and all sufficiently large $n$.
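The count $\pi(k)$ and the power mean basis elements $p_\lambda^{(n)}$ are easy to generate; the following sketch (the helper names are ours, not notation from the text) enumerates partitions and evaluates $p_\lambda$ pointwise:

```python
def partitions(k, max_part=None):
    """All partitions of k as weakly decreasing tuples."""
    if max_part is None:
        max_part = k
    if k == 0:
        return [()]
    out = []
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            out.append((first,) + rest)
    return out

def power_mean(i, x):
    """p_i^{(n)} = (x_1^i + ... + x_n^i) / n."""
    return sum(t**i for t in x) / len(x)

def p_lambda(lam, x):
    """Basis element p_lambda: the product of the power means p_{lambda_j}."""
    prod = 1.0
    for part in lam:
        prod *= power_mean(part, x)
    return prod

# dim H^S_{n,k} = pi(k), the number of partitions of k, once n >= k:
assert len(partitions(4)) == 5    # (4), (3,1), (2,2), (2,1,1), (1,1,1,1)
assert len(partitions(6)) == 11
assert p_lambda((2,), [1.0, 1.0, 1.0]) == 1.0   # all power means of (1,...,1) are 1
```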
Using the representation theory of the symmetric group, and in particular the so-called higher Specht polynomials, we are able to give a uniform representation of the cone of symmetric sums of squares of fixed degree $2d$ in terms of matrix polynomials with coefficients that are rational functions in $n$ (see Theorem 4.15), and similarly a uniform representation of the sequence of dual cones in terms of linear matrix polynomials whose coefficients are "symmetrizations" of sums of squares in $2d$ variables. This gives us in particular a better understanding of the faces of $\Sigma^S_{n,2d}$ that are not faces of $P^S_{n,2d}$. We make these findings more concrete in the case of quartic symmetric forms, where we completely characterize the cone $\Sigma^S_{n,4}$ and its boundary. This in particular allows us to easily compute a family of symmetric sums of squares that lie on the boundary of $\Sigma^S_{n,4}$ without having a real zero, thus certifying the difference between symmetric sums of squares and symmetric non-negative forms (see Theorem 5.5).

Asymptotic Behavior of Sums of Squares and Non-Negative Forms
Our characterization allows us to study the asymptotic relationship between symmetric sums of squares and symmetric non-negative forms of fixed degree in a growing number of variables. Even though the vector spaces $H^S_{n,2d}$ have the same dimension $\pi(2d)$ for all $n \ge 2d$, there is no canonical way to identify the spaces $H^S_{n,2d}$ for different $n$. In fact there are several natural ways to define transition maps identifying vector spaces of symmetric forms in different numbers of variables (see for example [2]), and different transition maps lead to different limits as $n$ goes to infinity. The system of vector spaces $H^S_{n,2d}$ together with transition maps defines a directed system of vector spaces, and we can form the direct limit $H^S_{\infty,2d}$ of the vector spaces $H^S_{n,2d}$ [30, Sect. 7.6]. One way of defining these transition maps is by symmetrization: for $m \le n$ we set $\varphi_{m,n} : H^S_{m,2d} \to H^S_{n,2d}$, $\varphi_{m,n}(p) = \operatorname{sym}_n p$, where $\operatorname{sym}_n p$ denotes the symmetrization of $p$ regarded as a polynomial in $n$ variables.
Then the system of vector spaces H S n,2d together with the maps ϕ m,n defines a directed system and for m ≥ 2d the maps ϕ m,n are isomorphisms.
We consider the direct limit $H^\varphi_{\infty,2d}$ of the directed system above. Since the maps $\varphi_{m,n}$ are isomorphisms for $m \ge 2d$, it follows that $H^\varphi_{\infty,2d}$ is also a real vector space of dimension $\pi(2d)$. Therefore we have natural isomorphisms $\varphi_n : H^\varphi_{\infty,2d} \to H^S_{n,2d}$ for $n \ge 2d$, which allow us to view the cones $\Sigma^S_{n,2d}$ and $P^S_{n,2d}$ as subsets of $H^\varphi_{\infty,2d}$. Note that we have $\varphi_{m,n}(\Sigma^S_{m,2d}) \subseteq \Sigma^S_{n,2d}$ and $\varphi_{m,n}(P^S_{m,2d}) \subseteq P^S_{n,2d}$. It follows that with the transition maps $\varphi_{m,n}$ the cones of sums of squares and the cones of non-negative polynomials form nested increasing sequences in $H^\varphi_{\infty,2d}$. We define the limit cones of non-negative elements and of sums of squares in $H^\varphi_{\infty,2d}$ as the closures of the unions of these nested sequences. The following theorem is immediate from the above discussion.
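The symmetrization maps can be sketched numerically by averaging over all permutations of the coordinates; the naive $n!$-fold average below is purely illustrative (a real implementation would exploit symmetry instead of enumerating all permutations):

```python
import itertools

def sym_n(poly, n):
    """Symmetrization: average poly over all permutations of n coordinates.
    `poly` is a function of an n-tuple; extra coordinates ignored by poly
    model the embedding of a form in fewer variables into R^n."""
    perms = list(itertools.permutations(range(n)))
    def symmetrized(x):
        return sum(poly([x[i] for i in p]) for p in perms) / len(perms)
    return symmetrized

# Embed q(X1, X2) = X1^2 X2^2 into 4 variables and symmetrize.
q = lambda x: x[0]**2 * x[1]**2
sq = sym_n(q, 4)
# The result is invariant under permuting coordinates:
assert abs(sq([1, 2, 3, 4]) - sq([4, 3, 1, 2])) < 1e-12
assert abs(sq([1, 1, 1, 1]) - 1.0) < 1e-12
```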
Sums of squares of fixed degree make up a vanishingly small portion of non-negative forms as the number of variables grows [3]. More precisely, (non-symmetric) non-negative forms and sums of squares in $n$ variables of degree $2d$ with average $1$ on the unit sphere form compact convex sets $\bar{P}_{n,2d}$ and $\bar{\Sigma}_{n,2d}$ of dimension $D = \binom{n+2d-1}{2d} - 1$. It was shown in [3] that the ratio of volumes $\bigl( \operatorname{vol} \bar{\Sigma}_{n,2d} / \operatorname{vol} \bar{P}_{n,2d} \bigr)^{1/D}$ converges to $0$ for all $2d \ge 4$ as $n$ goes to infinity. The ratio of volumes is raised to the power $1/D$ to take into account the effect of large dimension on volumes, as the volume of $(1+\varepsilon)\bar{\Sigma}_{n,2d}$ is equal to $(1+\varepsilon)^D \operatorname{vol} \bar{\Sigma}_{n,2d}$.
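The dimension count behind this normalization can be checked directly; `dim_forms` below is an illustrative helper, not notation from the text:

```python
import math

def dim_forms(n, deg):
    """Dimension of the space of n-variable forms of a fixed degree."""
    return math.comb(n + deg - 1, deg)

# Affine dimension D of the normalized (average 1 on the sphere) sections:
def D(n, d):
    return dim_forms(n, 2 * d) - 1

assert dim_forms(3, 2) == 6     # quadratics in 3 variables: x^2, y^2, z^2, xy, xz, yz
assert D(3, 1) == 5
# The volume ratio is raised to the 1/D-th power because scaling a convex
# body by (1 + eps) in R^D scales its volume by (1 + eps)^D.
```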
By contrast, the cones of symmetric non-negative forms and sums of squares of fixed degree live in the vector space $H^S_{n,2d}$, which has fixed dimension $\pi(2d)$ for a sufficiently large number of variables $n$. Therefore, to prove that asymptotically symmetric sums of squares make up a non-trivial portion of symmetric non-negative forms (with respect to some transition maps) it suffices to show that both limits are full-dimensional in $H^\varphi_{\infty,2d} \cong \mathbb{R}^{\pi(2d)}$, which is done in Theorem 2.4.
Besides the direct limit we also study symmetric power mean inequalities. We can express a symmetric form $f \in H^S_{n,2d}$ in the power mean basis $\{p^{(n)}_\lambda : \lambda \vdash 2d\}$. Using the power mean basis we can define transition maps $\rho_{m,n}$ by identifying $p^{(m)}_\lambda$ with $p^{(n)}_\lambda$ for all $\lambda \vdash 2d$. As before, the system of vector spaces $H^S_{n,2d}$ together with the maps $\rho_{m,n}$ defines a directed system, and for $m \ge 2d$ the maps $\rho_{m,n}$ are isomorphisms. We consider the direct limit $H^\rho_{\infty,2d}$. Since the maps $\rho_{m,n}$ are isomorphisms for $m \ge 2d$, it follows that $H^\rho_{\infty,2d}$ is again a real vector space of dimension $\pi(2d)$. The natural isomorphisms $\rho_n : H^\rho_{\infty,2d} \to H^S_{n,2d}$ for $n \ge 2d$ allow us to view the cones $\Sigma^S_{n,2d}$ and $P^S_{n,2d}$ as subsets of $H^\rho_{\infty,2d}$. We will denote these images by $\Sigma^\rho_{n,2d}$ and $P^\rho_{n,2d}$ and consider the corresponding limit cones. The sequences $P^\rho_{n,2d}$ and $\Sigma^\rho_{n,2d}$ are not nested in general. Let $x = (X_1,\dots,X_n)$ be a point in $\mathbb{R}^n$ and let $\bar{x}$ be the point in $\mathbb{R}^{k\cdot n}$ with each $X_i$ repeated $k$ times. Then the power means satisfy $p^{(k\cdot n)}_i(\bar{x}) = p^{(n)}_i(x)$ for all $i$. It follows that $f^{(k\cdot n)} \in P^\rho_{k\cdot n,2d}$ implies $f^{(n)} \in P^\rho_{n,2d}$, and hence we get the following.

Proposition 2.6
Consider the cones $P^\rho_{n,2d}$ as convex subsets of $\mathbb{R}^{\pi(2d)}$ via the coefficients $c_\lambda$ in the expansion $f = \sum_{\lambda \vdash 2d} c_\lambda p_\lambda$. Then for every $n \ge 2d$ and $k \in \mathbb{N}$ we have $P^\rho_{k\cdot n,2d} \subseteq P^\rho_{n,2d}$.
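The inclusion rests on the observation that power means are unchanged when every coordinate is repeated $k$ times, which is immediate to verify numerically:

```python
def power_mean(i, x):
    """p_i = (x_1^i + ... + x_n^i) / n."""
    return sum(t**i for t in x) / len(x)

def duplicate(x, k):
    """The point x-bar in R^{k*n}: each coordinate of x repeated k times."""
    return [t for t in x for _ in range(k)]

x = [1.0, -2.0, 0.5]
for i in range(1, 7):
    for k in (2, 3, 5):
        assert abs(power_mean(i, duplicate(x, k)) - power_mean(i, x)) < 1e-12
# Hence a form written in the power-mean basis takes the same value at x and
# x-bar, so non-negativity in k*n variables implies non-negativity in n.
```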

Remark 2.7
We note that the same proof also yields the analogous inclusions for the cones of sums of squares. Although the cone of symmetric non-negative quartics is strictly bigger than the cone of symmetric quartic sums of squares for any number of variables $n \ge 4$, we show that in the limit the two cones coincide. In particular, this result applies to the power mean inequalities studied in [23], and hence any such inequality can be verified using sums of squares. We conjecture that this happens in arbitrary degree $2d$, i.e., we suggest the following.

Structure of the Article and Guide for the Reader
This article is structured as follows: We provide a characterization of symmetric non-negative forms and the limit cone in Sect. 3. Section 4 provides a detailed study of symmetric sums of squares. To this end we present the general framework of how to use representation theory to study invariant sums of squares in Sect. 4.1. In Sect. 4.2 we outline the basic notions of the representation theory of the symmetric group. These results are then used in Sect. 4.3 to represent the cone of symmetric sums of squares (without restrictions on the degree) in terms of matrix polynomials in Theorems 4.11 and 4.12. The subsequent Sect. 4.4 then discusses how restricting the degree allows for a uniform description of the cones $\Sigma^S_{n,2d}$ in terms of the power mean bases $p^{(n)}_\lambda$ (Theorem 4.15). The final subsection of Sect. 4 discusses some results on the dual cone which are needed in the sequel. The subsequent Sect. 5 makes these results more concrete, as we give a description of the cone of symmetric quartic sums of squares (Theorem 5.1). Furthermore, we describe the elements of the boundary of $\Sigma^S_{n,4}$ that are strictly positive in Theorem 5.3 and give an explicit example of such a polynomial for every $n \ge 4$ in Example 5.4. From this example it follows in particular that outside the cases where Hilbert showed the equality of sums of squares and non-negative forms there always exist symmetric positive definite forms which are not sums of squares (see Theorem 5.5). In Sect. 6 we explore the two notions of limits and prove Theorem 2.8. We also discuss the connection with the power mean inequalities. These power mean inequalities are then studied in more detail in the final Sect. 7, where we show in particular that all valid power mean inequalities of degree 4 are sums of squares (Theorem 2.9).
The order of sections was chosen to present the more general statements in Sects. 3, 4, and 6 and then apply them in the quartic case in Sects. 5 and 7. Depending on the reader's preferences, one can also read Sect. 5 first before diving into Sect. 4, and similarly Sect. 7 before Sect. 6, taking the necessary results from the previous sections for granted.

Symmetric Non-Negative Forms

For a natural number $k \in \mathbb{N}$ we define $A_k$ to be the set of all points in $\mathbb{R}^n$ with at most $k$ distinct components, i.e., $A_k := \{x \in \mathbb{R}^n : |\{x_1,\dots,x_n\}| \le k\}$.
The half-degree principle says that a symmetric form of degree $2d > 2$ is non-negative if and only if it is non-negative on $A_d$.

Remark 3.2 By considering $f - \varepsilon\,(X_1^2 + \cdots + X_n^2)^d$ for a sufficiently small $\varepsilon > 0$ we see that we can also replace non-negative by positive in the above statement, thus characterizing strict positivity of symmetric forms.
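The half-degree principle suggests a finite test strategy: instead of all of $\mathbb{R}^n$, one only needs to control points with at most $d$ distinct coordinates. The sketch below samples such points from a finite test grid (a heuristic check rather than a proof, and the form `f` is a hypothetical example):

```python
import itertools

def points_with_few_values(n, d, values):
    """Sample points of A_d in R^n: choose d values from a test grid and
    assign each of the n coordinates one of the chosen values."""
    for vals in itertools.combinations(values, d):
        for assignment in itertools.product(range(d), repeat=n):
            yield tuple(vals[j] for j in assignment)

# Hypothetical symmetric quartic (d = 2) in 4 variables: the square of the
# second power sum, which is trivially non-negative.
def f(x):
    return sum(t**2 for t in x) ** 2

grid = [-1.0, -0.5, 0.5, 1.0]
ok = all(f(p) >= 0 for p in points_with_few_values(4, 2, grid))
assert ok
```

A genuine application of the principle would parametrize $A_d$ by the multiplicities of the distinct values, reducing the check to polynomials in $d$ variables, as done in the text below.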
A non-increasing sequence of $k$ natural numbers $\vartheta := (\vartheta_1,\dots,\vartheta_k)$ such that $\vartheta_1 + \cdots + \vartheta_k = n$ is called a $k$-partition of $n$ (written $\vartheta \vdash_k n$). Given a symmetric form $f \in H^S_{n,2d}$ and a $k$-partition $\vartheta$ of $n$, we define $f_\vartheta$ to be the $k$-variate polynomial obtained by evaluating $f$ at points whose first $\vartheta_1$ coordinates equal $t_1$, next $\vartheta_2$ coordinates equal $t_2$, and so on.
From now on assume that $2d > 2$. Then the half-degree principle implies that non-negativity of $f = \sum_{\lambda \vdash 2d} c_\lambda p_\lambda$ is equivalent to non-negativity of $f_\vartheta$ for all $\vartheta \vdash_d n$, since the polynomials $f_\vartheta$ give the values of $f$ at all points with at most $d$ distinct components. We note that for all $i \in \mathbb{N}$, at a point whose coordinates take the value $t_j$ with multiplicity $\vartheta_j$ we have $p_i^{(n)} = \sum_{j=1}^{d} \frac{\vartheta_j}{n}\, t_j^i$. For a partition $\lambda = (\lambda_1,\dots,\lambda_l) \vdash 2d$ we define a $2d$-variate form $\tilde p_\lambda$ in the variables $s_1,\dots,s_d$ and $t_1,\dots,t_d$ by $\tilde p_\lambda(s,t) := \prod_{i=1}^{l} \bigl( s_1 t_1^{\lambda_i} + \cdots + s_d t_d^{\lambda_i} \bigr)$ and use it to associate to any form $f = \sum_{\lambda \vdash 2d} c_\lambda p_\lambda \in H^S_{n,2d}$ the form $\tilde f(s,t) := \sum_{\lambda \vdash 2d} c_\lambda\, \tilde p_\lambda(s,t)$. We define the set $W_n := \{ (\vartheta_1/n,\dots,\vartheta_d/n) : \vartheta \vdash_d n \}$. It follows from the arguments above that $f \in H^S_{n,2d}$ is non-negative if and only if for every $w \in W_n$ the form $\tilde f(w,t)$ is non-negative in $t$. This is summarized in the following corollary. This result enables us to characterize the elements of $P_{2d}$. We expand the sets $W_n$ to the standard simplex $\Delta := \{ s \in \mathbb{R}^d : s_j \ge 0,\ s_1 + \cdots + s_d = 1 \}$. Then we have the following theorem characterizing $P_{2d}$.
Proof Suppose first that $\tilde f(s,t)$ is non-negative for all $s \in \Delta$ and $t \in \mathbb{R}^d$. Since $W_n \subset \Delta$ for all $n$, we see from Corollary 3.3 that $f^{(n)}$ is a non-negative form for all $n$ and thus $f \in P_{2d}$.

On the other hand, suppose there exists $\alpha_0 \in \Delta$ such that $\tilde f(\alpha_0, t) < 0$ for some $t \in \mathbb{R}^d$. Then we can find a rational point $\alpha \in \Delta$ with all positive coordinates sufficiently close to $\alpha_0$ so that $\tilde f(\alpha, t) < 0$. Let $h$ be the least common multiple of the denominators of the coordinates of $\alpha$. Then we have $\alpha \in W_{ah}$ for all $a \in \mathbb{N}$. Choose $a$ such that $ah \ge 2d$. Then $f^{(ah)}$ is negative at the corresponding point and we have $f \notin P_{2d}$. $\square$

Symmetric Sums of Squares
We now consider symmetric sums of squares. It was already observed in [12] that invariance under a group action allows one to restrict to sum of squares decompositions in which the underlying squares themselves have a special structure. First, we explain the general approach, which uses representation theory and applies to other groups as well. Our presentation follows the ideas of [12], which we present in a slightly different way; the interested reader is advised to consult that article for more details.

Invariant Sums of Squares
Let $G$ be a finite group acting linearly on $\mathbb{R}^n$. Since $G$ acts linearly on $\mathbb{R}^n$, the $\mathbb{R}$-vector space $\mathbb{R}[X]$ can also be viewed as a $G$-module, and by Maschke's theorem (the reader may consult for example [34] for the basics of linear representation theory) there exists a decomposition of the form $\mathbb{R}[X] = V^{(1)} \oplus \cdots \oplus V^{(h)}$ with $V^{(j)} = W^{(j)}_1 \oplus \cdots \oplus W^{(j)}_{\eta_j}$, where the $W^{(j)}_k$ are the irreducible components and the $V^{(j)}$ are the isotypic components, i.e., the direct sums of isomorphic irreducible components. The component with respect to the trivial irreducible representation is the invariant ring $\mathbb{R}[X]^G$. The elements of the other isotypic components are called semi-invariants. It is classically known that each isotypic component is a finitely generated $\mathbb{R}[X]^G$-module (see [36, Theorem 1.3]). To any element $f \in H_{n,d}$ we can associate a symmetrization, by which we mean its image under the linear map $R^G(f) := \frac{1}{|G|} \sum_{\sigma \in G} \sigma(f)$, called the Reynolds operator of $G$. In the case of $G = S_n$ we say that $R^{S_n}(f)$ is a symmetrization of $f$ and write $\operatorname{sym} f$ in this case.
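For the smallest non-trivial example, take the two-element group acting on $\mathbb{R}^2$ by swapping coordinates; the Reynolds operator projects onto invariants, and what remains is a semi-invariant (this toy setup is ours, for illustration only):

```python
# Reynolds operator R^G(f) = (1/|G|) * sum over sigma in G of sigma(f),
# here for the two-element group swapping the coordinates of R^2.
def reynolds_c2(f):
    return lambda x, y: 0.5 * (f(x, y) + f(y, x))

f = lambda x, y: x**2 + 3 * x * y**3           # not invariant
g = reynolds_c2(f)
# g is invariant under the swap:
assert g(1.0, 2.0) == g(2.0, 1.0)
# The complement f - g is a semi-invariant: it transforms by the sign character.
h = lambda x, y: f(x, y) - g(x, y)
assert h(1.0, 2.0) == -h(2.0, 1.0)
```

The decomposition $f = g + h$ mirrors the isotypic decomposition: the trivial component (invariants) plus the sign component (semi-invariants).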
For a set of polynomials $f_1,\dots,f_l$ we will write $\mathbb{R}\{f_1,\dots,f_l\}^2$ to refer to the sums of squares of elements in the linear span of the polynomials $f_1,\dots,f_l$. It has already been observed by Gatermann and Parrilo [12] that invariant sums of squares can be written as sums of squares of semi-invariants using Schur's Lemma. However, a closer inspection of the situation allows in many cases (as for example in the case of $S_n$) a finer analysis of the decomposition into sums of squares. Consider a set of polynomials $f_{j,k}$ such that each $f_{j,k}$ generates the irreducible module $W^{(j)}_k$. Further assume that they are chosen in such a way that for each $j$ and each pair $(l,k)$ there exists a $G$-isomorphism $\rho^{(j)}_{l,k} : W^{(j)}_l \to W^{(j)}_k$ mapping $f_{j,l}$ to $f_{j,k}$. From such a collection we can extract a subset $\{f_{j,1},\dots,f_{j,\eta_j}\}$ which contains only one polynomial per irreducible module. However, since every irreducible module is generated by the $G$-orbit of only one element, every such set uniquely describes the chosen decomposition. We call such a set a symmetry basis and show that invariant sums of squares are in fact symmetrizations of sums of squares of a symmetry basis. The following theorem, which we state in a slightly more general setup, highlights the use of a symmetry basis.

Theorem 4.2 Let $G$ be a finite group and assume that all real irreducible representations $V \subset H_{n,d}$ remain irreducible over their complexification. Let $p$ be a form of degree $2d$ that is invariant with respect to $G$. If $p$ is a sum of squares, then $p$ can be written in the form $p = \sum_{j=1}^{h} R^G(\sigma_j)$, where each $\sigma_j \in \mathbb{R}\{f_{j,1},\dots,f_{j,\eta_j}\}^2$.
The main tool for the proof is Schur's Lemma, and we remark that a dual version of this theorem can be found in [28,Thm. 3.4] and [25].
Proof Let $p \in H_{n,2d}$ be a $G$-invariant sum of squares. Then there exists a symmetric positive semidefinite bilinear form $B$ which is a Gram matrix for $p$, i.e., for every $x \in \mathbb{R}^n$ we can write $p(x) = B(x^d, x^d)$, where $x^d$ stands for the $d$-th power of $x$ in the symmetric algebra of $\mathbb{R}^n$. Since $p$ is $G$-invariant, we have $p = R^G(p)$ and by linearity we may assume that $B$ is a $G$-invariant bilinear form. Now decompose $H_{n,d}$ as in (4.1) and consider the restrictions $B_{ij}$ of $B$ to pairs of isotypic components $V^{(i)} \times V^{(j)}$, so that $B_{ij}$ can naturally be seen as an element of $\operatorname{Hom}_G\bigl(V^{(i)*}, V^{(j)}\bigr)$. Since real representations are self-dual, for $i \neq j$ the modules $V^{(i)*}$ and $V^{(j)}$ are not isomorphic, and thus by Schur's Lemma we find that $B_{ij}(v,w) = 0$ for all $v \in V^{(i)}$ and $w \in V^{(j)}$. So the isotypic components are orthogonal with respect to $B$ and hence it suffices to look at the restrictions $B_j$ of $B$ to the isotypic components $V^{(j)}$. Each irreducible module $W^{(j)}_k$ is generated by a semi-invariant $f_{j,k}$, i.e., there is a basis $f_{j,k,1},\dots,f_{j,k,\nu_j}$ for every $W^{(j)}_k$ such that the basis elements $f_{j,k,i}$ are taken from the orbit of $f_{j,k}$ under $G$. To again use Schur's Lemma we identify $B_j$ with its complexification $B_j^{\mathbb{C}}$, which is possible since we assumed that all representations are irreducible also over $\mathbb{C}$. Consider a pair $W^{(j)}_k$, $W^{(j)}_l$. To apply Schur's Lemma we relate the restriction of $B_j$ to this pair to a linear map $\psi^{(j)}_{k,l} \in \operatorname{Hom}_G\bigl(W^{(j)}_k, W^{(j)}_l\bigr)$. Since we assumed that the $W^{(j)}_k$ are absolutely irreducible, Schur's Lemma yields that this map is unique up to scalar multiplication. Therefore it can be represented in the form $B_j(f_{j,k,u}, f_{j,l,v}) = c^{(j)}_{k,l}\, \delta_{u,v}$, where $\delta_{u,v}$ denotes the Kronecker delta. By considering the matrix of $B$ with respect to the basis $f_{j,k,l}$ of $H_{n,d}$ we see that $p$ has the desired decomposition. $\square$

In some situations it is convenient to formulate the above Theorem 4.2 in terms of matrix polynomials, i.e., matrices with polynomial entries. Given two $k \times k$ symmetric matrices $A$ and $B$, define their inner product as $\langle A, B \rangle = \operatorname{trace}(AB)$. Define a block-diagonal symmetric matrix $A$ with $h$ blocks $A^{(1)},\dots,A^{(h)}$, with the entries of each block given by $A^{(j)}(k,l) := R^G(f_{j,k}\, f_{j,l})$. Then Theorem 4.2 is equivalent to the following statement: We now aim to apply Theorem 4.2 to a symmetric form $p \in H^S_{n,2d}$. In order to do this we need to identify an explicit representative in every irreducible $S_n$-submodule of $H_{n,d}$. We first recall some useful facts from the representation theory of $S_n$. The irreducible representations in this case are the so-called Specht modules, which we will define in the following section. We refer to [18,31] for more details.

Specht Modules as Polynomials
Let $\lambda = (\lambda_1,\lambda_2,\dots,\lambda_l) \vdash n$ be a partition of $n$. A Young tableau of shape $\lambda$ consists of $l$ rows, with $\lambda_i$ entries in the $i$-th row. Each entry is an element of $\{1,\dots,n\}$, and each of these numbers occurs exactly once. A standard Young tableau is a Young tableau in which all rows and columns are increasing. An element $\sigma \in S_n$ acts on a Young tableau by replacing each entry by its image under $\sigma$. Two Young tableaux $T_1$ and $T_2$ are called row-equivalent if the corresponding rows of the two tableaux contain the same numbers. The classes of row-equivalent Young tableaux are called tabloids, and the equivalence class of a tableau $T$ is denoted by $\{T\}$. The stabilizer of a row-equivalence class is called the row stabilizer, denoted by $\operatorname{RStab}_T$. If $R_1,\dots,R_l$ are the rows of a given Young tableau $T$, this group can be written as $\operatorname{RStab}_T = S_{R_1} \times \cdots \times S_{R_l}$, where $S_{R_i}$ is the symmetric group on the elements of row $i$. The action of $S_n$ on the equivalence classes of row-equivalent Young tableaux gives rise to the permutation module $M^\lambda$ corresponding to $\lambda$, i.e., the $S_n$-module spanned by the $\lambda$-tabloids. Let $T$ be a Young tableau for $\lambda \vdash n$, and let $C_i$ be the entries in the $i$-th column of $T$. The group $\operatorname{CStab}_T = S_{C_1} \times \cdots \times S_{C_\nu}$, where $S_{C_i}$ is the symmetric group on the elements of column $i$, is called the column stabilizer of $T$. The irreducible representations of the symmetric group $S_n$ are in one-to-one correspondence with the partitions of $n$, and they are given by the Specht modules, as explained below. For $\lambda \vdash n$, the polytabloid associated with $T$ is defined by $e_T := \sum_{\sigma \in \operatorname{CStab}_T} \operatorname{sgn}(\sigma)\, \sigma\{T\}$. Then for a partition $\lambda \vdash n$, the Specht module $S^\lambda$ is the submodule of the permutation module $M^\lambda$ spanned by the polytabloids $e_T$. The dimension of $S^\lambda$ is given by the number of standard Young tableaux for $\lambda \vdash n$, which we will denote by $s_\lambda$.
A classical construction of Specht realizes the Specht modules as submodules of the polynomial ring (see [35]): For $\lambda \vdash n$ let $T_\lambda$ be a standard Young tableau of shape $\lambda$ and $C_1,\dots,C_\nu$ be the columns of $T_\lambda$. To $T_\lambda$ we associate the monomial $X^{T_\lambda} := \prod_{i=1}^{n} X_i^{m(i)-1}$, where $m(i)$ is the index of the row of $T_\lambda$ containing $i$. Note that for any $\lambda$-tabloid $\{T_\lambda\}$ the monomial $X^{T_\lambda}$ is well defined, and the mapping $\{T_\lambda\} \mapsto X^{T_\lambda}$ is an $S_n$-isomorphism. For any column $C_i$ of $T_\lambda$ we denote by $C_i(j)$ the element in its $j$-th row, and we associate to $C_i$ the Vandermonde determinant $\prod_{j < k} \bigl( X_{C_i(j)} - X_{C_i(k)} \bigr)$. The Specht polynomial $\operatorname{sp}_{T_\lambda}$ associated to $T_\lambda$ is defined as $\operatorname{sp}_{T_\lambda} := \sum_{\sigma \in \operatorname{CStab}_{T_\lambda}} \operatorname{sgn}(\sigma)\, \sigma\bigl(X^{T_\lambda}\bigr)$, where $\operatorname{CStab}_{T_\lambda}$ is the column stabilizer of $T_\lambda$; equivalently, $\operatorname{sp}_{T_\lambda}$ is the product of the Vandermonde determinants of the columns of $T_\lambda$. By the $S_n$-isomorphism $\{T_\lambda\} \mapsto X^{T_\lambda}$, $S_n$ acts on $\operatorname{sp}_{T_\lambda}$ in the same way as on the polytabloid $e_{T_\lambda}$. If $T_{\lambda,1},\dots,T_{\lambda,k}$ denote all standard Young tableaux associated to $\lambda$, then the polynomials $\operatorname{sp}_{T_{\lambda,1}},\dots,\operatorname{sp}_{T_{\lambda,k}}$ are called the Specht polynomials associated to $\lambda$. We then have the following proposition; see [35].

Proposition 4.5 The Specht polynomials $\operatorname{sp}_{T_{\lambda,1}},\dots,\operatorname{sp}_{T_{\lambda,k}}$ span an $S_n$-submodule of $\mathbb{R}[X]$ which is isomorphic to the Specht module $S^\lambda$.
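The product-of-Vandermondes description makes Specht polynomials easy to evaluate; the following sketch (with a hypothetical tableau of shape $(2,2)$, and helper names that are ours) builds $\operatorname{sp}_T$ from the columns:

```python
def vandermonde(xs):
    """Vandermonde determinant prod_{i<j} (xs[i] - xs[j])."""
    prod = 1.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            prod *= xs[i] - xs[j]
    return prod

def specht_poly(columns):
    """Specht polynomial of a tableau given by its columns (1-based entries):
    the product of the Vandermonde determinants in the column variables."""
    def sp(x):
        out = 1.0
        for col in columns:
            out *= vandermonde([x[i - 1] for i in col])
        return out
    return sp

# Tableau of shape (2, 2) with columns (1, 3) and (2, 4):
sp = specht_poly([(1, 3), (2, 4)])
x = [1.0, 4.0, 2.0, 7.0]
assert sp(x) == (x[0] - x[2]) * (x[1] - x[3])
```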
The Specht polynomials identify a submodule of R[X ] isomorphic to S λ . In order to get a decomposition of the entire ring R[X ] we will use a generalization of this construction which is described in the next section.

Higher Specht Polynomials and the Decomposition of R[X]
In what follows we will need to understand the decomposition of the polynomial ring $\mathbb{R}[X]$ and of the $S_n$-module $H_{n,d}$ into $S_n$-irreducible representations. Notice that such a decomposition is not unique. It is classically known that the ring $\mathbb{R}[X]$ is a free module of rank $n!$ over the ring of symmetric polynomials. Similarly, every isotypic component is a free $\mathbb{R}[X]^{S_n}$-module. Therefore, one general strategy for obtaining a symmetry basis of $\mathbb{R}[X]$ consists in building a free module basis for $\mathbb{R}[X]$ over $\mathbb{R}[X]^{S_n}$ which additionally is symmetry adapted, i.e., which respects a decomposition into irreducible $S_n$-modules. One such construction, which generalizes Specht's original construction presented above, is due to Ariki et al. [1]. To a standard Young tableau $T$ one associates a reading word $w(T)$ and an index word $i(w(T))$, whose letter sum is the so-called charge $c(T)$; for one such tableau the resulting word is given by $w(T) = 31524$, with $i(w(T)) = 10001$.

Definition 4.8 Let $\lambda \vdash n$ and $T$ be a $\lambda$-tableau. Then the Young symmetrizer associated to $T$ is the element in the group algebra $\mathbb{R}[S_n]$ defined to be $\varepsilon_T := \sum_{\tau \in \operatorname{CStab}_T} \sum_{\sigma \in \operatorname{RStab}_T} \operatorname{sgn}(\tau)\, \tau\sigma$. Now let $T$ and $V$ be standard Young tableaux of the same shape, and define the higher Specht polynomial $F^V_T$ associated with the pair $(T, V)$ to be the image under $\varepsilon_T$ of the monomial determined by the index word of $V$.
For $\lambda \vdash n$ we will denote by $F_\lambda$ the set of higher Specht polynomials associated to pairs of standard Young tableaux of shape $\lambda$, and by $F$ the union of the sets $F_\lambda$ over all $\lambda \vdash n$.

Remark 4.9
Let $s_\lambda$ denote the number of standard Young tableaux of shape $\lambda$. It follows from the so-called Robinson-Schensted correspondence (see [31]) that $\sum_{\lambda \vdash n} s_\lambda^2 = n!$. Therefore the cardinality of $F$ is exactly $n!$.
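The identity $\sum_{\lambda \vdash n} s_\lambda^2 = n!$ can be verified for small $n$ using the hook length formula for $s_\lambda$ (the helper functions below are ours, not notation from the text):

```python
from math import factorial

def partitions(k, max_part=None):
    """All partitions of k as weakly decreasing tuples."""
    if max_part is None:
        max_part = k
    if k == 0:
        return [()]
    out = []
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            out.append((first,) + rest)
    return out

def num_syt(lam):
    """Number s_lambda of standard Young tableaux (hook length formula)."""
    n = sum(lam)
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1                             # cells to the right
            leg = sum(1 for r in lam[i + 1:] if r > j)    # cells below
            hooks *= arm + leg + 1
    return factorial(n) // hooks

# Robinson-Schensted: sum over lambda |- n of s_lambda^2 equals n!.
for n in range(1, 8):
    assert sum(num_syt(l) ** 2 for l in partitions(n)) == factorial(n)
```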
The importance of the higher Specht polynomials now is summarized in the following theorem which can be found in [1, Thm. 1].

Theorem 4.10
The following holds for the set $F$ of higher Specht polynomials:
(i) The set $F$ is a free basis of $\mathbb{R}[X]$ as a module over the invariant ring $\mathbb{R}[X]^{S_n}$.
(ii) For a fixed standard tableau $V$ of shape $\lambda \vdash n$, the span of the higher Specht polynomials $F^V_T$, where $T$ runs over all standard $\lambda$-tableaux, is an irreducible $S_n$-module isomorphic to the Specht module $S^\lambda$.

For every $\lambda \vdash n$ we denote by $V^\lambda_0$ the standard $\lambda$-tableau with entries $\{1,\dots,\lambda_1\}$ in the first row, $\{\lambda_1+1,\dots,\lambda_1+\lambda_2\}$ in the second row, and so on. Consider the set $Q_\lambda := \{ F^{V^\lambda_0}_T : T \text{ runs over all standard } \lambda\text{-tableaux} \}$, which is of cardinality $s_\lambda$. The set $Q_\lambda$ is a symmetry basis of the vector space spanned by $F$. Using these polynomials we define $s_\lambda \times s_\lambda$ matrix polynomials $Q^\lambda$ by $Q^\lambda(T,T') := \operatorname{sym}\bigl( F^{V^\lambda_0}_T\, F^{V^\lambda_0}_{T'} \bigr)$, where $T, T'$ run over all standard $\lambda$-tableaux. Since by (i) in Theorem 4.10 we know that every polynomial $h \in \mathbb{R}[X]$ can be uniquely written as a linear combination of elements in $F$ with coefficients in $\mathbb{R}[X]^{S_n}$, the following theorem can be thought of as a generalization of Corollary 4.4 to sums of squares from an $S_n$-module with coefficients in an $S_n$-invariant ring (see also [12]).

Theorem 4.11 A symmetric form $p \in H^S_{n,2d}$ is a sum of squares if and only if it can be written as $p = \sum_{\lambda \vdash n} \langle B_\lambda, Q^\lambda \rangle$, where $Q^\lambda$ is defined in (4.2) and each $B_\lambda \in \mathbb{R}[X]^{s_\lambda \times s_\lambda}$ is a sum of symmetric squares matrix polynomial, i.e., $B_\lambda(x) = L^t(x)L(x)$ for some matrix polynomial $L(x)$ whose entries are symmetric polynomials.
Each entry of the matrix $Q^\lambda$ is a symmetric polynomial and thus can be represented as a polynomial in any set of generators of the ring of symmetric polynomials. We will use the power means $p_1,\dots,p_n$ to phrase the next theorem; any other choice works similarly. With this choice of basis it follows that there exists a matrix polynomial $\tilde Q^\lambda(z_1,\dots,z_n)$ in $n$ variables $z_1,\dots,z_n$ such that $\tilde Q^\lambda(p_1,\dots,p_n) = Q^\lambda$. With this notation one can restate Theorem 4.11 as Theorem 4.12. While Theorems 4.11 and 4.12 give a characterization of symmetric sums of squares in a given number of variables, we need to understand the behavior of the $S_n$-module $H_{n,d}$ for polynomials of a fixed degree $d$ in a growing number of variables $n$. This will be done in the next section.

The Cone $\Sigma^S_{n,2d}$
A symmetric sum of squares $f \in \Sigma^S_{n,2d}$ has to be a sum of squares of forms from $H_{n,d}$. Therefore we now consider restricting the degree of the squares in the underlying sum of squares representation. With a slight abuse of notation we denote by $F_{n,d}$ the vector space spanned by the higher Specht polynomials for the group $S_n$ of degree at most $d$. Further, for a partition $\lambda \vdash n$ let $F_{\lambda,d}$ denote the span of the higher Specht polynomials of degree at most $d$ corresponding to the Specht module $S^\lambda$, i.e., $F_{\lambda,d}$ is exactly the isotypic component of $F_{n,d}$ corresponding to $S^\lambda$. In order to describe this isotypic component combinatorially, recall that the degree of the higher Specht polynomial $F^S_T$ is given by the charge $c(S)$ of $S$. Thus it follows from the above construction that $F_{\lambda,d}$ is spanned by the higher Specht polynomials $F^S_T$ of shape $\lambda$ with $c(S) \le d$. For every $\lambda \vdash 2d$ we choose $m_\lambda$ many higher Specht polynomials $q^\lambda_1,\dots,q^\lambda_{m_\lambda}$ that form a symmetry basis of the $\lambda$-isotypic component of $F_{2d,d}$. Let $q_\lambda = (q^\lambda_1,\dots,q^\lambda_{m_\lambda})$ be the vector with entries $q^\lambda_i$. As before we construct the matrix $Q^\lambda_{2d} = \operatorname{sym}_{2d} q^t_\lambda q_\lambda$. Further, we define the matrix $Q^\lambda_n$ by $Q^\lambda_n = \operatorname{sym}_n q^t_\lambda q_\lambda$, i.e., $Q^\lambda_n(i,j) = \operatorname{sym}_n\, q^\lambda_i q^\lambda_j$.
By construction we have the following.

Proposition 4.14 The matrix $Q^\lambda_n$ is the $S_n$-symmetrization of the matrix $Q^\lambda_{2d}$.

We now give a parametric description of the family of cones $\Sigma^S_{n,2d}$. Note again that this statement is given in terms of a particular basis, but it can similarly be stated with any set of generators.

or, equivalently, every entry $B_\lambda(i,j)$ of $B_\lambda$ is a weighted homogeneous form with $\deg_w B_\lambda(i,j) = 2d - \deg q^\lambda_i - \deg q^\lambda_j$.
Proof In order to apply Theorem 4.11 to our fixed degree situation we have to show that the forms {q λ 1 , . . . , q λ m λ } when viewed as functions in n variables also form a symmetry basis of the λ (n) -isotypic component of F n,d for all n ≥ 2d. Indeed, consider a standard Young tableau t λ of shape λ and construct a standard Young tableau t λ (n) of shape λ (n) by adding numbers 2d + 1, . . . , n as rightmost entries of the top row of t λ (n) , while keeping the rest of the filling of t λ (n) the same as for t λ . It follows by construction of the Specht polynomials that We may assume, the q (λ) k were chosen so that they map to sp t λ by an S 2d -isomorphism. We observe that sp t λ (and therefore sp t (n) λ ) and q λ k do not involve any of the variables X j , j > 2d. Therefore both are stabilized by S n−2d (operating on the last n − 2d variables), and further the action on the first 2d variables is exactly the same. Thus there is an S n -isomorphism mapping q λ k to sp t (n) λ and the S n -modules generated by the two polynomials are isomorphic. Therefore it follows that q (λ) k also form a symmetry basis of the λ (n) -isotypic component of F n,d . 16 We remark that the sum of squares decomposition of f = λ 2d B λ , Q λ n , with B λ = L t λ L λ can be read off as follows:

Remark 4.16
In particular, if for a fixed $\lambda \vdash n$ and for every $1 \leq i \leq m_\lambda$ we denote $\delta_i := d - \deg q^\lambda_i$, then the set of polynomials $\{p_{(1)}^{\delta_i}\, q^\lambda_i : 1 \leq i \leq m_\lambda\}$ is a symmetry basis of the isotypic component of $H_{n,d}$ corresponding to $\lambda$.

The Dual Cone of Symmetric Sums of Squares
Recall that for a convex cone $K \subset \mathbb{R}^n$ the dual cone $K^*$ is defined as $K^* = \{\ell \in (\mathbb{R}^n)^* : \ell(x) \geq 0 \text{ for all } x \in K\}$. Our analysis of the dual cone $(\Sigma^S_{n,2d})^*$ proceeds similarly to the analysis of the dual cone in the non-symmetric situation given in [4,6].
Let $\mathcal{S}_{n,d}$ be the vector space of real quadratic forms on $H_{n,d}$, and let $\mathcal{S}^+_{n,d}$ be the cone of positive semidefinite quadratic forms in $\mathcal{S}_{n,d}$. An element $Q \in \mathcal{S}_{n,d}$ is said to be $S_n$-invariant if $Q(f) = Q(\sigma(f))$ for all $\sigma \in S_n$ and $f \in H_{n,d}$. We will denote by $\tilde{\mathcal{S}}_{n,d}$ the space of $S_n$-invariant quadratic forms on $H_{n,d}$. Further, we can identify a linear functional $\ell \in (H^S_{n,2d})^*$ with a quadratic form $Q_\ell$ defined by $Q_\ell(f) := \ell(\operatorname{sym}_n f^2)$ for $f \in H_{n,d}$. Let $\tilde{\mathcal{S}}^+_{n,d}$ be the cone of positive semidefinite forms in $\tilde{\mathcal{S}}_{n,d}$, i.e., $\tilde{\mathcal{S}}^+_{n,d} = \tilde{\mathcal{S}}_{n,d} \cap \mathcal{S}^+_{n,d}$. The following lemma is straightforward, but very important, as it allows us to identify the elements $\ell \in (\Sigma^S_{n,2d})^*$ of the dual cone with quadratic forms $Q_\ell$ in $\tilde{\mathcal{S}}^+_{n,d}$. Since for $\ell \in (H^S_{n,2d})^*$ we have $Q_\ell \in \tilde{\mathcal{S}}_{n,d}$, Schur's Lemma again applies and we can use the symmetry basis constructed above to simplify the condition that $Q_\ell$ is positive semidefinite. In order to arrive at a dual statement of Corollary 4.15 we construct the following matrices:
With this notation the following lemma is just the dual version of Corollary 4.15. In order to examine the kernels of quadratic forms we use the following construction. Let $W \subset H_{n,d}$ be any linear subspace. We define $W^{\langle 2 \rangle}$ to be the symmetrization of the degree $2d$ part of the ideal generated by $W$: $W^{\langle 2 \rangle} = \operatorname{span}\{\operatorname{sym}_n(fg) : f \in W,\ g \in H_{n,d}\}$. In Lemma 4.17 we identified the dual cone $(\Sigma^S_{n,2d})^*$ with the linear section of the cone of positive semidefinite quadratic forms $\mathcal{S}^+_{n,d}$ with the subspace $\tilde{\mathcal{S}}_{n,d}$ of invariant quadratic forms. By a slight abuse of terminology we think of positive semidefinite forms $Q_\ell$ as elements of the dual cone $(\Sigma^S_{n,2d})^*$. The following important proposition is a straightforward adaptation of the equivalent result in the non-symmetric case [4,6]. The dual correspondence yields that any facet $F$ of a cone $K$, i.e., any maximal face of $K$, is given by an extreme ray of the dual cone $K^*$. More precisely, for any maximal face $F$ of $K$ there exists an extreme ray of $K^*$ spanned by a linear functional $\ell \in K^*$ such that $F = \{x \in K : \ell(x) = 0\}$. We now aim to characterize the extreme rays of $(\Sigma^S_{n,2d})^*$ that are not extreme rays of the cone $(\mathcal{P}^S_{n,2d})^*$. For $v \in \mathbb{R}^n$ define a linear functional $\ell_v$ by $\ell_v(f) := f(v)$; we say that $\ell_v$ corresponds to point evaluation at $v$. It is easy to show, with the same proof as in the non-symmetric case, that the extreme rays of the cone $(\mathcal{P}^S_{n,2d})^*$ are precisely the point evaluations $\ell_v$ (see [5, Chap. 4] for the non-symmetric case). Therefore we need to identify extreme rays of $(\Sigma^S_{n,2d})^*$ which are not point evaluations. We now examine the case of degree 4 in detail, and give an explicit construction of an element of $(\Sigma^S_{n,4})^*$ which does not belong to $(\mathcal{P}^S_{n,4})^*$.
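To see why point evaluations give elements of the dual cone, note that $Q_{\ell_v}(p) = \ell_v(\operatorname{sym}_n p^2)$ is an average of squares and hence positive semidefinite. The toy check below (our own naming, with $n = 3$ and $d = 1$ for brevity) builds the Gram matrix of $Q_{\ell_v}$ on the basis $\{X_1, X_2, X_3\}$ and verifies that it is non-negative on sample arguments:

```python
from itertools import permutations
from fractions import Fraction
import random

def point_eval_gram(v):
    """Gram matrix of p -> average over sigma in S_n of p(sigma.v)^2
    on the monomial basis {X_1, ..., X_n} of the linear forms H_{n,1}."""
    n = len(v)
    perms = list(permutations(v))
    G = [[Fraction(0)] * n for _ in range(n)]
    for w in perms:
        for i in range(n):
            for j in range(n):
                G[i][j] += Fraction(w[i] * w[j], len(perms))
    return G

v = (3, 1, 2)
G = point_eval_gram(v)
assert all(G[i][j] == G[j][i] for i in range(3) for j in range(3))

# Q_{l_v} is an average of the squares <sigma.v, p>^2, hence psd:
random.seed(0)
for _ in range(100):
    p = [random.randint(-5, 5) for _ in range(3)]
    val = sum(G[i][j] * p[i] * p[j] for i in range(3) for j in range(3))
    assert val >= 0
```

The same averaging argument works for any degree $d$; only the monomial basis grows.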

Symmetric Quartic Sums of Squares
We now look at the decomposition of $H_{n,2}$ as an $S_n$-module in order to apply Theorem 4.2 and characterize all symmetric sums of squares of degree 4. We first describe the forms appearing in the resulting decomposition. These are defined as the symmetrizations of the pairwise products of those Specht polynomials which generate the corresponding Specht modules in degree 2. In degree 2 the Specht polynomials $X_n - X_1$ and $X_n^2 - X_1^2$ generate two distinct irreducible $S_n$-modules isomorphic to $S^{(n-1,1)}$, and the Specht polynomial $(X_{n-1} - X_1)(X_n - X_2)$ generates a module isomorphic to $S^{(n-2,2)}$. Thus we have:

Theorem 5.1 Let $f^{(n)} \in H_{n,4}$ be symmetric and $n \geq 4$. If $f^{(n)}$ is a sum of squares then it can be written in the form
Then the symmetrization can be calculated quite directly, since every product involves at most 4 variables. These calculations express the symmetrized products in terms of the power means $p_{(4)}$, $p_{(3,1)}$, $p_{(2^2)}$, $p_{(2,1^2)}$ and $p_{(1^4)}$, which gives exactly the statement in the theorem.
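As a sanity check of the module decomposition used here, the following sketch (our own construction; we homogenize $X_n - X_1$ to degree 2 by multiplying with the power sum $p_{(1)}$, an assumption for illustration) verifies for $n = 4$ that the trivial isotypic part together with the $S_4$-orbits of the three generators spans all of $H_{4,2}$, whose dimension is $10 = 2 + 3 + 3 + 2$:

```python
from fractions import Fraction
from itertools import permutations, combinations_with_replacement

n = 4
monos = list(combinations_with_replacement(range(n), 2))  # deg-2 monomials X_i X_j

def vec(poly):
    """Coefficient vector of {sorted index pair: coeff} w.r.t. monos."""
    return [Fraction(poly.get(m, 0)) for m in monos]

def act(sigma, poly):
    out = {}
    for (i, j), c in poly.items():
        key = tuple(sorted((sigma[i], sigma[j])))
        out[key] = out.get(key, 0) + c
    return out

# trivial part: p_(1)^2 and p_(2)
p1sq = {(i, j): (1 if i == j else 2) for i in range(n) for j in range(i, n)}
p2 = {(i, i): 1 for i in range(n)}
# module generators (variables indexed 0..3, so X_1 = x0, ..., X_4 = x3):
g1 = {(3, 3): 1, (0, 0): -1, (1, 3): 1, (2, 3): 1, (0, 1): -1, (0, 2): -1}  # p_(1)(X_4 - X_1)
g2 = {(3, 3): 1, (0, 0): -1}                                                # X_4^2 - X_1^2
g3 = {(2, 3): 1, (1, 2): -1, (0, 3): -1, (0, 1): 1}                         # (X_3 - X_1)(X_4 - X_2)

vectors = [vec(p1sq), vec(p2)]
for sigma in permutations(range(n)):
    for g in (g1, g2, g3):
        vectors.append(vec(act(sigma, g)))

def rank(rows):
    rows = [list(r) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# 2 (trivial) + 3 + 3 (two copies of S^{(3,1)}) + 2 (S^{(2,2)}) = 10 = dim H_{4,2}
assert rank(vectors) == len(monos) == 10
```

The exact rank computation over the rationals confirms that the five pieces intersect trivially and exhaust $H_{4,2}$.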

The Boundary of $\Sigma^S_{n,4}$
We now apply Proposition 4.20 to the case of degree 4 and examine the possible kernels of extreme rays of $(\Sigma^S_{n,4})^*$ that do not come from a point evaluation.

Lemma 5.2
Suppose a linear functional $\ell$ spans an extreme ray of $(\Sigma^S_{n,4})^*$ that is not an extreme ray of $(\mathcal{P}^S_{n,4})^*$. Let $Q_\ell$ be the quadratic form corresponding to $\ell$. Then $\operatorname{Ker} Q_\ell \cong S^{(n)} \oplus 2\,S^{(n-1,1)}$ and $n$ is odd.
Proof Since $Q_\ell$ is an $S_n$-invariant quadratic form, its kernel $\operatorname{Ker} Q_\ell \subseteq H_{n,2}$ is an $S_n$-module. It follows from the arguments in the proof of Theorem 5.1 that $\operatorname{Ker} Q_\ell$ decomposes as $\alpha\,S^{(n)} \oplus \beta\,S^{(n-1,1)} \oplus \gamma\,S^{(n-2,2)}$, where $\alpha, \beta \in \{0, 1, 2\}$ and $\gamma \in \{0, 1\}$. We now examine the possible combinations of $\alpha$, $\beta$, and $\gamma$. As above let $W$ denote the kernel of $Q_\ell$. We first observe that $\alpha = 2$ is not possible: if $\alpha = 2$ then we have $p_{(2)} \in W$ and so $p_{(2)}^2 \in W^{\langle 2 \rangle}$, which is a contradiction since $p_{(2)}^2$ is not on the boundary of $\Sigma^S_{n,4}$. By Proposition 4.20 the kernel $W$ of $Q_\ell$ must be maximal. Let $w \in \mathbb{R}^n$ be the all-ones vector $w = (1, \ldots, 1)$. We now observe that $\alpha = 0$ is also impossible: if $\alpha = 0$ then all forms in the kernel $W$ vanish at $w$. Therefore $\operatorname{Ker} Q_\ell \subseteq \operatorname{Ker} Q_{\ell_w}$, and by Proposition 4.20 we have $Q_\ell = \lambda Q_{\ell_w}$, which is a contradiction, since $\ell$ does not correspond to a point evaluation. Thus we must have $\alpha = 1$.
Since we have $\dim H^S_{n,4} = 5$, from Proposition 4.20 we see that $\dim W^{\langle 2 \rangle} = 4$. This excludes the case $\beta = 0$, since even with $\alpha = 1$ and $\gamma = 1$ the dimension of $W^{\langle 2 \rangle}$ would be at most 3. Now suppose that $\beta = 2$, i.e., the $S_n$-module generated by $(X_1 - X_2)\,p_{(1)}$ and $X_1^2 - X_2^2$ lies in $W$, as well as a polynomial $q = a\,p_{(1)}^2 + b\,p_{(2)}$. We consider the symmetrizations of the five pairwise products and express these in the basis $\{p_{(4)}, p_{(3,1)}, p_{(2^2)}, p_{(2,1^2)}, p_{(1^4)}\}$. Now the condition $\dim W^{\langle 2 \rangle} = 4$ implies that these five products cannot be linearly independent, and an explicit calculation of the determinant of the corresponding matrix $M$ yields $\det M = b(a + b)$. We now examine the possible roots of this determinant. In the case $a = -b$ all polynomials in $W$ (even if $\gamma = 1$) vanish at $(1, \ldots, 1)$, which is excluded. Therefore the only possible case is $b = 0$. In that case, by calculating the kernel of $M$, we see that the unique (up to a constant multiple) linear functional $\ell$ vanishing on $W^{\langle 2 \rangle}$ is the one given in (5.1). Using (5.1) we observe that we must have $\gamma = 0$, since $\ell(\operatorname{sym}_n (X_1 - X_2)^2 (X_3 - X_4)^2) > 0$ for $n \geq 4$. Now suppose that $n$ is even and let $w \in \mathbb{R}^n$ be given by $w = (1, \ldots, 1, -1, \ldots, -1)$, where 1 and $-1$ each occur $n/2$ times. It is easy to verify that $f(w) = 0$ for all $f \in W$. Therefore it follows that $W \subseteq \operatorname{Ker} Q_{\ell_w}$, which is a contradiction, since $W$ is the kernel of an extreme ray that does not come from a point evaluation.
When $n$ is odd the forms in $W$ have no common zeros, and therefore $\ell$ is not a positive combination of point evaluations. It is not hard to verify that $\ell$ is non-negative on squares and that the kernel $W$ is maximal. Therefore by Proposition 4.20 we know that $\ell$ spans an extreme ray of $(\Sigma^S_{n,4})^*$. Finally we need to deal with the case $\alpha = \beta = \gamma = 1$. Suppose that the $S_n$-module $W$ is generated by three polynomials $q_1$, $q_2$, $q_3$, depending on parameters $a, b, c, d$. Again we consider the symmetrizations of the five pairwise products and represent these in a matrix $M$. Explicit calculations now show that
$\det M = -(a+b)\bigl(ad^2n^2 - 4ad^2n + 4ad^2 + bd^2n^2 + 4bcdn + bc^2n - 4bcd - bc^2\bigr)$.
We see that $q_3(w) = 0$, and from the above equation it also follows that $f(w) = 0$ for all $f$ in the $S_n$-module generated by $q_2$. A direct calculation shows that (5.2) also implies $q_1(w) = 0$. Thus we have $W \subseteq \operatorname{Ker} Q_{\ell_w}$, which is a contradiction by Proposition 4.20, since $W$ is the kernel of an extreme ray that does not come from a point evaluation. We remark that it is possible to show that the functional vanishing on $W^{\langle 2 \rangle}$ and giving rise to $W$ is in fact a multiple of $\ell_w$, but this is not necessary to finish the proof.
The above description allows us to explicitly characterize degree 4 symmetric sums of squares that are positive and on the boundary of $\Sigma^S_{n,4}$.

Theorem 5.3 Let $n \geq 4$ and let $f^{(n)} \in H_{n,4}$ be symmetric, strictly positive, and on the boundary of $\Sigma^S_{n,4}$. Then (i) either $f^{(n)}$ can be written as

(ii) or, if $n$ is odd, then $f^{(n)}$ may have the form

with coefficients $a, b_{11}, b_{12}, b_{22} \in \mathbb{R}$ which additionally satisfy the conditions in (5.3).

Proof Suppose that $f^{(n)}$ is a strictly positive form on the boundary of $\Sigma^S_{n,4}$. Then there exists a non-trivial functional $\ell$ spanning an extreme ray of the dual cone $(\Sigma^S_{n,4})^*$ such that $\ell(f^{(n)}) = 0$. Let $W_\ell \subset H_{n,2}$ denote the kernel of $Q_\ell$. In view of Lemma 5.2 we see that there are two possible situations that we need to take into consideration.
Let $q \in H_{n,2}$. By Proposition 4.20 we have $q \in W_\ell$ if and only if the $S_n$-linear map $p \mapsto \ell(\operatorname{sym}_n(pq))$ is the zero map on $H_{n,2}$.
If $a = 0$ we find the corresponding linear functional explicitly. By Lemma 5.2 we must have $n$ odd in order for $\ell$ not to be a point evaluation, and this sends us to case (ii) discussed below. Meanwhile, with $a, c \neq 0$, solving the linear system (up to a common multiple) yields exactly the conditions in (5.3).
(ii) If $n$ is odd we know from Lemma 5.2 that there is one additional case: $f^{(n)}$ is the sum of the square $(a\,p_{(1^2)})^2$ and a sum of squares of elements from the isotypic component of $H_{n,2}$ corresponding to the representation $S^{(n-1,1)}$. Since $f^{(n)}$ is strictly positive, we must have $a \neq 0$ (otherwise $f^{(n)}$ has a zero at $(1, \ldots, 1)$), and it also follows that the matrix $(b_{ij})_{2 \times 2}$ must be positive definite. Therefore we get the announced decomposition from Theorem 5.1.
Note that although the first symmetric counterexample by Choi and Lam in four variables gives $\Sigma^S_{4,4} \subsetneq \mathcal{P}^S_{4,4}$, it does not immediately imply that we have strict containment for all $n$. However, using our methods, one can produce a sequence of strictly positive symmetric quartics that lie on the boundary of $\Sigma^S_{n,4}$ for all $n$ as a witness for the strict inclusion. The associated matrices are all positive semidefinite for $n \geq 4$, and therefore we have $\ell \in (\Sigma^S_{n,4})^*$. This implies that $f^{(n)} \in \partial\Sigma^S_{n,4}$. Now we argue that for any $n \in \mathbb{N}$ the forms $f^{(n)}$ are strictly positive. By Corollary 3.3 it follows that $f^{(n)}$ has a zero if and only if there exists $k \in \{1/n, \ldots, (n-1)/n\}$ such that the associated bivariate form $h_k(x, y)$ has a real projective zero. Since $f^{(n)}$ is a sum of squares and therefore non-negative, we also know that $h_k(x, y)$ is non-negative for all $k \in \{1/n, \ldots, (n-1)/n\}$. Therefore the real projective roots of $h_k(x, y)$ must have even multiplicity. This implies that $h_k(x, y)$ has a real root only if its discriminant $\delta(h_k)$, viewed as a polynomial in the parameter $k$, has a root in the admissible range for $k$. An explicit calculation shows that $\delta(h_k)$ vanishes only for values of $k$ outside this range.
Thus we see that for all natural numbers $n$ there is no $k \in \{1/n, \ldots, (n-1)/n\}$ such that $h_k(x, y)$ has a real projective zero. Therefore we conclude that for any $n \in \mathbb{N}$ the form $f^{(n)}$ is strictly positive.
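The discriminant mechanism used in this argument can be replicated in a small instance. The sketch below (our own helper code, not the authors'; the sample quartics are illustrations, not the actual $h_k$) computes resultants via the Sylvester matrix and confirms that a quartic with a double real root has vanishing discriminant, while a quartic without multiple roots does not:

```python
from fractions import Fraction

def sylvester(p, q):
    """Sylvester matrix of two polynomials given as coefficient lists,
    highest degree first."""
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    rows = []
    for i in range(n):
        rows.append([0] * i + list(p) + [0] * (size - m - 1 - i))
    for i in range(m):
        rows.append([0] * i + list(q) + [0] * (size - n - 1 - i))
    return rows

def det(mat):
    """Determinant by exact Gaussian elimination over the rationals."""
    mat = [[Fraction(x) for x in row] for row in mat]
    d = Fraction(1)
    for i in range(len(mat)):
        piv = next((r for r in range(i, len(mat)) if mat[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            mat[i], mat[piv] = mat[piv], mat[i]
            d = -d
        d *= mat[i][i]
        for r in range(i + 1, len(mat)):
            f = mat[r][i] / mat[i][i]
            for c in range(i, len(mat)):
                mat[r][c] -= f * mat[i][c]
    return d

def deriv(p):
    m = len(p) - 1
    return [c * (m - k) for k, c in enumerate(p[:-1])]

# (t-1)^2 (t^2+1): double real root => Res(h, h'), and hence disc h, is 0
h = [1, -2, 2, -2, 1]
assert det(sylvester(h, deriv(h))) == 0
# t^4 + 1: no multiple roots => the discriminant does not vanish
g = [1, 0, 0, 0, 1]
assert det(sylvester(g, deriv(g))) != 0
```

As the text notes, a vanishing discriminant only certifies a multiple projective root; whether it is real must be checked separately.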
From the above example the following characterization, which was recently and independently given by Goel et al. [13], is an immediate consequence.

Theorem 5.5
The inclusion $\Sigma^S_{n,2d} \subset \mathcal{P}^S_{n,2d}$ is strict except in the cases of symmetric bivariate forms, symmetric quadratic forms, and symmetric ternary quartics.

The resulting set of symmetrized monomials is a monomial mean basis of $H^S_{n,2d}$. We observe that with this choice of basis of $H^S_{n,2d}$ the transition maps $\varphi_{m,n}$ are given by identity matrices. Since the stabilizer of the monomial $X_1^{\mu_1} \cdots X_r^{\mu_r}$ is isomorphic to $S_{s_1} \times \cdots \times S_{s_t} \times S_{n-r}$, it follows that its symmetrization is, up to scaling, the monomial symmetric polynomial associated to $\mu$.
Clearly, both cones are full-dimensional.
In order to establish the result for the power mean basis, we first have to study the relationship between these two bases.

Proposition 6.4 Let $M_n$ be the transition matrix between the monomial mean and power mean bases of $H^S_{n,2d}$. Then $M_n$ converges entry-wise to a full rank matrix $M^*$ as $n$ grows to infinity.
Therefore we see that asymptotically each power mean agrees with the corresponding monomial mean up to coefficients $a_{\mu,\nu}(n)$ that tend to 0 as $n \to \infty$. The proposition now follows. With these preparations, the proof of Theorem 2.8 will be immediate after the following two lemmas. From the above lemma we can easily obtain the following generalization, which shows that the conclusions also hold if the limit of the linear maps $M_i$ is any full-rank map. The existence of the limits of the sequences of cones and their full-dimensionality can now be established by translating from Proposition 6.3.
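The convergence in Proposition 6.4 can be made explicit in a small case. For the partition $(2,2)$ in degree 4 one has, with the normalizations we assume here for illustration (power mean $p_{(2)}^2$ versus monomial means averaged over their orbits), the exact identity $p_{(2^2)} = \tfrac{1}{n}\,\hat{m}_{(4)} + \tfrac{n-1}{n}\,\hat{m}_{(2,2)}$, so the corresponding row of the transition matrix tends to a unit vector as $n \to \infty$:

```python
from fractions import Fraction
from itertools import combinations

def power_mean_22(x):
    """p_(2^2) = p_2^2 with p_2 the second power mean."""
    n = len(x)
    p2 = Fraction(sum(t * t for t in x), n)
    return p2 * p2

def monomial_means(x):
    """Orbit averages of X_1^4 and X_1^2 X_2^2."""
    n = len(x)
    m4 = Fraction(sum(t ** 4 for t in x), n)
    m22 = Fraction(sum((x[i] * x[j]) ** 2 for i, j in combinations(range(n), 2)),
                   n * (n - 1) // 2)
    return m4, m22

x = [3, -1, 4, 1, -5, 9, 2, 6]      # arbitrary integer test vector, n = 8
n = len(x)
m4, m22 = monomial_means(x)
# exact change of basis: the m22-coefficient (n-1)/n tends to 1, while the
# m4-coefficient 1/n tends to 0 as n grows
assert power_mean_22(x) == Fraction(1, n) * m4 + Fraction(n - 1, n) * m22
```

The identity holds for every vector $x$, since both sides are the same symmetric polynomial; the assertion checks it exactly in rational arithmetic.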
Proof of Theorem 2.8 We only give the proof for $\mathcal{S}_{2d}$, since the statement for $\mathcal{P}_{2d}$ follows in an analogous manner. We first observe that the sequence $\rho_{n,2d}$ is semi-nested; it follows that $\liminf \rho_{n,2d} = \bigcap_{n \geq 2d} \rho_{n,2d}$. We now apply Lemma 6.5 to the sequence $\rho_{n,2d}$, with $A_i = \varphi_{i,2d}$ and the $M_i$ being the transition maps between the monomial mean and power mean bases. From Proposition 6.4 we know that the maps $M_i$ converge to the identity. Therefore we see that $\mathcal{S}_{2d} \subseteq \bigcap_{n \geq 2d} \rho_{n,2d}$ and $\limsup \rho_{n,2d} \subseteq \mathcal{S}_{2d}$.
The theorem now follows, since the full-dimensionality is a direct consequence of Proposition 6.3.

Symmetric Mean Inequalities of Degree Four
In this last section we characterize quartic symmetric mean inequalities that are valid for all values of $n$. Recall from Section 2 that $\mathcal{P}_4$ denotes the cone of all sequences $\mathfrak{f} = (f^{(4)}, f^{(5)}, \ldots)$ of degree 4 power means that are non-negative for all $n$, and $\mathcal{S}_4$ the cone of such sequences that can be written as sums of squares.
In the case of quartic forms the elements of $\mathcal{P}_4$ can be characterized by a family of univariate polynomials, as Theorem 3.4 specializes to the following. By the above we must have $g(1,1) > 0$, and the same will be true for a sufficiently small perturbation $\tilde{\mathfrak{f}}$ of $\mathfrak{f}$. But then it follows by Proposition 7.1 that a neighborhood of $\mathfrak{f}$ is contained in $\mathcal{P}_4$, which contradicts the assumption that $\mathfrak{f} \in \partial\mathcal{P}_4$. Therefore there exists $\alpha \in (0,1)$ such that $f_\alpha(x,y)$ has a double real root. Now suppose $\mathfrak{f} \in \mathcal{P}_4$ and $f_\alpha(x,y)$ has a double real root for some $\alpha \in (0,1)$. Let $\mathfrak{f}_\varepsilon = \mathfrak{f} - \varepsilon\, p_{(2)}^2$. It follows that for all $\varepsilon > 0$ we have $\mathfrak{f}_\varepsilon \notin \mathcal{P}_4$, since the corresponding form is strictly negative at the double zero of $f_\alpha$. Thus $\mathfrak{f}$ is on the boundary of $\mathcal{P}_4$.
We now deduce a corollary from Theorem 5.1, completely describing the elements of $\mathcal{S}_4$. Proof We observe from Theorem 5.1 that the coefficients of the squares of symmetric polynomials and of the $(n-1,1)$ semi-invariants do not depend on $n$. Thus the cone generated by these sums of squares is the same for every $n$, and it corresponds precisely to the cone given in the statement of the corollary. Now observe that the limit of the square of the $(n-2,2)$ component is equal to $p_{(1^4)}/2 - p_{(2,1^2)} + p_{(2^2)}/2$, which is a sum of symmetric squares. Thus the squares from the $(n-2,2)$ component do not contribute anything in the limit.
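The limit claimed for the $(n-2,2)$ component can be checked numerically. Up to the normalization of the generator (an assumption of this sketch: we use the plain average over ordered quadruples of distinct indices, so our limit is a scalar multiple of the stated expression), the averaged square of $(X_i - X_j)(X_k - X_l)$ should approach $4\,(p_{(2^2)} - 2\,p_{(2,1^2)} + p_{(1^4)}) = 4\,(p_2 - p_1^2)^2$ in power means:

```python
from itertools import product

def avg_specht_square(x):
    """Average of ((x_i - x_j)(x_k - x_l))^2 over ordered quadruples
    of pairwise distinct indices."""
    n = len(x)
    total, count = 0.0, 0
    for i, j, k, l in product(range(n), repeat=4):
        if len({i, j, k, l}) == 4:
            total += ((x[i] - x[j]) * (x[k] - x[l])) ** 2
            count += 1
    return total / count

def limit_in_power_means(x):
    """4 * (p_(2^2) - 2 p_(2,1^2) + p_(1^4)) = 4 (p_2 - p_1^2)^2,
    with p_k the k-th power mean."""
    n = len(x)
    p1 = sum(x) / n
    p2 = sum(t * t for t in x) / n
    return 4 * (p2 - p1 * p1) ** 2

def rel_err(n):
    x = [0.0] * (n // 2) + [1.0] * (n // 2)   # test vector of 0s and 1s
    target = limit_in_power_means(x)
    return abs(avg_specht_square(x) - target) / target

# the averaged square approaches the power-mean expression as n grows
assert rel_err(24) < 0.15 < rel_err(8)
```

For the 0/1 test vector the average can also be computed in closed form as $m(m-1)/((2m-1)(2m-3))$ with $n = 2m$, which indeed tends to $1/4$; the numeric check above only uses the brute-force average.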
In order to characterize the elements on the boundary algebraically, recall that the discriminant $\operatorname{disc} f$ of a bivariate form $f$ is a homogeneous polynomial in the coefficients of $f$ which vanishes exactly on the set of forms with multiple projective roots. Note, however, that $\operatorname{disc} f = 0$ alone does not guarantee that $f$ has a double real root, since the multiple root may be complex. In our situation the discriminant factors into two factors $\delta_1$ and $\delta_2$, which are quadratic polynomials in $\alpha$. We now examine these factors $\delta_1$ and $\delta_2$, assuming the conditions on $a, b, c, d$ imposed by (5.3).
We are now in a position to show that $\mathcal{P}_4 = \mathcal{S}_4$.

Proof of Theorem 2.9
Since $\mathcal{S}_4 \subset \mathcal{P}_4$ and both sets are closed convex cones, it suffices to show that every $\mathfrak{f}$ on the boundary of $\mathcal{S}_4$ also lies in the boundary of $\mathcal{P}_4$. It follows from Theorem 5.3 that a sequence $\mathfrak{f} = (f^{(4)}, f^{(5)}, \ldots)$ in the boundary of $\mathcal{S}_4$ that is not in the boundary of $\mathcal{P}_4$ would have to be a form as considered in Lemma 7.5. But combining Lemmas 7.5 and 7.2 we find that $\mathfrak{f} \in \partial\mathcal{S}_4$ implies $\mathfrak{f} \in \partial\mathcal{P}_4$, and we conclude that $\mathcal{S}_4 = \mathcal{P}_4$.

Conclusion and Open Questions
Besides Conjecture 1 there is another important question left open by our work. Corollary 7.3 gave a description of the asymptotic cone of symmetric sums of squares in terms of the squares involved. In this description of the limit not all semi-invariant polynomials were necessary. It is natural to investigate the situation in arbitrary degree as well:

Question 1 Let $f \in \mathcal{S}_{2d}$. Which semi-invariant polynomials are necessary for a description of $f$ as a sum of squares?
The general setup of our work focused on the case of fixed degree. Examples like the difference of the geometric and arithmetic means show, however, that it would be very interesting to also understand the situation where the degree is not fixed.

Question 2
What can be said about the quantitative relationship between the cones S n,2d and P S n,2d in asymptotic regimes other than fixed degree 2d?