Symmetric nonnegative forms and sums of squares

We study symmetric nonnegative forms and their relationship with symmetric sums of squares. For a fixed number of variables $n$ and degree $2d$, symmetric nonnegative forms and symmetric sums of squares form closed, convex cones in the vector space of $n$-variate symmetric forms of degree $2d$. Using representation theory of the symmetric group we characterize both cones in a uniform way. Further, we investigate the asymptotic behavior when the degree $2d$ is fixed and the number of variables $n$ grows. Here, we show that, in sharp contrast to the general case, the difference between symmetric nonnegative forms and sums of squares does not grow arbitrarily large for any fixed degree $2d$. We consider the case of symmetric quartic forms in more detail and give a complete characterization of quartic symmetric sums of squares. Furthermore, we show that in degree $4$ the cones of nonnegative symmetric forms and symmetric sums of squares approach the same limit, thus these two cones asymptotically become closer as the number of variables grows. We conjecture that this is true in arbitrary degree $2d$.


Introduction
Throughout the paper let $\mathbb{R}[X_1,\ldots,X_n]$ denote the ring of polynomials in $n$ real variables and $H_{n,k}$ the set of homogeneous polynomials (forms) of degree $k$ in $\mathbb{R}[X_1,\ldots,X_n]$. Certifying that a form $f \in H_{n,2d}$ assumes only nonnegative values is one of the fundamental questions of real algebra. One such possible certificate is a decomposition of $f$ as a sum of squares, i.e., one finds forms $p_1,\ldots,p_m \in H_{n,d}$ such that $f = p_1^2 + \ldots + p_m^2$. In 1888 Hilbert [16] gave a beautiful proof showing that in general not all nonnegative forms can be written as a sum of squares. In fact, he showed that the sum of squares property only characterizes nonnegativity in the cases of binary forms, of quadratic forms, and of ternary quartics. In all other cases there exist forms that are non-negative but do not allow a decomposition as a sum of squares. Despite its elegance, Hilbert's proof was not constructive. A constructive approach to Hilbert's proof appeared in an article by Terpstra [37] in 1939, but the first explicit example was found by Motzkin in 1965 [22] and an explicit example based on Hilbert's method was constructed by Robinson in 1969 [29]. We refer the interested reader to [33,24] for more background on this topic.
The sum of squares decomposition of nonnegative polynomials has been the cornerstone of recent developments in polynomial optimization. Following ideas of Lasserre and Parrilo, polynomial optimization problems, i.e. the task of finding $f^* = \min f(x)$ for a polynomial $f$, can be relaxed and translated into semidefinite optimization problems. If $f - f^*$ can be written as a sum of squares, these semidefinite relaxations are in fact exact. Hence a better understanding of the difference between sums of squares and nonnegative polynomials is highly desirable.
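To make the notion of a sum of squares certificate concrete, here is a small symbolic check of a classical textbook example (the example and the code are our own illustration, not taken from the paper; standard sympy is assumed):

```python
import sympy as sp

# Sketch (our own illustration): verifying an explicit sum of squares certificate.
# The quartic f below is a standard textbook example; the two squares on the
# right-hand side come from a positive semidefinite Gram matrix for f.
x, y = sp.symbols('x y')

f = 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4
sos = sp.Rational(1, 2)*(2*x**2 - 3*y**2 + x*y)**2 + sp.Rational(1, 2)*(y**2 + 3*x*y)**2

print(sp.expand(f - sos) == 0)  # True: f is a sum of squares, hence nonnegative
```

Finding such a certificate in general amounts to finding a positive semidefinite Gram matrix for $f$, which is exactly the semidefinite feasibility problem behind the relaxations mentioned above.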
We study the case of forms in $n$ variables of degree $2d$ that are symmetric, i.e., invariant under the action of the symmetric group $S_n$ that permutes the variables. Let $\mathbb{R}[X_1,\ldots,X_n]^S$ denote the ring of symmetric polynomials and $H^S_{n,2d}$ the real vector space of symmetric forms of degree $2d$ in $n$ variables. Let $\Sigma^S_{n,2d}$ denote the cone of forms in $H^S_{n,2d}$ that can be decomposed as sums of squares and $P^S_{n,2d}$ the cone of non-negative symmetric forms. Choi and Lam [7] exhibited a symmetric form of degree 4 in 4 variables that is non-negative but cannot be written as a sum of squares. Thus one can conclude that $\Sigma^S_{4,4} \neq P^S_{4,4}$, and therefore even in the case of symmetric polynomials the sum of squares property already fails to characterize nonnegativity in the first case covered by Hilbert's classical result. These results have recently been extended by Goel, Kuhlmann and Reznick [13] into a full characterization of the equality cases between $\Sigma^S_{n,2d}$ and $P^S_{n,2d}$. Unfortunately, there are no other interesting cases of equality beyond those covered by Hilbert's Theorem.
The case of even symmetric forms has also received some attention. Choi, Lam and Reznick [8] fully described the cones of even symmetric sextics in any number of variables, and showed that under some normalization these cones have the same limit as the number of variables grows. Harris [15] showed that every non-negative even symmetric ternary octic is a sum of squares, providing a new interesting case of equality between nonnegative polynomials and sums of squares. Goel, Kuhlmann and Reznick [14] showed that there are no other interesting cases of equality beyond Harris' and Hilbert's results for even symmetric forms.
In addition to the qualitative statement of Hilbert's characterization, a quantitative understanding of the gap between sums of squares and nonnegative forms has been studied by several authors. In particular, in [3] the first author added to the work of Hilbert by showing that the gap between sums of squares and nonnegative forms of fixed degree grows infinitely large with the number of variables if the degree is at least 4. This result has recently been refined by Ergur to the multihomogeneous case [10]. In this article we study the relationship between symmetric sums of squares and symmetric nonnegative forms. In particular, we are interested in the asymptotic behavior of the cones, which we can realize for example through the symmetric mean inequalities naturally associated to a symmetric polynomial. The study of such symmetric inequalities has a long history (see for example [9]) and it is an interesting question to ask when one can use sum of squares certificates to verify such an inequality. For instance, Hurwitz [17] showed that a sum of squares decomposition can be used to verify the arithmetic mean-geometric mean inequality. Recently, Frenkel and Horváth [11] studied the connection of Minkowski's inequality to sums of squares. Our results imply that a positive fraction of such inequalities come from symmetric sums of squares. Furthermore, in degree 4 we show that a family of symmetric power mean inequalities is valid for all $n$ if and only if each member can be written as a sum of squares. We conjecture that this holds for all degrees.

Overview and main results
2.1. Symmetric sums of squares. Symmetric polynomials are classical objects in algebra. In order to represent symmetric polynomials, we will make use of the power sum polynomials.
For $i \in \mathbb{N}$ we define $P^{(n)}_i := X_1^i + \cdots + X_n^i$ to be the $i$-th power sum polynomial. We will also work with the power means $p^{(n)}_i := \frac{1}{n} P^{(n)}_i$. It is known (for example [20, 2.11]) that $\mathbb{R}[X_1,\ldots,X_n]^S$ is freely generated by the algebraically independent polynomials $P^{(n)}_1,\ldots,P^{(n)}_n$. Hence it follows that every symmetric polynomial $f \in \mathbb{R}[X_1,\ldots,X_n]^S$ of degree $2d \leq n$ can uniquely be written as $f = g\bigl(P^{(n)}_1,\ldots,P^{(n)}_{2d}\bigr)$ for some polynomial $g \in \mathbb{R}[z_1,\ldots,z_{2d}]$, with $\deg_w g = \deg f$, where $\deg_w$ denotes the weighted degree corresponding to the weight $(1,\ldots,2d)$. Recall that for a natural number $k$ a partition $\lambda$ of $k$ (written $\lambda \vdash k$) is a sequence of weakly decreasing positive integers $\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_l)$ with $\sum_{i=1}^{l} \lambda_i = k$. For $n \geq k$ and a partition $\lambda = (\lambda_1,\ldots,\lambda_l) \vdash k$ we associate the polynomials $P^{(n)}_\lambda := P^{(n)}_{\lambda_1}\cdots P^{(n)}_{\lambda_l}$ and $p^{(n)}_\lambda := p^{(n)}_{\lambda_1}\cdots p^{(n)}_{\lambda_l}$. It now follows that for every $n \geq k$ the families of polynomials $\{P^{(n)}_\lambda \,:\, \lambda \vdash k\}$ as well as $\{p^{(n)}_\lambda \,:\, \lambda \vdash k\}$ form bases of $H^S_{n,k}$. In particular, if $n \geq k$ then the dimension of $H^S_{n,k}$ is equal to $\pi(k)$, the number of partitions of $k$. Thus the dimension of $H^S_{n,k}$ is constant for fixed $k$ and all sufficiently large $n$.
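As a quick sanity check of the last statement, the following small computation (our own illustration, not part of the paper; standard sympy assumed) verifies that the $\pi(4) = 5$ products $p^{(n)}_\lambda$ with $\lambda \vdash 4$ are linearly independent for $n = 4$:

```python
import sympy as sp

# Sketch (our own illustration): the products p_lambda of power means indexed by
# the partitions lambda of 4 are linearly independent, so they form a basis of
# the space of symmetric quartics H^S_{n,4}; here n = 4.
n = 4
X = sp.symbols(f'x1:{n+1}')

def p(i):
    """i-th power mean in n variables."""
    return sp.Rational(1, n) * sum(x**i for x in X)

partitions_of_4 = [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]
p_lam = [sp.expand(sp.prod(p(i) for i in lam)) for lam in partitions_of_4]

# coefficient matrix with respect to all degree-4 monomials occurring in the p_lambda
monoms = sorted({m for q in p_lam for m in sp.Poly(q, *X).monoms()})
M = sp.Matrix([[sp.Poly(q, *X).as_dict().get(m, 0) for m in monoms] for q in p_lam])
print(M.rank() == len(partitions_of_4))  # True: pi(4) = 5 independent basis elements
```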
Using representation theory of the symmetric group, and in particular so-called higher Specht polynomials, we are able to give a uniform representation of the cone of symmetric sums of squares of fixed degree $2d$ in terms of matrix polynomials, with coefficients that are rational functions in $n$ (see Theorem 4.15), and similarly a uniform representation of the sequence of dual cones in terms of linear matrix polynomials whose coefficients are "symmetrizations" of sums of squares in $2d$ variables. This gives us in particular a better understanding of the faces of $\Sigma^S_{n,2d}$ that are not faces of $P^S_{n,2d}$. We make these findings more concrete in the case of quartic symmetric forms, where we completely characterize the cone $\Sigma^S_{n,4}$ and its boundary. This in particular allows us to easily compute a family of symmetric sums of squares polynomials that lie on the boundary of $\Sigma^S_{n,4}$ without having a real zero, thus certifying the difference between symmetric sums of squares and symmetric non-negative forms (see Theorem 5.5).

2.2. Asymptotic behavior of sums of squares and nonnegative forms. Our characterization allows us to study the asymptotic relationship between symmetric sums of squares and symmetric nonnegative forms of fixed degree in a growing number of variables. Even though the vector spaces $H^S_{n,2d}$ have the same dimension $\pi(2d)$ for all $n \geq 2d$, there is no canonical way to identify the vector spaces $H^S_{n,2d}$ for different $n$. In fact there are several natural ways to define transition maps identifying vector spaces of symmetric forms in different numbers of variables (see for example [2]), and different transition maps will lead to different limits as $n$ goes to infinity. The system of vector spaces $H^S_{n,2d}$ together with transition maps will define a directed system of vector spaces, and we can define the direct limit $H^S_{\infty,2d}$ of the vector spaces $H^S_{n,2d}$ [30, Section 7.6].
One way of defining these transitions is by symmetrization: we define the symmetrization of $f$ as $\operatorname{sym}_n(f) := \frac{1}{n!}\sum_{\sigma \in S_n} \sigma(f)$. The composition of the natural inclusion $i_{n,n+1} : H_{n,2d} \to H_{n+1,2d}$ with $\operatorname{sym}_{n+1}$ defines injective maps $\varphi_{n,n+1} : H^S_{n,2d} \to H^S_{n+1,2d}$. Therefore, we have the following. Proposition 2.3. For $n, m \in \mathbb{N}$ with $n > m$ consider the maps $\varphi_{m,n} : H^S_{m,2d} \to H^S_{n,2d}$ defined by $\varphi_{m,n}(p) = \operatorname{sym}_n(p)$.
Then the system of vector spaces $H^S_{n,2d}$ together with the maps $\varphi_{m,n}$ defines a directed system, and for $m \geq 2d$ the maps $\varphi_{m,n}$ are isomorphisms.
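The following small sketch (our own illustration, not code from the paper; standard sympy assumed) implements the map $\varphi_{n,n+1}$ for a concrete symmetric quartic, by including it in one more variable and then applying $\operatorname{sym}_{n+1}$:

```python
import sympy as sp
from itertools import permutations
from math import factorial

# Sketch (our own illustration): the transition map phi_{n, n+1}(f) = sym_{n+1}(f),
# where f is viewed as a form in n+1 variables via the natural inclusion.
def sym(f, variables):
    """Reynolds operator: average of f over all permutations of the given variables."""
    n = len(variables)
    total = sum(f.xreplace({variables[i]: variables[p[i]] for i in range(n)})
                for p in permutations(range(n)))
    return sp.expand(total / factorial(n))

n = 3
X = sp.symbols(f'x1:{n+2}')            # x1, ..., x4: variables for n and n+1
f = sum(x**2 for x in X[:n])**2        # a symmetric quartic in x1, x2, x3

phi_f = sym(f, X[:n+1])                # phi_{3,4}(f), a symmetric quartic in x1, ..., x4
swapped = phi_f.xreplace({X[0]: X[3], X[3]: X[0]})
print(sp.expand(swapped - phi_f) == 0)  # True: phi(f) is symmetric in all 4 variables
```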
We consider the direct limit $H^\varphi_{\infty,2d}$ of the directed system above. Since the maps $\varphi_{m,n}$ are isomorphisms for $m \geq 2d$, it follows that $H^\varphi_{\infty,2d}$ is also a real vector space of dimension $\pi(2d)$. Therefore we have natural isomorphisms $\varphi_n : H^\varphi_{\infty,2d} \to H^S_{n,2d}$ for $n \geq 2d$, which allow us to view the cones $\Sigma^S_{n,2d}$ and $P^S_{n,2d}$ as subsets of $H^\varphi_{\infty,2d}$. Note that we have $\varphi_{m,n}(\Sigma^S_{m,2d}) \subseteq \Sigma^S_{n,2d}$ and $\varphi_{m,n}(P^S_{m,2d}) \subseteq P^S_{n,2d}$. It follows that with the transition maps $\varphi_{m,n}$ the cones of sums of squares and the cones of nonnegative polynomials form nested increasing sequences in $H^\varphi_{\infty,2d}$. We define the corresponding limit cones of nonnegative elements and of sums of squares in $H^\varphi_{\infty,2d}$: an element $f$ belongs to the former if $\varphi_n(f) \in P^S_{n,2d}$ for all $n \geq 2d$, and to the latter if $\varphi_n(f) \in \Sigma^S_{n,2d}$ for all $n \geq 2d$. The following theorem is immediate from the above discussion.
Sums of squares of fixed degree make up a vanishingly small portion of nonnegative forms as the number of variables grows [3]. More precisely, (non-symmetric) nonnegative forms and sums of squares in $n$ variables of degree $2d$ with average $1$ on the unit sphere form compact convex sets $\bar{P}_{n,2d}$ and $\bar{\Sigma}_{n,2d}$ of dimension $D = \binom{n+2d-1}{2d} - 1$. It was shown in [3] that the ratio of volumes $\left(\frac{\operatorname{vol} \bar{\Sigma}_{n,2d}}{\operatorname{vol} \bar{P}_{n,2d}}\right)^{1/D}$ converges to $0$ for all $2d \geq 4$ as $n$ goes to infinity. The ratio of volumes is raised to the power $1/D$ to take into account the effects of large dimension on volumes, as the volume of $(1+\varepsilon)\bar{\Sigma}_{n,2d}$ is equal to $(1+\varepsilon)^D \operatorname{vol} \bar{\Sigma}_{n,2d}$.
By contrast, the cones of symmetric nonnegative forms and sums of squares of fixed degree live in the vector space $H^S_{n,2d}$, which has fixed dimension $\pi(2d)$ for a sufficiently large number of variables $n$. Therefore, to prove that asymptotically symmetric sums of squares make up a nontrivial portion of symmetric nonnegative forms (with respect to some transition maps) it suffices to show that both limits are full-dimensional in $H^\varphi_{\infty,2d} \cong \mathbb{R}^{\pi(2d)}$, which is done in Theorem 2.4.
Besides the direct limit we also study symmetric power mean inequalities. We can express a symmetric form $f$ in $H^S_{n,2d}$ in the power mean basis as $f = \sum_{\lambda \vdash 2d} c_\lambda\, p^{(n)}_\lambda$. Using the power mean basis we can define transition maps $\rho_{m,n}$ by identifying $p^{(m)}_\lambda$ with $p^{(n)}_\lambda$ for every $\lambda \vdash 2d$. As before the system of vector spaces $H^S_{n,2d}$ together with the maps $\rho_{m,n}$ defines a directed system, and for $m \geq 2d$ the maps $\rho_{m,n}$ are isomorphisms. We consider the direct limit $H^\rho_{\infty,2d}$. Since the maps $\rho_{m,n}$ are isomorphisms for $m \geq 2d$, it follows that $H^\rho_{\infty,2d}$ is again a real vector space of dimension $\pi(2d)$. The natural isomorphisms $\rho_n : H^\rho_{\infty,2d} \to H^S_{n,2d}$ for $n \geq 2d$ allow us to view the cones $\Sigma^S_{n,2d}$ and $P^S_{n,2d}$ as subsets of $H^\rho_{\infty,2d}$. We will denote these images by $\Sigma^\rho_{n,2d}$ and $P^\rho_{n,2d}$ and consider the corresponding limit cones (Definition 2.5).
The sequences $P^\rho_{n,2d}$ and $\Sigma^\rho_{n,2d}$ are not nested in general. Let $x = (x_1,\ldots,x_n)$ be a point in $\mathbb{R}^n$ and let $\tilde{x}$ be the point in $\mathbb{R}^{k\cdot n}$ with each $x_i$ repeated $k$ times. Then $p^{(k\cdot n)}_i(\tilde{x}) = p^{(n)}_i(x)$ for every $i$. It follows that $f^{(k\cdot n)} \in P^\rho_{k\cdot n,2d} \Rightarrow f^{(n)} \in P^\rho_{n,2d}$, and hence we get the following.
Proposition 2.6. Consider the cones $P^\rho_{n,2d}$ as convex subsets of $\mathbb{R}^{\pi(2d)}$ using the coefficients $c_\lambda$ of $p^{(n)}_\lambda$. Then for every $n \geq 2d$ and $k \in \mathbb{N}$ we have $P^\rho_{k\cdot n,2d} \subseteq P^\rho_{n,2d} \subset H^\rho_{\infty,2d} \cong \mathbb{R}^{\pi(2d)}$.
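The observation behind Proposition 2.6 is that repeating every coordinate leaves all power means unchanged; this is easy to check symbolically (our own sketch, not code from the paper; standard sympy assumed):

```python
import sympy as sp

# Sketch (our own check): repeating every coordinate k times leaves all power
# means unchanged, which is the observation behind Proposition 2.6.
n, k = 3, 4
x = sp.symbols(f'x1:{n+1}')

def power_mean(point, i):
    return sp.Rational(1, len(point)) * sum(c**i for c in point)

point = list(x)            # (x1, x2, x3)
repeated = list(x) * k     # each coordinate repeated k times (k*n coordinates in total)

print(all(sp.simplify(power_mean(point, i) - power_mean(repeated, i)) == 0
          for i in range(1, 5)))
# True: p_i^{(kn)}(x tilde) = p_i^{(n)}(x) for i = 1, ..., 4
```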
It is not directly clear from Proposition 2.6 that the sequences $P^\rho_{n,2d}$ and $\Sigma^\rho_{n,2d}$ have limits, which we show separately: (a) the cones $S_{2d}$ and $P_{2d}$ are full-dimensional cones.
Although the cone of symmetric nonnegative quartics is strictly bigger than the cone of symmetric quartic sums of squares for any number of variables $n \geq 4$, we show that in the limit the two cones coincide (Theorem 2.8). In particular, this result applies in the situation of the power mean inequalities studied in [23], and hence it is possible to verify any such inequality using sums of squares. We conjecture that this happens in arbitrary degree $2d$.
2.3. Structure of the article and guide for the reader.
This article is structured as follows: We provide a characterization of symmetric non-negative forms and the limit cone in Section 3. Section 4 provides a detailed study of symmetric sums of squares. To this end we present the general framework of how to use representation theory to study invariant sums of squares in Subsection 4.1. In Subsection 4.2 we outline the basic notions of the representation theory of the symmetric group. These results are then used in Subsection 4.3 to represent the cone of symmetric sums of squares (without restrictions on the degree) in terms of matrix polynomials in Theorems 4.11 and 4.12. The subsequent Subsection 4.4 then discusses how restricting the degree allows for a uniform description of the cones $\Sigma^S_{n,2d}$ in terms of the power mean basis $p^{(n)}_\lambda$ (Theorem 4.15). The final subsection of Section 4 discusses some results on the dual cone which are needed in the sequel. The subsequent Section 5 makes these results more concrete, as we give a description of the cone of symmetric quartic sums of squares (Theorem 5.1). Furthermore, we describe the elements of the boundary of $\Sigma^S_{n,4}$ which are strictly positive in Theorem 5.3 and give an explicit example of such a polynomial for every $n \geq 4$ in Example 5.4. From this example it follows in particular that besides the cases where Hilbert showed the equality of sums of squares and non-negative forms there always exist symmetric positive definite forms which are not sums of squares (see Theorem 5.5). In Section 6 we explore the two notions of limits and prove Theorem 2.8. We also discuss the connection with the power mean inequalities. These power mean inequalities are then studied in more detail in the final Section 7, where we show in particular that all valid power mean inequalities of degree 4 are sums of squares (Theorem 2.9).
The order of sections was chosen to present the more general statements in Sections 3, 4, and 6 and then apply them in the quartic case in Sections 5 and 7. Depending on the reader's preferences, one can also read Section 5 before diving into Section 4, and similarly Section 7 before Section 6, taking the necessary results from the previous sections for granted.

Symmetric PSD forms
We begin by characterizing the cone $P_{2d}$. One key result needed to describe the non-negative symmetric forms is the so-called half degree principle (see [38,26,27]): For a natural number $k \in \mathbb{N}$ we define $A_k$ to be the set of all points in $\mathbb{R}^n$ with at most $k$ distinct components, i.e., $A_k := \{x \in \mathbb{R}^n \,:\, \#\{x_1,\ldots,x_n\} \leq k\}$. The half degree principle says that a symmetric form of degree $2d > 2$ is non-negative if and only if it is non-negative on $A_d$. Remark 3.2. By considering $f - \epsilon\,(X_1^2 + \cdots + X_n^2)^d$ for a sufficiently small $\epsilon > 0$ we see that we can also replace non-negative by positive in the above Theorem, thus characterizing strict positivity of symmetric forms.
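For symmetric quartics ($d = 2$) the principle says that it suffices to check points with at most two distinct coordinates. The following sketch (our own illustration, not from the paper; standard sympy assumed) carries out this check for a Cauchy-Schwarz-type quartic:

```python
import sympy as sp

# Sketch (our own illustration): for a symmetric quartic the half-degree principle
# reduces the nonnegativity check to points with at most 2 distinct coordinates.
# We restrict the quartic  f = n * sum_i x_i^4 - (sum_i x_i^2)^2  (nonnegative by
# Cauchy-Schwarz) to points with k coordinates equal to s and n - k equal to t.
n = 6
X = sp.symbols(f'x1:{n+1}')
s, t = sp.symbols('s t', real=True)

f = n * sum(x**4 for x in X) - (sum(x**2 for x in X))**2

for k in range(n + 1):
    point = dict(zip(X, [s] * k + [t] * (n - k)))
    restriction = sp.factor(sp.expand(f.subs(point)))
    # for 0 < k < n this prints k*(n - k)*(s - t)**2*(s + t)**2, visibly nonnegative
    print(k, restriction)
```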
A non-increasing sequence of $k$ natural numbers $\vartheta := (\vartheta_1,\ldots,\vartheta_k)$ such that $\vartheta_1 + \ldots + \vartheta_k = n$ is called a $k$-partition of $n$ (written $\vartheta \vdash_k n$). Given a symmetric form $f \in H^S_{n,2d}$ and a $k$-partition $\vartheta \vdash_k n$, we write $f_\vartheta(s_1,\ldots,s_k)$ for the polynomial obtained by evaluating $f$ at the point whose coordinate $s_i$ is repeated $\vartheta_i$ times. From now on assume that $2d > 2$. Then the half-degree principle implies that nonnegativity of $f = \sum_{\lambda \vdash 2d} c_\lambda p^{(n)}_\lambda$ is equivalent to nonnegativity of $f_\vartheta$ for all $\vartheta \vdash_d n$, since the polynomials $f_\vartheta$ give the values of $f$ on all points with at most $d$ distinct components.
We note that for all $i \in \mathbb{N}$ we have $p^{(n)}_i(\underbrace{t_1,\ldots,t_1}_{\vartheta_1},\ldots,\underbrace{t_d,\ldots,t_d}_{\vartheta_d}) = \tfrac{\vartheta_1}{n}t_1^i + \cdots + \tfrac{\vartheta_d}{n}t_d^i$. For a partition $\lambda = (\lambda_1,\ldots,\lambda_l) \vdash 2d$ we define a $2d$-variate form $\Phi_\lambda$ in the variables $s_1,\ldots,s_d$ and $t_1,\ldots,t_d$ by $\Phi_\lambda(s,t) := \prod_{j=1}^{l}\bigl(s_1 t_1^{\lambda_j} + \cdots + s_d t_d^{\lambda_j}\bigr)$ and use it to associate to any form $f = \sum_{\lambda \vdash 2d} c_\lambda p^{(n)}_\lambda \in H^S_{n,2d}$ the form $\Phi_f(s,t) := \sum_{\lambda \vdash 2d} c_\lambda \Phi_\lambda(s,t)$. We define the set $W_n := \bigl\{\bigl(\tfrac{\vartheta_1}{n},\ldots,\tfrac{\vartheta_d}{n}\bigr) \,:\, \vartheta \text{ a partition of } n \text{ into at most } d \text{ parts, padded with zeros}\bigr\}$. It follows from the arguments above that $f \in H^S_{n,2d}$ is non-negative if and only if $\Phi_f(w,t)$ is a non-negative form in $t$ for all $w \in W_n$. This is summarized in Corollary 3.3. This result enables us to characterize the elements of $P_{2d}$. We expand the sets $W_n$ to the standard simplex $\Delta := \{\alpha \in \mathbb{R}^d \,:\, \alpha_i \geq 0,\ \alpha_1 + \cdots + \alpha_d = 1\}$ in $\mathbb{R}^d$. Then we have the following theorem characterizing $P_{2d}$: an element $f$ with coordinates $(c_\lambda)_{\lambda \vdash 2d}$ in the power mean basis belongs to $P_{2d}$ if and only if $\Phi_f(\alpha,t) \geq 0$ for all $\alpha \in \Delta$ and all $t \in \mathbb{R}^d$.
Proof. Suppose first that $\Phi_f(\alpha,t) \geq 0$ for all $\alpha \in \Delta$ and all $t \in \mathbb{R}^d$, and write $f^{(n)} := \sum_{\lambda \vdash 2d} c_\lambda p^{(n)}_\lambda$. Since $W_n \subset \Delta$ for all $n$, we see from Corollary 3.3 that $f^{(n)}$ is a non-negative form for all $n$ and thus $f \in P_{2d}$.
On the other hand, suppose there exists $\alpha_0 \in \Delta$ such that $\Phi_f(\alpha_0,t) < 0$ for some $t \in \mathbb{R}^d$. Then we can find a rational point $\alpha \in \Delta$ with all positive coordinates and sufficiently close to $\alpha_0$ so that $\Phi_f(\alpha,t) < 0$.
Let $h$ be the least common multiple of the denominators of $\alpha$. Then we have $\alpha \in W_{ah}$ for all $a \in \mathbb{N}$. Choose $a$ such that $ah \geq 2d$. Then $f^{(ah)}$ is negative at the corresponding point and we have $f \notin P_{2d}$.

Symmetric sums of squares
We now consider symmetric sums of squares. It was already observed in [12] that invariance under a group action allows one to restrict to sum of squares decompositions in which the underlying squares have a special structure. First, we explain the general approach, which uses representation theory and can be used for other groups as well. Our presentation follows the ideas of [12], which we present in a slightly different way. The interested reader is advised to consult [12] for more details.
4.1. Invariant Sums of Squares. Let $G$ be a finite group acting linearly on $\mathbb{R}^n$. As $G$ acts linearly on $\mathbb{R}^n$, the $\mathbb{R}$-vector space $\mathbb{R}[X]$ can also be viewed as a $G$-module, and by Maschke's theorem (the reader may consult for example [34] for basics in linear representation theory) there exists a decomposition of the form $\mathbb{R}[X] = V^{(1)} \oplus \cdots \oplus V^{(h)}$, where each $V^{(j)}$ decomposes further into irreducible components and the $V^{(j)}$ are the isotypic components, i.e., the direct sums of isomorphic irreducible components. The component with respect to the trivial irreducible representation is the invariant ring $\mathbb{R}[X]^G$. The elements of the other isotypic components are called semi-invariants. It is classically known that each isotypic component is a finitely generated $\mathbb{R}[X]^G$-module (see [36, Theorem 1.3]). To any element $f \in H_{n,d}$ we can associate a symmetrization, by which we mean its image under the linear map $R_G(f) := \frac{1}{|G|}\sum_{\sigma \in G} \sigma(f)$, which is called the Reynolds operator of $G$. In the case of $G = S_n$ we say that $R_{S_n}(f)$ is a symmetrization of $f$ and we write $\operatorname{sym}(f)$ in this case.
For a set of polynomials $f_1,\ldots,f_l$ we will write $\mathbb{R}\{f_1,\ldots,f_l\}^2$ to refer to the sums of squares of elements in the linear span of the polynomials $f_1,\ldots,f_l$. It has already been observed by Gatermann and Parrilo [12] that invariant sums of squares can be written as sums of squares of semi-invariants using Schur's Lemma. However, a closer inspection of the situation allows in many cases (as for example in the case of $S_n$) a finer analysis of the decomposition into sums of squares. Consider a set of forms $\{f_{1,1},\ldots,f_{1,\eta_1}, f_{2,1},\ldots,f_{h,\eta_h}\}$ such that for fixed $j$ the forms $f_{j,i}$ generate the irreducible components of $V^{(j)}$. Further assume that they are chosen in such a way that for each $j$ and each pair $(l,k)$ there exists a $G$-isomorphism $\rho^{(j)}_{l,k} : V^{(j)} \to V^{(j)}$ which maps $f_{j,l}$ to $f_{j,k}$. Now for every $j$ we consider the set $\{f_{j,1},\ldots,f_{j,\eta_j}\}$, which contains only one polynomial per irreducible module. However, since every irreducible module is generated by the $G$-orbit of only one element, every such set uniquely describes the chosen decomposition. We call such a set a symmetry basis and show that invariant sums of squares are in fact symmetrizations of sums of squares of a symmetry basis. The following theorem, which we state in a slightly more general setup, highlights the use of a symmetry basis.
Theorem 4.2. Let $G$ be a finite group and assume that all real irreducible representations $V \subset H_{n,d}$ are also irreducible over their complexification. Let $p$ be a form of degree $2d$ which is invariant with respect to $G$. If $p$ is a sum of squares, then $p$ can be written as a sum of symmetrizations $R_G(\sigma_j)$, where each $\sigma_j$ is a sum of squares of elements in the linear span of $\{f_{j,1},\ldots,f_{j,\eta_j}\}$. The main tool for the proof is Schur's Lemma, and we remark that a dual version of this Theorem can be found in [28, Theorem 3.4] and [25].
Proof. Let $p \in H_{n,2d}$ be a $G$-invariant sum of squares. Then there exists a symmetric positive semidefinite bilinear form $B$ which is a Gram matrix for $p$, i.e., for every $x \in \mathbb{R}^n$ we can write $p(x) = B(X^d, X^d)$, where $X^d$ stands for the $d$-th power of $x$ in the symmetric algebra of $\mathbb{R}^n$. Since $p$ is $G$-invariant, we have $p = R_G(p)$ and by linearity we may assume that $B$ is a $G$-invariant bilinear form. Now decompose $H_{n,d}$ as in (4.1) and consider the restriction $B_{ij}$ of $B$ to $V^{(i)} \times V^{(j)}$. For every $v \in V^{(i)}$ the quadratic form $B_{ij}$ defines a linear map $\phi_v : V^{(j)} \to \mathbb{R}$ via $\phi_v(w) := B_{ij}(v,w)$, and so $B_{ij}$ naturally can be seen as an element of $\operatorname{Hom}_G(V^{(i)*}, V^{(j)})$. Since real representations are self-dual, for $i \neq j$ the modules $V^{(i)*}$ and $V^{(j)}$ are not isomorphic and thus by Schur's Lemma we find that $B_{ij}(v,w) = 0$ for all $v \in V^{(i)}$ and $w \in V^{(j)}$. So the isotypic components are orthogonal with respect to $B$ and hence it suffices to look at the restrictions of $B$ to pairs of irreducible components $V_{j,k_1}, V_{j,k_2}$ inside one isotypic component. By Schur's Lemma we have $\dim \operatorname{Hom}_G(V_{j,k_1}, V_{j,k_2}) = 1$, and we can conclude that this map is unique up to scalar multiplication. Therefore it can be represented in the form $\psi^{(j)}_{k_1,k_2} = c_{k_1,k_2} \cdot \rho_{k_1,k_2}$, where $\rho_{k_1,k_2}$ is the $G$-isomorphism with $\rho_{k_1,k_2}(f_{j,k_1}) = f_{j,k_2}$ as above. It therefore follows that $B(f_{j,k_1,u}, f_{j,k_2,v}) = c_{k_1,k_2}\,\delta_{u,v}$, where $\delta_{u,v}$ denotes the Kronecker delta and $\{f_{j,k,l}\}_l$ denotes a basis of $V_{j,k}$ chosen compatibly via the isomorphisms $\rho_{k_1,k_2}$. By considering the matrix of $B$ with respect to the basis $f_{j,k,l}$ of $H_{n,d}$ we see that $p$ has the desired decomposition.

In some situations it is convenient to formulate the above Theorem 4.2 in terms of matrix polynomials, i.e., matrices with polynomial entries. Given two $k \times k$ symmetric matrices $A$ and $B$, define their inner product as $\langle A, B\rangle = \operatorname{trace}(AB)$. Define a block-diagonal symmetric matrix $A$ with $h$ blocks $A^{(1)},\ldots,A^{(h)}$, with the entries of each block given by $A^{(j)}(k_1,k_2) := R_G(f_{j,k_1}\, f_{j,k_2})$. Then Theorem 4.2 is equivalent to the following statement: a $G$-invariant form $p$ of degree $2d$ is a sum of squares if and only if $p = \sum_{j=1}^{h} \langle A^{(j)}, B^{(j)}\rangle$ for some positive semidefinite matrices $B^{(1)},\ldots,B^{(h)}$. We now aim to apply Theorem 4.2 to a symmetric form $p \in H^S_{n,2d}$. In order to do this we need to identify an explicit representative in every irreducible $S_n$-submodule of $H_{n,d}$. We first recall some useful facts from the representation theory of $S_n$. The irreducible representations in this case are the so-called Specht modules, which we will define in the following section. We refer to [18,31] for more details.

4.2. Specht Modules as Polynomials.
Let $\lambda = (\lambda_1,\lambda_2,\ldots,\lambda_l) \vdash n$ be a partition of $n$. A Young tableau of shape $\lambda$ consists of $l$ rows, with $\lambda_i$ entries in the $i$-th row. Each entry is an element in $\{1,\ldots,n\}$, and each of these numbers occurs exactly once. A standard Young tableau is a Young tableau in which all rows and columns are increasing. An element $\sigma \in S_n$ acts on a Young tableau by replacing each entry by its image under $\sigma$. Two Young tableaux $T_1$ and $T_2$ are called row-equivalent if the corresponding rows of the two tableaux contain the same numbers. The classes of row-equivalent Young tableaux are called tabloids, and the equivalence class of a tableau $T$ is denoted by $\{T\}$. The stabilizer of a row-equivalence class is called the row-stabilizer, denoted by $\operatorname{RStab}_T$. If $R_1,\ldots,R_l$ are the rows of a given Young tableau $T$, this group can be written as $\operatorname{RStab}_T = S_{R_1} \times S_{R_2} \times \cdots \times S_{R_l}$, where $S_{R_i}$ is the symmetric group on the elements of row $i$. The action of $S_n$ on the equivalence classes of row-equivalent Young tableaux gives rise to the permutation module $M^\lambda$ corresponding to $\lambda$, which is the $S_n$-module spanned by the $\lambda$-tabloids. Let $T$ be a Young tableau for $\lambda \vdash n$, and let $C_i$ be the entries in the $i$-th column of $T$. The group $\operatorname{CStab}_T := S_{C_1} \times \cdots \times S_{C_\nu}$, where $S_{C_i}$ is the symmetric group on the elements of column $i$, is called the column stabilizer of $T$. The irreducible representations of the symmetric group $S_n$ are in 1-1 correspondence with the partitions of $n$, and they are given by the Specht modules, as explained below. For $\lambda \vdash n$, the polytabloid associated with $T$ is defined by $e_T := \sum_{\sigma \in \operatorname{CStab}_T} \operatorname{sgn}(\sigma)\, \{\sigma T\}$. Then for a partition $\lambda \vdash n$, the Specht module $S^\lambda$ is the submodule of the permutation module $M^\lambda$ spanned by the polytabloids $e_T$. The dimension of $S^\lambda$ is given by the number of standard Young tableaux for $\lambda \vdash n$, which we will denote by $s_\lambda$.
A classical construction of Specht realizes the Specht modules as submodules of the polynomial ring (see [35]): For $\lambda \vdash n$ let $T_\lambda$ be a standard Young tableau of shape $\lambda$ and $C_1,\ldots,C_\nu$ be the columns of $T_\lambda$. To $T_\lambda$ we associate the monomial $X^{T_\lambda} := \prod_{i=1}^{n} X_i^{m(i)-1}$, where $m(i)$ is the index of the row of $T_\lambda$ containing $i$. Note that for any $\lambda$-tabloid $\{T_\lambda\}$ the monomial $X^{T_\lambda}$ is well defined, and the mapping $\{T_\lambda\} \mapsto X^{T_\lambda}$ is an $S_n$-isomorphism. For any column $C_i$ of $T_\lambda$ we denote by $C_i(j)$ the element in the $j$-th row and we associate to it the Vandermonde determinant $\Delta_{C_i} := \prod_{j < j'}\bigl(X_{C_i(j')} - X_{C_i(j)}\bigr)$. The Specht polynomial $\operatorname{sp}_{T_\lambda}$ associated to $T_\lambda$ is defined as $\operatorname{sp}_{T_\lambda} := \Delta_{C_1}\cdots\Delta_{C_\nu} = \sum_{\sigma \in \operatorname{CStab}_{T_\lambda}} \operatorname{sgn}(\sigma)\, \sigma\bigl(X^{T_\lambda}\bigr)$, where $\operatorname{CStab}_{T_\lambda}$ is the column stabilizer of $T_\lambda$.
By the $S_n$-isomorphism $\{T_\lambda\} \mapsto X^{T_\lambda}$, $S_n$ acts on $\operatorname{sp}_{T_\lambda}$ in the same way as on the polytabloid $e_{T_\lambda}$. If $T_{\lambda,1},\ldots,T_{\lambda,k}$ denote all standard Young tableaux associated to $\lambda$, then the polynomials $\operatorname{sp}_{T_{\lambda,1}},\ldots,\operatorname{sp}_{T_{\lambda,k}}$ are called the Specht polynomials associated to $\lambda$. We then have the following Proposition [35]: Proposition 4.5. The Specht polynomials $\operatorname{sp}_{T_{\lambda,1}},\ldots,\operatorname{sp}_{T_{\lambda,k}}$ span an $S_n$-submodule of $\mathbb{R}[X]$ which is isomorphic to the Specht module $S^\lambda$.
The Specht polynomials identify a submodule of $\mathbb{R}[X]$ isomorphic to $S^\lambda$. In order to get a decomposition of the entire ring $\mathbb{R}[X]$ we will use a generalization of this construction, which is described in the next section.
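As a concrete illustration of this construction (our own sketch, not code from the paper; standard sympy assumed), the following snippet builds the Specht polynomial of a tableau of shape $(2,2)$ as a product of column Vandermonde determinants and verifies that its $S_4$-orbit spans a module of dimension $s_{(2,2)} = 2$:

```python
import sympy as sp
from itertools import permutations

# Sketch (our own illustration): the Specht polynomial of a Young tableau as a
# product of the Vandermonde determinants of its columns, and the S_n-module it generates.
n = 4
X = sp.symbols(f'x1:{n+1}')

def specht_polynomial(columns):
    """columns: the columns of a Young tableau, each a list of entries in {1, ..., n}."""
    poly = sp.Integer(1)
    for col in columns:
        # Vandermonde determinant in the variables indexed by the column entries
        for j in range(len(col)):
            for k in range(j + 1, len(col)):
                poly *= (X[col[k] - 1] - X[col[j] - 1])
    return sp.expand(poly)

# tableau of shape (2, 2) with rows (1, 2) and (3, 4), i.e. columns (1, 3) and (2, 4)
sp_T = specht_polynomial([[1, 3], [2, 4]])
print(sp_T)  # expansion of (x3 - x1)*(x4 - x2)

# the span of the S_4-orbit of sp_T is the Specht module S^(2,2); its dimension
# equals the number of standard Young tableaux of shape (2,2), namely 2
orbit = [sp.expand(sp_T.xreplace({X[i]: X[p[i]] for i in range(n)}))
         for p in permutations(range(n))]
monoms = sorted({m for q in orbit for m in sp.Poly(q, *X).monoms()})
M = sp.Matrix([[sp.Poly(q, *X).as_dict().get(m, 0) for m in monoms] for q in orbit])
print(M.rank())  # 2 = dim S^(2,2)
```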

4.3. Higher Specht polynomials and the decomposition of $\mathbb{R}[X]$.
In what follows we will need to understand the decomposition of the polynomial ring $\mathbb{R}[X]$ and of the $S_n$-module $H_{n,d}$ in terms of $S_n$-irreducible representations. Notice that such a decomposition is not unique. It is classically known that the ring $\mathbb{R}[X]$ is a free module of rank $n!$ over the ring of symmetric polynomials. Similarly, every isotypic component is a free $\mathbb{R}[X]^{S_n}$-module. Therefore, one general strategy in order to get a symmetry basis of $\mathbb{R}[X]$ consists in building a free module basis for $\mathbb{R}[X]$ over $\mathbb{R}[X]^{S_n}$ which additionally is symmetry adapted, i.e., which respects a decomposition into irreducible $S_n$-modules. One such construction, which generalizes Specht's original construction presented above, is due to Ariki, Terasoma, and Yamada [1].
(1) A finite sequence $w = (w_1,\ldots,w_n)$ of non-negative integers is called a word of length $n$. A word $w$ of length $n$ is called a permutation if the entries of $w$ are exactly the integers $\{1,\ldots,n\}$.
(2) Given a word $w$ and a permutation $u$ we define the monomial associated to the pair as $X^w_u := X_{u_1}^{w_1}\cdots X_{u_n}^{w_n}$. (3) Given a permutation $w$, we associate to $w$ its index, denoted by $i(w)$, by constructing the following word of length $n$. The word $i(w)$ contains $0$ exactly at the position where $1$ occurs in $w$, and the other entries are defined recursively by the following rule: suppose that the entry of $i(w)$ at the position of $k$ in $w$ is $c$; then the entry of $i(w)$ at the position of $k+1$ is $c$ if $k+1$ lies to the right of $k$ in $w$, and it is $c+1$ if $k+1$ lies to the left of $k$. Given a pair $(T,V)$ of standard $\lambda$-tableaux, reading $T$ and $V$ as words $w(T)$ and $w(V)$ we obtain the monomial $X^{i(w(T))}_{w(V)}$. Definition 4.8. Let $\lambda \vdash n$ and $T$ be a $\lambda$-tableau. Then the Young symmetrizer associated to $T$ is the element of the group algebra $\mathbb{R}[S_n]$ defined to be $\varepsilon_T := \sum_{\tau \in \operatorname{RStab}_T}\sum_{\sigma \in \operatorname{CStab}_T} \operatorname{sgn}(\sigma)\,\sigma\tau$. Now let $T$ be a standard Young tableau, and define the higher Specht polynomial associated with the pair $(T,V)$ to be $F^T_V := \varepsilon_V\bigl(X^{i(w(T))}_{w(V)}\bigr)$. For $\lambda \vdash n$ we will denote by $\mathcal{F}_\lambda := \{F^T_V \,:\, T, V \text{ run over all standard } \lambda\text{-tableaux}\}$ the set of all standard higher Specht polynomials corresponding to $\lambda$, and by $\mathcal{F} := \bigcup_{\lambda \vdash n} \mathcal{F}_\lambda$ the union of these sets. Let $s_\lambda$ denote the number of standard Young tableaux of shape $\lambda$. It follows from the so-called Robinson-Schensted correspondence (see [31]) that $\sum_{\lambda \vdash n} s_\lambda^2 = n!$.
Therefore the cardinality of $\mathcal{F}$ is exactly $n!$.
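This numerical identity is easy to check computationally; the following sketch (our own, not from the paper) computes $s_\lambda$ via the hook length formula and verifies $\sum_{\lambda \vdash 5} s_\lambda^2 = 5!$:

```python
from math import factorial

# Sketch (our own illustration): the number s_lambda of standard Young tableaux,
# computed with the hook length formula, satisfies sum over lambda |- n of
# s_lambda^2 = n!, in line with the Robinson-Schensted correspondence.

def partitions(n, largest=None):
    """Generate all partitions of n as weakly decreasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def num_standard_tableaux(lam):
    """Hook length formula: s_lambda = n! / (product of hook lengths)."""
    n = sum(lam)
    hooks = 1
    for i, row in enumerate(lam):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for k in range(i + 1, len(lam)) if lam[k] > j)
            hooks *= arm + leg + 1
    return factorial(n) // hooks

n = 5
total = sum(num_standard_tableaux(lam) ** 2 for lam in partitions(n))
print(total == factorial(n))  # True: 120 = 5!
```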

The importance of the higher Specht polynomials is summarized in the following Theorem, which can be found in [1, Theorem 1].
Theorem 4.10. The following holds for the set of higher Specht polynomials.
(1) The set $\mathcal{F}$ is a free basis of the ring $\mathbb{R}[X]$ over the invariant ring $\mathbb{R}[X]^{S_n}$.
(2) For any $\lambda \vdash n$ and standard $\lambda$-tableau $T$, the space spanned by the polynomials in $\{F^T_V \,:\, V \text{ runs over all standard } \lambda\text{-tableaux}\}$ is an irreducible $S_n$-module isomorphic to the Specht module $S^\lambda$.
For every $\lambda \vdash n$ we denote by $V^\lambda_0$ the standard $\lambda$-tableau with entries $\{1,\ldots,\lambda_1\}$ in the first row, $\{\lambda_1+1,\ldots,\lambda_1+\lambda_2\}$ in the second row, and so on. Consider the set $Q_\lambda := \{F^T_{V^\lambda_0} \,:\, T \text{ runs over all standard } \lambda\text{-tableaux}\}$, which is of cardinality $s_\lambda$. The set $Q_\lambda$ is a symmetry basis of the vector space spanned by $\mathcal{F}$. Using these polynomials we define $s_\lambda \times s_\lambda$ matrix polynomials $Q_\lambda$ by
(4.2) $Q_\lambda(T,T') := \operatorname{sym}\bigl(F^T_{V^\lambda_0}\, F^{T'}_{V^\lambda_0}\bigr)$,
where $T, T'$ run over all standard $\lambda$-tableaux. Since by (1) of Theorem 4.10 the set $\mathcal{F}$ is a free basis of $\mathbb{R}[X]$ over $\mathbb{R}[X]^{S_n}$, Theorem 4.2 yields the following characterization. Theorem 4.11. A symmetric form $f$ is a sum of squares if and only if it can be written as $f = \sum_{\lambda \vdash n} \langle B_\lambda, Q_\lambda\rangle$, where $Q_\lambda$ is defined in (4.2) and each $B_\lambda \in \mathbb{R}[X]^{s_\lambda \times s_\lambda}$ is a sum of symmetric squares matrix polynomial, i.e. $B_\lambda(x) = L^t(x)L(x)$ for some matrix polynomial $L(x)$ whose entries are symmetric polynomials.
Each entry of the matrix $Q_\lambda$ is a symmetric polynomial and thus can be represented as a polynomial in any set of generators of the ring of symmetric polynomials. We will use the power means $p_1,\ldots,p_n$ to phrase the next theorem. However, any other choice works similarly. With this choice of basis it follows that there exists a matrix polynomial $\tilde{Q}_\lambda(z_1,\ldots,z_n)$ in $n$ variables $z_1,\ldots,z_n$ such that
(4.3) $\tilde{Q}_\lambda(p_1(x),\ldots,p_n(x)) = Q_\lambda(x)$.
With this notation we can restate Theorem 4.11 in this basis (Theorem 4.12). While Theorems 4.11 and 4.12 give a characterization of symmetric sums of squares in a given number of variables, we need to understand the behavior of the $S_n$-module $H_{n,d}$ for polynomials of a fixed degree $d$ in a growing number of variables $n$. This will be done in the next section.

4.4. The cone $\Sigma^S_{n,2d}$. A symmetric sum of squares $f \in \Sigma^S_{n,2d}$ has to be a sum of squares of forms from $H_{n,d}$. Therefore we now consider restricting the degree of the squares in the underlying sum of squares representation. With a little abuse of notation we denote by $\mathcal{F}_{n,d}$ the vector space spanned by the higher Specht polynomials for the group $S_n$ of degree at most $d$. Further, for a partition $\lambda \vdash n$ let $\mathcal{F}_{\lambda,d}$ denote the span of the higher Specht polynomials of degree at most $d$ corresponding to the Specht module $S^\lambda$, i.e., $\mathcal{F}_{\lambda,d}$ is exactly the isotypic component of $\mathcal{F}_{n,d}$ corresponding to $S^\lambda$. In order to describe this isotypic component combinatorially, recall that the degree of the higher Specht polynomial $F^S_T$ is given by the charge $c(S)$ of $S$. Thus, it follows from the above construction that $\mathcal{F}_{\lambda,d} = \operatorname{span}\{F^S_T \,:\, S, T \text{ are standard } \lambda\text{-tableaux and } c(S) \leq d\}$.
We now show that sums of squares of degree $2d$ in $n$ variables can be constructed by symmetrizing sums of squares in $2d$ variables. So we first consider the case $n = 2d$. Let $\mathcal{F}_{2d,d} = \bigoplus_{\lambda \vdash 2d} \mathcal{F}_{\lambda,d}$ be the decomposition of $\mathcal{F}_{2d,d}$ as an $S_{2d}$-module, and let $m_\lambda$ denote the number of copies of the Specht module $S^\lambda$ contained in $\mathcal{F}_{\lambda,d}$. This gives the multiplicities of the different $S_n$-modules appearing in the vector space of homogeneous polynomials of degree $d$. For a partition $\lambda \vdash 2d$ and $n \geq 2d$ define a new partition $\lambda^{(n)} \vdash n$ by simply increasing the first part of $\lambda$ by $n - 2d$: $\lambda^{(n)} := (\lambda_1 + n - 2d, \lambda_2,\ldots,\lambda_l)$. Then the decomposition of Theorem 4.10 in combination with [28, Theorem 4.7] yields that for $n \geq 2d$ every irreducible $S_n$-module appearing in $H_{n,d}$ is of the form $S^{\lambda^{(n)}}$ for some $\lambda \vdash 2d$. For every $\lambda \vdash 2d$ we choose $m_\lambda$-many higher Specht polynomials $\{q^\lambda_1,\ldots,q^\lambda_{m_\lambda}\}$ that form a symmetry basis of the $\lambda$-isotypic component of $\mathcal{F}_{2d,d}$.
Let $q_\lambda = (q^\lambda_1,\ldots,q^\lambda_{m_\lambda})$ be a vector with entries $q^\lambda_i$. As before we construct a matrix $Q^\lambda_{2d}$ with entries $Q^\lambda_{2d}(i,j) = \operatorname{sym}_{2d}\bigl(q^\lambda_i q^\lambda_j\bigr)$. Further, we define a matrix $Q^\lambda_n$ by $Q^\lambda_n = \operatorname{sym}_n\bigl(q_\lambda^t q_\lambda\bigr)$, i.e., $Q^\lambda_n(i,j) = \operatorname{sym}_n\bigl(q^\lambda_i q^\lambda_j\bigr)$. By construction we have the following: Proposition 4.14. The matrix $Q^\lambda_n$ is the $S_n$-symmetrization of the matrix $Q^\lambda_{2d}$. We now give a parametric description of the family of cones $\Sigma^S_{n,2d}$. Note again that this statement is given in terms of a particular basis, but it can be stated similarly with any set of generators. Theorem 4.15. A symmetric form $f \in H^S_{n,2d}$ is a sum of squares if and only if it can be written as $f = \sum_{\lambda \vdash 2d}\langle B_\lambda, Q^\lambda_n\rangle$, where each $B_\lambda = L^t_\lambda L_\lambda$ for a matrix polynomial $L_\lambda(z_1,\ldots,z_d)$ whose entries are weighted homogeneous forms. Additionally, we have for every column $k$ of $L_\lambda$ that $\deg_w Q^\lambda_n(i,k) + 2\deg_w L_\lambda(k,i) = 2d$, or equivalently every entry $B_\lambda(i,j)$ of $B_\lambda$ is a weighted homogeneous form such that $\deg_w B_\lambda(i,j) + \deg_w Q^\lambda_n(i,j) = 2d$. Proof. In order to apply Theorem 4.11 to our fixed degree situation we have to show that the forms $\{q^\lambda_1,\ldots,q^\lambda_{m_\lambda}\}$, when viewed as functions in $n$ variables, also form a symmetry basis of the $\lambda^{(n)}$-isotypic component of $\mathcal{F}_{n,d}$ for all $n \geq 2d$. Indeed, consider a standard Young tableau $t_\lambda$ of shape $\lambda$ and construct a standard Young tableau $t_{\lambda^{(n)}}$ of shape $\lambda^{(n)}$ by adding the numbers $2d+1,\ldots,n$ as the rightmost entries of the top row, while keeping the rest of the filling the same as for $t_\lambda$. It follows by construction of the Specht polynomials that $\operatorname{sp}_{t_{\lambda^{(n)}}} = \operatorname{sp}_{t_\lambda}$. We may assume that the $q^\lambda_k$ were chosen such that they map to $\operatorname{sp}_{t_\lambda}$ by an $S_{2d}$-isomorphism. We observe that $\operatorname{sp}_{t_\lambda}$ (and therefore $\operatorname{sp}_{t_{\lambda^{(n)}}}$) and $q^\lambda_k$ do not involve any of the variables $X_j$, $j > 2d$. Therefore both are stabilized by $S_{n-2d}$ (operating on the last $n-2d$ variables), and further the action on the first $2d$ variables is exactly the same. Thus there is an $S_n$-isomorphism mapping $q^\lambda_k$ to $\operatorname{sp}_{t_{\lambda^{(n)}}}$ and the $S_n$-modules generated by the two polynomials are isomorphic. Therefore it follows that the $q^\lambda_k$ also form a symmetry basis of the $\lambda^{(n)}$-isotypic component of $\mathcal{F}_{n,d}$. Remark 4.16. We remark that the sum of squares decomposition of $f = \sum_{\lambda \vdash 2d}\langle B_\lambda, Q^\lambda_n\rangle$, with $B_\lambda = L^t_\lambda L_\lambda$, can be read off from the rows of the matrices $L_\lambda$. In particular, if for a fixed $\lambda \vdash 2d$ and for every $1 \leq i \leq m_\lambda$ we denote $\delta_i := d - \deg q^\lambda_i$, then multiplying each $q^\lambda_i$ with symmetric forms of degree $\delta_i$ yields a symmetry basis of the isotypic component of $H_{n,d}$ corresponding to $\lambda^{(n)}$.

4.5. The dual cone of symmetric sums of squares. Recall that for a convex cone $K \subset \mathbb{R}^n$ the dual cone $K^*$ is defined as $K^* := \{\ell \in (\mathbb{R}^n)^* \,:\, \ell(x) \geq 0 \text{ for all } x \in K\}$. Our analysis of the dual cone $(\Sigma^S_{n,2d})^*$ proceeds similarly to the analysis of the dual cone in the non-symmetric situation given in [4,6].
Let $\mathcal{S}_{n,d}$ be the vector space of real quadratic forms on $H_{n,d}$, and let $\mathcal{S}^+_{n,d}$ be the cone of positive semidefinite quadratic forms in $\mathcal{S}_{n,d}$. An element $Q \in \mathcal{S}_{n,d}$ is said to be $S_n$-invariant if $Q(f) = Q(\sigma(f))$ for all $\sigma \in S_n$, $f \in H_{n,d}$. We will denote by $\bar{\mathcal{S}}_{n,d}$ the space of $S_n$-invariant quadratic forms on $H_{n,d}$. Further, we can identify a linear functional $\ell \in (H^S_{n,2d})^*$ with a quadratic form $Q_\ell$ defined by $Q_\ell(f) = \ell(\operatorname{sym}(f^2))$.
Let $\bar{\mathcal{S}}^+_{n,d}$ be the cone of positive semidefinite forms in $\bar{\mathcal{S}}_{n,d}$, i.e., $\bar{\mathcal{S}}^+_{n,d} := \{Q \in \bar{\mathcal{S}}_{n,d} \,:\, Q(f) \geq 0 \text{ for all } f \in H_{n,d}\}$.
The following Lemma is straightforward, but very important, as it allows us to identify the elements $\ell$ of the dual cone $(\Sigma^S_{n,2d})^*$ with quadratic forms $Q_\ell$ in $\bar{\mathcal{S}}^+_{n,d}$.
Lemma 4.17. A linear functional $\ell \in (H^S_{n,2d})^*$ belongs to the dual cone $(\Sigma^S_{n,2d})^*$ if and only if the quadratic form $Q_\ell$ is positive semidefinite.
Since for $\ell \in (H^S_{n,2d})^*$ we have $Q_\ell \in \bar{\mathcal{S}}_{n,d}$, Schur's Lemma again applies and we can use the symmetry basis constructed above to simplify the condition that $Q_\ell$ is positive semidefinite; in order to arrive at a dual statement of Theorem 4.15 one constructs matrices built from the symmetry basis in an analogous way. In order to examine the kernels of quadratic forms we use the following construction. Let $W \subset H_{n,d}$ be any linear subspace. We define $W^{\langle 2\rangle}$ to be the symmetrization of the degree $2d$ part of the ideal generated by $W$, i.e., $W^{\langle 2\rangle} := \operatorname{sym}\bigl(W \cdot H_{n,d}\bigr)$. In Lemma 4.17 we identified the dual cone $(\Sigma^S_{n,2d})^*$ with a linear section of the cone of positive semidefinite quadratic forms $\mathcal{S}^+_{n,d}$ with the subspace $\bar{\mathcal{S}}_{n,d}$ of symmetric quadratic forms. By a slight abuse of terminology we think of positive semidefinite forms $Q_\ell$ as elements of the dual cone $(\Sigma^S_{n,2d})^*$. The following important Proposition is a straightforward adaptation of the equivalent result in the non-symmetric case [6, Proposition 2.1]: Proposition 4.20. Let $\ell \in (\Sigma^S_{n,2d})^*$ be a linear functional non-negative on squares and let $W \subset H_{n,d}$ be the kernel of the quadratic form $Q_\ell$. The linear functional $\ell$ spans an extreme ray of $(\Sigma^S_{n,2d})^*$ if and only if $W^{\langle 2\rangle}$ is a hyperplane in $H^S_{n,2d}$. Equivalently, the kernel of $Q_\ell$ is maximal, i.e. if $\ker Q_\ell \subseteq \ker Q_m$ for some $m \in (H^S_{n,2d})^*$ then $m = \lambda \ell$ for some $\lambda \in \mathbb{R}$.
The dual correspondence yields that any facet $F$ of a cone $K$, i.e. any maximal face of $K$, is given by an extreme ray of the dual cone $K^*$. More precisely, for any maximal face $F$ of $K$ there exists an extreme ray of $K^*$ spanned by a linear functional $\ell \in K^*$ such that $F = \{x \in K \,:\, \ell(x) = 0\}$. We now aim to characterize the extreme rays of $(\Sigma^S_{n,2d})^*$ which are not extreme rays of the cone $(P^S_{n,2d})^*$. For $v \in \mathbb{R}^n$ define a linear functional $\ell_v : H^S_{n,2d} \to \mathbb{R}$ by $\ell_v(f) = f(v)$. We say that the linear functional $\ell_v$ corresponds to point evaluation at $v$. It is easy to show, with the same proof as in the non-symmetric case, that the extreme rays of the cone $(P^S_{n,2d})^*$ are precisely the point evaluations $\ell_v$ (see [5, Chapter 4] for the non-symmetric case). Therefore we need to identify extreme rays of $(\Sigma^S_{n,2d})^*$ which are not point evaluations. We now examine the case of degree 4 in detail, and give an explicit construction of an element of $(\Sigma^S_{n,4})^*$ which does not belong to $(P^S_{n,4})^*$.
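As a small illustration of Lemma 4.17 (our own sketch, not code from the paper; sympy and numpy assumed), the quadratic form $Q_\ell$ associated to a point evaluation $\ell = \ell_v$ is positive semidefinite, since $Q_\ell(f) = \operatorname{sym}(f^2)(v)$ is an average of squares:

```python
import sympy as sp
import numpy as np
from itertools import permutations

# Sketch (our own illustration of Lemma 4.17): for a point evaluation l(f) = f(v),
# the quadratic form Q_l(f) = l(sym(f^2)) = (1/n!) * sum_sigma f(sigma v)^2
# is positive semidefinite, so l lies in the dual cone of the sums of squares.
n = 3
X = sp.symbols(f'x1:{n+1}')
v = (1, 2, -1)                                      # an arbitrary point in R^3

basis = [X[0]**2, X[1]**2, X[2]**2,
         X[0]*X[1], X[0]*X[2], X[1]*X[2]]           # monomial basis of H_{3,2}

orbit = list(permutations(v))
def moment(i, j):
    return sp.Rational(1, len(orbit)) * sum(
        basis[i].subs(dict(zip(X, p))) * basis[j].subs(dict(zip(X, p))) for p in orbit)

Q = sp.Matrix(len(basis), len(basis), moment)
eigs = np.linalg.eigvalsh(np.array(Q.tolist(), dtype=float))
print(eigs.min() >= -1e-9)                          # True: Q_l is positive semidefinite
```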

Symmetric quartic sums of squares
We now look at the decomposition of $H_{n,2}$ as an $S_n$-module in order to apply Theorem 4.2 and characterize all symmetric sums of squares of degree 4.
Theorem 5.1. Let $f^{(n)} \in H_{n,4}$ be symmetric and $n \geq 4$. If $f^{(n)}$ is a sum of squares then it can be written in the form given in (5.1), such that $\gamma \geq 0$ and the matrices $\begin{pmatrix}\alpha_{11} & \alpha_{12}\\ \alpha_{12} & \alpha_{22}\end{pmatrix}$ and $\begin{pmatrix}\beta_{11} & \beta_{12}\\ \beta_{12} & \beta_{22}\end{pmatrix}$ are positive semidefinite.
Proof. The statement follows directly from the arguments presented in Subsection 4.4. Following Theorem 4.15 we get that $f^{(n)}$ has a decomposition of the form $f^{(n)} = \langle B^{(n)}, Q^{(n)}_n\rangle + \langle B^{(n-1,1)}, Q^{(n-1,1)}_n\rangle + \langle B^{(n-2,2)}, Q^{(n-2,2)}_n\rangle$, where $B^{(n)}$ is a sum of symmetric squares, $B^{(n-1,1)}$ is a $2\times 2$ sum of symmetric squares matrix polynomial and, due to the degree restrictions, $B^{(n-2,2)}$ is a non-negative scalar. It remains to calculate the matrices $Q^{(n-1,1)}_n$ and $Q^{(n-2,2)}_n$ appearing in this decomposition. These are defined as the symmetrizations of the pairwise products of those Specht polynomials which generate the corresponding Specht modules in degree 2. In degree at most 2 the polynomials $X_n - X_1$ and $X_n^2 - X_1^2$ generate two distinct irreducible $S_n$-modules isomorphic to $S^{(n-1,1)}$, and the Specht polynomial $(X_{n-1} - X_1)(X_n - X_2)$ generates a module isomorphic to $S^{(n-2,2)}$. The symmetrizations can then be calculated quite directly, since every one of the products involves at most 4 variables. These calculations yield (5.1), which gives exactly the statement of the Theorem.

5.1. The boundary of $\Sigma^S_{n,4}$. We now apply Proposition 4.20 to the case of degree 4 and examine the possible kernels of an extreme ray of $(\Sigma^S_{n,4})^*$ which does not come from a point evaluation.
Lemma 5.2. Suppose a linear functional $\ell$ spans an extreme ray of $(\Sigma^S_{n,4})^*$ which is not an extreme ray of $(P^S_{n,4})^*$. Let $Q_\ell$ be the quadratic form corresponding to $\ell$. Then $\operatorname{Ker} Q_\ell \cong S^{(n)} \oplus S^{(n-1,1)}$, or $\ell\bigl(\sum_{\lambda \vdash 4} c_\lambda p_\lambda\bigr) = c_{(4)} + c_{(2,2)}$ and $n$ is odd.
Proof. As above let $W$ denote the kernel of $Q_\ell$, and write $W \cong \alpha\, S^{(n)} \oplus \beta\, S^{(n-1,1)} \oplus \gamma\, S^{(n-2,2)}$ for its decomposition into irreducible $S_n$-modules, where $\alpha$, $\beta$ and $\gamma$ denote the respective multiplicities. We first observe that $\alpha = 2$ is not possible: if $\alpha = 2$ then we have $p_2 \in W$, which implies $p_2^2 \in W^{\langle 2\rangle}$, a contradiction since $p_2^2$ is not on the boundary of $\Sigma^S_{n,4}$. By Proposition 4.20 the kernel $W$ of $Q_\ell$ must be maximal. Let $w \in \mathbb{R}^n$ be the all-ones vector $w = (1,\ldots,1)$. We now observe that $\alpha = 0$ is also not possible: if $\alpha = 0$ then all forms in the kernel $W$ of $Q_\ell$ are $0$ at $w$. Therefore $\ker Q_\ell \subseteq \ker Q_{\ell_w}$, and by Proposition 4.20 we have $Q_\ell = \lambda\, Q_{\ell_w}$, which is a contradiction, since $\ell$ does not correspond to a point evaluation. Thus we must have $\alpha = 1$.
Now the condition $\dim W^{\langle 2\rangle} = 4$ implies that these 5 products cannot be linearly independent, and an explicit calculation of the determinant of the corresponding matrix $M$ yields $\det M = b(a+b)$. We now examine the possible roots of this determinant. In the case when $a = -b$ all polynomials in $W$ (even if $\gamma = 1$) will be zero at $(1,\ldots,1)$, which is excluded. Therefore the only possible case is $b = 0$. In that case, by calculating the kernel of $M$ we see that there is a unique (up to a constant multiple) linear functional vanishing on $W^{\langle 2\rangle}$. We observe using (5.1) that we must have $\gamma = 0$, since $\operatorname{sym}_n\bigl((X_1 - X_2)^2(X_3 - X_4)^2\bigr) > 0$ for $n \geq 4$ (this can be checked directly; see the computational sketch after this proof). Now suppose that $n$ is even and let $w \in \mathbb{R}^n$ be given by $w = (1,\ldots,1,-1,\ldots,-1)$, where $1$ and $-1$ occur $n/2$ times each. It is easy to verify that for all $f \in W$ we have $f(w) = 0$. Therefore it follows that $W \subseteq \ker Q_{\ell_w}$, which is a contradiction, since $W$ is the kernel of an extreme ray which does not come from a point evaluation.
When $n$ is odd the forms in $W$ have no common zeroes and therefore $\ell$ is not a positive combination of point evaluations. It is not hard to verify that $\ell$ is non-negative on squares and that the kernel $W$ is maximal. Therefore by Proposition 4.20 we know that $\ell$ spans an extreme ray of $(\Sigma^S_{n,4})^*$. Finally we need to deal with the case $\alpha = \beta = \gamma = 1$. Suppose that the $S_n$-module $W$ is generated by three polynomials $q_1$, $q_2$, $q_3$. Again we consider the symmetrizations of the five pairwise products and represent these in a matrix $M$. Explicit calculations now show that $\det M = -(a+b)\bigl(ad^2n^2 - 4ad^2n + 4ad^2 + bd^2n^2 + 4bcdn + bc^2n - 4bcd - bc^2\bigr)$.
Since we have $\alpha = \beta = \gamma = 1$ we must have $\operatorname{rank} M = 4$, since the rows of $M$ generate $W^{\langle 2\rangle}$. Again we cannot have $a = -b$, and thus we must have $ad^2n^2 - 4ad^2n + 4ad^2 + bd^2n^2 + 4bcdn + bc^2n - 4bcd - bc^2 = 0$. Therefore there exists a unique linear functional $\ell$ which vanishes on $W^{\langle 2\rangle}$ and comes from the kernel of $M$.
Let $w \in \mathbb{R}^n$ be a point with coordinates $w = (s,\ldots,s,t)$ with $s, t \in \mathbb{R}$ chosen to satisfy the relation (5.2). We see that $q_3(w) = 0$, and from (5.2) it also follows that for all $f$ in the $S_n$-module generated by $q_2$ we have $f(w) = 0$. A direct calculation shows that (5.2) also implies that $q_1(w) = 0$. Thus we have $W \subseteq \ker Q_{\ell_w}$, which is a contradiction by Proposition 4.20, since $W$ is the kernel of an extreme ray which does not come from a point evaluation. We remark that it is possible to show that the functional $\ell$ vanishing on $W^{\langle 2\rangle}$ and giving rise to $W$ is in fact a multiple of $\ell_w$, but this is not necessary for us to finish the proof.
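The symmetrizations that enter the proof of Theorem 5.1, as well as the positivity of $\operatorname{sym}_n\bigl((X_1-X_2)^2(X_3-X_4)^2\bigr)$ used above, can be checked by brute force for small $n$. The following sketch (our own illustration, not code from the paper; standard sympy assumed) does this for $n = 4$:

```python
import sympy as sp
from itertools import permutations
from math import factorial

# Sketch (our own brute-force check): the symmetrizations appearing in the proof
# of Theorem 5.1, computed directly for n = 4.
n = 4
X = sp.symbols(f'x1:{n+1}')

def sym(f):
    """Reynolds operator: average of f over all permutations of x1, ..., xn."""
    total = sum(f.xreplace({X[i]: X[p[i]] for i in range(n)}) for p in permutations(range(n)))
    return sp.expand(total / factorial(n))

q1 = X[n-1] - X[0]                        # X_n - X_1
q2 = X[n-1]**2 - X[0]**2                  # X_n^2 - X_1^2
q3 = (X[n-2] - X[0]) * (X[n-1] - X[1])    # (X_{n-1} - X_1)(X_n - X_2)

print(sym(q1**2))     # entry (1,1) of Q^(n-1,1)_n
print(sym(q1*q2))     # entry (1,2) of Q^(n-1,1)_n
print(sym(q2**2))     # entry (2,2) of Q^(n-1,1)_n
print(sym(q3**2))     # the scalar Q^(n-2,2)_n

# the polynomial sym((X_1 - X_2)^2 (X_3 - X_4)^2) is nonzero: it takes a positive
# value at a point with distinct coordinates
g = sym((X[0] - X[1])**2 * (X[2] - X[3])**2)
print(g.subs(dict(zip(X, (1, 2, 3, 5)))) > 0)  # True
```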
The above description allows us to explicitly characterize the degree 4 symmetric sums of squares that are positive and lie on the boundary of $\Sigma^S_{n,4}$ (Theorem 5.3). Proof. Suppose that $f^{(n)}$ is a strictly positive form on the boundary of $\Sigma^S_{n,4}$. Then there exists a non-trivial functional $\ell$ spanning an extreme ray of the dual cone $(\Sigma^S_{n,4})^*$ such that $\ell(f^{(n)}) = 0$. Let $W_\ell \subset H_{n,2}$ denote the kernel of $Q_\ell$. In view of Lemma 5.2 we see that there are two possible situations that we need to take into consideration.
Let $q \in H_{n,2}$. By Proposition 4.20 we have $q \in W_\ell$ if and only if the $S_n$-linear map $p \mapsto \ell(\operatorname{sym}(pq))$ is the zero map on $H_{n,2}$.
The dimension of the vector space of $S_n$-invariant quadratic maps from $H_{n,2}$ to $\mathbb{R}$ is 5. However, since $q \in W_\ell$, Schur's lemma implies $\ell(\operatorname{sym}(q\cdot r)) = 0$ for all $r$ in the isotypic component of the corresponding type.

Proposition 6.4. Let $M_n$ be the matrix converting between the monomial mean and power mean basis of $H^S_{n,2d}$. Then $M_n$ converges entry-wise to a full rank matrix $M^*$ as $n$ grows to infinity.
Therefore we see that asymptotically each monomial mean agrees with the corresponding power mean up to correction terms whose coefficients $a_{\mu,\nu}(n)$ tend to $0$ as $n \to \infty$. The Proposition now follows.
Now with these preparations the proof of Theorem 2.8 will be immediate after the following two Lemmata. Lemma 6.5. Let $A_i$ be a sequence of subsets of a finite dimensional real vector space converging to a set $A$, and let $M_i$ be a sequence of linear maps converging to the identity. Set $B_i := M_i(A_i)$. Then $A \subseteq \liminf_{i\to\infty} B_i$ and $\limsup_{i\to\infty} B_i \subseteq A$. Proof. For the first inclusion, let $a \in A$. Since $A$ is the limit of the $A_i$, there exists $N$ such that for all $i \geq N$ the point $a$ is contained in $A_i$. Let $b_i = M_i a$. Then $b_i \in B_i$ for all $i \geq N$ and moreover, since the linear maps $M_i$ converge to the identity, we have that $b_i$ converges to $a$. This in turn implies that $a \in \liminf_{i\to\infty} B_i$. For the second inclusion, we remark that $(\limsup_{i\to\infty} B_i)^c = \liminf_{i\to\infty} B_i^c$. Hence one can argue in an analogous way by considering the complement of $A$.
From the above lemma we can easily obtain the following generalization, which shows that the conclusions also hold if the limit of the linear maps $M_i$ is any full-rank map. Proof. We can apply Lemma 6.5 to the sequence $C_i = M^{-1}M_i(A_i)$. Since $B_i = M(C_i)$ and $M$ is a full-rank linear map, the desired conclusions follow for the sequence $B_i$ as well.