Coordinate-wise powers of algebraic varieties

We introduce and study coordinate-wise powers of subvarieties of P^n, i.e. varieties arising from raising all points in a given subvariety of P^n to the r-th power, coordinate by coordinate. This corresponds to studying the image of a subvariety of P^n under the quotient of P^n by the action of the finite group Z_r^{n+1}. We determine the degree of coordinate-wise powers and study their defining equations, in particular for hypersurfaces and linear spaces. Applying these results, we compute the degree of the variety of orthostochastic matrices and determine iterated dual and reciprocal varieties of power sum hypersurfaces.
We also establish a link between coordinate-wise squares of linear spaces and the study of real symmetric matrices with a degenerate eigenspectrum.


Introduction
In earlier work, the variety associated to the restricted Boltzmann machine was described as a repeated Hadamard product of the secant variety of P^1 × · · · × P^1 ⊂ P^{2^n − 1} with itself. Further study in Bocci et al. (2016, 2018), Friedenberg et al. (2017) and Calussi et al. (2018) made progress towards understanding Hadamard products.
Of particular interest are the r-th Hadamard powers X^{⋆r} := X ⋆ · · · ⋆ X of an algebraic variety X ⊂ P^n. They are the multiplicative analogue of secant varieties, which play a central role in classical projective geometry: the r-th secant variety σ_r(X) is the closure of the set of coordinate-wise sums of r points in X. Its subvariety corresponding to sums of r equal points is the original variety X. In the multiplicative setting, the Hadamard power X^{⋆r} replaces σ_r(X), but it does not typically contain X if [1 : . . . : 1] ∉ X. As a multiplicative substitute for the inclusion X ⊂ σ_r(X), it is natural to study the subvariety of X^{⋆r} given by coordinate-wise products of r equal points in X.
Formally, for a projective variety X ⊂ P^n and an integer r ∈ Z (possibly negative), we are interested in studying its image under the rational map

ϕ_r : P^n ⇢ P^n,  [x_0 : . . . : x_n] ↦ [x_0^r : . . . : x_n^r].
We call the image X^{•r} of X under ϕ_r the r-th coordinate-wise power of X ⊂ P^n. In this article, we investigate these coordinate-wise powers X^{•r} with a main focus on the case r > 0. These varieties show up naturally in many applications. For the Grassmannian Gr(k, P^n) in its Plücker embedding, the intersection Gr(k, P^n) ∩ Gr(k, P^n)^{•r} with its r-th coordinate-wise power was described combinatorially in terms of matroids in Lenz (2020) for even r. In Bonnafé (2018), highly singular surfaces in P^3 were constructed as preimages of a specific singular surface under the morphism ϕ_r for r > 0. In the case r = −1, the map ϕ_r is a classical Cremona transformation; images of varieties under this transformation are called reciprocal varieties, whose study has received particular attention in the case of linear spaces, see De Loera et al. (2012), Kummer and Vinzant (2019) and Fink et al. (2018).
For r > 0, the coordinate-wise powers X^{•r} of a variety X ⊂ P^n have the following natural interpretation: the quotient of P^n by the finite subgroup Z_r^{n+1} of the torus (C^*)^{n+1} is again a projective space. The image of a variety X ⊂ P^n in P^n / Z_r^{n+1} ≅ P^n is the variety X^{•r}, since ϕ_r : P^n → P^n is the geometric quotient of P^n by Z_r^{n+1}. In other words, coordinate-wise powers of algebraic varieties are images of subvarieties of P^n under the quotient by a certain finite group. The case r = 2 has the special geometric significance of quotienting by the group generated by the reflections in the coordinate hyperplanes of P^n. We are, therefore, especially interested in coordinate-wise squares of varieties.
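As a quick illustration of this quotient description — a minimal sketch of our own, not taken from the article — the following snippet checks on rational points that ϕ_2 identifies exactly the points differing by coordinate-wise sign changes:

```python
from fractions import Fraction

def normalize(point):
    """Projective normal form: scale so the first nonzero coordinate is 1."""
    lead = next(c for c in point if c != 0)
    return tuple(Fraction(c, lead) for c in point)

def phi(point, r):
    """The coordinate-wise r-th power map phi_r on a projective point."""
    return normalize([c**r for c in point])

p = (1, -2, 3)
q = (-1, -2, -3)  # as a projective point, q differs from p by sign flips,
                  # i.e. by an element of the group G_2
assert phi(p, 2) == phi(q, 2)            # same image under the quotient map
assert phi(p, 2) == normalize((1, 4, 9))
```

Exact rational arithmetic via `Fraction` avoids any floating-point comparison issues when testing projective equality.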
A particular application of interest is the variety of orthostochastic matrices. An orthostochastic matrix is a matrix arising by squaring each entry of an orthogonal matrix. In other words, they are points in the coordinate-wise square of the variety of orthogonal matrices. Orthostochastic matrices play a central role in the theory of majorization (Marshall et al. 2011) and are closely linked to finding real symmetric matrices with prescribed eigenvalues and diagonal entries, see Horn (1954) and Mirsky (1963). Recently, it has also been shown that studying the variety of orthostochastic matrices is central to the existence of determinantal representations of bivariate polynomials and their computation, see Dey (2018).
As a further application, we show that coordinate-wise squares of linear spaces show up naturally in the study of symmetric matrices with a degenerate spectrum of eigenvalues.
The article is structured as follows: As customary when studying any variety, first and foremost, we compute the degree of X •r . We use this to derive the degree of the variety of orthostochastic matrices. In Sect. 3, we dig a little deeper and find explicitly the defining equations of the coordinate-wise powers of hypersurfaces. We define generalised power sum hypersurfaces and give relations between their dual and reciprocal varieties.
We study in more detail coordinate-wise powers of linear spaces in the final section. We show the dependence of the degree of the coordinate-wise powers of a linear space on the combinatorial information captured by the corresponding linear matroid. Particular attention is drawn to the case of coordinate-wise squares of linear spaces. For low-dimensional linear spaces we give a complete classification. We also describe the defining ideal for the coordinate-wise square of general linear spaces of arbitrary dimension in a high-dimensional ambient space, and we link this question to the study of symmetric matrices with a codimension 1 eigenspace.

Degree formula
Throughout this article, we work over C. We denote the homogeneous coordinate ring of P^n by C[x] := C[x_0, . . . , x_n]. For any integer r ∈ Z, we consider the rational map

ϕ_r : P^n ⇢ P^n,  [x_0 : . . . : x_n] ↦ [x_0^r : . . . : x_n^r].

For r ≥ 0, the rational map ϕ_r is a morphism. Throughout, let X ⊂ P^n be a projective variety, not necessarily irreducible. We denote by X^{•r} ⊂ P^n the (closure of the) image of X under the rational map ϕ_r. For r < 0, we will only consider the case that no irreducible component of X is contained in any coordinate hyperplane of P^n, so that ϕ_r is defined on a dense subset of X. We call the image X^{•r} ⊂ P^n the r-th coordinate-wise power of X. In the case r = −1, the variety X^{•(−1)} is called the reciprocal variety of X. We primarily focus on positive coordinate-wise powers in this article, and therefore we will from now on always assume r > 0 unless explicitly stated otherwise. Observe that ϕ_r : P^n → P^n is a finite morphism for r > 0, and hence the image X^{•r} of X under ϕ_r has the same dimension as X.
The cyclic group Z_r of order r is identified with the group of r-th roots of unity {ξ ∈ C | ξ^r = 1}. We consider the action of the (n + 1)-fold product Z_r^{n+1} := Z_r × · · · × Z_r on C[x] given by rescaling the variables x_0, . . . , x_n by r-th roots of unity. We denote the quotient of Z_r^{n+1} by the diagonal subgroup {(ξ, ξ, . . . , ξ) ∈ Z_r^{n+1} | ξ^r = 1} by G_r := Z_r^{n+1} / Z_r. The group action of Z_r^{n+1} on C[x] determines a linear action of G_r on P^n. In this way, we can also view G_r as a subgroup of Aut(P^n). For r = 2, this has the geometric interpretation of being the linear group action generated by the reflections in the coordinate hyperplanes. Note that G_r does not act on the vector space C[x]_d of homogeneous polynomials of degree d; instead, it acts on P(C[x]_d).
Given a projective variety, the following proposition describes set-theoretically the preimage under ϕ_r of its coordinate-wise r-th power.
Proposition 2.1 (Preimages of coordinate-wise powers) Let X ⊂ P^n be a variety and let X^{•r} ⊂ P^n be its coordinate-wise r-th power. The preimage of X^{•r} under ϕ_r is

ϕ_r^{−1}(X^{•r}) = ⋃_{τ ∈ G_r} τ · X.

Proof This follows from X^{•r} = ϕ_r(X) and the fact that ϕ_r^{−1}(ϕ_r(Y)) = ⋃_{τ ∈ G_r} τ · Y for any subvariety Y ⊂ P^n, since ϕ_r is the quotient map by the G_r-action. □

In particular, for r = 2, we obtain the following geometric description.

Corollary 2.2
The preimage of X^{•2} under ϕ_2 : P^n → P^n is the union over the orbit of X under the subgroup of Aut(P^n) generated by the reflections in the coordinate hyperplanes.
In the following theorem, we give a degree formula for the coordinate-wise powers of an irreducible variety.

Theorem 2.3 (Degree formula) Let X ⊂ P^n be an irreducible projective variety of dimension k. Let Stab_r(X) := {τ ∈ G_r | τ · X = X} and Fix_r(X) := {τ ∈ G_r | τ|_X = id_X}. Then the degree of the r-th coordinate-wise power of X is

deg X^{•r} = r^k · (|Fix_r(X)| / |Stab_r(X)|) · deg X.

Proof Let H_1, . . . , H_k ⊂ P^n for k := dim X^{•r} = dim X be general hyperplanes whose common intersection with X^{•r} consists of finitely many reduced points. We want to determine |X^{•r} ∩ ⋂_{i=1}^k H_i|. By Proposition 2.1, we have

ϕ_r^{−1}(X^{•r} ∩ ⋂_{i=1}^k H_i) = ⋃_{τ ∈ G_r} (τ · X) ∩ ⋂_{i=1}^k ϕ_r^{−1}(H_i).

Note that each ϕ_r^{−1}(H_i) is a hypersurface of degree r fixed under the G_r-action, and the common intersection of these hypersurfaces with X consists of finitely many reduced points by Bertini's theorem (as in Flenner et al. 1999, 3.4.8). By Bézout's theorem, |(τ · X) ∩ ⋂_{i=1}^k ϕ_r^{−1}(H_i)| = r^k deg X for each τ ∈ G_r. We note that Z := X ∩ ⋃_{τ ∈ G_r \ Stab_r(X)} τ · X is of dimension < k by irreducibility of X. Therefore, the common intersection of k general hyperplanes H_i with ϕ_r(Z) is empty. This implies that the intersection of τ · X and τ′ · X does not meet ⋂_{i=1}^k ϕ_r^{−1}(H_i) for all τ, τ′ ∈ G_r with τ · τ′^{−1} ∉ Stab_r(X). Hence, the above can be written as a disjoint union over the |G_r| / |Stab_r(X)| distinct translates of X. In particular,

|ϕ_r^{−1}(X^{•r} ∩ ⋂_{i=1}^k H_i)| = (|G_r| / |Stab_r(X)|) · r^k deg X.

For a general point p ∈ X, we have {τ ∈ G_r | τ · p = p} = Fix_r(X). Then Proposition 2.1 shows that a general point of X^{•r} = ϕ_r(X) has |G_r| / |Fix_r(X)| preimages under ϕ_r, so for general hyperplanes H_i we conclude

deg X^{•r} = |X^{•r} ∩ ⋂_{i=1}^k H_i| = r^k · (|Fix_r(X)| / |Stab_r(X)|) · deg X. □
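The formula can be sanity-checked computationally. The following sympy sketch (our own verification, with sample values) computes the coordinate-wise square of a circle with generic center in P^2 by eliminating variables from the graph of ϕ_2; for trivial Fix and Stab the formula predicts degree 2^1 · 2 = 4:

```python
import sympy as sp

x0, x1, x2, y0, y1, y2 = sp.symbols('x0 x1 x2 y0 y1 y2')
# A circle with generic center (2, 3) and radius 1, as a conic C in P^2:
f = (x1 - 2*x0)**2 + (x2 - 3*x0)**2 - x0**2
# Eliminate the x-variables from the graph of phi_2 to get the ideal of C^{.2}:
G = sp.groebner([f, y0 - x0**2, y1 - x1**2, y2 - x2**2],
                x0, x1, x2, y0, y1, y2, order='lex')
image = [g for g in G.exprs if not g.free_symbols & {x0, x1, x2}]
# Predicted: deg C^{.2} = r^k * deg C * |Fix|/|Stab| = 2 * 2 * 1 = 4.
assert min(sp.Poly(g, y0, y1, y2).total_degree() for g in image) == 4
```

A lexicographic Gröbner basis with the x-variables first performs the elimination; the x-free generators cut out the closure of the image.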

Orthostochastic matrices
We use Theorem 2.3 to compute the degree of the variety of orthostochastic matrices. By O(m) ⊂ P^{m²} (resp. SO(m) ⊂ P^{m²}) we mean the projective closure of the affine variety of orthogonal (resp. special orthogonal) matrices in A^{m²}. It was shown in Dey (2018) that this variety is central to the computation of determinantal representations of bivariate polynomials. We prove the following degree formula: deg O(m)^{•2} = 2^{m(m−1)/2 − 2m + 2} · deg SO(m).

Proof The variety O(m) consists of two connected components that are isomorphic to SO(m). The images of these components under ϕ_2 : P^{m²} → P^{m²} coincide. In particular, O(m)^{•2} = SO(m)^{•2} and deg O(m) = 2 deg SO(m). We determine Fix_2(SO(m)) and Stab_2(SO(m)). Identify elements of G_2 with m × m matrices whose entries are ±1. Then a group element S ∈ G_2 = {±1}^{m×m} acts on the affine open subset A^{m²} ⊂ P^{m²} corresponding to m × m matrices M ∈ C^{m×m} as S ∘ M, where S ∘ M denotes the Hadamard product (i.e. entry-wise product) of matrices. Clearly, Fix_2(SO(m)) is trivial, or else every special orthogonal matrix would need to have a zero entry at a certain position.
We claim that Stab_2(SO(m)) ⊂ {S ∈ {±1}^{m×m} | rk S = 1}. Indeed, assume that S ∈ {±1}^{m×m} lies in Stab_2(SO(m)) but is not of rank 1. Then m ≥ 2 and we may assume that the first two columns of S are linearly independent. One can choose a pair of orthogonal vectors u, v ∈ C^m adapted to these two columns. Since u and v are orthogonal, we can find a special orthogonal matrix M ∈ C^{m×m} whose first two columns are M_{•1} = u/‖u‖_2 and M_{•2} = v/‖v‖_2. But S ∈ Stab_2(SO(m)), so the matrix S ∘ M must be a special orthogonal matrix. In particular, the first two columns of S ∘ M must be orthogonal, i.e.

Σ_{i=1}^m S_{i1} S_{i2} u_i v_i = 0.  (2.1)

For a suitable choice of u and v, (2.1) can only hold if S_{i1} S_{i2} = S_{j1} S_{j2} for all i, j ∈ {1, . . . , m}. However, this contradicts the linear independence of the first two columns of S. Hence, the claim follows.
Any rank 1 matrix in {±1}^{m×m} can be uniquely written as uv^T with u, v ∈ {±1}^m and u_1 = 1. Such a rank 1 matrix S = uv^T lies in Stab_2(SO(m)) if and only if for each special orthogonal matrix M ∈ C^{m×m} the matrix S ∘ M = diag(u) M diag(v) is again a special orthogonal matrix. This is true if and only if det(diag(u)) · det(diag(v)) = (∏_i u_i)(∏_i v_i) = 1, and thus |Stab_2(SO(m))| = 2^{2m−2}.
Since SO(m) ⊂ P^{m²} is irreducible, applying Theorem 2.3 gives

deg SO(m)^{•2} = 2^{dim SO(m)} · (|Fix_2(SO(m))| / |Stab_2(SO(m))|) · deg SO(m) = 2^{m(m−1)/2 − (2m−2)} · deg SO(m). □
Finally, we observe that the affine variety of orthogonal matrices in A^{m²} is an intersection of binom(m+1, 2) quadrics, which correspond to the polynomials given by the equation M^T M = id satisfied by orthogonal matrices M ∈ C^{m×m}. Therefore, by Bézout's theorem, deg O(m) is bounded above by 2^{binom(m+1, 2)}.
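The stabiliser count |Stab_2(SO(m))| = 2^{2m−2} can be verified by brute force for small m. The following numpy sketch (our own check, using randomized special orthogonal matrices) enumerates all sign matrices S and tests whether S ∘ M stays in SO(m) for several random M:

```python
import itertools
import numpy as np

def random_special_orthogonal(m, rng):
    """A random element of SO(m) via QR decomposition."""
    q, r = np.linalg.qr(rng.standard_normal((m, m)))
    q = q * np.where(np.diag(r) < 0, -1.0, 1.0)  # canonical column signs
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0                          # flip one column into SO(m)
    return q

def stabilizer_size(m, samples=4, seed=0):
    """Count sign matrices S in {+-1}^(m x m) with S o M in SO(m)
    for several random M in SO(m), o being the Hadamard product."""
    rng = np.random.default_rng(seed)
    tests = [random_special_orthogonal(m, rng) for _ in range(samples)]
    count = 0
    for signs in itertools.product((1.0, -1.0), repeat=m * m):
        S = np.array(signs).reshape(m, m)
        if all(np.allclose((S * M) @ (S * M).T, np.eye(m), atol=1e-9)
               and abs(np.linalg.det(S * M) - 1.0) < 1e-9 for M in tests):
            count += 1
    return count

assert stabilizer_size(2) == 4    # = 2^(2m-2) for m = 2
assert stabilizer_size(3) == 16   # = 2^(2m-2) for m = 3
```

A non-stabilising S fails the orthogonality test for a generic M, so a handful of random samples suffices to separate the stabiliser from the rest.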

Linear spaces
We now determine the degree of the coordinate-wise powers L^{•r} for a linear space L ⊂ P^n, based on Theorem 2.3. It can be expressed in terms of the combinatorics captured by the matroid of L ⊂ P^n. We briefly recall some basic definitions for matroids associated to linear spaces in P^n, referring to Oxley (2011) for a detailed introduction to matroid theory. Let L ⊂ P^n be a linear space. The combinatorial information about the intersection of L with the linear coordinate spaces in P^n is captured in the linear matroid M_L: it is the collection of index sets I ⊂ {0, 1, . . . , n} such that L does not intersect the coordinate subspace spanned by {e_i | i ∈ I}.

Proposition 2.6 Let L ⊂ P^n be a linear space of dimension k, let s be the number of coloops of M_L and let t be the number of connected components of M_L. Then deg L^{•r} = r^{k+s−t+1}.

Proof By Theorem 2.3, we need to determine the cardinality of the groups Fix_r(L) and Stab_r(L). Consider the affine cone over L, which is a (k + 1)-dimensional subspace W ⊂ C^{n+1}. We denote the canonical basis of C^{n+1} by e_0, . . . , e_n.
We observe that Fix_r(L) consists of the classes of those τ ∈ Z_r^{n+1} that act on W as multiplication by a single scalar. From this, we see that |Fix_r(L)| = r^s. For the stabiliser of L, we have |Stab_r(L)| = (1/r) · |{τ ∈ Z_r^{n+1} | τ · W = W}|. There are precisely r^t elements τ ∈ Z_r^{n+1} with τ · W = W. We deduce that |Stab_r(L)| = r^{t−1}, which concludes the proof by Theorem 2.3. □

Corollary 2.7
The degree of the coordinate-wise r-th power of a linear space only depends on the associated linear matroid: if L_1, L_2 ⊂ P^n are linear spaces such that the linear matroids M_{L_1} and M_{L_2} are isomorphic (i.e. they only differ by a permutation of {0, 1, . . . , n}), then L_1^{•r} ⊂ P^n and L_2^{•r} ⊂ P^n have the same degree.

Corollary 2.8 Let L ⊂ P^n be a linear space of dimension k. Then deg L^{•r} ≤ r^k. For general k-dimensional linear spaces in P^n, equality holds.
Proof Every coloop of M_L forms a component of M_L and the set {0, 1, . . . , n} \ {coloops} is a non-empty union of components, hence t ≥ s + 1. Therefore, by Proposition 2.6, deg L^{•r} = r^{k+s−t+1} ≤ r^k. A general linear space L ∈ Gr(k, P^n) intersects only those linear coordinate spaces in P^n of dimension at least n − k. Therefore, the linear matroid of a general linear space is the uniform matroid; it is easily checked from the definitions that this matroid has no coloops and only one component, so s = 0, t = 1 and deg L^{•r} = r^k. □

Example 2.9
We illustrate Proposition 2.6 for hyperplanes. Up to permuting and rescaling the coordinates of P^n, each hyperplane is given by L = V(f) with f = x_0 + · · · + x_m for some m ∈ {0, 1, . . . , n}. For m ≥ 1, its linear matroid has no coloops (s = 0), and its connected components are {0, 1, . . . , m} together with the singletons {m+1}, . . . , {n}, so t = n − m + 1. Proposition 2.6 therefore gives deg L^{•r} = r^{(n−1)+0−(n−m+1)+1} = r^{m−1}. For m = 0, the index 0 is a coloop, s = 1, t = n + 1, and deg L^{•r} = 1, as expected for L = V(x_0).
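These predicted degrees can be checked by elimination in small cases. The following sympy sketch (our own verification, not part of the article) computes deg V(x_0 + · · · + x_m)^{•2} in P^3 for m = 1, 2:

```python
import sympy as sp

xs = sp.symbols('x0:4')
ys = sp.symbols('y0:4')

def image_degree(f):
    """Degree of the coordinate-wise square of the hyperplane V(f) in P^3,
    obtained by eliminating the x-variables from the graph of phi_2."""
    gens = [f] + [y - x**2 for x, y in zip(xs, ys)]
    G = sp.groebner(gens, *xs, *ys, order='lex')
    image = [g for g in G.exprs if not g.free_symbols & set(xs)]
    return min(sp.Poly(g, *ys).total_degree() for g in image)

# Predicted degree r^(m-1) with r = 2:
assert image_degree(xs[0] + xs[1]) == 1          # m = 1: a hyperplane again
assert image_degree(xs[0] + xs[1] + xs[2]) == 2  # m = 2: a quadric cone
```

Running the same function on the full-support hyperplane x_0 + x_1 + x_2 + x_3 recovers the quartic of Example 3.5 below, at the cost of a larger Gröbner basis computation.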

Hypersurfaces
In this section, we study the coordinate-wise powers of hypersurfaces. Here, by a hypersurface, we mean a pure codimension 1 variety. In particular, hypersurfaces are assumed to be reduced, but are allowed to have multiple irreducible components. We describe a way to find the explicit equation describing the image of the given hypersurface under the morphism ϕ r . We define generalised power sum symmetric polynomials and we give a relation between duality and reciprocity of hypersurfaces defined by them. Finally, we raise the question whether and how the explicit description of coordinate-wise powers of hypersurfaces may lead to results on the coordinate-wise powers for arbitrary varieties.

The defining equation
The defining equation of a degree d hypersurface is a square-free (i.e. reduced) polynomial, unique up to scaling, corresponding to a unique point f ∈ P(C[x]_d). We work with points in P(C[x]_d), i.e. polynomials up to scaling, and do not always make explicit which degree d we are talking about if it is irrelevant to the discussion. Since the finite morphism ϕ_r preserves dimensions, the coordinate-wise r-th power of a hypersurface is again a hypersurface, leading to the following definition: for a hypersurface V(f) ⊂ P^n, we denote by f^{•r} ∈ P(C[x]) the square-free defining equation of the hypersurface V(f)^{•r}.
For a given square-free polynomial f, we want to compute f^{•r}. To this end, we introduce the following auxiliary notion.

Definition 3.2 For an irreducible homogeneous polynomial f ∈ C[x], we set s_r(f) := x_i^r if f = x_i for some i, and otherwise s_r(f) := ∏_{g ∈ G_r · f} g, the product over the orbit of f under the action of G_r. For a square-free polynomial f with irreducible factors f_1, . . . , f_m, we set s_r(f) := lcm(s_r(f_1), . . . , s_r(f_m)).

Note that s_r(f) is easy to compute once the factorisation f = f_1 · · · f_m is known. Indeed, the irreducible factors of each s_r(f_i) are immediate from the definition, so determining the least common multiple does not require any additional factorisation.
Lemma 3.3 For every square-free polynomial f ∈ C[x], we have s_r(f) ∈ C[x_0^r, x_1^r, . . . , x_n^r].

Proof It is enough to show the claim for f irreducible, because we can deduce the general case in the following manner: if f factors into irreducible factors as f = f_1 · · · f_m, then s_r(f) is the least common multiple of the s_r(f_i), and a least common multiple of polynomials in C[x_0^r, . . . , x_n^r] again lies in C[x_0^r, . . . , x_n^r].

We now assume that f is irreducible. If f = x_i for some i ∈ {0, 1, . . . , n}, then the claim holds trivially by the definition of s_r(f). Let f ≠ x_i for all i and let g be a polynomial representing the product ∏_{h ∈ G_r · f} h. By construction, each τ ∈ G_r fixes g up to a scalar. Since g is not divisible by x_i, it must contain a monomial not divisible by x_i. This shows that g is fixed by each generator of G_r, hence by all of G_r, and the ring of G_r-invariant polynomials is precisely C[x_0^r, x_1^r, . . . , x_n^r]. □

Based on Definition 3.2 and Lemma 3.3, the following proposition gives a method to find the equation of the coordinate-wise power of a hypersurface.

Proposition 3.4 (Powers of hypersurfaces) Let V(f) ⊂ P^n be a hypersurface. The defining equation f^{•r} of its coordinate-wise r-th power is given by replacing each r-th power x_i^r by x_i in s_r(f); that is, f^{•r} is the unique polynomial with f^{•r}(x_0^r, . . . , x_n^r) = s_r(f).
The claim is therefore an immediate consequence of Lemma 3.3.
For clarity, we illustrate the above results for a hyperplane in P 3 .

Example 3.5 For n = 3 and f = x_0 + x_1 + x_2 + x_3, the orbit G_2 · f consists of the eight sign variants of f up to global sign, so

s_2(f) = ∏_{ε ∈ {±1}^3} (x_0 + ε_1 x_1 + ε_2 x_2 + ε_3 x_3).

Expanding this expression, we obtain a polynomial in C[x_0^2, x_1^2, x_2^2, x_3^2] and, substituting x_i^2 by x_i, we obtain by Proposition 3.4 that the coordinate-wise square V(f)^{•2} ⊂ P^3 is the vanishing set of

f^{•2} = Σ_i x_i^4 − 4 Σ_{i≠j} x_i^3 x_j + 6 Σ_{i<j} x_i^2 x_j^2 + 4 Σ_{i ∉ {j,k}, j<k} x_i^2 x_j x_k − 40 x_0 x_1 x_2 x_3.
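The orbit product can be expanded symbolically. The following sympy sketch (our own verification) checks that the product of the eight sign variants of x_0 + x_1 + x_2 + x_3 lies in C[x_0^2, . . . , x_3^2] and that substituting x_i^2 ↦ y_i yields the quartic Σ y_i^4 − 4 Σ_{i≠j} y_i^3 y_j + 6 Σ_{i<j} y_i^2 y_j^2 + 4 Σ y_i^2 y_j y_k − 40 y_0 y_1 y_2 y_3:

```python
import itertools
import sympy as sp

x = sp.symbols('x0:4')
y = sp.symbols('y0:4')

# Product over the orbit: eight sign variants, the sign of x0 fixed to +1:
factors = [x[0] + s1*x[1] + s2*x[2] + s3*x[3]
           for s1, s2, s3 in itertools.product((1, -1), repeat=3)]
prod = sp.expand(sp.Mul(*factors))
# Only even exponents occur, so substituting squares is well-defined:
quartic = prod.subs({xi**2: yi for xi, yi in zip(x, y)})
assert not quartic.free_symbols & set(x)

m4 = sum(yi**4 for yi in y)
m31 = sum(y[i]**3 * y[j] for i in range(4) for j in range(4) if i != j)
m22 = sum(y[i]**2 * y[j]**2 for i, j in itertools.combinations(range(4), 2))
m211 = sum(y[i]**2 * y[j] * y[k]
           for i in range(4)
           for j, k in itertools.combinations([t for t in range(4) if t != i], 2))
expected = m4 - 4*m31 + 6*m22 + 4*m211 - 40*y[0]*y[1]*y[2]*y[3]
assert sp.expand(quartic - expected) == 0
```

The substitution works term by term because every monomial of the expanded product has even exponents in each variable, as guaranteed by Lemma 3.3.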

Fig. 2 Circles and their coordinate-wise squares
This rational quartic surface is illustrated in Fig. 1. It is a Steiner surface with three singular lines forming the ramification locus of ϕ_2 restricted to V(f).

Example 3.6 (Squaring the circle) Consider the plane conic C = V(f) ⊂ P^2 given by f = (x_1 − a x_0)^2 + (x_2 − b x_0)^2 − c^2 x_0^2. In the affine chart x_0 = 1, this corresponds over the real numbers to the circle with center (a, b) and radius c. From Proposition 3.4, we see that the coordinate-wise square of the circle C ⊂ P^2 can be a line, a parabola or a singular quartic curve. See Fig. 2 for an illustration of the following three cases:

(i) If the center of the circle is the origin (a = b = 0), then f already lies in C[x_0^2, x_1^2, x_2^2], and C^{•2} ⊂ P^2 is the line V(x_1 + x_2 − c^2 x_0).

(ii) If the center of the circle lies on a coordinate axis and is not the origin (i.e. ab = 0, but (a, b) ≠ (0, 0)), then C^{•2} ⊂ P^2 is a conic. Say a = 0; then C^{•2} is defined by the equation (x_1 + x_2 + (b^2 − c^2) x_0)^2 − 4 b^2 x_0 x_2 = 0. In the affine chart x_0 = 1, C is a circle and C^{•2} is a parabola.

(iii) If the center of the circle does not lie on a coordinate axis, then |G_2 · f| = 4. Therefore, C^{•2} is a quartic plane curve. Its equation can be computed explicitly using Proposition 3.4. Being the image of a conic, the quartic curve C^{•2} is rational, hence it cannot be smooth. In fact, its singularities are two points in P^2; they form the branch locus of ϕ_2|_C : C → C^{•2}. The point [0 : 1 : −1] ∈ P^2 is the image of the two complex points [0 : 1 : ±i] at infinity lying on all of the four conics τ · C for τ ∈ G_2. The other singular point of C^{•2} is the image under ϕ_2 of the two intersection points of the two circles C and τ · C for τ = [1 : −1 : −1] ∈ G_2 inside the affine chart x_0 = 1.
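Case (ii) can be replayed symbolically. A small sympy check (our own, with sample values b = 1 and c = 2) multiplies f by its reflection x_2 ↦ −x_2 and substitutes squares, exhibiting the parabola:

```python
import sympy as sp

x1, x2, u, v = sp.symbols('x1 x2 u v')
b, c = 1, 2                       # sample center (0, b) and radius c
f = x1**2 + (x2 - b)**2 - c**2    # the circle in the affine chart x0 = 1
# Orbit under coordinate sign changes: only x2 -> -x2 gives a new polynomial.
s2 = sp.expand(f * f.subs(x2, -x2))
square = s2.subs({x1**2: u, x2**2: v})
assert not square.free_symbols & {x1, x2}
# The coordinate-wise square is the conic (u + v + b^2 - c^2)^2 - 4 b^2 v = 0;
# its quadratic part (u + v)^2 is degenerate, so this is a parabola in (u, v).
expected = sp.expand((u + v + b**2 - c**2)**2 - 4*b**2*v)
assert sp.expand(square - expected) == 0
```

The same two-line computation with generic center produces the four-factor orbit product of case (iii) and hence the quartic curve.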
More generally, the Newton polytope of f^{•r} arises from the Newton polytope of f by rescaling according to the cardinality of the orbit G_r · f: taking the product over the orbit scales the Newton polytope by the factor |G_r · f|, and substituting x_i^r by x_i rescales the Newton polytope by the factor 1/r. Hence, the Newton polytope of f^{•r} is the Newton polytope of f scaled by |G_r · f| / r.

Duals and reciprocals of power sum hypersurfaces
We now highlight the interactions between coordinate-wise powers, dual varieties and reciprocal varieties for the case of power sum hypersurfaces V(x_0^p + · · · + x_n^p) ⊂ P^n. Specifically, we determine explicitly all hypersurfaces that arise from power sum hypersurfaces by repeatedly taking duals and reciprocals, exhibiting each of them as the coordinate-wise r-th power of some hypersurface. In this subsection, we also allow r to take negative integer values.
Recall that the reciprocal variety V(f)^{•(−1)} of a hypersurface V(f) ⊂ P^n not containing any coordinate hyperplane of P^n is defined as the closure of the image of V(f) \ V(x_0 x_1 · · · x_n) under the Cremona transformation ϕ_{−1}. For linear spaces, the reciprocal variety and its Chow form have been studied in detail in Kummer and Vinzant (2019).
We also recall the definition of the dual variety of V(f) ⊂ P^n. Consider the set of hyperplanes in P^n that arise as the projective tangent space at a smooth point of V(f). This is a subset of the dual projective space (P^n)^* and its Zariski closure is the dual variety V(f)^* ⊂ (P^n)^*. We identify (C^{n+1})^* with C^{n+1} via the standard bilinear form and therefore identify (P^n)^* with P^n.
Consider the power sum polynomial f_s := x_0^s + x_1^s + · · · + x_n^s for s > 0. As before, we regard polynomials only up to scaling. For power sums with negative exponents, we consider the numerator of the rational function x_0^{−s} + · · · + x_n^{−s} and define f_{−s} := Σ_{i=0}^n ∏_{j≠i} x_j^s for s > 0. In particular, f_{−1} ∈ P(C[x]_n) is the elementary symmetric polynomial of degree n.
Recall that the morphism ϕ_r : P^n → P^n for r > 0 is finite, hence preserves dimension. Since ϕ_{−1} : P^n ⇢ P^n is a birational map, the rational map ϕ_r = ϕ_{−1} ∘ ϕ_{−r} also preserves the dimension of subvarieties not contained in coordinate hyperplanes for r < 0. In particular, coordinate-wise powers of hypersurfaces are hypersurfaces for all r ∈ Z \ {0}.

Lemma 3.8 For all s ∈ Z and r, λ ∈ Z \ {0}, we have f_{λs}^{•(λr)} = f_s^{•r}.

Proof For λ > 0, we have V(f_{λs}) = ϕ_λ^{−1}(V(f_s)), so

V(f_{λs})^{•(λr)} = ϕ_r(ϕ_λ(ϕ_λ^{−1}(V(f_s)))) = ϕ_r(V(f_s)) = V(f_s)^{•r},

where we have used the surjectivity of ϕ_λ : P^n → P^n. For λ < 0, we use the above together with V(f_{−t}) = V(f_t)^{•(−1)} and ϕ_{−1} ∘ ϕ_{−1} = id to reduce to the case λ > 0. □

This naturally leads us to our next definition.
Definition 3.9 (Generalised power sum polynomial) For any rational number p = s/r ∈ Q (r, s ∈ Z, r ≠ 0), we define the generalised power sum polynomial f_p := f_s^{•r}. By Lemma 3.8, the generalised power sum polynomial f_p is well-defined, i.e. independent of the chosen representation p = s/r. With this definition, we get the following duality result for hypersurfaces, generalising Example 4.16 in Gelfand et al. (1994).
Proposition 3.10 (Duality of generalised power sum hypersurfaces) Let p, q ∈ Q \ {0} be such that 1/p + 1/q = 1. Then V(f_p)^* = V(f_q).

Proof Write p = s/r with r > 0. The morphism ϕ_r : P^n \ V(x_0 x_1 · · · x_n) → P^n \ V(x_0 x_1 · · · x_n) induces a linear isomorphism on projective tangent spaces T_a P^n = P^n → P^n = T_b P^n, given by diag(r a_0^{r−1}, r a_1^{r−1}, . . . , r a_n^{r−1}), where b = ϕ_r(a). This maps tangent hyperplanes of V(f_s) at general points to tangent hyperplanes of V(f_p) = V(f_s)^{•r} at the corresponding image points. In particular, V(f_p)^* ⊂ P^n is the image of V(f_p) under the gradient map [x_0 : . . . : x_n] ⇢ [x_0^{p−1} : . . . : x_n^{p−1}]. A point c in this image satisfies Σ_i c_i^q = Σ_i x_i^{(p−1)q} = Σ_i x_i^p = 0, since (p − 1)q = p; hence V(f_p)^* = V(f_q). □

Remark 3.11
This statement can be understood as an algebraic analogue of the duality theory for ℓ_p-spaces (R^n, ‖·‖_p). Indeed, let p, q ≥ 1 be rational with 1/p + 1/q = 1. The unit sphere in (R^n, ‖·‖_p) is U_p := {v ∈ R^n | Σ_i |v_i|^p = 1} and, by ℓ_p-duality, hyperplanes tangent to U_p correspond to the points on the unit sphere U_q of the dual normed vector space (R^n, ‖·‖_q). The complex projective analogues of the unit spheres U_p ⊂ R^n are the generalised power sum hypersurfaces V(f_p) ⊂ P^n, and Proposition 3.10 shows the previous statement in this setting.
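The real ℓ_p-duality behind this analogy is easy to test numerically (our own sketch): for a point on the unit p-sphere in the plane, the gradient (x^{p−1}, y^{p−1}) of x^p + y^p lies on the unit q-sphere, because (p − 1)q = p.

```python
# For a point on the unit p-sphere, the tangent hyperplane's coefficient
# vector (x^(p-1), y^(p-1)) lies on the unit q-sphere when 1/p + 1/q = 1.
p, q = 4, 4/3
x = 0.9
y = (1 - x**p) ** (1/p)          # point (x, y) with x^p + y^p = 1
cx, cy = x**(p-1), y**(p-1)      # gradient of x^p + y^p at (x, y)
assert abs(cx**q + cy**q - 1) < 1e-12
```

The identity cx^q + cy^q = x^{(p−1)q} + y^{(p−1)q} = x^p + y^p = 1 holds exactly; the tolerance only absorbs floating-point rounding.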
Using Proposition 3.4, we can compute f_p for any p ∈ Q explicitly. In particular, we make the following observation (Lemma 3.12): for p = s/r with r and s coprime, the polynomial f_p arises from f_{1/r} by the substitution x_i ↦ x_i^s (taking the numerator if s < 0). By Lemma 3.12, in order to determine the generalised power sum polynomials f_p, we may restrict our attention to f_{1/r}. These have a particular geometric interpretation as repeated dual-reciprocals of the linear space V(x_0 + x_1 + · · · + x_n) ⊂ P^n, as in Corollary 3.14.
Theorem 3.13 The repeated dual-reciprocals of generalised power sum hypersurfaces V(f_p) are given by

(D ∘ R)^k V(f_p) = V(f_{p'}) with 1/p' = 1/p + k,  and  (R ∘ D)^k V(f_p) = V(f_{p''}) with 1/p'' = 1/p − k,

for all k ≥ 0, where R denotes taking the reciprocal variety and D the dual variety.

Proof We show the claim for (D ∘ R)^k V(f_p) by induction on k. For k = 0, the claim is trivial. For k > 0, we get by the induction hypothesis:

(D ∘ R)^k V(f_p) = D(R(V(f_{p̃}))) with 1/p̃ = 1/p + k − 1,  D(R(V(f_{p̃}))) =(*) D(V(f_{−p̃})) =(**) V(f_{p'}) with 1/p' = 1 + 1/p̃ − 1 + 1 = 1/p + k,

where (*) holds by Lemma 3.8 and (**) by Proposition 3.10. From this, we also see (R ∘ D)^k V(f_p) = V(f_{p''}) with 1/p'' = 1/p − k, concluding the proof. □

Fig. 3 The iterated dual-reciprocal D R D R V(f)

Corollary 3.14 For r > 0, the repeated alternating reciprocals and duals of the linear space V(f_1) ⊂ P^n are the coordinate-wise powers of V(f_1), given as (D ∘ R)^{r−1} V(f_1) = V(f_{1/r}) = V(f_1)^{•r}.

Example 3.15 Let n = 3 and f := x_0 + x_1 + x_2 + x_3. Then D R V(f) = V(f_{1/2}) = V(f)^{•2}. This is the quartic surface from Example 3.5.
Higher iterated dual-reciprocal varieties of V(f) can be explicitly computed analogously to Example 3.5 via Theorem 3.13. For instance, the surface D R D R V(f) ⊂ P^3 is the coordinate-wise cube of V(f), which is the degree 9 surface illustrated in Fig. 3.
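The exponent bookkeeping in Theorem 3.13 and Corollary 3.14 (as reconstructed here: reciprocation negates p, duality sends p to q with 1/p + 1/q = 1) can be replayed with exact arithmetic:

```python
from fractions import Fraction

def reciprocal(p):
    # V(f_p)^{(-1)} = V(f_{-p}): taking reciprocals negates the exponent.
    return -p

def dual(p):
    # V(f_p)* = V(f_q) with 1/p + 1/q = 1 (Proposition 3.10).
    return 1 / (1 - 1 / p)

p = Fraction(1)                   # start with the hyperplane V(f_1)
exponents = []
for _ in range(4):
    p = dual(reciprocal(p))       # one dual-reciprocal step D . R
    exponents.append(p)
# (D.R)^k V(f_1) = V(f_{1/(k+1)}) = V(f_1)^{.(k+1)}:
assert exponents == [Fraction(1, 2), Fraction(1, 3), Fraction(1, 4), Fraction(1, 5)]
```

Starting with the reciprocal avoids the degenerate dual of the hyperplane V(f_1) itself, matching the order of operations in Corollary 3.14.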

Remark 3.16 (Coordinate-wise rational powers)
The construction of the generalised power sum hypersurfaces V(f_p) may be understood in the broader context of coordinate-wise powers with rational exponents: for a subvariety X ⊂ P^n and a rational number p = r/s with r ∈ Z and s ∈ Z_{>0} relatively prime, we may define the coordinate-wise p-th power X^{•p} := ϕ_s^{−1}(X^{•r}) = (ϕ_s^{−1}(X))^{•r}. This is a natural generalisation of the coordinate-wise integer powers X^{•r}. With this definition, the generalised power sum hypersurface V(f_p) is the (1/p)-th coordinate-wise power of V(f_1). While we focus on coordinate-wise powers with integral exponents in this article, many results easily transfer to the case of rational exponents. For instance, the defining ideal of X^{•(r/s)} is obtained by substituting x_i ↦ x_i^s in each of the generators of the vanishing ideal of X^{•r}. In particular, the numbers of minimal generators of these two ideals agree.

From hypersurfaces to arbitrary varieties?
We briefly discuss to what extent Proposition 3.4 can be used to determine coordinate-wise powers of arbitrary varieties, and mention the difficulties involved in this approach.
Let r > 0 and let f_1, . . . , f_m be homogeneous polynomials vanishing on a variety X ⊂ P^n. Their r-th coordinate-wise powers give rise to the inclusion X^{•r} ⊆ V(f_1^{•r}, . . . , f_m^{•r}). We may ask when equality holds, which leads us to the following definition, reminiscent of the notion of tropical bases in Tropical Geometry (Maclagan and Sturmfels 2015, Section 2.6): polynomials f_1, . . . , f_m in a homogeneous ideal I form an r-th power basis of I if V(f_1^{•r}, . . . , f_m^{•r}) = V(I)^{•r}.
We show the existence of such power bases for a given ideal in the following proposition.
On the other hand, we will see that adding finitely many further polynomials f_1, . . . , f_m to a generating set of I gives an r-th power basis of I. In the following two examples, we will see that even in the case of squaring codimension 2 linear spaces, obvious candidates for f_1, . . . , f_m do not form a power basis.
Example 3.20 Consider a line V(I) ⊂ P^3 cut out by two linear forms f_1, f_2 ∈ I. In the case at hand, the polynomials f_1^{•2} and f_2^{•2} have degrees 4 and 2, respectively, by Proposition 3.4. Note that the quadric f_3 := 3x_0^2 − x_1^2 + x_2^2 − 3x_3^2, a combination of f_1 and f_2, also lies in I, so the ideal of V(I)^{•2} contains the linear form f_3^{•2} = 3x_0 − x_1 + x_2 − 3x_3. The polynomials f_1, f_2 do not form a power basis of I. In fact, one can check that V(f_1^{•2}, f_2^{•2}) ⊂ P^3 is the union of four rational quadratic curves, one of which is V(I)^{•2}; see Fig. 4 for an illustration. A power basis of I is given by f_1, f_2, f_3.

Fig. 4 Distinction between V(f_1^{•2}, f_2^{•2}) and V(I)^{•2}
Example 3.21 Another natural choice for polynomials f_1, . . . , f_m in the ideal of a linear space X ⊂ P^n consists of the circuit forms, i.e. linear forms vanishing on X that are minimal with respect to the set of occurring variables. However, already for a plane X ⊂ P^4 of codimension 2 with five circuit forms f_1, . . . , f_5, one can check that the point [16 : 16 : 1 : 36 : 9] ∈ P^4 lies in V(f_1^{•2}, . . . , f_5^{•2}), but not in X^{•2}. In particular, f_1, . . . , f_5 is not an r-th power basis for r = 2.
We have seen in Examples 3.20 and 3.21 that even for the case of linear spaces of codimension 2 it is not an easy task to a priori identify an r -th power basis.
The following proposition shows how one can straightforwardly find a very large r -th power basis of an ideal I , without first computing the ideal of V (I ) •r .
Proposition 3.22 Let I be generated by homogeneous polynomials g_1, . . . , g_k of equal degree, and let m := (k − 1)r^n + 1. Then any f_1, . . . , f_m in the linear span of g_1, . . . , g_k such that no k of them are linearly dependent form an r-th power basis of I.

Proof We assume that g_1, . . . , g_k are linearly independent, or else we can replace them with a linearly independent subset. For m := (k − 1)r^n + 1, let f_1, . . . , f_m ∈ ⟨g_1, . . . , g_k⟩ be such that no k of them are linearly dependent.
For X := V(g_1, . . . , g_k), we will show that V(f_1^{•r}, . . . , f_m^{•r}) = X^{•r} by comparing the preimages of both sides under ϕ_r : P^n → P^n.
In particular, Proposition 3.22 shows that for a subvariety of P^n defined by k forms of degree d, its coordinate-wise r-th power can be described set-theoretically by the vanishing of (k − 1)r^n + 1 forms of degree at most d r^{n−1}. However, we will see in Sect. 4 that for linear spaces this bound is rather weak in many cases and should be expected to allow dramatic refinement in general. We raise as a broad open question how small an r-th power basis of a given ideal can be chosen in general.

Linear spaces
In this section, we specialise to linear spaces L ⊂ P^n and investigate their coordinate-wise powers L^{•r}. First, we highlight the dependence of L^{•r} on the geometry of a finite point configuration associated to L ⊂ P^n. For r = 2, we point out its relation to symmetric matrices with degenerate eigenvalues. Based on this, we classify the coordinate-wise squares of lines and planes. Finally, we turn to the case of squaring linear spaces in a high-dimensional ambient space.

Point configurations
We study the defining ideal of L^{•r} for a k-dimensional linear space L ⊂ P^n. The degrees of its minimal generators do not change under rescaling and permuting the coordinates of P^n, i.e. under the actions of the algebraic torus G_m^{n+1} = (C^*)^{n+1} and the symmetric group S_{n+1}. Fixing a (k + 1)-dimensional vector space W, we may identify L with the image of a linear embedding PW ↪ P^n; such an embedding is given by n + 1 linear forms ℓ_0, . . . , ℓ_n ∈ W^*, which determine a finite multi-set Z := {ℓ_0, . . . , ℓ_n} ⊂ PW^*. Hence, we may express coordinate-wise powers of a linear space L in terms of the corresponding finite multi-set Z ⊂ PW^*. In fact, it is easy to check that the degrees of the minimal generators of the defining ideal only depend on the underlying set Z, forgetting repetitions in the multi-set. We study coordinate-wise powers of a linear space in terms of the corresponding non-degenerate finite point configuration.
For the entirety of Sect. 4, we establish the following notation: let L ⊂ P^n be a linear space of dimension k. We understand L as the image of a chosen linear embedding ι : PW ↪ P^n, given by w ↦ [ℓ_0(w) : . . . : ℓ_n(w)], and let Z ⊂ PW^* denote the corresponding point configuration. Since ℓ_0, ℓ_1, . . . , ℓ_n ∈ W^* define the linear embedding ι, they cannot have a common zero in W. Hence, the linear span of Z is the whole space PW^*. We denote by I(Z) ⊂ Sym^• W the defining ideal of Z ⊂ PW^*. The subspace of degree r forms vanishing on Z is written as I(Z)_r ⊂ Sym^r W.
The main technical tool is the following observation: L^{•r} ⊂ P^n equals (up to a linear re-embedding) the image of the r-th Veronese variety ν_r(PW) ⊂ P Sym^r W under the projection from the linear space P(I(Z)_r) ⊂ P Sym^r W.

Lemma 4.1 The diagram

ϕ_r ∘ ι = ϑ ∘ π ∘ ν_r : PW → P^n

commutes, where ν_r is the r-th Veronese embedding, π is the linear projection of P Sym^r W from the linear space P(I(Z)_r), ψ := π ∘ ν_r is a morphism and ϑ is a linear embedding.
Proof We observe that the morphism ϕ_r ∘ ι is given by w ↦ [ℓ_0^r(w) : . . . : ℓ_n^r(w)]. The n + 1 elements ℓ_i^r ∈ Sym^r W^* correspond to a linear map χ : Sym^r W → C^{n+1} via the natural identification (Sym^r W^*)^{n+1} = Hom_C(Sym^r W, C^{n+1}).
The rational map χ̄ between projective spaces corresponding to the linear map χ gives the commuting diagram ϕ_r ∘ ι = χ̄ ∘ ν_r, where ϑ is the linear embedding of projective spaces induced by factoring χ through Sym^r W / ker χ. In particular, ν_r(PW) ∩ P(ker χ) = ∅, since ϕ_r ∘ ι is defined everywhere on PW. Hence, π|_{ν_r(PW)} : ν_r(PW) → P(Sym^r W / ker χ) is a morphism.
Let f ∈ Sym^r W. Naturally identifying W and W^{**}, we may view f as a form of degree r on W^*. Then, the condition that f ∈ I(Z)_r translates to f(ℓ_i) = 0 for all i. Viewing f as a symmetric r-linear form W^* × · · · × W^* → C, this reads f(ℓ_i, . . . , ℓ_i) = 0 for all i. Also, when f is considered as a linear form on Sym^r W^*, this is equivalent to f(ℓ_i^r) = 0 for all i. The latter condition is equivalent to f ∈ ker χ, via the identification of W and W^{**}. We conclude I(Z)_r = ker χ. □
In particular, we deduce the following:

Proposition 4.2 Let L be a linear space such that the finite set of points Z does not lie on a degree r hypersurface. Then the ideal of L •r is generated by linear and quadratic forms.
Proof Since I(Z)_r = 0, we deduce from Lemma 4.1 that L^{•r} = ϕ_r(L) is a linear re-embedding of the k-dimensional r-th Veronese variety ν_r(PW) ⊂ P Sym^r W. The ideal of this Veronese variety is generated by quadrics. Since dim Sym^r W = (k+r choose r), the linear re-embedding ϑ : P Sym^r W → P^n adds n − (k+r choose r) + 1 linear forms to the ideal.
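The count of linear forms can be illustrated numerically for k = 2, r = 2: squared coordinates of points on a generic plane in P^9 span a linear space of dimension (2+2 choose 2) = 6, i.e. L^{•2} spans a P^5 cut out by 9 − 6 + 1 = 4 linear forms. A sketch with random data (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

# A generic plane L in P^9 as the image of a random 10 x 3 matrix B;
# generically, the ten points Z lie on no conic, i.e. I(Z)_2 = 0.
B = rng.standard_normal((10, 3))

# Coordinate-wise squares of sample points of L.
samples = rng.standard_normal((40, 3))   # points w in W (real for simplicity)
squared = (samples @ B.T) ** 2           # rows are points of L^{•2}

# The squares span a 6-dimensional space, since dim Sym^2 W = 6;
# equivalently, L^{•2} satisfies 10 - 6 = 4 independent linear forms.
assert np.linalg.matrix_rank(squared, tol=1e-8) == 6
```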

Degenerate eigenvalues and squaring
We now specialise to the case of coordinate-wise squaring, i.e. r = 2. This case has special geometric importance, since it corresponds to computing the image of a linear space under the quotient of P^n by the reflection group generated by the reflections in the coordinate hyperplanes. In Proposition 4.3 we point out that the coordinate-wise square of a linear space is closely related to the study of real symmetric matrices with a degenerate spectrum of eigenvalues. Here, we interpret P Sym^2 F^{k+1} (for F = R or C) as the projective space consisting of symmetric (k+1) × (k+1) matrices with entries in F, up to scaling.

Proposition 4.3
Let X ⊂ P Sym^2 R^{k+1} be the set of real symmetric (k+1) × (k+1) matrices with an eigenvalue of multiplicity ≥ k. Then the Zariski closure of X in P Sym^2 C^{k+1} is projectively equivalent to the projective cone over the coordinate-wise square L^{•2} of any k-dimensional linear space L whose point configuration Z ⊂ PW* lies on a unique quadric, and this quadric is smooth.
Proof Let L ⊂ P^n be a k-dimensional linear space such that I(Z)_2 is spanned by a smooth quadric q ∈ P Sym^2 W. Choosing coordinates on W ≅ C^{k+1}, we identify points in P Sym^2 W with complex symmetric (k+1) × (k+1) matrices up to scaling, and we may assume q = id ∈ P Sym^2 W. The second Veronese variety ν_2(PW) ⊂ P Sym^2 W consists of the matrices of rank 1. Let X_0 ⊂ P(Sym^2 W/⟨q⟩) be the image of ν_2(PW) under the natural projection. By Lemma 4.1, X_0 is the coordinate-wise square L^{•2} up to a linear re-embedding.
The projective cone over X_0 ≅ L^{•2} is the subvariety X_1 ⊂ P Sym^2 W consisting of complex symmetric matrices M such that the set {M + λ id | λ ∈ C} contains a matrix of rank ≤ 1. We observe that the rank of M − λ id is the codimension of the eigenspace of M with respect to λ ∈ C. Hence, X_1 = {M ∈ P Sym^2 C^{k+1} | M has an eigenspace of codimension ≤ 1}.
We are left to show that X 1 is the Zariski closure in P Sym 2 C k+1 of X ⊂ P Sym 2 R k+1 . Since real symmetric matrices are diagonalizable, the multiplicity of an eigenvalue is the dimension of the corresponding eigenspace. Hence, X 1 ∩ P Sym 2 R k+1 = X . The set X is the orbit of the line V := {diag(λ, . . . , λ, μ) | [λ : μ] ∈ P 1 R } under the action of O(k + 1). The action is given by conjugation with orthogonal matrices and the stabiliser is O(k) × {±1}. Therefore, X has real dimension dim V + dim O(k + 1) − dim O(k) = k + 1. Also, X 1 is the projective cone over X 0 ∼ = L •2 , so it is a (k + 1)-dimensional irreducible complex variety. We conclude that X 1 is the Zariski closure of X in P Sym 2 C k+1 .
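The orbit description in this proof is easy to check numerically: conjugating diag(λ, . . . , λ, μ) by a random orthogonal matrix produces a real symmetric matrix with an eigenvalue of multiplicity k, i.e. a point of X. A sketch (variable names are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
k = 4  # symmetric matrices of size k + 1 = 5

# A point of the orbit: Q diag(lam, ..., lam, mu) Q^T with Q orthogonal.
lam, mu = 2.0, -3.0
Q, _ = np.linalg.qr(rng.standard_normal((k + 1, k + 1)))
M = Q @ np.diag([lam] * k + [mu]) @ Q.T

# M is real symmetric, and M - lam * id has rank <= 1, i.e. the
# eigenspace of lam has codimension <= 1.
assert np.allclose(M, M.T)
assert np.linalg.matrix_rank(M - lam * np.eye(k + 1), tol=1e-8) <= 1
```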
We illustrate Proposition 4.3 in the case of 3 × 3 matrices.

Example 4.4 Consider the set of real symmetric 3 × 3 matrices with a repeated eigenvalue. We denote its Zariski closure in P Sym^2 C^3 by Y. By Proposition 4.3, it can be understood in terms of the coordinate-wise square L^{•2} for some plane L. We make this explicit as follows: Consider a planar point configuration Z lying only on the conic V(x^2 + y^2 + z^2), and let L be the corresponding plane in P^4. Under a suitable linear embedding ψ : P^4 → P Sym^2 C^3, the plane L gets mapped into Y. Indeed, it is easily checked that a point [x : y : z] gets mapped to the matrix −4(x^2 + y^2 + z^2) id + 12 (x, y, z)^T (x, y, z) under the composition ψ ∘ ι : P^2 → P Sym^2 C^3; note that this matrix has a repeated eigenvalue. More precisely, Proposition 4.3 shows that Y is the projective cone over ψ(L^{•2}) with vertex id.
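The repeated eigenvalue of the matrix in Example 4.4 can be verified numerically at a sample point (a quick check, not code from the paper):

```python
import numpy as np

# The matrix -4(x^2+y^2+z^2) id + 12 (x,y,z)^T (x,y,z) from Example 4.4.
x, y, z = 1.0, 2.0, -1.0
v = np.array([[x], [y], [z]])
s = x**2 + y**2 + z**2
M = -4 * s * np.eye(3) + 12 * (v @ v.T)

# The eigenvalues are -4s (with multiplicity 2) and 8s, so -4s is repeated.
w = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(w, [-4 * s, -4 * s, 8 * s])
```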
In Sect. 4.4 we give an explicit set-theoretic description of the coordinate-wise square of a linear space in a high-dimensional ambient space. We will show the following result as a special case of Theorem 4.11. Given a matrix A ∈ C^{s×s}, we denote a 2 × 2 minor of A by A_{ij|kℓ}, where i, j are the rows and k, ℓ are the columns of the minor.
Corollary 4.5 Let s ≥ 4. A symmetric matrix A ∈ C^{s×s} has an eigenspace of codimension ≤ 1 if and only if its 2 × 2 minors satisfy certain explicit relations for i, j, k, ℓ ≤ s distinct, which arise from Theorem 4.11. These equations describe the Zariski closure in the complex vector space Sym^2 C^s of the set of real symmetric matrices with an eigenvalue of multiplicity ≥ s − 1.

Squaring lines and planes
In this subsection we consider the low-dimensional cases and classify the coordinate-wise squares of lines and planes in arbitrary ambient spaces.
Theorem 4.6 (Squaring lines) Let L be a line in P n .
(i) If |Z| = 2, then L^{•2} is a line in P^n.
(ii) If |Z| > 2, then L^{•2} is a smooth conic in P^n.
Proof Since Z ⊂ PW * spans the projective line PW * , we must have |Z | ≥ 2.
If |Z| > 2, then I(Z)_2 = 0, since no non-zero quadratic form on the projective line PW* vanishes on all points of Z. Then Lemma 4.1 implies that L^{•2} = (ϕ_2 ∘ ι)(PW) is a linear re-embedding of ν_2(PW), which is a smooth conic in the plane P Sym^2 W ≅ P^2.
If |Z| = 2, then dim I(Z)_2 = 1, since up to scaling there is a unique quadric vanishing on the two points of Z. By Lemma 4.1, the image ϕ_2(L) lies in a projective line P^1 ≅ ϑ(P(Sym^2 W/I(Z)_2)) ⊂ P^n. On the other hand, dim L^{•2} = dim L = 1. Hence, L^{•2} = ϕ_2(L) is a line in P^n.
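The dichotomy of Theorem 4.6 is visible numerically: squared sample points of a line span a plane (in which the conic lies) when |Z| > 2, and only a line when |Z| = 2. A sketch with ad hoc matrices (names are ours, not the paper's):

```python
import numpy as np

def squared_span_dim(B, ts):
    """Dimension of the linear span of coordinate-wise squares of
    sample points t -> B (1, t) on the line given by B."""
    pts = np.array([(B @ np.array([1.0, t])) ** 2 for t in ts])
    return np.linalg.matrix_rank(pts, tol=1e-8)

ts = np.linspace(0.1, 2.0, 8)

# A line in P^3 with |Z| > 2: the square spans a plane, i.e. is a conic.
B_generic = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, 2.0]])
assert squared_span_dim(B_generic, ts) == 3

# A line with |Z| = 2: the rows of B fall into two proportional classes,
# and the coordinate-wise square is again a line.
B_special = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0], [0.0, 3.0]])
assert squared_span_dim(B_special, ts) == 2
```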

Remark 4.7
We observe that the two possibilities in Theorem 4.6 for the coordinate-wise square of a line L differ in degree. In particular, Corollary 2.7 shows that whether L^{•2} is a line or a (re-embedded) plane conic depends only on the linear matroid M_L.

Remark 4.8
In the Grassmannian of lines Gr(1, P^n), consider the locus of those lines L whose coordinate-wise square L^{•2} is a line. Considering Plücker coordinates p_{ij} on the Grassmannian Gr(1, P^n), we observe that this locus is the subvariety of Gr(1, P^n) given by the vanishing of p_{ij} p_{jk} p_{ki} for all i, j, k ∈ {0, 1, . . . , n} distinct: Indeed, if L is the image of an embedding P^1 → P^n given by a chosen rank 2 matrix B ∈ C^{(n+1)×2}, then Z ⊂ (P^1)* is the set of points corresponding to the non-zero rows of B. Then |Z| = 2 if and only if among any three distinct rows of B there always exist two linearly dependent rows. In terms of the Plücker coordinates, which are given by the 2 × 2 minors of B, this translates into the vanishing condition above.

Theorem 4.9 (Squaring planes) Let L ⊂ P^n be a plane and let I denote the ideal of the coordinate-wise square L^{•2}.
(i) If Z is not contained in any conic, then I is minimally generated by n − 5 linear forms and 6 quadratic forms.
(ii) If Z is contained in a unique conic Q ⊂ PW*, we distinguish two cases:
(a) If Q is irreducible, then I is minimally generated by n − 4 linear forms and 7 cubic forms.
(b) If Q is reducible, then L^{•2} is the complete intersection of n − 4 hyperplanes and 2 quadrics.
(iii) If Z is contained in more than one conic, we distinguish three cases:
(a) If Z consists of three points, then L^{•2} is a plane in P^n.
(b) If Z consists of four points in general position, then L^{•2} is a linear re-embedding of the quartic surface from Example 3.5.
(c) If all points of Z but one lie on a common line, then L^{•2} is a linear re-embedding of a quadric surface.

Proof Notice that k = 2, so dim W = 3.
In the first case, we checked computationally with Macaulay2 (Grayson and Stillman 2018) that the ideal is minimally generated by seven cubics. A structural description of these quadrics and cubics will be given in the proof of Theorem 4.11. The image of the second morphism is a complete intersection of two binomial quadrics. By Lemma 4.1, the coordinate-wise square L^{•2} arises from the image of ψ via a linear re-embedding P^4 → P^n, producing an additional n − 4 linear forms in I. (iii) In case (a), the set Z consists of three points spanning the projective plane PW*, so dim Sym^2 W/I(Z)_2 = 3. Then by Lemma 4.1, the coordinate-wise square L^{•2} is contained in a plane P^2 ≅ ϑ(P(Sym^2 W/I(Z)_2)) ⊂ P^n. On the other hand, dim L^{•2} = dim L = 2, so L^{•2} must be a plane in P^n.
In case (b), the set Z can also be viewed as the finite set of points associated to a plane L′ ⊂ P^3. Applying Lemma 4.1 to L′ ⊂ P^3 shows that the image of ψ : PW → P(Sym^2 W/I(Z)_2) is the coordinate-wise square (L′)^{•2} ⊂ P^3. Hence, L^{•2} ⊂ P^n is a linear re-embedding of the quartic surface from Example 3.5 into higher dimension. Finally, we consider case (c). Consider three points p_1, p_2, p_3 ∈ Z lying on a line T ⊂ PW*. Then T must be an irreducible component of each conic through Z. Since Z spans the projective plane PW*, there must also be a point p_0 ∈ Z outside of T. All points in Z\{p_0} must lie on the line T, as otherwise there would be at most one conic passing through Z. If Z′ := {p_0, p_1, p_2, p_3} ⊂ Z, then each conic passing through Z′ also passes through Z, i.e. I(Z)_2 = I(Z′)_2. We may choose a basis z_0, z_1, z_2 of W such that Z′ ⊂ PW* with respect to these coordinates is given by the linear forms z_0, z_1, z_2 and z_1 + z_2. The plane L′ := V(x_1 + x_2 − x_3) ⊂ P^3 is the image of the map P^2 → P^3, [z_0 : z_1 : z_2] ↦ [z_0 : z_1 : z_2 : z_1 + z_2], so Z′ can be viewed as the finite set of points associated to L′. Lemma 4.1 shows that (L′)^{•2} ⊂ P^3 coincides with the image of the morphism ψ : PW → P(Sym^2 W/I(Z′)_2). On the other hand, Lemma 4.1 shows that L^{•2} ⊂ P^n is a linear re-embedding of the image of PW → P(Sym^2 W/I(Z)_2). From I(Z)_2 = I(Z′)_2, we deduce that L^{•2} ⊂ P^n is a linear re-embedding of a quadric surface, as we compute from Proposition 3.4.

Remark 4.10
In contrast to Remark 4.7, the structure of the coordinate-wise square of a plane L ⊂ P^n does not depend only on the linear matroid of L: For n = 5, it can happen both in case (i) and in case (ii)(a) of Theorem 4.9 that M_L = {I ⊂ {0, 1, . . . , 5} | |I| ≤ 3}.

Squaring in high ambient dimensions
Consider the case of k-dimensional linear spaces in P^n for n ≫ k. For a general linear space L ∈ Gr(k, P^n), the finite set of points Z does not lie on a quadric. We know from Proposition 4.2 that in this case the coordinate-wise square L^{•2} is a linear re-embedding of the k-dimensional second Veronese variety. In this subsection, we investigate the first degenerate case, in which the point configuration Z lies on a unique quadric.
The following theorem describes the structure of coordinate-wise squares such as the one appearing in Proposition 4.3. We will also prove Corollary 4.5 by deriving the polynomials vanishing on the set of symmetric matrices with a comultiplicity-1 eigenvalue. Proposition 4.3 shows that Corollary 4.5 is a special case of the theorem stated below.
In fact, for s ≥ 3, we show that the claim even holds scheme-theoretically, see Remark 4.17. We believe that for arbitrary s the claim is true ideal-theoretically as well.
The remainder of this subsection is dedicated to the proof of Theorem 4.11. It reduces to the following elimination problem. Let k ≥ 1 and s ≥ 2. Consider a symmetric (k+1) × (k+1) matrix of variables Y := (y_{ij})_{1≤i,j≤k+1} and the corresponding polynomial ring C[y] := C[y_{ij}]/(y_{ij} − y_{ji}). Over the polynomial ring C[y, t], we consider the matrix M := Y + t I_s, where I_s denotes the (k+1) × (k+1) diagonal matrix whose first s diagonal entries are 1 and whose remaining entries are 0. Henceforth, we denote the 2 × 2 minors of Y with rows i ≠ j and columns ℓ ≠ m by Y_{ij|ℓm} := y_{iℓ} y_{jm} − y_{im} y_{jℓ} ∈ C[y], and correspondingly M_{ij|ℓm} ∈ C[y, t] for the 2 × 2 minors of M. Let J_0 ⊂ C[y, t] denote the ideal generated by the 2 × 2 minors of M. By J := J_0 ∩ C[y] we denote the ideal in C[y] obtained by eliminating t from J_0. We explicitly describe the elimination ideal J for all values of k and s.
First, we observe that Theorem 4.11 follows directly from Proposition 4.12.

Proof of Theorem 4.11
Analogous to the proof of Proposition 4.3, we identify P Sym^2 W with P Sym^2 C^{k+1} in such a way that q = I_s. By Lemma 4.1, the coordinate-wise square L^{•2} is a linear re-embedding of the variety obtained by projecting ν_2(PW) from the point q = I_s ∈ P Sym^2 W. Note that V(J) describes the set of points Y ∈ P Sym^2 W lying on the line joining q with some point of ν_2(PW). Hence, the projection from q is given by intersecting V(J) with a hyperplane H ⊂ P Sym^2 W not containing q = I_s. From Proposition 4.12, we know that V(J) ∩ H has a set-theoretic description inside H ≅ P^{(k+2 choose 2)−2} as the zero set of the indicated number of quadratic and cubic forms. The coordinate-wise square L^{•2} is by Lemma 4.1 the image of V(J) ∩ H under a linear embedding ϑ : H → P^n, leading to an additional n − (k+2 choose 2) + 2 linear forms vanishing on L^{•2}.
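The variety V(J) appearing in this proof can be probed numerically: any Y of the form v vᵀ − t I_s makes Y + t I_s a rank-1 matrix, so all 2 × 2 minors of Y + t I_s vanish. A sketch, assuming I_s is the diagonal matrix with s ones followed by zeros (helper names are ours, not the paper's):

```python
import numpy as np
from itertools import combinations

def minors2(M):
    """All 2 x 2 minors of a square matrix M."""
    n = M.shape[0]
    return [np.linalg.det(M[np.ix_([i, j], [l, m])])
            for i, j in combinations(range(n), 2)
            for l, m in combinations(range(n), 2)]

rng = np.random.default_rng(2)
k, s = 3, 4
I_s = np.diag([1.0] * s + [0.0] * (k + 1 - s))  # assumed form of I_s

# A point of V(J_0): Y := v v^T - t * I_s, so that M = Y + t * I_s
# equals the rank-1 matrix v v^T and all its 2 x 2 minors vanish.
v = rng.standard_normal(k + 1)
t = 1.5
Y = np.outer(v, v) - t * I_s
assert max(abs(m) for m in minors2(Y + t * I_s)) < 1e-9
```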
We prove Proposition 4.12 in several steps. First, we describe a set X of certain low-degree polynomials in the ideal J. Secondly, we show that V(X) = V(J). Finally, we identify a subset of X providing minimal generators of the ideal (X) ⊂ C[y], consisting of the claimed number of quadratic and cubic forms.

Lemma 4.13
The following sets of polynomials in C[y] are contained in the ideal J, for all i, j ≤ s distinct and for the respective indices i, j, ℓ, m. From this, we conclude that these polynomials are contained in J_0 ∩ C[y] = J.
(iii) For s arbitrary, V (X) and V (J ) agree set-theoretically.
Proof For k ≤ 5, we have checked computationally with a straightforward implementation in Macaulay2 (Grayson and Stillman 2018) that even the ideal-theoretic equality (X) = J holds. We now argue that from this we can conclude the claim for arbitrary k.
(i) Let s ≥ 3. We need to show that the ideal generated by X ⊂ C[y] coincides with J ⊂ C[y] after localisation at any element of the form y_{ij} or y_{ii} − y_{jj}, since the union of the corresponding non-vanishing sets D(y_{ij}), D(y_{ii} − y_{jj}) is U_1.
In order to show that (X) and J agree after localisation at y_{i_0 j_0} with {i_0, j_0} ⊄ {s + 1, . . . , k + 1}, we may substitute y_{i_0 j_0} = 1 in both ideals. For a fixed ℓ_0 ≤ s distinct from i_0 and j_0, eliminating t from J_0|_{y_{i_0 j_0} = 1} just amounts to replacing t by −Y_{i_0 ℓ_0 | j_0 ℓ_0} in each occurrence of t in the minors M_{ij|ℓm} (for i ≠ j, ℓ ≠ m) generating the ideal J_0. According to (4.1), this leads to explicit generators of J|_{y_{i_0 j_0} = 1}. To check that J|_{y_{i_0 j_0} = 1} = (X)|_{y_{i_0 j_0} = 1}, we need to check that each of these polynomials belongs to (X)|_{y_{i_0 j_0} = 1}. For this, it is enough to see that they can be expressed in terms of those polynomials in X that only involve variables with indices among {i_0, j_0, ℓ_0, i, j, ℓ, m}. This corresponds to showing the claim for a corresponding symmetric submatrix of M of size at most 6 × 6, so it is enough to check the claim for k ≤ 5. Analogously, after localisation at y_{i_0 i_0} − y_{j_0 j_0}, replacing t by Y_{j_0 ℓ_0 | j_0 ℓ_0} − Y_{i_0 ℓ_0 | i_0 ℓ_0} for a fixed ℓ_0 ≤ s distinct from i_0 and j_0 describes generators of J|_{y_{i_0 i_0} − y_{j_0 j_0} = 1}. As before, these polynomials involve variables with at most six distinct indices, so it is enough to verify the claim for k ≤ 5 by the same argument as above.
Next, we identify maximal linearly independent subsets of E, F, G.

Lemma 4.16
The following sets B_E, B_F and B_G are bases of the vector spaces ⟨E⟩, ⟨F⟩ and ⟨G⟩, respectively.

Proof For the respective indices, each listed element of B_E is the unique polynomial in B_E containing the monomial y_{im} y_{jℓ}. In particular, the polynomials in B_E are linearly independent, so B_E forms a basis of ⟨E⟩.
If i, j, ℓ, m ∈ {1, . . . , k + 1} with ℓ < m are such that Y_{iℓ|im} − Y_{jℓ|jm} ∈ F \ (B_F ∪ −B_F), then it can be written as a linear combination of elements of B_F, so B_F spans ⟨F⟩. Each of the polynomials Y_{iℓ|im} − Y_{jℓ|jm} in B_F contains a monomial not occurring in any of the other polynomials of B_F, namely y_{ii} y_{ℓm}. Therefore, the polynomials in B_F are linearly independent. For 3 ≤ m ≤ s − 1 and ℓ ∈ {3, . . . , m − 1} ∪ {s}, the polynomial Y_{12|12} − Y_{2ℓ|2ℓ} + Y_{ℓm|ℓm} − Y_{m1|m1} is the unique polynomial in B_G containing the monomial y_{ℓℓ} y_{mm}. In particular, if a linear combination of polynomials in B_G is zero, none of the above polynomials can occur in this linear combination. The remaining polynomials in B_G are of the form Y_{1s|1s} − Y_{s2|s2} + Y_{2m|2m} − Y_{m1|m1} for 3 ≤ m ≤ s − 1. Among these, for each m the polynomial above is the unique one containing the monomial y_{22} y_{mm}. We conclude that the polynomials in B_G are linearly independent.
We observe that the vector space of symmetric s × s matrices with zero diagonal whose columns all sum to zero has dimension (s choose 2) − s, so dim_C ⟨G⟩ ≤ (s choose 2) − s. On the other hand, we count that |B_G| = (s−3 choose 2) + 2(s − 3) = (s choose 2) − s, so B_G is a basis of ⟨G⟩.

Proof of Proposition 4.12 By Lemma 4.14, V(J) = V(X) holds set-theoretically. For s = 3, we observe that H_1 ∪ H_2 consists up to sign of seven linearly independent cubics, so by Lemma 4.15, the ideal (X) is in this case minimally generated by those seven cubics and the polynomials in B_E, B_F and B_G from Lemma 4.16.

Remark 4.17
In fact, for s ≥ 3, our proof shows that V(X) agrees with V(J) as a scheme away from the point I_s ∈ P Sym^2 C^{k+1}. In the proof of Theorem 4.11, we considered V(J) ∩ H, where H is a hyperplane not containing I_s. Since V(J) ∩ H = V(X) ∩ H scheme-theoretically, we conclude that our equations for L^{•2} in Theorem 4.11 describe not only the correct set, but even the correct scheme. We believe that ideal-theoretic equality holds for the specified set of polynomials, but our proof stops short of verifying this.
We now prove the result about eigenspaces of symmetric matrices stated as Corollary 4.5. It follows directly from the proof of Proposition 4.12.
Proof of Corollary 4.5 A complex symmetric matrix A ∈ C^{s×s} has an eigenspace of codimension ≤ 1 with respect to an eigenvalue λ ∈ C if and only if the matrix A − λ id is of rank at most 1, which means that A ∈ V(J) for the case s = k + 1. By Lemmas 4.14 and 4.15, this is equivalent to the vanishing of the equations E ∪ F ∪ G, which are the above relations among 2 × 2 minors for s = k + 1 ≥ 4. The second claim was proved in Proposition 4.3.
The proof of Theorem 4.11 was based on relating the coordinate-wise square L^{•2} in the case dim_C I(Z)_2 = 1 to the question of when a symmetric matrix can be completed to a rank 1 matrix by adding a multiple of I_s. In the same spirit, for arbitrary linear spaces L (with no restrictions on the set of quadrics containing Z), determining the ideal of the coordinate-wise square L^{•2} boils down to the following problem in symmetric rank 1 matrix completion:

Problem 4.18 For a fixed matrix B ∈ C^{(n+1)×(k+1)} of rank k + 1, find the defining equations of the set

{M ∈ C^{(k+1)×(k+1)} symmetric | there exists a symmetric P ∈ C^{(k+1)×(k+1)} such that B P B^T has a zero diagonal and rk(M + P) = 1}.

Indeed, let L be an arbitrary linear space of dimension k and let B ∈ C^{(n+1)×(k+1)} be a chosen matrix of full rank describing L as the image of the linear embedding P^k → P^n given by B. Then the rows of B form the finite set of points Z ⊂ (P^k)*. Identifying quadratic forms on P^k with symmetric (k+1) × (k+1) matrices, the subspace I(Z)_2 ⊂ Sym^2 (C^{k+1})* corresponds to

I(Z)_2 = {P ∈ C^{(k+1)×(k+1)} symmetric | B P B^T has a zero diagonal}.
By Lemma 4.1, the coordinate-wise square L^{•2} is a linear re-embedding of the projection of the second Veronese variety ν_2(P^k) = {rank 1 symmetric (k+1) × (k+1) matrices up to scaling} from P(I(Z)_2), so describing the ideal of L^{•2} corresponds to solving Problem 4.18 for the given matrix B. Similarly, describing the coordinate-wise r-th power of a linear space corresponds to the analogous problem in symmetric rank 1 tensor completion.
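The identification of I(Z)_2 with the symmetric matrices P for which B P Bᵀ has zero diagonal is easy to compute with, since the diagonal conditions are linear in the entries of P; dim I(Z)_2 is then a nullspace dimension. A sketch (the helper `dim_IZ2` is ours, not from the paper):

```python
import numpy as np
from itertools import combinations_with_replacement

def dim_IZ2(B):
    """dim I(Z)_2 = dim {symmetric P : B P B^T has zero diagonal}."""
    n1, k1 = B.shape
    pairs = list(combinations_with_replacement(range(k1), 2))
    # Row r encodes the linear condition (B P B^T)_{rr} = 0 on the
    # upper-triangular entries of P.
    A = np.zeros((n1, len(pairs)))
    for r in range(n1):
        for c, (i, j) in enumerate(pairs):
            A[r, c] = B[r, i] * B[r, j] * (1.0 if i == j else 2.0)
    return len(pairs) - np.linalg.matrix_rank(A, tol=1e-8)

# A generic plane in P^5: its six points Z lie on no conic.
rng = np.random.default_rng(3)
assert dim_IZ2(rng.standard_normal((6, 3))) == 0

# Rows on the conic x*z = y^2 (points (1, t, t^2)): a unique conic through Z.
ts = np.arange(6.0)
B_conic = np.stack([np.ones(6), ts, ts**2], axis=1)
assert dim_IZ2(B_conic) == 1
```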
By Lemma 4.1, determining the coordinate-wise r-th power of a linear space corresponds to describing the projection of the r-th Veronese variety from a linear space of the form P(I(Z)_r) for a non-degenerate finite set of points Z. We may ask how general this problem is, and pose the question of which linear subspaces of P Sym^r W are of the form P(I(Z)_r):

Question 4.19 Which linear subspaces of C[z_0, . . . , z_k]_r can be realised as the set of degree r polynomials vanishing on some non-degenerate finite set of points in P^k of cardinality ≤ n + 1?
We envision that an answer to this question may lead to insights into describing which varieties can occur as the coordinate-wise r-th power of some linear space in P^n.