Terracini convexity

We present a generalization of the notion of neighborliness to non-polyhedral convex cones. Although a definition of neighborliness is available in the non-polyhedral case in the literature, it is fairly restrictive as it requires all the low-dimensional faces to be polyhedral. Our approach is more flexible and includes, for example, the cone of positive-semidefinite matrices as a special case (this cone is not neighborly in general). We term our generalization Terracini convexity due to its conceptual similarity with the conclusion of Terracini’s lemma from algebraic geometry. Polyhedral cones are Terracini convex if and only if they are neighborly. More broadly, we derive many families of non-polyhedral Terracini convex cones based on neighborly cones, linear images of cones of positive-semidefinite matrices, and derivative relaxations of Terracini convex hyperbolicity cones. As a demonstration of the utility of our framework in the non-polyhedral case, we give a characterization based on Terracini convexity of the tightness of semidefinite relaxations for certain inverse problems.


Introduction
The combinatorial view of polytopes is a pillar of polyhedral theory which has played a prominent role both in deepening our understanding of the structure of polytopes and in illuminating those attributes of polytopes that are significant in the context of particular applications such as linear programming. A parallel perspective for non-polyhedral convex sets, even in the presence of additional structure, has generally been lacking. This limitation may be attributed to the fact that the central object of study in polyhedral combinatorics is the face lattice, and consequently, many of the key ideas and definitions in the field are face-centric. However, face-centric notions do not always carry over naturally to the non-polyhedral setting for a number of reasons; in particular, non-polyhedral closed convex sets may have infinitely many faces, may contain non-exposed faces, may lack faces of all dimensions, may not be closed under linear images, and so forth. Motivated by this broad challenge of bridging the gap in our understanding between the polyhedral and non-polyhedral cases, we focus in this article on the question of obtaining a suitable generalization of neighborliness for non-polyhedral convex sets, with a less face-centric reformulation of neighborliness of polytopes playing a central role in our development.
A polyhedral cone that is pointed is called $k$-neighborly if the cone over any subset of up to $k$ of its extreme rays forms a face [Gru03]. Neighborliness arises in many contexts in geometry and polyhedral combinatorics, most notably in the characterization of various extremal classes of polytopes [Gru03] and in conditions under which linear programming relaxations are tight for certain nonconvex inverse problems [DT05].

Motivation
We are aware that there is a definition available in the literature for non-polyhedral $k$-neighborly convex cones that are closed and pointed which parallels the polyhedral setting [KW08]; that is, the cone over any subset of up to $k$ extreme rays forms an exposed face. However, this notion is too restrictive in the non-polyhedral case as it essentially requires that all the low-dimensional faces are polyhedral, and, in particular, are linearly isomorphic to orthants. This limitation restricts the utility of neighborliness in the non-polyhedral context in a number of ways.
As one example, the cone of positive semidefinite matrices is not $k$-neighborly for any $k > 1$ as all the faces other than the extreme rays are non-polyhedral, and as a consequence, neighborliness is not useful for characterizing tightness of semidefinite relaxations for nonconvex problems that are ubiquitous in many applications [RFP10, CRPW12], in contrast to the situation with linear programming. Concretely, Donoho and Tanner [DT05] used neighborliness of polytopes to characterize the exactness of linear programming relaxations for identifying nonnegative vectors with the smallest number of nonzeros in affine spaces. A similar characterization of the success of semidefinite relaxations for identifying low-rank positive semidefinite matrices in affine spaces, a problem that arises in a range of applications such as factor analysis, collaborative filtering, and phase retrieval, and that contains NP-hard problems as special cases, has been lacking. Thus, we seek a more flexible notion for non-polyhedral cones that specializes to the usual definition of neighborliness for polyhedral cones.
In a different vein, the utility of neighborliness lies in the fact that it provides a succinct characterization of the geometry of the 'most singular' pieces of the boundary of a polyhedral cone. It is of intrinsic interest to understand such geometry more generally for other families of structured cones. Hyperbolicity cones serve as an instructive case study in this regard. These are convex cones derived from hyperbolic polynomials, with the nonnegative orthant and the positive semidefinite matrices being prominent examples. Relaxations based on derivatives of hyperbolicity cones offer the prospect of computationally less expensive approaches for obtaining bounds on conic optimization problems with respect to hyperbolicity cones, and an intriguing feature of these relaxations is that they tend to preserve the low-dimensional faces of the original hyperbolicity cone. Formalizing and quantifying this assertion by leveraging the perspective of neighborliness would provide new insights into the facial geometry of a large class of structured convex cones.
In this paper, we describe a generalization of neighborliness for non-polyhedral cones that addresses the preceding objectives.

Towards a Definition for Non-Polyhedral Cones
In aiming at an appropriate generalization of neighborliness for non-polyhedral cones that overcomes the limitation of polyhedrality of the low-dimensional faces, a natural approach is to reformulate neighborliness via other geometric attributes that are less face-centric.

Figure 1: Illustration of the neighborliness properties of three cones. $C_1$ is neighborly while $C_2$ is not. $C_3$ is not neighborly but it serves as an instructive example for the definition of Terracini convexity.

As a first attempt, for a
convex cone $C$ that is closed and pointed but not necessarily polyhedral, let $S_C(x)$ denote the linear span of the smallest exposed face of $C$ that contains $x$. Then one can check that, if the extreme rays of $C$ are exposed, $k$-neighborliness of $C$ is equivalent to the following condition for any collection $x^{(1)}, \ldots, x^{(k)}$ of generators of the extreme rays of $C$:
\[ S_C\Big(\sum_{i=1}^k x^{(i)}\Big) = \sum_{i=1}^k S_C\big(x^{(i)}\big). \tag{1} \]
One can check that the left-hand side of this equation always contains the right-hand side, with the containment being strict in general and equality holding only for $k$-neighborly cones. It is instructive to consider the three cones in $\mathbb{R}^3$ that are shown in Figure 1 from the perspective of the relation (1). The cone $C_1$ is isomorphic to the orthant in $\mathbb{R}^3$, which is 3-neighborly, and therefore the relation (1) holds for any subset of the generators of the three extreme rays. The cone $C_2$ is not 2-neighborly as the cone over the generators $x^{(1)}, x^{(2)}$ is not a face of $C_2$; accordingly, we note that $S_{C_2}(x^{(1)} + x^{(2)}) \supsetneq S_{C_2}(x^{(1)}) + S_{C_2}(x^{(2)})$. Finally, the ice-cream cone $C_3$ is evidently not 2-neighborly by considering the cone over the generators $x^{(1)}, x^{(2)}$; as expected, we again have the strict containment $S_{C_3}(x^{(1)} + x^{(2)}) \supsetneq S_{C_3}(x^{(1)}) + S_{C_3}(x^{(2)})$. The cone $C_3$ presents an interesting case study as it is also linearly isomorphic to the cone of $2 \times 2$ symmetric positive semidefinite matrices. As mentioned previously, developing a suitable generalization of neighborliness that encompasses the cone of positive semidefinite matrices is one of the motivations for this article, and we investigate next what precisely fails with the relation (1) for $C_3$. For a polyhedral cone $C$ that is pointed, the map $S_C(x)$ represents a kind of "local linearization" of $C$ around the point $x$; concretely, the set $S_C(x)$ is the largest subspace (also called the lineality space) contained in the cone of feasible directions from $x$ into $C$. However, the interpretation of $S_C(x)$ as a local linearization of $C$ at $x$ no longer holds in general if $C$ is not polyhedral. For the cone $C_3$ in Figure 1, the set $S_{C_3}(x^{(1)})$ does not fully represent a local linearization of $C_3$ around $x^{(1)}$ as it fails to account for the curvature of the boundary of $C_3$ at $x^{(1)}$. Rather, the subspace $L_{C_3}(x^{(1)})$ in Figure 1, akin to a tangent space at $x^{(1)}$ with respect to the boundary of $C_3$, provides a more accurate local linearization of $C_3$ at $x^{(1)}$. Letting $L_{C_3}(x^{(2)})$ similarly denote an accurate local linearization of $C_3$ at $x^{(2)}$, we observe that $L_{C_3}(x^{(1)}) + L_{C_3}(x^{(2)}) = \mathbb{R}^3$. As $x^{(1)} + x^{(2)}$ lies in the interior of $C_3$, a natural local linearization of $C_3$ at $x^{(1)} + x^{(2)}$ is the full space $\mathbb{R}^3$, i.e., $L_{C_3}(x^{(1)} + x^{(2)}) = \mathbb{R}^3$. Consequently, the relation (1) holds for $C_3$ with $k = 2$ if we substitute $S_{C_3}$ with $L_{C_3}$. Motivated by this discussion, our generalization of neighborliness to closed, convex, pointed cones is based on a criterion analogous to (1) with a more accurate notion of local linearization; as we discuss in the sequel, this criterion is satisfied by neighborly polyhedral cones, cones of positive semidefinite matrices, as well as many other families.
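The computation for the ice-cream cone can be verified numerically. The following sketch is our own illustration (not part of the original development); it assumes the standard representation of $C_3$ as the second-order cone $\{x \in \mathbb{R}^3 : x_3 \geq \sqrt{x_1^2 + x_2^2}\}$, computes a basis of the tangent plane at two boundary extreme rays, and checks that the two planes together span $\mathbb{R}^3$:

```python
import numpy as np

def tangent_space_basis(x):
    """Basis for the tangent space L_{C_3}(x) at a nonzero boundary point
    x of the ice-cream cone {x : x_3 >= sqrt(x_1^2 + x_2^2)}.  The boundary
    is the zero set of g(x) = sqrt(x_1^2 + x_2^2) - x_3, so the tangent
    space is the orthogonal complement of grad g(x)."""
    r = np.hypot(x[0], x[1])
    normal = np.array([x[0] / r, x[1] / r, -1.0])
    # Orthogonal complement of `normal`: the last two right-singular vectors.
    _, _, vt = np.linalg.svd(normal.reshape(1, 3))
    return vt[1:]  # 2 x 3 array whose rows span L_{C_3}(x)

x1 = np.array([1.0, 0.0, 1.0])   # generator of one extreme ray
x2 = np.array([0.0, 1.0, 1.0])   # generator of a different extreme ray

# Stack the two tangent-space bases; their sum should be all of R^3.
combined = np.vstack([tangent_space_basis(x1), tangent_space_basis(x2)])
print(np.linalg.matrix_rank(combined))  # 3, i.e. L(x1) + L(x2) = R^3
```

Any two distinct extreme rays would do here: two distinct tangent planes of the ice-cream cone always sum to $\mathbb{R}^3$.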

Terracini Convex Cones
We begin by giving a formal definition of the map $L_C(x)$. In the example with the cone $C_3$ from Figure 1, the set $L_{C_3}(x)$ corresponds to a tangent space. However, convex cones in general have both smooth and singular features in their boundary, and therefore we do not explicitly appeal to any differential notions. Our definition is stated in terms of the cone of feasible directions $K_C(x)$ into a convex cone $C \subset \mathbb{R}^d$ that is closed and pointed from any $x \in C$:
\[ K_C(x) = \big\{ z \in \mathbb{R}^d : x + t z \in C \text{ for some } t > 0 \big\}. \]
The closure of the cone of feasible directions $K_C(x)$ is called the tangent cone of $C$ at $x$.

Definition 1.1. Let $C \subset \mathbb{R}^d$ be a convex cone that is closed and pointed. For any $x \in C$, the convex tangent space of $C$ at $x$ is denoted by $L_C(x)$ and is defined as the lineality space of the tangent cone of $C$ at $x$:
\[ L_C(x) = \overline{K_C(x)} \cap \big( -\overline{K_C(x)} \big). \]

In some sense, the subspace $L_C(x)$ represents all those directions from $x$ in which the cone $C$ is locally "flat". For smooth convex cones $C$ that are closed and pointed, the convex tangent space $L_C(x)$ at a point $x \neq 0$ on the boundary is indeed the tangent space with respect to the boundary of $C$ at $x$. For polyhedral cones $C$ that are pointed, one can check that $L_C(x) = S_C(x)$. With this definition, we are in a position to present the main object of investigation of this article.

Definition 1.2. A convex cone $C \subset \mathbb{R}^d$ that is closed and pointed is $k$-Terracini convex if the following condition holds for any collection $x^{(1)}, \ldots, x^{(k)}$ of generators of extreme rays of $C$:
\[ L_C\Big(\sum_{i=1}^k x^{(i)}\Big) = \sum_{i=1}^k L_C\big(x^{(i)}\big). \tag{2} \]
If $C$ is $k$-Terracini convex for all $k$, then we say that $C$ is Terracini convex.
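To make the Terracini condition concrete in the simplest case, here is a small numerical illustration of our own for the nonnegative orthant, where (as an assumption of this sketch, matching the discussion of polyhedral cones above) every convex tangent space is the coordinate subspace determined by a support pattern:

```python
import numpy as np

# For the orthant R^d_+, the convex tangent space at x is the coordinate
# subspace {v : v_i = 0 whenever x_i = 0}, so it is determined by supp(x).
# The Terracini condition L_C(sum_i x^(i)) = sum_i L_C(x^(i)) then reduces
# to supp(sum_i x^(i)) = union_i supp(x^(i)), which holds for nonnegative vectors.

def support(x, tol=1e-12):
    """Boolean mask encoding the coordinate subspace L_C(x) for x >= 0."""
    return x > tol

rng = np.random.default_rng(0)
d, k = 6, 3
# k generators of extreme rays of R^d_+: positive multiples of basis vectors
rays = [rng.uniform(1, 2) * np.eye(d)[rng.integers(d)] for _ in range(k)]

lhs = support(sum(rays))                                # L_C(x^(1) + ... + x^(k))
rhs = np.logical_or.reduce([support(r) for r in rays])  # sum of the L_C(x^(i))
print(np.array_equal(lhs, rhs))  # True: the orthant is Terracini convex
```

The same support-pattern encoding also verifies $L_C(x) = S_C(x)$ for the orthant, since the smallest exposed face containing $x$ is cut out by the same zero coordinates.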

One inclusion always holds, as $\sum_{i=1}^k L_C\big(x^{(i)}\big) \subseteq L_C\big(\sum_{i=1}^k x^{(i)}\big)$, and the relevant portion of this definition is the other inclusion. The reason for the terminology 'Terracini convexity' is that the stipulation in this definition mirrors the consequence of Terracini's lemma in algebraic geometry [Ter11], with the convex tangent space playing the role in our context that a tangent space does in Terracini's lemma. We give next some preliminary examples of $k$-Terracini convex cones:

Example 1.3. To begin with, it is instructive to compare $k$-Terracini convexity to $k$-neighborliness for polyhedral cones. For a polyhedral cone $C$ that is pointed, we observed previously that $L_C(x) = S_C(x)$ for $x \in C$. As $C$ has exposed extreme rays and as the relation (1) is equivalent to $k$-neighborliness, we have that $k$-Terracini convexity and $k$-neighborliness are equivalent for pointed polyhedral cones. We also prove this fact as a special case of a more general result (see Theorem 3.13 and Corollary 3.14).
Example 1.4. All convex cones that are closed and pointed are trivially 1-Terracini convex. As a contrast, based on the generalization of [KW08] of neighborliness to non-polyhedral cones, a convex cone that is closed and pointed is 1-neighborly if and only if all its extreme rays are exposed.
Example 1.5. Let $C \subset \mathbb{R}^d$ be a smooth convex cone that is closed and pointed. Then $C$ is Terracini convex. To see this, consider any collection $x^{(1)}, \ldots, x^{(k)}$ of generators of extreme rays of $C$. Due to the smoothness of $C$, we have that $\sum_{i=1}^k L_C\big(x^{(i)}\big) = \mathrm{span}(C)$ for $k \geq 2$, unless all the $x^{(i)}$'s generate the same extreme ray (in which case the Terracini convexity condition is trivially satisfied).
Example 1.6. As our next example, we consider the cone of positive semidefinite matrices $S^d_+$ in the space of $d \times d$ real symmetric matrices $S^d$. This cone has both smooth and singular features in its boundary. For a nonzero $X \in S^d_+$ with column space $U$, the convex tangent space is
\[ L_{S^d_+}(X) = \big\{ M \in S^d : P_{U^\perp} M P_{U^\perp} = 0 \big\}, \]
where $P_{U^\perp}$ denotes the orthogonal projector onto the orthogonal complement of $U$, from which it follows that $S^d_+$ is Terracini convex. We give an alternative proof of this fact via a dual perspective on Terracini convexity; see Example 1.9 after Proposition 1.7.
It is instructive to consider the definition of Terracini convexity from a dual perspective, as this leads to a characterization that is more easily verified in some cases. In preparation to state this dual criterion, we recall that the polar of a cone $S \subset \mathbb{R}^d$ is the collection of linear functionals that are nonpositive on $S$ and is denoted $S^\circ$. With this notation, the normal cone to a convex cone $C$ at a point $x \in C$ consists of the linear functionals $\ell$ satisfying $\ell(z - x) \leq 0$ for all $z \in C$. As $C$ is a cone, one can check that the normal cone to $C$ at $x \in C$ is given by:
\[ N_C(x) = C^\circ \cap \{ \ell : \ell(x) = 0 \}, \]
which is the set of linear functionals that are nonpositive on $C$ and vanish at $x$. We now establish an equivalent dual formulation of Terracini convexity.
Proposition 1.7. A closed, pointed, convex cone $C \subset \mathbb{R}^d$ is $k$-Terracini convex if and only if for any collection $x^{(1)}, \ldots, x^{(k)}$ of generators of extreme rays of $C$,
\[ \mathrm{span}\Big( \bigcap_{i=1}^k N_C\big(x^{(i)}\big) \Big) = \bigcap_{i=1}^k \mathrm{span}\, N_C\big(x^{(i)}\big). \]

Remark 1.8. In the result above, one inclusion is trivial: we always have that the span of the intersection of the normal cones is contained inside the intersection of the spans of the normal cones. Terracini convexity corresponds to the reverse inclusion being true, and this is all we need to verify. This remark is dual to the assertion after Definition 1.2 about one inclusion always being true.
Proof. The normal cone and the closure of the cone of feasible directions at a point $x \in C$ are related via $\overline{K_C(x)}^{\,\circ} = N_C(x)$, so that $L_C(x)^\perp = \mathrm{span}\, N_C(x)$. Taking orthogonal complements in the definition of $k$-Terracini convexity, we see that $C$ is $k$-Terracini convex if and only if for any collection $x^{(1)}, \ldots, x^{(k)}$ of generators of extreme rays of $C$,
\[ \mathrm{span}\, N_C\Big( \sum_{i=1}^k x^{(i)} \Big) = \bigcap_{i=1}^k \mathrm{span}\, N_C\big(x^{(i)}\big). \]
Here we have used that the orthogonal complement of a sum of subspaces is the intersection of the orthogonal complements. To complete the proof, we note that $N_C\big(\sum_{i=1}^k x^{(i)}\big) = \bigcap_{i=1}^k N_C\big(x^{(i)}\big)$: if $\ell \in C^\circ$ and $\ell\big(\sum_{i=1}^k x^{(i)}\big) = 0$, then we have that $\ell(x^{(i)}) \leq 0$ for each $i$ (as $\ell \in C^\circ$) and therefore $\ell(x^{(i)}) = 0$ for each $i$ (as $\sum_{i=1}^k \ell(x^{(i)}) = 0$).
To illustrate the utility of this dual formulation, we show that the positive semidefinite cone is Terracini convex.
Example 1.9 (Positive semidefinite cone). Let $C = S^d_+$ be the cone of $d \times d$ positive semidefinite matrices. Given an extreme ray generated by $vv'$ for $v \in \mathbb{R}^d$, the corresponding normal cone is $N_C(vv') = \{ M \in S^d : M \preceq 0,\ Mv = 0 \}$, whose span is $\{ M \in S^d : Mv = 0 \}$. For any collection of generators of extreme rays $v^{(1)}v^{(1)'}, \ldots, v^{(k)}v^{(k)'}$ of $C$ for $v^{(1)}, \ldots, v^{(k)} \in \mathbb{R}^d$, we have that:
\[ \bigcap_{i=1}^k \mathrm{span}\, N_C\big(v^{(i)}v^{(i)'}\big) = \big\{ M \in S^d : Mv^{(i)} = 0,\ i = 1, \ldots, k \big\} = \mathrm{span}\Big( \bigcap_{i=1}^k N_C\big(v^{(i)}v^{(i)'}\big) \Big), \]
and as $k$ was arbitrary, it follows that $S^d_+$ is Terracini convex.
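The dimension count underlying Example 1.9 can be probed numerically. The sketch below is our own illustration: it computes the dimension of $\{M \in S^d : Mv^{(i)} = 0 \text{ for all } i\}$, which per Example 1.9 is both the intersection of the spans of the normal cones and the span of their intersection, and compares it against the closed form $\binom{d-k+1}{2}$ for linearly independent $v^{(i)}$:

```python
import numpy as np

def sym_basis(d):
    """Basis of the space of d x d symmetric matrices."""
    basis = []
    for i in range(d):
        for j in range(i, d):
            E = np.zeros((d, d))
            E[i, j] = E[j, i] = 1.0
            basis.append(E)
    return basis

def dim_annihilator(V):
    """Dimension of {M symmetric : M v = 0 for every column v of V}."""
    d = V.shape[0]
    # One linear constraint block per symmetric basis element.
    A = np.array([(B @ V).ravel() for B in sym_basis(d)])
    return A.shape[0] - np.linalg.matrix_rank(A)

d, k = 5, 2
rng = np.random.default_rng(1)
V = rng.standard_normal((d, k))  # k generic (linearly independent) vectors
n = d - k
print(dim_annihilator(V), n * (n + 1) // 2)  # both equal binom(d-k+1, 2) = 6
```

Both sides of the dual Terracini criterion thus have the same dimension here, consistent with the claimed equality of subspaces.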

Outline of Contributions
We initiate our study of Terracini convex cones by investigating the face structure of such cones. Specifically, in Section 2 we provide two conditions for a closed, pointed, convex cone to be Terracini convex based on order-theoretic properties of the faces of the cone. The first condition states that if a cone is $k$-Terracini convex for a sufficiently large $k$, which is a function of the height of the partially ordered set of faces, then the cone is Terracini convex. The second condition gives a necessary and sufficient characterization for a cone to be Terracini convex based on the collection of all convex tangent spaces of the cone inheriting some of the lattice structure of the subspace lattice.
From the examples in the previous subsection we see that Terracini convexity is equivalent to neighborliness for polyhedral cones, but there are many families of non-polyhedral cones that are also Terracini convex. Thus, a natural question is to clarify the distinction between Terracini convexity and neighborliness for non-polyhedral cones. In one direction, the cone of positive semidefinite matrices serves as an example that there are Terracini convex cones that are not neighborly. In the other direction, we prove in Section 3 that, subject to a non-degeneracy condition that takes the form of a quadratic growth property, $k$-neighborly cones are $k$-Terracini convex. As a consequence of this result, we obtain that the cone over the (homogeneous) moment curve, which was studied by Kalai and Wigderson in [KW08], is Terracini convex; see Section 3.3 for more examples.
Next we demonstrate the utility of the notion of Terracini convexity in characterizing tightness of semidefinite relaxations for the problem of finding a positive semidefinite matrix of smallest rank in an affine space. A commonly employed heuristic to solve this problem is to compute the positive semidefinite matrix of smallest trace in the given affine space, which can be obtained via a tractable semidefinite program. In Section 4, we show that the success of this heuristic is closely tied to a certain cone being Terracini convex. Our result may be viewed as a generalization of Donoho and Tanner's result on using neighborliness to characterize the exactness of linear programming relaxations for identifying nonnegative vectors with the smallest number of nonzeros in affine spaces [DT05]. As a by-product of our result, we obtain that 'most' linear images of a cone of positive semidefinite matrices are $k$-Terracini convex, where the value of $k$ depends on the dimension of the image of the linear map; see Theorem 4.8.
In Section 5, we investigate the Terracini convexity properties of derivative relaxations of hyperbolicity cones. We study conditions under which derivatives of Terracini convex hyperbolicity cones continue to be $k$-Terracini convex (for suitable $k$), and in particular the relationship between the number of derivatives and $k$. As a consequence, we obtain new examples of Terracini convex cones, and in particular ones that are basic semialgebraic; it is instructive to contrast these examples with the ones described in Section 4.3 of linear images of cones of positive semidefinite matrices, which are semialgebraic but not necessarily basic semialgebraic.
Sections 3, 4, and 5 illustrate the role that Terracini convexity plays in illuminating various aspects of the facial structure of convex cones. In each case, we obtain new examples of Terracini convex cones in the course of our discussion. We conclude in Section 6 with some open questions.

Order-Theoretic Conditions for Terracini Convexity
In this section we discuss conditions under which a closed, pointed, convex cone is Terracini convex based on the order structure underlying the faces of a convex cone. Section 2.1 shows that a cone that is $k$-Terracini convex for sufficiently large $k$ is Terracini convex, with the threshold value of $k$ depending on the length of the longest chain of faces of the cone. In Section 2.2 we give a lattice-theoretic condition on the collection of convex tangent spaces that is necessary and sufficient for a cone to be Terracini convex.
In preparation for our discussion, we recall briefly a few relevant facts about the face structure of a convex cone. Let $C$ be a closed, pointed, convex cone. A subset $F \subseteq C$ is a face if $x, y \in C$ and $x + y \in F$ implies that $x, y \in F$. A face $F \subseteq C$ is exposed if $F$ can be expressed as the intersection of $C$ and a hyperplane specified by a linear functional $\ell \in C^\circ$, i.e., $F = \{x \in C : \ell(x) = 0\}$. By convention $C$ is itself an exposed face as one can take $\ell = 0$. The collection of (exposed) faces of $C$ forms a partially ordered set (poset) by inclusion. For any subset $X \subseteq C$, let $F_C(X)$ (respectively, $F^{\mathrm{exp}}_C(X)$) denote the inclusion-wise minimal (exposed) face of $C$ containing $X$. For any element $x \in C$, one can check that the normal cone $N_C(x)$ depends only on $F^{\mathrm{exp}}_C(x)$, which in turn depends only on $F_C(x)$; consequently, the convex tangent space $L_C(x)$ depends only on $F^{\mathrm{exp}}_C(x)$ and in turn $F_C(x)$ [Roc15]. Formally, for any $x^{(1)}, x^{(2)} \in C$:
\[ F_C\big(x^{(1)}\big) \subseteq F_C\big(x^{(2)}\big) \implies L_C\big(x^{(1)}\big) \subseteq L_C\big(x^{(2)}\big). \tag{6} \]

Terracini Convexity and the Height of the Poset of Faces
Given a closed, pointed, convex cone $C$, consider a collection of points $x^{(1)}, \ldots, x^{(k)} \in C$. For large $k$, it is possible to replace the convex tangent space $L_C\big(\sum_{i=1}^k x^{(i)}\big)$ by $L_C\big(\sum_{i \in I} x^{(i)}\big)$ for a subset $I \subseteq \{1, \ldots, k\}$ that is potentially much smaller than $k$, by appealing to the observation that the convex tangent space at a point depends only on the smallest face containing the point. This allows us to conclude that if $C$ is $k$-Terracini convex for sufficiently large $k$, then $C$ is Terracini convex.
We describe next the relevant terminology that we use in our result. A collection of faces that is totally ordered by inclusion is called a chain of faces. For a closed, pointed, convex cone $C$, let $H(C)$ denote the height of the poset of faces of $C$, which is the length of the longest chain of faces of $C$. As the dimension always increases strictly along chains of faces and as any maximal-length chain of faces begins with the zero-dimensional face $\{0\}$ and ends with $C$, we have that $H(C) \leq \dim(C) + 1$. We have next a result that allows us to replace the convex tangent space of a large sum of elements of $C$ by that of a smaller subset based on $H(C)$:

Lemma 2.1. Let $C$ be a closed, pointed, convex cone, and consider a collection of points $x^{(1)}, \ldots, x^{(k)} \in C$. There exists $I \subseteq \{1, \ldots, k\}$ with $|I| \leq H(C) - 1$ such that $F_C\big(\sum_{i=1}^k x^{(i)}\big) = F_C\big(\sum_{i \in I} x^{(i)}\big)$.

Proof. We explicitly construct a set $I$ with $|I| \leq H(C) - 1$. Starting from $I = \emptyset$, repeatedly add to $I$ any index $i$ for which $x^{(i)} \notin F_C\big(\sum_{i' \in I} x^{(i')}\big)$, stopping when no such index remains; at that point every $x^{(i)}$ lies in $F_C\big(\sum_{i \in I} x^{(i)}\big)$, so $F_C\big(\sum_{i=1}^k x^{(i)}\big) = F_C\big(\sum_{i \in I} x^{(i)}\big)$. Each added index strictly enlarges the face $F_C\big(\sum_{i' \in I} x^{(i')}\big)$, yielding a strictly increasing chain of faces that begins at $F_C(0) = \{0\}$ and contains $|I| + 1$ faces; hence $|I| + 1 \leq H(C)$, i.e., $|I| \leq H(C) - 1$.

We are now in a position to state and prove the main result of this section.

Theorem 2.2. Let $C$ be a closed, pointed, convex cone. If $C$ is $(H(C) - 1)$-Terracini convex, then $C$ is Terracini convex.
Proof. Let $x^{(1)}, \ldots, x^{(k)}$ be a collection of generators of extreme rays of $C$. By Lemma 2.1, we know that there exists $I \subseteq \{1, \ldots, k\}$ with $|I| \leq H(C) - 1$ such that $F_C\big(\sum_{i=1}^k x^{(i)}\big) = F_C\big(\sum_{i \in I} x^{(i)}\big)$. From (6) we have that:
\[ L_C\Big(\sum_{i=1}^k x^{(i)}\Big) = L_C\Big(\sum_{i \in I} x^{(i)}\Big). \tag{7} \]
Since $C$ is $(H(C) - 1)$-Terracini convex, it is $|I|$-Terracini convex and therefore
\[ L_C\Big(\sum_{i \in I} x^{(i)}\Big) = \sum_{i \in I} L_C\big(x^{(i)}\big). \tag{8} \]
Combining (7) and (8), and noting that $\sum_{i \in I} L_C\big(x^{(i)}\big) \subseteq \sum_{i=1}^k L_C\big(x^{(i)}\big) \subseteq L_C\big(\sum_{i=1}^k x^{(i)}\big)$, we conclude that $C$ is $k$-Terracini convex. Since $k$ was arbitrary, we have shown that $C$ is Terracini convex.
As a consequence of this result, we have the following corollary:

Corollary 2.3. Let $C$ be a closed, pointed, convex cone that is $\dim(C)$-Terracini convex. Then $C$ is Terracini convex.

Proof. This follows from the observation that $H(C) \leq \dim(C) + 1$.
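The bound $H(C) \leq \dim(C) + 1$ is tight for the nonnegative orthant, whose faces are the coordinate subcones indexed by subsets of coordinates. The following brute-force check is our own illustration (the subset indexing of the orthant's faces is the assumption it builds on): it computes the longest chain in the face poset of $\mathbb{R}^4_+$ and recovers $\dim + 1 = 5$:

```python
from itertools import combinations

# The faces of R^d_+ are the coordinate subcones F_S = {x >= 0 : x_i = 0 for i not in S},
# one per subset S of {1,...,d}, ordered by inclusion.  The face poset is
# therefore the boolean lattice, whose longest chain has d + 1 elements.

def longest_chain_length(d):
    """Longest chain (number of faces) in the face poset of R^d_+, by brute force."""
    subsets = [frozenset(s) for r in range(d + 1)
               for s in combinations(range(d), r)]
    subsets.sort(key=len)  # process smaller faces first
    best = {}
    for s in subsets:
        # Longest chain ending at s extends a longest chain ending at a proper subset.
        best[s] = 1 + max((best[t] for t in subsets if t < s), default=0)
    return max(best.values())

print(longest_chain_length(4))  # 5, i.e. H(R^4_+) = dim + 1
```

The same dynamic program applies to any finite face poset given its inclusion relation.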

Terracini Convexity and the Lattice of Subspaces
Motivated by the order-theoretic structure underlying the faces of a closed, pointed, convex cone $C \subset \mathbb{R}^d$, we consider the order-theoretic aspects of the collection of convex tangent spaces associated to $C$:
\[ L(C) = \{ L_C(x) : x \in C \}. \]
As $L(C)$ is a subset of the collection of subspaces in $\mathbb{R}^d$, one may view $L(C)$ as a poset by inclusion. However, the collection of all subspaces in $\mathbb{R}^d$ additionally forms a lattice (called the subspace lattice in $\mathbb{R}^d$) with the join of two subspaces given by their sum and the meet given by their intersection.
In this section we relate Terracini convexity of C to L(C) inheriting some of the lattice structure of the collection of all subspaces in R d .
In preparation to present this result, we discuss next a link between the elements of L(C) and the exposed faces of $C$. As noted previously in (6), the convex tangent space at a point $x \in C$ depends only on the smallest exposed face of $C$ containing $x$, so that the elements of L(C) are in one-to-one correspondence with the exposed faces of $C$. The next result describes how one obtains an exposed face of $C$ given an element of L(C):

Lemma 2.4. Let $C$ be a closed, pointed, convex cone. For any $x \in C$ we have that:
\[ F^{\mathrm{exp}}_C(x) = L_C(x) \cap C. \]

Proof. One can check that $F^{\mathrm{exp}}_C(x) \subseteq L_C(x)$, and therefore $F^{\mathrm{exp}}_C(x) \subseteq L_C(x) \cap C$. Conversely, let $\ell \in C^\circ$ be a linear functional exposing $F^{\mathrm{exp}}_C(x)$. As $\ell \in N_C(x)$ and $L_C(x) \subseteq (\mathrm{span}\, N_C(x))^\perp$, the functional $\ell$ vanishes on $L_C(x)$; hence any $z \in L_C(x) \cap C$ satisfies $\ell(z) = 0$ and therefore lies in $F^{\mathrm{exp}}_C(x)$.

Proposition 2.5. A closed, pointed, convex cone $C \subset \mathbb{R}^d$ is Terracini convex if and only if the poset L(C) is a join sub-semilattice of the lattice of all subspaces in $\mathbb{R}^d$, i.e., L(C) is closed under subspace sums.

Proof. Suppose first that $C$ is Terracini convex. Consider any pair $L_C(x), L_C(y) \in$ L(C) corresponding to $x, y \in C$, and let $x = \sum_i x^{(i)}$ and $y = \sum_j y^{(j)}$ be decompositions in terms of generators of extreme rays of $C$. As $C$ is Terracini convex, we have that:
\[ L_C(x + y) = \sum_i L_C\big(x^{(i)}\big) + \sum_j L_C\big(y^{(j)}\big) = L_C(x) + L_C(y). \]
Since $L_C(x + y) \in$ L(C), the poset L(C) is a join sub-semilattice of the lattice of all subspaces in $\mathbb{R}^d$.
In the other direction, suppose that the poset L(C) is a join sub-semilattice of the lattice of all subspaces in $\mathbb{R}^d$. Consider any collection $x^{(1)}, \ldots, x^{(k)} \in C$ of generators of extreme rays of $C$. As the join is given by subspace sum, we have that $\sum_{i=1}^k L_C\big(x^{(i)}\big) \in$ L(C), which implies that $\sum_{i=1}^k L_C\big(x^{(i)}\big)$ is the convex tangent space $L_C(y)$ at some point $y \in C$. Then, from Lemma 2.4 we see that each $x^{(i)}$ lies in $L_C(y) \cap C = F^{\mathrm{exp}}_C(y)$, and hence $\sum_{i=1}^k x^{(i)} \in F^{\mathrm{exp}}_C(y)$. Appealing to (6), we can then conclude that
\[ L_C\Big(\sum_{i=1}^k x^{(i)}\Big) \subseteq L_C(y) = \sum_{i=1}^k L_C\big(x^{(i)}\big). \]
As the reverse inclusion always holds, $C$ is $k$-Terracini convex; since $k$ was arbitrary, $C$ is Terracini convex.

Therefore, Terracini convexity of a cone $C$ is linked to the poset L(C) inheriting the join structure of the lattice of subspaces. In general, L(C) does not inherit the meet structure of the lattice of subspaces as the intersection of the convex tangent spaces corresponding to two exposed faces does not usually yield a convex tangent space corresponding to an exposed face of $C$ (the positive semidefinite cone provides a counterexample); indeed, the preceding proposition makes no assumptions on the existence of a meet operation.

Neighborliness and Terracini Convexity
Terracini convexity is one approach to extend neighborliness from polyhedral cones to non-polyhedral convex cones. As discussed in the introduction, there is already a previous notion of neighborliness available in the non-polyhedral case due to Kalai and Wigderson [KW08]. In this section we investigate the relationship between these two concepts, and in particular we show that $k$-neighborly convex cones (formally defined in Section 3.1) are $k$-Terracini convex subject to mild non-degeneracy conditions. Throughout this section we view $\mathbb{R}^m$ as being equipped with an inner product (which varies based on context and is specified clearly in each case), and we define an associated set $S^{m-1} \subset \mathbb{R}^m$ of unit-norm elements induced by the inner product. Doing so allows us to work with a distinguished set $\mathrm{ext}(K) \cap S^{m-1}$ of normalized extreme rays of a closed, pointed, convex cone $K \subseteq \mathbb{R}^m$.

k-Neighborly Convex Cones
In [KW08] Kalai and Wigderson extend the notion of a neighborly polytope to define a $k$-neighborly embedded smooth manifold. This concept serves as the point of departure for a definition of a $k$-neighborly convex cone that is expressed in convex-geometric terms with no reference to an underlying embedded manifold.

Definition 3.1. Let $\mathcal{M}$ be a smooth manifold and let $\varphi : \mathcal{M} \to \mathbb{R}^m$ be an embedding of $\mathcal{M}$ in $\mathbb{R}^m$. The image $\varphi(\mathcal{M})$ is a $k$-neighborly embedded manifold if for any collection $x^{(1)}, x^{(2)}, \ldots, x^{(k)}$ of elements of $\varphi(\mathcal{M})$, there exists an affine function $\ell : \mathbb{R}^m \to \mathbb{R}$ such that $\ell(x^{(i)}) = 0$ for $i = 1, 2, \ldots, k$ and $\ell(x) > 0$ for all $x \in \varphi(\mathcal{M}) \setminus \{x^{(1)}, x^{(2)}, \ldots, x^{(k)}\}$.
This definition is a slight reformulation of that of Kalai and Wigderson, stated in a manner that is more convenient for our presentation. The neighborliness of $\varphi(\mathcal{M})$ clearly depends only on the convex hull of $\varphi(\mathcal{M})$, which suggests the following notion of a $k$-neighborly convex cone.

Definition 3.2. A closed, pointed, convex cone $K \subseteq \mathbb{R}^m$ is $k$-neighborly if for any collection $x^{(1)}, x^{(2)}, \ldots, x^{(k)}$ of normalized extreme rays of $K$ there exists a linear functional $\ell : \mathbb{R}^m \to \mathbb{R}$ such that $\ell(x^{(i)}) = 0$ for $i = 1, 2, \ldots, k$ and $\ell(x) > 0$ for all other normalized extreme rays $x$ of $K$.
It is straightforward to check that if an embedded smooth manifold $\varphi(\mathcal{M}) \subseteq \mathbb{R}^m$ is $k$-neighborly, then the cone over $\varphi(\mathcal{M})$, i.e., $\mathrm{cone}(\{1\} \times \varphi(\mathcal{M})) \subseteq \mathbb{R}^{m+1}$, is a $k$-neighborly convex cone. A basic observation about $k$-neighborly convex cones is that all of their sufficiently low-dimensional faces are linearly isomorphic to a nonnegative orthant.
Proposition 3.3. Let $K \subseteq \mathbb{R}^m$ be a $k$-neighborly closed, pointed, convex cone, and let $F$ be a face of $K$ with $\dim(F) = d \leq k$. Then $F$ is linearly isomorphic to $\mathbb{R}^d_+$.

Proof. As $K$ is a closed, pointed, convex cone, so is $F$. Hence, $F$ is the conic hull of its extreme rays. Let $x^{(1)}, \ldots, x^{(d)}$ be a choice of $d$ linearly independent normalized extreme rays of $F$ (and hence of $K$). Let $\ell$ be a linear functional satisfying $\ell(x^{(i)}) = 0$ for $i = 1, 2, \ldots, d$ and $\ell(x) > 0$ for all other normalized extreme rays of $K$, whose existence is guaranteed due to the $k$-neighborliness of $K$. Let $F' = \{x \in K : \ell(x) = 0\}$ be the face of $K$ exposed by $\ell$. Since every extreme ray of $K$ that belongs to $F'$ is also an extreme ray of $F'$, it follows from the definition of $\ell$ that $x^{(1)}, x^{(2)}, \ldots, x^{(d)}$ are exactly the normalized extreme rays of $F'$. As such, $F'$ is a closed, pointed, convex cone with exactly $d$ linearly independent extreme rays, and therefore it must be linearly isomorphic to $\mathbb{R}^d_+$. Finally, $F$ and $F'$ are both faces of $K$ such that their relative interiors have a point in common, so $F = F'$ [Roc15, Corollary 18.1.2].

Proposition 3.3 makes it clear that $k$-Terracini convex cones are not necessarily $k$-neighborly. Indeed, we have seen that the positive semidefinite cone is Terracini convex, and yet its faces are not linearly isomorphic to nonnegative orthants in general. We describe next an example that serves as a running illustration throughout this section. This cone was considered by Kalai and Wigderson [KW08] in the language of neighborly manifolds.
Cone over the Veronese embedding. The Veronese embedding $\varphi_{n,2d} : \mathbb{R}^n \to \mathbb{R}^m$ maps a point $z \in \mathbb{R}^n$ to the vector of all monomials in $z$ of degree $2d$ (with suitable multinomial scalings). We denote the cone over this embedding by $C_{n,2d} = \mathrm{cone}\big(\varphi_{n,2d}(\mathbb{R}^n)\big)$. When discussing this example, we let $m = \binom{n+2d-1}{2d}$ and equip $\mathbb{R}^m$ with the Bombieri inner product, which satisfies $\langle \varphi(y), \varphi(z) \rangle_B := \langle y, z \rangle^{2d}$ for all $y, z \in \mathbb{R}^n$, where the inner product on the right is the Euclidean inner product on $\mathbb{R}^n$. The norms associated with these inner products are denoted $\|\cdot\|_B$ and $\|\cdot\|$, respectively. Any linear functional $\ell : \mathbb{R}^m \to \mathbb{R}$ restricted to the extreme rays of the cone $C_{n,2d}$ can be interpreted as a homogeneous polynomial of degree $2d$ in $n$ variables, i.e., $\ell(\varphi_{n,2d}(z)) = p(z)$ for a homogeneous polynomial $p$ of degree $2d$. Under this interpretation, the dual cone $-C^\circ_{n,2d}$ is the cone of (coefficients of) nonnegative homogeneous polynomials of degree $2d$ in $n$ variables.
Example 3.4 (Neighborliness of cones over Veronese embeddings [KW08]). The cone $C_{n,2d}$ is a $d$-neighborly convex cone. To see this, consider a collection $\varphi_{n,2d}(z^{(1)}), \ldots, \varphi_{n,2d}(z^{(d)})$ of up to $d$ normalized extreme rays and define the linear functional $\ell$ via
\[ \ell\big(\varphi_{n,2d}(z)\big) = \prod_{i=1}^{d} \Big( \langle z, z \rangle \big\langle z^{(i)}, z^{(i)} \big\rangle - \big\langle z^{(i)}, z \big\rangle^2 \Big). \]
From the Cauchy-Schwarz inequality, we can see that this is a nonnegative polynomial in $z$. (In fact, it is a sum of squares.) As such, $\ell$ defines a linear functional that is nonnegative on the extreme rays of $C_{n,2d}$, and hence on $C_{n,2d}$ itself. Furthermore, the only normalized extreme rays at which $\ell$ vanishes are $\varphi_{n,2d}(z^{(i)})$ for $i = 1, 2, \ldots, d$.
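The defining properties of such a functional can be checked numerically without forming $\varphi_{n,2d}$ explicitly, since its value on the Veronese variety is a polynomial in inner products. The sketch below is our own illustration; the product form of $\ell$ in the code comment is an assumption (one standard Cauchy-Schwarz-based construction), not necessarily the exact functional used in [KW08]:

```python
import numpy as np

def ell_on_variety(z, zs):
    """ell(phi(z)) = prod_i ( <z,z><z_i,z_i> - <z_i,z>^2 ): nonnegative by
    Cauchy-Schwarz, vanishing exactly when z is parallel to some z_i."""
    return np.prod([(z @ z) * (zi @ zi) - (zi @ z) ** 2 for zi in zs])

rng = np.random.default_rng(2)
n, d = 3, 2   # degree 2d = 4, so C_{3,4} should be 2-neighborly
zs = [v / np.linalg.norm(v) for v in rng.standard_normal((d, n))]

vals = [ell_on_variety(rng.standard_normal(n), zs) for _ in range(100)]
print(min(vals) >= 0)                                         # nonnegative on the variety
print(all(abs(ell_on_variety(zi, zs)) < 1e-12 for zi in zs))  # vanishes at each z_i
```

Note that $\varphi_{n,2d}(z) = \varphi_{n,2d}(-z)$ for even degree, so vanishing on the ray through $z^{(i)}$ is vanishing at a single extreme ray.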

Non-Degeneracy and Regularity of Convex Cones
Our approach to showing that a $k$-neighborly cone is $k$-Terracini convex is based on the dual characterization of $k$-Terracini convexity from Proposition 1.7. Specifically, for any collection of normalized extreme rays $x^{(1)}, \ldots, x^{(k)}$ of a $k$-neighborly cone $K \subseteq \mathbb{R}^m$, we wish to prove that
\[ \ell + \Delta \in \bigcap_{i=1}^k N_K\big(x^{(i)}\big) \quad \text{for all } \Delta \in U \cap \bigcap_{i=1}^k \mathrm{span}\, N_K\big(x^{(i)}\big) \]
for an open set $U \subseteq \mathbb{R}^m$ containing the origin; this suffices, as it places $\bigcap_{i=1}^k \mathrm{span}\, N_K(x^{(i)})$ inside $\mathrm{span}\big(\bigcap_{i=1}^k N_K(x^{(i)})\big)$. The linear functional that supports $K$ at the points $x^{(1)}, \ldots, x^{(k)}$, which is available to us from the definition of $k$-neighborliness, serves as a natural candidate for $\ell$. The key issue with executing this strategy is that we need to control the extent to which any $\Delta \in \bigcap_{i=1}^k \mathrm{span}\, N_K(x^{(i)})$ perturbs $\ell$. In particular, as $\Delta \in \bigcap_{i=1}^k \mathrm{span}\, N_K(x^{(i)})$ may be decomposed as $\Delta = \Delta^{(i)}_+ - \Delta^{(i)}_-$ (with $\Delta^{(i)}_\pm \in N_K(x^{(i)})$), we need to bound the amount that the 'negative' parts $\Delta^{(i)}_-$ perturb $\ell$. We consider two conditions to address this point. The first one ensures that $\ell(x)$ grows sufficiently fast around $\{x^{(1)}, x^{(2)}, \ldots, x^{(k)}\}$. The second one controls the growth of any linear functional in $-N_K(x)$ for any normalized extreme ray $x \in K$. Under these conditions, with the second one applied to each $\Delta^{(i)}_-$, we can establish the desired containment. The first condition is a requirement on $k$-neighborly cones and takes the form of a quadratic growth criterion, while the second one is a regularity property applicable to arbitrary closed, pointed, convex cones. Both of these conditions are mild; for example, we show that the cone over the Veronese embedding satisfies them. (That being said, we are unaware of a method to prove that a $k$-neighborly cone is $k$-Terracini convex without these two conditions.) We precisely describe the conditions next, and we prove in Section 3.3 that $k$-neighborly cones satisfying these conditions are $k$-Terracini convex.

Non-Degenerate Neighborliness
We present a non-degenerate extension of the notion of $k$-neighborliness in which the linear functional exposing a subset of $k$ extreme rays satisfies an additional growth condition when restricted to nearby extreme rays.

Definition 3.5. A closed, pointed, convex cone $K \subseteq \mathbb{R}^m$ is non-degenerate $k$-neighborly if for every collection $x^{(1)}, x^{(2)}, \ldots, x^{(k)}$ of normalized extreme rays of $K$ there exist a linear functional $\ell$ with $\ell(x^{(i)}) = 0$ for $i = 1, 2, \ldots, k$ and $\ell(x) > 0$ for all other normalized extreme rays of $K$, and constants $\epsilon, \mu > 0$, such that
\[ \ell(x) \geq \mu \min_{1 \leq i \leq k} \big\| x - x^{(i)} \big\|^2 \quad \text{for all } x \in \mathrm{ext}(K) \cap S^{m-1} \text{ with } \min_{1 \leq i \leq k} \big\| x - x^{(i)} \big\| \leq \epsilon. \tag{10} \]

The quadratic growth condition (10) is a mild restriction, and it is satisfied by the examples of $k$-neighborly convex cones we consider in this section.
Example 3.6 (k-neighborly polyhedral cones are non-degenerate k-neighborly). If K ⊆ R^m is a k-neighborly polyhedral cone, then for any collection x^(1), x^(2), ..., x^(k) of normalized extreme rays there is a linear functional ℓ such that ℓ(x^(i)) = 0 for i = 1, 2, ..., k and ℓ(x) > 0 for all other normalized extreme rays of K. As the set of normalized extreme rays is finite, one can choose ε smaller than half the minimum distance between normalized extreme rays and obtain that the only normalized extreme rays within distance ε of {x^(1), x^(2), ..., x^(k)} are the points x^(1), x^(2), ..., x^(k) themselves, which implies that (10) is vacuously satisfied for any positive μ.
Example 3.7 (Cone C_{n,2d} over the Veronese embedding is non-degenerate d-neighborly). For y, z ∈ R^n with unit Euclidean norm, so that ‖φ(y)‖_B = ‖φ(z)‖_B = 1 (this is the norm associated with the Bombieri inner product on R^m), we have that

‖φ(y) − φ(z)‖_B² = 2 − 2⟨y, z⟩^{2d} = ‖y − z‖² (1 + ⟨y, z⟩ + ⋯ + ⟨y, z⟩^{2d−1}) ≤ 2d ‖y − z‖².

Here, the inequality follows from the Cauchy-Schwarz inequality and the fact that y and z have unit Euclidean norm. For unit Euclidean norm z^(i), i = 1, 2, ..., d, and unit Euclidean norm z ∈ R^n, the linear functional ℓ from Example 3.4 therefore satisfies the quadratic growth condition (10): ℓ(φ(z)) is bounded below by a positive multiple of min_i ‖φ(z) − φ(z^(i))‖_B² for all z sufficiently close to {z^(1), ..., z^(d)}.
It follows that C n,2d is non-degenerate d-neighborly.
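As a numerical sanity check of the Bombieri-norm estimate in Example 3.7, assuming the standard reproducing identity ⟨φ(y), φ(z)⟩_B = ⟨y, z⟩^{2d} for unit vectors (the dimensions n, d and the sample count below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3  # illustrative dimensions

def bombieri_gap_bound_holds(y, z):
    # ||phi(y) - phi(z)||_B^2 = 2 - 2<y,z>^(2d)  (reproducing identity)
    # claimed bound:                 <= 2d * ||y - z||^2
    t = float(y @ z)
    lhs = 2.0 - 2.0 * t ** (2 * d)
    rhs = 2.0 * d * float(np.sum((y - z) ** 2))
    return lhs <= rhs + 1e-12

samples = []
for _ in range(200):
    y = rng.standard_normal(n); y /= np.linalg.norm(y)
    z = rng.standard_normal(n); z /= np.linalg.norm(z)
    samples.append(bombieri_gap_bound_holds(y, z))
ok = all(samples)
```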
Although the definition of being non-degenerate k-neighborly only requires quadratic growth locally around the set of minimizers, compactness of the sphere means that local quadratic growth implies global quadratic growth.
Lemma 3.8. If a closed, pointed, convex cone K ⊆ R^m is non-degenerate k-neighborly, then for every collection x^(1), x^(2), ..., x^(k) of normalized extreme rays of K there exist μ0 > 0 and a linear functional ℓ such that ℓ(x^(i)) = 0 for i = 1, 2, ..., k and

ℓ(x) ≥ μ0 min_{1≤i≤k} ‖x − x^(i)‖² for all x ∈ ext(K) ∩ S^{m−1}.

Proof. Let x^(1), x^(2), ..., x^(k) be a collection of normalized extreme rays of K. Let ε and μ be the positive constants, and let ℓ be the linear functional, that exist because K is non-degenerate k-neighborly. Let W denote the set of normalized extreme rays within distance ε of {x^(1), ..., x^(k)}, and let W^c = [ext(K) ∩ S^{m−1}] \ W be its complement in the set of normalized extreme rays. By compactness of W^c and the fact that ℓ(x) > 0 on W^c, there exists some M > 0 such that ℓ(x) ≥ M ≥ (M/4) min_i ‖x − x^(i)‖² for all x ∈ W^c, where the second inequality holds because ‖x − y‖² ≤ 4 whenever x, y ∈ S^{m−1}. Since ℓ(x) ≥ μ min_i ‖x − x^(i)‖² for all x ∈ W, taking μ0 = min{μ, M/4} completes the proof.

Regular Cones
Our notion of regularity for a closed, pointed, convex cone requires that no linear functional in the dual cone grows too fast around its minimizer when restricted to extreme rays. This holds whenever the restriction of a linear functional to the extreme rays is smooth.
Definition 3.9. A closed, pointed, convex cone K ⊆ R^m is regular if for each x0 ∈ ext(K) and each ℓ ∈ −N_K(x0), there exist δ > 0 and ν > 0 such that

ℓ(x) ≤ ν ‖x − x0‖² for all x ∈ ext(K) ∩ S^{m−1} with ‖x − x0‖ < δ. (11)

Example 3.10 (Polyhedral cones are regular). If K ⊆ R^m is a proper polyhedral cone, then the set of normalized extreme rays is finite. Therefore, for sufficiently small δ, the only normalized extreme ray x with ‖x − x0‖ < δ is x0 itself. If ℓ ∈ −N_K(x0), then ℓ(x0) = 0, and so (11) is vacuously satisfied for any ν > 0.
Example 3.11 (Cone over the Veronese embedding is regular). Suppose that z0 ∈ S^{n−1} and that ℓ(φ_{n,2d}(z)) is nonnegative and vanishes at z0. Consider the nonnegative homogeneous quadratic ‖z‖²‖z0‖² − ⟨z, z0⟩², which vanishes only on the line spanned by z0. Since both ℓ(φ_{n,2d}(z)) and its gradient vanish at z = z0, there exist δ > 0 and ν > 0 such that ℓ(φ_{n,2d}(z)) ≤ ν (‖z‖²‖z0‖² − ⟨z, z0⟩²) for all z ∈ S^{n−1} with ‖z − z0‖ < δ; as this quadratic is in turn bounded by ‖z − z0‖² on the sphere, the growth condition (11) follows. Since z0 was arbitrary, it follows that C_{n,2d} is regular.
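The elementary inequality used in Example 3.11, namely that the comparison quadratic ‖z‖²‖z0‖² − ⟨z, z0⟩² is dominated by ‖z − z0‖² on the sphere, can be checked numerically (the dimension and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
ok = True
for _ in range(200):
    z = rng.standard_normal(n); z /= np.linalg.norm(z)
    z0 = rng.standard_normal(n); z0 /= np.linalg.norm(z0)
    # on the sphere: 1 - <z,z0>^2 = (1 - <z,z0>)(1 + <z,z0>) <= 2(1 - <z,z0>) = ||z - z0||^2
    quad = 1.0 - float(z @ z0) ** 2
    ok = ok and quad <= float(np.sum((z - z0) ** 2)) + 1e-12
```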
Although the definition of a cone being regular only bounds the growth of a linear functional on normalized extreme rays locally around its minimizer, such a local bound can be extended to a global bound.

Lemma 3.12. If a closed, pointed, convex cone K ⊆ R^m is regular, then for each x0 ∈ ext(K) and each ℓ ∈ −N_K(x0) there exists ν0 > 0 such that ℓ(x) ≤ ν0 ‖x − x0‖² for all x ∈ ext(K) ∩ S^{m−1}.

Terracini Convexity of Neighborly Cones
We are now in a position to state and prove the main result of this section.
Theorem 3.13. If a closed, pointed, convex cone is non-degenerate k-neighborly and regular, then it is k-Terracini convex.
Proof. Let x^(1), x^(2), ..., x^(k) be a collection of normalized extreme rays of a closed, pointed, regular, non-degenerate k-neighborly convex cone K. To establish that K is k-Terracini convex, by Remark 1.8 it suffices to show that ∩_{i=1}^k span N_K(x^(i)) ⊆ span ∩_{i=1}^k N_K(x^(i)). Let ℓ be a linear functional from the definition of non-degenerate k-neighborliness of K. Since this functional is nonnegative on K and vanishes on x^(i) for i = 1, 2, ..., k, it follows that ℓ ∈ −∩_{i=1}^k N_K(x^(i)). Further, from Lemma 3.8 there exists μ0 > 0 such that

ℓ(x) ≥ μ0 min_i ‖x − x^(i)‖² for all x ∈ ext(K) ∩ S^{m−1}. (12)

Given any Δ ∈ ∩_{i=1}^k span N_K(x^(i)), for each i we have a decomposition of Δ as Δ = Δ^(i)_+ − Δ^(i)_− where Δ^(i)_+, Δ^(i)_− ∈ −N_K(x^(i)). As K is regular, for each i = 1, 2, ..., k, there exists ν^(i) > 0 such that Δ^(i)_−(x) ≤ ν^(i) ‖x − x^(i)‖² for all x ∈ ext(K) ∩ S^{m−1}; setting ν0 = max_i ν^(i), we have

Δ^(i)_−(x) ≤ ν0 ‖x − x^(i)‖² for all x ∈ ext(K) ∩ S^{m−1} and each i. (13)

If we choose 0 < γ < μ0/ν0, it follows from (12) and (13) that (ℓ + γΔ)(x) ≥ 0 for every x ∈ ext(K) ∩ S^{m−1}, and hence ℓ + γΔ is nonnegative on K. Using the fact that Δ(x^(i)) = 0 for each i, we also have (ℓ + γΔ)(x^(i)) = 0, so that ℓ + γΔ ∈ −∩_{i=1}^k N_K(x^(i)). Consequently Δ ∈ span ∩_{i=1}^k N_K(x^(i)), which completes the proof.

This theorem yields two immediate corollaries based on the examples in Sections 3.2.1 and 3.2.2.

Corollary 3.15. The cone C_{n,2d} over the Veronese embedding is d-Terracini convex.

Proof. This follows immediately from Theorem 3.13 and Examples 3.7 and 3.11.

While Corollary 3.15 holds for general cones over Veronese embeddings, for the special case of the cone over the moment curve, i.e., the case n = 2, a stronger conclusion is possible.
A natural question at this stage is whether cones C_{n,2d} over Veronese embeddings for n > 2 are also Terracini convex, rather than merely being d-Terracini convex. For the case of n = 3, this question is open, and (to the best of our knowledge) cannot be resolved given the current understanding of the structure of C_{3,2d}. For the case of n = 4, the following example shows that C_{4,4} is not Terracini convex, based on Blekherman's study of dimensional differences between faces of the cone of nonnegative polynomials and faces of the cone of sums of squares [Ble09].

Preservation of Terracini Convexity under Linear Images
In this section, we consider the Terracini convexity properties of linear images of Terracini convex cones such as the nonnegative orthant and the cone of positive semidefinite matrices. We carry out our investigation by analyzing the performance of convex relaxations for nonconvex inverse problems. Specifically, we consider the problem of finding the componentwise nonnegative vector with the smallest number of nonzero entries (i.e., nonnegative sparse vectors) in an affine space, and that of finding the smallest-rank positive semidefinite matrix in an affine space. Both of these problems arise commonly in many applications and they have been widely studied in the literature. In Section 4.1 we consider sparse vector recovery and we reprove a result of Donoho and Tanner stating that a natural linear programming relaxation succeeds in recovering nonnegative sparse vectors in an affine space if and only if a particular linear image of the nonnegative orthant is k-Terracini convex for an appropriate k [DT05]. Donoho and Tanner's original proof was given in the language of neighborly polytopes. We provide an alternate proof in Section 4.1 by appealing to the dual relation of Proposition 1.7, as it is instructive in our subsequent analysis of recovering low-rank matrices in affine spaces. In Section 4.2 we prove that the success of a semidefinite programming relaxation in recovering positive semidefinite low-rank matrices implies k-Terracini convexity of a particular linear image of the cone of positive semidefinite matrices for a suitable k; in the reverse direction, we show that a 'robust' analog of k-Terracini convexity implies success of the semidefinite relaxation.
The results in Section 4.2 lead to a new family of non-polyhedral Terracini convex cones, which we describe in Section 4.3. Thus, this section supplies new examples of Terracini convex cones, and our results also highlight the utility of our definition of Terracini convexity in generalizing neighborly polyhedral cones, as the usual notion of neighborliness for non-polyhedral cones is not the right one for characterizing the performance of semidefinite relaxations for low-rank matrix recovery.

Linear Images of the Nonnegative Orthant
In applications ranging from feature selection in machine learning to recovering signals and images from a limited number of measurements, a frequently encountered question is that of finding vectors with the smallest number of nonzero entries in a given affine space. Consider the following model problem:

(P0)   min_{x ∈ R^d} |support(x)|  s.t.  Ax = b,  x ≥ 0.

Here A : R^d → R^n is a linear map, b ∈ R^n, x ≥ 0 denotes componentwise nonnegativity of x, and |support(x)| denotes the number of nonzero entries of x. As solving (P0) is NP-hard in general, the following tractable linear programming relaxation is the method of choice that is employed in most contexts:

(P1)   LP(A, b) = argmin_{x ∈ R^d} ⟨1, x⟩  s.t.  Ax = b,  x ≥ 0.

In assessing the performance of the relaxation (P1), the usual mode of analysis is to suppose that there exists a nonnegative vector x⋆ ∈ R^d with a small number of nonzeros such that b = Ax⋆, and to then ask whether x⋆ is the unique optimal solution of (P1), i.e., whether LP(A, Ax⋆) = {x⋆}. The main result of Donoho and Tanner [DT05] relates the success of (P1) to neighborliness properties of images of the d-simplex Δ^d = {x ∈ R^d : ⟨1, x⟩ = 1, x ≥ 0} under the map A. In Theorem 4.1, to follow, we state a conic analog of the result in [DT05], and we reprove it in two stages. The proof we give offers a template for our generalization in Section 4.2 on relating the performance of semidefinite relaxations for low-rank matrix recovery to Terracini convexity of linear images of the cone of positive semidefinite matrices. Our analysis relies on relating the following three properties; each of these is stated with respect to a positive integer k, which will be clear from context.

• A linear map A : R^d → R^n satisfies the exact recovery property if for any x⋆ ∈ R^d_+ with |support(x⋆)| ≤ k, the unique optimal solution of the linear programming relaxation (P1) is LP(A, Ax⋆) = {x⋆}.

• Consider a linear map B : R^d → R^N. The cone B(R^d_+) satisfies the unique preimage property if for any x⋆ ∈ R^d_+ with |support(x⋆)| ≤ k, the point Bx⋆ has a unique preimage in R^d_+.

• Consider a linear map B : R^d → R^N. The cone B(R^d_+) satisfies the Terracini convexity property if it is closed and pointed, its extreme rays are in one-to-one correspondence with those of R^d_+, and it is k-Terracini convex.

Theorem 4.1. Let A : R^d → R^n be a surjective linear map with null(A) ∩ R^d_{++} ≠ ∅, let k < d be a positive integer, and define the linear map B : R^d → R^{n+1} via Bx = (Ax, ⟨1, x⟩). Then A satisfies the exact recovery property if and only if the cone B(R^d_+) satisfies the Terracini convexity property.

Remark 4.2. This result is a conic analog of those in [DT05]. The assumption that A is surjective is to ensure a cleaner argument; if this condition is not satisfied, the proof can be adapted by restricting to the image of A.
Finally, the results in [DT05] do not require the condition null(A) ∩ R^d_{++} ≠ ∅, and they are described in terms of a property termed 'outward neighborliness'. However, the particular restriction on which we focus suffices for our purposes and leads to a simpler exposition.
This result leads to two types of consequences in [DT05]. In one direction, Donoho and Tanner leveraged results on constructions of neighborly polytopes to obtain new families of linear maps A for which the linear program (P1) succeeds in sparse recovery. Conversely, by building on results in the sparse recovery literature, they constructed new families of neighborly polytopes.
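To make the relaxation (P1) concrete, the following sketch solves it with an off-the-shelf LP solver (the dimensions, seed, and the use of SciPy are illustrative assumptions, not part of the development above); we only assert properties that hold for any instance, namely feasibility of the returned solution and that its objective does not exceed that of the planted sparse point:

```python
import numpy as np
from scipy.optimize import linprog  # generic LP solver; assumed available

rng = np.random.default_rng(0)
d, n, k = 12, 8, 2
A = rng.standard_normal((n, d))
x_star = np.zeros(d)
x_star[:k] = rng.uniform(1.0, 2.0, size=k)  # nonnegative k-sparse planted vector
b = A @ x_star

# (P1): minimize <1, x> subject to Ax = b, x >= 0
res = linprog(c=np.ones(d), A_eq=A, b_eq=b, bounds=[(0, None)] * d)

assert res.status == 0                       # solved to optimality
assert np.allclose(A @ res.x, b, atol=1e-7)  # feasibility
assert res.fun <= x_star.sum() + 1e-7        # x_star is feasible, so opt <= <1, x_star>
```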
Our proof proceeds in two steps and is based on the following intermediate results.

Lemma 4.3. Consider a surjective linear map A : R^d → R^n with null(A) ∩ R^d_{++} ≠ ∅ and k < d, and define B : R^d → R^{n+1} via Bx = (Ax, ⟨1, x⟩). Then A satisfies the exact recovery property if and only if the cone B(R^d_+) satisfies the unique preimage property.

Proof. For x⋆ = 0, both properties are immediate, as the preimage of 0 in R^d_+ is also {0}. For nonzero x⋆, in considering the exact recovery property and the unique preimage property, we may assume without loss of generality that ⟨1, x⋆⟩ = 1. The reason for this is that LP(A, αb) = α LP(A, b) for any α > 0; the unique preimage property is similarly unaffected by such scaling. With this normalization, the exact recovery property is equivalent to the fact that for any x⋆ ∈ R^d_+ with |support(x⋆)| ≤ k, the point Ax⋆ has a unique preimage in the solid simplex Δ^d_0 = {x ∈ R^d : ⟨1, x⟩ ≤ 1, x ≥ 0}. Consider the implication that the exact recovery property implies the unique preimage property. Assume that the unique preimage property does not hold. Then there exist x⋆ ∈ R^d_+ with |support(x⋆)| ≤ k and ⟨1, x⋆⟩ = 1, and a point x ∈ R^d_+ with x ≠ x⋆, such that Bx = Bx⋆. Based on the description of B, we can conclude that ⟨1, x⟩ = 1 and therefore x ∈ Δ^d. This violates the property that Ax⋆ has a unique preimage in Δ^d_0; hence the exact recovery property does not hold. Conversely, consider the implication that the unique preimage property implies the exact recovery property. Assume for the sake of a contradiction that there exists x̂ ∈ Δ^d_0 with x̂ ≠ x⋆ and Ax̂ = Ax⋆, and fix x0 ∈ null(A) ∩ R^d_{++} normalized so that ⟨1, x0⟩ = 1. The point x′ = (1 − ⟨1, x̂⟩)x0 + x̂ is nonnegative and satisfies Bx′ = Bx⋆, so by the unique preimage property x⋆ = x′ = (1 − ⟨1, x̂⟩)x0 + x̂. If ⟨1, x̂⟩ < 1, this forces the strictly positive vector x0 to lie in the smallest face of R^d_+ containing x⋆, contradicting |support(x⋆)| ≤ k < d; if ⟨1, x̂⟩ = 1, then x̂ = x⋆, also a contradiction.

Proposition 4.4. Let B : R^d → R^N be a surjective linear map. Then the cone B(R^d_+) satisfies the unique preimage property if and only if it satisfies the Terracini convexity property.

Proof. Both properties may be reformulated in terms of the faces Ω of R^d_+ and the subspace null(B)^⊥. With these two reformulations of the unique preimage property and the Terracini convexity property in hand, we proceed to establish the desired result.
Consider the implication that the unique preimage property implies the Terracini convexity property. Based on the unique preimage property applied to elements of R^d_+ with one nonzero entry, we conclude that B(R^d_+) has d extreme rays. Let v ∈ null(B)^⊥ ∩ ri(Ω). Letting U be an open set in R^d containing the origin, we have that v + ε[U ∩ null(B)^⊥ ∩ span(Ω)] ⊂ null(B)^⊥ ∩ ri(Ω) for a sufficiently small ε > 0. Consequently, we can conclude that span(null(B)^⊥ ∩ Ω) = null(B)^⊥ ∩ span(Ω), which is equivalent to B(R^d_+) being k-Terracini convex. Next, consider the implication that the Terracini convexity property implies the unique preimage property. We prove this by induction on k. For the base case k = 1, as the cone B(R^d_+) has d extreme rays, we have that the unique preimage property holds for k = 1. For k > 1, suppose for the sake of a contradiction that null(B)^⊥ ∩ ri(Ω) = ∅. Thus, there exists a face Ω̃ of R^d_+ contained strictly in Ω, i.e., Ω̃ ⊊ Ω, such that null(B)^⊥ ∩ Ω̃ = null(B)^⊥ ∩ Ω. We have the following sequence of containment relations:

null(B)^⊥ ∩ span(Ω̃) ⊆ null(B)^⊥ ∩ span(Ω) = span(null(B)^⊥ ∩ Ω) = span(null(B)^⊥ ∩ Ω̃) ⊆ null(B)^⊥ ∩ span(Ω̃).

The first relation follows from Ω̃ ⊆ Ω, the second one follows from the Terracini convexity property, the third one follows from null(B)^⊥ ∩ Ω̃ = null(B)^⊥ ∩ Ω, and the final one follows from the fact that the span of the intersection of two sets is contained inside the intersection of the spans of the sets. In conclusion, all the containments are satisfied with equality and we have that null(B)^⊥ ∩ span(Ω̃) = null(B)^⊥ ∩ span(Ω), or equivalently that:

span(Ω̃)^⊥ + null(B) = span(Ω)^⊥ + null(B). (17)

As R^d_+ is a polyhedral cone, we note that span(Ω̃)^⊥ and span(Ω)^⊥ are themselves spans of faces of R^d_+. In particular, let F, F̃ be faces of R^d_+ such that F ⊊ F̃, span(F) = span(Ω)^⊥, and span(F̃) = span(Ω̃)^⊥. The relationship (17) implies that there exists a generator x of an extreme ray of R^d_+ in F̃ \ F such that x = (x^(+) − x^(−)) + v for x^(+), x^(−) ∈ F with disjoint supports and v ∈ null(B). Hence, we have that B(x + x^(−)) = Bx^(+). As dim(F) = k, the sum of the sizes of the supports of x + x^(−) and of x^(+) is at most k + 1. If x^(+) ≠ 0, we have a contradiction due to the inductive hypothesis. If x^(+) = 0, we have ⟨1, x + x^(−)⟩ = 0, which implies that x + x^(−) = 0 and in turn that x = 0, also a contradiction.
Based on these two results, we are now in a position to prove Theorem 4.1.
Proof of Theorem 4.1. As null(A) ∩ R^d_{++} ≠ ∅ and k < d by assumption, we can apply Lemma 4.3. Specifically, the exact recovery property for A is equivalent to the unique preimage property for the cone B(R^d_+). Next, in preparation to apply Proposition 4.4, we need to verify that the linear map B is surjective. The surjectivity of B is equivalent to A being surjective and 1 ∉ null(A)^⊥. The former condition holds by assumption and the latter condition is in turn equivalent to null(A) ⊄ span(1)^⊥.
The assumption null(A) ∩ R^d_{++} ≠ ∅ implies that null(A) ⊄ span(1)^⊥. Thus, we are in a position to apply Proposition 4.4 and obtain that the unique preimage property of the cone B(R^d_+) is equivalent to B(R^d_+) satisfying the Terracini convexity property. This concludes the proof.

Linear Images of the Positive Semidefinite Matrices
The development of convex relaxations for obtaining low-rank matrices in affine spaces has largely paralleled and built upon the literature on sparse recovery. Notable examples of such problems include factor analysis and collaborative filtering. Concretely, given an affine space in S^d of the form {X ∈ S^d : A(X) = b}, where A : S^d → R^n is a linear map and b ∈ R^n, consider the following optimization problem for identifying a positive-semidefinite low-rank matrix in this space:

(R0)   min_{X ∈ S^d} rank(X)  s.t.  A(X) = b,  X ⪰ 0.

As with the problem (P0), the program (R0) is NP-hard to solve in general. Consequently, the following semidefinite relaxation is widely employed in practice:

(R1)   SDP(A, b) = argmin_{X ∈ S^d} tr(X)  s.t.  A(X) = b,  X ⪰ 0.

By analogy with the analysis of the performance of (P1), we are interested in obtaining conditions under which the unique optimal solution of (R1) with b = A(X⋆) for a low-rank matrix X⋆ ∈ S^d_+ is equal to X⋆, i.e., whether SDP(A, A(X⋆)) = {X⋆}. Our objective in the remainder of this section is to relate such exact recovery to Terracini convexity of an appropriate linear image of S^d_+. As with the previous subsection, our analysis is organized in terms of three properties:

• A linear map A : S^d → R^n satisfies the exact recovery property if for any X⋆ ∈ S^d_+ with rank(X⋆) ≤ k, the unique optimal solution of the semidefinite programming relaxation (R1) is SDP(A, A(X⋆)) = {X⋆}.
• Consider a linear map B : S^d → R^N. The cone B(S^d_+) satisfies the unique preimage property if for any X⋆ ∈ S^d_+ with rank(X⋆) ≤ k, the point B(X⋆) has a unique preimage in S^d_+.
• Consider a linear map B : S^d → R^N. The cone B(S^d_+) satisfies the Terracini convexity property if it is closed and pointed, its extreme rays are in one-to-one correspondence with those of S^d_+, and it is k-Terracini convex.

In what follows, let O^d = {X ∈ S^d : tr(X) = 1, X ⪰ 0} be the spectraplex. This plays the same role as the simplex Δ^d did in Section 4.1. We are now in a position to state the main new result of this section.

Theorem 4.5. Let A : S^d → R^n be a surjective linear map with null(A) ∩ S^d_{++} ≠ ∅, let k < d be a positive integer, and define the linear map B : S^d → R^{n+1} via B(X) = (A(X), tr(X)). If the map A satisfies the exact recovery property, then the cone B(S^d_+) satisfies the Terracini convexity property. Conversely, if the cone B(S^d_+) satisfies a robust analog of the Terracini convexity property (in the sense of the second part of Proposition 4.7 below), then the map A satisfies the exact recovery property.
The proof in the direction from the exact recovery property to the Terracini convexity property largely follows the same sequence of steps as the proof of the analogous direction of Theorem 4.1, although technical care is required due to the fact that the cone of feasible directions into the cone of positive-semidefinite matrices is not closed. In the direction from the Terracini convexity property to the exact recovery property, we require a robust analog of the Terracini convexity property. This condition is in the same spirit as the constraint-qualification type assumptions that are required in the semidefinite programming literature in order to guarantee strict complementarity [Ren01].
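On the positive semidefinite cone, the trace objective of (R1) coincides with the nuclear norm used in the general low-rank recovery literature (an identification used again in Section 4.3, since for X ⪰ 0 the singular values equal the eigenvalues); a quick numerical check with an arbitrary instance:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6
G = rng.standard_normal((d, d))
X = G @ G.T  # positive semidefinite

# For X >= 0, singular values equal eigenvalues, so ||X||_* = tr(X).
nuclear = float(np.linalg.svd(X, compute_uv=False).sum())
assert np.isclose(np.trace(X), nuclear)
```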
Inspired by the two-stage proof in Section 4.1, we begin with the following result that parallels Lemma 4.3.

Lemma 4.6. Consider a surjective linear map A : S^d → R^n with null(A) ∩ S^d_{++} ≠ ∅ and k < d, and define the linear map B : S^d → R^{n+1} via B(X) = (A(X), tr(X)). Then A satisfies the exact recovery property if and only if the cone B(S^d_+) satisfies the unique preimage property.

Proof. As with the proof of Lemma 4.3, in considering the exact recovery property and the unique preimage property, we may assume without loss of generality that tr(X⋆) = 1. With this normalization, the exact recovery property is equivalent to the fact that for any X⋆ ∈ S^d_+ with rank(X⋆) ≤ k, the point A(X⋆) has a unique preimage in the solid spectraplex O^d_0 = {X ∈ S^d : tr(X) ≤ 1, X ⪰ 0}. Consider the implication that the exact recovery property implies the unique preimage property. Assume that the unique preimage property does not hold. Then there exist X⋆ ∈ S^d_+ with rank(X⋆) ≤ k and tr(X⋆) = 1, and a point X̂ ∈ S^d_+ with X̂ ≠ X⋆, such that B(X̂) = B(X⋆). Based on the description of B, we can conclude that tr(X̂) = 1 and therefore X̂ ∈ O^d. This violates the property that A(X⋆) has a unique preimage in O^d_0; hence the exact recovery property does not hold. Conversely, consider the implication that the unique preimage property implies the exact recovery property. Assume for the sake of a contradiction that there exists X̂ ∈ O^d_0 with X̂ ≠ X⋆ and A(X̂) = A(X⋆), and fix X0 ∈ null(A) ∩ S^d_{++} normalized so that tr(X0) = 1. The point X′ = (1 − tr(X̂))X0 + X̂ has the property that B(X′) = B(X⋆). Consequently, we have that X⋆ = X′ = (1 − tr(X̂))X0 + X̂, which in turn implies that X0 and X̂ belong to the smallest face of S^d_+ containing X⋆. However, as rank(X0) = d but rank(X⋆) ≤ k < d, we have the desired contradiction.

Proposition 4.7. Let B : S^d → R^N be a surjective linear map. If the cone B(S^d_+) satisfies the unique preimage property, then it satisfies the Terracini convexity property. Conversely, if B(S^d_+) satisfies a suitable robust analog of the Terracini convexity property, then the cone B(S^d_+) satisfies the unique preimage property.
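The construction of X′ in the proof of Lemma 4.6 rests on the following identity, written out here under the reconstruction B(X) = (A(X), tr(X)) with A(X0) = 0 and tr(X0) = 1:

```latex
B(X') \;=\; \bigl(\mathcal{A}(\hat X) + (1 - \operatorname{tr}\hat X)\,\mathcal{A}(X_0),\;
\operatorname{tr}\hat X + (1 - \operatorname{tr}\hat X)\bigr)
\;=\; \bigl(\mathcal{A}(X^\star),\, 1\bigr) \;=\; B(X^\star).
```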
Remarks: In the direction from the Terracini convexity property to the unique preimage property, the fact that S^d_+ is not polyhedral, unlike R^d_+, complicates matters in comparison to the proof of Proposition 4.4. Specifically, translated to the context of the present result, the reasoning up to (17) in the proof of Proposition 4.4 continues to hold, but the sentence immediately after (17) is no longer true. As stated previously, the nature of this difficulty is akin to the lack of strict complementarity in semidefinite programs (in contrast to linear programs), thus necessitating some type of constraint qualification assumption. The 'robust Terracini' form of the assumption in the second part of this result is similar in spirit to assumptions discussed in [Ren01] to ensure strong duality in conic programs.
Proof. We begin by presenting a dual reformulation of the unique preimage property in terms of the cone of feasible directions K_{S^d_+}(X⋆) at each X⋆ ∈ S^d_+ with rank(X⋆) ≤ k. Unlike the situation with Proposition 4.4, the cone of feasible directions K_{S^d_+}(X⋆) is not closed, which presents additional complications. We prove next that we must have null(B) ∩ cl K_{S^d_+}(X⋆) = {0}; otherwise, there is a low-rank matrix near X⋆ for which the unique preimage property does not hold.
Concretely, suppose for the sake of a contradiction that M ∈ null(B) ∩ cl K_{S^d_+}(X⋆) with M ≠ 0. Without loss of generality, we assume that X⋆ has rank r ∈ {1, ..., k} with row/column space equal to the span of the first r standard basis vectors. For such an X⋆, the closure of the cone of feasible directions K_{S^d_+}(X⋆) takes on a convenient block form, and M ∈ cl K_{S^d_+}(X⋆) may be viewed in terms of its leading r × r block P and its remaining blocks. We now construct a rank-r matrix for which the unique preimage property does not hold, thus violating the given assumption. Choose any matrix W ∈ S^r such that W and W + P are strictly positive definite. This yields a matrix that lies in S^d_+ and has rank equal to r, together with a second, distinct matrix in S^d_+ with the same image under B. Consequently, the image of this rank-r matrix under B has more than one preimage in S^d_+, which gives us the desired contradiction. In summary, we have null(B) ∩ cl K_{S^d_+}(X⋆) = {0} for each X⋆ ∈ S^d_+ with rank(X⋆) ≤ k. In analogy to the case of the nonnegative orthant, the faces of the positive-semidefinite cone enter the argument through a face Ω of S^d_+ and a strictly smaller face Ω̃ ⊊ Ω. Next, we prove that null(B̃)^⊥ ∩ span(Ω) is a subspace of positive dimension by constructing a nonzero element in this subspace. Recall that null(B)^⊥ ∩ ri(Ω̃) ≠ ∅ and that Ω̃ ⊂ Ω. Consider any Z ∈ null(B)^⊥ ∩ ri(Ω̃), which by construction is nonzero. We have the expression Z = M + cI with M ∈ null(A)^⊥ \ {0} and c ∈ R, based on the surjectivity of B. Finally, we consider the preceding two paragraphs together in the context of the Terracini convexity property of the cone B(S^d_+). Specifically, we have that null(B̃)^⊥ ∩ Ω = {0} and that null(B̃)^⊥ ∩ span(Ω) is a subspace of positive dimension. This violates the reformulation of Terracini convexity of B(S^d_+) as span(null(B̃)^⊥ ∩ Ω) = null(B̃)^⊥ ∩ span(Ω), and gives us the desired contradiction.
Given the preceding two results, we now prove Theorem 4.5.

Proof of Theorem 4.5. For the first statement, we are given that null(A) ∩ S^d_{++} ≠ ∅. Hence, we can apply Lemma 4.6 and obtain that the cone B(S^d_+) satisfies the unique preimage property. Next, in preparation to apply the first part of Proposition 4.7, we need to check that the linear map B is surjective, which is equivalent to A being surjective and I ∉ null(A)^⊥. The former condition holds by assumption and the latter condition is in turn equivalent to null(A) ⊄ span(I)^⊥. The assumption null(A) ∩ S^d_{++} ≠ ∅ implies that null(A) ⊄ span(I)^⊥. Thus, we are in a position to apply Proposition 4.7 and obtain that the cone B(S^d_+) satisfies the Terracini convexity property. For the second statement, we can apply the second part of Proposition 4.7 to conclude that the cone B(S^d_+) satisfies the unique preimage property. Applying Lemma 4.6, we conclude that the map A satisfies the exact recovery property.

New Families of Terracini Convex Cones
The results from the preceding section lead naturally to new families of Terracini convex cones. Specifically, from the literature on the semidefinite relaxation (R1) we have that the exact recovery property is satisfied with high probability by random linear maps A of suitable dimension [RFP10, CP11]. Combined with the first part of Theorem 4.5, we obtain Terracini convex cones that are specified as linear images of the cone of positive-semidefinite matrices.
Theorem 4.8. Let A1, ..., An ∈ R^{d×d} be a collection of independent random matrices in which each Ai is a Gaussian random matrix with i.i.d. entries that have zero mean and variance 1/n, let A : S^d → R^n be the associated linear map defined via A(X) = (⟨A1, X⟩, ..., ⟨An, X⟩), and define B : S^d → R^{n+1} via B(X) = (A(X), tr(X)). Fix any ε ∈ (0, 1) and suppose n ≤ (1 − ε) d(d + 1)/4. There exist constants c1, c2 > 0 and c3(ε) > 0 (depending on ε) such that the cone B(S^d_+) is ⌊c1 n/d⌋-Terracini convex with probability at least 1 − 2e^{−c2 n} − e^{−c3(ε) n}.

Proof. We begin with a geometric reformulation of the exact recovery property of Section 4.2 based on the argument presented in Lemma 4.6. Specifically, for a given linear map A : S^d → R^n and a positive integer k, the exact recovery property of Section 4.2 is equivalent to the condition that for any X⋆ ∈ S^d_+ with rank(X⋆) ≤ k and tr(X⋆) = 1, the point A(X⋆) has a unique preimage in the solid spectraplex O^d_0 = {X ∈ S^d : tr(X) ≤ 1, X ⪰ 0}. The results in [RFP10, CP11] concern a more general geometric criterion which can be specialized to our context. These results are stated in terms of the matrix nuclear norm ‖·‖⋆ = Σ_i σ_i(·) (i.e., the sum of the singular values). Consider the linear map Â : R^{d×d} → R^n defined in terms of the Gaussian random matrices A1, ..., An via Â(M) = (⟨A1, M⟩, ..., ⟨An, M⟩). The results in [RFP10, CP11] give constants c1, c2 > 0 such that, with probability greater than 1 − 2e^{−c2 n}, every M⋆ ∈ R^{d×d} with rank(M⋆) ≤ ⌊c1 n/d⌋ is the unique minimizer of ‖M‖⋆ subject to Â(M) = Â(M⋆); moreover, the solid spectraplex O^d_0 ⊂ S^d ⊂ R^{d×d} is a subset of the nuclear norm unit ball. Thus, with the same value of k = ⌊c1 n/d⌋, one can conclude that the linear map A defined by the restriction of Â to the domain S^d satisfies the exact recovery property of Section 4.2 for k = ⌊c1 n/d⌋ with probability greater than 1 − 2e^{−c2 n}. Further, we have that null(A) ∩ S^d_{++} ≠ ∅ with probability at least 1 − e^{−c3(ε) n}. This follows from the observation that the probability that null(A) ∩ S^d_{++} = ∅ is the same as the probability that null(A) ∩ S^d_+ = {0}. This latter quantity can be estimated using the results from [Gor88, CRPW12, ALMT14], using the fact that the positive semidefinite cone is self-dual.
Therefore, by a union bound, the assumptions of the first part of Theorem 4.5 are satisfied, and hence the cone B(S^d_+) is k-Terracini convex, with probability at least 1 − 2e^{−c2 n} − e^{−c3(ε) n}. Thus, in some sense 'most' linear images of the cone of positive semidefinite matrices are k-Terracini convex for a suitable k depending on the dimension of the image of the linear map. This result offers a semidefinite analog of the result of Donoho and Tanner [DT05] on neighborliness of linear images of the nonnegative orthant. Linear images of the positive semidefinite cone are semialgebraic but are generally not basic semialgebraic (as this property is not preserved under linear projections). In the next section, we describe an approach to obtaining basic semialgebraic Terracini convex cones from the positive semidefinite cone via a different construction based on the viewpoint of hyperbolic programming.

Terracini Convexity and Derivative Relaxations of Hyperbolicity Cones
In this section, we study Terracini convexity from a more algebraic perspective by focusing on a class of convex cones that are obtained from hyperbolic polynomials, which are multivariate polynomials possessing certain real-rootedness properties. The associated cones are called hyperbolicity cones, and among the prototypical examples of such cones are the nonnegative orthant and the positive semidefinite cone. Where the previous section demonstrated that generic linear images of the nonnegative orthant and of the positive semidefinite cone are k-Terracini convex for suitable k, here we show that the (algebraically defined) operation of taking derivative relaxations of the nonnegative orthant and of the positive semidefinite cone leads to hyperbolicity cones with nontrivial Terracini convexity properties. As hyperbolicity cones are basic semialgebraic, i.e., they are defined by finitely many polynomial inequalities, a remarkable fact about the k-Terracini convex cones we construct in this section is that they are all basic semialgebraic. In contrast, the k-Terracini convex cones constructed in Section 4 by taking projections of the positive semidefinite cone are, in general, not basic semialgebraic. The rest of the section is organized in the following way. In Section 5.1, we briefly state basic definitions and terminology related to hyperbolic polynomials, hyperbolicity cones, and their derivative relaxations, and we review properties of the boundary and extreme rays of hyperbolicity cones. In Section 5.2 we study tangent cones of hyperbolicity cones and how these interact with derivative relaxations. In particular, we show that the tangent cone to a hyperbolicity cone at a point is the hyperbolicity cone associated with the localization of the associated hyperbolic polynomial at that point. This gives us an algebraic handle on the objects arising in the definition of Terracini convexity. Section 5.3 is focused on establishing the main result on Terracini convexity properties of derivative relaxations of a class of hyperbolicity cones that includes the orthant, the positive semidefinite cone, and the cone of positive semidefinite Hankel matrices.

Hyperbolicity Cones and Their Derivative Relaxations
Hyperbolic polynomials. Let p be a polynomial with real coefficients that is homogeneous of degree d in n variables, and let e ∈ R^n. We say that p is hyperbolic with respect to e if p(e) > 0 and, for each x ∈ R^n, the univariate polynomial t ↦ p(te − x) has only real roots. Given x ∈ R^n, let λ^{p,e}_{max}(x) = λ^{p,e}_1(x) ≥ λ^{p,e}_2(x) ≥ ⋯ ≥ λ^{p,e}_d(x) = λ^{p,e}_{min}(x) denote the roots of t ↦ p(te − x), called the hyperbolic eigenvalues of x with respect to p and e. If p and e are clear from the context, we write λ_1(x), ..., λ_d(x). The rank of x ∈ R^n, denoted rank_p(x), is the number of nonzero hyperbolic eigenvalues of x with respect to p and e. The multiplicity of x is mult_p(x) = deg(p) − rank_p(x), the number of zero hyperbolic eigenvalues of x.
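For p(X) = det(X) on symmetric matrices with e = I, the hyperbolic eigenvalues are the ordinary eigenvalues; the following sketch checks this by comparing the roots of t ↦ det(tI − X) against a symmetric eigendecomposition (the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
G = rng.standard_normal((d, d))
X = (G + G.T) / 2.0  # symmetric

coeffs = np.poly(X)                        # coefficients of det(tI - X)
hyp_eigs = np.sort(np.roots(coeffs).real)  # hyperbolic eigenvalues of X w.r.t. det, I
eigs = np.sort(np.linalg.eigvalsh(X))
assert np.allclose(hyp_eigs, eigs, atol=1e-6)
```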
Hyperbolicity cones. Associated with a hyperbolic polynomial p and direction of hyperbolicity e is the closed hyperbolicity cone Λ_+(p, e) = {x ∈ R^n : λ^{p,e}_{min}(x) ≥ 0}. This is a convex cone, a result due to Gårding [Går59]. We denote the interior of this cone by Λ_{++}(p, e). If ẽ ∈ Λ_{++}(p, e), then p is hyperbolic with respect to ẽ and Λ_+(p, e) = Λ_+(p, ẽ) [Går59]. If p and e are clear from the context, we write Λ_+ instead of Λ_+(p, e) for brevity of notation.
Although the hyperbolic eigenvalues of x with respect to p depend on the choice of e, the multiplicity mult_p(x) and the rank rank_p(x) are independent of the choice of direction of hyperbolicity [Ren06, Proposition 22]. The lineality space of the hyperbolicity cone Λ_+ is exactly the set of points with multiplicity deg(p), i.e., rank zero (see, e.g., [Ren06, Proposition 11]). If we expand p(x + te) in powers of t as p(x + te) = Σ_{i=0}^{d} p_i(x) t^i, then Descartes' rule of signs gives an equivalent description of the hyperbolicity cone as Λ_+(p, e) = {x ∈ R^n : p_i(x) ≥ 0 for i = 0, 1, ..., d}. This shows that any hyperbolicity cone is a basic semialgebraic set, i.e., it can be expressed via finitely many polynomial inequalities.
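For the nonnegative orthant, p(x) = x1 ⋯ xn with e = (1, ..., 1), the expansion p(x + te) = ∏_i (x_i + t) has the elementary symmetric polynomials as coefficients, so the Descartes description specializes to requiring e_k(x) ≥ 0 for all k; a small self-contained check:

```python
from itertools import combinations
from math import prod

def elem_sym(x, k):
    # k-th elementary symmetric polynomial of the entries of x
    return sum(prod(c) for c in combinations(x, k))

def in_cone_via_descartes(x):
    # x lies in the hyperbolicity cone of x1*...*xn (the nonnegative orthant)
    # iff every coefficient e_k(x) of prod_i (x_i + t) is nonnegative
    return all(elem_sym(x, k) >= 0 for k in range(1, len(x) + 1))

assert in_cone_via_descartes([1.0, 2.0, 0.0])        # a point of the orthant
assert not in_cone_via_descartes([-1.0, -1.0, 2.0])  # e_2 = -3 < 0
```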
Derivative relaxations. If p is hyperbolic with respect to e and ẽ ∈ Λ_{++}(p, e), then the directional derivative D_ẽ p, given by (D_ẽ p)(x) = ⟨∇p(x), ẽ⟩, is again hyperbolic with respect to e (by Rolle's theorem). The hyperbolicity cone Λ_+(D_ẽ p, e) satisfies Λ_+(D_ẽ p, e) ⊇ Λ_+(p, e). As such, it is often referred to as a derivative relaxation of Λ_+(p, e). When p, e, and ẽ are clear from the context, we abuse notation and write Λ′_+ := Λ_+(D_ẽ p, e) for brevity.
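Continuing the orthant example, with p(x) = x1 x2 x3 and ẽ = e = (1, 1, 1), the derivative relaxation is the hyperbolicity cone of D_e p(x) = x1 x2 + x1 x3 + x2 x3, which strictly contains the orthant; a point witnessing the strict containment:

```python
# p(x) = x1*x2*x3 is hyperbolic w.r.t. e = (1,1,1); its directional derivative
# D_e p(x) = e_2(x) = x1*x2 + x1*x3 + x2*x3 defines the derivative relaxation.
# Applying Descartes' rule to e_2(x + t*e) = 3t^2 + 2*e1(x)*t + e2(x), membership
# in the derivative cone reduces to e1(x) >= 0 and e2(x) >= 0.
def e1(x): return x[0] + x[1] + x[2]
def e2(x): return x[0] * x[1] + x[0] * x[2] + x[1] * x[2]

x = (-1.0, 2.0, 2.0)
assert min(x) < 0                    # x is not in the orthant Lambda_+
assert e1(x) >= 0 and e2(x) >= 0     # but x lies in the derivative relaxation Lambda'_+
```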
One of the most interesting aspects of derivative relaxations is that boundary points (of high enough multiplicity) of Λ_+ remain boundary points of Λ′_+.
Theorem 5.1 (Renegar [Ren06, Theorem 12]). Let p be hyperbolic with respect to e with hyperbolicity cone Λ_+, and for any ẽ ∈ Λ_{++}(p, e) let the associated derivative relaxation be Λ′_+ = Λ_+(D_ẽ p, e). If x ∈ Λ_+ satisfies mult_p(x) ≥ 2, then x ∈ Λ′_+ and mult_{D_ẽ p}(x) = mult_p(x) − 1; conversely, if x satisfies mult_{D_ẽ p}(x) ≥ 2, then x ∈ Λ_+ and mult_p(x) = mult_{D_ẽ p}(x) + 1.

As a straightforward corollary, we obtain a relationship between the lineality spaces of a hyperbolicity cone and its derivative relaxation.
Corollary 5.2. Under the same hypotheses as Theorem 5.1, if deg(p) ≥ 3 then the lineality spaces of Λ_+ and Λ′_+ coincide.

Proof. This follows from Theorem 5.1 by noting that the lineality space of Λ_+ is exactly the set of x with mult_p(x) = deg(p) and the lineality space of Λ′_+ is exactly the set of x with mult_{D_ẽ p}(x) = deg(p) − 1.
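The multiplicity bookkeeping behind Theorem 5.1 and Corollary 5.2 can be illustrated for p = det along the line t ↦ X + tI: the root t = 0 of det(X + tI) has multiplicity mult_p(X) = d − rank(X), and differentiating in t (which is what D_I does along this line) drops that multiplicity by exactly one (a univariate sketch with a fixed diagonal example, not the full multivariate statement):

```python
import numpy as np

d, r = 5, 2
X = np.diag([3.0, 1.0, 0.0, 0.0, 0.0])  # psd with rank r = 2, so mult_p(X) = d - r = 3

q = np.poly(-X)  # coefficients of det(tI + X) = prod_i (t + lambda_i), highest degree first

def mult_at_zero(c, tol=1e-10):
    # multiplicity of the root t = 0: number of trailing (near-)zero coefficients
    m = 0
    while m < len(c) - 1 and abs(c[-1 - m]) < tol:
        m += 1
    return m

assert mult_at_zero(q) == d - r
assert mult_at_zero(np.polyder(q)) == d - r - 1  # differentiation drops the multiplicity by one
```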
One consequence of Corollary 5.2 is that if deg(p) ≥ 3, then Λ_+ being a pointed cone implies that any derivative relaxation Λ′_+ is also pointed. Building on Corollary 5.2, we can understand how the extreme rays of the derivative cone and the original cone relate to each other. In particular, the extreme rays of derivative relaxations are either extreme rays of the original cone or extreme rays of multiplicity one.

Corollary 5.3. Under the same hypotheses as Theorem 5.1, suppose in addition that Λ_+ is pointed and deg(p) ≥ 3. If x generates an extreme ray of Λ′_+, then either mult_{D_ẽ p}(x) = 1 or x generates an extreme ray of Λ_+.
Proof. As Λ_+ is pointed and deg(p) ≥ 3, it follows from Corollary 5.2 that Λ′_+ is pointed. If x generates an extreme ray of Λ′_+ and mult_{D_ẽ p}(x) ≥ 2, then, by Theorem 5.1, we can conclude that mult_p(x) ≥ 3 and x ∈ Λ_+. Since x ∈ Λ_+ ⊆ Λ′_+ and x generates an extreme ray of Λ′_+, it follows that x generates an extreme ray of Λ_+.

Tangent Cones and Derivative Relaxations
In this section we study tangent cones of hyperbolicity cones, and in particular how tangent cones change when we take derivative relaxations.We first show that the tangent cone of a hyperbolicity cone Λ + (p, e) at a point x is again a hyperbolicity cone (Theorem 5.9) and that the corresponding hyperbolic polynomial is the localization of p at x (Definition 5.4).The main result of the section (Theorem 5.11) is that the tangent cone to Λ ′ + at a boundary point x is the corresponding derivative relaxation of the tangent cone to Λ + at that same point x.This is the key technical result that enables us to understand how k-Terracini convexity is affected by taking derivative relaxations (see Section 5.3).
Example 5.5. Let p(X) = det(X), where X is a d × d symmetric matrix of indeterminates, and let e = I. The corresponding hyperbolicity cone is the cone of d × d positive semidefinite matrices. Suppose that X = [Z, 0; 0, 0], where Z is k × k and positive definite. Then, by the formula for the determinant of a block matrix in terms of the Schur complement, Loc_X(p)(Y) = det(Z) det(Y_22), where Y_22 denotes the lower-right (d − k) × (d − k) block of Y.

There is an alternative formulation of Loc_x(p) in terms of directional derivatives of p in the x direction. This alternative formulation is particularly useful in understanding how derivative relaxations interact with localization. In the forthcoming discussion, we refer on several occasions to higher-order directional derivatives of a hyperbolic polynomial, which we denote as a composition of first-order directional derivatives as D_{y^{(k)}} ⋯ D_{y^{(1)}} p; if the directions y^{(1)}, . . ., y^{(k)} are the same, we denote the associated higher-order directional derivative in a more compact manner as D_y^k p.

Lemma 5.6. If p is a hyperbolic polynomial with respect to e, then Loc_x(p)(y) = (1/mult_p(x)!) D_y^{mult_p(x)} p(x).

Proof. By a Taylor expansion, p(x + λ^{−1} y) = Σ_{k≥0} λ^{−k} (1/k!) D_y^k p(x). Since p vanishes to order mult_p(x) as λ → ∞, and mult_p(x) is independent of the choice of e in the interior of the hyperbolicity cone, it follows that D_y^k p(x) = 0 whenever y ∈ int(Λ_+(p, e)) and 0 ≤ k < mult_p(x). As such, if k < mult_p(x) then y ↦ D_y^k p(x) is a polynomial that vanishes on the interior of the (full-dimensional) hyperbolicity cone, so it must be identically zero. Hence λ^{mult_p(x)} p(x + λ^{−1} y) = (1/mult_p(x)!) D_y^{mult_p(x)} p(x) + O(λ^{−1}). Taking the limit as λ → ∞ we obtain the stated result.
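Both the Schur-complement formula of Example 5.5 and the directional-derivative formula of Lemma 5.6 can be verified numerically. The sketch below (ours, not from the paper) takes d = 4 and k = 2, computes the limit defining Loc_X(p) directly, and extracts the coefficient of t^m in det(X + tY), which equals (1/m!) D_Y^m p(X):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 2                                   # Z is k x k and positive definite
m = 2                                   # multiplicity of X w.r.t. det is m = d - k
d = k + m
Z = np.diag([2.0, 1.0])
X = np.zeros((d, d)); X[:k, :k] = Z

Y = rng.standard_normal((d, d)); Y = Y + Y.T     # random symmetric direction
Y22 = Y[k:, k:]
schur = np.linalg.det(Z) * np.linalg.det(Y22)    # det(Z) det(Y22)

# Definition of the localization: Loc_X(p)(Y) = lim lam^m det(X + Y/lam).
lam = 1e6
numeric_limit = lam**m * np.linalg.det(X + Y / lam)

# Lemma 5.6: Loc_X(p)(Y) = (1/m!) D_Y^m p(X), i.e. the coefficient of t^m
# in det(X + t Y); extract it by exact polynomial interpolation.
ts = np.linspace(-2.0, 2.0, d + 1)
vals = [np.linalg.det(X + t * Y) for t in ts]
coeff_m = np.polynomial.polynomial.polyfit(ts, vals, d)[m]

print(abs(numeric_limit - schur) < 1e-3, abs(coeff_m - schur) < 1e-6)  # → True True
```

The lower-order coefficients of det(X + tY) vanish, which is exactly the statement that X has multiplicity m with respect to det.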
We now consider localization of a hyperbolic polynomial from the point of view of its zeros. To do so, we use the following basic fact about how hyperbolic eigenvalues change along different directions.

Lemma 5.7. Let p be hyperbolic with respect to e with deg(p) = d. For any x and u we can factor p(x + su − te) = p(e) ∏_{i=1}^d (t_i(s; x, u) − t), where the functions s ↦ t_i(s; x, u) are real analytic functions of s. Furthermore, if u ∈ Λ_+(p, e) then t'_i(s; x, u) := (d/ds) t_i(s; x, u) ≥ 0 for all s.

The roots of the polynomial t ↦ p(x − te + su) are the eigenvalues of x + su, and therefore each t_i(s; x, u) in the above lemma is an eigenvalue of x + su. The assertion that the functions s ↦ t_i(s; x, u) are real analytic corresponds to the eigenvalues of x + su being analytic functions of s, and the nonnegativity of each of the derivatives t'_i(s; x, u) (when u ∈ Λ_+(p, e)) corresponds to each of the eigenvalues of x + su being a non-decreasing function of s. This result is useful because it allows us to understand localization from the point of view of eigenvalues. Expanding t_i(λ^{−1}; x, y) about t_i(0; x, y) gives t_i(λ^{−1}; x, y) = λ_i(x) + λ^{−1} t'_i(0; x, y) + O(λ^{−2}). We obtain (20) by taking the limit as λ → ∞.
We are interested in the localization of a hyperbolic polynomial at a point because it turns out to be the algebraic analogue of the geometric operation of taking the tangent cone to a hyperbolicity cone at a point.Although this is probably well-known, we have included a proof because we had difficulty finding an explicit statement of this type in the literature.
We now show that localization at x is the algebraic analog of the tangent cone to the hyperbolicity cone at x.

Proof. The fact that the localization is hyperbolic with respect to e is exactly [ABG70, Lemma 3.42], and it also follows immediately from Lemma 5.8 and the fact that the t'_i(0; x, y) are always real.
For the reverse inclusion, suppose that z ∈ K_{Λ_+(p,e)}(x). In other words, there exists a sufficiently large positive λ_0 such that x + λ^{−1} z ∈ Λ_+(p, e) for all λ ≥ λ_0. Then the polynomial t ↦ λ^{mult_p(x)} p(x + λ^{−1}(z + te)) has nonnegative coefficients for all λ ≥ λ_0. By continuity of the coefficients as functions of λ, it follows that t ↦ lim_{λ→∞} λ^{mult_p(x)} p(x + λ^{−1}(z + te)) = Loc_x(p)(z + te) has nonnegative coefficients as well.

Proof. Let m = mult_p(x) ≥ 1. On the one hand, Loc_x(p)(y) = (1/m!) D_y^m p(x). Differentiating in the direction ẽ gives D_ẽ Loc_x(p)(y) = (1/(m−1)!) D_ẽ D_y^{m−1} p(x), where we have used the fact that D_{y^{(1)}} ⋯ D_{y^{(m)}} p(x) is invariant under permutations of y^{(1)}, . . ., y^{(m)}. On the other hand, x has multiplicity m − 1 ≥ 0 with respect to D_ẽ p. As such, Loc_x(D_ẽ p)(y) = (1/(m−1)!) D_y^{m−1} D_ẽ p(x) = (1/(m−1)!) D_ẽ D_y^{m−1} p(x). We have shown that D_ẽ Loc_x(p) = Loc_x(D_ẽ p), from which the result directly follows.
The fact that localization and taking derivatives commute tells us that the convex tangent space of a hyperbolicity cone is exactly the same as the convex tangent space of its derivative relaxation at points of high enough multiplicity.
Proof. From Theorem 5.9, the convex tangent space of Λ_+ at x is the lineality space of Λ_+(Loc_x(p), e), and similarly the convex tangent space of Λ'_+ at x is the lineality space of Λ_+(Loc_x(D_ẽ p), e). By Theorem 5.11, Λ_+(Loc_x(D_ẽ p), e) = Λ_+(D_ẽ Loc_x(p), e) is a derivative relaxation of Λ_+(Loc_x(p), e), and since deg(Loc_x(p)) = mult_p(x) ≥ 3, Corollary 5.2 implies that the two lineality spaces coincide.

Derivative Relaxations of Terracini Convex Hyperbolicity Cones
In this section we state and prove two results related to Terracini convexity properties of derivative relaxations of hyperbolicity cones. The first, Proposition 5.13, gives a sufficient condition under which any derivative relaxation of a hyperbolicity cone that is k-Terracini convex also has non-trivial Terracini convexity properties. It is, a priori, unclear whether the hypotheses of Proposition 5.13 hold for any interesting examples. In the main result of this section (Theorem 5.14), we show that if Λ_+ is a hyperbolicity cone that is Terracini convex and for which all of its extreme rays have hyperbolic rank one, then repeatedly taking derivative relaxations produces new examples of hyperbolicity cones with Terracini convexity properties. Examples of hyperbolicity cones to which Theorem 5.14 applies are the nonnegative orthant and the positive semidefinite cone, as well as other examples such as the cone of d × d positive semidefinite Hankel matrices.

Proposition 5.13. Suppose that p is hyperbolic with respect to e, the degree deg(p) ≥ 3, and the associated hyperbolicity cone Λ_+ is pointed and k-Terracini convex. If each collection x^{(1)}, . . ., x^{(k′)} of k′ extreme rays of Λ_+ satisfies one of the following conditions:
• there exists j such that mult_p(x^{(j)}) ≤ 2, or
• mult_p(x^{(1)} + ⋯ + x^{(k′)}) ≥ 3,
then the derivative relaxation Λ'_+ is min{k, k′}-Terracini convex.

Remarks: The case deg(p) = 1 is vacuous as Λ_+ is a halfspace and Terracini convexity requires a cone to be pointed. For similar reasons, the case deg(p) = 2 is not interesting as deg(D_ẽ p) = 1 and Λ'_+ is a halfspace.
Proof. Let ℓ = min{k, k′} and let x^{(1)}, . . ., x^{(ℓ)} be extreme rays of Λ'_+ (note that Λ'_+ is pointed as Λ_+ is pointed and deg(p) ≥ 3). We next consider two cases based on the multiplicities of the x^{(j)}'s with respect to the derivative polynomial D_ẽ p.
Case 1: Assume that there exists j such that mult_{D_ẽ p}(x^{(j)}) = 1. In this case, the localization of D_ẽ p at x^{(j)} has degree one, which implies that Λ_+(Loc_{x^{(j)}}(D_ẽ p), e) is a halfspace; therefore, from the second part of Theorem 5.9, the convex tangent space of Λ'_+ at x^{(j)} is a subspace of codimension one. If all of x^{(1)}, . . ., x^{(ℓ)} generate the same extreme ray, then so does Σ_{i=1}^ℓ x^{(i)}. This means that all of the convex tangent spaces of Λ'_+ at these points are the same, so certainly L_{Λ'_+}(Σ_{i=1}^ℓ x^{(i)}) = Σ_{i=1}^ℓ L_{Λ'_+}(x^{(i)}). Otherwise there is some x^{(j′)} that generates an extreme ray that is distinct from x^{(j)}. Since L_{Λ'_+}(x^{(j)}) ∩ Λ'_+ exposes the extreme ray generated by x^{(j)} (from Lemma 2.4, as hyperbolicity cones are facially exposed [Ren06, Theorem 23]), it follows that x^{(j′)} ∉ L_{Λ'_+}(x^{(j)}). Since the convex tangent space of Λ'_+ at x^{(j)} has codimension one and does not contain x^{(j′)}, the sum Σ_{i=1}^ℓ L_{Λ'_+}(x^{(i)}) is the whole space, and so L_{Λ'_+}(Σ_{i=1}^ℓ x^{(i)}) = Σ_{i=1}^ℓ L_{Λ'_+}(x^{(i)}) holds in this case as well.

Case 2: Assume that mult_{D_ẽ p}(x^{(i)}) ≥ 2 for all i = 1, 2, . . ., ℓ. From Corollary 5.3, it follows that mult_p(x^{(i)}) ≥ 3 for all i = 1, 2, . . ., ℓ and that the x^{(i)} all generate extreme rays of Λ_+. As ℓ ≤ k′ and by our assumption on the extreme rays of Λ_+, it follows that mult_p(Σ_{i=1}^ℓ x^{(i)}) ≥ 3. Then

L_{Λ'_+}(Σ_{i=1}^ℓ x^{(i)}) = L_{Λ_+}(Σ_{i=1}^ℓ x^{(i)}) = Σ_{i=1}^ℓ L_{Λ_+}(x^{(i)}) = Σ_{i=1}^ℓ L_{Λ'_+}(x^{(i)}).   (22)

The first and third equalities in (22) follow from Corollary 5.12 together with the fact that x^{(i)} (for each i) and Σ_{i=1}^ℓ x^{(i)} have multiplicity at least three with respect to p. The second equality in (22) follows from the fact that Λ_+ is k-Terracini convex and ℓ ≤ k.
While Proposition 5.13 may appear rather technical, it is useful because it applies when we repeatedly take derivative relaxations. Indeed, we have as an immediate consequence that for a hyperbolic polynomial p with deg(p) = 3, if Λ_+ is Terracini convex then so is any derivative relaxation Λ'_+; this follows from the observation that the multiplicity of any generator of an extreme ray of Λ_+ is at most two. For higher-degree hyperbolic polynomials, we next present the main result of this section, which shows that for Terracini convex hyperbolicity cones with all extreme rays having hyperbolic rank one, the derivative relaxations yield new hyperbolicity cones with nontrivial Terracini convexity properties. Recall that the rank of a point with respect to a hyperbolic polynomial is the number of non-zero eigenvalues, and that a cone is Terracini convex if it is k-Terracini convex for all k.
Theorem 5.14. Let p be hyperbolic with respect to e and let d = deg(p) with d > 3. Suppose that Λ_+(p, e) is pointed and Terracini convex and that whenever x generates an extreme ray of Λ_+(p, e) we have rank_p(x) = 1. Then, for each 1 ≤ ℓ ≤ d − 3, the ℓ-th derivative relaxation Λ^{(ℓ)}_+ is (d − ℓ − 2)-Terracini convex.

Proof. For brevity of notation, we write p^{(ℓ)} for the hyperbolic polynomial obtained from p by ℓ successive directional derivatives in directions ẽ^{(1)}, . . ., ẽ^{(ℓ)} ∈ Λ_{++}(p, e), and Λ^{(ℓ)}_+ := Λ_+(p^{(ℓ)}, e) for the associated derivative relaxation. It is helpful in our proof to use the observation that any x that generates an extreme ray of Λ^{(ℓ)}_+ either generates an extreme ray of Λ_+ := Λ_+(p, e) or satisfies mult_{p^{(ℓ)}}(x) = 1. We show both this secondary result as well as the primary result via induction.
For the base case of the secondary result, note that if x generates an extreme ray of Λ^{(1)}_+ then by Corollary 5.3 either mult_{p^{(1)}}(x) = 1 or x generates an extreme ray of Λ_+ (with mult_p(x) ≥ 3). For the base case of the primary result, note that Λ_+ is Terracini convex. If x^{(1)}, . . ., x^{(d−3)} are extreme rays of Λ_+, then their sum has rank at most d − 3 (since the hyperbolic rank function is subadditive [AB18]) and hence has multiplicity at least three with respect to p. It follows from Proposition 5.13 that Λ^{(1)}_+ is (d − 3)-Terracini convex.

For the inductive hypothesis of the secondary result, assume that if x generates an extreme ray of Λ^{(ℓ−1)}_+ then either x generates an extreme ray of Λ_+ or mult_{p^{(ℓ−1)}}(x) = 1. For the primary result, assume that Λ^{(ℓ−1)}_+ is (d − ℓ − 1)-Terracini convex. We now establish the inductive step for the secondary result. If x generates an extreme ray of Λ^{(ℓ)}_+ then by Corollary 5.3 either mult_{p^{(ℓ)}}(x) = 1 or x generates an extreme ray of Λ^{(ℓ−1)}_+ with mult_{p^{(ℓ−1)}}(x) ≥ 3, and so by the inductive hypothesis x generates an extreme ray of Λ_+.
Finally, we establish the inductive step of the primary result by applying Proposition 5.13. Let x^{(1)}, . . ., x^{(d−ℓ−2)} be extreme rays of Λ^{(ℓ−1)}_+. Assume that each x^{(i)} has multiplicity at least three with respect to p^{(ℓ−1)} (otherwise we are done). Based on the inductive hypothesis, each x^{(i)} must be an extreme ray of Λ_+ and so must have rank one with respect to p by assumption. Then Σ_{i=1}^{d−ℓ−2} x^{(i)} has rank at most d − ℓ − 2, and hence multiplicity at least ℓ + 2, with respect to p. By applying Theorem 5.1 ℓ − 1 times, and noting that p, p^{(1)}, . . ., p^{(ℓ−1)} all have degree at least three, we see that Σ_{i=1}^{d−ℓ−2} x^{(i)} has multiplicity at least ℓ + 2 − (ℓ − 1) = 3 with respect to p^{(ℓ−1)}. Then, by Proposition 5.13, Λ^{(ℓ)}_+ is (d − ℓ − 2)-Terracini convex, which completes the induction.

We conclude by discussing three concrete special cases of Theorem 5.14.
Example 5.15 (Hyperbolicity cones associated with permanents). If p(x) = ∏_{i=1}^d x_i and e is the vector of all ones, then the corresponding hyperbolicity cone is the nonnegative orthant. This is Terracini convex and all of its extreme rays have rank one. In this case, if e^{(1)}, . . ., e^{(ℓ)} ∈ R^d_{++}, then D_{e^{(ℓ)}} ⋯ D_{e^{(1)}} p(x) is, up to a positive constant that does not affect the associated cone, the permanent of the d × d matrix with columns e^{(1)}, . . ., e^{(ℓ)} and d − ℓ copies of x. Theorem 5.14 then tells us that the hyperbolicity cone associated with this permanent is (d − ℓ − 2)-Terracini convex as long as 1 ≤ ℓ ≤ d − 3.
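The relation between directional derivatives of ∏_i x_i and permanents can be checked on a small instance. The sketch below (ours, not from the paper) verifies it for d = 3 and ℓ = 1; note the normalizing constant (d − ℓ)!, which rescales the polynomial but not its hyperbolicity cone.

```python
import itertools
import math
import numpy as np

def permanent(M):
    # Permanent via its definition: sum over all permutations.
    n = M.shape[0]
    return sum(
        math.prod(M[i, s[i]] for i in range(n))
        for s in itertools.permutations(range(n))
    )

d = 3
x = np.array([1.0, 2.0, 3.0])
u = np.array([1.0, 1.0, 1.0])

# D_u p(x) = sum_j u_j * prod_{i != j} x_i  (derivative of a product).
D_u_p = sum(u[j] * math.prod(x[i] for i in range(d) if i != j) for j in range(d))

# The permanent of the matrix with columns u, x, x equals (d-1)! * D_u p(x).
M = np.column_stack([u, x, x])
print(permanent(M), math.factorial(d - 1) * D_u_p)  # both equal 22
```

Here D_u p(x) = 11 and the permanent is 22 = 2! · 11, consistent with the scaling convention noted above.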
Example 5.16 (Hyperbolicity cones associated with mixed discriminants). If p(X) = det(X) and e is the identity matrix, then the corresponding hyperbolicity cone is the positive semidefinite cone. This is Terracini convex and all of its extreme rays have rank one. In this case, if E^{(1)}, . . ., E^{(ℓ)} are positive definite matrices then the quantity D_{E^{(ℓ)}} ⋯ D_{E^{(1)}} p(X) is known as the mixed discriminant of the d-tuple of matrices (E^{(1)}, . . ., E^{(ℓ)}, X, . . ., X). Theorem 5.14 then tells us that the hyperbolicity cone associated with this mixed discriminant is (d − ℓ − 2)-Terracini convex as long as 1 ≤ ℓ ≤ d − 3.
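As a small sanity check (ours, not from the paper), the first-order case of the mixed discriminant is the directional derivative D_E det(X), which by Jacobi's formula equals det(X) tr(X^{−1} E) for invertible X:

```python
import numpy as np

# D_E det(X) = d/dt det(X + t E)|_{t=0} = det(X) * tr(X^{-1} E)
# for invertible X (Jacobi's formula).
X = np.array([[2.0, 1.0],
              [1.0, 3.0]])
E = np.array([[1.0, 0.0],
              [0.0, 2.0]])

jacobi = np.linalg.det(X) * np.trace(np.linalg.solve(X, E))

# Cross-check with a central finite difference of t -> det(X + t E).
h = 1e-6
central_diff = (np.linalg.det(X + h*E) - np.linalg.det(X - h*E)) / (2*h)
print(jacobi, abs(jacobi - central_diff) < 1e-6)
```

For this instance det(X + tE) = 5 + 7t + 2t², so both computations give 7.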
Example 5.17 (Hyperbolicity cones associated with mixed discriminants of Hankel matrices). Consider the cone H_{d+1} of (d + 1) × (d + 1) symmetric positive semidefinite Hankel matrices. This can be viewed as the hyperbolicity cone associated with the determinant restricted to the (2d + 1)-dimensional subspace of Hankel matrices. Its extreme rays have the form φ_{2,d}(x, y)φ_{2,d}(x, y)′ and are rank one as symmetric matrices, and therefore have rank one with respect to the determinant polynomial. The cone H_{d+1} is also linearly isomorphic to the cone C_{2,2d} over the homogeneous moment curve of degree 2d, which is Terracini convex from Corollary 3.16. As such, if we choose E^{(1)}, . . ., E^{(ℓ)} to be positive definite (d + 1) × (d + 1) Hankel matrices and if 1 ≤ ℓ ≤ d − 2, then the mixed discriminant of (E^{(1)}, . . ., E^{(ℓ)}, X, . . ., X) restricted to Hankel matrices X yields an associated hyperbolicity cone that is (d − 1 − ℓ)-Terracini convex.

Discussion
In this paper we introduced the notion of Terracini convex cones, generalizing the notion of neighborly polyhedral cones to the non-polyhedral setting in a way that includes examples such as the positive semidefinite cone and the cone over the moment curve.This suggests the pursuit of a broader program that seeks to extend key notions from polyhedral combinatorics to more general convex cones.
Explicit constructions A significant feature of the literature on neighborly polytopes -arguably, a principal reason for considering such polytopes in the first place -is that they offer examples of various extremal polyhedral constructions. Obtaining similar constructions with non-polyhedral Terracini convex cones would offer an interesting point of comparison with the polyhedral case. For example, we do not know whether the non-degeneracy and regularity conditions of Section 3 are necessary to conclude that k-neighborly cones are k-Terracini convex, and identifying potential counterexamples would provide an interesting extremal class of convex cones. In a different direction, explicit constructions of linear images of the positive-semidefinite cone that are Terracini convex would immediately yield explicit (non-random) families of linear maps for which the associated low-rank inverse problems considered in Section 4.2 may be solved exactly via semidefinite programming; despite significant attention devoted to this question, we are not aware of any such families of linear maps.
Beyond generalizing neighborliness A simplicial polytope is one in which every proper face is a simplex. In the spirit of this paper, a natural analogue in the non-polyhedral conic setting would be a closed pointed convex cone for which every proper face is Terracini convex. Let us call such convex cones boundary Terracini convex. Clearly the cone over any simplicial polytope is boundary Terracini convex, but boundary Terracini convex cones are a much richer class. One interesting example is the epigraph of the nuclear norm, i.e., {(X, t) ∈ R^{m×m} × R : ‖X‖_⋆ ≤ t}. One can check that all of the proper faces of this convex cone are linearly isomorphic to positive semidefinite cones. Moreover, we can deduce from [Ren06, Corollary 17] that if Λ_+ is a hyperbolicity cone that is boundary Terracini convex, then so are derivative relaxations of Λ_+ (as long as they are pointed).
It would be interesting to study such boundary Terracini convex cones in more detail.

Weaker notions of Terracini convexity
The key condition (2) in the definition of k-Terracini convexity is required to hold for every subset of at most k extreme rays.It is natural to consider weaker notions of k-Terracini convexity that only require (2) to hold for 'many' subsets of at most k extreme rays.By ruling out certain explicit configurations of k extreme rays such a definition would generalize important existing variations on neighborliness, such as k-neighborly centrally symmetric polytopes (in which subsets of k extreme points containing an antipodal pair are excluded).Another approach would be to require that (2) hold for suitably generic subsets of at most k extreme rays.
Seeking and studying examples of convex cones that are generically k-Terracini convex but not k-Terracini convex would lead to a deeper understanding of Terracini convexity and its variants.
Possible further constructions of k-Terracini convex cones We have seen that the positive semidefinite cone and the cone over the moment curve (or equivalently the cone of Hankel positive semidefinite matrices) are Terracini convex. These are both examples of spectrahedral cones (intersections of a positive semidefinite cone with a subspace) with all extreme rays having rank one. This very special class of spectrahedral cones was classified by Blekherman, Sinn, and Velasco [BSV17] and is closely connected to questions about the relationship between nonnegative polynomials and sums of squares. It would be interesting to investigate the Terracini convexity properties of spectrahedral cones with only rank one extreme rays. Going one step further, one could similarly investigate the Terracini convexity properties of hyperbolicity cones with only (hyperbolic) rank one extreme rays. Unlike the spectrahedral setting, we are not aware of any nontrivial characterization of this class of convex cones. Theorem 4.8 shows that with high probability, Gaussian random linear images of the positive semidefinite cone are k-Terracini convex, for a suitable k. The specific properties of the positive semidefinite cone are only used in isolated places in the argument, and do not seem to be essential. It is plausible that there is an analogue of Theorem 4.8 in which the positive semidefinite cone is replaced with any Terracini convex hyperbolicity cone, or perhaps even any Terracini convex cone. If this were the case, it would be a substantial further generalization of the fact that Gaussian random linear images of the simplex are k-neighborly polytopes, for suitable k [DT05]. It would also suggest the broader applicability of the notion of Terracini convexity for understanding convex relaxations of inverse problems.
Obstructions to lifts of convex sets Another setting in which neighborliness is useful, and Terracini convexity may find applications, is in the study of lifted representations of convex sets. Given a convex set C and a closed convex cone K, we say that C has a K-lift if we can express C as the linear projection of an affine slice of K. Such representations of convex sets are of importance when convex optimization problems are expressed in conic form. In particular, they play a prominent role in the study of the expressive power of linear, second-order cone, and semidefinite programming of a given size (see, e.g., [FGP + 20] for a recent survey). It turns out that if a convex set satisfies a notion of k-neighborliness that is somewhat weaker than that studied in this paper, then it cannot have a K-lift where K is a product of finitely many k × k positive semidefinite cones [Ave19]. A natural extension of this line of inquiry would be to investigate whether Terracini convexity properties also provide obstructions to the existence of certain lifted representations of convex sets.

C
(a) increase j by one, (b) set I_j = I_{j−1} ∪ {i}, and (c) set F^{(j)}_C = F_C(Σ_{m∈I_j} x^{(m)}). The sequence of faces ends at F_C({x^{(1)}, . . ., x^{(k)}}), and therefore forms a chain of faces of C of length at most H(C). As F_C({x^{(1)}, . . ., x^{(k)}}) = F_C(Σ_{i=1}^k x^{(i)}) and as the index set I_j satisfies |I_j| ≤ H(C) − 1, setting I = I_j leads to the desired conclusion.

• A linear map A : R^d → R^n satisfies the exact recovery property if, for each x⋆ ∈ R^d_+ with |support(x⋆)| ≤ k, the optimal solution set of the linear programming relaxation (P1) is LP(A, Ax⋆) = {x⋆}.
• Consider a linear map B : R^d → R^N. The cone B(R^d_+) satisfies the unique preimage property if, for each x⋆ ∈ R^d_+ with |support(x⋆)| ≤ k, the point Bx⋆ has a unique preimage in R^d_+.
• Consider a linear map B : R^d → R^N. The cone B(R^d_+) satisfies the Terracini convexity property if it is pointed, it has d extreme rays, and it is k-Terracini convex.

Given these notions, we next state the result of Donoho and Tanner in conic form:

Theorem 4.1. Consider a linear map A : R^d → R^n that is surjective and define the linear map B : R^d → R^{n+1} as Bx = (Ax, ⟨1, x⟩). Suppose that null(A) ∩ R^d_{++} = ∅. Fix a positive integer k < d. The map A satisfies the exact recovery property if and only if the cone B(R^d_+) satisfies the Terracini convexity property.
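To make the exact recovery property concrete, here is a small hand-checkable sketch (ours, not from the paper; we take (P1) to be the standard nonnegative linear programming relaxation min ⟨1, x⟩ subject to Ax = Ax⋆, x ≥ 0, an assumption since (P1) is stated elsewhere in the paper):

```python
import numpy as np
from scipy.optimize import linprog  # assumes SciPy is available

# Hand-checkable instance with d = 3, n = 2, k = 1.
A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0]])
x_star = np.array([1.0, 0.0, 0.0])  # 1-sparse nonnegative vector

# Feasible points are x = (1 + t, t, t) with t >= 0, so <1, x> = 1 + 3t
# is uniquely minimized at t = 0, i.e. at x_star itself.
res = linprog(c=np.ones(3), A_eq=A, b_eq=A @ x_star, bounds=(0, None))
print(res.status, res.x)  # status 0 means the LP was solved; res.x recovers x_star
```

Because the feasible set is a single ray emanating from x⋆, recovery here is certain; for generic A one would instead check recovery over all sufficiently sparse x⋆.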

Lemma 4.3.
Consider a linear map A : R^d → R^n and define the linear map B : R^d → R^{n+1} as Bx = (Ax, ⟨1, x⟩). Suppose that null(A) ∩ R^d_{++} = ∅. Fix a positive integer k < d. The map A satisfies the exact recovery property if and only if the cone B(R^d_+) satisfies the unique preimage property.

Proof. For the case x⋆ = 0, one can check that LP(A, 0) = {0} and that the unique preimage of 0 ∈ R^{n+1} under the map B in R^d_+ is 0, which in turn implies that x_0 and x belong to the smallest face of R^d_+ containing x⋆, i.e., support(x_0) ⊆ support(x⋆) and support(x) ⊆ support(x⋆). However, as |support(x_0)| = d but |support(x⋆)| ≤ k < d, we have the desired contradiction.

Our next result relates the unique preimage property to the Terracini convexity property:

Proposition 4.4. Consider a linear map A : R^d → R^n and define the linear map B : R^d → R^{n+1} as Bx = (Ax, ⟨1, x⟩). Suppose the map B is surjective. Fix a positive integer k. The cone B(R^d_+) satisfies the unique preimage property if and only if it satisfies the Terracini convexity property.

Proof. First, we give a dual reformulation of the unique preimage property. For each x⋆ ∈ R^d_+ with |support(x⋆)| ≤ k, the property that Bx⋆ has a unique preimage in R^d_+ is equivalent to the transverse intersection condition null(B) ∩ K_{R^d_+}(x⋆) = {0}. The cone K_{R^d_+}(x⋆) is closed, and therefore one can check that this transverse intersection condition is equivalent to null(B)^⊥ ∩ ri(N_{R^d_+}(x⋆)) ≠ ∅. As the nonnegative orthant is a self-dual cone, the normal cone N_{R^d_+}(x⋆) is given by a face of R^d_+ of co-dimension at most k. In summary, the unique preimage property states that for any face Ω of R^d_+ of co-dimension at most k, we have that null(B)^⊥ ∩ ri(Ω) ≠ ∅. Second, we note that the cone B(R^d_+) is pointed by construction. As the linear map B is surjective, elements of the normal cone N_{B(R^d_+)}(Bx) for any x ∈ R^d_+ are in one-to-one correspondence with null(B)^⊥ ∩ N_{R^d_+}(x). Consequently, by appealing to Proposition 1.7,
the cone B(R d + ) being k-Terracini convex is equivalent to the condition that for any face Ω of R d + of co-dimension at most k, we have that span(null(B) ⊥ ∩ Ω) = null(B) ⊥ ∩ span(Ω).

Theorem 4.5.
Consider a linear map A : S^d → R^n and fix a positive integer k < d. Then the following two statements hold:
1. Suppose that A is surjective and null(A) ∩ S^d_{++} = ∅. Consider the linear map B : S^d → R^{n+1} defined as B(X) = (A(X), tr(X)). If the map A satisfies the exact recovery property, then the cone B(S^d_+) satisfies the Terracini convexity property.
2. Assume that n > (d+1 choose 2) − (d−k+1 choose 2), and suppose there exists an open set S in the space of linear maps from S^d to R^n with the following properties:
• A ∈ S;
• for each Ã ∈ S, the map Ã is surjective and satisfies null(Ã) ∩ S^d_{++} = ∅;
• for each Ã ∈ S with associated B̃ : S^d → R^{n+1} defined as B̃(X) = (Ã(X), tr(X)), the cone B̃(S^d_+) satisfies the Terracini convexity property.
Then the map A satisfies the exact recovery property.
Fix a positive integer k < d.The map A satisfies the exact recovery property if and only if the cone B(S d + ) satisfies the unique preimage property.

The next proposition represents the main new component of the proof of Theorem 4.5:

Proposition 4.7. Consider a linear map A : S^d → R^n and define the linear map B : S^d → R^{n+1} as B(X) = (A(X), tr(X)). Fix a positive integer k. Then we have the following two results:
1. Suppose the map B is surjective. If the cone B(S^d_+) satisfies the unique preimage property, then it satisfies the Terracini convexity property.
2. Assume that n > (d+1 choose 2) − (d−k+1 choose 2), and suppose there exists an open set S in the space of linear maps from S^d to R^n satisfying the following conditions:
• A ∈ S;
• for each Ã ∈ S, the associated linear map B̃ : S^d → R^{n+1} defined as B̃(X) = (Ã(X), tr(X)) is surjective and the cone B̃(S^d_+) satisfies the Terracini convexity property.
Then the cone B(S^d_+) satisfies the unique preimage property.

given by a face of S^d_+ of dimension at least (d−k+1 choose 2) (corresponding to positive-semidefinite matrices with row/column space orthogonal to those of X⋆). Thus, the unique preimage property states that for any face Ω of S^d_+ of dimension at least (d−k+1 choose 2), we have that null(B)^⊥ ∩ ri(Ω) ≠ ∅. Next, we note that for each Ã ∈ S, the associated linear map B̃ is such that the cone B̃(S^d_+) is closed and pointed by construction. Further, each B̃ is surjective by assumption. Thus, elements of the normal cone N_{B̃(S^d_+)}(B̃(X)) are in one-to-one correspondence with those of null(B̃)^⊥ ∩ N_{S^d_+}(X) for each X ∈ S^d_+. Hence, by appealing to Proposition 1.7, the Terracini convexity property states that for any face Ω of S^d_+ of dimension at least (d−k+1 choose 2), we have that span(null(B̃)^⊥ ∩ Ω) = null(B̃)^⊥ ∩ span(Ω). With these reformulations of the unique preimage property and the Terracini convexity property, we now proceed to establish the result.

Proof of Statement 1. To prove the first result, we begin by noting that the unique preimage property applied to rank-one elements of S^d_+ implies that the cone B(S^d_+) has extreme rays in one-to-one correspondence with those of S^d_+. Next, let M ∈ null(B)^⊥ ∩ ri(Ω). Letting U be an open set in S^d containing the origin, we have that M + ǫ[U ∩ null(B)^⊥ ∩ span(Ω)] ⊂ null(B)^⊥ ∩ ri(Ω) for a sufficiently small ǫ > 0. Consequently, we can conclude that span(null(B)^⊥ ∩ Ω) = null(B)^⊥ ∩ span(Ω), which is equivalent to the Terracini convexity condition.

Proof of Statement 2. Next we consider the second statement. Fix a face Ω of S^d_+ of dimension at least (d−k+1 choose 2). Suppose for the sake of a contradiction that null(B)^⊥ ∩ ri(Ω) = ∅. As n > (d+1 choose 2) − (d−k+1 choose 2), we have that null(B)^⊥ ∩ span(Ω) is a subspace of positive dimension in S^d. By the Terracini convexity property applied to the cone B(S^d_+), we have that null(B)^⊥ ∩ Ω ≠ {0}. Hence, there exists a proper face Ω̃ of S^d_+ such that Ω̃ ⊊ Ω, null(B)^⊥ ∩ Ω̃ = null(B)^⊥ ∩ Ω, and null(B)^⊥ ∩ ri(Ω̃) ≠ ∅. As a consequence, there also exists an element W ∈ [Ω ∩ span(Ω̃)^⊥] \ {0}. We use the W available to us to construct a linear map Ã in S. Specifically, there exists ǫ > 0 such that null(Ã)^⊥ = {M − ǫ⟨M, W⟩W : M ∈ null(A)^⊥} for some Ã ∈ S. Associated to this Ã is the linear map B̃. We show next that null(B̃)^⊥ ∩ Ω = {0}. As B̃ is surjective, we may consider the direct sum decomposition null(B̃)^⊥ = null(Ã)^⊥ ⊕ span(I). Thus, for any Y ∈ null(B̃)^⊥, we have the decomposition Y = M − ǫ⟨M, W⟩W + cI for some M ∈ null(A)^⊥ and c ∈ R. If Y ∈ Ω then one can check that Y + ǫ⟨M, W⟩W ∈ Ω, and in particular that Y + ǫ⟨M, W⟩W ∉ Ω̃ based on the construction of W, unless M = 0. But we also have that Y + ǫ⟨M, W⟩W = M + cI and M + cI ∈ Ω, which implies that M = 0 and in turn that c = 0. In summary, we obtain that null(B̃)^⊥ ∩ Ω = {0}.

Theorem 5.9.
If p is hyperbolic with respect to e and x ∈ Λ_+(p, e), then: 1. Loc_x(p) is hyperbolic with respect to e; and 2. Λ_+(Loc_x(p), e) = K_{Λ_+(p,e)}(x) is the tangent cone of Λ_+(p, e) at x.
lim_{λ→∞} λ^{mult_p(x)} p(x + λ^{−1}(z + te)) = Loc_x(p)(z + te) has nonnegative coefficients. Consequently, we have that z ∈ Λ_+(Loc_x(p), e), and so the cone of feasible directions is contained in Λ_+(Loc_x(p), e). Taking the closure shows that the tangent cone is contained in Λ_+(Loc_x(p), e), completing the proof.

Example 5.10 (Example 5.5 continued). Suppose that p(X) = det(X) where X is a d × d symmetric matrix of indeterminates, e = I, and X = [Z, 0; 0, 0] where Z is k × k and positive definite. The hyperbolicity cone of Loc_X(p) is Λ_+(Loc_X(p), I) = { Y = [Y_11, Y_12; Y_12′, Y_22] : Y_22 ⪰ 0 }, which coincides with the tangent cone to the positive semidefinite cone at X.

The main technical result of this section, a fairly immediate corollary of Lemma 5.6, is that localization and taking derivatives commute.

Theorem 5.11. If p is hyperbolic with respect to e and x ∈ Λ_+(p, e) with mult_p(x) ≥ 1, then Λ_+(D_ẽ Loc_x(p), e) = Λ_+(Loc_x(D_ẽ p), e) for any ẽ ∈ Λ_{++}(p, e).
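The commutation in Theorem 5.11 can be checked numerically in the setting of Example 5.10. The sketch below (ours, not from the paper) takes d = 3, k = 1, and ẽ = I, and compares D_I applied to the Schur-complement formula for Loc_X(det) against the localization of D_I det at X, computed as a numerical limit:

```python
import numpy as np

# Loc_X(det)(Y) = det(Z) det(Y22) for X = diag(Z, 0) with Z positive definite.
Z = np.array([[2.0]])
d, k = 3, 1
m = d - k                               # multiplicity of X w.r.t. det
X = np.zeros((d, d)); X[:k, :k] = Z
Y = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 5.0],
              [3.0, 5.0, 6.0]])
Y22 = Y[k:, k:]

def ddet_in_I(W, h=1e-7):
    # D_I det(W) = d/dt det(W + t I)|_{t=0}, via a central difference.
    n = W.shape[0]
    return (np.linalg.det(W + h*np.eye(n)) - np.linalg.det(W - h*np.eye(n))) / (2*h)

# D_I Loc_X(det)(Y): differentiate det(Z) det(Y22 + t I) at t = 0.
lhs = np.linalg.det(Z) * ddet_in_I(Y22)

# Loc_X(D_I det)(Y): the limit lam^(m-1) * D_I det(X + Y/lam).
lam = 1e5
rhs = lam**(m - 1) * ddet_in_I(X + Y/lam)
print(lhs, abs(lhs - rhs) < 1e-2)  # lhs is 20 for this instance; prints True
```

For this data, det(Y22 + tI) = t² + 10t − 1, so both sides equal det(Z) · 10 = 20 up to the discretization error of the limit.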

φ_{2,d}(x, y)φ_{2,d}(x, y)′, where φ_{2,d}(x, y) = (x^d, x^{d−1}y, ⋯, xy^{d−1}, y^d)′.

Let C ⊂ R^d be a closed, pointed, convex cone. The cone C is Terracini convex if and only if L(C) is a join sub-semilattice of the lattice of all subspaces in R^d (i.e., the poset L(C) has a join given by the sum of two subspaces).