On the Approximation of Unbounded Convex Sets by Polyhedra

This article is concerned with the approximation of unbounded convex sets by polyhedra. While there is an abundance of literature investigating this task for compact sets, results on the unbounded case are scarce. We first point out the connections between existing results before introducing a new notion of polyhedral approximation called (ε, δ)-approximation that integrates the unbounded case in a meaningful way. Some basic results about (ε, δ)-approximations are proved for general convex sets. In the last section, an algorithm for the computation of (ε, δ)-approximations of spectrahedra is presented. Correctness and finiteness of the algorithm are proved.


Introduction
The problem of approximating a convex set C by a polyhedron in the Hausdorff distance has been studied systematically for at least a century [26] and has a variety of applications in mathematical programming. These include algorithms for convex optimization problems that approximate the feasible region by a sequence of polyhedra [4,19] or solution concepts for convex vector optimization problems [10,23,31]. Moreover, there are multiple algorithms for mixed-integer convex optimization problems that are based on polyhedral outer approximations, see [7,21,38]. Interest in this problem is fueled by the fact that polyhedra have a simple structure in the sense that they can be described by finitely many points and directions. This finite structure makes computations with polyhedra more viable than with general convex sets. Hence, it is desirable to work with polyhedra that approximate some complicated set well. If the set C to be approximated is assumed to be compact, then numerous theoretical results are available. This includes asymptotic [11,32] and explicit [3] bounds on the number of vertices that a polyhedron needs to have in order to approximate C to within a prescribed accuracy. Moreover, iterative algorithms, so-called augmenting and cutting schemes, for the polyhedral approximation of convex bodies are known, see [16]. Some convergence properties are discussed in [17] and [18]. An overview of combinatorial aspects of the approximation of convex bodies is collected in the survey article by Bronshteȋn [2].

Communicated by Martine Labbe. Daniel Dörfler, daniel.doerfler@uni-jena.de, Friedrich Schiller University Jena, Jena, Germany.
If boundedness of the set C is not assumed, the amount of literature on the problem is scarce, although there are various applications where unbounded convex sets arise naturally. In convex vector optimization, for example, it is known that the so-called extended feasible image or upper image contains the set of nondominated points on its boundary, see, e.g., [10,22]. Due to its geometric properties, it is advantageous to work with this unbounded set instead of with the feasible image itself. Another application is in large deviations theory, which, generally speaking, is the study of the asymptotic behaviour of tails of sequences of probability distributions, see [36]. Under certain conditions, bounds for this behaviour can be obtained in terms of rate functions of probability distributions. In [28], such bounds are obtained under the condition that the level sets of a specific convex rate function can be approximated by polyhedra. Moreover, the authors of [25] generalize the algorithm in [7] and consider mixed-integer convex optimization problems, whose feasible region is not necessarily bounded. The problems are solved by computing polyhedral outer approximations of the feasible region in such a fashion that reaching a globally optimal solution is guaranteed.
The most notable result about the polyhedral approximation of C is due to Ney and Robinson [28] who give a characterization of the class of sets that can be approximated arbitrarily well by polyhedra in the Hausdorff distance. However, this class is relatively small as restrictive assumptions on the recession cone of C have to be made, such as polyhedrality. The reason boils down to the fact that the Hausdorff distance is seldom suitable to measure the similarity between unbounded sets. In fact, the Hausdorff distance between closed and convex sets is finite only if the recession cones of the sets are equal, see Proposition 3.2 in Section 3. Due to this difficulty, additional assumptions about the structure of the problem have to be made in each of the aforementioned applications. These include polyhedrality of the ordering cone or boundedness of the problem in convex vector optimization, see, e.g., [8,23], polyhedrality of a cone generated by the rate function in large deviations theory, or strong duality when dealing with convex optimization problems. In 2018, Ulus [35] characterized the tractability of convex vector optimization problems in terms of polyhedral approximations. One important necessary condition is the so-called self-boundedness of the problem.
Considering the facts mentioned, polyhedral approximation of unbounded convex sets requires a notion that does not solely rely on the Hausdorff distance. To this end, our main contribution is the introduction of the notion of (ε, δ)-approximation for closed convex sets C that do not contain lines. One feature of (ε, δ)-approximations is that the recession cones of the involved sets play an important role. We show that (ε, δ)-approximations define a meaningful notion of polyhedral approximation in the sense that a sequence of approximations converges to the set C as ε and δ diminish. This convergence is achieved in the sense of Painlevé–Kuratowski set convergence, see [30]. Moreover, we present an algorithm for the computation of (ε, δ)-approximations when the set C is a spectrahedron, i.e., defined by a linear matrix inequality. We also prove correctness and finiteness of the algorithm. Its main purpose, however, is to show that (ε, δ)-approximations can, at least in principle, be constructed in finitely many steps.
This article is organized as follows. In the next section, we introduce the necessary notation and provide definitions. In Section 3, we compare the results by Ney and Robinson [28] with the results by Ulus [35] and put them in relation. In particular, we show that self-boundedness is a special case of the property that the excess of a set over its own recession cone is finite. The concept of (ε, δ)-approximations is introduced in Section 4. We prove a bound on the Hausdorff distance between truncations of an (ε, δ)-approximation and truncations of C. The main result is Theorem 4.2. It states that a sequence of (ε, δ)-approximations of C converges to C in the sense of Painlevé-Kuratowski as ε and δ tend to zero. In the last section, we present the aforementioned algorithm and prove correctness and finiteness as well as illustrate it with two examples.

Preliminaries
Throughout this article, we denote by cl C, int C, 0+C, conv C, and cone C the closure, interior, recession cone, convex hull and conical hull of a set C, respectively. A compact convex set with nonempty interior is called a convex body. The Euclidean ball with radius r centred at a point c ∈ R^n is denoted by B_r(c). A point c of a convex set C is called an extreme point of C if C \ {c} is convex. Extreme points are exactly the points of C that cannot be written as a proper convex combination of elements of C [29, p. 162]. For finite sets V, D ⊆ R^n, the set

P = conv V + cone D (1)

is called a polyhedron, where the plus sign denotes Minkowski addition. The sets V, D in (1) are called a V-representation of P, as P is expressed in terms of its points and directions. A polyhedron can equivalently be expressed as a finite intersection of closed halfspaces [29, Theorem 19.1], i.e.,

P = {x ∈ R^n | Ax ≤ b}

for a matrix A ∈ R^{m×n} and a vector b ∈ R^m. The data (A, b) are called an H-representation of P. The extreme points of a polyhedron P are called vertices of P and are denoted by vert P. For symmetric matrices A_0, A_1, ..., A_n of arbitrary fixed size, we define

Ā(x) = x_1 A_1 + ... + x_n A_n and A(x) = A_0 + Ā(x),

i.e., a linear combination of the A_i and a translation of Ā(x) by A_0, respectively. We denote by A ≻ 0 (A ⪰ 0) positive (semi-)definiteness of the symmetric matrix A. The set C = {x ∈ R^n | A(x) ⪰ 0} is called a spectrahedron. Spectrahedra are a generalization of polyhedra for which many geometric properties of polyhedra generalize nicely, e.g., the recession cone of a spectrahedron C is obtained as {x ∈ R^n | Ā(x) ⪰ 0}, see [12], whereas the recession cone of a polyhedron in H-representation is {x ∈ R^n | Ax ≤ 0}. Given a cone K, the set K° = {y ∈ R^n | ∀x ∈ K : x^T y ≤ 0} is called the polar cone of K. The polar (0+C)° of the recession cone of a spectrahedron C is computed as

(0+C)° = cl {y ∈ R^n | y_i = −A_i · X for i = 1, ..., n, X ⪰ 0},

where A_i · X means the trace of the matrix product A_i X, see [12, Section 3]. A set whose recession cone is pointed is called line-free.
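The two recession-cone descriptions above can be checked numerically. The following is a minimal sketch (not part of the paper's method): for an H-representation we test Ad ≤ 0, and for a spectrahedron we test positive semidefiniteness of the homogeneous part Ā(d) via its smallest eigenvalue. All names and the example matrices are illustrative choices.

```python
import numpy as np

def in_polyhedron_recession_cone(A, d, tol=1e-9):
    """d lies in the recession cone of P = {x | A x <= b} iff A d <= 0."""
    return bool(np.all(A @ d <= tol))

def in_spectrahedron_recession_cone(As, d, tol=1e-9):
    """d lies in the recession cone of C = {x | A(x) PSD} iff the
    homogeneous part Abar(d) = sum_i d_i A_i is positive semidefinite."""
    Abar = sum(di * Ai for di, Ai in zip(d, As))
    return bool(np.linalg.eigvalsh(Abar).min() >= -tol)

# Spectrahedron C = {(x, y) | [[x, 1], [1, y]] PSD}, i.e. x, y >= 0 and x*y >= 1.
A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, 1.0]])
print(in_spectrahedron_recession_cone([A1, A2], [1.0, 1.0]))   # True
print(in_spectrahedron_recession_cone([A1, A2], [-1.0, 0.0]))  # False

# Halfspace P = {x | x1 - x2 <= 0}: recession directions satisfy A d <= 0.
A = np.array([[1.0, -1.0]])
print(in_polyhedron_recession_cone(A, np.array([0.0, 1.0])))   # True
```

Here the recession cone of the example spectrahedron is the nonnegative orthant, which both tests reflect.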
It is well known that a closed convex set contains an extreme point if and only if it is line-free, see [29, Corollary 18.5.3]. Given nonempty sets C_1, C_2 ⊆ R^n, the excess of C_1 over C_2, e[C_1, C_2], is defined as

e[C_1, C_2] = sup_{x ∈ C_1} inf_{y ∈ C_2} ‖x − y‖,

where ‖·‖ denotes the Euclidean norm. The Hausdorff distance between C_1 and C_2, d_H(C_1, C_2), is then expressed as

d_H(C_1, C_2) = max {e[C_1, C_2], e[C_2, C_1]}.

It is well known that the Hausdorff distance defines a metric on the space of nonempty compact subsets of R^n. Between unbounded sets the Hausdorff distance may be infinite. A polyhedron P is called an ε-approximation of a convex set C if d_H(P, C) ≤ ε.
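For finite point sets, the excess and the Hausdorff distance can be computed directly from the pairwise distance matrix. The following sketch (an illustration, not taken from the paper) implements the two formulas above:

```python
import numpy as np

def excess(X, Y):
    """Excess e[X, Y] of a finite point set X over a finite point set Y:
    the largest distance from a point of X to its nearest point of Y."""
    # pairwise Euclidean distances, shape (len(X), len(Y))
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return D.min(axis=1).max()

def hausdorff(X, Y):
    """Hausdorff distance d_H(X, Y) = max(e[X, Y], e[Y, X])."""
    return max(excess(X, Y), excess(Y, X))

X = np.array([[0.0, 0.0], [1.0, 0.0]])
Y = np.array([[0.0, 0.0]])
print(excess(X, Y))     # 1.0: the point (1, 0) is at distance 1 from Y
print(excess(Y, X))     # 0.0: (0, 0) already lies in X
print(hausdorff(X, Y))  # 1.0
```

The asymmetry of the excess (e[X, Y] ≠ e[Y, X] in the example) is exactly why the Hausdorff distance takes the maximum of both directions.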

Polyhedral Approximation in the Hausdorff Distance
Every convex body can be approximated arbitrarily well by a polytope in the Hausdorff distance, see, e.g., [32]. Moreover, algorithms for the computation of ε-approximations exist for which the convergence rate is known [16]. For the approximation of unbounded convex sets, only the following theorem is known, which provides a characterization of the sets that can be approximated by polyhedra in the Hausdorff distance.
Theorem 3.1 ([28]) A nonempty, closed and convex set C ⊆ R^n can be approximated arbitrarily well by polyhedra in the Hausdorff distance if and only if (i) e[C, 0+C] < +∞ and (ii) 0+C is polyhedral.
A related result is found in [35], where the approximate solvability of convex vector optimization problems in terms of polyhedral approximations is investigated. In order to state the result and establish the relationship to Theorem 3.1, we need the following definition from [35].

Definition 3.1 A set C ⊆ R^n with a nontrivial recession cone is called self-bounded if there exists y ∈ R^n such that {y} + 0+C ⊇ C.
Adjusted to our notation, the mentioned result can be stated as: If C is self-bounded and 0+C is polyhedral, then, clearly, C can be approximated by a polyhedron. The difference to Theorem 3.1 is the self-boundedness of C instead of the finiteness of e[C, 0+C]. The following theorem points out the connection between these conditions and shows that, under an additional assumption, both coincide. The relationships are illustrated in Fig. 1.

Theorem 3.2 Given a nonempty, closed and convex set C ⊆ R^n, consider the statements:

(i) C is self-bounded;
(ii) e[C, 0+C] < +∞;
(iii) there exists M ≥ 0 such that C ⊆ B_M(0) + 0+C.

Then the following implications are true: (i) ⇒ (iii) ⇔ (ii); if, in addition, 0+C is solid, then also (ii) ⇒ (i).

Proof (fragment) For c ∈ C, consider the infimum defining the distance from c to 0+C. This infimum is uniquely attained at some r_c, because 0+C is closed and convex. Then ‖c − r_c‖ ≤ M and we conclude ... From the first part of the proof, we know that B_M(0) + 0+C ⊇ C, which completes the proof.

Fig. 1 Left: The set C is contained in its own recession cone. Therefore, it is self-bounded and e[C, 0+C] = 0. Centre: The excess of C over its recession cone is finite and attained in any of the two vertices. However, C is not self-bounded, because it cannot be contained in a translation of 0+C. Right: A set that is neither self-bounded nor does it hold e[C, 0+C] < ∞. Traversing the parabolic arc, the distance to 0+C grows without bound.

Example 3.1
To see that e[C, 0+C] < +∞ does not imply self-boundedness of C unless 0+C is solid, consider the following counterexample. In R^2, let C = conv{±e_1} + cone{e_2}, where e_i denotes the ith unit vector. Then one has the equality e[C, 0+C] = 1, but C is not self-bounded. The set is illustrated in the centre of Fig. 1.
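The value e[C, 0+C] = 1 in this counterexample can be checked numerically by sampling. The sketch below (illustrative only) uses the facts that C = {(x, y) | |x| ≤ 1, y ≥ 0} and that the distance from a point of C to the recession cone 0+C = {0} × [0, ∞) is simply |x|:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points of C = conv{±e1} + cone{e2} = {(x, y) | |x| <= 1, y >= 0}.
xs = rng.uniform(-1.0, 1.0, size=2000)
ys = rng.uniform(0.0, 50.0, size=2000)

# For (x, y) with y >= 0, the nearest point of 0+C = {0} x [0, inf) is (0, y),
# so the distance to the recession cone is |x|.
dists = np.abs(xs)

print(dists.max())  # approaches e[C, 0+C] = 1 as the sampling gets denser
```

The supremum is not attained in the interior; it is approached along the two vertical boundary rays, matching the figure.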
In view of the above result, we suggest calling a set self-bounded if it satisfies Property (iii). On the one hand, this extends the notion to sets whose recession cone is not solid; on the other hand, it makes every compact set self-bounded, rather than just singletons. Since in [35] cones are assumed to be solid, Theorem 3.2 proves that a convex vector optimization problem is tractable in terms of polyhedral approximations if and only if the upper image [35, Equation 6] of the problem satisfies (i) in Theorem 3.1.
The reason that many unbounded convex sets are beyond the scope of polyhedral approximation in the Hausdorff distance is that it is by nature designed to behave nicely only for compact sets. The following proposition specifies this.

Proposition 3.2 Let C_1, C_2 ⊆ R^n be nonempty, closed and convex. If d_H(C_1, C_2) < +∞, then 0+C_1 = 0+C_2.

A Polyhedral Approximation Scheme for Closed Convex Line-Free Sets
We have seen that, in order to approximate a set C by a polyhedron P in the Hausdorff distance, their recession cones need to be identical. Theorems 3.1 and 3.2 tell us that this is achievable only for specific sets C. To treat a larger class of sets, a concept is needed that quantifies similarity between closed convex cones, similar to how the Hausdorff distance quantifies similarity between compact sets.

Definition 4.1 Given nonempty closed convex cones K_1, K_2 ⊆ R^n, the truncated Hausdorff distance between K_1 and K_2, d̂_H(K_1, K_2), is defined as

d̂_H(K_1, K_2) = d_H(K_1 ∩ B_1(0), K_2 ∩ B_1(0)).

Since every cone contains the origin, it is immediate that d̂_H(K_1, K_2) ≤ 1. The truncated Hausdorff distance defines a metric on the set of closed convex cones in R^n, see [13]. However, it is only one way among many to measure the distance between convex cones; we suggest the survey in [15] for a more thorough discussion of the topic. With the truncated Hausdorff distance, we define the following notion of polyhedral approximation of convex sets that are not necessarily bounded.
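For two rays in the plane at angle θ ≤ π/2, the truncated Hausdorff distance works out to sin θ, which a simple sampling sketch can confirm. The code below (an illustration under these assumptions, not the paper's machinery) discretizes the unit-ball truncations, which for rays are line segments, and computes a discrete Hausdorff distance:

```python
import numpy as np

def seg_samples(u, n=501):
    """Sample the truncated cone cone{u} ∩ B_1(0): the segment [0, u/|u|]."""
    u = np.asarray(u, float) / np.linalg.norm(u)
    t = np.linspace(0.0, 1.0, n)
    return t[:, None] * u[None, :]

def discrete_hausdorff(X, Y):
    """Hausdorff distance between two finite point clouds."""
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

theta = np.pi / 6  # angle of 30 degrees between the two rays
K1 = seg_samples([1.0, 0.0])
K2 = seg_samples([np.cos(theta), np.sin(theta)])

approx = discrete_hausdorff(K1, K2)
print(approx)         # close to sin(theta) = 0.5
print(np.sin(theta))
```

Note that the value never exceeds 1, in line with the bound d̂_H ≤ 1 above.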

Definition 4.2 Let C ⊆ R^n be a nonempty, closed, convex and line-free set. A line-free polyhedron P is called an (ε, δ)-approximation of C if (i) every vertex of P is within distance ε of C, (ii) d̂_H(0+P, 0+C) ≤ δ, and (iii) C ⊆ P.
Remark 4.1
The assumption that P is line-free is equivalent to vert P ≠ ∅ and hence required for condition (i) in the definition. Condition (iii) means that P is an outer approximation of C. This is required because otherwise the roles of P and C would have to be interchanged in (i); however, it is not clear how to proceed with this in a meaningful way. Lastly, we decided to make a distinction between ε and δ because the scales of these error measures may be very different depending on the sets, i.e., δ is always bounded from above by 1, but for ε it may be useful to allow values larger than 1. Figure 2 illustrates the definition.

We will show that an (ε, δ)-approximation of a set C approximates C in a meaningful way. To this end, we consider Painlevé–Kuratowski convergence, a notion of set convergence that is suitable for a broader class of sets than convergence with respect to the Hausdorff distance. To conserve space and enhance readability, we will denote by C_ν the sequence {C_ν}_{ν∈N} as well as the specific element C_ν of this sequence whenever there is no ambiguity.

Definition 4.3 A sequence C_ν of subsets of R^n converges to C in the sense of Painlevé–Kuratowski if

{x ∈ R^n | ∃ x_ν → x with x_ν ∈ C_ν for all ν} = C = {x ∈ R^n | ∃ a subsequence C_{ν_k} and x_{ν_k} → x with x_{ν_k} ∈ C_{ν_k}}.

The two sets in the definition are called the inner and outer limit of C_ν, respectively. Convergence in the sense of Painlevé–Kuratowski is weaker than convergence in the Hausdorff distance, but both concepts coincide when restricted to compact subsets.

Theorem 4.1 ([30, p. 120]) A sequence C_ν of nonempty closed and convex sets converges to C in the sense of Painlevé–Kuratowski if and only if there exist x ∈ R^n and r_0 ∈ R such that for all r ≥ r_0 it holds that d_H(C_ν ∩ B_r(x), C ∩ B_r(x)) → 0.

In geometric terms, this means that a sequence of nonempty closed and convex sets converges in the sense of Painlevé–Kuratowski if and only if it converges in the Hausdorff distance on every nonempty compact subset. In the remainder of this section, we show that (ε, δ)-approximations provide a meaningful notion of polyhedral approximation for unbounded sets in the sense that a sequence of approximations converges in the sense of Definition 4.3 if ε and δ tend to zero.
To this end, we need some preparatory results. The first one yields a bound on the Hausdorff distance between truncations of a set and truncations of an (ε, δ)-approximation.

Proposition 4.1 Let C ⊆ R^n be nonempty, closed, convex and line-free and let P be an (ε, δ)-approximation of C. Then for every x ∈ conv vert P and r ≥ ε, the Hausdorff distance d_H(P ∩ B_r(x), C ∩ B_r(x)) is bounded in terms of ε, δ and r.

Proof (fragment) If the supremum is attained as ‖p − c‖ with p ∈ P, then p = v + t d for some d ∈ 0+P and t ≥ 0. ...
We need two more results before we can prove Theorem 4.2.

Lemma 4.1 Let C ⊆ R^n be nonempty, closed and convex and let there be sequences v_ν, r_ν such that inf_{c∈C} ‖v_ν − c‖ → 0, inf_{r∈0+C} ‖r_ν − r‖ → 0, and v_ν + r_ν ∈ B_M(x) for some M ≥ 0 and x ∈ R^n. If C is line-free, then v_ν is bounded.
Proof Assume that v_ν is unbounded. This implies that r_ν is also unbounded and 0+C ≠ {0}. Without loss of generality, let r_ν ≠ 0 for all ν. Then d_ν := r_ν / ‖r_ν‖ is bounded and has a convergent subsequence. Without loss of generality, we can assume that d_ν → d ∈ 0+C. We will show that −d ∈ 0+C. Therefore, let c ∈ C, t ≥ 0 and define y_ν = argmin_{y∈C} ‖v_ν − y‖. By the triangle inequality, it holds true that ... Note that M_ν is bounded from above by some M, because ‖v_ν − y_ν‖ → 0. For every T ≥ 0, there exists some ν_T such that ‖r_{ν_T}‖ ≥ T. Let T ≥ t and define ... Putting it all together, one gets ... where the last inequality holds due to (12) and boundedness of M_ν. Since C is closed and d_ν → d ∈ 0+C, taking the limit T → +∞ yields that c − t d ∈ C. This is a contradiction to the pointedness of 0+C.
Every closed and line-free convex set C can be written as the convex hull of its extreme points plus its recession cone [14,p. 35]. In particular, Lemma 4.1 states that the set of convex combinations of extreme points for which a given point in C can be decomposed in such a fashion is compact. The next result establishes a relation between extreme points of C and the vertices of an (ε, δ)-approximation.

Proposition 4.2 Let C ⊆ R n be nonempty closed convex and line-free.
For ν ∈ N, let P ν be an (ε ν , δ ν )-approximation of C. If (ε ν , δ ν ) → (0, 0), then for every extreme point c of C there exists a sequence x ν → c such that x ν ∈ conv vert P ν .
Proof Since C is line-free, it has at least one extreme point. Let c be one such extreme point. Assume that for every sequence x_ν with x_ν ∈ conv vert P_ν there exists a γ > 0 such that ‖x_ν − c‖ > γ for infinitely many ν. Then, without loss of generality, there exists one such sequence with ‖x_ν − c‖ > γ for every ν and, since C ⊆ P_ν, c = x_ν + r_ν for some r_ν ∈ 0+P_ν. By Lemma 4.1, x_ν and r_ν are bounded. Then there exist subsequences x_{ν_k}, r_{ν_k} such that ... Note that r ≠ 0, because ‖r_ν‖ > γ for all ν. Finally, ... This is a contradiction to c being an extreme point of C.
We are now ready to prove the main result.

Theorem 4.2 Let C ⊆ R^n be nonempty, closed, convex and line-free and, for ν ∈ N, let P_ν be an (ε_ν, δ_ν)-approximation of C. If (ε_ν, δ_ν) → (0, 0), then P_ν converges to C in the sense of Painlevé–Kuratowski.

Proof By Theorem 4.1, we must show that there exist c ∈ R^n and r_0 ≥ 0 such that d_H(P_ν ∩ B_r(c), C ∩ B_r(c)) → 0 for all r ≥ r_0. Let r ≥ sup_{ν∈N} ε_ν and let c be an extreme point of C, which exists because C contains no lines. By Proposition 4.2, there exists a sequence x_ν → c with x_ν ∈ conv vert P_ν. Applying the triangle inequality and Proposition 4.1 yields ... for some v_ν ∈ conv vert P_ν. The first and third term in this sum converge to zero as x_ν → c. It remains to show that ... Let the supremum be attained by p_ν ∈ P_ν. Then ... i.e., v_ν + d_ν ∈ B_M(c) for some M ≥ 0. Therefore, the sequence v_ν is bounded according to Lemma 4.1. Hence, x_ν − v_ν is also bounded and d_H(P_ν ∩ B_r(c), C ∩ B_r(c)) → 0, which was to be proved.
Theorem 4.2 justifies the definition of (ε, δ)-approximations, i.e., it states that they define a meaningful notion of approximation. We close this section with the observation that (ε, δ)-approximations reduce to ε-approximations in the compact case.

An Algorithm for the Polyhedral Approximation of Unbounded Spectrahedra
In this section, we present an algorithm for computing (ε, δ)-approximations of closed convex and line-free sets C whose interior is nonempty. We also prove correctness and finiteness of the algorithm. The algorithm employs a cutting scheme, a procedure for approximating convex bodies by polyhedra that is introduced in [16]. A cutting scheme is an iterative algorithm that computes a sequence of polyhedral outer approximations by successively intersecting the approximation with new halfspaces. In doing so, vertices of the current approximation are cut off, hence the name. The calculation of these halfspaces is explained in Proposition 5.2.
Since we are dealing with unbounded sets, we pursue the idea to reduce computations to certain compact sets and then apply a cutting scheme. Furthermore, we have to be able to assess the set 0 + C. Since this is difficult in the general case, we only consider sets C that are spectrahedra, because a representation of the recession cone is readily available.
Throughout this section, we consider the following semidefinite programs related to a closed spectrahedron C = {x ∈ R^n | A(x) ⪰ 0} with nonempty interior. For a direction w ∈ R^n \ {0}, consider

(P_1(w))  max w^T x  s.t.  A(x) ⪰ 0.

Solving (P_1(w)) is equivalent to determining the maximal shifting of a hyperplane with normal w within C. The following result is well known in the literature, see, e.g., [29, Corollary 14.2.1].

Proposition 5.1 If w ∈ int (0+C)°, then (P_1(w)) has an optimal solution.
The second problem we consider is

(P_2(v, d))  min t  s.t.  A(v + t d) ⪰ 0,

where v ∈ R^n and d ∈ R^n \ {0}. Solving (P_2(v, d)) can be described as the task of determining the maximum distance one can move in direction d starting at point v until the set C is reached. If this distance is finite and v ∉ C, then a solution to (P_2(v, d)) yields a point on the boundary of C, namely one of the points that are obtained by intersecting the boundary of C with the affine set {v + t d | t ∈ R}. We denote the Lagrangian dual problem of (P_2(v, d)) by (D_2(v, d)). Solutions to (P_2(v, d)) and (D_2(v, d)) give rise to a supporting hyperplane of C as described in the next proposition.
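Numerically, the primal problem (P_2(v, d)) is a one-dimensional search along the segment from v to an interior point, so a simple alternative to a full SDP solver is bisection on the smallest eigenvalue of A(v + t d) (the function t ↦ λ_min(A(v + t d)) is concave, so the feasible t form an interval). This is a sketch under those assumptions, not the dual-based route used in the paper; the unit-disk LMI below is a standard illustrative example:

```python
import numpy as np

def A(x):
    """LMI representation of the unit disk: A(x) PSD iff x1^2 + x2^2 <= 1."""
    x1, x2 = x
    return np.array([[1.0 + x1, x2],
                     [x2, 1.0 - x1]])

def lam_min(x):
    return np.linalg.eigvalsh(A(x)).min()

def solve_p2(v, c, iters=60):
    """Bisection for the smallest t in [0, 1] with v + t*(c - v) in C,
    assuming v lies outside C and c in its interior."""
    v, c = np.asarray(v, float), np.asarray(c, float)
    d = c - v
    lo, hi = 0.0, 1.0            # t = 0 infeasible (v outside), t = 1 feasible
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lam_min(v + mid * d) >= 0.0:
            hi = mid             # still inside C: move toward v
        else:
            lo = mid             # outside C: move toward c
    t = hi
    return t, v + t * d          # t* and the boundary point x* = v + t* d

t_star, x_star = solve_p2([2.0, 0.0], [0.0, 0.0])
print(t_star)  # 0.5
print(x_star)  # [1. 0.]
```

Starting at v = (2, 0) and moving toward the centre, the boundary of the disk is reached at x* = (1, 0) after half the segment, i.e. t* = 0.5. The bisection only recovers the boundary point; the supporting hyperplane of Proposition 5.2 additionally needs dual information.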

Proposition 5.2
Let v ∉ C and set d = c − v for some c ∈ int C. Then solutions (x*, t*) to (P_2(v, d)) and (U*, w*) to (D_2(v, d)) exist. Moreover, w*^T x ≥ w*^T v + t* for all x ∈ C, and equality holds for x = x*.

Proof Without loss of generality, we can assume that int C = {x ∈ R^n | A(x) ≻ 0}, see [12, Corollary 5]. Then (c, 1) is strictly feasible for (P_2(v, d)), which is the well-known Slater's constraint qualification in convex optimization. Since v ∉ C, by convexity the first constraint is violated whenever t ≤ 0. Since C is closed, an optimal solution (x*, t*) to (P_2(v, d)) with t* ∈ [0, 1] exists. Slater's constraint qualification now implies strong duality, i.e., an optimal solution (U*, w*) to (D_2(v, d)) exists and the optimal values coincide. Next, let x ∈ C and observe that ... The third equality holds due to strong duality. Lastly, for x = x* we have equality, because x* = v + t* d and w*^T d = 1.
We want to describe the functioning of the algorithm geometrically before we present the details in pseudo code. The method consists of two phases. In the first phase, an initial polyhedron P_0 with P_0 ⊇ C and d̂_H(0+P_0, 0+C) ≤ δ is constructed as follows: For w ∈ int (0+C)°, the set

M = {x ∈ 0+C | w^T x = −1} (13)

is a compact basis of 0+C, i.e., 0+C = cone M. We use a cutting scheme to compute a polyhedral δ-approximation M̃ of M with M ⊆ int M̃. If in (13) we take ‖w‖ = 1, then K = cone M̃ is a polyhedral cone with d̂_H(K, 0+C) ≤ δ. Next, we need to construct a polyhedron P_0 with recession cone K that contains C. To this end, we compute an H-representation (R, 0) of K and solve (P_1(r)) for every row r of R, that is, for every normal of the supporting hyperplanes that define K. Note that a solution always exists, because r ∈ int (0+C)° by construction. For a solution x*_r to (P_1(r)), the set {x ∈ R^n | r^T x = r^T x*_r} is a hyperplane that supports C in x*_r. For the initial approximation, we then set

P_0 = ∩_{r row of R} {x ∈ R^n | r^T x ≤ r^T x*_r}. (15)

Clearly, it holds that 0+P_0 = K and that P_0 has at least one vertex, because K is pointed.
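The compact basis in (13) can be obtained from generators of the cone by a simple rescaling: if w^T r < 0 for every generator r, then r / (−w^T r) lies on the hyperplane w^T x = −1. A minimal sketch with illustrative data (the function name and the example rays are not from the paper):

```python
import numpy as np

def basis_points(rays, w):
    """Scale generators of a pointed cone onto the compact basis
    {x in cone | w^T x = -1}, assuming w^T r < 0 for every generator r."""
    rays = np.asarray(rays, float)
    scales = -(rays @ w)             # positive by the assumption on w
    return rays / scales[:, None]

rays = np.array([[1.0, -1.0], [-1.0, -2.0]])
w = np.array([0.0, 1.0])             # w^T r < 0 holds for both generators
B = basis_points(rays, w)
print(B)                             # every row satisfies w^T x = -1
```

Approximating the polytope conv B instead of the unbounded cone is exactly what makes the cutting scheme applicable in phase one.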
In the second phase of the algorithm, P_0 is refined by successively cutting off vertices until all vertices are within distance at most ε from C. This is achieved by iteratively intersecting P_0 with halfspaces that support C in some point of its boundary. To guarantee finiteness of the algorithm, we retreat with the computations to certain compact subsets P̃_0 of P_0 and C̃ of C, obtained by truncating with a halfspace whose normal is the vector w from (13) and whose offset is determined by the optimal solutions x*_r from (15). A cutting scheme is then applied to compute an outer ε-approximation P̃ of C̃. Finally, an (ε, δ)-approximation of C is obtained as P = P̃ + K. We describe the aforementioned cutting scheme due to [16] that is used in the computation of an (ε, δ)-approximation for the special case of spectrahedral sets in Algorithm 1.
The vectors e and e_i, i = 1, ..., n, in line 1 denote the vector in R^n with all components equal to one and the ith unit vector, respectively. Since C is compact, it holds that int (0+C)° = R^n. Therefore, Proposition 5.1 implies that the optimal solutions x*_w in line 1 always exist. Note that κ in line 12 is an upper bound on the Hausdorff distance between P and C due to the following observation. The Hausdorff distance between P and C is attained in a vertex of P, because C ⊆ P. Since the part x*_v of an optimal solution of (P_2(v, d)) is an element of the boundary of C, we conclude ... For the special class of spectrahedral sets, the cutting scheme algorithm terminates after finitely many steps. This is proved in [5, Theorem 4.38].
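The bound κ from line 12 can be illustrated on a toy instance. In the sketch below (illustrative names and data, not the paper's implementation), the square with vertices (±1, ±1) is an outer approximation P of the unit disk C; for each vertex v, the boundary point reached when moving toward the interior point 0 plays the role of x*_v, and κ is the largest vertex-to-boundary distance:

```python
import numpy as np

# Outer approximation of the unit disk by the square with vertices (±1, ±1).
vertices = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])

def boundary_point(v):
    """For the unit disk, moving from v toward the interior point 0 hits the
    boundary at v / |v| (the role of x*_v in (P_2(v, d)))."""
    return v / np.linalg.norm(v)

# kappa = largest distance from a vertex of P to its boundary point of C;
# since C is contained in P, it upper-bounds d_H(P, C).
kappa = max(np.linalg.norm(v - boundary_point(v)) for v in vertices)
print(kappa)  # sqrt(2) - 1, attained at every corner of the square
```

Here κ = √2 − 1 coincides with d_H(P, C), since the Hausdorff distance is attained at the corners; in general κ is only an upper bound.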

Remark 5.1
As mentioned at the beginning of this section, Algorithm 1 falls into the class of cutting scheme algorithms, and convergence properties of similar schemes are discussed in [16]. A supporting hyperplane method, in contrast, approximates the set only in a neighbourhood of the optimal solution to the underlying optimization problem, while we are interested in an approximation of the whole set C.
We are now prepared to present Algorithm 2, an algorithm for the computation of (ε, δ)-approximations of closed and line-free spectrahedra with nonempty interior.

Algorithm 2:
An algorithm for (ε, δ)-approximations of spectrahedra
Data: Matrix A(x) representing an unbounded, closed and line-free spectrahedron C with nonempty interior, error tolerances ε and δ
Result: V-representation of an (ε, δ)-approximation P of C
...
Compute solutions x*_r of (P_1(r)) for every row r of R
...
9 Compute an ε-approximation P̃ of C̃ according to Algorithm 1
10 P ← P̃ + K
11 return a V-representation of P

Steps 6 and 13 in Algorithm 1 and steps 5, 6 and 11 in Algorithm 2 require the computation of a V-representation from an H-representation or vice versa. These problems are known as vertex enumeration and facet enumeration, respectively, and are difficult problems in their own right. It is beyond the scope of this paper to discuss them in more detail; we only point out that there exist toolboxes that are able to perform these tasks numerically, such as bensolve tools [6,24]. In practice, however, the computations often become infeasible in dimensions three and higher when the number of halfspaces defining the polyhedron is large. It is also known that vertex enumeration for unbounded polyhedra is NP-hard, see [20]. Thus, since vertex enumeration has to be performed in every iteration of Algorithm 1 and for the unbounded polyhedron P in step 11 of Algorithm 2, one cannot expect the algorithms to be computationally efficient.

Theorem 5.1 Algorithm 2 is correct, i.e., upon termination it returns an (ε, δ)-approximation P of C.

Proof Since C is closed and does not contain any lines, its recession cone is also closed and pointed. This implies that (0+C)° has nonempty interior, see, e.g., [1, p. 53]. The direction w defined in line 1 is an element of (0+C)° according to (4). Note that w ≠ 0, because 0+C ≠ {0} and the pointedness of 0+C implies that the matrices A_1, ..., A_n ... But since the recession cone of ∩_r {x ∈ R^n | r^T x ≤ r^T x̃_r} is K and all x̃_r lie in the same hyperplane, it is also true that ... Altogether, we conclude C ⊆ C̃ + K ⊆ P̃ + K = P.

Corollary 5.1 Algorithm 2 terminates after finitely many steps.
Proof This is a consequence of the finiteness of Algorithm 1, see [5,Theorem 4.38]. Therefore, the executions of Algorithm 1 in lines 3 and 8 of Algorithm 2 terminate after finitely many steps, which implies that Algorithm 2 itself is finite.
We close this section by illustrating Algorithm 2 with the following two examples.

Example 5.1
Consider the spectrahedron C ⊆ R^2 defined by the matrix inequality ... It is the intersection of the epigraphs of the functions x ↦ 1/x, restricted to the positive real line, and x ↦ x^2. We use the solver SDPT3 [33,34] and the software bensolve tools [6,24] to solve the semidefinite subproblems and to perform vertex and facet enumeration, respectively. The algorithm is implemented in GNU Octave [9]. Figure 3 shows the polyhedral approximations of C at different stages of Algorithm 2 for the tolerances (ε, δ) = (0.1, 0.1).
Computational results for different values of ε and δ are presented in Table 1. It can be seen that the number of subproblems that have to be solved is larger than the number of vertices of the polyhedral approximation. The reason is that one instance of (P_2(v, d)) is solved for every vertex of the current approximation in every iteration of Algorithm 1, but only one of these vertices is cut off. Moreover, the number of solved subproblems grows quickly as ε decreases, because more iterations of Algorithm 1 are needed to reach the desired accuracy and the number of solved subproblems grows with every iteration. Since the recession cone of C is just a ray and easy to approximate, most of the computational effort is put into approximating C̃ in line 9. However, for fixed ε and decreasing δ, the number of solved subproblems grows. This is due to the fact that C̃ depends on the approximate recession cone K. As δ decreases, the rays generating K will be closer to each other with respect to the truncated Hausdorff distance. Therefore, the set C̃ will have a larger area and it takes more iterations to compute an ε-approximation of it. Note that for (ε, δ) equal to ...

Example 5.2 Consider the cone S ⊆ R^3 defined by ... It is a closed and pointed convex cone with nonempty interior. Thus, we can apply Algorithm 2 to it. Since S is a cone, its only vertex is the origin and we can terminate the algorithm after K has been computed in line 5. Then K is a polyhedral cone and it holds d̂_H(K, S) ≤ δ. Figure 4 shows a polyhedral approximation of S with 20 extreme rays and d̂_H(K, S) ≤ 0.1.

Conclusion
We have introduced the notion of (ε, δ)-approximations for the polyhedral approximation of unbounded convex sets. Since polyhedral approximation in the Hausdorff distance can only be achieved for unbounded sets under restrictive assumptions, (ε, δ)-approximations are of particular interest, because they allow treatment of a larger class of sets. An important observation is that the recession cones of the involved sets must play a crucial role in a meaningful concept of approximation for unbounded sets. We have shown that (ε, δ)-approximations define a suitable notion of approximation in the sense that a sequence of such approximations converges and that (ε, δ)-approximations generalize the polyhedral approximation of compact sets with respect to the Hausdorff distance. Finally, we have presented an algorithm that allows for the computation of (ε, δ)-approximations of spectrahedra and have shown that the algorithm is finite.