On the Approximation of Unbounded Convex Sets by Polyhedra

This article is concerned with the approximation of unbounded convex sets by polyhedra. While there is an abundance of literature investigating this task for compact sets, results on the unbounded case are scarce. We first point out the connections between existing results before introducing a new notion of polyhedral approximation called ($\varepsilon,\delta$)-approximation that integrates the unbounded case in a meaningful way. Some basic results about ($\varepsilon,\delta$)-approximations are proven for general convex sets. In the last section an algorithm for the computation of ($\varepsilon,\delta$)-approximations of spectrahedra is presented. Correctness and finiteness of the algorithm are proven.


Introduction
The problem of approximating a convex set C by a polyhedron in the Hausdorff distance has been studied systematically for at least a century [27] and has a variety of applications in mathematical programming. These include algorithms for convex optimization problems that approximate the feasible region by a sequence of polyhedra [4,20] and solution concepts for convex vector optimization problems [10,24,32]. Moreover, there are multiple algorithms for mixed-integer convex optimization problems that are based on polyhedral outer approximations, see [7,22,39]. Interest in this problem is fueled by the fact that polyhedra have a simple structure in the sense that they can be described by finitely many points and directions. This finite structure makes computations with polyhedra more viable than with general convex sets. Hence, it is desirable to work with polyhedra that approximate some complicated set well. If the set C to be approximated is assumed to be compact, then numerous theoretical results are available. These include asymptotic [11,33] and explicit [3] bounds on the number of vertices that a polyhedron needs to have in order to approximate C to within a prescribed accuracy. Moreover, iterative algorithms, so-called augmenting and cutting schemes, for the polyhedral approximation of convex bodies are known, see [17]. Some convergence properties are discussed in [18] and [19]. An overview of combinatorial aspects of the approximation of convex bodies is given in the survey article by Bronshteȋn [2].
If boundedness of the set C is not assumed, literature on the problem is scarce, although there are various applications where unbounded convex sets arise naturally. In convex vector optimization, for example, it is known that the so-called extended feasible image or upper image contains the set of nondominated points on its boundary, see e.g. [10,23]. Due to its geometric properties it is advantageous to work with this unbounded set instead of with the feasible image itself. Another application is in large deviations theory, which, generally speaking, is the study of the asymptotic behaviour of tails of sequences of probability distributions, see [37]. Under certain conditions, bounds for this behaviour can be obtained in terms of rate functions of probability distributions. In [29] such bounds are obtained under the condition that the level sets of a specific convex rate function can be approximated by polyhedra. Moreover, the authors of [26] generalize the algorithm in [7] and consider mixed-integer convex optimization problems whose feasible region is not necessarily bounded. The problems are solved by computing polyhedral outer approximations of the feasible region in such a fashion that a globally optimal solution is guaranteed to be reached.
The most notable result about the polyhedral approximation of C is due to Ney and Robinson [29] who give a characterization of the class of sets that can be approximated arbitrarily well by polyhedra in the Hausdorff distance. However, this class is relatively small as restrictive assumptions on the recession cone of C have to be made, such as polyhedrality. The reason boils down to the fact that the Hausdorff distance is seldom suitable to measure the similarity between unbounded sets. In fact, the Hausdorff distance between closed and convex sets is finite only if the recession cones of the sets are equal, see Proposition 3.5 in Section 3. Due to this difficulty, additional assumptions about the structure of the problem have to be made in each of the aforementioned applications. These include polyhedrality of the ordering cone or boundedness of the problem in convex vector optimization, see e.g. [8,24], polyhedrality of a cone generated by the rate function in large deviations theory, or strong duality when dealing with convex optimization problems. In 2018, Ulus [36] characterized the tractability of convex vector optimization problems in terms of polyhedral approximations. One important necessary condition is the so-called self-boundedness of the problem.
Considering the facts mentioned, polyhedral approximation of unbounded convex sets requires a notion that does not rely solely on the Hausdorff distance. To this end, our main contribution is the introduction of the notion of (ε, δ)-approximation for closed convex sets C that do not contain lines. One feature of (ε, δ)-approximations is that the recession cones of the involved sets play an important role. We show that (ε, δ)-approximations define a meaningful notion of polyhedral approximation in the sense that a sequence of approximations converges to the set C as ε and δ diminish. This convergence is understood in the sense of Painlevé-Kuratowski set convergence, see [31]. Moreover, we present an algorithm for the computation of (ε, δ)-approximations when the set C is a spectrahedron, i.e. a set defined by a linear matrix inequality. We also prove correctness and finiteness of the algorithm. Its main purpose, however, is to show that (ε, δ)-approximations can, at least in principle, be constructed in finitely many steps.
This article is organized as follows. In the next section we introduce the necessary notation and provide definitions. In Section 3 we compare the results by Ney and Robinson [29] with the results by Ulus [36] and put them in relation. In particular, we show that self-boundedness is a special case of the property that the excess of a set over its own recession cone is finite. The concept of (ε, δ)-approximations is introduced in Section 4. We prove a bound on the Hausdorff distance between truncations of an (ε, δ)-approximation and truncations of C. The main result is Theorem 4.9. It states that a sequence of (ε, δ)-approximations of C converges to C in the sense of Painlevé-Kuratowski as ε and δ tend to zero. In the last section we present the aforementioned algorithm and prove correctness and finiteness as well as illustrate it with two examples.

Preliminaries
Throughout this article we denote by cl C, int C, 0^+C, conv C, and cone C the closure, interior, recession cone, convex hull and conical hull of a set C, respectively. A compact convex set with nonempty interior is called a convex body. The Euclidean ball with radius r centered at a point c ∈ R^n is denoted by B_r(c). A point c of a convex set C is called an extreme point of C if C \ {c} is convex. The set of extreme points of C is denoted by ext C. Extreme points are exactly the points of C that cannot be written as a proper convex combination of elements of C [30, p. 162]. For finite sets V, D ⊆ R^n, the set

P = conv V + cone D    (1)

is called a polyhedron. The plus sign denotes Minkowski addition. The sets V, D in (1) are called a V-representation of P. Alternatively, every polyhedron can be written as

P = {x ∈ R^n | Ax ≤ b}

for a matrix A ∈ R^{m×n} and a vector b ∈ R^m. The data (A, b) are called an H-representation of P. The extreme points of a polyhedron P are called vertices of P and are denoted by vert P. For symmetric matrices A_0, A_1, ..., A_n of arbitrary fixed size we define

Ā(x) = x_1 A_1 + ... + x_n A_n   and   A(x) = A_0 + Ā(x),

i.e. a linear combination of the A_i and a translation of Ā(x) by A_0, respectively. We denote by A ≻ 0 (A ⪰ 0) positive (semi-)definiteness of the symmetric matrix A. A set of the form {x ∈ R^n | A(x) ⪰ 0} is called a spectrahedron. Spectrahedra are a generalization of polyhedra for which many geometric properties of polyhedra generalize nicely, e.g. the recession cone of a spectrahedron C is obtained as {x ∈ R^n | Ā(x) ⪰ 0}, see [13], whereas the recession cone of a polyhedron in H-representation is {x ∈ R^n | Ax ≤ 0}. Given a cone K, the set K^• = {y ∈ R^n | ∀x ∈ K : x^T y ≤ 0} is called the polar cone of K. The polar (0^+C)^• of the recession cone of a spectrahedron C is computed as

(0^+C)^• = cl {y ∈ R^n | ∃ X ⪰ 0 : y_i = −A_i · X, i = 1, ..., n},

where A_i · X means the trace of the matrix product A_i X, see [13, Section 3]. A cone K is called polyhedral if K = cone D for some finite set D and pointed if K ∩ (−K) = {0}. A set whose recession cone is pointed is called line-free.
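As an illustrative sketch (our own construction, not part of the paper), the semidefiniteness conditions above can be checked directly for a small spectrahedron: with A(x, y) = [[1 + x, y], [y, 1 − x]] the set {(x, y) | A(x, y) ⪰ 0} is the closed unit disc, and its recession cone {(x, y) | Ā(x, y) ⪰ 0} collapses to the origin, as expected for a bounded set.

```python
# A 2x2 symmetric matrix is PSD iff its trace and determinant are nonnegative.
# We use this to test membership in a spectrahedron and its recession cone.

def is_psd_2x2(a, b, c, d):
    """Check positive semidefiniteness of the symmetric matrix [[a, b], [c, d]]."""
    assert abs(b - c) < 1e-12  # symmetry
    return a + d >= -1e-12 and a * d - b * c >= -1e-12

def in_C(x, y):
    # C = {(x, y) | [[1 + x, y], [y, 1 - x]] >= 0}, i.e. the closed unit disc
    return is_psd_2x2(1 + x, y, y, 1 - x)

def in_recession_cone(x, y):
    # 0+C = {(x, y) | [[x, y], [y, -x]] >= 0}; the trace is 0, the determinant
    # is -(x^2 + y^2), so only the origin qualifies
    return is_psd_2x2(x, y, y, -x)
```

The determinant of A(x, y) equals 1 − x² − y², which recovers the disc description directly.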
It is well known that a closed convex set contains an extreme point if and only if it is line-free, see [30, Corollary 18.5.3]. Given nonempty sets C_1, C_2 ⊆ R^n, the excess of C_1 over C_2, e[C_1, C_2], is defined as

e[C_1, C_2] = sup_{x ∈ C_1} inf_{y ∈ C_2} ‖x − y‖,

where ‖·‖ denotes the Euclidean norm. The Hausdorff distance between C_1 and C_2, d_H(C_1, C_2), is then expressed as

d_H(C_1, C_2) = max {e[C_1, C_2], e[C_2, C_1]}.

It is well known that the Hausdorff distance defines a metric on the space of nonempty compact subsets of R^n. Between unbounded sets the Hausdorff distance may be infinite. A polyhedron P is called an ε-approximation of a convex set C if d_H(P, C) ≤ ε.
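For finite point sets the excess and the Hausdorff distance can be computed directly; the following sketch (all names ours) mirrors the definitions above and shows that the excess is not symmetric.

```python
import math

def excess(C1, C2):
    """e[C1, C2]: largest distance from a point of C1 to the set C2 (finite sets)."""
    return max(min(math.dist(x, y) for y in C2) for x in C1)

def hausdorff(C1, C2):
    """d_H(C1, C2) = max of the two excesses."""
    return max(excess(C1, C2), excess(C2, C1))

# the excess is not symmetric: C2 is contained in C1, but not vice versa
C1 = [(0.0, 0.0), (1.0, 0.0)]
C2 = [(0.0, 0.0)]
```

Here e[C1, C2] = 1 while e[C2, C1] = 0, so d_H(C1, C2) = 1.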

Polyhedral Approximation in the Hausdorff Distance
Every convex body can be approximated arbitrarily well by a polytope in the Hausdorff distance, see e.g. [33]. Moreover, algorithms for the computation of ε-approximations exist for which the convergence rate is known [17]. For the approximation of unbounded convex sets only the following theorem is known, which provides a characterization of the sets that can be approximated by polyhedra in the Hausdorff distance.

Theorem 3.1 (see [29]). Let C ⊆ R^n be nonempty and closed. Then the following statements are equivalent:
(i) C is convex, 0^+C is polyhedral, and e[C, 0^+C] < +∞.
(ii) There exists a polyhedral cone K such that for every ε > 0 there exists a finite set V ⊆ R^n with d_H(conv V + K, C) ≤ ε.
Further, if (ii) holds then K = 0^+C.
A related result is found in [36], where the approximate solvability of convex vector optimization problems in terms of polyhedral approximations is investigated. In order to state the result and establish the relationship to Theorem 3.1 we need the following definition from [36].

Definition 3.2.
A set C ⊆ R^n with a nontrivial recession cone is called self-bounded if there exists y ∈ R^n such that {y} + 0^+C ⊇ C.
Adjusted to our notation, the mentioned result can be stated as follows.

Theorem 3.3 (see [36, Proposition 3.7]). Let C ⊆ R^n be closed and convex. If C is self-bounded, then for every ε > 0 there exists a finite set V ⊆ R^n such that d_H(conv V + 0^+C, C) ≤ ε.
If 0^+C is polyhedral, then, clearly, C can be approximated by a polyhedron. The difference to Theorem 3.1 is the self-boundedness of C instead of the finiteness of e[C, 0^+C]. The following theorem points out the connection between these conditions and shows that, under an additional assumption, both coincide. The relationships are illustrated in Figure 1.

Theorem 3.4. Let C ⊆ R^n be closed and convex with nontrivial recession cone. Consider the following statements:
(i) e[C, 0^+C] < +∞;
(ii) C is self-bounded;
(iii) there exists a compact set K such that K + 0^+C ⊇ C.
Then (i) and (iii) are equivalent and (ii) implies (iii). If, in addition, 0^+C is solid, then all three statements are equivalent.

Proof. We start with the assertion (i) ⇒ (iii). Let M = e[C, 0^+C], let c ∈ C be arbitrary and let r_c ∈ 0^+C be such that ‖c − r_c‖ = inf_{r ∈ 0^+C} ‖c − r‖. This infimum is uniquely attained, because 0^+C is closed and convex. Then ‖c − r_c‖ ≤ M and we conclude c = (c − r_c) + r_c ∈ B_M(0) + 0^+C. Therefore, B_M(0) + 0^+C ⊇ C.

To show (iii) ⇒ (i), let K + 0^+C ⊇ C for some compact set K. Then we have

e[C, 0^+C] ≤ e[K + 0^+C, 0^+C] ≤ sup_{k ∈ K} ‖k‖ < +∞.

The implication (ii) ⇒ (iii) is immediate, since a singleton is compact. Finally, assume that 0^+C is solid and that (i) holds, and choose c and ε > 0 such that B_ε(c) ⊆ 0^+C. Scaling with M/ε yields B_M((M/ε) c) ⊆ 0^+C. Therefore, −(M/ε) c + 0^+C ⊇ B_M(0) and, since 0^+C is convex, −(M/ε) c + 0^+C ⊇ B_M(0) + 0^+C. From the first part of the proof we know that B_M(0) + 0^+C ⊇ C, which completes the proof.

Example 3.1.
To see that e[C, 0^+C] < +∞ does not imply self-boundedness of C unless 0^+C is solid, consider the following counterexample. In R^2 let C = conv{±e_1} + cone{e_2}, where e_i denotes the i-th unit vector. Then one has the equality e[C, 0^+C] = 1, but C is not self-bounded. The set is illustrated in the center of Figure 1.
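The claims of Example 3.1 can be checked numerically; the following sketch (our own construction) samples C = [−1, 1] × [0, ∞) on a grid and estimates the excess over the ray 0^+C = {0} × [0, ∞).

```python
# C = conv{-e1, e1} + cone{e2} = [-1, 1] x [0, inf) has recession cone
# 0+C = {0} x [0, inf).  The distance from (x, y) in C to the ray is |x|,
# so e[C, 0+C] = 1, while no translate {y} + 0+C (a single ray) contains C.

def dist_to_ray_e2(x, y):
    # distance from (x, y) to the ray {(0, t) | t >= 0}
    if y >= 0:
        return abs(x)
    return (x * x + y * y) ** 0.5

# sample points of C on a grid and take the largest distance to the ray
samples = [(i / 10 - 1, j / 10) for i in range(21) for j in range(201)]
approx_excess = max(dist_to_ray_e2(x, y) for x, y in samples)
```

The grid estimate already attains the exact value 1 at the two vertices (±1, 0).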
In view of the above result, we suggest calling a set self-bounded if it satisfies Property (iii). On the one hand, this extends the notion to sets whose recession cone is not solid. On the other hand, it makes every compact set self-bounded, rather than just singletons. Since in [36] cones are assumed to be solid, Theorem 3.4 proves that a convex vector optimization problem is tractable in terms of polyhedral approximations if and only if the upper image [36, Equation 6] of the problem satisfies (i) in Theorem 3.1.
The reason that many unbounded convex sets are beyond the scope of polyhedral approximation in the Hausdorff distance is that it is by nature designed to behave nicely only for compact sets. The following proposition specifies this.

Figure 1: Illustration of Theorem 3.4. Left: the set C is contained in its own recession cone; therefore, it is self-bounded and e[C, 0^+C] = 0. Center: the excess of C over its recession cone is finite and attained in either of the two vertices; however, C is not self-bounded, because it cannot be contained in a translate of 0^+C. Right: a set that is neither self-bounded nor satisfies e[C, 0^+C] < ∞; traversing the parabolic arc, the distance to 0^+C grows without bound.

Proposition 3.5. For closed and convex sets C_1, C_2 ⊆ R^n, if d_H(C_1, C_2) < +∞, then 0^+C_1 = 0^+C_2.

Proof. Assume 0^+C_1 ≠ 0^+C_2. Without loss of generality there exists r ∈ 0^+C_1 \ 0^+C_2. Consider the equivalent definition of the Hausdorff distance

d_H(C_1, C_2) = inf {ε ≥ 0 | C_1 ⊆ C_2 + B_ε(0), C_2 ⊆ C_1 + B_ε(0)}.

Let ε be large enough such that C_1 ∩ (C_2 + B_ε(0)) ≠ ∅ and let z be an element of this set. Then z + µr ∈ C_1 for all µ ≥ 0. The recession cone of C_2 + B_ε(0) is 0^+C_2 according to [30, Proposition 9.1.2]. Therefore, there exists some µ_ε such that z + µr ∉ C_2 + B_ε(0) for all µ ≥ µ_ε. This yields d_H(C_1, C_2) ≥ ε and the claim follows with ε → ∞.
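A small numerical sketch (ours, not from the paper) of the phenomenon behind Proposition 3.5: for the two rays C1 = {(t, 0) | t ≥ 0} and C2 = {(0, s) | s ≥ 0}, which have different recession cones, the excess of C1 ∩ B_r(0) over C2 grows linearly in r, so the Hausdorff distance between C1 and C2 is infinite.

```python
import math

# C1 = {(t, 0) | t >= 0} and C2 = {(0, s) | s >= 0} are closed convex sets
# with different recession cones.  The excess of C1 truncated to B_r(0) over
# C2 equals r, hence grows without bound as r increases.

def dist_to_C2(x, y):
    s = max(0.0, y)                    # closest point of C2 is (0, max(0, y))
    return math.hypot(x, y - s)

def truncated_excess(r, steps=1000):
    # e[C1 ∩ B_r(0), C2], approximated on a grid of the truncated ray
    return max(dist_to_C2(r * k / steps, 0.0) for k in range(steps + 1))

vals = [truncated_excess(r) for r in (1.0, 10.0, 100.0)]
```

The computed excesses are 1, 10 and 100, matching the linear growth in r.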

A Polyhedral Approximation Scheme for Closed Convex Line-Free Sets
We have seen that, in order to approximate a set C by a polyhedron P in the Hausdorff distance, their recession cones need to be identical. Theorems 3.1 and 3.4 tell us that this is achievable only for specific sets C. To treat a larger class of sets, a concept is needed that quantifies similarity between closed convex cones, similar to how the Hausdorff distance quantifies similarity between compact sets.

Definition 4.1. Given nonempty closed convex cones K_1, K_2 ⊆ R^n, the truncated Hausdorff distance between K_1 and K_2, d̂_H(K_1, K_2), is defined as

d̂_H(K_1, K_2) = d_H(K_1 ∩ B_1(0), K_2 ∩ B_1(0)).

Since every cone contains the origin, it is immediate that d̂_H(K_1, K_2) ≤ 1. The truncated Hausdorff distance defines a metric on the set of closed convex cones in R^n, see [14]. However, it is only one way among many to measure the distance between convex cones. We suggest the survey in [16] for a more thorough discussion of the topic. With the truncated Hausdorff distance we define the following notion of polyhedral approximation of convex sets that are not necessarily bounded.

Definition 4.2. Let C ⊆ R^n be nonempty, closed, convex and line-free and let ε, δ ≥ 0. A line-free polyhedron P ⊆ R^n is called an (ε, δ)-approximation of C if
(i) e[vert P, C] ≤ ε,
(ii) d̂_H(0^+P, 0^+C) ≤ δ,
(iii) C ⊆ P.
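As a sketch (our own), the truncated Hausdorff distance between two rays spanned by unit vectors at an angle θ ≤ π/2 can be approximated by discretizing the truncations K_i ∩ B_1(0); it equals sin θ, consistent with the general bound d̂_H ≤ 1.

```python
import math

def dist_point_segment(p, u):
    """Distance from point p to the segment {t*u | t in [0, 1]} in R^2."""
    uu = u[0] ** 2 + u[1] ** 2
    t = max(0.0, min(1.0, (p[0] * u[0] + p[1] * u[1]) / uu))
    return math.hypot(p[0] - t * u[0], p[1] - t * u[1])

def truncated_hausdorff_rays(u1, u2, steps=2000):
    """d_H(K1 ∩ B_1(0), K2 ∩ B_1(0)) for the rays K_i = cone{u_i}, by sampling."""
    s1 = [(k / steps * u1[0], k / steps * u1[1]) for k in range(steps + 1)]
    s2 = [(k / steps * u2[0], k / steps * u2[1]) for k in range(steps + 1)]
    e12 = max(dist_point_segment(p, u2) for p in s1)
    e21 = max(dist_point_segment(p, u1) for p in s2)
    return max(e12, e21)

theta = math.pi / 6
d = truncated_hausdorff_rays((1.0, 0.0), (math.cos(theta), math.sin(theta)))
```

For θ = π/6 the computed distance is sin(π/6) = 0.5, attained at the endpoints of the truncated rays.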

Remark 4.3.
The assumption that P is line-free is equivalent to vert P ≠ ∅ and hence required for condition (i) in the definition. Condition (iii) means that P is an outer approximation of C. This is required because otherwise the roles of P and C would have to be interchanged in (i). However, it is not clear how to proceed with this in a meaningful fashion: the analogue of considering vertices of P would be to consider extreme points of C instead, but the set of extreme points of C may be unbounded and it is in general not possible to enforce the upper bound of ε. Lastly, we decided to make a distinction between ε and δ because the scales of these error measures may be very different depending on the sets, i.e. δ is always bounded from above by 1, but for ε it may be useful to allow values larger than 1. Figure 2 illustrates the definition. We will show that an (ε, δ)-approximation of a set C approximates C in a meaningful way. To this end, we consider Painlevé-Kuratowski convergence, a notion of set convergence that is suitable for a broader class of sets than convergence with respect to the Hausdorff distance.
Definition 4.4 (see [31]). A sequence {C_ν}_{ν∈N} of sets C_ν ⊆ R^n converges to a set C ⊆ R^n in the sense of Painlevé-Kuratowski, written C_ν → C, if the following equalities hold:

liminf_{ν→∞} C_ν = C = limsup_{ν→∞} C_ν,

where

liminf_{ν→∞} C_ν = {x ∈ R^n | there exist x_ν ∈ C_ν with x_ν → x},
limsup_{ν→∞} C_ν = {x ∈ R^n | there exist a subsequence ν_k and x_{ν_k} ∈ C_{ν_k} with x_{ν_k} → x}.

To conserve space and enhance readability we will denote by C_ν the sequence {C_ν}_{ν∈N} as well as the specific element C_ν of this sequence whenever there is no ambiguity. The two sets in the definition are called the inner and outer limit of C_ν, respectively. Convergence in the sense of Painlevé-Kuratowski is weaker than convergence in the Hausdorff distance, but both concepts coincide when restricted to compact subsets, see Example 4.1 and [31, pp. 131-138]. However, for convex sets Painlevé-Kuratowski convergence can be characterized using the Hausdorff distance.
Example 4.1 (see [31, p. 118]). Consider the sequence of sets C_ν = {x, y_ν} for x, y_ν ∈ R^n with ‖y_ν‖ → ∞. Then C_ν converges in the sense of Painlevé-Kuratowski to the singleton {x}, while the Hausdorff distance d_H(C_ν, {x}) diverges.

Theorem 4.5. A sequence C_ν of nonempty closed convex sets converges to a nonempty closed convex set C ⊆ R^n in the sense of Painlevé-Kuratowski if and only if there exist c ∈ R^n and r_0 ≥ 0 such that d_H(C_ν ∩ B_r(c), C ∩ B_r(c)) → 0 for all r ≥ r_0.

In geometric terms this means that a sequence of nonempty closed and convex sets converges in the sense of Painlevé-Kuratowski if and only if it converges in the Hausdorff distance on all sufficiently large truncations. In the remainder of this section we show that (ε, δ)-approximations provide a meaningful notion of polyhedral approximation for unbounded sets in the sense that a sequence of approximations in the sense of Definition 4.2 converges as ε and δ tend to zero. To this end we need some preparatory results. The first one yields a bound on the Hausdorff distance between truncations of a set and truncations of an (ε, δ)-approximation.

Proposition 4.6. Let C ⊆ R^n be nonempty, closed, convex and line-free and let P be an (ε, δ)-approximation of C. Then for every x ∈ conv vert P and r ≥ ε the Hausdorff distance d_H(P ∩ B_r(x), C ∩ B_r(x)) admits an upper bound that depends only on ε, δ and r and that tends to zero as ε and δ tend to zero.

Proof. Denote P̄ = P ∩ B_r(x), C̄ = C ∩ B_r(x) and V = conv vert P. Since P̄ and C̄ are nonempty, convex and compact, d_H(P̄, C̄) = ‖p* − c*‖ for some p* ∈ P̄ and c* ∈ C̄. If p* already satisfies the claimed bound, the proof is complete. Otherwise, the bound is violated; in this case one considers the scalars λ* and λ̄ arising from the decomposition of p* over V and 0^+P. If λ* > λ̄ then the claim follows according to (10) and (11), which completes the proof.
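Example 4.1 can be illustrated numerically; the following sketch (with the hypothetical concrete choice y_ν = (ν, 0)) shows that the Hausdorff distance to {x} diverges while, on a fixed ball around x, the sets eventually coincide with {x}.

```python
import math

x = (0.0, 0.0)

def C(nu):
    # hypothetical concrete choice y_nu = (nu, 0), so that ||y_nu|| -> infinity
    return [x, (float(nu), 0.0)]

def hausdorff_to_x(points):
    # d_H(C_nu, {x}) = largest distance from a point of C_nu to x
    return max(math.dist(p, x) for p in points)

def truncated(points, r):
    # d_H(C_nu ∩ B_r(x), {x}); x itself always lies in the truncation
    inside = [p for p in points if math.dist(p, x) <= r]
    return max(math.dist(p, x) for p in inside)

full = [hausdorff_to_x(C(nu)) for nu in (1, 10, 100)]   # diverges
local = [truncated(C(nu), 2.0) for nu in (1, 10, 100)]  # eventually zero
```

The full distances are 1, 10, 100, while the truncated distances on B_2(x) are 1, 0, 0.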
We need two more results before we can prove Theorem 4.9 below.

Lemma 4.7. Let C ⊆ R^n be nonempty, closed, convex and line-free. For ν ∈ N let P_ν be an (ε_ν, δ_ν)-approximation of C with (ε_ν, δ_ν) → (0, 0), and let v_ν ∈ conv vert P_ν and r_ν ∈ 0^+P_ν be sequences such that v_ν + r_ν is bounded. Then v_ν and r_ν are bounded.
Proof. Assume that v_ν is unbounded. This implies that r_ν is also unbounded and 0^+C ≠ {0}. Without loss of generality let r_ν ≠ 0 for all ν. Then d_ν := r_ν / ‖r_ν‖ is bounded and has a convergent subsequence. Without loss of generality we can assume that d_ν → d ∈ 0^+C. We will show that −d ∈ 0^+C. To this end, let c ∈ C, t ≥ 0, define y_ν = argmin_{y ∈ C} ‖v_ν − y‖ and apply the triangle inequality. Note that M_ν is bounded from above by some M, because ‖v_ν − y_ν‖ → 0. For every T ≥ 0 there exists some ν_T such that ‖r_{ν_T}‖ ≥ T. Let T ≥ t and define c̄_T accordingly. Putting everything together, one gets an estimate in which the last inequality holds due to (12) and the boundedness of M_ν. Since C is closed and d_ν → d ∈ 0^+C, taking the limit T → +∞ yields that c − td ∈ C. This is a contradiction to the pointedness of 0^+C.
Every closed and line-free convex set C can be written as the convex hull of its extreme points plus its recession cone [15, p. 35]. In particular, Lemma 4.7 states that the set of convex combinations of extreme points for which a given point in C can be decomposed in such a fashion is compact. The next result establishes a relation between extreme points of C and the vertices of an (ε, δ)-approximation. Proposition 4.8. Let C ⊆ R n be nonempty closed convex and line-free. For ν ∈ N let P ν be an (ε ν , δ ν )-approximation of C. If (ε ν , δ ν ) → (0, 0), then for every extreme point c of C there exists a sequence x ν → c such that x ν ∈ conv vert P ν .
Proof. Since C is line-free, it has at least one extreme point. Let c be one such extreme point. Assume that for every sequence x_ν with x_ν ∈ conv vert P_ν there exists a γ > 0 such that ‖x_ν − c‖ > γ for infinitely many ν. Then, without loss of generality, there exists one such sequence with ‖x_ν − c‖ > γ for every ν and, since C ⊆ P_ν, c = x_ν + r_ν for some r_ν ∈ 0^+P_ν. By Lemma 4.7 it holds that x_ν and r_ν are bounded. Then there exist convergent subsequences x_{ν_k} → x and r_{ν_k} → r. Note that r ≠ 0, because ‖r_ν‖ > γ for all ν. Finally, c = x + r with x ∈ C and r ∈ 0^+C \ {0}, because ε_ν → 0 and δ_ν → 0; hence c = ½(x + (x + 2r)) is a proper convex combination of elements of C. This is a contradiction to c being an extreme point of C.
We are now ready to prove the main result.
Theorem 4.9. Let C ⊆ R^n be nonempty, closed, convex and line-free. For ν ∈ N let P_ν be an (ε_ν, δ_ν)-approximation of C. If (ε_ν, δ_ν) → (0, 0), then P_ν converges to C in the sense of Painlevé-Kuratowski.

Proof. By Theorem 4.5 we must show that there exist c ∈ R^n and r_0 ≥ 0 such that for all r ≥ r_0 it holds d_H(P_ν ∩ B_r(c), C ∩ B_r(c)) → 0. Let r ≥ max_{ν∈N} ε_ν and let c be an extreme point of C, which exists because C contains no lines. By Proposition 4.8 there exists a sequence x_ν → c with x_ν ∈ conv vert P_ν. Applying the triangle inequality and Proposition 4.6 yields an upper bound on d_H(P_ν ∩ B_r(c), C ∩ B_r(c)) consisting of three terms, for some v_ν ∈ conv vert P_ν. The first and third term in this sum converge to zero as x_ν → c. It remains to show that the second term, which is attained as e[P_ν ∩ B_r(x_ν), C ∩ B_r(x_ν)], converges to zero. Let the supremum be attained by p_ν ∈ P_ν. Then p_ν = v_ν + d_ν for some d_ν ∈ 0^+P_ν. It holds ‖v_ν + d_ν − c‖ ≤ r + ‖x_ν − c‖, i.e. v_ν + d_ν ∈ B_M(c) for some M ≥ 0. Therefore, the sequence v_ν is bounded according to Lemma 4.7. Hence, x_ν − v_ν is also bounded and d_H(P_ν ∩ B_r(c), C ∩ B_r(c)) → 0, which was to be proved.

Theorem 4.9 justifies the definition of (ε, δ)-approximations, i.e. it states that they define a meaningful notion of approximation. We close this section with the observation that (ε, δ)-approximations reduce to ε-approximations in the compact case. On the other hand, if d_H(P, C) ≤ ε, then P must be compact by Proposition 3.5. Then 0^+P = 0^+C = {0} and P is an (ε, 0)-approximation of C and in particular an (ε, δ)-approximation.

An Algorithm for the Polyhedral Approximation of Unbounded Spectrahedra
In this section we present an algorithm for computing (ε, δ)-approximations of closed convex and line-free sets C whose interior is nonempty. We also prove correctness and finiteness of the algorithm. The algorithm employs a cutting scheme, a procedure for approximating convex bodies by polyhedra that is introduced in [17]. A cutting scheme is an iterative algorithm that computes a sequence of polyhedral outer approximations by successively intersecting the approximation with new halfspaces. In doing so, vertices of the current approximation are cut off, hence the name. The calculation of these halfspaces is explained in Proposition 5.2 below.
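The idea of a cutting scheme can be sketched in the plane for C the unit disc (a toy construction of ours; the paper's scheme works on general convex bodies and, below, spectrahedra): keep a polygonal outer approximation, pick the vertex farthest from C, and cut it off with the supporting halfspace of C at its radial projection, until every vertex is within ε of C.

```python
import math

def clip(poly, a, b):
    """Clip a convex polygon (ccw vertex list) with the halfspace a.x <= b
    (Sutherland-Hodgman)."""
    out = []
    n = len(poly)
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        fp = a[0] * p[0] + a[1] * p[1] - b
        fq = a[0] * q[0] + a[1] * q[1] - b
        if fp <= 0:
            out.append(p)                       # p is kept
        if (fp < 0 < fq) or (fq < 0 < fp):      # edge crosses the boundary
            t = fp / (fp - fq)
            out.append((p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1])))
    return out

def cutting_scheme(eps):
    # initial outer approximation: a box around the unit disc (ccw)
    poly = [(2.0, 2.0), (-2.0, 2.0), (-2.0, -2.0), (2.0, -2.0)]
    while True:
        v = max(poly, key=lambda p: math.hypot(*p))   # worst vertex
        err = math.hypot(*v) - 1.0                    # its distance to the disc
        if err <= eps:
            return poly, err
        nv = math.hypot(*v)
        u = (v[0] / nv, v[1] / nv)                    # radial projection direction
        poly = clip(poly, u, 1.0)                     # supporting halfspace u.x <= 1

poly, err = cutting_scheme(0.05)
```

The returned polygon contains the disc and all of its vertices lie within 0.05 of it.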
Since we are dealing with unbounded sets, we pursue the idea of reducing the computations to certain compact sets and then applying a cutting scheme. Furthermore, we have to be able to evaluate the set 0^+C. Since this is difficult in the general case, we only consider sets C that are spectrahedra, because for those a representation of the recession cone is readily available.
Throughout this section we consider the following semidefinite programs related to a closed spectrahedron C = {x ∈ R^n | A(x) ⪰ 0}. The first one is

max w^T x   s.t.   A(x) ⪰ 0,    (P_1(w))

where w ∈ R^n. Solving (P_1(w)) is equivalent to determining the maximal shifting of a hyperplane with normal w within C. The following result is well known in the literature, see e.g. [30, Corollary 14.2.1].

Proposition 5.1. If w ∈ int (0^+C)^•, then (P_1(w)) has an optimal solution.
The second problem we consider is

min t   s.t.   A(x) ⪰ 0,  x = v + t d,    (P_2(v, d))

where v ∈ R^n and d ∈ R^n \ {0}. Solving (P_2(v, d)) can be described as the task of determining the maximum distance one can move in direction d starting at the point v until the set C is reached. If this distance is finite and v ∉ C, then a solution to (P_2(v, d)) yields a point on the boundary of C, namely one of the points obtained by intersecting the boundary of C with the affine set {v + td | t ∈ R}. We denote the Lagrangian dual problem of (P_2(v, d)) by (D_2(v, d)). Solutions to (P_2(v, d)) and (D_2(v, d)) give rise to a supporting hyperplane of C, as described in the next proposition.
Proposition 5.2. Let C be a spectrahedron with nonempty interior, let v ∉ C and let d = c − v for some c ∈ int C. Then optimal solutions (x*, t*) of (P_2(v, d)) and (U*, w*) of (D_2(v, d)) exist, and {x ∈ R^n | (w*)^T x = (w*)^T x*} is a hyperplane that supports C in x*.

Proof. Without loss of generality we can assume that int C = {x ∈ R^n | A(x) ≻ 0}, see [13, Corollary 5]. Then (c, 1) is strictly feasible for (P_2(v, d)), which is the well-known Slater's constraint qualification in convex optimization. Since v ∉ C, by convexity the first constraint is violated whenever t ≤ 0. Since C is closed, an optimal solution (x*, t*) to (P_2(v, d)) with t* ∈ [0, 1] exists. Slater's constraint qualification now implies strong duality, i.e. an optimal solution (U*, w*) to (D_2(v, d)) exists and the optimal values coincide. Next, let x ∈ C and observe that (w*)^T x is bounded by (w*)^T x* via a chain of (in)equalities whose third equality holds due to strong duality. Lastly, for x = x* we have equality, which proves the supporting property.

We want to describe the functioning of the algorithm geometrically before we present the details in pseudo code. The method consists of two phases. In the first phase an initial polyhedron P_0 such that P_0 ⊇ C and d̂_H(0^+P_0, 0^+C) ≤ δ is constructed as follows. For w ∈ int (0^+C)^• the set

M = {x ∈ 0^+C | w^T x = −1}    (13)

is a compact basis of 0^+C, i.e. 0^+C = cone M. We use a cutting scheme to compute a polyhedral δ-approximation M̄ of M with M ⊆ int M̄. If in (13) we additionally require ‖w‖ = 1, then

K = cone M̄

is a polyhedral cone with d̂_H(K, 0^+C) ≤ δ. Next, we need to construct a polyhedron P_0 with recession cone K that contains C. To this end we compute an H-representation (R, 0) of K and solve (P_1(r)) for every row r of R, that is, for every normal of the supporting hyperplanes that define K. Note that a solution always exists, because r ∈ int (0^+C)^• by construction. For a solution x*_r,

H_r = {x ∈ R^n | r^T x = r^T x*_r}

is a hyperplane that supports C in x*_r. For the initial approximation we then set

P_0 = ∩_r {x ∈ R^n | r^T x ≤ r^T x*_r}.

Clearly, it holds that 0^+P_0 = K and that P_0 has at least one vertex, because K is pointed.
In the second phase of the algorithm P_0 is refined by successively cutting off vertices until all vertices are within distance at most ε from C. This is achieved by iteratively intersecting P_0 with halfspaces that support C in some point of its boundary. To guarantee finiteness of the algorithm we restrict the computations to compact subsets P̄_0 of P_0 and C̄ of C, obtained by intersecting the sets with a halfspace H_+ whose normal is the same w as in (13), namely

P̄_0 = P_0 ∩ H_+   and   C̄ = C ∩ H_+.

How this halfspace can be computed will be discussed in Algorithm 3 below. A cutting scheme is then applied to compute an outer ε-approximation P̄ of C̄. Finally, an (ε, δ)-approximation of C is obtained as

P = P̄ + K.

We describe the aforementioned cutting scheme due to [17], which is used in the computation of an (ε, δ)-approximation for the special case of spectrahedral sets, in Algorithm 1.
The vectors e and e_i, i = 1, ..., n, in line 1 denote the vector in R^n with all components equal to one and the i-th unit vector, respectively. Since C is compact, it holds that int (0^+C)^• = R^n. Therefore, Proposition 5.1 implies that optimal solutions x*_w in line 1 always exist. Note that κ in line 12 is an upper bound on the Hausdorff distance between P and C due to the following observation. The Hausdorff distance between P and C is attained in a vertex of P, because C ⊆ P. Since the part x*_v of an optimal solution of (P_2(v, d)) is an element of the boundary of C, the distance ‖v − x*_v‖ is an upper bound for the distance from a vertex v to C. For the special class of spectrahedral sets the cutting scheme terminates after finitely many steps. This is proved in [5, Theorem 4.38].
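The geometric role of (P_2(v, d)), namely finding the boundary point of C on the segment between an outer point v and an interior point c, can be mimicked by bisection whenever a membership oracle for C is available. The following toy sketch (ours) uses the unit disc in place of a spectrahedron; in Algorithm 1 this step is a semidefinite program instead.

```python
# Find the boundary point of C on the segment from v (outside C) to c (inside C)
# by bisection on the step length t.  C is the unit disc here, with a simple
# membership oracle standing in for the semidefinite feasibility check.

def in_C(p):
    return p[0] ** 2 + p[1] ** 2 <= 1.0

def boundary_point(v, c, tol=1e-10):
    lo, hi = 0.0, 1.0          # t = 0 gives v (infeasible), t = 1 gives c (feasible)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        p = (v[0] + mid * (c[0] - v[0]), v[1] + mid * (c[1] - v[1]))
        if in_C(p):
            hi = mid
        else:
            lo = mid
    t = hi
    return (v[0] + t * (c[0] - v[0]), v[1] + t * (c[1] - v[1]))

p = boundary_point((2.0, 0.0), (0.0, 0.0))
```

Starting at v = (2, 0) and moving towards the center, the boundary is hit at (1, 0), and the distance from v to this point bounds the distance from v to C, as used for κ above.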
In contrast to approaches that require the set to be a polyhedron, our algorithm can handle a larger class of sets. Finally, the supporting hyperplane method approximates the set only in a neighbourhood of the optimal solution of the underlying optimization problem, while we are interested in an approximation of the whole set C.
We are now prepared to present Algorithm 2, an algorithm for the computation of (ε, δ)-approximations of closed and line-free spectrahedra with nonempty interior.
7 Compute an ε-approximation P̄ of C̄ according to Algorithm 1
8 P ← P̄ + K
9 return a V-representation of P

Steps 6 and 13 in Algorithm 1 and steps 5 and 9 in Algorithm 2 require the computation of a V-representation from an H-representation or vice versa. These problems are known as vertex enumeration and facet enumeration, respectively, and are difficult problems in their own right. It is beyond the scope of this paper to discuss them in more detail. Therefore, we only point out that there exist toolboxes that are able to perform these tasks numerically, such as bensolve tools [6,25]. In practice, however, the computations often become infeasible in dimensions three and higher when the number of halfspaces defining the polyhedron is large. It is also known that vertex enumeration for unbounded polyhedra is NP-hard, see [21]. Thus, since vertex enumeration has to be performed in every iteration of Algorithm 1 and for the unbounded polyhedron P in step 9 of Algorithm 2, one cannot expect the algorithms to be computationally efficient.
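In small dimensions the vertex enumeration step can be carried out by brute force; the following 2D sketch (ours, exponential in general and intended only for illustration) intersects every pair of constraint lines of an H-representation and keeps the feasible intersection points.

```python
import itertools

def vertices(A, b, tol=1e-9):
    """Brute-force vertex enumeration for {x in R^2 | A x <= b}:
    intersect every pair of boundary lines and keep feasible points."""
    V = []
    for (a1, b1), (a2, b2) in itertools.combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < tol:
            continue                      # parallel constraints, no vertex
        # Cramer's rule for a1.x = b1, a2.x = b2
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(a[0] * x + a[1] * y <= bb + tol for a, bb in zip(A, b)):
            V.append((x, y))
    return V

# unit square: x <= 1, -x <= 0, y <= 1, -y <= 0
A = [(1, 0), (-1, 0), (0, 1), (0, -1)]
b = [1, 0, 1, 0]
V = vertices(A, b)
```

For the unit square this recovers the four corners; real toolboxes such as bensolve tools use far more sophisticated methods.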
In order to prove that Algorithm 2 works correctly and is finite, we need the following preliminary result. Proposition 5.5. Let C ⊆ R n be a closed convex set and K ⊆ R n be a closed convex cone such that 0 + C \ {0} ⊆ int K. Then ext (C + K) is bounded.
Proof. We may assume that C + K is pointed; otherwise, ext (C + K) = ∅ and the statement is vacuous. Note that ext (C + K) ⊆ ext C, because for every x ∈ C and d ∈ K \ {0} it is true that x + d = ½(x + (x + 2d)) with x, x + 2d ∈ C + K, i.e. x + d is not an extreme point of C + K. Now, assume that ext (C + K) is unbounded. Then ext C is unbounded as well. Let {x_k}_{k∈N} be an unbounded sequence of extreme points of C + K. Without loss of generality we assume that {‖x_k‖}_{k∈N} is strictly monotonically increasing; if this condition is not satisfied, we can pass to a suitable subsequence. Define the sequence of radial projections of x_k − x_1 as

d_k = (x_k − x_1) / ‖x_k − x_1‖.

Since d_k ∈ B_1(0) for all k ≥ 2, it has a convergent subsequence. Again, without loss of generality, assume {d_k}_{k≥2} is itself convergent with limit d̄. According to [30, Theorem 8.2] it holds d̄ ∈ 0^+(C − {x_1}) = 0^+C. Since 0^+C \ {0} ⊆ int K and ‖d̄‖ = 1, there exists some k_0 ∈ N such that d_k ∈ K for all k ≥ k_0. This implies x_k − x_1 ∈ K for all k ≥ k_0, as K is a cone. Therefore, x_k ∈ {x_1} + K for all k ≥ k_0. However, this contradicts the assumption that x_k ∈ ext (C + K) for all k ∈ N.

We can now prove that Algorithm 2 works correctly.

Proof. Since C is closed and does not contain any lines, its recession cone is also closed and pointed. This implies that (0^+C)^• has nonempty interior, see e.g. [1, p. 53]. The direction w defined in line 1 is an element of (0^+C)^• according to (4). Note that w ≠ 0, because 0^+C ≠ {0} and the pointedness of 0^+C implies that the matrices A_1, ..., A_n are linearly independent [28, Lemma 3.2.9]. To see that w is indeed from the interior of (0^+C)^•, observe that w^T x < 0 for every x ∈ 0^+C \ {0}, where the last (strict) inequality holds because at least one eigenvalue of Ā(x) is positive. The set M defined in line 2 is compact, because w ∈ int (0^+C)^•. Note that M is not full-dimensional; however, treating its affine hull as the ambient space, M is a valid input for Algorithm 1 in line 3. By enlarging M̄ in line 4 it remains polyhedral, being the Minkowski sum of polyhedra.
The cone K is then polyhedral and it satisfies 0^+C \ {0} ⊆ int K and d̂_H(K, 0^+C) ≤ δ. The first assertion is immediate from the observation that 0^+C = cone M and M ⊆ int M̄. Secondly, it is true that ‖x‖ ≥ 1 + δ for every x satisfying w^T x = −(1 + δ). Therefore, ‖x‖ ≥ 1 for every x ∈ M̄ due to the construction of the set. Assume d̂_H(K, 0^+C) is attained as ‖k − c‖ and let α be chosen such that αk ∈ M̄, in particular α ≥ 1. Then the second claim follows from the fact that M̄ is a δ-approximation of M. Note that K is pointed, because 0 ∉ M̄. Since 0^+C \ {0} ⊆ int K, ext (C + K) is bounded according to Proposition 5.5. Therefore, its convex hull is bounded as well. Moreover, it is nonempty, because C + K is closed according to [30, Corollary 9.1.1] and line-free due to K being pointed; thus, [30, Corollary 18.5.2] guarantees the existence of an extreme point. Moreover, the set conv ext (C + K) is closed, cf. [15]. Together this implies that inf {w^T x | x ∈ conv ext (C + K)} is finite and attained, i.e. line 6 of the algorithm is well-defined. The additional shift of −ε on the right-hand side of the halfspace ensures that the intersection with C has nonempty interior. Furthermore, C̄ is compact, see [12, Theorem 12]. Hence, it is a valid input to Algorithm 1 in line 7 and an ε-approximation P̄ of C̄ is computed correctly. Now, we show that P = P̄ + K is an (ε, δ)-approximation of C. Since P̄ is compact, it holds 0^+P = K and P is line-free. We have already demonstrated d̂_H(K, 0^+C) ≤ δ. It remains to show C ⊆ P. From the fact that conv ext (C + K) ⊆ C̄ one obtains a decomposition of C + K in terms of (C + K) ∩ H_+ and K, where H_+ denotes the halfspace utilized in line 6, see [12, Corollary 2]. Moreover, it is easy to verify that ((C + K) ∩ H_+) + K = C + K, because the normal vector w of H_+ satisfies w ∈ K^•. We conclude C ⊆ P, which completes the proof of correctness.
Proof. This is a consequence of the finiteness of Algorithm 1, see [5, Theorem 4.38]: the executions of Algorithm 1 in lines 3 and 7 of Algorithm 2 terminate after finitely many steps, which implies that Algorithm 2 itself is finite.
The difficulty of Algorithm 2 is the determination of the halfspace in line 6. The reason is that, although min { wᵀx | x ∈ conv ext (C + K) } is a convex program, a representation of conv ext (C + K) as a spectrahedron, or a more general description in terms of convex functions, is not readily available. In fact, it is an open question whether conv ext (C + K) is a spectrahedron in the first place. So far, the only knowledge we have about conv ext (C + K) is acquired from Proposition 5.5, which implies its compactness and the existence of the halfspace (20). In order to deal with this limitation from a computational perspective, we suggest a modification of Algorithm 2. Note that in line 6 of the algorithm it suffices to intersect C with a halfspace H⁺ such that their intersection is bounded and the containment conv ext (C + K) ⊆ H⁺ is satisfied, cf. [12, Corollary 2]. The following variant of Algorithm 2 computes such a halfspace iteratively. It replaces lines 6-8 of the original algorithm.
Algorithm 3: Variant of lines 6-8 of Algorithm 2
1 Compute the set R of extreme directions of K°
2 for every r ∈ R do
3     Compute a solution x*_r of (P₁(r))
⋮
7     Compute an ε-approximation P̄ of C̄ according to Algorithm 1
8     α ← 2α
9     P ← P̄ + K
10 until C ⊆ P
After computing the set of extreme directions of K° via vertex enumeration, problem (P₁(r)) is solved for every extreme direction r of K°. Solutions x*_r exist according to Proposition 5.1. These solutions give rise to an initial halfspace H⁺ with normal vector w, the direction computed in line 1 of Algorithm 2, and right hand side α. A compact subset C̄ of C is obtained as the intersection of C and H⁺. It has nonempty interior because x*_r ∈ int H⁺ and int C ≠ ∅. Moreover, α < 0. Now, Algorithm 1 is used to compute a polyhedral outer approximation P̄ of C̄ with tolerance ε, and it is checked whether P = P̄ + K is an (ε, δ)-approximation of C by verifying the containment C ⊆ P. If the containment holds, the algorithm is terminated and P is returned as a solution. Otherwise, a new compact subset is obtained by doubling the value of α in the definition of H⁺, which corresponds to a shift of H⁺ in the direction −w, and the approximation is repeated.
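The doubling of α can be illustrated on a toy instance. In the Python sketch below, the set C = epi x², the cone K, and the characterization of ext (C + K) are our own assumptions, and the termination test uses the known extreme points directly instead of the semidefinite programs behind line 10.

```python
import numpy as np

# Toy instance (assumed, not from the paper): C = epigraph of x**2 in R^2,
# K = cone{(-0.1, 1), (0.1, 1)}, w = (0, -1) ∈ int (0+C)°.
# A boundary point (t, t**2) is an extreme point of C + K exactly when
# the slope 2t stays within the slopes ±10 of the generators of K,
# i.e. |t| <= 5, so conv ext(C + K) is bounded.
ext_pts = np.array([[t, t**2] for t in np.linspace(-5.0, 5.0, 101)])
w = np.array([0.0, -1.0])

# Doubling loop: enlarge H+ = {x : w^T x >= alpha} until it contains
# conv ext(C + K), i.e. until min over extreme points of w^T x >= alpha.
alpha, rounds = -1.0, 0
while (ext_pts @ w).min() < alpha:
    alpha *= 2.0        # shift H+ in the direction -w
    rounds += 1

print(alpha, rounds)    # -32.0 5
```

Starting from α = −1, five doublings reach α = −32 ≤ −25 = min wᵀx over the extreme points, at which point the halfspace {x | wᵀx ≥ α} contains conv ext (C + K) and the loop stops.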
The containment C ⊆ P can easily be verified using semidefinite programming. Suppose (A, b) is an H-representation of P for A ∈ R^{m×n} and b ∈ R^m. Let aᵢ ∈ Rⁿ denote the i-th row of A. Then C ⊆ P if and only if sup_{x∈C} aᵢᵀx ≤ bᵢ for every i = 1, …, m. This follows from the fact that the hyperplane {x ∈ Rⁿ | aᵢᵀx = sup_{x∈C} aᵢᵀx} is a supporting hyperplane of C whenever aᵢ ≠ 0 and the right hand side is finite. The value sup_{x∈C} aᵢᵀx is obtained by solving problem (P₁(aᵢ)). Thus, the containment in line 10 can be verified by computing an H-representation of P and solving m semidefinite programs. Since the set conv ext (C + K) is bounded according to Proposition 5.5, it is eventually contained in C̄ and the algorithm terminates.
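The support-function criterion sup_{x∈C} aᵢᵀx ≤ bᵢ can be sketched with a set whose support function is available in closed form. Here a Euclidean ball stands in for the spectrahedron C; the actual algorithm would instead solve (P₁(aᵢ)) for each row.

```python
import numpy as np

def ball_support(a, center, radius):
    """sup { a^T x : x in C } for C a Euclidean ball (closed form)."""
    return a @ center + radius * np.linalg.norm(a)

def contains(A, b, center, radius, tol=1e-12):
    """Check C ⊆ P for P = {x : A x <= b} via the criterion
    sup_{x in C} a_i^T x <= b_i for every row a_i of A."""
    return all(ball_support(A[i], center, radius) <= b[i] + tol
               for i in range(A.shape[0]))

# P = [-1, 1]^2 as an H-representation (A, b):
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)

print(contains(A, b, np.zeros(2), 0.5))   # True
print(contains(A, b, np.zeros(2), 1.5))   # False
```

For a spectrahedron each call to the support function becomes one semidefinite program, which matches the count of m SDPs per containment check stated above.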
We close this section by illustrating Algorithm 2 with the following two examples.
Example 5.1. Consider the spectrahedron C ⊆ R² that is the intersection of the epigraphs of the functions x ↦ 1/x, restricted to the positive real line, and x ↦ x². We use the solver SDPT3 [34, 35] and the software bensolve tools [6, 25] to solve the semidefinite subproblems and perform vertex and facet enumeration, respectively. The algorithm is implemented in GNU Octave [9]. Figure 3 shows the polyhedral approximations of C at different stages of Algorithm 2 for the tolerances (ε, δ) = (0.1, 0.1).
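A membership test for C illustrates the spectrahedral description. The block-diagonal matrix inequality below is one possible representation (the paper's defining matrices are not reproduced in this excerpt): the first block encodes x₂ ≥ 1/x₁ on the positive real line, the second x₂ ≥ x₁².

```python
import numpy as np

def in_C(x1, x2, tol=1e-9):
    """Membership in C via a (hypothetical) block-diagonal LMI A(x) >= 0:
    [[x1, 1], [1, x2]] >= 0  encodes  x1 > 0 and x2 >= 1/x1,
    [[1, x1], [x1, x2]] >= 0 encodes  x2 >= x1**2."""
    A = np.zeros((4, 4))
    A[:2, :2] = [[x1, 1.0], [1.0, x2]]
    A[2:, 2:] = [[1.0, x1], [x1, x2]]
    return np.linalg.eigvalsh(A).min() >= -tol

print(in_C(1.0, 2.0))    # True:  2 >= 1/1 and 2 >= 1
print(in_C(1.0, 0.5))    # False: 0.5 < 1/1
print(in_C(-1.0, 2.0))   # False: x1 must be positive
```

Each subproblem (P₁(r)) of the algorithm maximizes a linear functional subject to exactly this kind of linear matrix inequality.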
Computational results for different values of ε and δ are presented in Table 1. It can be seen that the number of subproblems that have to be solved is larger than the number of vertices of the polyhedral approximation. The reason is that one instance of (P₂(v, d)) is solved for every vertex of the current approximation in every iteration of Algorithm 1, but only one of these vertices is cut off. Moreover, the number of solved subproblems grows quickly as ε decreases, because more iterations of Algorithm 1 are needed to reach the desired accuracy and the number of solved subproblems grows with every iteration. Since the recession cone of C is just a ray and easy to approximate, most of the computational effort is put into approximating C̄ in line 7. However, for fixed ε and decreasing δ the number of solved subproblems grows as well. This is due to the fact that C̄ depends on the approximate recession cone K. As δ decreases, the rays generating K will be closer to each other with respect to the truncated Hausdorff distance. Therefore, the set C̄ will have a larger area and it takes more iterations to compute an ε-approximation of it. Note that for (ε, δ) equal to (0.3, 0.2), (0.5, 0.15) or (0.5, 0.2) the same number of subproblems is solved and the approximations have the same number of vertices. For the tolerances (0.3, 0.2) and (0.5, 0.2) the values are identical, because during the approximation of C̄ the approximation error in Algorithm 1 changes from a value larger than 0.5 to a value smaller than 0.3 in one iteration. Therefore, the resulting (ε, δ)-approximations are identical. For (ε, δ) = (0.5, 0.15) the approximation is different, and it is a coincidence that the values coincide.
Figure 3: (a) The approximate recession cone K as computed in lines 1-5. (b) P₀ is the initial approximation of C according to (16); the grey hatched area is the compact subset C̄ of C computed in line 6.

Example 5.2. Algorithm 2 can also be used to compute polyhedral approximations of closed and pointed convex cones. Consider for example the cone S of positive semidefinite 2 × 2 matrices. It is a closed and pointed convex cone with nonempty interior. Thus, we can apply Algorithm 2 to it. Since S is a cone, its only vertex is the origin, and we can terminate the algorithm as soon as K has been computed in line 5. Then K is a polyhedral cone and it holds d̄_H(K, S) ≤ δ. Figure 4 shows a polyhedral approximation of S with 20 extreme rays and d̄_H(K, S) ≤ 0.1.
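The structure exploited in this example can be reproduced in a few lines: the extreme rays of S are exactly the rank-one matrices vvᵀ, so sampling directions v on the half circle yields generators of a polyhedral cone approximating S. This is an illustrative construction, not the K computed by Algorithm 2.

```python
import numpy as np

# 20 directions on the half circle; v and -v give the same ray v v^T,
# so [0, pi) suffices.
ts = np.linspace(0.0, np.pi, 20, endpoint=False)
vs = np.stack([np.cos(ts), np.sin(ts)], axis=1)   # unit vectors
rays = [np.outer(v, v) for v in vs]               # rank-one PSD matrices

# Every generator lies on the boundary of S: eigenvalues {0, 1}.
evs = np.array([np.linalg.eigvalsh(R) for R in rays])
print(len(rays))  # 20
```

The conic hull of these rays is a polyhedral inner approximation of S whose truncated Hausdorff distance to S shrinks as the angular spacing of the sampled directions decreases.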

Conclusion
We have introduced the notion of (ε, δ)-approximations for the polyhedral approximation of unbounded convex sets. Since polyhedral approximation in the Hausdorff distance can only be achieved for unbounded sets under restrictive assumptions, (ε, δ)-approximations are of particular interest, because they allow treatment of a larger class of sets. An important observation is that the recession cones of the involved sets must play a crucial role in a meaningful concept of approximation for unbounded sets. We have shown that (ε, δ)-approximations define a suitable notion of approximation in the sense that a sequence of such approximations converges and that (ε, δ)-approximations generalize the polyhedral approximation of compact sets with respect to the Hausdorff distance. Finally, we have presented an algorithm that allows for the computation of (ε, δ)-approximations of spectrahedra and have shown that the algorithm is finite.

Data Availability Statement
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.