An inner approximation method to compute the weight set decomposition of a triobjective mixed-integer problem

This article is dedicated to the weight set decomposition of a multiobjective (mixed-)integer linear problem with three objectives. We propose an algorithm that returns a decomposition of the parameter set of the weighted sum scalarization by solving biobjective subproblems via Dichotomic Search, which corresponds to a line exploration in the weight set. Additionally, we present theoretical results regarding the boundary of the weight set components that direct the line exploration. The resulting algorithm runs in output polynomial time, i.e., its running time is polynomial in the encoding length of both the input and the output. Moreover, the proposed approach can be applied to each weight set component individually and is able to give intermediate results, which can be seen as an "approximation" of the weight set component. We compare the running time of our method with that of an existing algorithm and conduct a computational study that shows the competitiveness of our algorithm. Further, we give a state-of-the-art survey of algorithms in the literature.


Introduction
Multiobjective optimization problems have gained a lot of attention lately, especially when it comes to problems with integer or both integer and continuous variables. From a theoretical point of view, many well-studied single objective optimization problems have been extended to a multiobjective setting, since real-world problems naturally often have more than one objective function. Instead of combining these into one objective function via weights and thereby fixing the importance of the different objective functions, one is interested in obtaining several or all "optimal" solutions to provide decision support for the user. All optimal solutions of a multiobjective problem, also known as efficient solutions, and their objective function values, denoted as nondominated images, are in general not computable by combining the objective functions and then solving such a weighted sum problem. Still, this weighted sum scalarization is a useful and broadly applied scalarization for multiobjective optimization, even though the optimal solutions obtained, the so-called supported efficient solutions and their corresponding supported nondominated images, are only a fraction of all efficient solutions and all nondominated images, respectively. The number of supported nondominated images may be exponential in the size of the input; this holds true even if we restrict ourselves to extreme supported nondominated images, which are additionally extreme points of the convex hull of all images. The weight set decomposition has become a useful tool to find all extreme supported nondominated images: The weight set is the set of eligible parameters for the weighted sum scalarization, i.e. all positive weights. This set can be decomposed into weight set components, one for every extreme supported nondominated image, such that each component consists of those weights for which the corresponding image is optimal for the weighted sum problem with this particular weight.
As the weight set components are convex polytopes, this structure is of considerable aid when computing extreme supported nondominated images. Moreover, knowledge about the weight set provides insights about the adjacency structure of the nondominated set which expands the work of Gorski, Klamroth, and Ruzika [13].
In this article, we present a new algorithm for computing the weight set decomposition and, thereby, the set of extreme supported nondominated images of a triobjective mixed-integer linear problem. It is applicable to a broad class of multiobjective problems. For this purpose, we make use of the properties of the weight set components and of the Dichotomic Search. Furthermore, we prove that our algorithm runs in output polynomial time, that is, its running time is polynomial in the encoding length of both the input and the output. In a computational study, we examine and present the benefits of our algorithm for practical use. Additionally, the algorithm has various innovative properties, such as the fact that at every intermediate step the current subset of the weight set component is a convex polytope. This property can be used for the computation of an "approximation" of weight set components. However, this is not the focus of this article.

Related work
For biobjective problems, the set of all extreme supported nondominated images can be computed by Dichotomic Search [2,7]: Starting with the two lexicographically optimal images, the normalized normal vector of the line passing through these images is used as a weight vector for the weighted sum problem. Solving this problem, either the two existing images are optimal or a new supported nondominated image with a better objective function value is computed. In the former case, the algorithm stops searching for new nondominated images between the existing ones and continues investigating other pairs of images. In the latter case, the procedure is repeated with the newly found image and either of the original images. This procedure yields a superset of the set of extreme supported nondominated images and also provides the weight set decomposition of the one-dimensional weight set.
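As an illustration, the biobjective Dichotomic Search described above can be sketched over a finite image set. The function `solve_weighted_sum` stands in for a hypothetical weighted sum oracle; the names and the tolerance are our own choices, not taken from the article.

```python
def dichotomic_search(solve_weighted_sum, y_ul, y_lr, eps=1e-9):
    """Sketch of biobjective Dichotomic Search.

    solve_weighted_sum(lam) returns an image y minimizing
    lam[0]*y[0] + lam[1]*y[1]; y_ul and y_lr are the two
    lexicographically optimal images.
    """
    images = {tuple(y_ul), tuple(y_lr)}
    stack = [(tuple(y_ul), tuple(y_lr))]
    while stack:
        y1, y2 = stack.pop()
        # normal vector of the line through y1 and y2 (positive components)
        lam = (y1[1] - y2[1], y2[0] - y1[0])
        y = tuple(solve_weighted_sum(lam))
        if lam[0]*y[0] + lam[1]*y[1] < lam[0]*y1[0] + lam[1]*y1[1] - eps:
            # strictly better value: a new supported nondominated image
            images.add(y)
            stack.append((y1, y))
            stack.append((y, y2))
        # otherwise: no new image between y1 and y2
    return sorted(images)
```

The pairs of images for which no better image is found correspond exactly to the segments of the one-dimensional weight set decomposition.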
For problems with more than two objective functions, only a few methods for finding all extreme supported nondominated images are known. These methods can be categorized into two approaches: the methods of Bökler and Mutzel [5], Özpeynirci and Köksalan [21], and Przybylski, Gandibleux, and Ehrgott [22] iteratively decompose the weight set, implicitly or explicitly, using supersets of the components of any known supported nondominated image. In contrast, Alves and Costa [1] use subsets of the components of any known supported nondominated image and iteratively extend the known area of the weight set components.
Przybylski, Gandibleux, and Ehrgott [22] propose the first algorithm to find all extreme supported nondominated images of general multiobjective (mixed-)integer problems and present basic properties of the weight set. Originally, their algorithm has been designed for three objectives but can be recursively applied to problems with more objectives. Initialized with the lexicographically optimal images, it decomposes the weight set into components using the known images. That is, the weight set is decomposed such that a weight vector belongs to the component of an image if the weighted sum of this image is better than that of the other known images. For two adjacent components, the common subedge is examined using Dichotomic Search, and either it is verified that this is indeed a common edge or a new image is found. This is repeated until all edges of all supersets of the weight set components are indeed edges of the components. Additionally, Przybylski, Gandibleux, and Ehrgott [22] are the first to provide a computational study of a weight set decomposition algorithm for problems with three objectives.
Özpeynirci and Köksalan [21] present an algorithm that implicitly utilizes supersets of weight set components. It generalizes the work of Aneja and Nair [2] and Cohon [7] and can be seen as a multiobjective Dichotomic Search. The algorithm is initialized with infeasible but nondominated "dummy images", and the hyperplane through these points is considered. Using the (positive-valued) normal vector of this hyperplane as a weight vector for the weighted sum scalarization, a new supported nondominated image is returned. In the following, the hyperplanes in which the new image replaces one previous image are considered. If the normal vector of such a hyperplane is not positive, an existing image is used to compute a new weight vector and the algorithm proceeds with a call to the weighted sum procedure. This is repeated until no new nondominated image is found. The authors provide additional improvements and give the first computational study of weight set decomposition for problems with four objectives. However, even though it is stated that the algorithm works for multiple objective mixed-integer problems, the algorithm that computes the dummy points presented by Özpeynirci [20] needs integer objective function values.
In their work on finding extreme supported solutions of multiple objective combinatorial problems, Bökler and Mutzel [5] make use of the previous work of Ehrgott, Löhne, and Shao [9], Hamel et al. [15], and Heyde and Löhne [16], where a geometric dual problem is introduced. They show that there is a one-to-one correspondence between optimal facets of the dual polyhedron (using the extreme points of the dual) and the weight set components of the extreme supported nondominated images. This gives rise to an implicit way to compute the weight set decomposition. Starting with one lexicographically optimal image of the original problem, they compute a polyhedron containing the dual polyhedron. Using an extreme point of this polyhedron as a weight vector, the corresponding weighted sum problem either certifies that the extreme point is also an extreme point of the dual polyhedron, or a new supported nondominated solution together with a face-defining inequality to trim the polyhedron is returned. This is repeated until the polyhedron is indeed the dual polyhedron. A further improvement is made using a lexicographic variant of the weighted sum scalarization, and it is proven that the algorithm runs in output polynomial and even incremental polynomial time for a fixed number of objective functions, given a polynomial-time algorithm for the single objective problem. Bökler and Mutzel [5] are the first to state a running time for their algorithm and to provide a computational study for more than four objectives. Alves and Costa [1] propose a method that explicitly utilizes subsets of the weight set component of any known supported nondominated image. This last method has been mainly designed for interactive use, and specifically for the triobjective case. However, one can also determine the set of all extreme supported nondominated images with this algorithm.
Starting with an arbitrary weight vector of the weight set, any time a weighted sum problem is solved and an optimal solution is obtained, information from the branch and bound tree is used to derive a subset of the corresponding weight set component. Given such subsets, the algorithm enlarges the current subset of the weight set component by merging subsets of the same component via a convex hull routine or by using theoretical results on adjacency between components. Alternatively, Alves and Costa [1] again take information from the weighted sum problem, using a weight vector slightly outside of the current subset. By adjacency, this procedure can be rerun with any adjacent component to iteratively explore and compose the whole weight set.

Our contribution
Given a triobjective mixed-integer linear problem, we propose an algorithm for computing the weight set decomposition and all extreme supported nondominated images. The algorithm utilizes subsets of the weight set components of all known extreme supported nondominated images. These subsets are also convex polytopes, a fact that we take advantage of in the algorithm. Moreover, the algorithm heavily relies on a variant of the Dichotomic Search that we apply in various ways: For the initialization, we examine the boundary of the weight set with Dichotomic Search in order to obtain some extreme supported nondominated images as well as the common edge segments of the boundary and the related components. Further, given a subset of a component, which may consist of just an edge, we examine the weight set by searching, via Dichotomic Search, along the line perpendicular to an edge of the subset for which we do not yet know whether it is an edge of the component. From this line search, we obtain new points on the boundary of the weight set component and possible adjacent images. This information is then used by several theoretical results that state whether we have found an edge or a subedge of the component. In the latter case, we again apply Dichotomic Search along the extension of the line through the subedge in order to find the whole edge of the component. For our algorithm, we are able to provide a running time that is output polynomial for a fixed number of objective functions, and with some minor requirements the algorithm runs in incremental polynomial time. The asymptotic running time is competitive with that of the method of Bökler and Mutzel [5], and ours is, besides theirs, the only algorithm with an output-sensitive (and incremental) running time. Not only is this algorithm a new combination of well-known methods with further developments and therefore easy to comprehend, it also has several innovative traits that are of advantage for practical use.
The algorithm explores the weight set components one by one, so if the user is interested in the weight set component of a single image, they can use this algorithm solely for this particular image. Also, when exploring a component, at any intermediate step the subset of the component computed by the algorithm is a convex polytope and therefore fulfils important properties of the weight set component. Further, all extreme points of the subset lie on the boundary of the component. Due to these properties, the intermediate subset can be seen as an "approximation" of the weight set component, which has some practical benefits. Additionally, if we expand the subset in a certain direction, it is guaranteed that the maximum expansion in this direction is achieved.
The remainder of the article is structured as follows: In Sect. 2 we introduce necessary notation, definitions and properties concerning multiobjective optimization and weight set decomposition. In the next section, we state further theoretical results for weight set decomposition and weight set components. On these results we build our algorithm, which we present in Sect. 4. Additionally, we prove correctness, analyse the running time, and compare our algorithm with existing ones in the literature both via a computational study and via an analysis of the asymptotic running time and general advantages and disadvantages of the approaches. We conclude the article in Sect. 5 with a summary of our results and an outlook to future topics of research.

Preliminaries
In the following, some basic definitions and concepts of multiple objective programming and polyhedral theory are stated. For more detailed introductions, we refer to Ehrgott [8] and Ziegler [25].
A multiobjective mixed-integer linear problem with p ≥ 2 objectives, n_1 > 0 discrete, and n_2 ≥ 0 continuous variables can be concisely stated as

min { Cx : Ax ≤ b, x ∈ Z^{n_1} × R^{n_2} }     (MOP)

with n := n_1 + n_2. Remark, we allow the case that n_2 = 0, i.e., integer and combinatorial problems. Let X := {x ∈ Z^{n_1} × R^{n_2} : Ax ≤ b} be the set of feasible solutions (referred to as feasible set) and Y := {Cx : x ∈ X} ⊆ R^p the set of images. The Euclidean vector spaces R^n and R^p comprising the set of feasible solutions and the set of images are called decision space and criterion space (or image space), respectively. We clarify the relation between these sets by the function z : R^n → R^p, z(x) := Cx. Since there does not exist a canonical order on R^p for p ≥ 2, the following variants of the componentwise order are used. For y1, y2 ∈ R^p and N = {1, . . . , p}, we define

y1 ≦ y2 :⇔ y1_k ≤ y2_k for all k ∈ N,
y1 ≤ y2 :⇔ y1 ≦ y2 but y1 ≠ y2,
y1 < y2 :⇔ y1_k < y2_k for all k ∈ N.

For p ∈ N, the nonnegative orthant is defined as R^p_≥ := {r ∈ R^p : r ≥ 0} and, likewise, the sets R^p_≧ and R^p_> are defined. An image y* ∈ Y is nondominated (weakly nondominated) if there is no other image y ∈ Y such that y ≤ y* (y < y*). Analogously, x ∈ X is called efficient (weakly efficient) if z(x) is nondominated (weakly nondominated). The set of all nondominated (weakly nondominated) images is referred to by Y_N (Y_wN) and the set of (weakly) efficient solutions by X_E (X_wE).
The convex hulls of X and Y, denoted by conv X and conv Y, are polyhedra. Recall that the dimension of a polyhedron P ⊆ R^p is the maximum number of affinely independent points of P minus one. For w ∈ R^p and q ∈ R, the inequality wᵀy ≤ q is called valid for P if P ⊆ {y ∈ R^p : wᵀy ≤ q}. A set F ⊆ P is a face of P if there is some valid inequality wᵀy ≤ q such that F = {y ∈ P : wᵀy = q}. As a face is itself a polyhedron, we can adopt the notion of dimension and distinguish special faces: an extreme point is a face of dimension 0, an edge is a one-dimensional face, and a facet has dimension p − 1. The definition of nondominance can also be applied to any subset S of R^p, in particular to a polytope P ⊂ R^p and to its faces. Consequently, the set of nondominated points of S ⊂ R^p is denoted by S_N.
Given some permutation π of the set {1, . . . , p} and the associated order of the objective functions, we refer to the problem

lexmin_{x ∈ X} (z_{π(1)}(x), . . . , z_{π(p)}(x))

as the lexicographic mathematical programming problem with respect to the permutation π. This optimization problem can be understood as minimizing the objective function z_{π(1)} first. Then, over the set of minimizers of z_{π(1)}, the next objective function z_{π(2)} is minimized, and so on. Varying the permutation π, different solutions and their images can be found, the so-called lexicographically optimal solutions (images). Clearly, these images are nondominated. For biobjective problems, there exist two lexicographically optimal images, y^{UL} := lexmin_{x ∈ X}(z_1, z_2) and y^{LR} := lexmin_{x ∈ X}(z_2, z_1).
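Over a finite image set, the sequential minimization just described can be sketched as a simple filtering loop (a toy stand-in, not the article's solver; names are ours):

```python
def lexmin(images, order):
    """Lexicographically minimal image for a given objective order:
    minimize the first objective in `order`, then the next objective
    over the remaining minimizers, and so on."""
    cand = list(images)
    for k in order:
        best = min(y[k] for y in cand)
        cand = [y for y in cand if y[k] == best]
    return cand[0]
```

For a biobjective instance, `lexmin(images, (0, 1))` and `lexmin(images, (1, 0))` yield the two lexicographically optimal images y^{UL} and y^{LR}.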
Scalarization methods are useful tools to compute efficient solutions. The most common method is the weighted sum method introduced by Zadeh [24], which linearly combines the objective functions to obtain the scalar-valued optimization problem

min { λᵀCx : x ∈ X }.     (Π_λ)

Every optimal solution to (Π_λ) is a (weakly) efficient solution of the original problem if λ ∈ R^p_> (λ ∈ R^p_≥) [11]. The image y of a solution x that can be obtained by the weighted sum method is called a supported nondominated image. The set of supported nondominated images is denoted by Y_SN. All other nondominated images are called unsupported. A supported nondominated image y ∈ Y is an extreme supported nondominated image if y cannot be expressed as a convex combination of points in Y_N \ {y}. Prominent examples of such images are the lexicographically optimal images. The set of extreme supported nondominated images is denoted by Y_ESN. All other supported nondominated images are called nonextreme. For efficient solutions in X_E, we use these terms analogously. Since normalization of the weight vector does not change the set of optimal solutions, we consider the normalized weight set for the weighted sum scalarization

Λ_> := { λ ∈ R^p_> : Σ_{k=1}^p λ_k = 1 }.

Note that Λ_> is not closed and thus not compact. It turns out to be convenient to consider the closure of the normalized weight set,

Λ := { λ ∈ R^p_≥ : Σ_{k=1}^p λ_k = 1 }.

The set Λ is a polytope of dimension p − 1; in particular, it is two-dimensional in the triobjective case, and there is a bijection between Λ and { λ ∈ R^{p−1}_≥ : Σ_{k=1}^{p−1} λ_k ≤ 1 }, see Fig. 1 for an example. In the following, for the sake of simplicity, we address both sets as the weight set denoted by Λ. It should be clear from the usage of λ ∈ Λ whether its dimension is p or p − 1, and we automatically add or remove one entry of the vector if needed. However, we denote the p-dimensional λ as a weight vector and the (p − 1)-dimensional λ as a weight point in order to highlight the usage in a scalarization or polyhedral context.
For y ∈ Y, we denote by Λ(y) the weight set component of y, defined by

Λ(y) := { λ ∈ Λ : λᵀy ≤ λᵀy′ for all y′ ∈ Y }.

This corresponds to the subset of weights λ ∈ Λ for which y is the image of an optimal solution of (Π_λ). Przybylski et al. [22] presented basic properties of the weight set.
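Over a finite image set, membership of a weight vector in Λ(y) reduces to checking that y attains the minimal weighted sum value; a minimal sketch (the tolerance and the function name are our own choices):

```python
import numpy as np

def in_component(lam, y, images, tol=1e-9):
    """True iff lam lies in the weight set component Lambda(y), i.e.
    y minimizes the weighted sum lam^T y' over all images y'."""
    lam = np.asarray(lam, dtype=float)
    value = float(lam @ np.asarray(y, dtype=float))
    return all(value <= float(lam @ np.asarray(z, dtype=float)) + tol
               for z in images)
```

Note that the non-strict inequality makes the components closed sets whose relative interiors are disjoint, in line with the decomposition of Λ.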

Proposition 2.1 (Przybylski et al. [22]) Let S be a set of supported nondominated images. Then the following properties hold.
Proposition 2.1(5) gives a certificate for a set of supported nondominated images S to subsume the set Y_ESN. This certificate requires that Λ(y) can be computed for any supported nondominated image y, which can easily be done if Y_ESN is known, using Proposition 2.1(1). However, Y_ESN is not known initially. Additionally, the notion of adjacency between extreme supported nondominated images is introduced, and a result which justifies that adjacency is well defined and symmetric is provided.

Definition 2.2 (Przybylski et al. [22]) Two extreme supported nondominated images y1 and y2 are called adjacent if their common facet Λ(y1) ∩ Λ(y2) in the weight set is a polytope of dimension p − 2.
A similar result for the two-dimensional weight set has been obtained by Alves and Costa [1], which gives another perspective on this issue:

Proposition 2.3 (Alves and Costa [1]) Let λ0, λ1, λ2 ∈ Λ be three weight vectors that are located on the same line segment, with λ1 lying between λ0 and λ2. Let the nondominated images y1 and y2 be the images of optimal solutions for (Π_λ0) and (Π_λ1), respectively. Let y1 also be the image of an optimal solution for (Π_λ2). Then y2 is also the image of an optimal solution of (Π_λ2).

Proposition 2.4 (Alves and Costa [1])
The last two results are visualized in Fig. 2 and give rise to an algorithmic idea: If two subsets of the weight set components of y 1 and y 2 have an intersection that is not reduced to a single weight vector, then this intersection either defines the complete common edge of both subsets or it can be extended for one or both subsets, e.g. the one of y 2 up to λ 2 .
At last, we recall notions regarding running time and complexity of optimization problems; for a basic introduction see the book of Arora and Barak [3]. General multiple objective mixed-integer and integer problems are NP-hard and only particular cases are polynomial-time solvable, see Figueira et al. [10]. In order to quantify the asymptotic running time for problems with a large output, the notion of output-sensitive complexity was developed by Johnson et al. [17] and recently reused by Bökler and Mutzel [5] and Bökler et al. [4] in the multiple objective context. Instead of defining the complexity for enumeration problems and enumeration algorithms, we introduce it solely for multiple objective problems. Given such a problem (MOP) with reasonably small encoding length of the output and with the cardinality of the output denoted by k, the problem is solvable in output polynomial time if there is an algorithm that solves the MOP such that

1. its running time is polynomial in the encoding length of the input and the output,

and it is solvable in incremental polynomial time if additionally

2. the ith delay, for i = 0, . . . , k, is polynomial in the encoding length of the input and of that subset of the output that has been returned before the ith solution is returned.
Here, the ith delay for i = 1, . . . , k − 1 characterizes the time between the output of the ith and (i + 1)th solution. The 0th delay is the time between the start of the algorithm and its first output and likewise the kth delay is the time between the last output and the termination of the algorithm.

Foundations
In this section, we present the fundamental parts of our algorithm. Given an (extreme) supported nondominated image y, we compute its weight set component Λ(y) by starting with a subset of the component, which we call an inner approximate component and denote by L(y). We demand some specific requirements of this subset: basically, the inner approximate component fulfills almost every property of a weight set component except for completeness, that is, not every weight vector for which y is optimal is necessarily contained in L(y). Also, adjacency between inner approximate components, or between an inner approximate component and a weight set component, may not hold, although the corresponding weight set components are adjacent.
Next, we specify the faces of such an inner approximate component.

Definition 3.2 Let L(y) be an inner approximate component of Λ(y) and F a face of L(y).
Then, F is a final face if it is known that F is a face of Λ(y). Otherwise, F is an interim face.
In particular, for edges we have interim and final edges.
Remark that every face of Λ(y) and every extreme point of L(y) is by definition a final face. As these extreme points are located on the boundary of the weight set component, and in order to avoid confusion when using the notion of extreme, we call a point λ on the boundary of Λ(y) an intersection point. If an intersection point λ is indeed an extreme point of Λ(y), then it is an extreme intersection point. The notion emerges from the fact that these points lie in the intersection of two weight set components of supported nondominated images, λ ∈ Λ(y) ∩ Λ(y′) with y ≠ y′. In this case, we also say that λ separates y and y′, and L(y) ∩ L(y′) ≠ ∅ holds. The same terminology can be used for other faces. Further, we assume that L(y) \ L(y′) ≠ ∅ and L(y′) \ L(y) ≠ ∅. This assumption will be verified by construction, as we will see in the description of the algorithm in Sect. 4.
The algorithm for triobjective optimization problems utilizes inner approximate components and successively enlarges them until it is verified that L(y) = Λ(y) holds true, first for some and eventually for every extreme supported nondominated image. This general technique is split into its main features, which will be discussed in the following subsections. In most of these features, a line search in the weight set is conducted: given two points λ1, λ2 ∈ Λ, we search the line between these points by transforming the triobjective problem into a biobjective problem, the Partial Weighted Sum Problem

min { WCx : x ∈ X },     (Π_W)

where the rows of the matrix W are given by the weight vectors corresponding to λ1 and λ2. Solving this problem via Dichotomic Search returns a weight set decomposition for the biobjective problem: we get solutions x1, . . . , xd of (Π_W) and weight vectors w0, w1, . . . , wd. These weight vectors can be used to obtain information for the weight set decomposition of (MOP), as is proven in [14]:

Remark 3.3
If W is a matrix as described above such that W does not have zero-valued columns, then the Dichotomic Search on (Π_W) returns supported nondominated images of (MOP).

This result on line search is applied by the perpendicular search, the extend edge routine, and, in a slightly different fashion, for the initialization.
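The correspondence underlying this line search can be made concrete: a biobjective weight w = (w1, w2) for (Π_W) corresponds to the triobjective weight w1·λ1 + w2·λ2, since wᵀ(WCx) = (w1λ1 + w2λ2)ᵀCx, so both problems share their minimizers. A small sketch under this reading (function names are our own):

```python
import numpy as np

def pws_matrix(lam1, lam2):
    """PWS matrix W whose rows are the two weight vectors spanning
    the line segment searched in the weight set."""
    return np.array([lam1, lam2], dtype=float)

def lift_weight(w, lam1, lam2):
    """Map a biobjective weight w for (Pi_W) back to the normalized
    triobjective weight w[0]*lam1 + w[1]*lam2."""
    lam = (w[0] * np.asarray(lam1, dtype=float)
           + w[1] * np.asarray(lam2, dtype=float))
    return lam / lam.sum()
```

Because the lifted weight is a positive combination of λ1 and λ2, it lies on the searched line segment of the weight set, which is exactly how the biobjective decomposition transfers back to (MOP).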

Perpendicular search
Given an inner approximate component L(y) for a supported nondominated image y, it is clear that in the triobjective case dim L(y) ≤ 2. In particular, we are interested in the case where L(y) does not consist of only one point, so we have at least one edge of L(y). Suppose we have an interim edge E of L(y); we discuss other cases later. Now, as the goal is to compute Λ(y), we aim to enlarge L(y). For this purpose, we use the procedure Perpendicular Search, see Algorithm 1.

Algorithm 1 Perpendicular Search
Ensure: An intersection point λ* that separates y from a supported image y*.

Basically, the line perpendicular to E, starting from the midpoint of E and ending at some weight point λ_b, is searched via line search. In particular, λ_b may be chosen on the boundary of Λ. Then, depending on whether y is found or not, we get a new intersection point, which may lie on the interim edge E. Next, the correctness of this procedure is proven.

Proposition 3.4 The weight point λ* returned by Perpendicular Search is an intersection point of Λ(y). Moreover, y* is a supported nondominated image.
Proof When performing the line search on I, the corresponding PWS matrix W does not have a zero-valued column: W has a zero-valued column only if both λ_a and λ_b are located on the same edge of the boundary of Λ. However, as I is perpendicular to E, either λ1 ∉ Λ or λ2 ∉ Λ. Hence, as W fulfils the requirements of Remark 3.3, we get that y* is a supported nondominated image. If λ* = λ̄1, then it also follows that λ* is an intersection point of Λ(y). Otherwise, the corresponding solutions of both y and y1 are optimal for (Π_λ̄0) and hence λ̄0 ∈ Λ(y) ∩ Λ(y1). Hence, λ̄0 is an intersection point of Λ(y).

Remark 3.5 By construction, there is a subinterval of I given by either [λ̄1, λ̄2] (if λ* = λ̄1) or [λ̄0, λ̄1] (otherwise) such that this subinterval belongs to L(y*) and consequently, we have L(y*) \ L(y) ≠ ∅.
The next proposition states how we can enlarge L(y).

Proposition 3.6 Let λ* be the intersection point returned by Perpendicular Search. Then conv(L(y) ∪ {λ*}) is an inner approximate component of Λ(y).

Proof By Proposition 3.4, λ* is an intersection point of Λ(y). L(y) can be described by a sequence of intersection points. In case λ* ≠ λ_a, we insert λ* between λ1 and λ2 in this sequence. By construction, we only have to ensure that L(y) remains convex. As λ* is on the boundary of Λ(y), and the same holds true for every extreme point of L(y), convexity immediately follows from the convexity of Λ(y). This shows the statement.
An example of Perpendicular Search can be seen in Fig. 3. The procedure Perpendicular Search can be seen as an improvement over the procedure used by Alves and Costa [1] to determine weight points outside of Λ(y). Indeed, it is interesting to note that the weight point obtained by the procedure proposed by Alves and Costa [1] is located on the line segment of the weight set that is considered in the procedure Perpendicular Search. The latter procedure has the advantage of generating, in one call, a point located on the boundary of Λ(y), which could require an undefined number of calls in the case of the procedure proposed by Alves and Costa [1].
At last, we take a look at some details of Perpendicular Search. Given that L(y) has at least one edge, we assumed that we have at least one interim edge. Clearly, if it is determined that Λ(y) = L(y), a further search is not necessary. However, it may occur that dim(L(y)) = 1, hence there is only one final edge E. Then, we install a dummy interim edge which is identical to E and perform Perpendicular Search, but we do not know in which direction perpendicular to E we have to search. As E is a final edge which has been marked final at a prior point in time, we know one adjacent extreme supported nondominated image ȳ, and hence this "side" of E has already been explored. So, we search in the direction opposite to the component of this adjacent image. We tackle the two cases of Perpendicular Search:

- If y1 = y, we can replace the dummy interim edge by the two new edges and resume the algorithm.
- Otherwise, we can state that Λ(y) = L(y): λ* is an intersection point that separates y, ȳ, and y*. In both directions L(y) cannot be enlarged and it consists solely of the edge E.
In particular, dim Λ(y) = 1 and y is a nonextreme supported nondominated image.
If we consider an image y such that L(y) = {λ}, we dismiss y from consideration until dim(L(y)) ≥ 1. If this does not occur, we omit y. Also, as we are only interested in the intersection point of the line search that separates y from another image, we do not have to perform a full Dichotomic Search. In fact, we only perform a Partial Dichotomic Search. Recall that in a full Dichotomic Search, if a new optimal image improving the weighted sum objective value has been found, two weight vectors, namely the normal vectors of the lines connecting each of the old images to the new one, are computed to define new weighted sum problems. In a Partial Dichotomic Search, we only consider the weight vector for which one of the defining points is y.
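To illustrate the effect of Perpendicular Search over a finite image set, the following numerical stand-in bisects along the segment from the midpoint of the interim edge towards λ_b and returns the last weight at which y stays optimal, together with the separating image. Plain bisection here replaces the exact Partial Dichotomic Search of the article; all names and tolerances are our own.

```python
import numpy as np

def perpendicular_search(y, images, e1, e2, lam_b, iters=80, tol=1e-9):
    """Bisection stand-in for Perpendicular Search: walk from the
    midpoint of the interim edge [e1, e2] towards lam_b and return an
    intersection point lam_star of Lambda(y) plus the image y_star
    that becomes optimal beyond it."""
    e1, e2, lam_b = (np.asarray(v, dtype=float) for v in (e1, e2, lam_b))
    mid = (e1 + e2) / 2.0
    y = np.asarray(y, dtype=float)

    def y_optimal(lam):
        best = min(float(lam @ np.asarray(z, dtype=float)) for z in images)
        return float(lam @ y) <= best + tol

    lo, hi = 0.0, 1.0                    # lam(t) = (1-t)*mid + t*lam_b
    for _ in range(iters):
        t = (lo + hi) / 2.0
        if y_optimal((1 - t) * mid + t * lam_b):
            lo = t
        else:
            hi = t
    lam_star = (1 - lo) * mid + lo * lam_b
    # best competitor at lam_star: an image separated from y by lam_star
    others = [z for z in images if tuple(z) != tuple(y)]
    y_star = min(others, key=lambda z: float(lam_star @ np.asarray(z, dtype=float)))
    return lam_star, tuple(y_star)
```

The sketch assumes y is optimal at the midpoint of the edge and not at λ_b, mirroring the situation in which the procedure is invoked.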

Check edge
Given an interim edge, one needs to check whether this edge is indeed a final edge or at least a subset of a final edge. First, we clarify which subsets of edges we aim for.

Definition 3.7 A subface of a face F is a subset G ⊆ F such that G is a convex polytope with dim(G) = dim(F). In particular, for edges, we have subedges.

This definition simplifies the notation of the following Separation Theorem (Theorem 3.8).

Proof 1. By definition of the intersection points, λ1 and λ2 are located in Λ(y) ∩ Λ(y′). As Λ(y) ∩ Λ(y′) is a convex polytope, the statement follows immediately.
2. This statement is a direct consequence of Proposition 2.3.
3. As λ1, λ2 and λ3 are all located on the boundary of Λ(y), and as they are located on the same line, conv{λ1, λ2, λ3} = [λ1, λ3] is a subedge of an edge E of Λ(y). Next, as λ2 is located in the relative interior of [λ1, λ3], it is also located in the relative interior of E. As the optimal solutions are the same for all weight vectors corresponding to weight points in the interior of E, E is a subedge of the common edge of Λ(y) and Λ(y2). Finally, if λ2 is found by Perpendicular Search, then, as Λ(y2) \ Λ(y) ≠ ∅, Λ(y2) is composed of more than the edge [λ1, λ3] and is therefore a polytope of dimension 2. We conclude that y2 is an extreme supported nondominated image by application of Proposition 2.1(3). The rest is again a direct consequence of Proposition 2.3.
In Fig. 4, some cases of Theorem 3.8 are visualized. In the algorithm, we use Theorem 3.8 when a new interim edge is constructed or when the middle point of an edge is identified as an intersection point by Perpendicular Search. We call this test based on Theorem 3.8 Check Edge.

Extend edge
Assume that the Separation Theorem returns that a given interim edge E is a subedge of a final edge. In order to extend this subedge to the final edge, we again make use of a line search similar to Perpendicular Search. As for Perpendicular Search, a Partial Dichotomic Search is sufficient.
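Such a Partial Dichotomic Search along a weight line can be sketched as follows. This is an illustrative Python sketch under our own naming (solve stands in for the weighted sum oracle, and the segment endpoints play the role of the endpoints of the line I), not the paper's implementation: it fixes one image y and only follows the weight vectors involving y.

```python
def partial_dichotomic(solve, y, lam_a, lam_b, eps=1e-9):
    """Search the weight segment [lam_a, lam_b]; y is assumed optimal at
    lam_a. Returns (lam_star, y_star): the extreme point of the segment on
    which y stays optimal, and the image taking over beyond it."""
    def val(p, lam):
        return lam[0] * p[0] + lam[1] * p[1]

    y_b = solve(lam_b)
    if val(y_b, lam_b) >= val(y, lam_b) - eps:
        return lam_b, y                 # y stays optimal on the whole segment
    # intersection of the (linear) value functions of y and y_b on the segment
    fa = val(y, lam_a) - val(y_b, lam_a)
    fb = val(y, lam_b) - val(y_b, lam_b)
    t = fa / (fa - fb)
    lam = tuple((1 - t) * a + t * b for a, b in zip(lam_a, lam_b))
    y_new = solve(lam)
    if val(y_new, lam) >= val(y, lam) - eps:
        return lam, y_b                 # no image beats y before lam
    return partial_dichotomic(solve, y, lam_a, lam, eps)
```

Each recursive call discovers a strictly better image, so the search terminates after finitely many oracle calls, as in the standard dichotomic argument.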

Proposition 3.9 The weight point λ* returned by Extend Edge is an extreme intersection point of Λ(y) separating y, y′ and y*.
Proof As the PWS matrix W corresponding to the line search on I clearly has no zero-valued column, λ* is an intersection point and y* is a supported nondominated image by Remark 3.3. If (λ^1) · y^1 = (λ^1) · y, the solution belonging to y is an optimal solution for the weighted sum problems with weights λ^a and λ^1; otherwise, this holds only for λ^1. In both cases, as the line I is parallel to an edge of Λ(y), it follows that λ* is an intersection point of L(y). By Proposition 2.3, the same holds for y′, hence λ* separates y, y′ and y*. Further, λ* is extreme, as for every λ ∈ (λ*, λ^b] the solution belonging to y is not an optimal solution for the weighted sum problem. As λ* is indeed an intersection point of L(y), we describe next how to enlarge L(y).
Proof Similarly to the proof of Proposition 3.6, λ* must be added to the sequence of intersection points. The difference is that the collinear alignment of λ^2, λ^1, λ* makes λ^1 redundant in the sequence.
In Fig. 5, an example of Extend Edge is visualized. Finally, if neither endpoint of E is known to be an extreme intersection point, the whole line passing through E with endpoints on the boundary of Λ can be searched via a full Dichotomic Search instead of two lines with a Partial Dichotomic Search.

Initialization
At last, we deal with the question of how we initialize at least one inner approximate component. As pointed out for Perpendicular Search, the dimension of L(y) has to be at least one, i.e., we need at least one edge. A natural way to generate such inner approximate components is to search on the boundary of the weight set; however, the corresponding PWS matrix then has a zero-valued column and therefore yields no intersection points. We can overcome this problem in the following way: whenever a weighted sum problem min w · W C x has to be solved during Dichotomic Search for a weight vector w ∈ R², we solve the lexicographic problem (Π^lex_w) for this particular w instead, with the second row C_{2·} of C as secondary objective.

Remark 3.11
If W is a PWS matrix with one zero-valued column, then the Dichotomic Search that utilizes (Π^lex_w) instead of weighted sum problems on (Π_W) returns supported nondominated images z(x^1), . . . , z(x^d) of (MOP) and weight vectors w^1, . . . , w^{d−1}. Further, the corresponding weight points of W · w^i for i = 1, . . . , d − 1 are intersection points.
Proof As the solutions we get by (Π^lex_w) are efficient, the proof is analogous to the proof of Remark 3.3, which has been done by Halffmann, Dietz, and Ruzika [14]. Regarding the solvability of (Π^lex_w), by Glaßer et al. [12] only minor requirements are needed to ensure that if the weighted sum problem is polynomially solvable, so is the lexicographic problem. For many cases, especially with linear objective functions, the asymptotic running times of the weighted sum problem and the lexicographic problem coincide; however, there may be problems where the algorithm for the lexicographic problem is significantly slower but still runs in polynomial time. Clearly, for well-known combinatorial problems like shortest path, we can either replace the comparison operator in the algorithm by its lexicographic version or, if the objective function values are non-negative and integer, solve two weighted sum problems as proposed in Ehrgott [8].
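For a finite image set, the lexicographic tie-breaking behind (Π^lex_w) can be emulated directly; the following hedged Python sketch (names and the reduction to a finite set are ours, not the paper's) picks, among all minimizers of the weighted sum, one that additionally minimizes the second objective.

```python
def lex_weighted_sum(points, w, tol=1e-9):
    """Among the minimizers of the weighted sum w . p over a finite image
    set, return one that additionally minimizes the second objective
    (lexicographic tie-break)."""
    val = lambda p: w[0] * p[0] + w[1] * p[1]
    best = min(val(p) for p in points)
    optima = [p for p in points if abs(val(p) - best) <= tol]
    return min(optima, key=lambda p: p[1])   # secondary objective decides ties
```

This mirrors the two-weighted-sum trick mentioned above: one call fixes the optimal weighted sum value, the tie-break then selects among the optima.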
If the requirements of Glaßer et al. are not met or the algorithm for the lexicographic problem is significantly slower than the one for the weighted sum problem, we simply apply the Dichotomic Search on the boundary of the weight set. Then we either get an extreme supported nondominated image or an image that is weakly dominated by such an image. This is an immediate consequence of the following result:

Proposition 3.12 ([23]) Let (P) be a problem with p objectives and let (P_I) be a subproblem with |I| objectives, given by the objectives indexed by I ⊂ {1, . . . , p}. Let y be an extreme supported image for (P_I). Then, if y is nondominated for (P), it is also an extreme supported image for (P).
Clearly, even if the returned image is only weakly nondominated, we get the correct intersection points. The first attempt to extend the weight set component of this image will show that it is reduced to only one edge and will allow us to discover the extreme supported nondominated image dominating it.
In Fig. 6, a possible initialization is depicted.

Presentation of the algorithm and correctness
As described in the preceding section, the algorithm maintains a set S of supported nondominated images and, for each known supported nondominated image y ∈ S, a polytope L(y) ⊆ Λ(y). This is done by storing a set P of intersection points and a set E of final edges (i.e. edges of polytopes L(y) that are identified as edges of Λ(y)). Algorithm 3 starts with the initialization described in Sect. 3.4, followed by a main loop that iteratively enlarges the sets L(y) to Λ(y) for all known supported images y and simultaneously discovers new supported images. This loop simply relies on the successive application of the Perpendicular Search procedure applied to an interim edge, the check whether this edge is a subedge of a final edge (implicit in Algorithm 3), and, if necessary, the application of the Extend Edge procedure, as described in Sect. 3. Updating P, S, and L(y) is done by adding λ* to P and y* to S and following the rules of Propositions 3.6 and 3.10. In the pseudocode of the algorithm, we use ↓ to indicate the input of the procedures and ↑ for the output.
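The bookkeeping just described can be captured in a small container; the following Python sketch uses our own field names (the paper's implementation is in Julia and C) and abstracts away the update rules of Propositions 3.6 and 3.10.

```python
from dataclasses import dataclass, field


@dataclass
class WSDState:
    """Bookkeeping of the algorithm (field names are ours):
    S: known supported nondominated images,
    P: intersection points found so far,
    E: final edges (edges of some L(y) identified as edges of the component),
    L: inner approximations, mapping y to the extreme points of L(y)."""
    S: set = field(default_factory=set)
    P: set = field(default_factory=set)
    E: set = field(default_factory=set)
    L: dict = field(default_factory=dict)

    def update(self, lam_star, y_star, y):
        # record the new intersection point and image, then enlarge L(y);
        # the redundancy rules of Props. 3.6 / 3.10 are omitted here
        self.P.add(lam_star)
        self.S.add(y_star)
        self.L.setdefault(y, []).append(lam_star)
```

A call Update(P, S, L(y)) in the pseudocode below then corresponds to one `state.update(lam_star, y_star, y)` in this sketch.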

Algorithm 3 WSD Approx Algorithm
Require: An instance of a triobjective mixed-integer linear program
…
    Let E be the only edge of L(y)
5:  Perpendicular Search(E ↓, λ* ↑, y* ↑); Update(P, S, L(y));
6:  if dim L(y) = 1 then
7:      y is a non-extreme image and L(y) will no longer be considered.
8:  end if
9:  end if
10: if dim L(y) = 2 then
11:     while at least one edge of L(y) is an interim edge do
12:         if there is an edge E of L(y) with E ∉ E that is a subedge of Λ(y) then
13:             Extend Edge(E ↓, λ* ↑, y* ↑); Update(P, S, L(y));
14:         else
15:             Choose any edge E that is not in E
16:             Perpendicular Search(E ↓, λ* ↑, y* ↑); Update(P, S, L(y));
17:         end if
18:     end while
19: end if
20: end for

Lemma 4.1 The identification of a final edge using the loop of lines 11–18 of Algorithm 3 requires at most three searches in total (Perpendicular Search and Extend Edge).
Proof Theorem 3.8 shows in which cases we find a subedge of an edge. Suppose we examine y^1 and we are in a particular case of the third statement of Theorem 3.8, which is clearly the worst case: we have found three intersection points, in that order, with the separations shown in Fig. 7. By these points we can conclude that we have found a subedge and that λ^1 is an extreme point; regarding the chronological order of the points, each point is necessary for this conclusion. Hence, we need three Perpendicular Searches and additionally one call to Extend Edge, as the line segment [λ^1, λ^3] is only a subedge, see Fig. 7. Even though we have performed four searches for the edge [λ^1, λ^4], we can rearrange the points such that λ^4 is counted for the edge that has λ^4 as common endpoint with the edge [λ^1, λ^4]. Further, using the same construction as above, we need additionally two Perpendicular Searches and one Extend Edge, so again we arrive at four searches. We can continue like this; however, at some point we reach a known final edge D, as each L(y) is initialized with at least one known edge. As in Fig. 7, the edge D has the extreme point λ^5, which separates y^1 and some image y^4. Hence, we need one additional Perpendicular Search to find λ^6, and by Theorem 3.8(3) we can conclude that [λ^4, λ^5] is a final edge. So, by this reassignment of intersection points, we need one search for the last edge and at most three for any other edge.

Definition 4.2
Let Y_ESN be the set of extreme supported images of an MOP with p objectives. We construct the adjacency graph G_WS of the weight set as follows: An example of the adjacency graph can be seen in Fig. 1. The corresponding image space with the lines between the extreme supported nondominated images resembles the adjacency graph.

Observation 4.3
The adjacency graph G_WS of the weight set is planar and connected for p = 3. This is not the case for p ≥ 4.

Proof We show that, for each y ∈ S, the loop of lines 11–18 terminates in finite time and that L(y) = Λ(y) upon termination. The number of edges and extreme points of Λ(y) (except edges and extreme points on the boundary of Λ that are found during initialization) is limited by the number of extreme supported points adjacent to y, which is necessarily finite. Lemma 4.1 implies that the identification of an edge of Λ(y) requires at most three searches in total (Perpendicular Search and Extend Edge). Further, for every edge, the algorithm finds at most one non-extreme image, whose non-extreme character is established with one Perpendicular Search. We conclude the finiteness of the algorithm.
Suppose now that at termination of the while loop of lines 11–18 we have L(y) ⊂ Λ(y) for some y ∈ S. This implies that at least one extreme point λ of Λ(y) is not an extreme point of L(y). Consequently, there are two (sub)edges of Λ(y) incident to λ that are not edges of L(y). Fig. 8 illustrates possible situations for a missed extreme point λ.
In these figures, only one extreme point of Λ(y) does not belong to L(y); having several consecutive missed extreme points would not modify the idea of the proof. In Fig. 8 on the left, λ is missed because two subedges of Λ(y) have not been extended. This cannot happen, as at termination of the while loop of lines 11–18 all edges of L(y) are final, i.e. are edges of Λ(y) according to Theorem 3.8. On the right, two full edges of Λ(y) are missed, and an edge E of L(y) cuts through the interior of Λ(y). However, the final character of E contradicts the conditions to detect subedges of Λ(y) described by Theorem 3.8. Indeed, the two extreme intersection points of E clearly separate two different pairs of extreme supported images, and there cannot be any intersection point in the relative interior of E, as intersection points are located on the boundary of Λ(y). We conclude that at termination of the while loop of lines 11–18, Λ(y) and L(y) have the same set of extreme points and are therefore equal.
We show next that Y_ESN ⊆ S. For all y ∈ S, Λ(y) is computed and we know all adjacent extreme supported images. We consider the adjacency graph G_WS from Definition 4.2. As G_WS is connected, we necessarily have Y_ESN ⊆ S.

Claim 2: As we search on the boundary of the weight set, we find at most all v extreme supported images on each edge of the boundary of Λ. Hence, we need at most O(v) calls to the weighted sum algorithm to find all edges of weight set components on the boundary. Remark that we assume by Sect. 3.4 that the found images are nondominated and the found edges are indeed final edges of the components.
Claim 3: All initial constructions of the sets L(y) can be done in total in O(v log v) time. Each initialization of L(y) can be done by a planar convex hull algorithm, which has a running time of O(k log k) for k points [6]. For this initialization, some final edges will already be known. In particular, we use the endpoints of these edges to construct the convex hull that defines L(y). The number of initially known edges in this convex hull is not known. On the other hand, each edge of the weight set decomposition is used exactly once for the convex hull algorithm: a final edge Λ(y^1) ∩ Λ(y^2) is found when y^1 has been examined and is only needed afterwards for the initialization of L(y^2). Analogously, we need the edges on the borderline only once. By planarity of the adjacency graph, Observation 4.3, the number of edges of the graph, which is the number of edges of the weight set decomposition not on the borderline, is at most 3 · v − 6. On the borderline we have at most 3 · v edges. Hence, we need O(v) edges for all initializations. As the function f(k) = k · log k is superadditive, we can bound the total time for all initializations by O(v · log v).
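The planar convex hull step can be realized, for example, by Andrew's monotone chain algorithm, which matches the O(k log k) bound; the following self-contained sketch is our own illustration, not the implementation used in the study.

```python
def convex_hull(points):
    """Andrew's monotone chain, O(k log k): returns the hull vertices in
    counter-clockwise order. This is the kind of routine used to build
    L(y) from known intersection points and edge endpoints."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # drop the last point of each chain (it repeats the other chain's start)
    return lower[:-1] + upper[:-1]
```

Interior points such as the centre of a square are discarded, and only the extreme points of L(y) remain.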

Claim 4: Checking if an interim edge is a final edge or a subedge of a final edge can be done in constant time.
This is only a matter of storing the intersection points. Indeed, if the interim edge is linked to the intersection points on this edge, their number cannot exceed three by Theorem 3.8. Checking whether the separated images are the same for at most three intersection points is obviously doable in constant time.
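A simplified sketch of this bookkeeping idea (not the full condition of Theorem 3.8, and with data layout of our own choosing): each interim edge stores its at most three intersection points together with the pair of images each one separates, and the check reduces to comparing these pairs.

```python
def check_edge(edge_points):
    """edge_points: the at most three intersection points stored with an
    interim edge, each given as (weight_point, pair_of_separated_images).
    Constant time, since at most three points are stored (Theorem 3.8).
    Returns True if all points separate the same pair of images, i.e. the
    interim edge is a (sub)edge of a common final edge."""
    pairs = {pair for _, pair in edge_points}
    return len(pairs) == 1
```

With frozensets as image pairs, the comparison is a set of at most three hashable values, so the cost is independent of the instance size.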
Using these results, we can derive the total running time as follows: the initialization can be done in O(v · T_WS) (Claim 2) plus the initialization of the weight set components in O(v log v) (Claim 3). For each edge we need at most 3 · v calls to the weighted sum algorithm, and checking an interim edge is done in constant time (Claim 4). By Observation 4.3, the adjacency graph is planar and therefore has at most 3 · v − 6 edges, and each edge corresponds to an edge of the weight set decomposition (without the edges on the boundary of the weight set). Hence, we need O(v² · T_WS) time to find all edges. As this term dominates all other terms, it is the total running time of our algorithm.

Corollary 4.6 If the running time T_WS of the weighted sum algorithm, which is usually the algorithm for the single objective problem, is polynomial in the encoding length of the input (and output), the problem is in TotalP and our algorithm has a running time polynomial in the encoding length of the input and output.
However, due to the usage of the Dichotomic Search, this version of the algorithm does not run in incremental polynomial time: Bökler et al. [4] have shown that the Dichotomic Search algorithm for biobjective problems may have a delay exponential in the input and preceding output even though the weighted sum algorithm has polynomial running time. From the mentioned article, we can deduce that a lexicographic variant of the Dichotomic Search runs in incremental polynomial time if we have a polynomial time algorithm to solve the lexicographic problem Lex MOP(π). This lexicographic variant is similar to the problem (Π^lex_w) presented in the initialization phase of our algorithm. We can apply it in the procedures Perpendicular Search and Extend Edge as well to ensure an incremental polynomial running time of our algorithm.
At last, we can state the overall result, where v_i is the number of extreme supported nondominated images found so far and T_WS−lex is the running time of an algorithm for problem (Π^lex_w).

Illustrative example
To illustrate our method, we solve the instance of a triobjective assignment problem [22] defined by the following cost matrices: The set S is therefore initialized with these supported images, and subsequently we get a set of extreme intersection points P = {λ^1, . . . , λ^5}:
- λ^1 separates y^3 and y^4,
- λ^2 separates y^1 and y^4,
- λ^3 separates y^1 and y^2,
- λ^4 separates y^2 and y^5,
- λ^5 separates y^3 and y^5.
From this we obtain the final edges E. We show the exploration of the weight set components in Fig. 9. We focus on Λ(y 1 ), as the other components are very easy to compute. The full weight set decomposition is depicted in Fig. 1.

Comparison with the literature
Przybylski, Gandibleux, and Ehrgott [22], similar to our approach, use the Dichotomic Search, but they start with supersets of the weight set components instead of subsets. While our algorithm enlarges an "inner approximation" of the weight set components by applying the Dichotomic Search and separation results, they apply some of these methods in a different fashion to shrink an "outer approximation". Due to these different approaches, a comparison would automatically become a discussion about the upsides and downsides of supersets and subsets, respectively. We compare these approaches more thoroughly in the computational study, see Sect. 4.4. Here, we focus on the articles by Bökler and Mutzel [5] and Alves and Costa [1], as the former also present a running time and the latter use subsets of weight set components as well.
In the case of p = 3, Bökler and Mutzel [5] achieve a running time of O(v · (poly(n, m, L) + v · log v)). If the number of extreme supported nondominated images is exponential in the size of the input, which is the case for some combinatorial problems with polynomial time weighted sum algorithms, we outperform their algorithm. However, as mentioned in Sect. 3.4, we easily meet the requirements of Glaßer et al. [12] for combinatorial problems. In this case, it is hard to compare the running times: it depends on the actual size of poly(n, m, L) and the number of extreme supported nondominated images in comparison to the encoding length of the input. Hence, we have to compare log v with T_WS−lex (or T_WS), which is not possible in general. Surely there are instances where their algorithm is faster, equally fast, or slower than ours. But instead of only returning extreme supported images and related solutions, we additionally provide the user of our algorithm with an explicit description of the weight set decomposition. Further, we can return each component individually as soon as it has been fully discovered, or give intermediate results like an "approximation" of the component, which creates additional value. We discuss this further in Sect. 5.
As mentioned before, Alves and Costa [1] use subsets of the weight set components. They enlarge these subsets by solving a weighted sum problem slightly outside of the subset and exploiting information from the Branch and Bound tree. There are some drawbacks to these methods, whereas our usage of the Dichotomic Search gives a guaranteed enlargement of the subsets. First, the new part of the subset is generated from the Branch and Bound tree and can be very small if the tree structure or the solution structure is unfortunate. Hence, arbitrarily many searches outside of the subsets have to be conducted to complete the weight set component, which results in as many convex hull operations. Second, the search slightly outside of the subset is executed by using a weight point that is perpendicular to an edge of the subset at a certain distance, e.g. a distance of ε = 0.01 as used in the example by Alves and Costa. As can be seen in their own example in [1], weight set components (for mixed-integer problems) can get very small even for small instances. Hence, it can occur that we "jump" over a weight set component by setting ε = 0.01; that is, we get a new image whose component is not adjacent to the one of the current image. Clearly, the choice of ε is for practical purposes only and is set by experience. On the contrary, our algorithm overcomes these difficulties: we get the maximum expansion in a certain direction, we automatically ensure a convex polyhedral structure, and we guarantee that we find the adjacent image. Nevertheless, the algorithm by Alves and Costa is adequate for practical purposes and may be faster in the average case. Moreover, as it is built for interactive use, it has a different intention and hence advantages in other settings.

Computational study
We compare the performance of our algorithm with the methods of Przybylski, Gandibleux, and Ehrgott [22] (PGE2010), Bökler and Mutzel [5] (BM2015), and Özpeynirci and Köksalan [21] (OK2015). However, only the code of PGE2010 was available and could be executed on the same computer. For the comparison with the other approaches, we used a computer with the same specifications as in [5]. Therefore, the differences in running times reported below are subject to different programming skills, different programming languages and partly different computer architectures. As a consequence, our study can only roughly assess the competitiveness of the four algorithms.
We use instances of the Multiobjective Assignment Problem (MOAP) and the Multiobjective Knapsack Problem (MOKP) with three objectives for our tests. These instances are the same as the ones in the computational study in [22], and the MOAP instances used in [5,21] are generated in the same fashion. For MOAP, we use 10 series of 10 instances of 3AP with sizes varying from 5 × 5 to 50 × 50 in steps of 5 and randomly generated objective function coefficients in [0, 20]. For MOKP, we take into account 10 instances per series with sizes varying from 50 to 350 in steps of 50. The objective function and constraint coefficients are uniformly generated in [1, 100]. Our algorithm is implemented in Julia 1.2 and uses an implementation of the Hungarian Method [18], coded in C and callable from Julia, to solve single objective assignment problems. For the knapsack problems, a dynamic programming based approach is coded in C. The implementation of the PGE2010 algorithm has been done in C and uses the same single objective solvers. All C code is compiled using GCC 9.2 with optimizer option -O3. In the study by Bökler and Mutzel [5], the BM2015 and OK2015 algorithms have been implemented in C++ using double precision arithmetic and compiled using LLVM 3.3. For both studies, the experiments are performed on an Intel Core i7-3770, 3.4 GHz with 16 GB of memory running Ubuntu Linux.
In Table 1, we present the results on the MOAP instances for WSD_APPROX and PGE2010 and report the mean running time, number of extreme supported images, and number of solver calls per image found. Both implementations have been able to solve all instances in this study. The minor discrepancy in the number of extreme supported images is due to numerical precision: on the one hand, non-extreme supported images may have been identified as extreme, on the other hand, some extreme supported images may have such minuscule weight set components that they are identified as non-extreme. The PGE2010 algorithm is faster than our algorithm. However, the running times are of the same order of magnitude, and the difference may be explained by the issues listed above. Note that the third column of the table shows that our algorithm needs 2.5 times more calls to the single objective solver. This hints at the main difference between the "inner approximation" and the "outer approximation" approach. In our algorithm, a new neighbour is found via Perpendicular Search, which needs at least three calls to the single objective solver. In fact, our algorithm relies heavily on solving single objective problems to obtain information, while less geometry is involved. Only the end point of the line search and the construction of the initial subsets require some geometric computations. For the "outer approximation" approach, this observation is reversed: whenever a new image is found, the computation of the superset of its weight set component needs significantly more computational effort, as its facets with its (possible) adjacent images have to be computed. However, finding a new image by checking a facet of the superset often needs only one solver call. As the weight set is only two dimensional, the geometry of the weight set and its components is rather simple.
Hence, it seems that the computation of the supersets is not that time consuming and an "outer approximation" approach may be slightly favourable for these instance sizes and three objectives.
The results for the knapsack instances in Table 2 underline our findings for the assignment instances. The running times of the two algorithms are competitive. One peculiar difference in the outcome of the computational study of these algorithms for knapsack problems is that one of the largest instances could not be solved by PGE2010. This is due to a problem in numerical precision that induces a segmentation fault. However, issues of numerical precision are inherent in the computation of the weight set decomposition: weight set components get very small very quickly with increasing problem size. We also face these problems, as the inner approximate components are even smaller than the actual components. It seems that our algorithm is more stable in this respect.
In Table 3, we compare our algorithm with the results for MOAP reported in [5] and present the median and the mean absolute deviation of the running times. Clearly, we are competitive with the algorithm in [5], as we have slightly faster running times and less deviation. Also, our study shows that we outperform the algorithm by Özpeynirci and Köksalan [21], as we are almost a factor of 1300 faster on the largest instances. The instances are generated similarly but are not identical, which explains the discrepancy in the number of extreme supported images. Overall, we can conclude that our algorithm is competitive with the algorithms by Przybylski, Gandibleux, and Ehrgott [22] and Bökler and Mutzel [5] and is substantially faster than the one by Özpeynirci and Köksalan [21].

Conclusion
In this article, we have proposed a new algorithm to compute the weight set decomposition, and consequently all extreme supported nondominated images, for a multiobjective (mixed-)integer linear problem with three objectives. This algorithm can be described as a graphical approach and makes use of the biobjective Dichotomic Search to explore the weight set. Further, we have established necessary results that determine whether a line segment is part of the common face of two weight set components and hence separates these components. In comparison to the literature, our algorithm can return the whole weight set decomposition but also one particular weight set component as soon as it has been examined. Further, it is able to return an intermediate subset of the component, whose extreme points are always on the boundary of the component; hence we have a guaranteed maximal improvement of the subset in every step of the algorithm. As the subset is a convex polyhedron at any time, intermediate results with valuable properties for the user can be returned. In the case of a polynomial time algorithm for the weighted sum problem, and hence also for the lexicographic problem, our algorithm is output sensitive and even runs in incremental polynomial time. The running time is on the same level as that of the algorithm by Bökler and Mutzel [5]. Also, the results of the computational study show that the practical running times of our algorithm are better than or on the same level as those of available algorithms. In conclusion, our algorithm is an easily comprehensible, running time competitive algorithm with original features, especially for practical purposes. There are two immediate ideas for future research: one is the generalization of our algorithm to more than three objectives, as some parts like the Perpendicular Search carry over, while other parts could be solved in a recursive fashion. The other concerns the "approximation" of weight set components.
We think that our general approach is able to provide approximate weight set components if applied properly. However, there are several issues that prevent an immediate adaptation of the algorithm into an approximation method: as this has not been done before, a measure of the quality of an approximation has to be introduced. Also, a proper selection of the edges to which a Perpendicular Search is applied goes hand in hand with the approximation ratio. While for the exact method we do this selection in a depth-first manner, a breadth-first approach or other strategies may be better in terms of approximation.
In general, a deeper analysis of the weight set and its components for both general and special problems is desired as well as further practical applications of the weight set decomposition.