The Power of the Weighted Sum Scalarization for Approximating Multiobjective Optimization Problems

We determine the power of the weighted sum scalarization with respect to the computation of approximations for general multiobjective minimization and maximization problems. Additionally, we introduce a new multi-factor notion of approximation that is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. For minimization problems, we provide an efficient algorithm that computes an approximation of a multiobjective problem by using an exact or approximate algorithm for its weighted sum scalarization. If an exact algorithm for the weighted sum scalarization is used, this algorithm comes arbitrarily close to the best approximation quality that is obtainable by supported solutions, both with respect to the common notion of approximation and with respect to the new multi-factor notion. Moreover, the algorithm yields the currently best approximation results for several well-known multiobjective minimization problems. For maximization problems, however, we show that a polynomial approximation guarantee cannot, in general, be obtained in more than one of the objective functions simultaneously by supported solutions.

1. Introduction. Almost any real-world optimization problem asks for optimizing more than one objective function (e.g., the minimization of cost and time in transportation systems or the maximization of profit and safety in investments). Clearly, these objectives are conflicting, often incommensurable, and, yet, they have to be taken into account simultaneously. The discipline dealing with such problems is called multiobjective optimization. Typically, multiobjective optimization problems are solved according to the Pareto principle of optimality: a solution is called efficient (or Pareto optimal) if no other feasible solution exists that is not worse in any objective function and better in at least one objective. The images of the efficient solutions in the objective space are called nondominated points. In contrast to single objective optimization, where one typically asks for one optimal solution, the main goal of multiobjective optimization is to compute the set of all nondominated points and, for each of them, one corresponding efficient solution. Each of these solutions corresponds to a different compromise among the set of objectives and may potentially be relevant for a decision maker.
Several results in the literature, however, show that multiobjective optimization problems are hard to solve exactly [12,13] and, in addition, the cardinalities of the set of nondominated points (the nondominated set) and the set of efficient solutions (the efficient set) may be exponentially large for discrete problems (and are typically infinite for continuous problems). This impairs the applicability of exact solution methods to real-life problems and provides a strong motivation for studying approximations of multiobjective optimization problems.
Both exact and approximate solution methods for multiobjective optimization problems often resort to using single objective auxiliary problems, which are called scalarizations of the original multiobjective problem. This refers to the transformation of a multiobjective optimization problem into a single objective auxiliary problem based on a procedure that might use additional parameters, auxiliary points, or variables. The resulting scalarized optimization problems are then solved using methods from single objective optimization and the obtained solutions are interpreted in the context of Pareto optimality.
The simplest and most widely used scalarization technique is the weighted sum scalarization (see, e.g., [13]). Here, the scalarized auxiliary problem is constructed by assigning a weight to each of the objective functions and summing up the resulting weighted objective functions in order to obtain the objective function of the scalarized problem. If the weights are chosen to be positive, then every optimal solution of the resulting weighted sum problem is efficient. Moreover, the weighted sum scalarization does not change the feasible set and, in many cases, boils down to the single objective version of the given multiobjective problem, which represents an important advantage of this scalarization especially for combinatorial problems. However, only some efficient solutions (called supported solutions) can be obtained by means of the weighted sum scalarization, while many other efficient solutions (called unsupported solutions) cannot. Consequently, a natural question is to determine which approximations of the whole efficient set can be obtained by using this very important scalarization technique.
1.1. Previous work. Besides many specialized approximation algorithms for particular multiobjective optimization problems, there exist several general approximation methods that can be applied to broad classes of multiobjective problems. An extensive survey of these general approximation methods is provided in [20].
Most of these general approximation methods for multiobjective problems are based on the seminal work of Papadimitriou and Yannakakis [24], who present a method for generating a (1 + ε, . . . , 1 + ε)-approximation for general multiobjective minimization and maximization problems with a constant number of positive-valued, polynomially computable objective functions. They show that a (1 + ε, . . . , 1 + ε)-approximation with size polynomial in the encoding length of the input and 1/ε always exists. Moreover, their results show that the construction of such an approximation is possible in (fully) polynomial time, i.e., the problem admits a multiobjective (fully) polynomial-time approximation scheme or MPTAS (MFPTAS), if and only if a certain auxiliary problem called the gap problem can be solved in (fully) polynomial time.
More recent articles building upon the results of [24] present methods that additionally yield bounds on the size of the computed (1 + ε, . . . , 1 + ε)-approximation relative to the size of the smallest (1 + ε, . . . , 1 + ε)-approximation possible [5,11,27]. Moreover, it has recently been shown in [4] that an even better (1, 1 + ε, . . . , 1 + ε)-approximation (i.e., an approximation that is exact in one objective function and (1 + ε)-approximate in all other objective functions) always exists, and that such an approximation can be computed in (fully) polynomial time if and only if the so-called dual restrict problem (introduced in [11]) can be solved in (fully) polynomial time.
Other works study how the weighted sum scalarization can be used in order to compute a set of solutions such that the convex hull of their images in the objective space yields an approximation guarantee of (1 + ε, . . . , 1 + ε) [8, 9, 10]. Using techniques similar to ours, Diakonikolas and Yannakakis [10] show that such a so-called ε-convex Pareto set can be computed in (fully) polynomial time if and only if the weighted sum scalarization admits a (fully) polynomial-time approximation scheme. Additionally, they consider questions regarding the cardinality of ε-convex Pareto sets.
Besides the general approximation methods mentioned above that work for both minimization and maximization problems, there exist several general approximation methods that are restricted either to minimization problems or to maximization problems.
For minimization problems, there are two general approximation methods that are both based on using (approximations of) the weighted sum scalarization. The previously best general approximation method for multiobjective minimization problems with an arbitrary constant number of objectives that uses the weighted sum scalarization can be obtained by combining two results of Glaßer et al. [16,17]. They introduce another auxiliary problem called the approximate domination problem, which is similar to the gap problem. Glaßer et al. show that, if this problem is solvable in polynomial time for some approximation factor α ≥ 1, then an approximating set providing an approximation factor of α ⋅ (1 + ε) in every objective function can be computed in fully polynomial time for every ε > 0. Moreover, they show that the approximate domination problem with α ∶= σ ⋅ p can be solved by using a σ-approximation algorithm for the weighted sum scalarization of the p-objective problem. Together, this implies that a ((1 + ε) ⋅ σ ⋅ p, . . . , (1 + ε) ⋅ σ ⋅ p)-approximation can be computed in fully polynomial time for p-objective minimization problems provided that the objective functions are positive-valued and polynomially computable and a σ-approximation algorithm for the weighted sum scalarization exists. As this result is not explicitly stated in [16,17], no bounds on the running time are provided.
Obtaining general approximation methods for multiobjective maximization problems using the weighted sum scalarization seems to be much harder than for minimization problems. Indeed, Glaßer et al. [16] show that certain translations of approximability results from the weighted sum scalarization of an optimization problem to the multiobjective version that work for minimization problems are not possible in general for maximization problems.
An approximation method specifically designed for multiobjective maximization problems is presented by Bazgan et al. [3]. Their method is applicable to biobjective maximization problems that satisfy an additional structural assumption on the set of feasible solutions and the objective functions: For each two feasible solutions none of which approximates the other one by a factor of α in both objective functions, a third solution approximating both given solutions in both objective functions by a certain factor depending on α and a parameter c must be computable in polynomial time. The approximation factor obtained by the algorithm then depends on α and c.

1.2. Our contribution.
Our contribution is twofold: First, in order to better capture the approximation quality in the context of multiobjective optimization problems, we introduce a new notion of approximation for the multiobjective case. This new notion comprises the common notion of approximation, but is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. Second, we provide a precise analysis of the approximation quality obtainable for multiobjective optimization problems by means of an exact or approximate algorithm for the weighted sum scalarization, with respect to both the common and the new notion of approximation.
In order to motivate the new notion of approximation, consider the biobjective case, in which a (2 + ε, 2 + ε)-approximation can be obtained from the results of Glaßer et al. [16,17] using an exact algorithm for the weighted sum scalarization. As illustrated in Figure 1, this approximation guarantee is actually too pessimistic: Since each point y in the image of the approximating set is nondominated (since it is the image of an optimal solution of the weighted sum scalarization), no images of feasible solutions can be contained in the shaded region. Thus, every feasible solution is actually either (1, 2 + ε)- or (2 + ε, 1)-approximated. Consequently, the approximation quality obtained in this case can be more accurately described by using two vectors of approximation factors. In order to capture such situations and allow for a more precise analysis of the approximation quality obtained for multiobjective problems, our new multi-factor notion of approximation uses a set of vectors of approximation factors instead of only a single vector.

Fig. 1. Image space of a biobjective minimization problem. The point y in the image of the approximating set (2 + ε, 2 + ε)-approximates all points in the hatched region. If y is nondominated, no images of feasible solutions can be contained in the shaded region, so every image in the hatched region is actually either (1, 2 + ε)- or (2 + ε, 1)-approximated.
The second part of our contribution consists of a detailed analysis of the approximation quality obtainable by using the weighted sum scalarization, both for multiobjective minimization problems and for multiobjective maximization problems. For minimization problems, we provide an efficient algorithm that approximates a multiobjective problem using an exact or approximate algorithm for its weighted sum scalarization. We analyze the approximation quality obtained by the algorithm both with respect to the common notion of approximation that uses only a single vector of approximation factors as well as with respect to the new multi-factor notion. With respect to the common notion, our algorithm matches the best previously known approximation guarantee of (σ ⋅ p + ε, . . . , σ ⋅ p + ε) obtainable for p-objective minimization problems and any ε > 0 from a σ-approximation algorithm for the weighted sum scalarization. More importantly, we show that this result is best possible in the sense that it comes arbitrarily close to the best approximation guarantee obtainable by supported solutions in the case that an exact algorithm is used to solve the weighted sum problem (i.e., when σ = 1).
When analyzing the algorithm with respect to the new multi-factor notion of approximation, however, a much stronger approximation result is obtained. Here, we show that every feasible solution is approximated with some (possibly different) vector (α_1, . . . , α_p) of approximation factors such that ∑_{j : α_j > 1} α_j = σ ⋅ p + ε. In particular, the worst-case approximation factor of σ ⋅ p + ε can actually be tight in at most one objective for any feasible point. This shows that the multi-factor notion of approximation yields a much stronger approximation result by allowing a refined analysis of the obtained approximation guarantee. Moreover, for σ = 1, we show that the obtained multi-factor approximation result comes arbitrarily close to the best multi-factor approximation result obtainable by supported solutions. We also demonstrate that our algorithm applies to a large variety of multiobjective minimization problems and yields the currently best approximation results for several problems.
Multiobjective maximization problems, however, turn out to be much harder to approximate by using the weighted sum scalarization. Here, we show that a polynomial approximation guarantee can, in general, not be obtained in more than one of the objective functions simultaneously when using only supported solutions.
In summary, our results yield essentially tight bounds on the power of the weighted sum scalarization with respect to the approximation of multiobjective minimization and maximization problems, both in the common notion of approximation and in the new multi-factor notion.
The remainder of the paper is organized as follows: In Section 2, we formally introduce multiobjective optimization problems and provide the necessary definitions concerning their approximation. Section 3 contains our general approximation algorithm for minimization problems (Subsection 3.1) as well as a faster algorithm for the biobjective case (Subsection 3.2). Moreover, we show in Subsection 3.3 that the obtained approximation results are tight. Section 4 presents applications of our results to specific minimization problems. In Section 5, we present our impossibility results for maximization problems. Section 6 concludes the paper and lists directions for future work.

2. Preliminaries.
In the following, we consider a general multiobjective minimization or maximization problem Π of the following form (where either all objective functions are to be minimized or all objective functions are to be maximized):

min (resp. max) f(x) = (f_1(x), . . . , f_p(x)) subject to x ∈ X.

Here, as usual, we assume a constant number p ≥ 2 of objectives. The elements x ∈ X are called feasible solutions and the set X, which is assumed to be nonempty, is referred to as the feasible set.
We assume that the objective functions take only positive rational values and are polynomially computable. Moreover, for each j ∈ {1, . . . , p}, we assume that there exist strictly positive rational lower and upper bounds LB(j), UB(j) of polynomial encoding length such that LB(j) ≤ f j (x) ≤ UB(j) for all x ∈ X. We let LB ∶= min j=1,...,p LB(j) and UB ∶= max j=1,...,p UB(j).
Definition 2.1. For a minimization problem Π, we say that a point y = f(x) dominates another point y′ = f(x′) if y_j ≤ y′_j for all j ∈ {1, . . . , p} and y_i < y′_i for at least one i ∈ {1, . . . , p}. Similarly, for a maximization problem Π, we say that a point y = f(x) dominates another point y′ = f(x′) if y_j ≥ y′_j for all j ∈ {1, . . . , p} and y_i > y′_i for at least one i ∈ {1, . . . , p}. If the point y = f(x) is not dominated by any other point y′, we call y nondominated and the feasible solution x ∈ X efficient. The set Y_N of nondominated points is called the nondominated set and the set X_E of efficient solutions is called the efficient set or Pareto set.

2.1. Notions of approximation.
We first recall the standard definitions of approximation for single objective optimization problems.
Definition 2.2. Consider a single objective optimization problem Π and let α ≥ 1. If Π is a minimization problem, we say that a feasible solution x ∈ X α-approximates another feasible solution x′ ∈ X if f(x) ≤ α ⋅ f(x′). If Π is a maximization problem, we say that a feasible solution x ∈ X α-approximates another feasible solution x′ ∈ X if α ⋅ f(x) ≥ f(x′). A feasible solution that α-approximates every other feasible solution is called an α-approximation for Π. A (polynomial-time) α-approximation algorithm is an algorithm that, for every instance I with encoding length |I|, computes an α-approximation for Π in time bounded by a polynomial in |I|.
The following definition extends the concept of approximation to the multiobjective case.
Definition 2.3. Let α = (α_1, . . . , α_p) ∈ R^p with α_j ≥ 1 for all j ∈ {1, . . . , p}. For a minimization problem Π, we say that a feasible solution x ∈ X α-approximates another feasible solution x′ ∈ X (or, equivalently, that the feasible point y = f(x) α-approximates the feasible point y′ = f(x′)) if f_j(x) ≤ α_j ⋅ f_j(x′) for all j ∈ {1, . . . , p}. Similarly, for a maximization problem Π, we say that a feasible solution x ∈ X α-approximates another feasible solution x′ ∈ X (or, equivalently, that y = f(x) α-approximates y′ = f(x′)) if α_j ⋅ f_j(x) ≥ f_j(x′) for all j ∈ {1, . . . , p}. The standard notion of approximation for multiobjective optimization problems used in the literature is the following one.
Definition 2.4. Let α = (α_1, . . . , α_p) ∈ R^p with α_j ≥ 1 for all j ∈ {1, . . . , p}. A set P ⊆ X of feasible solutions is called an α-approximation for the multiobjective problem Π if, for any feasible solution x′ ∈ X, there exists a solution x ∈ P that α-approximates x′.
In the following definition, we generalize the standard notion of approximation for multiobjective problems by allowing a set of vectors of approximation factors instead of only a single vector, which allows for tighter approximation results.
Definition 2.5. Let A ⊆ R p be a set of vectors with α j ≥ 1 for all α ∈ A and all j ∈ {1, . . . , p}. Then a set P ⊆ X of feasible solutions is called a (multi-factor) A-approximation for the multiobjective problem Π if, for any feasible solution x ′ ∈ X, there exists a solution x ∈ P and a vector α ∈ A such that x α-approximates x ′ .
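On finite instances whose images are given explicitly, Definitions 2.3 and 2.5 can be checked directly. The following sketch is our own illustration (all names are ours, not from the paper) for the minimization case:

```python
def approximates(y, y_prime, alpha):
    """y alpha-approximates y' (minimization, cf. Definition 2.3):
    y_j <= alpha_j * y'_j for every objective j."""
    return all(yj <= aj * ypj for yj, aj, ypj in zip(y, alpha, y_prime))

def is_multi_factor_approximation(P_images, X_images, A):
    """Definition 2.5: every feasible point must be alpha-approximated
    by some point of P for some vector alpha in A."""
    return all(
        any(approximates(y, yp, alpha) for y in P_images for alpha in A)
        for yp in X_images
    )

# Toy biobjective instance: with A = {(1, 2), (2, 1)}, the two points
# (1, 4) and (4, 1) together cover the point (2.5, 2.5).
X_img = [(1.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
P_img = [(1.0, 4.0), (4.0, 1.0)]
A = [(1.0, 2.0), (2.0, 1.0)]
print(is_multi_factor_approximation(P_img, X_img, A))  # -> True
```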

2.2. Weighted sum scalarization.
Given a p-objective optimization problem Π and a vector w = (w_1, . . . , w_p) ∈ R^p with w_j > 0 for all j ∈ {1, . . . , p}, the weighted sum problem (or weighted sum scalarization) Π_WS(w) associated with Π is defined as the following single objective optimization problem:

min (resp. max) ∑_{j=1}^{p} w_j ⋅ f_j(x) subject to x ∈ X.

Definition 2.6. A feasible solution x ∈ X is called supported if there exists a vector w = (w_1, . . . , w_p) ∈ R^p of positive weights such that x is an optimal solution of the weighted sum problem Π_WS(w). In this case, the feasible point y = f(x) ∈ Y is called a supported point. The set of all supported solutions is denoted by X_S.
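For an explicitly enumerated feasible set, the weighted sum scalarization and the notion of supported points can be illustrated as follows. This is a brute-force sketch of our own; in applications, Π_WS(w) is of course solved by a problem-specific algorithm:

```python
def weighted_sum_opt(images, w, minimize=True):
    """Solve the weighted sum scalarization Pi_WS(w) by enumeration:
    optimize sum_j w_j * f_j(x) over an explicitly given set of images."""
    value = lambda y: sum(wj * yj for wj, yj in zip(w, y))
    return min(images, key=value) if minimize else max(images, key=value)

# (1, 4) and (4, 1) are supported points; the efficient point (3, 3) lies
# above the line segment between them and is therefore unsupported: it is
# optimal for no positive weight vector.
images = [(1, 4), (3, 3), (4, 1)]
print(weighted_sum_opt(images, (1.0, 1.0)))  # -> (1, 4)
print(weighted_sum_opt(images, (1.0, 3.0)))  # -> (4, 1)
```

Scanning over many weight vectors confirms that (3, 3) is never returned, which is exactly what makes unsupported solutions invisible to this scalarization.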
It is well-known that every supported point is nondominated and, correspondingly, every supported solution is efficient (cf. [13]).
In the following, we assume that there exists a polynomial-time σ-approximation algorithm WS_σ for the weighted sum problem, where σ ≥ 1 can be either a constant or a function of the input size. When calling WS_σ with some specific weight vector w, we denote this by WS_σ(w). In the minimization case, this algorithm then returns a solution x̂ such that ∑_{j=1}^{p} w_j ⋅ f_j(x̂) ≤ σ ⋅ min_{x∈X} ∑_{j=1}^{p} w_j ⋅ f_j(x). The following result shows that a σ-approximation for the weighted sum problem is also a σ-approximation of any solution in at least one of the objectives.

Lemma 2.7. Let x̂ ∈ X be a σ-approximation for Π_WS(w) for some positive weight vector w ∈ R^p. Then, for any feasible solution x ∈ X, there exists at least one i ∈ {1, . . . , p} such that x̂ σ-approximates x in objective f_i.
Proof. Consider the case where Π is a multiobjective minimization problem (the proof for the case where Π is a maximization problem works analogously). Then, we must show that, for any feasible solution x ∈ X, there exists at least one i ∈ {1, . . . , p} such that f_i(x̂) ≤ σ ⋅ f_i(x). Assume, by contradiction, that there exists some x ∈ X with f_j(x̂) > σ ⋅ f_j(x) for all j ∈ {1, . . . , p}. Then, since all weights w_j are positive, ∑_{j=1}^{p} w_j ⋅ f_j(x̂) > σ ⋅ ∑_{j=1}^{p} w_j ⋅ f_j(x) ≥ σ ⋅ min_{x′∈X} ∑_{j=1}^{p} w_j ⋅ f_j(x′), which contradicts the assumption that x̂ is a σ-approximation for Π_WS(w).
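Lemma 2.7 can also be sanity-checked numerically. The following sketch (our own; instance data is random) draws random images and weights, picks an arbitrary σ-approximate solution of the weighted sum problem, and verifies the claimed coordinatewise guarantee:

```python
import random

def check_lemma_2_7(trials=500, p=3, sigma=1.5):
    """Empirical check of Lemma 2.7 (minimization case): if x_hat is a
    sigma-approximation for Pi_WS(w), then for every feasible x there is
    at least one objective i with f_i(x_hat) <= sigma * f_i(x)."""
    for _ in range(trials):
        X = [tuple(random.uniform(1.0, 10.0) for _ in range(p)) for _ in range(8)]
        w = tuple(random.uniform(0.1, 5.0) for _ in range(p))
        ws = lambda y: sum(wj * yj for wj, yj in zip(w, y))
        opt = min(ws(y) for y in X)
        # pick any solution within factor sigma of the weighted sum optimum
        x_hat = random.choice([y for y in X if ws(y) <= sigma * opt])
        for x in X:
            # tiny slack guards against floating-point rounding only
            assert any(x_hat[i] <= sigma * x[i] * (1 + 1e-9) for i in range(p))
    return True

print(check_lemma_2_7())  # -> True
```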

3. A multi-factor approximation result for minimization problems.
In this section, we study the approximation of multiobjective minimization problems by (approximately) solving weighted sum problems. In Subsection 3.1, we propose a multi-factor approximation algorithm that significantly improves upon the ((1 + ε) ⋅ σ ⋅ p, . . . , (1 + ε) ⋅ σ ⋅ p)-approximation algorithm that can be derived from Glaßer et al. [16]. The biobjective case is then investigated in Subsection 3.2. Finally, we show in Subsection 3.3 that the resulting approximation is tight.
Proposition 3.1 motivates applying the given σ-approximation algorithm WS_σ for Π_WS iteratively for different weight vectors w in order to obtain an approximation of the multiobjective minimization problem Π. This is formalized in Algorithm 1, whose correctness and running time are established in Theorem 3.2.
Algorithm 1: An A-approximation for p-objective minimization problems
input: an instance of a p-objective minimization problem Π, ε > 0, a σ-approximation algorithm WS_σ for the weighted sum problem
output: an A-approximation P for problem Π

Theorem 3.2. For a p-objective minimization problem, Algorithm 1 outputs an A-approximation where A = {(α_1, . . . , α_p) : α_1, . . . , α_p ≥ 1, α_i ≤ σ for at least one i, and ∑_{j : α_j > 1} α_j = σ ⋅ p + ε}.

Proof. In order to approximate all feasible solutions, we can iteratively apply Proposition 3.1 with ε′ := ε/(σ ⋅ p) instead of ε, leading to the modified constraint on the sum of the α_j where the right-hand side becomes (1 + ε′) ⋅ σ ⋅ p = σ ⋅ p + ε. More precisely, we iterate with b_j = LB(j) ⋅ (1 + ε′)^{i_j} and i_j = 0, . . . , u_j, where u_j is the largest integer such that LB(j) ⋅ (1 + ε′)^{u_j} ≤ UB(j), for each j ∈ {1, . . . , p}. Actually, this iterative application of Proposition 3.1 involves redundant weight vectors. More precisely, consider a weight vector w = (w_1, . . . , w_p) where w_j = 1/b_j with b_j = LB(j) ⋅ (1 + ε′)^{t_j} for j = 1, . . . , p, and let k be an index such that t_k = min_{j=1,...,p} t_j. Then problem Π_WS(w) is equivalent to Π_WS(w′), where w′_j = 1/b′_j with b′_j = LB(j) ⋅ (1 + ε′)^{t_j − t_k} for j = 1, . . . , p, since w′ is obtained from w by scaling all weights by the common factor (1 + ε′)^{t_k}. Therefore, it is sufficient to consider all weight vectors w for which at least one component w_k is set to 1/LB(k) (see Figure 2 for an illustration). The running time follows.
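The weight-grid construction from the proof of Theorem 3.2 can be sketched as follows. This is an illustrative Python rendering under our own naming; ws_oracle stands for WS_σ and is assumed to return the image of the computed solution:

```python
from itertools import product
from math import floor, log

def algorithm_1_sketch(ws_oracle, LB, UB, eps, sigma, p):
    """Enumerate the geometric weight grid from the proof of Theorem 3.2,
    skipping vectors that are redundant up to a common scaling, and collect
    one oracle answer per remaining weight vector."""
    eps_prime = eps / (sigma * p)
    # u_j: largest integer with LB[j] * (1 + eps')^{u_j} <= UB[j]
    u = [floor(log(UB[j] / LB[j], 1 + eps_prime)) for j in range(p)]
    P = []
    for t in product(*(range(uj + 1) for uj in u)):
        if min(t) != 0:   # redundant: equivalent to a scaled weight vector
            continue
        w = tuple(1.0 / (LB[j] * (1 + eps_prime) ** t[j]) for j in range(p))
        P.append(ws_oracle(w))
    return P

# Usage with a brute-force exact oracle (sigma = 1) on a toy biobjective
# instance given by its images; all three efficient points are recovered.
X_img = [(1.0, 8.0), (2.0, 3.0), (8.0, 1.0)]
oracle = lambda w: min(X_img, key=lambda y: sum(wj * yj for wj, yj in zip(w, y)))
P = algorithm_1_sketch(oracle, LB=[1.0, 1.0], UB=[8.0, 8.0], eps=1.0, sigma=1, p=2)
```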
Note that, depending on the structure of the weighted sum algorithm WS σ , the practical running time of Algorithm 1 could be improved by not solving every weighted sum problem from scratch, but using the information obtained in previous iterations.
Also note that, as illustrated in Figure 2, Algorithm 1 also directly yields a subdivision of the objective space into hyperrectangles such that all solutions whose images are in the same hyperrectangle are approximated by the same solution (possibly with different vectors of approximation factors).

When the weighted sum problem can be solved exactly in polynomial time, Theorem 3.2 immediately yields the following result:

Corollary 3.3. If WS_σ = WS_1 is an exact algorithm for the weighted sum problem, Algorithm 1 outputs an A-approximation where A = {(α_1, . . . , α_p) : α_1, . . . , α_p ≥ 1, α_i = 1 for at least one i, and ∑_{j : α_j > 1} α_j = p + ε}.

Another special case worth mentioning is the situation where the weighted sum problem admits a polynomial-time approximation scheme. Here, similar to the case in which an exact algorithm is available for the weighted sum problem (see Corollary 3.3), we can still obtain a set of vectors α of approximation factors with ∑_{j : α_j > 1} α_j = p + ε while only losing the property that at least one α_i equals 1.

3.2. Biobjective problems.
In this subsection, we focus on biobjective minimization problems. We first specialize some of the general results of the previous subsection to the case p = 2. Afterwards, we propose a specific approximation algorithm for biobjective problems, which significantly improves upon the running time of Algorithm 1 in the case where an exact algorithm WS_1 for the weighted sum problem is available.

Theorem 3.2, which is the main general result of the previous subsection, can trivially be specialized to the case p = 2. It is more interesting to consider the situation where the weighted sum problem can be solved exactly, corresponding to Corollary 3.3. In that case, we obtain the following result:

Corollary 3.6. If WS_σ = WS_1 is an exact algorithm for the weighted sum problem and p = 2, Algorithm 1 yields an A-approximation where A = {(1, 2 + ε), (2 + ε, 1)}.

It is worth pointing out that, unlike for the previous results, the set A of approximation factors is now finite. This type of result can be interpreted as a disjunctive approximation result: Algorithm 1 outputs a set P ensuring that, for any x ∈ X, there exists x′ ∈ P such that x′ (1, 2 + ε)-approximates x or x′ (2 + ε, 1)-approximates x.
In the biobjective case, we may scale the weights in the weighted sum problem to be of the form (γ, 1) for some γ > 0. In the following, we make use of this observation and refer to a weight vector (γ, 1) simply as γ.
Algorithm 2 is a refinement of Algorithm 1 in the biobjective case when an exact algorithm WS_1 for the weighted sum problem is available. Algorithm 1 requires testing all the u_1 + u_2 + 1 weight vectors (1/b_1, 1/b_2), or equivalently the u_1 + u_2 + 1 weights of the form (γ_t, 1), where γ_t = (LB(2)/LB(1)) ⋅ (1 + ε′)^{u_2 − t + 1} for t = 1, . . . , u_1 + u_2 + 1. Instead of testing all these weights, Algorithm 2 considers only a subset of them. More precisely, in each iteration, the algorithm selects a subset of consecutive weights {γ_ℓ, . . . , γ_r}, solves WS_1(γ_t) for the weight γ_t with t = ⌊(ℓ + r)/2⌋, and decides whether 0, 1, or 2 of the subsets {γ_ℓ, . . . , γ_t} and {γ_t, . . . , γ_r} need to be investigated further. This process can be viewed as developing a binary tree where the root, which corresponds to the initialization, requires solving two weighted sum problems, while each other node requires solving one weighted sum problem. This representation is useful to bound the running time of our algorithm. The following technical result on binary trees, whose proof is given in the appendix, will be useful for this purpose:

Lemma 3.7. A binary tree of height h in which at most k nodes have two children contains O(k ⋅ h) nodes.

Theorem 3.8. For a biobjective minimization problem, Algorithm 2 outputs an A-approximation with A = {(1, 2 + ε), (2 + ε, 1)} using O(log_{2+ε}(UB/LB) ⋅ log((1/ε) ⋅ log(UB/LB))) calls to WS_1.

Proof. The approximation guarantee of Algorithm 2 derives from Theorem 3.2. We just need to prove that the subset of weights used here is sufficient to preserve the approximation guarantee.
To this end, first observe that any solution x_i obtained for a weight γ_i with γ_ℓ > γ_i > γ_t satisfies f_1(x_ℓ) ≤ f_1(x_i) ≤ f_1(x_t) and f_2(x_t) ≤ f_2(x_i) ≤ f_2(x_ℓ), since γ_ℓ > γ_i > γ_t. Thus, if x_ℓ (1, 2 + ε)-approximates x_t, we obtain f_1(x_ℓ) ≤ f_1(x_i) and f_2(x_ℓ) ≤ (2 + ε) ⋅ f_2(x_t) ≤ (2 + ε) ⋅ f_2(x_i), which shows that x_ℓ also (1, 2 + ε)-approximates x_i. Therefore, x_i and the corresponding weight γ_i are not needed.
Similarly, if x_t (2 + ε, 1)-approximates x_ℓ, we have f_1(x_t) ≤ (2 + ε) ⋅ f_1(x_ℓ) ≤ (2 + ε) ⋅ f_1(x_i) and f_2(x_t) ≤ f_2(x_i), which shows that x_t (2 + ε, 1)-approximates x_i. Therefore, x_i and the corresponding weight γ_i are again not needed.
We now prove the claimed bound on the running time. Algorithm 2 explores a set of weights of cardinality u_1 + u_2 + 1 ∈ O((1/ε) ⋅ log(UB/LB)). The running time is obtained by bounding the number of calls to algorithm WS_1, which corresponds to the number of nodes of the binary tree implicitly developed by the algorithm. The height of this tree is log_2(u_1 + u_2 + 1) ∈ O(log((1/ε) ⋅ log(UB/LB))). In order to bound the number of nodes with two children in the tree, we observe that we generate such a node (i.e., add the pairs (ℓ, t) and (t, r) to Q) only if x_ℓ does not (1, 2 + ε)-approximate x_t and x_t does not (2 + ε, 1)-approximate x_ℓ, and also x_t does not (1, 2 + ε)-approximate x_r and x_r does not (2 + ε, 1)-approximate x_t. Hence, whenever a node with two children is generated, the corresponding solution x_t does neither (1, 2 + ε)- nor (2 + ε, 1)-approximate any previously generated solution and vice versa, so their objective values in both of the two objective functions must differ by more than a factor of 2 + ε. Using that the jth objective value of any feasible solution is between LB(j) and UB(j), this implies that there can be at most log_{2+ε}(UB/LB) + 1 ∈ O(log_{2+ε}(UB/LB)) nodes with two children in the tree.
Using the obtained bounds on the height of the tree and the number of nodes with two children, Lemma 3.7 shows that the total number of nodes in the tree is O(log_{2+ε}(UB/LB) ⋅ log((1/ε) ⋅ log(UB/LB))), which proves the claimed bound on the running time.
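The weight-pruning scheme of Algorithm 2 can be sketched as follows. This is our own simplified rendering; ws1 stands for WS_1 and is assumed to return the image of an optimal solution (solutions are identified with their images), and a weight γ encodes the weight vector (γ, 1):

```python
def algorithm_2_sketch(ws1, gammas, eps):
    """Binary search over the decreasing weight sequence gamma_1 > ... > gamma_T,
    pruning a subinterval as soon as its endpoint solutions already cover all
    intermediate weights (cf. the proof of Theorem 3.8)."""
    def approx(y, yp, a1, a2):            # y (a1, a2)-approximates yp
        return y[0] <= a1 * yp[0] and y[1] <= a2 * yp[1]

    T = len(gammas)
    sol = {1: ws1(gammas[0]), T: ws1(gammas[-1])}
    queue = [(1, T)]
    while queue:
        l, r = queue.pop()
        if r - l <= 1:
            continue
        t = (l + r) // 2
        sol[t] = ws1(gammas[t - 1])
        # keep a subinterval only if neither endpoint covers the other
        if not (approx(sol[l], sol[t], 1, 2 + eps) or approx(sol[t], sol[l], 2 + eps, 1)):
            queue.append((l, t))
        if not (approx(sol[t], sol[r], 1, 2 + eps) or approx(sol[r], sol[t], 2 + eps, 1)):
            queue.append((t, r))
    return list(sol.values())

# Toy run with a brute-force exact oracle on explicitly given images:
X_img = [(1.0, 8.0), (2.0, 3.0), (8.0, 1.0)]
ws1 = lambda g: min(X_img, key=lambda y: g * y[0] + y[1])
P = algorithm_2_sketch(ws1, gammas=[4.0, 2.0, 1.0, 0.5, 0.25], eps=0.5)
```

On this toy instance only four of the five weights are actually solved, and every feasible image is (1, 2 + ε)- or (2 + ε, 1)-approximated by the output.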

3.3. Tightness results.
When solving the weighted sum problem exactly, Corollary 3.3 states that Algorithm 1 obtains a set A of approximation factors in which α_i = 1 for at least one i and ∑_{j : α_j > 1} α_j = p + ε. The following theorem shows that this multi-factor approximation result is arbitrarily close to the best possible result obtainable by supported solutions:

Fig. 3. Image space of the instance constructed in the proof of Theorem 3.9 for p = 2. The shaded region is {(1, 2 − ε), (2 − ε, 1)}-approximated by the supported points y_1, y_2.
The instance constructed in the proof of Theorem 3.9 can be realized within many well-known multiobjective problems such as the multiobjective shortest path problem (for example, a collection of p + 1 disjoint s-t-paths whose cost vectors correspond to the points y_1, . . . , y_p, ỹ suffices). Consequently, the result from Theorem 3.9 holds for each of these specific problems as well.
Moreover, note that the classical approximation result obtained in Corollary 3.5 is also arbitrarily close to best possible in the case that the weighted sum problem is solved exactly: While Corollary 3.5 shows that a (p + ε, . . . , p + ε)-approximation is obtained from Algorithm 1 when solving the weighted sum problem exactly, the instance constructed in the proof of Theorem 3.9 shows that the supported solutions do not yield an approximation guarantee of (p − ε, . . . , p − ε) for any ε > 0. This yields the following theorem:

Theorem 3.10. For any ε > 0, there exists an instance of a p-objective minimization problem for which the set X_S of supported solutions is not a (p − ε, . . . , p − ε)-approximation.

4. Applications.
Our results can be applied to a large variety of minimization problems since exact or approximate polynomial-time algorithms are available for the weighted sum scalarization of many problems.
4.1. Problems with a polynomial-time solvable weighted sum scalarization. If the weighted sum scalarization can be solved exactly in polynomial time, Corollary 3.3 shows that Algorithm 1 yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee (α_1, . . . , α_p) such that ∑_{j : α_j > 1} α_j = p + ε and α_i = 1 for at least one i.
Many problems of this kind admit an MFPTAS, i.e., a (1 + ε, . . . , 1 + ε)-approximation that can be computed in time polynomial in the encoding length of the input and 1/ε. The approximation guarantee we obtain is worse in this case, even though the sum of the approximation factors for which an error can be observed is p + ε in both approaches. The running time, however, is usually significantly better in our approach.
For the multiobjective shortest path problem, for example, the existence of an MFPTAS was shown in [24], while several specific MFPTAS have been proposed. Among these, the MFPTAS with the best running time is the one proposed in [26]. For p ≥ 2, our algorithm significantly improves upon this running time on general digraphs with n vertices and m arcs when using one of the fastest algorithms for single objective shortest path [25], and even runs in O((m + n log log n) ⋅ log((1/ε) ⋅ log(UB/LB)) ⋅ log(UB/LB)) time for p = 2, using Theorem 3.8 and the same single objective algorithm.
There are, however, also problems for which the weighted sum scalarization can be solved exactly in polynomial time, but whose multiobjective version does not admit an MFPTAS unless P = NP. For example, this is the case for the minimum s-t-cut problem [24]. For yet other problems, such as the minimum weight perfect matching problem, only a randomized MFPTAS is known so far [24]. In both cases, our algorithm can still be applied.

4.2. Problems with a polynomial-time approximation scheme for the weighted sum scalarization. For problems where the weighted sum scalarization admits a polynomial-time approximation scheme, Corollary 3.4 shows that Algorithm 1 yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee (α_1, . . . , α_p) such that ∑_{j : α_j > 1} α_j = p + ε. Thus, only the property that α_i = 1 for at least one i is lost compared to the case where the weighted sum scalarization can be solved exactly in polynomial time.
Since a vast variety of single objective problems admit polynomial-time approximation schemes, this result is also widely applicable and yields the best known multiobjective approximation results for many problems. For example, we obtain the best known approximation results for the multiobjective versions of the weighted planar TSP (for which a polynomial-time approximation scheme with running time linear in the number n of vertices exists [22]) and minimum weight planar vertex cover (for which a polynomial-time approximation scheme was proposed in [1]). Note that, for both of these problems and many others in this class, it is not known whether the multiobjective version admits an MPTAS.

4.3. Problems with a polynomial-time σ-approximation for the weighted sum scalarization. If the weighted sum scalarization admits a polynomial-time σ-approximation algorithm (where σ can be either a constant or a function of the input size), Theorem 3.2 shows that Algorithm 1 yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee (α_1, . . . , α_p) such that ∑_{j: α_j > 1} α_j = σ ⋅ p + ε and α_i ≤ σ for at least one i. Moreover, by Corollary 3.5, the algorithm also yields a (classical) (σ ⋅ p + ε, . . . , σ ⋅ p + ε)-approximation.
These results yield the best known approximation guarantees for many well-studied problems whose single objective version does not admit a polynomial-time approximation scheme unless P = NP. Consequently, the multiobjective version of these problems does not admit an MPTAS under the same assumption. Problems of this kind include, e.g., minimum weight vertex cover, minimum k-spanning tree, minimum weight edge dominating set, and minimum metric k-center, all of which admit 2-approximation algorithms in the single objective case (see [2, 14, 15, 21], respectively). An example of a problem where only a non-constant approximation factor can be obtained in the single objective case is the minimum weight set cover problem, for which only a (1 + ln |S|)-approximation exists, where |S| denotes the cardinality of the ground set S to cover [7]. For all of these problems, Algorithm 1 yields both the best known classical approximation result for the multiobjective version as well as the first multi-factor approximation result.
A particularly interesting problem in this class is the metric version of the symmetric traveling salesman problem (metric STSP). Here, problem-specific (deterministic) algorithms exist that obtain approximation guarantees of (2, 2) in the biobjective case [18] and (2 + ε, . . . , 2 + ε) for any constant number of objectives [23]. The best approximation algorithm for the single objective version is the 3/2-approximation algorithm by Christofides [6], which can be used in Algorithm 1 in order to obtain a multi-factor approximation where each feasible solution is approximated with some approximation guarantee (α_1, . . . , α_p) such that ∑_{j: α_j > 1} α_j = 3/2 ⋅ p + ε and α_i ≤ 3/2 for at least one i.
5. An inapproximability result for maximization problems. In this section, we show that the weighted sum scalarization is much less powerful for approximating multiobjective maximization problems.
Intuitively, in a multiobjective minimization problem, the positivity of the objective function values and of the weights used within the weighted sum scalarization implies that a bad (i.e., large) value in some objective function cannot be compensated in the weighted sum by a good (i.e., small) value in another objective function. This means that, if the weighted sum of the objective values of a solution for a minimization problem is (close to) optimal (i.e., minimal), then no single objective value can be too large. More precisely, for f(x) = (f_1(x), . . . , f_p(x)) ∈ R^p and w ∈ R^p with f_j(x), w_j > 0 for j = 1, . . . , p and v > 0, it holds that ∑_{j=1}^p w_j f_j(x) ≤ v implies f_j(x) ≤ v/w_j for all j = 1, . . . , p. For a maximization problem, however, a bad (i.e., small) value in some objective function can be completely compensated in the weighted sum by a very good (i.e., large) value in another objective function. Thus, a solution that obtains a (close to) optimal (i.e., maximal) value in the weighted sum of the objective values can still have a very small (i.e., bad) value in some of the objectives: ∑_{j=1}^p w_j f_j(x) ≥ v does not imply f_j(x) ≥ c ⋅ v for all j = 1, . . . , p for any constant c > 0.
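This asymmetry between minimization and maximization can be checked on a toy example (all numbers below are illustrative only):

```python
# Minimization: if the weighted sum w.f(x) is at most v, each single
# objective is bounded individually, f_j(x) <= v / w_j, since all
# terms in the sum are positive.
w = (0.5, 0.5)
f = (3.0, 1.0)
v = sum(wj * fj for wj, fj in zip(w, f))  # weighted sum value, v = 2.0
assert all(fj <= v / wj for wj, fj in zip(w, f))

# Maximization: a near-optimal weighted sum gives no such per-objective
# guarantee. Here f_good compensates a tiny second objective with a
# huge first objective and still wins the weighted sum comparison.
M = 10**6
f_good = (M, 1.0)            # weighted sum 0.5*M + 0.5 -- near maximal
f_balanced = (M / 4, M / 4)  # weighted sum M/4 -- much smaller
ws = lambda g: sum(wj * gj for wj, gj in zip(w, g))
assert ws(f_good) > ws(f_balanced)       # f_good wins the weighted sum...
assert f_good[1] / f_balanced[1] < 1e-5  # ...yet is terrible in f_2
```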
For instance, while Corollary 3.6 implies that the set of supported solutions yields a {(1, 2 + ε), (2 + ε, 1)}-approximation for any ε > 0 in the case of a biobjective minimization problem, no similar result holds for maximization problems. Indeed, for biobjective maximization problems, Figure 4 demonstrates that there may exist unsupported solutions that are approximated only with an arbitrarily large approximation factor in (all but) one objective function by any supported solution.
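The effect behind Figure 4 can be reproduced numerically. The instance below is our own, chosen in the spirit of the figure: two supported points trade the objectives off extremely, and a balanced unsupported point is covered by neither of them within any small factor:

```python
M = 1000.0
# Images of three feasible solutions of a biobjective maximization problem:
a, b = (M, 1.0), (1.0, M)    # supported: each maximizes some weighted sum
c = (M / 2 - 1, M / 2 - 1)   # unsupported: optimal for no weight vector

def weighted_sum(point, w):
    """Scalarized value for weights (w, 1 - w)."""
    return w * point[0] + (1 - w) * point[1]

# c loses every scalarization with weights (w, 1 - w), w in (0, 1):
for k in range(1, 100):
    w = k / 100
    assert weighted_sum(c, w) < max(weighted_sum(a, w), weighted_sum(b, w))

# In maximization, a point s alpha-approximates c if alpha * s_j >= c_j
# for all j, so the factor needed is max_j c_j / s_j. The best factor
# any supported point achieves for c grows linearly in M:
factor = min(max(c[0] / a[0], c[1] / a[1]),
             max(c[0] / b[0], c[1] / b[1]))
assert factor >= M / 2 - 1
```

Increasing M makes the required factor arbitrarily large, which is exactly the behavior the theorem below generalizes to p objectives.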
The following theorem generalizes the construction in Figure 4 to an arbitrary number of objectives and shows that, for maximization problems, a polynomial approximation factor can, in general, not be obtained in more than one of the objective functions simultaneously even if the approximating set consists of all the supported solutions.

6. Conclusion. The weighted sum scalarization is the most frequently used method to transform a multiobjective optimization problem into a single objective one. In this article, we contribute to a better understanding of the quality of approximations for general multiobjective optimization problems which rely on this scalarization technique. To this end, we refine and extend the common notion of approximation quality in multiobjective optimization. As we show, the resulting multi-factor notion of approximation more accurately describes the approximation quality in multiobjective contexts. We also present an efficient approximation algorithm for general multiobjective minimization problems which turns out to be best possible under some additional assumptions. Interestingly, we show that a similar result based on supported solutions cannot be obtained for multiobjective maximization problems.
The new multi-factor notion of approximation is independent of the specific algorithms used here. Thus, a natural direction for future research is to analyze new and existing approximation algorithms more precisely with the help of this new notion. This may yield both a better understanding of existing approaches as well as more accurate approximation results.
Appendix A. Proof of Lemma 3.7.
Proof. In order to show the claimed upper bound on the number of nodes, we first show that any binary tree T with height h and k nodes with two children that has the maximum possible number of nodes among all such binary trees must have the following property: if v is a node with two children at level ℓ, then all nodes u at levels 0, . . . , ℓ − 1 must also have two children.
So assume by contradiction that T is a binary tree maximizing the number of nodes among all trees with height h and k nodes with two children, but that T does not have this property. Then there exists a node v in T with two children at some level ℓ and a node u with at most one child at a lower level ℓ′ ∈ {0, . . . , ℓ − 1}. The binary tree T′ that results from making one node w that is a child of v in T an (additional) child of u also has height h, contains k nodes with two children, and has the same number of nodes as T. Moreover, the level of w in T′ changes to ℓ′ + 1 < ℓ + 1. Hence, the level of any leaf of the subtree rooted at w must also have decreased by at least one. Thus, giving any leaf of this subtree an additional child in T′ would yield a binary tree of height h with k nodes with two children and a strictly larger number of nodes than T, contradicting the maximality of T.
By the above property, in any binary tree maximizing the number of nodes among the trees satisfying the assumptions of the lemma, there are only nodes with two children on all levels i < h′ := ⌊log_2(k + 1)⌋ and only nodes with at most one child on all levels i > h′. Level h′ may contain nodes with two children, but there is at least one node with only one child on this level.
Consequently, there are at most k nodes in total on levels 0, . . . , h′ − 1 and at most k + 1 nodes at level h′. Moreover, there are at most 2(k + 1) nodes at level h′ + 1, each of which is the root of a subtree (a path) consisting of at most h − h′ nodes (each with at most one child). Overall, this proves an upper bound of k + (k + 1) + 2(k + 1) ⋅ (h − h′) ∈ O(k ⋅ h) on the number of nodes in the tree.
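The counting argument can be sanity-checked by brute force for small parameters. The sketch below (our own, in Python) computes the true maximum number of nodes of a binary tree with height at most h and exactly k branching nodes via a simple recursion over the root, and compares it against the bound k + (k + 1) + 2(k + 1)(h − h′) derived above:

```python
from functools import lru_cache
from math import floor, log2

@lru_cache(maxsize=None)
def max_nodes(h, k):
    """Maximum number of nodes of a binary tree with height at most h
    and exactly k nodes having two children (-inf if infeasible)."""
    if h < 0:
        return float("-inf")
    if k == 0:
        return h + 1              # a path uses the full height budget
    if h == 0:
        return float("-inf")      # a single node cannot branch
    best = 1 + max_nodes(h - 1, k)  # case 1: root has one child
    for k1 in range(k):             # case 2: root branches; split k - 1
        best = max(best,
                   1 + max_nodes(h - 1, k1) + max_nodes(h - 1, k - 1 - k1))
    return best

def bound(h, k):
    """Upper bound from the proof of Lemma 3.7."""
    h_prime = floor(log2(k + 1))
    return k + (k + 1) + 2 * (k + 1) * (h - h_prime)

# Verify the bound for all feasible (h, k) with small h.
for h in range(1, 8):
    for k in range(0, 2 ** h):
        if max_nodes(h, k) > float("-inf"):
            assert max_nodes(h, k) <= bound(h, k)
```

For example, for h = 2 and k = 3 the extremal tree is the full binary tree with 7 nodes, and the bound is attained with equality.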