1 Introduction

Almost any real-world optimization problem asks for optimizing more than one objective function (e.g., the minimization of cost and time in transportation systems or the maximization of profit and safety in investments). Clearly, these objectives are conflicting, often incommensurable, and, yet, they have to be taken into account simultaneously. The discipline dealing with such problems is called multiobjective optimization. Typically, multiobjective optimization problems are solved according to the Pareto principle of optimality: a solution is called efficient (or Pareto optimal) if no other feasible solution exists that is not worse in any objective function and better in at least one objective. The images of the efficient solutions in the objective space are called nondominated points. In contrast to single objective optimization, where one typically asks for one optimal solution, the main goal of multiobjective optimization is to compute the set of all nondominated points and, for each of them, one corresponding efficient solution. Each of these solutions corresponds to a different compromise among the set of objectives and may potentially be relevant for a decision maker.

Several results in the literature, however, show that multiobjective optimization problems are hard to solve exactly [11, 12] and, in addition, the cardinalities of the set of nondominated points (the nondominated set) and the set of efficient solutions (the efficient set) may be exponentially large for discrete problems (and are typically infinite for continuous problems). This impairs the applicability of exact solution methods to real-life problems and provides a strong motivation for studying approximations of multiobjective optimization problems.

Both exact and approximate solution methods for multiobjective optimization problems often resort to using single objective auxiliary problems, which are called scalarizations of the original multiobjective problem. This refers to the transformation of a multiobjective optimization problem into a single objective auxiliary problem based on a procedure that might use additional parameters, auxiliary points, or variables. The resulting scalarized optimization problems are then solved using methods from single objective optimization and the obtained solutions are interpreted in the context of Pareto optimality.

The simplest and most widely used scalarization technique is the weighted sum scalarization (see, e.g., [12]). Here, the scalarized auxiliary problem is constructed by assigning a weight to each of the objective functions and summing up the resulting weighted objective functions in order to obtain the objective function of the scalarized problem. If the weights are chosen to be positive, then every optimal solution of the resulting weighted sum problem is efficient. Moreover, the weighted sum scalarization does not change the feasible set and, in many cases, boils down to the single objective version of the given multiobjective problem — which represents an important advantage of this scalarization especially for combinatorial problems. However, only some efficient solutions (called supported solutions) can be obtained by means of the weighted sum scalarization, while many other efficient solutions (called unsupported solutions) cannot. Consequently, a natural question is to determine which approximations of the whole efficient set can be obtained by using this very important scalarization technique.
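As a concrete illustration (ours, with hypothetical values), the following sketch sweeps positive weight vectors over a toy biobjective instance and confirms that an unsupported efficient point is never optimal for the weighted sum scalarization:

```python
# Toy biobjective minimization instance (hypothetical values, for illustration):
# (1, 3) and (3, 1) are supported, while (2.1, 2.1) is efficient but unsupported,
# since it lies strictly above the segment between the two supported points.
points = [(1.0, 3.0), (3.0, 1.0), (2.1, 2.1)]

def weighted_sum_optimum(points, w):
    """Return a point minimizing the weighted sum w[0]*y1 + w[1]*y2."""
    return min(points, key=lambda y: w[0] * y[0] + w[1] * y[1])

# Sweep positive weight vectors (w1, 1 - w1): (2.1, 2.1) never wins, although
# neither (1, 3) nor (3, 1) dominates it.
for i in range(1, 100):
    assert weighted_sum_optimum(points, (i / 100, 1 - i / 100)) != (2.1, 2.1)
```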

1.1 Previous Work

Besides many specialized approximation algorithms for particular multiobjective optimization problems, there exist several general approximation methods that can be applied to broad classes of multiobjective problems. An extensive survey of these general approximation methods is provided in [18].

Most general approximation methods for multiobjective problems are based on the seminal work of Papadimitriou and Yannakakis [21], who present a method for generating a \((1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation for general multiobjective minimization and maximization problems with a constant number of positive-valued, polynomially computable objective functions. They show that a \((1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation with size polynomial in the encoding length of the input and \(\frac {1}{\varepsilon }\) always exists. Moreover, their results show that the construction of such an approximation is possible in (fully) polynomial time, i.e., the problem admits a multiobjective (fully) polynomial-time approximation scheme or MPTAS (MFPTAS), if and only if a certain auxiliary problem called the gap problem can be solved in (fully) polynomial time.

More recent articles building upon the results of [21] present methods that additionally yield bounds on the size of the computed \((1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation relative to the size of the smallest \((1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation possible [4, 10, 22]. Moreover, it has recently been shown in [5] that an even better \((1,1+\varepsilon ,\dots ,1+\varepsilon )\)-approximation (i.e., an approximation that is exact in one objective function and (1 + ε)-approximate in all other objective functions) always exists, and that such an approximation can be computed in (fully) polynomial time if and only if the so-called dual restrict problem (introduced in [10]) can be solved in (fully) polynomial time.

Other works study how the weighted sum scalarization can be used in order to compute a set of solutions such that the convex hull of their images in the objective space yields an approximation guarantee of \((1+\varepsilon ,\dots ,1+\varepsilon )\) [7,8,9]. Using techniques similar to ours, Diakonikolas and Yannakakis [9] show that such a so-called ε-convex Pareto set can be computed in (fully) polynomial time if and only if the weighted sum scalarization admits a (fully) polynomial-time approximation scheme. Additionally, they consider questions regarding the cardinality of ε-convex Pareto sets.

Besides the general approximation methods mentioned above that work for both minimization and maximization problems, there exist several general approximation methods that are restricted either to minimization problems or to maximization problems.

For minimization problems, there are two general approximation methods that are both based on using (approximations of) the weighted sum scalarization. The previously best general approximation method for multiobjective minimization problems with an arbitrary constant number of objectives that uses the weighted sum scalarization can be obtained by combining two results of Glaßer et al. [15, 16]. They introduce another auxiliary problem called the approximate domination problem, which is similar to the gap problem. Glaßer et al. show that, if this problem is solvable in polynomial time for some approximation factor α ≥ 1, then an approximating set providing an approximation factor of α ⋅ (1 + ε) in every objective function can be computed in fully polynomial time for every ε > 0. Moreover, they show that the approximate domination problem with α := σp can be solved by using a σ-approximation algorithm for the weighted sum scalarization of the p-objective problem. Together, this implies that a ((1 + ε) ⋅ σp,…,(1 + ε) ⋅ σp)-approximation can be computed in fully polynomial time for p-objective minimization problems provided that the objective functions are positive-valued and polynomially computable and a σ-approximation algorithm for the weighted sum scalarization exists. As this result is not explicitly stated in [15, 16], no bounds on the running time are provided.

For biobjective minimization problems, Halffmann et al. [17] show how to obtain a \((\sigma \cdot (1+2\varepsilon ),\sigma \cdot (1+\frac {2}{\varepsilon }))\)-approximation for any given 0 < ε ≤ 1 if a polynomial-time σ-approximation algorithm for the weighted sum scalarization is given.

Obtaining general approximation methods for multiobjective maximization problems using the weighted sum scalarization seems to be much harder than for minimization problems. Indeed, Glaßer et al. [15] show that certain translations of approximability results from the weighted sum scalarization of an optimization problem to the multiobjective version that work for minimization problems are not possible in general for maximization problems.

An approximation method specifically designed for multiobjective maximization problems is presented by Bazgan et al. [3]. Their method is applicable to biobjective maximization problems that satisfy an additional structural assumption on the set of feasible solutions and the objective functions: for any two feasible solutions, neither of which α-approximates the other in both objective functions, it must be possible to compute in polynomial time a third solution that approximates both given solutions in both objective functions by a certain factor depending on α and on a parameter c. The approximation factor obtained by the algorithm then depends on α and c.

1.2 Our Contribution

Our contribution is twofold: First, in order to better capture the approximation quality in the context of multiobjective optimization problems, we introduce a new notion of approximation for the multiobjective case. This new notion comprises the common notion of approximation, but is specifically tailored to the multiobjective case and its inherent trade-offs between different objectives. Second, we provide a precise analysis of the approximation quality obtainable for multiobjective optimization problems by means of an exact or approximate algorithm for the weighted sum scalarization – with respect to both the common and the new notion of approximation.

In order to motivate the new notion of approximation, consider the biobjective case, in which a (2 + ε,2 + ε)-approximation can be obtained from the results of Glaßer et al. [15, 16] using an exact algorithm for the weighted sum scalarization. As illustrated in Fig. 1, this approximation guarantee is actually too pessimistic: Each point y in the image of the approximating set is the image of an optimal solution of the weighted sum scalarization and, hence, nondominated, so no images of feasible solutions can be contained in the shaded region. Thus, every feasible solution is actually either (1,2 + ε)- or (2 + ε,1)-approximated. Consequently, the approximation quality obtained in this case can be more accurately described by using two vectors of approximation factors. In order to capture such situations and allow for a more precise analysis of the approximation quality obtained for multiobjective problems, our new multi-factor notion of approximation uses a set of vectors of approximation factors instead of only a single vector.

Fig. 1

Image space of a biobjective minimization problem. The point y in the image of the approximating set (2 + ε,2 + ε)-approximates all points in the hatched region. If y is nondominated, no images of feasible solutions can be contained in the shaded region, so every image in the hatched region is actually either (1,2 + ε)- or (2 + ε,1)-approximated

The second part of our contribution consists of a detailed analysis of the approximation quality obtainable by using the weighted sum scalarization – both for multiobjective minimization problems and for multiobjective maximization problems. For minimization problems, we provide an efficient algorithm that approximates a multiobjective problem using an exact or approximate algorithm for its weighted sum scalarization. We analyze the approximation quality obtained by the algorithm both with respect to the common notion of approximation, which uses only a single vector of approximation factors, and with respect to the new multi-factor notion. With respect to the common notion, our algorithm matches the best previously known approximation guarantee of (σp + ε,…,σp + ε) obtainable for p-objective minimization problems and any ε > 0 from a σ-approximation algorithm for the weighted sum scalarization. More importantly, we show that this result is best possible in the sense that it comes arbitrarily close to the best approximation guarantee obtainable by supported solutions for the case that an exact algorithm is used to solve the weighted sum problem (i.e., when σ = 1).

When analyzing the algorithm with respect to the new multi-factor notion of approximation, however, a much stronger approximation result is obtained. Here, we show that every feasible solution is approximated with some (possibly different) vector \((\alpha _{1},\dots ,\alpha _{p})\) of approximation factors such that \({\sum }_{j:\alpha _{j}>1}\alpha _{j} = \sigma \cdot p + \varepsilon \). In particular, the worst-case approximation factor of σp + ε can actually be tight in at most one objective for any feasible point. This shows that the multi-factor notion of approximation yields a much stronger approximation result by allowing a refined analysis of the obtained approximation guarantee. Moreover, for σ = 1, we show that the obtained multi-factor approximation result comes arbitrarily close to the best multi-factor approximation result obtainable by supported solutions. We also demonstrate that our algorithm applies to a large variety of multiobjective minimization problems and yields the currently best approximation results for several problems.

Multiobjective maximization problems, however, turn out to be much harder to approximate by using the weighted sum scalarization. Here, we show that a polynomial approximation guarantee cannot, in general, be obtained in more than one of the objective functions simultaneously when using only supported solutions.

In summary, our results yield essentially tight bounds on the power of the weighted sum scalarization with respect to the approximation of multiobjective minimization and maximization problems – both in the common notion of approximation and in the new multi-factor notion.

The remainder of the paper is organized as follows: In Section 2, we formally introduce multiobjective optimization problems and provide the necessary definitions concerning their approximation. Section 3 contains our general approximation algorithm for minimization problems (Section 3.1). Moreover, we show in Section 3.2 that the obtained approximation results are tight. Section 4 concludes the paper by listing applications of our general approximation algorithm and directions for future work. In the appendix, we present both a faster approximation algorithm for minimization problems in the biobjective case (Appendix A) as well as our impossibility result for maximization problems (Appendix B).

2 Preliminaries

In the following, we consider a general multiobjective minimization or maximization problem π of the following form (where either all objective functions are to be minimized or all objective functions are to be maximized):

$$ \begin{array}{@{}rcl@{}} \min/\max && f(x)=(f_{1}(x),\dots,f_{p}(x))\\ \text{s. t. } && x \in X \end{array} $$

Here, as usual, we assume a constant number p ≥ 2 of objectives. The elements \(x\in X\) are called feasible solutions and the set X, which is assumed to be nonempty, is referred to as the feasible set. An image y = f(x) of a feasible solution \(x\in X\) is also called a (feasible) point. We let \(Y:= f(X) := \{f(x): x\in X\}\subseteq \mathbb {R}^{p}\) denote the set of feasible points.

We assume that the objective functions take only positive rational values and are polynomially computable. Moreover, for each \(j\in \{1,\dots ,p\}\), we assume that there exist strictly positive rational lower and upper bounds LB(j), UB(j) of polynomial encoding length such that LB(j) ≤ fj(x) ≤ UB(j) for all \(x\in X\). We let \(\text {LB}:= \min \limits _{j=1,\dots ,p}\text {LB}(j)\) and \(\text {UB}:= \max \limits _{j=1,\dots ,p}\text {UB}(j)\).
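The code sketches used for illustration in the remainder assume exactly this setup, restricted to finite instances. A minimal container (our convention; the example mirrors the device X := Y, f := id used later in the proof of Theorem 2):

```python
from typing import Callable, Sequence

class Instance:
    """Minimal container for a p-objective minimization instance (our naming):
    a finite feasible set X, the objective map f, and the strictly positive
    per-objective bounds LB(j), UB(j) assumed above."""
    def __init__(self, X: Sequence, f: Callable[[object], Sequence[float]],
                 LB: Sequence[float], UB: Sequence[float]):
        self.X, self.f, self.LB, self.UB = X, f, LB, UB
        self.p = len(LB)

# Example with X := Y and f the identity, as in the proof of Theorem 2 below:
toy = Instance(X=[(1.0, 3.0), (3.0, 1.0), (2.1, 2.1)], f=lambda x: x,
               LB=[1.0, 1.0], UB=[3.0, 3.0])
```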

Definition 1

For a minimization problem π, we say that a point y = f(x) ∈ Y is dominated by another point \(y^{\prime }=f(x^{\prime })\in Y\) if \(y^{\prime }\neq y\) and

$$ \begin{array}{@{}rcl@{}} y^{\prime}_{j}=f_{j}(x^{\prime}) \leq f_{j}(x)=y_{j} \text{ for all } j\in\{1,\dots,p\}. \end{array} $$

Similarly, for a maximization problem π, we say that a point y = f(x) ∈ Y is dominated by another point \(y^{\prime }=f(x^{\prime })\in Y\) if \(y^{\prime }\neq y\) and

$$ \begin{array}{@{}rcl@{}} y^{\prime}_{j}=f_{j}(x^{\prime}) \geq f_{j}(x)=y_{j} \text{ for all } j\in\{1,\dots,p\}. \end{array} $$

If the point y = f(x) is not dominated by any other point \(y^{\prime }\), we call y nondominated and the feasible solution \(x\in X\) efficient. The set \(Y_{N}\) of nondominated points is called the nondominated set and the set \(X_{E}\) of efficient solutions is called the efficient set or Pareto set.
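On finite image sets, Definition 1 yields a direct filter for the nondominated set; a small sketch for the minimization case (ours):

```python
def nondominated(points):
    """Return the nondominated subset of a finite set of points (minimization):
    y is kept unless some other point is componentwise <= y (Definition 1)."""
    def is_dominated(y):
        return any(yp != y and all(a <= b for a, b in zip(yp, y))
                   for yp in points)
    return [y for y in points if not is_dominated(y)]

# (2.5, 2.5) is dominated by (2.1, 2.1); the other three points are nondominated.
assert nondominated([(1.0, 3.0), (3.0, 1.0), (2.1, 2.1), (2.5, 2.5)]) \
       == [(1.0, 3.0), (3.0, 1.0), (2.1, 2.1)]
```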

2.1 Notions of Approximation

We first recall the standard definitions of approximation for single objective optimization problems.

Definition 2

Consider a single objective optimization problem π and let α ≥ 1. If π is a minimization problem, we say that a feasible solution \(x\in X\) α-approximates another feasible solution \(x^{\prime }\in X\) if \(f(x) \leq \alpha \cdot f(x^{\prime })\). If π is a maximization problem, we say that a feasible solution \(x\in X\) α-approximates another feasible solution \(x^{\prime }\in X\) if \(\alpha \cdot f(x) \geq f(x^{\prime })\). A feasible solution that α-approximates every feasible solution of π is called an α-approximation for π.

A (polynomial-time) α-approximation algorithm is an algorithm that, for every instance I with encoding length |I|, computes an α-approximation for π in time bounded by a polynomial in |I|.

The following definition extends the concept of approximation to the multiobjective case.

Definition 3

Let \(\alpha =(\alpha _{1},\dots ,\alpha _{p})\in \mathbb {R}^{p}\) with αj ≥ 1 for all \(j\in \{1,\dots ,p\}\).

For a minimization problem π, we say that a feasible solution \(x\in X\) α-approximates another feasible solution \(x^{\prime }\in X\) (or, equivalently, that the feasible point y = f(x) α-approximates the feasible point \(y^{\prime }=f(x^{\prime })\)) if

$$ \begin{array}{@{}rcl@{}} f_{j}(x) \leq \alpha_{j}\cdot f_{j}(x^{\prime}) \text{ for all } j\in\{1,\dots,p\}. \end{array} $$

Similarly, for a maximization problem π, we say that a feasible solution \(x\in X\) α-approximates another feasible solution \(x^{\prime }\in X\) (or, equivalently, that the feasible point y = f(x) α-approximates the feasible point \(y^{\prime }=f(x^{\prime })\)) if

$$ \begin{array}{@{}rcl@{}} \alpha_{j}\cdot f_{j}(x) \geq f_{j}(x^{\prime}) \text{ for all } j\in\{1,\dots,p\}. \end{array} $$

The standard notion of approximation for multiobjective optimization problems used in the literature is the following one.

Definition 4

Let \(\alpha =(\alpha _{1},\dots ,\alpha _{p})\in \mathbb {R}^{p}\) with αj ≥ 1 for all \(j\in \{1,\dots ,p\}\).

A set \(P\subseteq X\) of feasible solutions is called an α-approximation for the multiobjective problem π if, for any feasible solution \(x^{\prime }\in X\), there exists a solution \(x\in P\) that α-approximates \(x^{\prime }\).

In the following definition, we generalize the standard notion of approximation for multiobjective problems by allowing a set of vectors of approximation factors instead of only a single vector, which allows for tighter approximation results.

Definition 5

Let \(\mathcal {A}\subseteq \mathbb {R}^{p}\) be a set of vectors with αj ≥ 1 for all \(\alpha \in \mathcal {A}\) and all \(j\in \{1,\dots ,p\}\). Then a set \(P\subseteq X\) of feasible solutions is called a (multi-factor) \(\mathcal {A}\)-approximation for the multiobjective problem π if, for any feasible solution \(x^{\prime }\in X\), there exist a solution \(x\in P\) and a vector \(\alpha \in \mathcal {A}\) such that x α-approximates \(x^{\prime }\).

Note that, in the case where \(\mathcal {A}=\{(\alpha _{1},\dots ,\alpha _{p})\}\) is a singleton, an \(\mathcal {A}\)-approximation for a multiobjective problem according to Definition 5 is equivalent to an \((\alpha _{1},\dots ,\alpha _{p})\)-approximation according to Definition 4.
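On finite instances, Definitions 4 and 5 can be checked mechanically. The following sketch (ours, stated on images rather than solutions, for the minimization case) also shows how a multi-factor guarantee can hold where a comparable single-vector guarantee fails:

```python
def approximates(y, y_prime, alpha):
    """Does point y alpha-approximate point y_prime (minimization, Definition 3)?"""
    return all(yj <= aj * ypj for yj, aj, ypj in zip(y, alpha, y_prime))

def is_A_approximation(P, Y, A):
    """Definition 5 on finite image sets: every point of Y must be
    alpha-approximated by some point of P for some vector alpha in A."""
    return all(any(approximates(y, yp, a) for y in P for a in A) for yp in Y)

Y = [(1.0, 4.0), (4.0, 1.0), (2.0, 2.0)]
P = [(1.0, 4.0), (4.0, 1.0)]
# The multi-factor guarantee with A = {(1, 2), (2, 1)} holds ...
assert is_A_approximation(P, Y, A=[(1.0, 2.0), (2.0, 1.0)])
# ... while the classical (singleton) guarantee (1.5, 1.5) of Definition 4
# fails for the point (2, 2).
assert not is_A_approximation(P, Y, A=[(1.5, 1.5)])
```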

2.2 Weighted Sum Scalarization

Given a p-objective optimization problem π and a vector \(w=(w_{1},\ldots ,w_{p})\in \mathbb {R}^{p}\) with wj > 0 for all j ∈{1,…,p}, the weighted sum problem (or weighted sum scalarization) πWS(w) associated with π is defined as the following single objective optimization problem:

$$ \begin{array}{@{}rcl@{}} \min/\max && \sum\limits_{j=1}^{p} w_{j} \cdot f_{j}(x)\\ \text{s. t. } && x \in X \end{array} $$

Definition 6

A feasible solution \(x\in X\) is called supported if there exists a vector \(w=(w_{1},\ldots ,w_{p}) \in \mathbb {R}^{p}\) of positive weights such that x is an optimal solution of the weighted sum problem πWS(w). In this case, the feasible point y = f(x) ∈ Y is called a supported point. The set of all supported solutions is denoted by \(X_{S}\).

It is well-known that every supported point is nondominated and, correspondingly, every supported solution is efficient (cf. [12]).

In the following, we assume that there exists a polynomial-time σ-approximation algorithm WSσ for the weighted sum problem, where σ ≥ 1 can be either a constant or a function of the input size. When calling WSσ with some specific weight vector w, we denote this by WSσ(w). This algorithm then returns a solution \(\hat x\) such that \({\sum }_{j=1}^{p} w_{j} f_{j}(\hat x) \leq \sigma \cdot {\sum }_{j=1}^{p} w_{j} f_{j}(x)\) for all \(x\in X\), if π is a minimization problem, and \(\sigma \cdot {\sum }_{j=1}^{p} w_{j} f_{j}(\hat x) \geq {\sum }_{j=1}^{p} w_{j} f_{j}(x)\) for all \(x\in X\), if π is a maximization problem. The running time of algorithm WSσ is denoted by \(T_{\text {WS}_{\sigma }}\).
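The sketches below treat WSσ as a black box with exactly this contract. For finite instances, a brute-force exact oracle (σ = 1) can stand in for a problem-specific algorithm (our stub):

```python
def exact_ws_oracle(X, f):
    """Brute-force weighted sum oracle with sigma = 1 for a finite feasible
    set X (minimization): given positive weights w, return a solution
    minimizing sum_j w[j] * f_j(x). A stand-in for WS_sigma in the sketches."""
    def ws(w):
        return min(X, key=lambda x: sum(wj * fj for wj, fj in zip(w, f(x))))
    return ws
```

Any polynomial-time σ-approximation algorithm for the weighted sum problem fits the same call signature.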

The following result shows that a σ-approximation for the weighted sum problem σ-approximates every feasible solution in at least one of the objectives.

Lemma 1

Let \(\hat {x} \in X\) be a σ-approximation for πWS(w) for some positive weight vector \(w\in \mathbb {R}^{p}\). Then, for any feasible solution \(x\in X\), there exists at least one i ∈{1,…,p} such that \(\hat {x}\) σ-approximates x in objective \(f_{i}\).

Proof

Consider the case where π is a multiobjective minimization problem (the proof for the case where π is a maximization problem works analogously). Then, we must show that, for any feasible solution \(x\in X\), there exists at least one i ∈{1,…,p} such that \(f_{i}(\hat {x}) \leq \sigma \cdot f_{i}(x)\).

Assume by contradiction that there exists some \(x^{\prime } \in X\) such that \(f_{j}(\hat {x}) > \sigma \cdot f_{j}(x^{\prime })\) for all j ∈{1,…,p}. Then, we obtain \({\sum }_{j=1}^{p} w_{j} \cdot f_{j}(\hat {x}) > \sigma \cdot {\sum }_{j=1}^{p} w_{j} \cdot f_{j}(x^{\prime })\), which contradicts the assumption that \(\hat {x}\) is a σ-approximation for πWS(w). □

3 A Multi-Factor Approximation Result for Minimization Problems

In this section, we study the approximation of multiobjective minimization problems by (approximately) solving weighted sum problems. In Section 3.1, we propose a multi-factor approximation algorithm that significantly improves upon the \(((1+\varepsilon )\cdot \sigma \cdot p, \dots ,(1+\varepsilon )\cdot \sigma \cdot p)\)-approximation algorithm that can be derived from Glaßer et al. [15]. Moreover, we show in Section 3.2 that the resulting approximation is tight.

3.1 Approximation Results

Proposition 1

Let \(\bar {x}\in X\) be a feasible solution of π and let \(b=(b_{1},\dots ,b_{p})\) be such that \(b_{j}\leq f_{j}(\bar {x})\leq (1+\varepsilon )\cdot b_{j}\) for \(j=1,\dots ,p\) and some ε > 0. Applying WSσ(w) with \(w_{j}:= \frac {1}{b_{j}}\) for \(j=1,\dots ,p\) yields a solution \(\hat x\) that \((\alpha _{1},\dots ,\alpha _{p})\)-approximates \(\bar x\) for some \(\alpha _{1},\dots ,\alpha _{p}\geq 1\) such that \(\alpha _{i}\leq \sigma \) for at least one i ∈{1,…,p} and

$$ \begin{array}{@{}rcl@{}} \sum\limits_{j:\alpha_{j}>1} \alpha_{j} = (1+\varepsilon)\cdot\sigma\cdot p. \end{array} $$

Proof

Since \(\hat x\) is the solution returned by WSσ(w), we have

$$ \begin{array}{@{}rcl@{}} \sum\limits_{j=1}^{p}\frac{1}{b_{j}} f_{j}(\hat{x}) \leq \sigma\cdot\left( \sum\limits_{j=1}^{p}\frac{1}{b_{j}}f_{j}(\bar{x})\right) \leq \sigma\cdot(1+\varepsilon)\cdot\left( \sum\limits_{j=1}^{p} 1\right) = (1+\varepsilon)\cdot\sigma\cdot p. \end{array} $$

Since \(\frac {1}{b_{j}} \geq \frac {1}{f_{j}(\bar {x})}\), we get \({\sum }_{j=1}^{p} \frac {f_{j}(\hat {x})}{f_{j}(\bar {x})} \leq {\sum }_{j=1}^{p}\frac {1}{b_{j}} f_{j}(\hat {x})\), which yields

$$ \begin{array}{@{}rcl@{}} \sum\limits_{j=1}^{p} \frac{f_{j}(\hat{x})}{f_{j}(\bar{x})} \leq (1+\varepsilon)\cdot\sigma\cdot p. \end{array} $$

Setting \(\alpha _{j} := \max \limits \left \{1,\frac {f_{j}(\hat {x})}{f_{j}(\bar {x})}\right \}\) for \(j=1,\dots ,p\), we have

$$ \begin{array}{@{}rcl@{}} \sum\limits_{j:\alpha_{j}>1} \alpha_{j} \leq (1+\varepsilon)\cdot\sigma\cdot p. \end{array} $$

Since an α-approximation remains valid when any of the factors αj is increased, we may assume that equality holds in the previous inequality; this yields the worst-case approximation factors αj.

Moreover, by Lemma 1, there exists at least one i ∈{1,…,p} such that \(f_{i}(\hat {x}) \leq \sigma \cdot f_{i}(\bar {x})\). Thus, we have \(\alpha _{i}\leq \sigma \) for at least one i ∈{1,…,p}, which proves the claim. □

Proposition 1 motivates applying the given σ-approximation algorithm WSσ for πWS iteratively for different weight vectors w in order to obtain an approximation of the multiobjective minimization problem π. This is formalized in Algorithm 1, whose correctness and running time are established in Theorem 1.

Algorithm 1

Theorem 1

For a p-objective minimization problem, Algorithm 1 outputs an \(\mathcal {A}\)-approximation where

$$ \begin{array}{@{}rcl@{}} \mathcal{A} = \bigg\{&(\alpha_{1},\dots,\alpha_{p}) : \alpha_{1},\dots,\alpha_{p} \geq 1, \alpha_{i} \leq \sigma \text{ for at least one \textit{i}, and}\\ & \sum\limits_{j:\alpha_{j}>1} \alpha_{j} = \sigma\cdot p\ + \varepsilon \bigg\} \end{array} $$

in time \(\displaystyle T_{\text {WS}_{\sigma }}\cdot \sum \limits _{i=1}^{p} \prod\limits_{j\neq i}\left \lceil \log _{1+\frac {\varepsilon }{\sigma p}}\frac {\text {UB}(j)}{\text {LB}(j)}\right \rceil \in \mathcal {O}\left (T_{\text {WS}_{\sigma }} \left (\frac {\sigma }{\varepsilon } \log \frac {\text {UB}}{\text {LB}}\right )^{p-1} \right )\).

Proof

In order to approximate all feasible solutions, we can iteratively apply Proposition 1 with \(\varepsilon ^{\prime } := \frac {\varepsilon }{\sigma \cdot p}\) instead of ε, leading to the modified constraint on the sum of the αj where the right-hand side becomes \((1+\varepsilon ^{\prime })\cdot \sigma \cdot p = \sigma \cdot p\ + \varepsilon \). More precisely, we iterate with \(b_{j} = LB(j)\cdot (1+ \varepsilon ^{\prime })^{i_{j}}\) and ij = 0,…,uj, where uj is the largest integer such that \(LB(j)\cdot (1+\varepsilon ^{\prime })^{u_{j}} \leq UB(j)\), for each j ∈{1,…,p}. Actually, this iterative application of Proposition 1 involves redundant weight vectors. More precisely, consider a weight vector w = (w1,…,wp) where \(w_{j}= \frac {1}{b_{j}}\) with \(b_{j} = LB(j)\cdot (1+\varepsilon ^{\prime })^{t_{j}}\) for j = 1,…,p, and let k be an index such that \(t_{k} = \min \limits _{j=1,\ldots ,p} t_{j}\). Then problem πWS(w) is equivalent to problem \({\varPi }^{\text {WS}}(w^{\prime })\) with \(w^{\prime }_{j} =\frac {1}{b^{\prime }_{j}}\), where \(b^{\prime }_{j} = LB(j)\cdot (1+\varepsilon ^{\prime })^{t_{j} - t_{k}}\) for j = 1,…,p. Therefore, it is sufficient to consider all weight vectors w for which at least one component wk is set to \(\frac {1}{LB(k)}\) (see Fig. 2 for an illustration). The running time follows. □
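A minimal Python sketch of the procedure just described (the enumeration of non-redundant weight vectors, each followed by one call to WSσ), using the oracle contract from Section 2.2; this is our reconstruction of Algorithm 1, not the authors' pseudocode, and the function and parameter names are ours:

```python
import itertools
import math

def algorithm1(ws_oracle, p, LB, UB, eps, sigma=1.0):
    """Sketch of Algorithm 1: call WS_sigma(w) for every weight vector with
    w_j = 1 / b_j, b_j = LB[j] * (1 + eps')^{t_j} and eps' = eps / (sigma * p),
    keeping only the non-redundant vectors with min_j t_j = 0."""
    base = 1.0 + eps / (sigma * p)
    # u[j] is the largest integer with LB[j] * base**u[j] <= UB[j]
    # (int() floors the nonnegative logarithm).
    u = [int(math.log(UB[j] / LB[j], base)) for j in range(p)]
    solutions = []
    for t in itertools.product(*(range(uj + 1) for uj in u)):
        if min(t) > 0:
            continue  # equivalent to the vector with exponents t_j - min_k t_k
        w = [1.0 / (LB[j] * base ** t[j]) for j in range(p)]
        solutions.append(ws_oracle(w))
    return solutions
```

For instance, with the brute-force exact_ws_oracle and the toy instance from Section 2, algorithm1(exact_ws_oracle(toy.X, toy.f), p=2, LB=toy.LB, UB=toy.UB, eps=0.5) returns only (copies of) the two supported solutions, which, in line with Theorem 1, still \(\mathcal {A}\)-approximate the unsupported point (2.1, 2.1).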

Fig. 2

Weight vectors and subdivision of the objective space in Algorithm 1. The weight vector \(w=(\frac {1}{b_{1}},\dots ,\frac {1}{b_{p}})\) with \(b_{j} = LB(j)\cdot (1+\varepsilon ^{\prime })^{t_{j}}\) for j = 1,…,p is equivalent to the weight vector \(w^{\prime }=(\frac {1}{b^{\prime }_{1}},\dots ,\frac {1}{b^{\prime }_{p}})\) obtained by reducing all exponents tj by their minimum. The solution \(\text {WS}_{\sigma }(w^{\prime })\) returned for \(w^{\prime }\) is then used to approximate all solutions with images in the shaded (hyper-) rectangles

Note that the set \(\mathcal {A}\) in Theorem 1 is infinite. By rounding up each approximation factor αj to the nearest power of 1 + ε, however, it is possible to obtain the same result for the finite set

$$ \begin{array}{@{}rcl@{}} \bar{\mathcal{A}} = \bigg\{&(\bar{\alpha}_{1},\dots,\bar{\alpha}_{p}) : \bar{\alpha}_{j}=(1+\varepsilon)^{l_{j}} \text{ for some } l_{j}\in\mathbb{Z}_{\geq0}~(j=1,\dots,p), \\ & \bar{\alpha}_{i} \leq (1+\varepsilon)\cdot\sigma \text{ for at least one \textit{i}, and} {\sum}_{j:\bar{\alpha}_{j}>1} \bar{\alpha}_{j} = (1+\varepsilon)\cdot (\sigma\cdot p\ \!+ \varepsilon) \bigg\}. \end{array} $$

Also note that, depending on the structure of the weighted sum algorithm WSσ, the practical running time of Algorithm 1 could be improved by not solving every weighted sum problem from scratch, but using the information obtained in previous iterations.

Moreover, as illustrated in Fig. 2, Algorithm 1 also directly yields a subdivision of the objective space into hyperrectangles such that all solutions whose images are in the same hyperrectangle are approximated by the same solution (possibly with different approximation guarantees): For each weight vector \(w=(\frac {1}{b_{1}},\dots ,\frac {1}{b_{p}})\) considered in the algorithm (where bk = LB(k) for at least one k), all solutions \(\bar {x}\) with images in the hyperrectangles \(\times _{j=1}^{p} \left [b_{j}\cdot (1+\varepsilon ^{\prime })^{\ell },b_{j}\cdot (1+\varepsilon ^{\prime })^{\ell +1}\right ]\) for \(\ell =0,1,\dots \) are approximated by the solution returned by WSσ(w).
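This assignment of images to weight vectors can be made explicit; a short sketch (our illustration of the subdivision in Fig. 2, assuming exact floating-point logarithms):

```python
import math

def covering_weight_vector(y, LB, eps_prime):
    """Weight vector of Algorithm 1 whose WS-solution approximates the
    solutions with image y (cf. Fig. 2): exponents s_j = floor of
    log_{1+eps'}(y_j / LB[j]), shifted so that min_j t_j = 0."""
    s = [int(math.log(yj / lbj, 1 + eps_prime)) for yj, lbj in zip(y, LB)]
    shift = min(s)
    return [1.0 / (lbj * (1 + eps_prime) ** (sj - shift))
            for lbj, sj in zip(LB, s)]
```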

When the weighted sum problem can be solved exactly in polynomial time, Theorem 1 immediately yields the following result:

Corollary 1

If WSσ = WS1 is an exact algorithm for the weighted sum problem, Algorithm 1 outputs an \(\mathcal {A}\)-approximation where

$$ \begin{array}{@{}rcl@{}} \mathcal{A} &= & \bigg\{(\alpha_{1},\dots,\alpha_{p}) : \alpha_{1},\dots,\alpha_{p} \geq 1, \alpha_{i}=1 \text{ for at least one}~i, \text{ and }\\ && \sum\limits_{j:\alpha_{j}>1} \alpha_{j} = p + \varepsilon\bigg\} \end{array} $$

in time \(\mathcal {O}\left (T_{\text {WS}_{1}} \left (\frac {1}{\varepsilon } \log \frac {\text {UB}}{\text {LB}}\right )^{p-1} \right )\).

Another special case worth mentioning is the situation where the weighted sum problem admits a polynomial-time approximation scheme. Here, similar to the case in which an exact algorithm is available for the weighted sum problem, we can still obtain a set of vectors α of approximation factors with \({\sum }_{j:\alpha _{j}>1}\alpha _{j}=p+\varepsilon \) while only losing the property that at least one αi equals 1.

Corollary 2

If the weighted sum problem admits a polynomial-time (1 + τ)-approximation for any τ > 0, then, for any ε > 0 and any \(0<\tau <\frac {\varepsilon }{p}\), Algorithm 1 can be used to compute an \(\mathcal {A}\)-approximation where

$$ \begin{array}{@{}rcl@{}} &&\mathcal{A} = \bigg\{(\alpha_{1},\dots,\alpha_{p}) : \alpha_{1},\dots,\alpha_{p} \geq 1, \alpha_{i}\leq 1+\tau \text{ for at least one}~i, \text{ and }\\ &&~~\quad{\sum}_{j:\alpha_{j}>1} \alpha_{j} = p + \varepsilon\bigg\} \end{array} $$

in time \(\mathcal {O}\left (T_{\text {WS}_{1+\tau }} \left (\frac {1+\tau }{\varepsilon -\tau \cdot p} \log \frac {\text {UB}}{\text {LB}}\right )^{p-1} \right )\).

Proof

Given ε > 0 and \(0<\tau <\frac {\varepsilon }{p}\), apply Algorithm 1 with \(\varepsilon - \tau \cdot p\) instead of ε and with σ := 1 + τ. □

Since any component of a vector in the set \(\mathcal {A}\) from Theorem 1 can get arbitrarily close to σp + ε in the worst case, the best “classical” approximation result using only a single vector of approximation factors that is obtainable from Theorem 1 reads as follows:

Corollary 3

Algorithm 1 computes a \((\sigma \cdot p + \varepsilon ,\dots ,\sigma \cdot p + \varepsilon )\)-approximation in time \(\mathcal {O}\left (T_{\text {WS}_{\sigma }} \left (\frac {\sigma }{\varepsilon } \log \frac {\text {UB}}{\text {LB}}\right )^{p-1} \right )\).

3.2 Tightness Results

When solving the weighted sum problem exactly, Corollary 1 states that Algorithm 1 obtains a set \(\mathcal {A}\) of approximation factors in which \({\sum }_{j:\alpha _{j}>1}\alpha _{j}=p+\varepsilon \) for each \(\alpha =(\alpha _{1},\dots ,\alpha _{p})\in \mathcal {A}\).

The following theorem shows that this multi-factor approximation result is arbitrarily close to the best possible result obtainable by supported solutions:

Theorem 2

For ε > 0, let

$$ \begin{array}{@{}rcl@{}} \mathcal{A}:=\{\alpha\in\mathbb{R}^{p}: \alpha_{1},\dots,\alpha_{p}\geq 1, \alpha_{i}=1 \text{ for at least one}~i, \text{ and } \sum\limits_{j:\alpha_{j}>1}\alpha_{j}=p-\varepsilon\}. \end{array} $$

Then there exists an instance of a p-objective minimization problem for which the set \(X_{S}\) of supported solutions is not an \(\mathcal {A}\)-approximation.

Proof

In the following, we only specify the set Y of images. A corresponding instance consisting of a set X of feasible solutions and an objective function f can then easily be obtained, e. g., by setting X := Y and \(f:= \text {id}_{\mathbb {R}^{p}}\).

For M > 0, let \(Y:= \{y^{1},\dots ,y^{p},\tilde {y}\}\) with \(y^{1}=\left (M,\frac {1}{p},\dots ,\frac {1}{p}\right )\), \(y^{2}=\left (\frac {1}{p},M,\frac {1}{p},\dots ,\frac {1}{p}\right )\), …, \(y^{p}=\left (\frac {1}{p},\dots ,\frac {1}{p},M\right )\) and \(\tilde {y}=\left (\frac {M+1}{p},\dots ,\frac {M+1}{p}\right )\). Note that the point \(\tilde {y}\) is unsupported, while \(y^{1},\dots ,y^{p}\) are supported (an illustration for the case p = 2 is provided in Fig. 3).

Fig. 3

Image space of the instance constructed in the proof of Theorem 2 for p = 2. The shaded region is {(1,2 − ε),(2 − ε,1)}-approximated by the supported points y1,y2

Moreover, the ratio of the j-th components of the points \(y^{j}\) and \(\tilde {y}\) is exactly

$$ \begin{array}{@{}rcl@{}} \frac{M}{\nicefrac{(M+1)}{p}} = p\cdot\frac{M}{M+1}, \end{array} $$

which is larger than p − ε for \(M>\frac {p}{\varepsilon }-1\). Consequently, for such M, the point \(\tilde {y}\) is not α-approximated by any of the supported points \(y^{1},\dots ,y^{p}\) for any \(\alpha \in \mathcal {A}\), which proves the claim. □
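The key inequality can also be checked numerically; a small sanity check with sample values (ours):

```python
# Numerical check of the construction in the proof of Theorem 2 (sample values).
p, eps = 3, 0.25
M = p / eps                   # any M > p/eps - 1 works; here M = 12
ratio = M / ((M + 1) / p)     # j-th component ratio of y^j over y_tilde
# Every vector in A has all components at most p - eps (the components
# exceeding 1 already sum to p - eps), so no supported point y^j can
# alpha-approximate y_tilde:
assert ratio > p - eps        # 36/13 ~ 2.77 > 2.75
```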

We remark that the set of points Y constructed in the proof of Theorem 2 can easily be obtained from instances of many well-known multiobjective minimization problems such as multiobjective shortest path, multiobjective spanning tree, multiobjective minimum (s-t-) cut, or multiobjective TSP (e.g., for multiobjective shortest path, a collection of p + 1 disjoint s-t-paths whose cost vectors correspond to the points \(y^{1},\dots ,y^{p},\tilde {y}\) suffices). Consequently, the result from Theorem 2 holds for each of these specific problems as well.

Moreover, note that the classical approximation result obtained in Corollary 3 is also arbitrarily close to best possible in the case that the weighted sum problem is solved exactly: While Corollary 3 shows that a \((p+\varepsilon ,\dots ,p+\varepsilon )\)-approximation is obtained from Algorithm 1 when solving the weighted sum problem exactly, the instance constructed in the proof of Theorem 2 shows that the supported solutions do not yield an approximation guarantee of \((p-\varepsilon ,\dots ,p-\varepsilon )\) for any ε > 0. This yields the following theorem:

Theorem 3

For any ε > 0, there exists an instance of a p-objective minimization problem for which the set \(X_{S}\) of supported solutions is not a \((p-\varepsilon ,\dots ,p-\varepsilon )\)-approximation.

4 Conclusion

The weighted sum scalarization is the most frequently used method to transform a multiobjective optimization problem into a single objective one. In this article, we contribute to a better understanding of the quality of approximations for general multiobjective optimization problems that rely on this scalarization technique. To this end, we refine and extend the common notion of approximation quality in multiobjective optimization. As we show, the resulting multi-factor notion of approximation more accurately describes the approximation quality in multiobjective contexts.

For multiobjective minimization problems, we also present an efficient approximation algorithm, which turns out to be best possible under some additional assumptions. This algorithm can be applied to a large variety of minimization problems and improves upon the previously best approximation results for several well-known problems. If the weighted sum scalarization admits a polynomial-time approximation scheme, our algorithm yields a multi-factor approximation where each feasible solution is approximated with some approximation guarantee \((\alpha _{1},\dots ,\alpha _{p})\) such that \({\sum }_{j:\alpha _{j}>1}\alpha _{j}=p+\varepsilon \). This yields the best known approximation results for the multiobjective versions of the weighted planar TSP [20] and minimum weight planar vertex cover [1]. If the weighted sum scalarization admits a polynomial-time σ-approximation algorithm, our algorithm yields both a (classical) \((\sigma \cdot p+\varepsilon ,\dots ,\sigma \cdot p+\varepsilon )\)-approximation and a multi-factor approximation where each feasible solution is approximated with some approximation guarantee \((\alpha _{1},\dots ,\alpha _{p})\) such that \({\sum }_{j:\alpha _{j}>1}\alpha _{j}=\sigma \cdot p+\varepsilon \) and \(\alpha _{i}\leq \sigma \) for at least one i. These results yield the best known approximation guarantees for many well-studied problems whose single objective version does not admit a polynomial-time approximation scheme unless P = NP (and, consequently, the multiobjective version does not admit an MPTAS under the same assumption). Problems of this kind include, e.g., minimum weight vertex cover [2], minimum k-spanning tree [14], minimum weight edge dominating set [13], and minimum metric k-center [19], whose single objective versions admit 2-approximation algorithms, as well as the minimum weight set cover problem, where a \((1+\ln |S|)\)-approximation exists with |S| denoting the cardinality of the ground set to cover [6].

Interestingly, it is easy to show that similar approximation results cannot be obtained for multiobjective maximization problems (see Appendix B).

Since the new multi-factor notion of approximation is independent of the specific algorithms used here, a natural direction for future research is to analyze new and existing approximation algorithms more precisely with the help of this new notion. This may yield both a better understanding of existing approaches as well as more accurate approximation results.