1 Introduction

Multiobjective optimization problems, i.e., optimization problems with more than one objective function, are of growing interest in both mathematical optimization theory and real-world applications. In these problems, solutions optimizing all objectives simultaneously usually do not exist. Therefore, if no prior information about preferences is available, every so-called efficient solution is a possible candidate for an optimal solution. A solution is said to be efficient if any other solution that is better in some objective is necessarily worse in at least one other objective. One of the major challenges in multiobjective optimization is the overwhelming number of different images of efficient solutions that typically exist.

Additional preference information reduces the number of solutions that qualify as optimal. A common way to model such preferences is via ordering cones, which describe, for each solution, which other solutions are guaranteed to be worse. In the case of minimization problems, the case of no prior information described above corresponds to the ordering cone being the nonnegative orthant of the objective space (also called the Pareto cone in this context). A larger ordering cone means more preference information and, thus, a smaller set of possible optimal solutions. Prominent special cases are weighted sum scalarizations, which correspond to the ordering cones being half spaces. If the weights for a weighted sum scalarization are given, this means that the complete preference information is available.

Another important approach for dealing with large numbers of required solutions is the concept of approximation, where every solution only has to be covered up to a multiplicative tolerance in each objective function, thus reducing the number of needed solutions drastically.

In this article, we study relations between these two approaches. More precisely, we study approximation properties (with respect to the Pareto cone) of solutions that are (approximately) optimal with respect to larger ordering cones. Our main focus lies on the case of biobjective minimization problems.

1.1 Related work

The field of study of mathematical optimization with respect to vector-valued objective functions and general preference relations is known as vector optimization. An introduction to the concepts of vector optimization can be found in [8, 15]. Multiobjective optimization is a subfield of vector optimization in which preferences are defined by the componentwise ordering on \({\mathbb {R}}^p\).

The use of cones to model preferences is a well-studied topic in multiobjective optimization [7, 14, 23] and their investigation as dominance cones was initiated by Yu [22]. He gives an in-depth study of the equivalence of properties between orderings and cones in multiobjective and vector optimization theory. Conditions under which multiobjective optimization problems using alternative ordering cones can be reduced to the standard case of the componentwise ordering are studied in [17]. An overview of results on properties of ordering cones in multiobjective optimization and the corresponding literature can be found in [21].

Vanderpooten et al. [19] introduce a general framework modeling a variety of notions of approximation in the context of general ordering cones, including the concepts considered here. They provide conditions under which an approximation with respect to some cone is an approximation with respect to some other cone containing it. Engau and Wiecek [9] characterize an additive notion of approximation using the theory of dominance cones.

The systematic study of the theory of approximation in multiobjective optimization in the multiplicative sense considered here started with the seminal work of Papadimitriou and Yannakakis [18]. They show that, under weak assumptions, approximations of polynomial cardinality are guaranteed to exist and that the problem of finding an approximation can be polynomially reduced to solving an approximate version of the decision problem associated with the multiobjective optimization problem. Subsequent articles focus on sufficient conditions for the computability of approximations and their cardinality [2, 5, 6, 12, 16, 20]. A survey on literature about approximation methods for general multiobjective optimization problems and for several specific multiobjective combinatorial optimization problems is given in [13].

The weighted sum scalarization (see, e.g., [7]) as a special case of alternative ordering cones has been a widely studied tool for computing approximations in multiobjective optimization problems: Glaßer et al. [10] study how multiobjective optimization problems can be approximated using a norm-based approach. Most notably, they show that, for p-objective minimization problems, for any \(\varepsilon > 0\), a \((p+\varepsilon )\)-approximation can be computed using the weighted sum scalarization. A specific algorithm using the weighted sum scalarization for computing approximations in biobjective minimization problems is given in [11]. For biobjective optimization problems with convex feasible sets and linear objective functions, an efficient algorithm for computing \((1+\varepsilon )\)-approximations is studied in [4]. For an extensive study of the approximation quality achievable by the weighted sum scalarization for multiobjective minimization and maximization problems in general, see [3].

1.2 Our contribution

We consider multiplicative approximation using general ordering cones for the special case of biobjective minimization problems. More specifically, we investigate how optimal (or approximately optimal) solutions with respect to general ordering cones can be used to achieve an approximation guarantee with respect to the usual Pareto cone. In contrast to the results by Vanderpooten et al. [19] about approximation guarantees carrying over from smaller to larger ordering cones, we show that an approximation with respect to some fixed ordering cone containing the Pareto cone does not straightforwardly yield an approximation with respect to the Pareto cone (i.e., in the classical sense). We introduce the concept of \(\gamma \)-supportedness as a generalization of both supportedness and efficiency. For some angle \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), a solution is called \(\gamma \)-supported if it is optimal with respect to some (arbitrary) ordering cone of inner angle \(\gamma \) (see Fig. 1 on Page 6 for an illustration of such a cone). Thus, the definitions of a \(\frac{\pi }{2}\)-supported solution and a \(\pi \)-supported solution coincide with the definition of an efficient solution and a supported solution, respectively. We show that this characterization of ordering cones by their inner angle provides structural results on the approximation guarantee that is achievable for the Pareto cone by solutions that are approximately optimal with respect to larger cones. Our main result (Theorem 3.2) naturally generalizes existing approximation results for the weighted sum scalarization as well as for the Pareto cone and unifies them in a general statement about approximability by a family of cones specified by their inner angle. Moreover, we show that the achieved approximation guarantees are best possible for every inner angle \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), including the previously known cases. 
Finally, we show that considering families of cones of the same inner angle does not yield an approximation guarantee for maximization problems, which, again, generalizes known results for the weighted sum scalarization to general ordering cones.

2 Preliminaries

In this section, we first review some important concepts and definitions from multiobjective optimization theory in the classical sense. Then we briefly recall how to generalize multiobjective optimization problems to more general ordering relations via cones and provide some basic properties of this generalization.

We introduce a new framework that allows us to describe biobjective minimization problems with respect to general ordering relations and to define \(\gamma \)-supportedness. Finally, we provide a formal definition of approximation for multiobjective optimization problems with respect to general ordering cones.

2.1 Multiobjective optimization and scalarizations

We use the usual notation \({\mathbb {R}}^p_\geqq :=\left\{ y \in {\mathbb {R}}^p: 0 \leqq y \right\} \), where \(0 \in {\mathbb {R}}^p\) is the p-dimensional zero vector and \(\leqq \) is the weak componentwise order:

$$\begin{aligned} y \leqq y'&:\Leftrightarrow y_i \le y'_i, \quad i = 1, \ldots ,p \end{aligned}$$

Multiobjective optimization problems can be formally defined as follows:

Definition 2.1

(Multiobjective Minimization/Maximization Problem) For \(p \ge 1\), a p-objective optimization problem \(\Pi \) is given by a set of instances. Each instance \(I=\left( X^I,f^I\right) \) consists of a (finite or infinite) set \(X^I\) of (feasible) solutions and a vector \(f^I = \left( f^I_1,\ldots , f^I_{p}\right) \) of p objective functions \(f^I_i: X^I \rightarrow {\mathbb {R}}\) for \(i = 1,\ldots ,p\). In a minimization problem, all objective functions \(f^I_i\) should be minimized; in a maximization problem, they should be maximized.

The solutions of interest are those for which it is not possible to improve the value of one objective function without worsening the value of at least one other objective. Solutions with this property are called efficient solutions:

Definition 2.2

For an instance \(I=\left( X^I,f^I\right) \) of a p-objective minimization (maximization) problem, a solution \(x \in X^I\) dominates another solution \(x' \in X^I\) if \(f^I(x) \ne f^I(x')\) and \(f^I(x) \leqq f^I(x')\) (\(f^I(x) \geqq f^I(x')\)). A solution \(x \in X^I\) is called efficient if it is not dominated by any other solution \(x' \in X^I\). The set \(X^I_E\subseteq X^I\) of all efficient solutions is called the efficient set.
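For a finite instance, Definition 2.2 can be checked directly by pairwise comparison of objective vectors. The following is a minimal sketch; the instance and its objective values are hypothetical:

```python
def dominates(y, yp):
    """y dominates y' (minimization): f-values differ and y <= y' componentwise."""
    return y != yp and all(a <= b for a, b in zip(y, yp))

def efficient_set(points):
    """Images of efficient solutions of a finite instance, by pairwise comparison."""
    return [y for y in points if not any(dominates(z, y) for z in points)]

# Hypothetical images f(x) of four feasible solutions:
Y = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(efficient_set(Y))   # (3, 3) is dominated by (2, 2)
```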

In the following, we usually drop the superscript I indicating the dependence on the instance in \(X^I\), \(f^I\), etc. The majority of the results of this paper are only applicable to minimization problems. Therefore, we introduce some of the concepts in this section for minimization problems only, even though they easily transfer to the case of maximization. Some of the formal definitions for maximization problems are given in Sect. 4.

In the remainder of this paper, it is assumed that, in any instance \(I = (X,f)\) of a p-objective minimization problem, the set \(f(X) + {\mathbb {R}}^p_\geqq \) is closed, i.e., the set f(X) is \({\mathbb {R}}^p_\geqq \)-closed [7]. Note that this is, in particular, the case if f(X) is compact, which holds, for example, if f(X) is finite or a polytope. Additionally, it is assumed that all objective functions only attain positive values \(f_i(x) > 0\) for all \(x \in X\) and \(i = 1,\ldots , p\). This allows for a reasonable notion of approximation (see Sect. 2.3). These assumptions imply external stability [7]: for any feasible solution \(x \in X\) that is dominated by another feasible solution \(x' \in X\), there also exists an efficient solution \(x'' \in X_E\) dominating x.

When dealing with multiobjective optimization problems, it is common to consider scalarizations, where related single objective optimization problems are considered in order to gain information about the multiobjective problem at hand. Here, we consider only scalarizations where the feasible set remains unchanged. We call an instance of a single objective optimization problem that shares the feasible set X with a given multiobjective optimization problem instance I (and whose solutions yield some information about the multiobjective instance) a scalarization of I.

Two of the most important kinds of scalarizations are weighted sum scalarizations and weighted max-ordering scalarizations.

Definition 2.3

For an instance \(I = (X,f)\) of a p-objective minimization problem and weights \(w_i > 0\) for \(i = 1, \ldots , p\), the weighted sum scalarization of I with weights \(w_1,\ldots ,w_p\) is the single objective instance

$$\begin{aligned} \min _{x \in X} \quad w_1 \cdot f_1(x) + \cdots + w_p \cdot f_p(x). \end{aligned}$$

It is well-known that, for any multiobjective optimization problem instance I and weights \(w_i > 0\) for \(i=1,\ldots ,p\), any solution \(x \in X\) that is optimal for the weighted sum scalarization of I with weights \(w_1,\ldots ,w_p\) is efficient (for I). On the other hand, there might exist efficient solutions that are not optimal for any weighted sum scalarization. Solutions that are optimal for some weighted sum scalarization are called supported solutions.
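This gap between supported and efficient solutions can be illustrated numerically. In the following sketch, the instance is hypothetical and the weights are sampled on a grid; the efficient image (3, 3) lies strictly inside the convex hull of the other two images plus \({\mathbb {R}}^2_\geqq \) and is therefore optimal for no sampled weighted sum:

```python
def weighted_sum_opt(points, w1, w2):
    """Return an image minimizing the weighted sum w1*y1 + w2*y2."""
    return min(points, key=lambda y: w1 * y[0] + w2 * y[1])

# Hypothetical images of three efficient solutions; (3, 3) is efficient
# but unsupported, since it lies above the segment between the other two.
Y = [(1.0, 4.0), (3.0, 3.0), (4.0, 1.0)]
supported = {weighted_sum_opt(Y, w, 1.0 - w) for w in (i / 100 for i in range(1, 100))}
print(supported)   # (3, 3) never appears
```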

Definition 2.4

For an instance \(I = (X,f)\) of a p-objective minimization problem and weights \(w_i > 0\) for \(i = 1, \ldots , p\), the weighted max-ordering scalarization of I with weights \(w_1,\ldots ,w_p\) is the single objective instance

$$\begin{aligned} \min _{x \in X} \quad \max \left\{ w_1 \cdot f_1(x), \ldots , w_p \cdot f_p(x)\right\} . \end{aligned}$$

It is well-known that, for any multiobjective optimization problem instance I and weights \(w_i > 0\), there exists some solution \(x \in X\) that is optimal for the weighted max-ordering scalarization of I with weights \(w_1,\ldots ,w_p\) and also efficient (for I). Moreover (if \(f(x) > 0\) for all \(x \in X\) as assumed here), each efficient solution \(x \in X_E\) is optimal for the weighted max-ordering scalarization with weights \(w_i = \frac{1}{f_i(x)}\) for \(i = 1,\ldots ,p\).
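In particular, the max-ordering scalarization with these weights reaches even unsupported efficient solutions. A small sketch with a hypothetical instance, reusing the image (3, 3) that no weighted sum scalarization finds:

```python
def max_ordering_value(y, w):
    """Objective value of the weighted max-ordering scalarization."""
    return max(w[0] * y[0], w[1] * y[1])

# Hypothetical images; x with f(x) = (3, 3) is efficient but not supported.
Y = [(1.0, 4.0), (3.0, 3.0), (4.0, 1.0)]
x = (3.0, 3.0)
w = (1.0 / x[0], 1.0 / x[1])   # weights w_i = 1 / f_i(x)
vals = [max_ordering_value(y, w) for y in Y]
print(vals)   # x attains the minimum objective value (namely 1)
```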

2.2 Orderings and cones

In multiobjective minimization problems, where efficient solutions are of interest, it is implicitly assumed that the underlying preference relation is the weak componentwise order \(\leqq \): A solution \(x \in X\) is efficient if and only if, for any \(x' \in X\) with \(f(x') \leqq f(x)\), we also have \(f(x) \leqq f(x')\). However, this can be generalized to other reasonable ways of defining “optimal” solutions.

A binary relation R on \({\mathbb {R}}^p\) that is reflexive, transitive, compatible with addition (i.e., for any \(y, y',z \in {\mathbb {R}}^p\) with \(y R y'\), we have \((y+z) R (y'+z)\)), and compatible with scalar multiplication (i.e., for any \(y,y' \in {\mathbb {R}}^p\) with \(yRy'\) and any \(\lambda > 0\), we have \((\lambda \cdot y) R (\lambda \cdot y')\)) is called a vector preorder. It is well-known that any closed vector preorder R on \({\mathbb {R}}^p\) corresponds to exactly one closed convex cone \(C \subseteq {\mathbb {R}}^p\) via \(y R y' \Leftrightarrow y' - y \in C\) and vice versa [7].

In multiobjective optimization, the relations that are of interest additionally adhere to the so-called Pareto axiom [17]: If a solution is at least as good as another solution in all objective functions, it should also be at least as good in the multiobjective sense, and if a solution is not better than another solution in any objective function and strictly worse in at least one objective, it should be worse in the multiobjective sense. For multiobjective minimization problems, this means that a closed vector preorder \(\preceq \) only qualifies as a meaningful way to describe multiobjective preferences if we have \({\mathbb {R}}^p_\geqq \subseteq C_\preceq \) and \(- {\mathbb {R}}^p_\geqq \cap C_\preceq = \{0\}\).

Fig. 1 Illustration of the cone \(C_\gamma ^\varphi \subsetneq {\mathbb {R}}^2\)

In the two-dimensional case, the situation is particularly simple: Any closed convex cone \(C \subseteq {\mathbb {R}}^2\) (except for the empty set and subspaces of \({\mathbb {R}}^2\)) can be uniquely described by its inner angle \(\gamma \in [0, \pi ]\) and its rotation \(\varphi \in [0, 2\pi )\) with respect to some direction of reference. For cones containing \({\mathbb {R}}^2_\geqq \), the inner angle \(\gamma \) has to be within \(\left[ \frac{\pi }{2}, \pi \right] \) and the angle of rotation \(\varphi \) can vary within an interval of length \(\gamma - \frac{\pi }{2}\) (without loss of generality, the interval \(\left[ 0, \gamma - \frac{\pi }{2}\right] \) since we can choose the direction of reference accordingly). Note that, if the inner angle of a cone containing \({\mathbb {R}}^2_\geqq \) is smaller than \(\pi \), it does not contain any point from \(-{\mathbb {R}}^2_\geqq \setminus \{0\}\). There exist exactly two cones of inner angle \(\pi \) that contain \({\mathbb {R}}^2_\geqq \) and are not disjoint from \(-{\mathbb {R}}^2_\geqq {\setminus } \{0\}\), namely \(\left\{ (y_1,y_2) \in {\mathbb {R}}^2 | y_1 \ge 0\right\} \) and \(\left\{ (y_1,y_2) \in {\mathbb {R}}^2 | y_2 \ge 0\right\} \). Thus, for a closed convex cone \(C \subseteq {\mathbb {R}}^2\) whose inner angle \(\gamma \) is smaller than \(\pi \), we have \({\mathbb {R}}^2_\geqq \subseteq C\) and \(- {\mathbb {R}}^2_\geqq \cap C = \{0\}\) if and only if the angle of rotation \(\varphi \) lies in the closed interval \(\left[ 0, \gamma - \frac{\pi }{2}\right] \). If the inner angle \(\gamma \) is equal to \(\pi \), we have \({\mathbb {R}}^2_\geqq \subseteq C\) and \(- {\mathbb {R}}^2_\geqq \cap C = \{0\}\) if and only if \(\varphi \) lies in the open interval \(\left( 0, \frac{\pi }{2}\right) \).
Given \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), we can write the allowed interval for \(\varphi \) concisely as \(\left[ 0,\gamma -\frac{\pi }{2}\right] \setminus \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \): this yields the closed interval \(\left[ 0, \gamma - \frac{\pi }{2}\right] \) for \(\gamma < \pi \) and the open interval \(\left( 0, \gamma - \frac{\pi }{2}\right) = \left( 0, \frac{\pi }{2}\right) \) for \(\gamma = \pi \).

Hence, the following definition, which is illustrated in Fig. 1, covers exactly all closed convex cones \(C \subseteq {\mathbb {R}}^2\) for which \({\mathbb {R}}^2_\geqq \subseteq C\) and \(- {\mathbb {R}}^2_\geqq \cap C = \{0\}\).

Definition 2.5

For \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), we define

$$\begin{aligned} \varphi ':=\gamma - \frac{\pi }{2} - \varphi . \end{aligned}$$

In the following, if the values of \(\gamma \) and \(\varphi \) are clear from the context, we always use this convention. We define a linear mapping \(T_\gamma ^\varphi : {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^2\) via

$$\begin{aligned} T_\gamma ^\varphi (y) :=\left( \begin{array}{cc} \sin \gamma &{} -\cos \gamma \\ 0 &{} 1 \end{array} \right) \left( \begin{array}{cc} \cos \varphi &{} -\sin \varphi \\ \sin \varphi &{} \cos \varphi \end{array} \right) \cdot y = \left( \begin{array}{cc} \cos \varphi ' &{} \sin \varphi '\\ \sin \varphi &{} \cos \varphi \end{array} \right) \cdot y. \end{aligned}$$

Using this notation, we define a cone

$$\begin{aligned} C_\gamma ^\varphi :=\left\{ y \in {\mathbb {R}}^2 : T_\gamma ^\varphi (y) \geqq 0 \right\} \end{aligned}$$

and the corresponding vector preorder \(\leqq _\gamma ^\varphi \) on \({\mathbb {R}}^p\) by

$$\begin{aligned} y \leqq _\gamma ^\varphi y' \quad :\Longleftrightarrow \quad y' - y \in C_\gamma ^\varphi . \end{aligned}$$

For \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), we define \({\bar{\varphi }}_\gamma \) to be the value of \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \) for which \(\varphi '= \varphi \):

$$\begin{aligned} {\bar{\varphi }}_\gamma :=\frac{\gamma }{2} - \frac{\pi }{4} \end{aligned}$$

It is easy to see that \(C_\gamma ^\varphi \subsetneq {\mathbb {R}}^2\) is a closed convex cone with inner angle \(\gamma \) containing \({\mathbb {R}}^2_\geqq \), and that the extreme directions of \(C_{\gamma }^{\varphi }\) enclose angles of \(\varphi \) and \(\gamma - \frac{\pi }{2} - \varphi = \varphi '\) with the first axis and second axis, respectively (see Fig. 1): The first \(2 \times 2\)-matrix in the definition of \(T_\gamma ^\varphi \) rotates the first axis by an angle of \(\gamma \) while the second axis remains unchanged. The second \(2 \times 2\)-matrix is a rotation matrix with rotation angle \(\varphi \).
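These properties can be checked numerically. The following sketch implements \(T_\gamma ^\varphi \) and the membership test for \(C_\gamma ^\varphi \); the angles \(\gamma = \frac{3\pi }{4}\) and \(\varphi = \frac{\pi }{8}\) are arbitrary example values:

```python
import math

def T(gamma, phi):
    """The matrix of the linear map T_gamma^phi from Definition 2.5."""
    phi_p = gamma - math.pi / 2 - phi
    return ((math.cos(phi_p), math.sin(phi_p)),
            (math.sin(phi), math.cos(phi)))

def in_cone(y, gamma, phi, eps=1e-12):
    """Test y in C_gamma^phi, i.e., T_gamma^phi(y) >= 0 componentwise."""
    (a, b), (c, d) = T(gamma, phi)
    return a * y[0] + b * y[1] >= -eps and c * y[0] + d * y[1] >= -eps

gamma, phi = 3 * math.pi / 4, math.pi / 8   # example cone of inner angle 3*pi/4
print(in_cone((1.0, 0.0), gamma, phi))      # R^2_>= is contained in C_gamma^phi
print(in_cone((0.0, 1.0), gamma, phi))
print(in_cone((-1.0, -1.0), gamma, phi))    # points of -R^2_>= \ {0} are not
print(in_cone((math.cos(phi), -math.sin(phi)), gamma, phi))  # extreme ray at angle -phi
```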

Moreover, the following lemma holds for \(\leqq _\gamma ^\varphi \):

Lemma 2.1

For \(y,y' \in {\mathbb {R}}^2\), we have \(y \leqq _\gamma ^\varphi y'\) if and only if \(T_\gamma ^\varphi (y) \leqq T_\gamma ^\varphi (y')\).

Proof

We have

$$\begin{aligned} y \leqq _\gamma ^\varphi y' \quad \Leftrightarrow \quad y' - y \in C_\gamma ^\varphi \quad \Leftrightarrow \quad T_\gamma ^\varphi (y' - y) \geqq 0 \quad \Leftrightarrow \quad T_\gamma ^\varphi (y') \geqq T_\gamma ^\varphi (y) \end{aligned}$$

by the definitions of \(\leqq _\gamma ^\varphi \) and \(C_\gamma ^\varphi \) and by linearity of \(T_\gamma ^\varphi \). \(\square \)

We summarize the facts obtained in this subsection so far in the following proposition:

Proposition 2.1

Let \(C \subseteq {\mathbb {R}}^2\). The following statements are equivalent:

  1. \(C = C_\gamma ^\varphi \) for some \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \).

  2. C is a closed convex cone with \({\mathbb {R}}^2_\geqq \subseteq C\) and \(- {\mathbb {R}}^2_\geqq \cap C = \{0\}\).

  3. \(C = C_\preceq \) for a closed vector preorder \(\preceq \) on \({\mathbb {R}}^2\) for which \(y \leqq y'\) implies \(y \preceq y'\), and \(y \leqq y'\) and \(y \ne y'\) imply \(y' \npreceq y\) for all \(y,y' \in {\mathbb {R}}^2\).

Given some \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), and an instance \((X,f)\) of a biobjective minimization problem, we can define a biobjective minimization problem instance with the same feasible set X and objective function f, but using \(\leqq _\gamma ^\varphi \) instead of \(\leqq \) as the underlying vector preorder. Proposition 2.1 states that any reasonable way to define minimization of f over X can be described like this. Moreover, from Lemma 2.1, we know that, for any biobjective minimization problem instance \((X,f)\), using \(\leqq _\gamma ^\varphi \) is equivalent to using the weak componentwise order \(\leqq \) for the objective function \(T_\gamma ^\varphi \circ f: X \rightarrow {\mathbb {R}}^2\).

Definition 2.6

For \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), and an instance \(I = (X,f)\) of a biobjective minimization problem \(\Pi \), we define \(I_\gamma ^\varphi :=\left( X, T_\gamma ^\varphi \circ f\right) \):

$$\begin{aligned} \min _{x\in X} \quad T_\gamma ^\varphi \left( f(x) \right) \end{aligned}$$

In a biobjective minimization problem instance \(I = (X,f)\), we say that a solution \(x \in X\) is optimal with respect to \(\leqq _\gamma ^\varphi \) if x is efficient in \(I_\gamma ^\varphi \), i.e., if there does not exist a solution \(x' \in X\) such that \(f(x') \ne f(x)\) and \(f(x') \leqq _\gamma ^\varphi f(x)\).

Note that, for any \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), if \(f(x) > 0\), then also \(T_\gamma ^\varphi (f(x)) > 0\). Thus, \(I_\gamma ^\varphi \) indeed always satisfies our assumption of positive-valued objective functions. Moreover, this implies that our assumption of \(f(X) + {\mathbb {R}}^2_\geqq \) being closed also transfers to \(I_\gamma ^\varphi \).

The above reasoning implies that solving a biobjective minimization problem instance with respect to any reasonable closed vector preorder can be reduced to applying a linear mapping and solving the resulting instance with respect to the usual componentwise order. Thus, for any \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), any known result that holds for biobjective optimization problems in the usual sense can also be applied to \(I_\gamma ^\varphi \) as long as all of the corresponding conditions are satisfied. However, one has to be careful when applying algorithmic results to \(I_\gamma ^\varphi \) since basic requirements like, e.g., polynomial computability of the objective function, do not trivially hold for \(\left( T_\gamma ^\varphi \circ f\right) \) even if f is polynomially computable as the matrix describing \(T_\gamma ^\varphi \) might contain irrational entries.

Obviously, \(T_{\frac{\pi }{2}}^0\) is the identity mapping, so, for any instance I of a biobjective minimization problem, we have \(I_{\frac{\pi }{2}}^0 = I\). Thus, in the special case \(\gamma = \frac{\pi }{2}\) and (thus) \(\varphi = \varphi '= 0\), the optimal solutions with respect to \(\leqq _\gamma ^\varphi \) are exactly the efficient solutions. In the other extreme case, where \(\gamma = \pi \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} = \left( 0, \frac{\pi }{2}\right) \), the definition of \(I_\gamma ^\varphi \) yields the single objective optimization problem instance

$$\begin{aligned} \min _{x\in X} \quad \sin \varphi \cdot f_1(x) + \cos \varphi \cdot f_2(x), \end{aligned}$$

i.e., the weighted sum scalarization of I with (positive) weights \(\sin \varphi \) and \(\cos \varphi \).
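This collapse can also be read off the matrix directly: for \(\gamma = \pi \), we have \(\varphi ' = \frac{\pi }{2} - \varphi \), so both rows of \(T_\pi ^\varphi \) equal \((\sin \varphi , \cos \varphi )\). A quick numerical check (the choice \(\varphi = \frac{\pi }{6}\) is arbitrary):

```python
import math

gamma, phi = math.pi, math.pi / 6          # any phi in (0, pi/2)
phi_p = gamma - math.pi / 2 - phi          # = pi/2 - phi
row1 = (math.cos(phi_p), math.sin(phi_p))  # first row of T_gamma^phi
row2 = (math.sin(phi), math.cos(phi))      # second row of T_gamma^phi
# Both rows coincide, so both components of T_gamma^phi(f(x)) equal the
# weighted sum sin(phi)*f_1(x) + cos(phi)*f_2(x).
print(row1, row2)
```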

Recall that, in a multiobjective optimization problem, a solution \(x \in X\) is called supported if there exists a nonnegative vector of weights such that x is an optimal solution of the weighted sum scalarization with these weights. Equivalently, using the fact that weighted sum scalarizations correspond to the case of the inner angle \(\gamma \) being equal to \(\pi \), we can say that a solution is supported if and only if it is an optimal solution of \(I_\gamma ^\varphi \) for \(\gamma = \pi \) for some \(\varphi \in \left( 0,\frac{\pi }{2}\right) \). We generalize this idea to arbitrary values of \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) in the following way:

Definition 2.7

Let \(I = (X,f)\) be a biobjective optimization problem and let \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) be given. We say that a solution \(x \in X\) is \(\gamma \)-supported if there exists some \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \) such that x is optimal with respect to \(\leqq _\gamma ^\varphi \).

Hence, the definition of a supported solution coincides with the definition of a \(\pi \)-supported solution. Moreover, the definition of an efficient solution is exactly the definition of a \(\frac{\pi }{2}\)-supported solution. Thus, the concept of \(\gamma \)-supportedness generalizes and connects the concepts of efficiency and supportedness. Note that, if \(\gamma _1,\gamma _2 \in \left[ \frac{\pi }{2}, \pi \right] \) such that \(\gamma _1 \le \gamma _2\), then every \(\gamma _2\)-supported solution is \(\gamma _1\)-supported. In particular, for any \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), every supported solution is \(\gamma \)-supported and every \(\gamma \)-supported solution is efficient.

2.3 Approximation

Next, we define approximation for biobjective minimization problems (the definition for maximization problems is analogous). Here, we generalize the usual notion of approximation, which is based on the componentwise order, to arbitrary ordering relations on \({\mathbb {R}}^2\). The usual definition of approximation (see [13]) is obtained by replacing \(\leqq _\gamma ^\varphi \) by \(\leqq \) in the following definition.

Definition 2.8

Let \(I = (X,f)\) be a biobjective minimization problem instance such that \(f_1(x),f_2(x) > 0\) for all \(x \in X\). Let \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \) be given. For a scalar \(\alpha \ge 1\), we say that \(x' \in X\) is \(\alpha \)-approximated by \(x \in X\) with respect to \(\leqq _\gamma ^\varphi \) if \(f(x) \leqq _\gamma ^\varphi \alpha \cdot f(x')\). A set \(X_\alpha \subseteq X\) is called an \(\alpha \)-approximation with respect to \(\leqq _\gamma ^\varphi \) if any feasible solution \(x \in X\) is \(\alpha \)-approximated with respect to \(\leqq _\gamma ^\varphi \) by some solution \(x' \in X_\alpha \).

In single objective minimization problem instances, we say that a solution is \(\alpha \)-approximate if it \(\alpha \)-approximates any other feasible solution (in the single objective sense, where \(x'\) is \(\alpha \)-approximated by x if \(f(x) \le \alpha \cdot f(x')\)).

Obviously, for any biobjective minimization problem instance, the efficient set is a 1-approximation. Note that, in the special case of \(\leqq \), an approximation is also referred to as an “approximate Pareto set” in the literature [2].

Definition 2.8, together with Lemma 2.1, states that a solution \(x' \in X\) is \(\alpha \)-approximated by another solution \(x \in X\) with respect to \(\leqq _\gamma ^\varphi \) if \(T_\gamma ^\varphi (f(x)) \leqq T_\gamma ^\varphi (\alpha \cdot f(x'))\). Note that, by linearity of \(T_\gamma ^\varphi \), this is equivalent to \(T_\gamma ^\varphi (f(x)) \leqq \alpha \cdot T_\gamma ^\varphi (f(x'))\). Thus, \(x' \in X\) is \(\alpha \)-approximated by \(x \in X\) with respect to \(\leqq _\gamma ^\varphi \) in I if and only if \(x'\) is \(\alpha \)-approximated by x (with respect to \(\leqq \)) in \(I_\gamma ^\varphi \). Recall that, in the biobjective case, optimization with respect to any closed vector preorder can be reduced to the componentwise order via \(T_\gamma ^\varphi \). The above reasoning states that the concept of approximation is consistent with this reduction. In fact, this equivalent characterization of approximation would be a different straightforward way to define approximation with respect to \(\leqq _\gamma ^\varphi \). However, the definition as stated in Definition 2.8 directly generalizes to arbitrary cones for more than two objectives while the alternative characterization is universally applicable only in the biobjective case. A very general definition of approximation in multiobjective optimization with respect to arbitrary cones and a further characterization of when the two definitional approaches mentioned above are equivalent are given by Vanderpooten et al. [19]. They also present various results generalizing the following observation about approximations:
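The equivalence of the two checks can be illustrated numerically. A sketch with hypothetical images y = f(x) and y' = f(x') and arbitrary example values for \(\alpha \), \(\gamma \), and \(\varphi \):

```python
import math

def T_apply(y, gamma, phi):
    """Apply the linear map T_gamma^phi from Definition 2.5 to y."""
    phi_p = gamma - math.pi / 2 - phi
    return (math.cos(phi_p) * y[0] + math.sin(phi_p) * y[1],
            math.sin(phi) * y[0] + math.cos(phi) * y[1])

def approx_wrt_cone(y, yp, alpha, gamma, phi, eps=1e-12):
    """y alpha-approximates y' w.r.t. <=_gamma^phi: alpha*y' - y in C_gamma^phi."""
    t = T_apply((alpha * yp[0] - y[0], alpha * yp[1] - y[1]), gamma, phi)
    return t[0] >= -eps and t[1] >= -eps

gamma, phi, alpha = 3 * math.pi / 4, math.pi / 8, 1.2   # hypothetical values
y, yp = (2.0, 1.0), (1.9, 1.1)                          # hypothetical images
direct = approx_wrt_cone(y, yp, alpha, gamma, phi)
# Equivalent check in the transformed instance I_gamma^phi
# (by Lemma 2.1 and linearity of T_gamma^phi):
Ty, Typ = T_apply(y, gamma, phi), T_apply(yp, gamma, phi)
via_transform = (Ty[0] <= alpha * Typ[0] + 1e-12 and Ty[1] <= alpha * Typ[1] + 1e-12)
print(direct, via_transform)
```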

Observation 1

Consider \(\gamma _1,\gamma _2 \in \left[ \frac{\pi }{2}, \pi \right] \), \(\varphi _1 \in \left[ 0, \gamma _1 - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma _1 - \pi , \frac{\pi }{2}\right\} \), and \(\varphi _2 \in \left[ 0, \gamma _2 - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma _2 - \pi , \frac{\pi }{2}\right\} \) such that \(\varphi _1 \le \varphi _2\) and \(\varphi '_1 \le \varphi '_2\), i.e., such that

$$\begin{aligned} C_{\gamma _1}^{\varphi _1} \subseteq C_{\gamma _2}^{\varphi _2}. \end{aligned}$$

For \(\alpha \ge 1\), if \(x' \in X\) is \(\alpha \)-approximated by \(x \in X\) in \(I_{\gamma _1}^{\varphi _1}\), then \(x'\) is also \(\alpha \)-approximated by x in \(I_{\gamma _2}^{\varphi _2}\). Thus, any \(\alpha \)-approximation in \(I_{\gamma _1}^{\varphi _1}\) is an \(\alpha \)-approximation in \(I_{\gamma _2}^{\varphi _2}\). In particular, for any \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), if \(x' \in X\) is \(\alpha \)-approximated by \(x \in X\) in I then \(x'\) is also \(\alpha \)-approximated by x in \(I_{\gamma }^{\varphi }\) and any \(\alpha \)-approximation in I is an \(\alpha \)-approximation in \(I_\gamma ^\varphi \).

3 Structural results

Observation 1 states that, for any \(\alpha \ge 1\), \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), any \(\alpha \)-approximation for I is also an \(\alpha \)-approximation for \(I_\gamma ^\varphi \). Vice versa, suppose that we can identify an approximation (or even the efficient set) for \(I_\gamma ^\varphi \) for some \(\gamma \in \left( \frac{\pi }{2},\pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \). Does this yield an \(\alpha \)-approximation for I for some \(\alpha \)? It is easy to see that the answer to this question is “no” in general:

Example 3.1

Let \(\alpha > 1\), \(\gamma \in \left( \frac{\pi }{2}, \pi \right] \), and \(\varphi \in \left( 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \frac{\pi }{2}\right\} \). Consider the following instance I of a biobjective minimization problem (see also Fig. 2): Let the feasible set consist of exactly two solutions \(x_1,x_2\) such that \(f_1(x_1) = 1\), \(f_2(x_1) = (\alpha -1) \cdot \tan \varphi \), \(f_1(x_2) = \alpha \), and \(f_2(x_2) = \frac{\alpha -1}{\alpha +1} \cdot \tan \varphi \). Then the efficient set of \(I_\gamma ^\varphi \) is \(\{x_1\}\), but \(\{x_1\}\) is not an \(\alpha \)-approximation for I. However, \(\{x_2\}\) is an \(\alpha \)-approximation for I.
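This instance is small enough to verify numerically. The sketch below instantiates it with \(\alpha = 2\), \(\gamma = \frac{3\pi }{4}\), and \(\varphi = \frac{\pi }{4}\); the relation \(\varphi ' = \gamma - \frac{\pi }{2} - \varphi \) and the objectives of \(I_\gamma ^\varphi \) are assumed as in the proofs of this section:

```python
import math

alpha, gamma, phi = 2.0, 3 * math.pi / 4, math.pi / 4
phi_p = gamma - math.pi / 2 - phi  # assumed relation between phi and phi'

f_x1 = (1.0, (alpha - 1) * math.tan(phi))
f_x2 = (alpha, (alpha - 1) / (alpha + 1) * math.tan(phi))

def transformed(y):
    # Objectives of I_gamma^phi as used in the proofs of this section.
    return (math.cos(phi_p) * y[0] + math.sin(phi_p) * y[1],
            math.sin(phi) * y[0] + math.cos(phi) * y[1])

# x1 dominates x2 in I_gamma^phi, so the efficient set of I_gamma^phi is {x1} ...
x1_dominates_x2 = all(a <= b for a, b in zip(transformed(f_x1), transformed(f_x2)))
# ... but {x1} is not an alpha-approximation for I, while {x2} is:
x1_covers_x2 = all(a <= alpha * b for a, b in zip(f_x1, f_x2))
x2_covers_x1 = all(a <= alpha * b for a, b in zip(f_x2, f_x1))
```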

Fig. 2
figure 2

Illustration of Example 3.1. The solution \(x_2\) is not optimal with respect to \(\leqq _\gamma ^\varphi \) and not \(\alpha \)-approximated by \(x_1\) (with respect to \(\leqq \))

We obtain the following proposition:

Proposition 3.1

For any \(\gamma \in \left( \frac{\pi }{2},\pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), and any \(\alpha \ge 1\), there exists an instance I of a biobjective minimization problem such that the set of optimal solutions with respect to \(\leqq _\gamma ^\varphi \) is not an \(\alpha \)-approximation.

Proof

For \(\alpha > 1\) and \(\varphi \ne 0\), the claim follows from Example 3.1. If \(\alpha > 1\) and \(\varphi = 0\), we have \(\varphi '\ne 0\), since \(\gamma \ne \frac{\pi }{2}\). Thus, we can simply exchange \(f_1\) and \(f_2\) and replace \(\varphi \) by \(\varphi '\) in Example 3.1 to obtain the claim. The claim for \(\alpha = 1\) is a direct implication of the claim for any \(\alpha > 1\). \(\square \)

Proposition 3.1 states that the set of optimal solutions with respect to \(\leqq _\gamma ^\varphi \) for a single fixed pair of parameters \((\gamma , \varphi )\) does not yield any approximation guarantee for I. This is unsurprising: If the set of optimal solutions with respect to \(\leqq _\gamma ^\varphi \) yielded any approximation guarantee, this would mean that, for the special case \(\gamma = \pi \), where \(I_\gamma ^\varphi \) is a weighted sum scalarization of I, the (often unique) optimal solution of this scalarization would already yield an approximation guarantee in general.

In the case \(\gamma = \pi \), one is typically more interested in the set of supported solutions, i.e., the set of solutions that are optimal with respect to \(\leqq _\pi ^\varphi \) for some (arbitrary) \(\varphi \in \left( 0, \frac{\pi }{2}\right) \). It is well-known that, for any biobjective minimization problem instance, the set of supported solutions is a 2-approximation [10]. We state this result using our terminology.

Theorem 3.1

(Glaßer et al. [10]) For any biobjective minimization problem instance I, let \(X_W \subseteq X\) be a set of solutions that, for any \(\varphi \in \left( 0,\frac{\pi }{2}\right) \), contains one optimal solution with respect to \(\leqq _\pi ^\varphi \). Then \(X_W\) is a 2-approximation.

Our goal is to generalize Theorem 3.1 to arbitrary values of \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \). More precisely, we want to obtain a result about the approximation guarantee achievable by solutions that are optimal with respect to \(\leqq _\gamma ^\varphi \) for some fixed \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) but arbitrary \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \). Example 3.2 shows that, for \(\gamma \in \left( \frac{\pi }{2}, \pi \right) \), it does not suffice to require a single arbitrary optimal solution for each \(\varphi \), as is the case for \(\gamma = \pi \).

Example 3.2

Let \(\gamma \in \left( \frac{\pi }{2}, \pi \right) \) and \(\alpha \ge 1\). Consider the following instance of a biobjective minimization problem (see also Fig. 3): The feasible set consists of exactly two solutions \(x_1, x_2\) with \(f_1(x_1) = \alpha + 1\), \(f_2(x_1) = 1\), \(f_1(x_2) = 1\), and \(f_2(x_2) = \frac{-\cos \gamma }{\sin \gamma } \cdot (\alpha + 1) + 1\).

Note that, for any \(\varphi \in \left[ 0,\gamma -\frac{\pi }{2}\right] \), we have \(0 \le \sin \varphi \le - \cos \gamma \), where the first inequality is strict if \(\varphi \ne 0\) and the second inequality is strict if \(\varphi \ne \gamma - \frac{\pi }{2}\), and we have \(0 < \sin \gamma \le \cos \varphi \), where, again, the second inequality is strict if \(\varphi \ne \gamma - \frac{\pi }{2}\). Therefore, for any \(\varphi \in \left[ 0,\gamma -\frac{\pi }{2}\right] \), the following holds for the second objective function of \(I_\gamma ^\varphi \):

$$\begin{aligned} \sin \varphi \cdot f_1(x_1) + \cos \varphi \cdot f_2(x_1)&= \sin \varphi \cdot (\alpha + 1) + \cos \varphi \\&\le \sin \varphi \cdot (\alpha + 1) + \cos \varphi + \sin \varphi \\&\le (-\cos \gamma ) \cdot (\alpha + 1) + \cos \varphi + \sin \varphi \\&\le \frac{\cos \varphi }{\sin \gamma } \cdot (-\cos \gamma )\cdot (\alpha + 1) + \cos \varphi + \sin \varphi \\&= \sin \varphi +\cos \varphi \cdot \left( \frac{- \cos \gamma }{\sin \gamma }\cdot (\alpha + 1) + 1\right) \\&=\sin \varphi \cdot f_1(x_2) + \cos \varphi \cdot f_2(x_2), \end{aligned}$$

where, if \(\varphi \ne 0\), the first inequality is strict, and, if \(\varphi \ne \gamma - \frac{\pi }{2}\), the second and third inequalities are strict. Thus, \(x_1\) is optimal with respect to \(\leqq _\gamma ^\varphi \) for any \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] \). On the other hand, \(x_1\) does not \(\alpha \)-approximate \(x_2\) (with respect to \(\leqq \)).
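This construction can also be checked numerically. The sketch below uses \(\gamma = \frac{3\pi }{4}\) and \(\alpha = 2\) (again assuming \(\varphi ' = \gamma - \frac{\pi }{2} - \varphi \) and the objectives of \(I_\gamma ^\varphi \) as in the computations above):

```python
import math

alpha, gamma = 2.0, 3 * math.pi / 4
f_x1 = (alpha + 1, 1.0)
f_x2 = (1.0, -math.cos(gamma) / math.sin(gamma) * (alpha + 1) + 1)

def first_objective(y, phi):
    # First objective of I_gamma^phi (assumption: phi' = gamma - pi/2 - phi).
    phi_p = gamma - math.pi / 2 - phi
    return math.cos(phi_p) * y[0] + math.sin(phi_p) * y[1]

def second_objective(y, phi):
    return math.sin(phi) * y[0] + math.cos(phi) * y[1]

# x1 is never dominated by x2 in I_gamma^phi for sampled phi in [0, gamma - pi/2]:
x1_optimal = all(
    not (first_objective(f_x2, phi) <= first_objective(f_x1, phi)
         and second_objective(f_x2, phi) <= second_objective(f_x1, phi))
    for phi in [k / 20 * (gamma - math.pi / 2) for k in range(21)]
)
# ... yet x1 does not alpha-approximate x2 w.r.t. the componentwise order:
x1_covers_x2 = all(a <= alpha * b for a, b in zip(f_x1, f_x2))
```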

Fig. 3
figure 3

Illustration of Example 3.2. The dominance cone of \(f(x_2)\) in \(I_\gamma ^\varphi \) is illustrated for \(\varphi = 0\) (dotted), \(\varphi = {\bar{\varphi }}_\gamma \) (dashed), and \(\varphi = \gamma - \frac{\pi }{2}\) (solid). The solution \(x_1\) is not dominated by \(x_2\) and is, thus, optimal with respect to \(\leqq _\gamma ^\varphi \) for any \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] \)

We now generalize Theorem 3.1 to arbitrary values of \(\gamma \). We will see that, for any \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), the set of \(\gamma \)-supported solutions is an approximation. The approximation guarantee obtained from our result is equal to 1 for \(\gamma = \frac{\pi }{2}\), is equal to 2 for \(\gamma = \pi \), and, interestingly, increases continuously in between depending on \(\gamma \).

Moreover, for any \(\gamma \in \left( \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), we provide a weighted max-ordering scalarization of \(I_\gamma ^\varphi \) such that, for fixed \(\gamma \), a set containing only one optimal solution of this scalarization for each \(\varphi \) yields the same approximation guarantee (analogous to Theorem 3.1). For \(\gamma = \pi \), this scalarization naturally yields the (single objective) instance itself, so this result is indeed a generalization of Theorem 3.1. We further generalize this result to approximate solutions of the provided scalarization.

First, note the following simple property of weighted max-ordering scalarizations:

Lemma 3.1

Let \(I = (X,f)\) be a biobjective minimization problem instance, let \(\alpha \ge 1\), and let \(w_1, w_2 > 0\) be given. Let \(x \in X\) be an \(\alpha \)-approximate solution for the weighted max-ordering scalarization of I with weights \(w_1,w_2\) and let \(x' \in X\) be a solution such that \(w_1 \cdot f_1(x') = w_2 \cdot f_2(x')\). Then \(x'\) is \(\alpha \)-approximated by x in I.

Proof

In the first component, we have

$$\begin{aligned} w_1 \cdot f_1(x)&\le \max \left\{ w_1 \cdot f_1(x),w_2 \cdot f_2(x)\right\} \\&\le \alpha \cdot \max \left\{ w_1 \cdot f_1(x'),w_2 \cdot f_2(x') \right\} \\&= \alpha \cdot w_1 \cdot f_1(x'). \end{aligned}$$

The approximation guarantee in the second component follows analogously. \(\square \)
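A toy numerical instance of Lemma 3.1, with hypothetical numbers chosen so that the hypotheses hold:

```python
# Toy check of Lemma 3.1: x is optimal (alpha = 1) for the weighted
# max-ordering scalarization among {x, x'}, and x' is "balanced", i.e.,
# w1 * f1(x') = w2 * f2(x'). All numbers are hypothetical.
w1, w2, alpha = 2.0, 3.0, 1.0
f_x, f_xp = (1.2, 0.5), (1.5, 1.0)

scal = lambda y: max(w1 * y[0], w2 * y[1])
x_is_alpha_approximate = scal(f_x) <= alpha * scal(f_xp)  # 2.4 <= 3.0
balanced = w1 * f_xp[0] == w2 * f_xp[1]                   # 3.0 == 3.0
# Conclusion of the lemma: x' is alpha-approximated by x componentwise.
conclusion = all(a <= alpha * b for a, b in zip(f_x, f_xp))
```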

The following lemma states that a solution \(x \in X\) that approximates another solution \(x' \in X\) with respect to \(\leqq _\gamma ^\varphi \) for some \(\gamma \) and \(\varphi \) also approximates \(x'\) with respect to \(\leqq \) by some factor. This factor depends on \(\gamma \), \(\varphi \), and \(f(x')\). Note that, by Proposition 3.1, we cannot expect this factor to depend solely on \(\gamma \) and \(\varphi \).

Lemma 3.2

Let \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \), \(\alpha \ge 1\), and let \(I = (X,f)\) be a biobjective minimization problem instance. Let \(x' \in X\) be \(\alpha \)-approximated by \(x \in X\) with respect to \(\leqq _\gamma ^\varphi \). Then x approximates \(x'\) (with respect to \(\leqq \)) with factor

$$\begin{aligned} \alpha \cdot \left( 1+ \max \left\{ \frac{f_1(x')}{f_2(x')} \cdot \tan \varphi , \frac{f_2(x')}{f_1(x')} \cdot \tan \varphi '\right\} \right) . \end{aligned}$$

Proof

In the first component, we obtain

$$\begin{aligned} f_1(x)&\le \frac{1}{\cos \varphi '} \cdot \left( \cos \varphi '\cdot f_1(x) + \sin \varphi '\cdot f_2(x)\right) \\&\le \frac{1}{\cos \varphi '} \cdot \alpha \cdot \left( \cos \varphi '\cdot f_1(x') + \sin \varphi '\cdot f_2(x')\right) \\&= \alpha \cdot \left( 1+\tan \varphi '\cdot \frac{f_2(x')}{f_1(x')}\right) \cdot f_1(x'). \end{aligned}$$

Similarly, in the second component, we obtain

$$\begin{aligned} f_2(x)&\le \frac{1}{\cos \varphi } \cdot \left( \sin \varphi \cdot f_1(x) + \cos \varphi \cdot f_2(x)\right) \\&\le \frac{1}{\cos \varphi } \cdot \alpha \cdot \left( \sin \varphi \cdot f_1(x') + \cos \varphi \cdot f_2(x')\right) \\&= \alpha \cdot \left( 1+\tan \varphi \cdot \frac{f_1(x')}{f_2(x')}\right) \cdot f_2(x'). \end{aligned}$$

This immediately yields the claimed approximation guarantee. \(\square \)
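The factor can be spot-checked numerically on hypothetical points, again assuming \(\varphi ' = \gamma - \frac{\pi }{2} - \varphi \) and the objectives of \(I_\gamma ^\varphi \) as in the proof:

```python
import math

gamma, phi = 3 * math.pi / 4, math.pi / 8
phi_p = gamma - math.pi / 2 - phi  # assumed relation between phi and phi'
f_x, f_xp = (2.0, 1.0), (1.5, 1.2)  # hypothetical objective vectors

def transformed(y):
    return (math.cos(phi_p) * y[0] + math.sin(phi_p) * y[1],
            math.sin(phi) * y[0] + math.cos(phi) * y[1])

# Smallest alpha such that x' is alpha-approximated by x w.r.t. <=_gamma^phi:
alpha = max(a / b for a, b in zip(transformed(f_x), transformed(f_xp)))
# Factor from Lemma 3.2:
factor = alpha * (1 + max(f_xp[0] / f_xp[1] * math.tan(phi),
                          f_xp[1] / f_xp[0] * math.tan(phi_p)))
# The factor is a valid componentwise guarantee:
guarantee_holds = all(a <= factor * b for a, b in zip(f_x, f_xp))
```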

The next lemma states that, for \(\gamma \in \left( \frac{\pi }{2},\pi \right] \), \(\varphi \in \left( 0, \gamma - \frac{\pi }{2}\right) \), and an instance \(I = (X,f)\), if we use the weights \(w_1 = \frac{\sqrt{\sin \varphi }}{\sqrt{\cos \varphi '}} + \frac{\sqrt{\cos \varphi }}{\sqrt{\sin \varphi '}}\) and \(w_2 = \frac{\sqrt{\sin \varphi '}}{\sqrt{\cos \varphi }} + \frac{\sqrt{\cos \varphi '}}{\sqrt{\sin \varphi }}\) for a weighted max-ordering scalarization of \(I_\gamma ^\varphi \), then any solution \(x' \in X\) for which \(\frac{f_1(x')}{f_2(x')} =\frac{\sqrt{\tan \varphi '}}{\sqrt{\tan \varphi }}\) meets the conditions of Lemma 3.1. This scalarization is illustrated in Fig. 4.

Fig. 4
figure 4

Illustration of the weighted max-ordering scalarization of \(I_\gamma ^\varphi \) with weights \(w_1 = \frac{\sqrt{\sin \varphi }}{\sqrt{\cos \varphi '}} + \frac{\sqrt{\cos \varphi }}{\sqrt{\sin \varphi '}}\) and \(w_2 = \frac{\sqrt{\sin \varphi '}}{\sqrt{\cos \varphi }} + \frac{\sqrt{\cos \varphi '}}{\sqrt{\sin \varphi }}\) for given \(\gamma \in \left( \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left( 0, \gamma - \frac{\pi }{2}\right) \). The solution x is optimal for this scalarization so there does not exist any feasible point in the gray region. For this choice of weights, we have \(\frac{d_1}{c_1} = \frac{d_2}{c_2} = 1 + \sqrt{\tan \varphi } \cdot \sqrt{\tan \varphi '}\) (see Proposition 3.2)

Lemma 3.3

Let \(\gamma \in (\frac{\pi }{2}, \pi ]\), \(\varphi \in (0, \gamma - \frac{\pi }{2})\), and let \(I = (X,f)\) be a biobjective minimization problem instance. Let \(x' \in X\) such that \(\frac{f_1(x')}{f_2(x')} = \frac{\sqrt{\tan \varphi '}}{\sqrt{\tan \varphi }}\). Moreover, let \(w_1 = \frac{\sqrt{\sin \varphi }}{\sqrt{\cos \varphi '}} + \frac{\sqrt{\cos \varphi }}{\sqrt{\sin \varphi '}}\) and \(w_2 = \frac{\sqrt{\sin \varphi '}}{\sqrt{\cos \varphi }} + \frac{\sqrt{\cos \varphi '}}{\sqrt{\sin \varphi }}\). Then

$$\begin{aligned} w_1 \cdot \left( \cos \varphi '\cdot f_1(x') + \sin \varphi '\cdot f_2(x') \right) = w_2 \cdot \left( \sin \varphi \cdot f_1(x') + \cos \varphi \cdot f_2(x' )\right) . \end{aligned}$$

Proof

We know that \(f_1(x') = \sqrt{\tan \varphi '} \cdot \frac{f_2(x')}{\sqrt{\tan \varphi }}\), so it suffices to show that

$$\begin{aligned} w_1 \cdot \left( \cos \varphi '\cdot \sqrt{\tan \varphi '}+ \sin \varphi '\cdot \sqrt{\tan \varphi } \right) = w_2 \cdot \left( \sin \varphi \cdot \sqrt{\tan \varphi '} + \cos \varphi \cdot \sqrt{\tan \varphi }\right) . \end{aligned}$$

Using the definition of \(w_1,w_2\) and that \(\tan = \frac{\sin }{\cos }\), this is a simple computation:

$$\begin{aligned}&\left( \frac{\sqrt{\sin \varphi }}{\sqrt{\cos \varphi '}} + \frac{\sqrt{\cos \varphi }}{\sqrt{\sin \varphi '}}\right) \cdot \left( \cos \varphi '\cdot \sqrt{\tan \varphi '}+ \sin \varphi '\cdot \sqrt{\tan \varphi } \right) \\&\quad = 2 \cdot \sqrt{\sin \varphi } \cdot \sqrt{\sin \varphi '} + \sqrt{\cos \varphi } \cdot \sqrt{\cos \varphi '} + \frac{\sin \varphi \cdot \sin \varphi '}{\sqrt{\cos \varphi } \cdot \sqrt{\cos \varphi '}}\\&\quad = \left( \frac{\sqrt{\sin \varphi '}}{\sqrt{\cos \varphi }} + \frac{\sqrt{\cos \varphi '}}{\sqrt{\sin \varphi }}\right) \cdot \left( \sin \varphi \cdot \sqrt{\tan \varphi '}+ \cos \varphi \cdot \sqrt{\tan \varphi } \right) . \end{aligned}$$

\(\square \)
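The identity can also be verified numerically over a small parameter grid (assuming \(\varphi ' = \gamma - \frac{\pi }{2} - \varphi \)):

```python
import math

def both_sides(gamma, phi):
    # Returns both sides of the identity of Lemma 3.3,
    # with phi_p playing the role of phi' = gamma - pi/2 - phi.
    phi_p = gamma - math.pi / 2 - phi
    w1 = (math.sqrt(math.sin(phi)) / math.sqrt(math.cos(phi_p))
          + math.sqrt(math.cos(phi)) / math.sqrt(math.sin(phi_p)))
    w2 = (math.sqrt(math.sin(phi_p)) / math.sqrt(math.cos(phi))
          + math.sqrt(math.cos(phi_p)) / math.sqrt(math.sin(phi)))
    lhs = w1 * (math.cos(phi_p) * math.sqrt(math.tan(phi_p))
                + math.sin(phi_p) * math.sqrt(math.tan(phi)))
    rhs = w2 * (math.sin(phi) * math.sqrt(math.tan(phi_p))
                + math.cos(phi) * math.sqrt(math.tan(phi)))
    return lhs, rhs

identity_holds = all(
    abs(l - r) <= 1e-9 * max(1.0, abs(l))
    for gamma in (0.6 * math.pi, 0.75 * math.pi, 0.95 * math.pi)
    for frac in (0.2, 0.5, 0.8)
    for l, r in (both_sides(gamma, frac * (gamma - math.pi / 2)),)
)
```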

The following proposition combines Lemma 3.1, Lemma 3.2, and Lemma 3.3. It first states that, for \(x'\), \(\varphi \), \(w_1\), and \(w_2\) as in Lemma 3.3, we can approximate \(x'\) not only in the corresponding weighted max-ordering scalarization but also in \(I_\gamma ^\varphi \). Then it states that we even obtain an approximation factor for I. We will see that, for any solution \(x' \in X\), the angle \(\varphi \) satisfying \(\frac{f_1(x')}{f_2(x')} = \frac{\sqrt{\tan \varphi '}}{\sqrt{\tan \varphi }}\) corresponds to \(x'\) in the sense that, in the maximum in the approximation factor provided in Lemma 3.2, both terms are equal (a geometric explanation for this is given in Fig. 4). Therefore, the approximation factor obtained for I depends only on \(\gamma \) and \(\varphi \) and does not involve a maximum.

Proposition 3.2

Let \(\alpha \ge 1\), \(\gamma \in \left( \frac{\pi }{2}, \pi \right] \), \(\varphi \in \left( 0, \gamma - \frac{\pi }{2}\right) \), and let \(I = (X,f)\) be a biobjective minimization problem instance. For a solution  \(x \in X\) that is \(\alpha \)-approximate for the weighted max-ordering scalarization of \(I_\gamma ^\varphi \) with weights \(w_1 = \frac{\sqrt{\sin \varphi }}{\sqrt{\cos \varphi '}} + \frac{\sqrt{\cos \varphi }}{\sqrt{\sin \varphi '}}\) and \(w_2 = \frac{\sqrt{\sin \varphi '}}{\sqrt{\cos \varphi }} + \frac{\sqrt{\cos \varphi '}}{\sqrt{\sin \varphi }}\), any solution \(x' \in X\) with

$$\begin{aligned} \frac{f_1(x')}{f_2(x')} = \frac{\sqrt{\tan \varphi '}}{\sqrt{\tan \varphi }} \end{aligned}$$
(1)
(i) is \(\alpha \)-approximated by x with respect to \(\leqq _\gamma ^\varphi \), and

(ii) is \(\left( \alpha \cdot \left( 1+\sqrt{\tan \varphi } \cdot \sqrt{\tan \varphi '}\right) \right) \)-approximated by x (with respect to \(\leqq \)).

Proof

We first prove (i). Lemma 3.3 implies that

$$\begin{aligned} w_1 \cdot \left( \cos \varphi '\cdot f_1(x') + \sin \varphi '\cdot f_2(x') \right) = w_2 \cdot \left( \sin \varphi \cdot f_1(x') + \cos \varphi \cdot f_2(x' )\right) . \end{aligned}$$

Thus, we can apply Lemma 3.1 to the weighted max-ordering scalarization of \(I_\gamma ^\varphi \) with weights \(w_1,w_2\), which immediately yields that \(x'\) is \(\alpha \)-approximated by x with respect to \(\leqq _\gamma ^\varphi \).

In order to prove (ii), we apply Lemma 3.2 to obtain that \(x'\) is approximated by x with factor

$$\begin{aligned} \alpha \cdot \left( 1+ \max \left\{ \frac{f_1(x')}{f_2(x')} \cdot \tan \varphi , \frac{f_2(x')}{f_1(x')} \cdot \tan \varphi '\right\} \right) . \end{aligned}$$

Since (1) holds, we know that

$$\begin{aligned} \max \left\{ \frac{f_1(x')}{f_2(x')} \cdot \tan \varphi , \frac{f_2(x')}{f_1(x')}\cdot \tan \varphi '\right\}&= \max \left\{ \frac{\sqrt{\tan \varphi '}}{\sqrt{\tan \varphi }} \cdot \tan \varphi , \frac{\sqrt{\tan \varphi }}{\sqrt{\tan \varphi '}}\cdot \tan \varphi '\right\} \\&= \sqrt{\tan \varphi } \cdot \sqrt{\tan \varphi '}, \end{aligned}$$

which yields (ii). \(\square \)

Proposition 3.2 states that, for given \(\gamma \in \left( \frac{\pi }{2}, \pi \right] \), any solution \(x' \in X\) can be approximated by a solution that is \(\alpha \)-approximate for a specific weighted max-ordering scalarization of \(I_\gamma ^\varphi \), if \(\varphi \) is chosen such that (1) holds. The achievable approximation factor depends on \(\gamma \) and \(\varphi \). The following lemma provides an upper bound on this approximation factor that solely depends on \(\gamma \). Its proof is given in Appendix A.

Lemma 3.4

Let \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \) and \(\varphi \in \left[ 0, \gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \). Then we have \(\sqrt{\tan \varphi } \cdot \sqrt{\tan \varphi '} \le \tan {\bar{\varphi }}_\gamma \), where \({\bar{\varphi }}_\gamma = \frac{\gamma }{2} - \frac{\pi }{4}\).
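A numerical spot check of Lemma 3.4 on a grid (assuming \(\varphi ' = \gamma - \frac{\pi }{2} - \varphi \); equality is attained at \(\varphi = \varphi ' = {\bar{\varphi }}_\gamma \), i.e., at frac = 0.5 below):

```python
import math

def bound_holds(gamma, phi):
    # sqrt(tan(phi) * tan(phi')) <= tan(gamma/2 - pi/4),
    # with phi' = gamma - pi/2 - phi (assumed); tiny tolerance for
    # the equality case phi = phi'.
    phi_p = gamma - math.pi / 2 - phi
    lhs = math.sqrt(math.tan(phi) * math.tan(phi_p))
    return lhs <= math.tan(gamma / 2 - math.pi / 4) + 1e-12

lemma_holds = all(
    bound_holds(gamma, frac * (gamma - math.pi / 2))
    for gamma in (0.55 * math.pi, 0.7 * math.pi, 0.9 * math.pi)
    for frac in (0.0, 0.25, 0.5, 0.75, 1.0)
)
```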

We are now ready to prove our main result.

Theorem 3.2

Let \(I= (X,f)\) be a biobjective minimization problem instance and let \(\gamma \in \left( \frac{\pi }{2}, \pi \right] \). Let \(X_Q \subseteq X\) be a set of solutions that, for any \(\varphi \in \left( 0,\gamma -\frac{\pi }{2}\right) \), contains an \(\alpha \)-approximate solution for the weighted max-ordering scalarization of \(I_\gamma ^\varphi \) with weights \(w_1 = \frac{\sqrt{\sin \varphi }}{\sqrt{\cos \varphi '}} + \frac{\sqrt{\cos \varphi }}{\sqrt{\sin \varphi '}}\) and \(w_2 = \frac{\sqrt{\sin \varphi '}}{\sqrt{\cos \varphi }} + \frac{\sqrt{\cos \varphi '}}{\sqrt{\sin \varphi }}\). Then \(X_Q\) is an \(\left( \alpha \cdot (1+\tan {\bar{\varphi }}_\gamma )\right) \)-approximation (for I), where \({\bar{\varphi }}_\gamma = \frac{\gamma }{2} - \frac{\pi }{4}\).

Proof

Let \(x' \in X\) be any feasible solution. Choose \(\varphi \in \left( 0, \gamma - \frac{\pi }{2}\right) \) such that (1) holds, i.e., such that \(\frac{\sqrt{\tan \varphi '}}{\sqrt{\tan \varphi }} = \frac{f_1(x')}{f_2(x')}\), which means \(\varphi = \arctan \left( \frac{1}{q} \cdot \left( s \cdot \tan \gamma + \sqrt{1+ s^2 \cdot \left( \tan \gamma \right) ^2}\right) \right) \) for \(q = \frac{f_1(x')}{f_2(x')}\) and \(s = \frac{1}{2} \cdot \left( q + \frac{1}{q}\right) \). Then \(X_Q\) contains a solution x that is \(\alpha \)-approximate for the weighted max-ordering scalarization of \(I_\gamma ^\varphi \) with weights \(w_1 = \frac{\sqrt{\sin \varphi }}{\sqrt{\cos \varphi '}} + \frac{\sqrt{\cos \varphi }}{\sqrt{\sin \varphi '}}\) and \(w_2 = \frac{\sqrt{\sin \varphi '}}{\sqrt{\cos \varphi }} + \frac{\sqrt{\cos \varphi '}}{\sqrt{\sin \varphi }}\). Proposition 3.2 states that \(x'\) is \(\left( \alpha \cdot \left( 1+\sqrt{\tan \varphi } \cdot \sqrt{\tan \varphi '}\right) \right) \)-approximated by x. Thus, by Lemma 3.4, \(x'\) is also \(\left( \alpha \cdot (1+ \tan {\bar{\varphi }}_\gamma )\right) \)-approximated by x. \(\square \)
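The explicit formula for \(\varphi \) used in the proof can be checked numerically against condition (1), assuming \(\varphi ' = \gamma - \frac{\pi }{2} - \varphi \):

```python
import math

def chosen_phi(gamma, q):
    # Angle from the proof of Theorem 3.2; q plays the role of f1(x')/f2(x').
    s = 0.5 * (q + 1.0 / q)
    return math.atan((s * math.tan(gamma)
                      + math.sqrt(1 + s ** 2 * math.tan(gamma) ** 2)) / q)

# Condition (1): sqrt(tan(phi')) / sqrt(tan(phi)) = f1(x')/f2(x') = q.
def ratio(gamma, q):
    phi = chosen_phi(gamma, q)
    phi_p = gamma - math.pi / 2 - phi  # assumed relation
    return math.sqrt(math.tan(phi_p)) / math.sqrt(math.tan(phi))

condition_1_holds = all(
    abs(ratio(gamma, q) - q) <= 1e-8 * q
    for gamma in (0.6 * math.pi, 0.75 * math.pi, 0.9 * math.pi)
    for q in (0.25, 1.0, 4.0)
)
```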

Note that one can obtain Theorem 3.1 by setting \(\gamma = \pi \) and \(\alpha = 1\) in Theorem 3.2. Thus, Theorem 3.2 is indeed a generalization of Theorem 3.1.

The following corollary collects several alternative formulas expressing the approximation factor \((\alpha \cdot (1+\tan {\bar{\varphi }}_\gamma ))\) obtained in Theorem 3.2. Its proof is given in Appendix B.

Corollary 3.1

The set \(X_Q\) from Theorem 3.2 is an \(\left( \alpha \cdot (1+ S)\right) \)-approximation, where

$$\begin{aligned} S = \tan \left( \frac{\gamma - \frac{\pi }{2}}{2}\right) = \frac{1 -\sin \gamma }{-\cos \gamma } = \frac{- \cos \gamma }{1+\sin \gamma } = \tan \gamma + \sqrt{1+ (\tan \gamma )^2}. \end{aligned}$$
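These equalities can be verified numerically for \(\gamma \in \left( \frac{\pi }{2}, \pi \right) \):

```python
import math

def S_values(gamma):
    # The four expressions for S from Corollary 3.1.
    return (math.tan((gamma - math.pi / 2) / 2),
            (1 - math.sin(gamma)) / (-math.cos(gamma)),
            -math.cos(gamma) / (1 + math.sin(gamma)),
            math.tan(gamma) + math.sqrt(1 + math.tan(gamma) ** 2))

formulas_agree = all(
    max(vals) - min(vals) <= 1e-9
    for gamma in (0.55 * math.pi, 0.75 * math.pi, 0.99 * math.pi)
    for vals in (S_values(gamma),)
)
```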

Theorem 3.2 yields the following corollary. It provides the approximation factor achievable by the set of \(\gamma \)-supported solutions in a biobjective minimization problem instance for any inner angle \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \). Of course, the set of \(\frac{\pi }{2}\)-supported solutions, i.e., the efficient set, is a 1-approximation and the set of (\(\pi \)-) supported solutions is a 2-approximation. In between \(\frac{\pi }{2}\) and \(\pi \), the approximation factor is continuous and strictly increasing in \(\gamma \). See Fig. 5 for an illustration.

Corollary 3.2

For any biobjective minimization problem instance and any \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), the set of \(\gamma \)-supported solutions is a \((1+\tan {\bar{\varphi }}_\gamma )\)-approximation, where \({\bar{\varphi }}_\gamma = \frac{\gamma }{2} - \frac{\pi }{4}\).

Proof

For \(\gamma = \frac{\pi }{2}\), the claim is obviously true as the set of efficient solutions is a 1-approximation. For \(\gamma \in \left( \frac{\pi }{2}, \pi \right] \), we know that, for any \(\varphi \in \left( 0, \gamma - \frac{\pi }{2}\right) \) and any weighted max-ordering scalarization of \(I_\gamma ^\varphi \), there exists a solution that is optimal for both the weighted max-ordering scalarization of \(I_\gamma ^\varphi \) and for \(I_\gamma ^\varphi \) itself, and is therefore also \(\gamma \)-supported. Thus, the set of \(\gamma \)-supported solutions contains an optimal solution for any weighted max-ordering scalarization of \(I_\gamma ^\varphi \) for any \(\varphi \in \left( 0, \gamma - \frac{\pi }{2}\right) \). The claim follows from Theorem 3.2 setting \(\alpha = 1\). \(\square \)

Fig. 5
figure 5

Approximation factor achieved by the set of \(\gamma \)-supported solutions due to Corollary 3.2 (solid) and Corollary 3.3 (dashed)

Figure 5 shows that the increase of the approximation factor achieved by the set of \(\gamma \)-supported solutions is quite close to linear in \(\gamma \); in fact, the factor is slightly convex as a function of \(\gamma \). Thus, a reasonable rule of thumb is that the percentage of the way from \(\frac{\pi }{2}\) to \(\pi \) at which the angle \(\gamma \) lies equals the percentage of approximation accuracy that is lost by the set of \(\gamma \)-supported solutions compared to the efficient set. The next corollary formalizes this rule of thumb.

Corollary 3.3

For any biobjective minimization problem instance and any \(\gamma \in \left[ \frac{\pi }{2}, \pi \right] \), the set of \(\gamma \)-supported solutions is a \(\frac{2\gamma }{\pi }\)-approximation.

Proof

Note that, since \(\tan \) is a convex function on \([0,\frac{\pi }{4}]\), where \(\tan 0 = 0\) and \(\tan \frac{\pi }{4} = 1\), we have

$$\begin{aligned} \tan {\bar{\varphi }}_\gamma = \tan \left( \frac{4 \cdot {\bar{\varphi }}_\gamma }{\pi } \cdot \frac{\pi }{4} \right) \le \frac{4 \cdot {\bar{\varphi }}_\gamma }{\pi } \cdot \tan \frac{\pi }{4} = \frac{4 \cdot {\bar{\varphi }}_\gamma }{\pi }. \end{aligned}$$

Thus, by Corollary 3.2, the set of \(\gamma \)-supported solutions is a \(\left( 1 +\frac{4 \cdot {\bar{\varphi }}_\gamma }{\pi }\right) \)-approximation, where \(1 +\frac{4 \cdot {\bar{\varphi }}_\gamma }{\pi } = 1 + \frac{2 \gamma - \pi }{\pi } = \frac{2\gamma }{\pi }\). \(\square \)
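A numerical spot check of the convexity bound used in the proof, i.e., \(1 + \tan {\bar{\varphi }}_\gamma \le \frac{2\gamma }{\pi }\) on \(\left[ \frac{\pi }{2}, \pi \right] \):

```python
import math

# The bound holds with equality at gamma = pi/2 and gamma = pi;
# a tiny tolerance absorbs floating-point error at these endpoints.
bound_holds = all(
    1 + math.tan(g / 2 - math.pi / 4) <= 2 * g / math.pi + 1e-12
    for g in [math.pi / 2 + k * (math.pi / 2) / 100 for k in range(101)]
)
```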

The following proposition states that Theorem 3.2 and Corollary 3.2 are tight in the sense that, for any inner angle \(\gamma \) (including the cases \(\gamma = \frac{\pi }{2}\) and \(\gamma = \pi \)), no better approximation guarantee than the one provided is achievable by approximations with respect to \(\leqq _\gamma ^\varphi \) for all \(\varphi \in [0,\gamma -\frac{\pi }{2}]{\setminus } \{\gamma - \pi , \frac{\pi }{2}\}\).

Proposition 3.3

For any \(\gamma \in [\frac{\pi }{2},\pi ]\), any \(\alpha \ge 1\), and any \(\varepsilon > 0\), there exists an instance \(I = (X,f)\) of a biobjective minimization problem for which a set that is an \(\alpha \)-approximation with respect to \(\leqq _\gamma ^\varphi \) for all \(\varphi \in [0,\gamma -\frac{\pi }{2}]{\setminus } \{\gamma - \pi , \frac{\pi }{2}\}\) is not an \(\left( \alpha \cdot (1+\tan {\bar{\varphi }}_\gamma ) - \varepsilon \right) \)-approximation with respect to \(\leqq \).

Proof

Define \(\varepsilon ' > 0\) such that \(\varepsilon ' < \min \{\frac{\varepsilon }{\alpha },1\}\). Consider the following instance I, which is illustrated in Fig. 6: Let the feasible set consist of exactly three solutions, \(x_1,x_2,x_3\) such that

$$\begin{aligned} \begin{array}{ll} f_1(x_1) = \alpha \cdot \left( 1+ (1-\varepsilon ') \cdot \tan {\bar{\varphi }}_\gamma \right) ,&{}\qquad f_2(x_1) = \alpha \cdot \varepsilon ', \\ f_1(x_2) = \alpha \cdot \varepsilon ', &{}\qquad f_2(x_2) = \alpha \cdot \left( 1+ (1-\varepsilon ') \cdot \tan {\bar{\varphi }}_\gamma \right) ,\\ f_1(x_3) = 1, &{}\qquad f_2(x_3) = 1. \end{array} \end{aligned}$$

Then \(\{x_1,x_2\}\) is an \(\alpha \)-approximation with respect to \(\leqq _\gamma ^\varphi \) for all \(\varphi \in [0,\gamma -\frac{\pi }{2}]{\setminus } \{\gamma - \pi , \frac{\pi }{2}\}\): For \(\varphi \le {\bar{\varphi }}_\gamma \), we have \(\varphi '\ge {\bar{\varphi }}_\gamma \) and, therefore, \(\tan \varphi \le \tan {\bar{\varphi }}_\gamma \le \tan \varphi '\). We can compute

$$\begin{aligned} \cos \varphi '\cdot f_1(x_1) + \sin \varphi '\cdot f_2(x_1)&= \cos \varphi '\cdot \alpha \cdot \left( 1+ (1-\varepsilon ') \cdot \tan {\bar{\varphi }}_\gamma \right) + \sin \varphi '\cdot \alpha \cdot \varepsilon '\\&\le \cos \varphi '\cdot \alpha \cdot \left( 1+ (1-\varepsilon ') \cdot \tan \varphi '\right) + \sin \varphi '\cdot \alpha \cdot \varepsilon '\\&= \alpha \cdot \left( \cos \varphi '+ \sin \varphi '\right) \\&= \alpha \cdot \left( \cos \varphi '\cdot f_1(x_3) + \sin \varphi '\cdot f_2(x_3) \right) \end{aligned}$$

and, since \({\bar{\varphi }}_\gamma \le \frac{\pi }{4}\) and, therefore, \(\tan {\bar{\varphi }}_\gamma \le \tan \frac{\pi }{4} = 1\),

$$\begin{aligned} \sin \varphi \cdot f_1(x_1) + \cos \varphi \cdot f_2(x_1)&= \sin \varphi \cdot \alpha \cdot \left( 1+ (1-\varepsilon ') \cdot \tan {\bar{\varphi }}_\gamma \right) + \cos \varphi \cdot \alpha \cdot \varepsilon '\\&\le \sin \varphi \cdot \alpha \cdot \left( 1+ (1-\varepsilon ') \cdot \frac{1}{\tan {\bar{\varphi }}_\gamma } \right) + \cos \varphi \cdot \alpha \cdot \varepsilon '\\&\le \sin \varphi \cdot \alpha \cdot \left( 1+ (1-\varepsilon ') \cdot \frac{1}{\tan \varphi } \right) + \cos \varphi \cdot \alpha \cdot \varepsilon '\\&= \alpha \cdot \left( \sin \varphi + \cos \varphi \right) \\&= \alpha \cdot \left( \sin \varphi \cdot f_1(x_3) + \cos \varphi \cdot f_2(x_3) \right) . \end{aligned}$$

Thus, for \(\varphi \le {\bar{\varphi }}_\gamma \), \(x_3\) is \(\alpha \)-approximated by \(x_1\) with respect to \(\leqq _\gamma ^\varphi \). Similarly, we can prove that, for \(\varphi \ge {\bar{\varphi }}_\gamma \), \(x_3\) is \(\alpha \)-approximated by \(x_2\) with respect to \(\leqq _\gamma ^\varphi \).

However, \(\{x_1,x_2\}\) is not an \(\left( \alpha \cdot \left( 1+\tan {\bar{\varphi }}_\gamma \right) - \varepsilon \right) \)-approximation (with respect to \(\leqq \)): We have \(\tan {\bar{\varphi }}_\gamma \le 1\) and, thus,

$$\begin{aligned} \left( \alpha \cdot \left( 1+\tan {\bar{\varphi }}_\gamma \right) - \varepsilon \right) \cdot f_1(x_3)&< \alpha \cdot \left( 1+ \tan {\bar{\varphi }}_\gamma - \varepsilon '\right) \\&\le \alpha \cdot \left( 1+ \tan {\bar{\varphi }}_\gamma - \varepsilon ' \cdot \tan {\bar{\varphi }}_\gamma \right) \\&= f_1(x_1). \end{aligned}$$

Similarly, we have

$$\begin{aligned} \left( \alpha \cdot (1+\tan {\bar{\varphi }}_\gamma ) - \varepsilon \right) \cdot f_2(x_3) < f_2(x_2). \end{aligned}$$

Thus, \(x_3\) is not \(\left( \alpha \cdot \left( 1+\tan {\bar{\varphi }}_\gamma \right) - \varepsilon \right) \)-approximated. \(\square \)
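The construction can be verified numerically for concrete parameters. The sketch below uses \(\gamma = \frac{3\pi }{4}\), \(\alpha = 2\), \(\varepsilon = \frac{1}{2}\), and \(\varepsilon ' = \frac{1}{8} < \min \{\frac{\varepsilon }{\alpha }, 1\}\), with \(\varphi ' = \gamma - \frac{\pi }{2} - \varphi \) and the objectives of \(I_\gamma ^\varphi \) assumed as before:

```python
import math

gamma, alpha, eps, eps_p = 3 * math.pi / 4, 2.0, 0.5, 0.125
t = math.tan(gamma / 2 - math.pi / 4)  # tan of phi_bar_gamma
f = {"x1": (alpha * (1 + (1 - eps_p) * t), alpha * eps_p),
     "x2": (alpha * eps_p, alpha * (1 + (1 - eps_p) * t)),
     "x3": (1.0, 1.0)}

def transformed(y, phi):
    phi_p = gamma - math.pi / 2 - phi  # assumed relation
    return (math.cos(phi_p) * y[0] + math.sin(phi_p) * y[1],
            math.sin(phi) * y[0] + math.cos(phi) * y[1])

# {x1, x2} alpha-approximates x3 w.r.t. <=_gamma^phi at sampled phi
# (tiny tolerance: equality is attained at phi = phi_bar_gamma):
covered = all(
    any(all(a <= alpha * b + 1e-9
            for a, b in zip(transformed(f[x], phi), transformed(f["x3"], phi)))
        for x in ("x1", "x2"))
    for phi in [k / 20 * (gamma - math.pi / 2) for k in range(21)]
)
# ... but not with factor alpha * (1 + tan(phi_bar_gamma)) - eps componentwise:
factor = alpha * (1 + t) - eps
not_covered = not any(
    all(a <= factor * b for a, b in zip(f[x], f["x3"])) for x in ("x1", "x2"))
```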

Fig. 6
figure 6

Illustration of the instance I constructed in the proof of Proposition 3.3. The shaded region is \(\alpha \)-approximated by \(x_1\) or \(x_2\) with respect to \(\leqq _\gamma ^\varphi \) for \(\varphi = {\bar{\varphi }}_\gamma \). It is easy to see that for \(\varphi \le {\bar{\varphi }}_\gamma \) (i.e., if the dominance cones are rotated counterclockwise in the picture), \(x_3\) is \(\alpha \)-approximated by \(x_1\) with respect to \(\leqq _\gamma ^\varphi \), and, for \(\varphi \ge {\bar{\varphi }}_\gamma \) (i.e., if the dominance cones are rotated clockwise), \(x_3\) is \(\alpha \)-approximated by \(x_2\) with respect to \(\leqq _\gamma ^\varphi \). Thus, \(\{x_1,x_2\}\) is an \(\alpha \)-approximation with respect to \(\leqq _\gamma ^\varphi \) for any \(\varphi \in \left[ 0,\gamma - \frac{\pi }{2}\right] {\setminus } \left\{ \gamma - \pi , \frac{\pi }{2}\right\} \)

4 Structural results for maximization problems

In this section, we investigate whether the results obtained in Sect. 3 can be transferred to the case of maximization. It is known that obtaining approximations using the weighted sum scalarization is more challenging for maximization problems than for minimization problems since the set of supported solutions does not yield any approximation guarantee in general [3]. We will see that this is also the case when using general ordering cones to obtain approximations. In contrast to the case of minimization problems, where the approximation guarantee that is achieved by the set of \(\gamma \)-supported solutions increases continuously when \(\gamma \) is increased between \(\frac{\pi }{2}\) and \(\pi \), the set of \(\gamma \)-supported solutions does not yield any approximation guarantee for any \(\gamma > \frac{\pi }{2}\) in the case of maximization problems in general.

In this section, instead of the assumption that the set \(f(X) + {\mathbb {R}}^p_\geqq \) is closed, we assume that \(f(X) - {\mathbb {R}}^p_\geqq \) is closed and that f(X) is bounded. The additional assumption of f(X) being bounded ensures external stability, i.e., that, also for maximization problem instances, for any feasible solution \(x \in X\) that is dominated by another feasible solution \(x' \in X\), there also exists an efficient solution \(x'' \in X_E\) dominating x. All other underlying concepts in this section are analogous to the corresponding concepts for minimization problems introduced in Sect. 2.

Observation 1 transfers directly to the case of maximization. However, results similar to those of Sect. 3 do not hold for maximization: the set of \(\gamma \)-supported solutions does not yield any approximation guarantee in general.

Theorem 4.1

For any \(\gamma \in (\frac{\pi }{2},\pi ]\) and any \(\alpha \ge 1\), there exists an instance I of a biobjective maximization problem where the set of \(\gamma \)-supported solutions is not an \(\alpha \)-approximation.

Proof

For \(\gamma \in \left( \frac{\pi }{2}, \pi \right] \) and \(\alpha \ge 1\), define the following instance of a biobjective maximization problem (see also Fig. 7): Let the feasible set consist of exactly three solutions, \(x_1,x_2,x_3\) such that \(f_1(x_1) = 1\), \(f_2(x_1) = \alpha + 2 + \frac{1}{\tan {\bar{\varphi }}_\gamma } \cdot \alpha \), \(f_1(x_2) = \alpha +2 + \frac{1}{\tan {\bar{\varphi }}_\gamma } \cdot \alpha \), \(f_2(x_2) = 1\), \(f_1(x_3) = \alpha + 1\), and \(f_2(x_3) = \alpha + 1\). Then \(x_3\) is not \(\gamma \)-supported: If \(\varphi \le {\bar{\varphi }}_\gamma \), we have \(\tan {\bar{\varphi }}_\gamma \le \tan \varphi '\) and, therefore,

$$\begin{aligned} \cos \varphi '\cdot f_1(x_1) + \sin \varphi '\cdot f_2(x_1)&= \cos \varphi '+ \sin \varphi '\cdot \alpha + 2 \cdot \sin \varphi '+ \frac{\sin \varphi '}{\tan {\bar{\varphi }}_\gamma }\cdot \alpha \\&> \cos \varphi '+ \sin \varphi '\cdot \alpha + \sin \varphi '+ \frac{\sin \varphi '}{\tan {\bar{\varphi }}_\gamma }\cdot \alpha \\&\ge \cos \varphi '+ \sin \varphi '\cdot \alpha + \sin \varphi '+ \cos \varphi '\cdot \alpha \\&= \cos \varphi '\cdot f_1(x_3) + \sin \varphi '\cdot f_2(x_3). \end{aligned}$$

Moreover, we have \(\tan \varphi \le \frac{1}{\tan \varphi '} \le \frac{1}{\tan {\bar{\varphi }}_\gamma }\) by Lemma 3.4, which implies that

$$\begin{aligned} \sin \varphi \cdot f_1(x_1) + \cos \varphi \cdot f_2(x_1)&= \sin \varphi + \cos \varphi \cdot \alpha + 2 \cdot \cos \varphi + \frac{\cos \varphi }{\tan {\bar{\varphi }}_\gamma }\cdot \alpha \\&> \sin \varphi + \cos \varphi \cdot \alpha + \cos \varphi + \frac{\cos \varphi }{\tan {\bar{\varphi }}_\gamma }\cdot \alpha \\&\ge \sin \varphi + \cos \varphi \cdot \alpha + \cos \varphi + \sin \varphi \cdot \alpha \\&= \sin \varphi \cdot f_1(x_3) + \cos \varphi \cdot f_2(x_3). \end{aligned}$$

Thus, \(x_3\) is dominated by \(x_1\) in \(I_\gamma ^\varphi \). Similarly, if \(\varphi \ge {\bar{\varphi }}_\gamma \), the solution \(x_3\) is dominated by \(x_2\) in \(I_\gamma ^\varphi \). On the other hand, \(\{x_1,x_2\}\) is obviously not an \(\alpha \)-approximation. \(\square \)

Fig. 7

Illustration of the maximization problem instance I constructed in the proof of Theorem 4.1. The dominance cones of \(x_1\) and \(x_2\) with respect to \(\geqq _\gamma ^\varphi \) are illustrated for \(\varphi = {\bar{\varphi }}_\gamma \). It is easy to see that \(x_3\) is dominated by \(x_1\) for \(\varphi \le {\bar{\varphi }}_\gamma \) (if the dominance cones are rotated counterclockwise) and by \(x_2\) for \(\varphi \ge {\bar{\varphi }}_\gamma \) (if the dominance cones are rotated clockwise). Thus, \(x_3\) is not \(\gamma \)-supported. However, \(x_3\) is not \(\alpha \)-approximated by \(x_1\) or by \(x_2\) in I.
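As a numerical sanity check, the instance constructed in the proof of Theorem 4.1 can be verified directly. The sketch below is an illustration, not part of the article: the angle \({\bar{\varphi }}_\gamma \) is defined in Sect. 3, which is not reproduced here, so the midpoint angle \((\gamma - \frac{\pi }{2})/2\) is used as a hypothetical stand-in (any angle in \((0, \frac{\pi }{4}]\) exhibits the same behavior). The code checks that \(x_1\) strictly beats \(x_3\) in the weighted sum with weights \((\cos {\bar{\varphi }}_\gamma , \sin {\bar{\varphi }}_\gamma )\), and that neither \(x_1\) nor \(x_2\) \(\alpha \)-approximates \(x_3\).

```python
import math

def instance(gamma, alpha):
    """Build the three image points from the proof of Theorem 4.1.

    phi_bar is only assumed to lie in (0, pi/2); we use the midpoint
    angle (gamma - pi/2)/2 as a hypothetical stand-in for the value
    defined in Sect. 3 of the article.
    """
    phi_bar = (gamma - math.pi / 2) / 2
    t = math.tan(phi_bar)
    x1 = (1.0, alpha + 2 + alpha / t)
    x2 = (alpha + 2 + alpha / t, 1.0)  # mirror image of x1
    x3 = (alpha + 1.0, alpha + 1.0)
    return x1, x2, x3, phi_bar

def weighted(phi, y):
    """Weighted sum with weights (cos phi, sin phi)."""
    return math.cos(phi) * y[0] + math.sin(phi) * y[1]

def alpha_approximates(alpha, y, z):
    """Maximization: y alpha-approximates z iff alpha * y_i >= z_i for all i."""
    return all(alpha * yi >= zi for yi, zi in zip(y, z))

if __name__ == "__main__":
    gamma, alpha = 3 * math.pi / 4, 2.0
    x1, x2, x3, phi_bar = instance(gamma, alpha)
    # x1 dominates x3 in the weighted sum at phi = phi_bar
    # (x2 does so with the weights swapped, by symmetry).
    assert weighted(phi_bar, x1) > weighted(phi_bar, x3)
    # Neither x1 nor x2 alpha-approximates x3, since
    # alpha * f_1(x1) = alpha < alpha + 1 = f_1(x3), and symmetrically for x2.
    assert not alpha_approximates(alpha, x1, x3)
    assert not alpha_approximates(alpha, x2, x3)
    print("instance behaves as claimed")
```

Varying gamma over \((\frac{\pi }{2}, \pi )\) and alpha over \([1, \infty )\) in this sketch illustrates that the gap persists for every choice of parameters, matching the statement of the theorem.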

5 Conclusions and additional notes

This article studies approximation properties of general ordering cones containing the Pareto cone for biobjective minimization problems. As expected, considering the set of optimal solutions (or an approximation thereof) with respect to a single ordering cone does not suffice to achieve an approximation guarantee in the classical sense. Instead, we classify ordering cones by their inner angle \(\gamma \) and consider sets that are optimal (or approximately optimal) with respect to all closed convex ordering cones of inner angle \(\gamma \) simultaneously. These sets do achieve an approximation guarantee, which depends on \(\gamma \). We introduce the concept of \(\gamma \)-supportedness to describe solutions that are optimal with respect to at least one ordering cone of inner angle \(\gamma \). Since this concept incorporates both efficiency and supportedness as special cases, our results generalize the fact that the efficient set is a 1-approximation as well as known results about the approximation quality achievable by the set of supported solutions. Our results are best possible in the sense that, for any inner angle \(\gamma \in [\frac{\pi }{2}, \pi ]\), better approximation guarantees than the ones shown are not achievable in general.

Designing (polynomial-time) approximation algorithms based on general ordering cones (other than weighted sum scalarizations) is possible but presents further challenges, since the resulting problems remain biobjective. Moreover, when attempting to compute, e.g., \(\gamma \)-supported solutions via the definition of \(\gamma \)-supportedness, all values for \(\varphi \) from the continuous set \([0,\gamma -\frac{\pi }{2}] \setminus \{\gamma - \pi , \frac{\pi }{2}\}\) have to be considered. Finally, the fact that the matrix describing the linear mapping \(T_\gamma ^\varphi \) typically contains irrational entries constitutes an additional obstacle for algorithmic applications of the presented concepts.

An interesting direction for future research is the generalization of the presented results to general ordering cones in more than two objectives. The equivalence between closed convex cones containing \({\mathbb {R}}_\geqq \) and closed vector preorders satisfying the Pareto axiom also holds for the more general case of three or more objectives. Also, most of the definitions and observations stated in Sect. 2 easily transfer to the case of more than two objectives. For details, we refer to [19].

Moreover, Proposition 3.1 can easily be generalized to \(p \ge 3\) objectives: For any closed vector preorder \(\preceq \) on \({\mathbb {R}}^p\) satisfying the Pareto axiom (except for \(\leqq \)) and any \(\alpha \ge 1\), there exists a p-objective minimization problem instance where the set of optimal solutions with respect to \(\preceq \) is not an \(\alpha \)-approximation with respect to \(\leqq \).

However, since, in three or more dimensions, a general closed convex cone cannot be described by a finite number of scalar parameters, generalizing the positive results from Sect. 3 is far from straightforward. One way to simplify the situation is to restrict attention to polyhedral cones, i.e., cones that can be obtained from the nonnegative orthant via a linear mapping. Nevertheless, even then, it is not obvious how to generalize the concept of \(\gamma \)-supportedness, as a polyhedral cone in three or more dimensions does not possess an unambiguous inner angle.