1 Introduction

Multiobjective optimization has gained substantial and increasing attention in the optimization literature. This can partly be explained by the prevalence of multiple, conflicting objectives in practical applications, see e.g. [20] for a prominent example in cancer radiation treatment. From a theoretical point of view, multiobjective optimization is not only an interesting research area in its own right but also worth studying due to its connections to, among others, game theory, inverse optimization, and robust optimization, see [7, 10, 11].

A multiobjective optimization problem (MOP) with p objectives, \(p \in {\mathbb {N}}, p\ge 2\), can be stated as

$$\begin{aligned} \min \ f(x) = \min \ {}&(f_1(x), \dots , f_p(x))^\top \\ \text {s.t.} \quad&x \in X, \end{aligned}$$
(MOP)

where \(X \subseteq {\mathbb {R}}^n\), for \(n\in {\mathbb {N}}\), is called the feasible set, and \(f = (f_1, \dots , f_p)^\top : {\mathbb {R}}^n \rightarrow {\mathbb {R}}^p\) is the (vector-valued) objective function. A multiobjective discrete optimization problem (MODO) can be stated as an (MOP) with the additional restriction that X is a finite set. We denote by \(Y {:}{=}f(X) {:}{=}\{ y \in {\mathbb {R}}^p: y = f(x),\ x \in X \}\) the set of images and call \({\mathbb {R}}^n\) and \({\mathbb {R}}^p\) the decision space and the image space, respectively.

For images \(y,{\bar{y}} \in {\mathbb {R}}^p\), the weak componentwise ordering is defined by \( y \leqq {\bar{y}}\) if and only if \(y_i \le {\bar{y}}_i\) for all \(i = 1, \dots , p\), the componentwise ordering is defined by \(y \le {\bar{y}}\) if and only if \(y \leqq {\bar{y}}\) and \(y \ne {\bar{y}}\), and the strict componentwise ordering is defined by \(y < {\bar{y}}\) if and only if \(y_i < {\bar{y}}_i\) for all \(i = 1, \dots , p\). Further, the nonnegative orthant is defined by \({\mathbb {R}}^p_{\geqq } {:}{=}\{ y \in {\mathbb {R}}^p: y \geqq 0 \}\). The sets \({\mathbb {R}}^p_{\ge }\) and \({\mathbb {R}}^p_{>}\) are defined analogously. Then, a feasible solution x dominates another feasible solution \(x'\) if and only if \(f(x) \le f(x')\). A feasible solution \(x^* \in X\) is efficient (weakly efficient) if there does not exist another feasible solution \(x \in X\) such that \(f(x) \le f(x^*)\) (\(f(x) < f(x^*)\)). We call an image \(y = f(x)\) (weakly) nondominated if x is (weakly) efficient and denote by \(Y_N\) (\(Y_{wN}\)) the set of (weakly) nondominated images. For a more detailed and thorough introduction to multiobjective optimization, we refer to [15].
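Since X, and hence Y, is finite for a MODO, the nondominated set can in principle be determined by pairwise comparisons. The following minimal Python sketch (the image set `Y` is hypothetical illustration data) filters a finite image set according to the definitions above.

```python
def dominates(y, ybar):
    """y is componentwise less than or equal to ybar and y != ybar."""
    return all(a <= b for a, b in zip(y, ybar)) and y != ybar

def nondominated(Y):
    """Return Y_N, the nondominated images of a finite image set Y."""
    return [y for y in Y if not any(dominates(yb, y) for yb in Y)]

# hypothetical image set; (9, 9, 9) is dominated, e.g., by (7, 7, 5)
Y = [(2, 6, 8), (8, 2, 4), (4, 8, 2), (6, 4, 6), (7, 7, 5), (9, 9, 9)]
print(nondominated(Y))  # all images except (9, 9, 9)
```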

A scalarization systematically transforms an MOP into a single-objective problem using additional parameters, such as weights or reference points. In this context, three questions are of major interest: Is an optimal solution of the scalarized problem guaranteed to be (weakly) efficient? Conversely, can every efficient solution be obtained as an optimal solution of a scalarized problem? If so, how must the parameters be chosen to obtain a specific efficient solution?

The well-known weighted sum scalarization chooses a non-negative weight \(\lambda _i\ge 0\) for each objective function and solves the problem \(\min _{y \in Y} \{ \lambda ^\top y\}\), see [35]. The image of every optimal solution of this scalar-valued problem is a (weakly) nondominated image of the original problem if \(\lambda \in {\mathbb {R}}^p_{>}\) (\(\lambda \in {\mathbb {R}}^p_{\ge }\)) [18]. By varying the weights, other nondominated images can be found. Weighted sum scalarizations yield so-called supported nondominated images. These are located on the convex hull of the set of images and, in general, are a strict subset of \(Y_N\). Those nondominated images that are also extreme points of the convex hull of Y are called extreme supported nondominated images.

The basic idea of the weight set decomposition is quite intuitive and has been explored extensively for the weighted sum scalarization [2, 6, 19, 26, 28]. Each nondominated image has an associated weight set component, i.e., the set of all weight vectors for which the weighted sum scalarization yields this nondominated image. The weight set decomposition is usually taken to be a (minimal) collection of weight set components that covers the weight set, i.e., the set of all eligible weights.

Weight set components offer decision makers additional insight into the nondominated set, and can be particularly useful for three or more objectives, when visualization of the nondominated set is difficult. For example, a weight set component with a comparatively large volume corresponds to a nondominated image that is more ‘robust’ with respect to changes in the preferences for the single objectives. The intersections of weight set components also embody the adjacency structure of the supported nondominated images: two supported nondominated images are adjacent if the dimension of the intersection of their weight set components is equal to the dimension of the weight set minus one. In addition to their value to decision makers, the construction of weight set components may also form an integral part of algorithms for generating sets of nondominated images or approximations thereof. The adjacency structure can be especially helpful in the design of interactive methods [2, 19, 28].

However, the existence of unsupported nondominated images, i.e., images that are nondominated but not supported, and the fact that corresponding solutions cannot be computed by the weighted sum scalarization limit the applicability of this particular scalarization. This motivates the weighted Tchebycheff scalarization, which does not suffer from this shortcoming.

Let \(s \in {\mathbb {R}}^p\) be a reference point, \(\lambda \in {\mathbb {R}}^p_\ge \) a weight vector, and \(\Vert y \Vert ^{\lambda }_\infty {:}{=}\max _{i = 1, \dots , p} \{ |\lambda _i \ y_i|\}\) the max-norm on \({\mathbb {R}}^p\). Then, the weighted Tchebycheff scalarization can be stated as

$$\begin{aligned} \min \ \Vert f(x) - s \Vert ^{\lambda }_\infty \quad \text {s.t.} \quad x \in X. \end{aligned}$$
(\(\Pi ^{TS} (\lambda )\))

Typically, the reference point is chosen to be the ideal point \(y_i^I {:}{=}\min _{x \in X} f_i(x), \ i =1, \dots , p,\) or to be some utopia point \(y^U < y^I\). For weights \(\lambda \in {\mathbb {R}}^p_>\) and reference points \(s \leqq y^I\), every optimal solution to \(\Pi ^{TS} (\lambda )\) is weakly efficient for an MODO. If the solution is unique, it is efficient [32]. Conversely, each nondominated image is indeed optimal for a weighted Tchebycheff scalarization problem with appropriately chosen weights [32].
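The contrast between the two scalarizations can be made concrete with a small computation. The following Python sketch (the biobjective image set, the reference point \(s = (0,0)\), and the weight grid are hypothetical choices for illustration) determines by brute force which images are optimal for some weighted sum and which for some weighted Tchebycheff scalarization; the unsupported image \((4,4)^\top \) is reached only by the latter.

```python
def wsum(lmbda, y):
    return sum(l * yi for l, yi in zip(lmbda, y))

def tcheby(lmbda, y, s=(0, 0)):
    return max(l * abs(yi - si) for l, yi, si in zip(lmbda, y, s))

# hypothetical image set: (4, 4) is nondominated but unsupported
Y = [(1, 5), (4, 4), (5, 1)]
grid = [(k / 100, 1 - k / 100) for k in range(101)]

ws_opt = {min(Y, key=lambda y: wsum(lam, y)) for lam in grid}
tc_opt = {min(Y, key=lambda y: tcheby(lam, y)) for lam in grid}
print(ws_opt)  # {(1, 5), (5, 1)}: (4, 4) is never weighted sum optimal
print(tc_opt)  # contains (4, 4), e.g., for lam = (0.5, 0.5)
```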

In this work, we provide a first rigorous and comprehensive theory of the weighted Tchebycheff weight set components: we analyze the polyhedral and combinatorial structure of these sets and provide an adjacency concept for nondominated images.

1.1 Related work

The Tchebycheff norm was introduced for biobjective optimization problems by Geoffrion in 1967 [17]. Bowman [8] and Wierzbicki [33], among others, suggest using the (weighted) Tchebycheff norm to find nondominated images of MOPs, even for nonlinear objective functions. To avoid weakly nondominated images that are not nondominated, several modifications have been introduced: the lexicographic weighted Tchebycheff scalarization [32] chooses, among all optimal images, one with minimal 1-norm. The augmented weighted Tchebycheff norm [32] adds the 1-norm, scaled with a small parameter, to the objective function. The modified augmented weighted Tchebycheff norm [14, 21] also uses weights in the augmentation term.

Since the distance to the reference point may provide useful information in the optimization process, many applications of these Tchebycheff scalarization techniques can be found in the context of interactive approaches, see [25] for an overview. For example, Steuer and Choo [32] utilize the (augmented) weighted Tchebycheff scalarization, while Luque et al. [24] develop such an approach for solving convex multiobjective programs using the lexicographic weighted Tchebycheff scalarization. For multiobjective mixed integer linear programming, Alves and Clímaco [1] combine a branch-and-bound approach with adjustments of the reference point, employing the augmented weighted Tchebycheff scalarization.

Weight set decomposition methods for the weighted sum scalarization date back to the work of Yu and Zeleny in 1975 [34], who introduce a generalized simplex method and link basic efficient solutions with the set of weights in the polyhedral cone defined by the corresponding basis matrix. For biobjective problems, the well-known dichotomic search approach [3, 12] in fact calculates all extreme supported nondominated images. Benson and Sun [4, 5] extend this idea and establish a link between extreme supported nondominated images of a multiobjective linear optimization problem and a partitioning of the weight set.

Przybylski et al. [28] adapt this technique to multiobjective integer programs. They state fundamental properties concerning the weight set components: Each weight set component \(\Lambda ^{WS}(y)\) of an image y is a polytope, and knowing all extreme supported nondominated images is sufficient for its calculation. A weight set component has dimension equal to the dimension of the weight set if and only if the corresponding image is an extreme supported nondominated one, which implies that the set of extreme supported nondominated images is sufficient and necessary to cover the whole weight set. Further, two weight set components intersect in common faces only. That is, there exists a face F of \(\Lambda ^{WS}(y)\) and a face \(F'\) of \(\Lambda ^{WS}(y')\) such that \(F = F' = \Lambda ^{WS}(y) \cap \Lambda ^{WS}(y')\). Based on this symmetry, two extreme supported nondominated images are defined to be adjacent if and only if the dimension of their intersection is one less than the dimension of the weight set. Finally, they present an algorithm for computing all extreme supported nondominated images for three objectives using the derived properties by iteratively shrinking supersets of the actual weight set components.

The weight set decomposition is implicitly calculated by the algorithms of Özpeynirci and Köksalan [26] and Bökler and Mutzel [6]. The algorithms of Alves and Costa [2] and Halffmann et al. [19] iteratively augment subsets of the weight set components based on the convexity property. Seipp [31] and Schulze et al. [30] use a weight set decomposition linked with so-called arrangements of hyperplanes in the image space to show that the number of extreme supported nondominated images of multiobjective minimum spanning tree problems and unconstrained multiobjective combinatorial problems, respectively, is polynomially bounded. Correia et al. [13] modify the results of Seipp to enumerate all efficient minimum spanning trees.

For the weighted Tchebycheff scalarization, Eswaran et al. [16] explicitly consider weight set components for biobjective problems. Based on this approach, Ralphs et al. [29] adapt the dichotomic search method to calculate all nondominated images of biobjective discrete optimization problems. Karakaya et al. [23] introduce an adjacency concept based on the weighted Tchebycheff scalarization. Here, two images are called adjacent if the intersection of their weight set components with respect to the weighted Tchebycheff scalarization is non-empty. Bozkurt et al. [9] give a representation of the weight set components as a union of polytopes, which is used to evaluate the quality of efficient solutions and efficient solution sets. In connection with [23], this has recently been modified by Karakaya and Köksalan [22].

1.2 Our contribution

We present a rigorous theory of the weight set decomposition approach for the weighted Tchebycheff scalarization of MODOs. As shown in Fig. 1, the weighted Tchebycheff scalarization induces a more complex structure than the weighted sum scalarization. Our primary contribution is a comprehensive theoretical analysis of this structure and its properties. Knowing this structure may enable new algorithms in the future, following the methodologies of [2, 6, 19, 26, 28], to compute all or subsets of the nondominated images together with their weighted Tchebycheff weight set decomposition. Moreover, calculating the weighted Tchebycheff weight set decomposition might also enrich existing algorithms, see for example [14, 21] and the references therein, by providing additional information on the solution set, cf. [9, 22].

Fig. 1

An example with image set \(y^1 = (2,6,8)^\top \), \(y^2 = (8,2,4)^\top \), \(y^3 = ( 4,8,2)^\top \), \(y^4 = (6,4,6)^\top \), \(y^5 = (7,7,5)^\top \) illustrated in (a), and their weight set components \(\Lambda (y^r)\), \(r = 1, \dots , 5\), for both weighted sum scalarization (b) and weighted Tchebycheff scalarization (c). Note that, for both scalarizations, the restriction to weights contained in \(\Lambda =\{ \lambda \in {\mathbb {R}}^p_\ge : \sum _{i=1}^p \lambda _i = 1\}\) is without loss of generality. Thus, \(\lambda _3 = 1 - \lambda _1 - \lambda _2\). All images are nondominated. The image \(y^4\) is not extreme supported. The image \(y^5\) is not supported. The images \(y^1\) and \(y^2\) are adjacent w.r.t. the weighted sum weight set decomposition though their weighted Tchebycheff weight set components do not intersect

In Sect. 2, we show that it is necessary and sufficient to consider only the weight set components of nondominated images and establish that weight set components have convexity-related properties: they are star-shaped and convex along rays emanating from a vertex of the weight set. We study the intersection of weight set components in Sect. 3. Such intersections coincide with weight set components of certain weakly nondominated images and, hence, all convexity-related properties also apply, although intersections of star-shaped sets are not star-shaped in general. In Sect. 4, we follow the approach of Bozkurt et al. [9] to describe weight set components as unions of finitely many polytopes. We show that the obtained polytopes induce, for any weight set component, a so-called polytopal subdivision, which lays the foundation for the dimensional analysis in Sect. 5. In particular, this allows an adaptation and a refinement of the adjacency concepts introduced in [28] and [23], respectively, which ‘reveals the organization’ of the nondominated set. We close with some concluding remarks in Sect. 6.

2 Foundations

In this section, we introduce the concept of the weight set decomposition for the weighted Tchebycheff scalarization. We also derive properties connecting the weight set with the nondominated set \(Y_N\) and investigate convexity properties.

Recall that a polyhedron is the intersection of finitely many halfspaces and the dimension of a polyhedron \(P \subseteq {\mathbb {R}}^p\) is the maximum number of affinely independent points in P minus one. A polyhedron is called a polytope if it is bounded. For \(w \in {\mathbb {R}}^p\) and \(z \in {\mathbb {R}}\), the inequality \(w^\top y \le z\) is valid for P if \(P \subseteq \{y \in {\mathbb {R}}^p: w^\top y \le z\}\). A set \(F \subseteq P\) is a face of P if there is some valid inequality \(w^\top y \le z\) such that \(F = \{y \in P: w^\top y = z\}\). Note that faces of P are polyhedra themselves and, thus, the notion of dimension can be adapted. In particular, faces of dimension 0 are called extreme points. Polyhedra can be generalized as follows:

Definition 2.1

(Ziegler [36]) A polytopal complex \({\mathcal {C}}\) is a finite collection of polytopes in \({\mathbb {R}}^p\) such that

  (i) the empty polytope is in \({\mathcal {C}}\),

  (ii) if \(P \in {\mathcal {C}}\), then all the faces of P are also in \({\mathcal {C}}\),

  (iii) the intersection \(P \cap Q\) of two polytopes P, \(Q \in {\mathcal {C}}\) is a face of both P and Q.

The dimension \(\dim ({\mathcal {C}})\) of the polytopal complex \({\mathcal {C}}\) is the largest dimension of a polytope in \({\mathcal {C}}\). The underlying set of \({\mathcal {C}}\) is the point set \(\bigcup _{P \in {\mathcal {C}}} P\). A subcomplex of a polytopal complex \({\mathcal {C}}\) is a subset \({\mathcal {C}}' \subseteq {\mathcal {C}}\) that is a polytopal complex itself. A polytopal subdivision of a set \(S \subseteq {\mathbb {R}}^p\) is a polytopal complex \({\mathcal {C}}\) with the underlying set \(\bigcup _{P \in {\mathcal {C}}} P = S\). For example, the collection of all faces of a polytope P defines a polytopal subdivision of P itself.

Definition 2.2

(Preparata and Shamos [27]) A set \( S \subseteq {\mathbb {R}}^p\) is star-shaped if there exists an element \(y \in S\) such that \(\theta y + (1- \theta ) {\bar{y}} \in S\) for all \({\bar{y}} \in S\) and all \(\theta \in (0,1)\). The set of all such elements y is called the kernel of S and is denoted by \(\ker (S)\).

In the remainder of this paper, we consider MODOs. Further, we make the following assumption on the reference point used in the weighted Tchebycheff scalarization.

Assumption 1

The reference point \(s\in {\mathbb {R}}^p\) is a utopia point, i.e., \(s < y\) for all images \(y\in Y\). Without loss of generality, we may then further assume that the reference point s used in the weighted Tchebycheff scalarization is the zero vector (\(s=0\)) and, hence, that \(Y \subseteq {\mathbb {R}}^p_>\).

As a consequence of Assumption 1, the problem \(\Pi ^{TS} (\lambda )\) simplifies to \(\min \{\Vert f(x) \Vert ^{\lambda }_\infty \;: \; x\in X\} = \min \{\Vert y \Vert ^{\lambda }_\infty \;:\;y\in Y\}\). Furthermore, it holds \(\Vert y \Vert ^{\lambda }_\infty > 0\) for all \(y\in Y\) and \(\lambda \ge 0\).

The following proposition extends Theorem 4.5 in [32] to the case of weakly nondominated images.

Proposition 2.3

For each image \(y \in Y_{wN}\), there exists a weight \(\lambda \in {\mathbb {R}}^p_>\) such that y minimizes \(\Pi ^{TS} (\lambda )\). Moreover, if \(y \in Y_N\), then there exists a weight \(\lambda \) such that y uniquely minimizes \(\Pi ^{TS} (\lambda )\).

Proof

For \(y \in Y_{wN}\), define the weight \(\lambda \) componentwise by \(\lambda _i = \frac{1}{y_i} > 0\) for \(i = 1, \dots , p\). Suppose there exists a \({\bar{y}} \in Y\) such that \( \Vert y \Vert ^{\lambda }_\infty > \Vert {\bar{y}} \Vert ^{\lambda }_\infty \). Then, \(\max _{i = 1, \dots , p} \lambda _i y_i > \max _{j = 1, \dots , p} \lambda _j {\bar{y}}_j\), which implies \(1 > \lambda _j {\bar{y}}_j \) and, thus, \(y_j > {\bar{y}}_j \) for all \(j = 1, \dots ,p\). This contradicts \(y \in Y_{wN}\). To prove the second statement, we again choose the weight \(\lambda \) defined by \(\lambda _i = \frac{1}{y_i}\) for \(i = 1, \dots , p\). Then, similar calculations imply that, for an image \({\bar{y}} \in Y\) with \({\bar{y}} \ne y\) and \(\Vert y \Vert ^{\lambda }_\infty \ge \Vert {\bar{y}} \Vert ^{\lambda }_\infty \), it holds that \(y_j \ge {\bar{y}}_j\) for all \(j = 1, \dots , p\), i.e., \({\bar{y}} \le y\). This is a contradiction to \(y \in Y_N\). \(\square \)

Since \(\alpha \Vert y \Vert ^{\lambda }_\infty = \Vert y \Vert ^{\alpha \lambda }_\infty \) holds for all scalars \(\alpha > 0\), normalization of the weight \(\lambda \) does not change the optimal solution set of \(\Pi ^{TS} (\lambda )\). Hence, analogously to the weighted sum method, we restrict the set of eligible parameters to the (normalized) weight set

$$\begin{aligned} \Lambda {:}{=}\left\{ \lambda \in {\mathbb {R}}_\geqq ^{p}: \sum _{k = 1}^{p} \lambda _k =1 \right\} . \end{aligned}$$
(2.1)

Note that \(\Lambda \) is a \((p-1)\)-dimensional polytope and that the projection/bijection \(\phi : \Lambda \rightarrow \{\lambda \in {\mathbb {R}}^{p-1}_\geqq : \sum _{i=1}^{p-1} \lambda _i \le 1 \}, (\lambda _1, \dots , \lambda _p) \mapsto (\lambda _1, \dots , \lambda _{p-1})\), is particularly useful for the visualization of the weight sets of MODOs with three objectives. Next, we introduce the decomposition of the weight set implied by the weighted Tchebycheff scalarization.

Definition 2.4

For an image \(y \in Y\), the weight set component of y with respect to the weighted Tchebycheff scalarization is defined by

$$\begin{aligned} \Lambda (y) {:}{=}\left\{ \lambda \in \Lambda : \Vert y \Vert ^\lambda _\infty \le \Vert {\bar{y}} \Vert ^\lambda _\infty \text { for all } {\bar{y}} \in Y \right\} . \end{aligned}$$

Note that \(\lambda \in \Lambda (y)\) if and only if y is optimal for \(\Pi ^{TS}(\lambda )\), i.e., \(y=f(x)\) for some optimal solution x of \(\Pi ^{TS}(\lambda )\). Obviously, if an image is not weakly nondominated, then its weight set component is empty.

We introduce a notation for the normalized weight used in the proof of Proposition 2.3 since it plays a major role.

Definition 2.5

For \(y \in Y_{wN}\), we denote the kernel weight of y by \(\lambda (y)\) and define it componentwise by

$$\begin{aligned} \lambda _i(y) {:}{=}\frac{1}{y_i} \frac{1}{\sum _{j = 1}^p \frac{1}{y_j}} \text { for } i = 1,\dots , p. \end{aligned}$$
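The kernel weight is straightforward to compute. The following Python sketch (using the image set of Example 2.9 below and \(s = 0\) as in Assumption 1) also checks the two facts underlying the proof of Proposition 2.3: all products \(\lambda _i(y) y_i\) coincide, and y minimizes \(\Pi ^{TS}(\lambda (y))\).

```python
from fractions import Fraction as F

def kernel_weight(y):
    """lambda_i(y) = (1 / y_i) / sum_j (1 / y_j), cf. Definition 2.5."""
    total = sum(F(1, yi) for yi in y)
    return tuple(F(1, yi) / total for yi in y)

def tcheby(lmbda, y):
    return max(l * yi for l, yi in zip(lmbda, y))  # s = 0 by Assumption 1

Y = [(3, 1, 2), (2, 1, 3), (2, 2, 2), (1, 2, 3)]   # image set of Example 2.9 below
y = Y[2]                                           # y^3 = (2, 2, 2)
lam = kernel_weight(y)                             # (1/3, 1/3, 1/3)
print({l * yi for l, yi in zip(lam, y)})           # a single value: {Fraction(2, 3)}
print(tcheby(lam, y) == min(tcheby(lam, yb) for yb in Y))  # True: y is optimal
```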

Proposition 2.3 implies that if y is weakly nondominated then its weight set component is nonempty. Hence, an image is weakly nondominated if and only if its weight set component is nonempty. If y is nondominated, we obtain the following corollary.

Corollary 2.6

Let \(B_\varepsilon (\lambda ) {:}{=}\{ \lambda ' \in \Lambda :\sum _{i=1}^p |\lambda _i - \lambda '_i| \le \varepsilon \}\). For an image \(y \in Y_N\), there exists an \(\varepsilon >0\) such that \(B_\varepsilon (\lambda (y)) \subseteq \Lambda (y)\). If \(\varepsilon \) is chosen sufficiently small, then \(B_\varepsilon (\lambda (y)) \cap \Lambda (y') = \emptyset \) for each \(y' \in Y_{wN} {\setminus } \{y\}\).

Proof

The claim follows from Proposition 2.3, the definition of the kernel weight, the finiteness of the feasible set, and the continuity of the function \(\lambda \mapsto \Vert {\bar{y}} \Vert ^\lambda _\infty \). Here, note that, for any given weakly nondominated image \({\bar{y}} \in Y_{wN}\), this function is continuous since it is the pointwise maximum of finitely many linear functions. \(\square \)

The next propositions show that nondominated images suffice to define the weight set components of all images.

Proposition 2.7

Let an image \(y \in Y\) be given. Then

$$\begin{aligned} \Lambda (y) = \{\lambda \in \Lambda : \Vert y \Vert ^\lambda _\infty \le \Vert {\bar{y}} \Vert ^\lambda _\infty \text { for all } {\bar{y}} \in Y_N \}. \end{aligned}$$

Proof

Let \({\bar{y}} \in Y \setminus Y_N\). Then, since Y is finite, there exists an image \(y' \in Y_N\) such that \(y' \le {\bar{y}}\). This yields \(\Vert y' \Vert ^\lambda _\infty \le \Vert {\bar{y}} \Vert ^\lambda _\infty \) for all \(\lambda \in \Lambda \). Thus, \(\Vert y \Vert ^\lambda _\infty \le \Vert y' \Vert ^\lambda _\infty \le \Vert {\bar{y}} \Vert ^\lambda _\infty \) for \(\lambda \in \Lambda (y)\). Hence, the inequality \(\Vert y \Vert ^\lambda _\infty \le \Vert {\bar{y}} \Vert ^\lambda _\infty \) is redundant. \(\square \)

The following proposition shows that, for every weight \(\lambda \in \Lambda \), some nondominated image in \(Y_N\) is optimal for \(\Pi ^{TS} (\lambda )\).

Proposition 2.8

It holds that \( \Lambda = \bigcup _{y \in Y_N} \Lambda (y).\)

Proof

For a weight \(\lambda \in \Lambda \), there exists an image \(y \in Y\) that is optimal for \(\Pi ^{TS} (\lambda )\). Hence, \(\lambda \in \Lambda (y)\). If \(y \not \in Y_N\), then, due to finiteness of Y, there exists \({\bar{y}} \in Y_N\) such that \({\bar{y}}_i \le y_i \text { for all } i = 1, \dots , p.\) Since \(\lambda \ge 0\), this implies \(\Vert {\bar{y}} \Vert ^\lambda _\infty \le \Vert y \Vert ^\lambda _\infty \). So, \({\bar{y}}\) is optimal for \(\Pi ^{TS} (\lambda )\) and, thus, \(\lambda \in \Lambda ({\bar{y}})\). It follows \(\Lambda \subseteq \bigcup _{{\bar{y}} \in Y_N} \Lambda ({\bar{y}})\). The reverse inclusion holds trivially. \(\square \)

Proposition 2.3 implies another fact about the weight set components: the weight set component of a nondominated image cannot be covered by the weight set components of the other images. In particular, \(y \in Y_N\) implies \(\Lambda (y) {\setminus } \left( \bigcup _{{\bar{y}} \in Y_N, {\bar{y}} \ne y} \Lambda ({\bar{y}}) \right) \ne \emptyset \). Thus, the union in Proposition 2.8 is irredundant: the weight set components of all nondominated images are both sufficient and necessary to cover the weight set.

Next, we observe a structural property of the weight set components: in contrast to the weighted sum weight set components, the sets \(\Lambda (y)\) are not necessarily convex.

Example 2.9

Consider the set of nondominated images

$$\begin{aligned} Y = \left\{ y^1 = (3,1,2)^\top , y^2 = (2,1,3)^\top , y^3 = (2,2,2)^\top , y^4 = (1,2,3)^\top \right\} . \end{aligned}$$

Let \(\lambda ^1 = (0.24, 0.72, 0.04)^\top \) and \(\lambda ^2 = (0.24, 0.46, 0.3)^\top \). Then \(\lambda ^1, \lambda ^2 \in \Lambda (y^1)\). However, for \(\lambda ^3 {:}{=}\frac{1}{2} \lambda ^1 + \frac{1}{2} \lambda ^2 = (0.24, 0.59, 0.17)^\top \), it holds that \(\Vert y^1 \Vert ^{\lambda ^3}_\infty = 0.72>0.59 =\Vert y^2 \Vert ^{\lambda ^3}_\infty \) and, therefore, \(\lambda ^3 \notin \Lambda (y^1)\). Consequently, \(\Lambda (y^1)\) is not convex.
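This computation is easy to reproduce. The following Python sketch checks membership in \(\Lambda (y^1)\) directly via Definition 2.4; exact rational arithmetic avoids rounding issues.

```python
from fractions import Fraction as F

Y = [(3, 1, 2), (2, 1, 3), (2, 2, 2), (1, 2, 3)]   # Example 2.9

def tcheby(lmbda, y):
    return max(l * yi for l, yi in zip(lmbda, y))

def in_component(lmbda, y):
    """Definition 2.4: lambda lies in Lambda(y) iff y is optimal for Pi^TS(lambda)."""
    return tcheby(lmbda, y) <= min(tcheby(lmbda, yb) for yb in Y)

y1 = Y[0]
l1 = (F(24, 100), F(72, 100), F(4, 100))
l2 = (F(24, 100), F(46, 100), F(30, 100))
l3 = tuple((a + b) / 2 for a, b in zip(l1, l2))    # midpoint (0.24, 0.59, 0.17)

print(in_component(l1, y1), in_component(l2, y1))  # True True
print(in_component(l3, y1))   # False: ||y^2|| = 0.59 < 0.72 = ||y^1|| at l3
```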

Figure 2 illustrates the weight set components for Example 2.9. In Sect. 4, we explain how these sets can be computed.

Fig. 2

The weight set components (a)–(d) of Example 2.9. The colored regions represent \(\Lambda (y^r)\), the dot represents the kernel weight \(\lambda (y^r)\), \(r = 1,2,3,4\), and the dashed lines indicate the decomposition of the weight set components into their dimensional weight set components. The red line in (a) represents the convex combination of weights investigated in Example 2.9

To gain more insight into the structure of weight set components, we subdivide them into smaller subsets according to the index in which the maximum of the associated scalar products is attained (i.e., the index defining the weighted Tchebycheff norm value). This will be useful to prove a convexity-related property (Corollary 2.12) and to establish a polytopal subdivision of the weight set components.

Definition 2.10

For a weakly nondominated image \(y \in Y_{wN}\) and \(i = 1, \dots , p\), we define the ith dimensional weight set component by

$$\begin{aligned} \Lambda (y, i) {:}{=}\{ \lambda \in \Lambda (y): \lambda _i y_i \ge \lambda _k y_k \text { for all } k = 1, \dots , p \}. \end{aligned}$$

Clearly, \(\Lambda (y, i) = \{ \lambda \in \Lambda (y): \Vert y \Vert ^{\lambda }_\infty =\lambda _i y_i \}\) and \(\bigcup _{i = 1}^p \Lambda (y, i ) = \Lambda (y)\). Figure 2 presents these sets for Example 2.9. With the image set of Example 2.9, one can also show that both \(\lambda ^1\) and \(\lambda ^2\) are contained in \(\Lambda (y^1,1)\) and, thus, the dimensional weight set components are not necessarily convex. However, we can derive a ‘convexity-related’ property.

Proposition 2.11

For a weakly nondominated image \(y \in Y_{wN}\), the following holds true:

  (i) \(\bigcap \limits _{i = 1}^p \Lambda (y, i) = \{\lambda (y)\}\).

  (ii) For \(i = 1, \dots , p\), the ith dimensional weight set component \(\Lambda (y, i )\) is a star-shaped set with \(\lambda (y) \in \ker (\Lambda (y,i))\).

Proof

Clearly, \(\lambda _i (y) y_i = \frac{1}{\sum _{j = 1}^p \frac{1}{y_j}}\) for all \(i = 1, \dots , p\), which, together with \(\lambda (y) \in \Lambda (y)\) (Proposition 2.3), implies \(\lambda (y) \in \Lambda (y, i)\) for all \(i = 1, \dots , p\). Furthermore, \(\lambda \in \bigcap _{i = 1}^p \Lambda (y, i)\) implies \(\lambda _j y_j \ge \lambda _k y_k\) for all \(j,k = 1, \dots , p\). This yields \( \lambda _1 y_1 = \lambda _2 y_2 = \dots = \lambda _p y_p = M \) for some constant \(M \in {\mathbb {R}}\), which holds if and only if \(\lambda _i = \frac{M}{y_i}\) for all \(i=1,\dots ,p\). Since \(\sum _{i=1}^p \lambda _i = 1\), we get \(M = \left( \sum _{i = 1}^p \frac{1}{y_i}\right) ^{-1}\) and, therefore, \(\lambda \) is the kernel weight. This shows statement (i).

To prove (ii), fix i and let \(\lambda ' \in \Lambda (y, i)\). We first show that for \(\theta \in (0,1)\), the convex combination \((\theta \lambda (y) + (1 - \theta ) \lambda ')\in \Lambda (y)\). To do so, we prove for all images \({\bar{y}}\in Y\) that \(\Vert y \Vert ^{\theta \lambda (y) + (1 - \theta ) \lambda '}_\infty \le \Vert {\bar{y}} \Vert ^{\theta \lambda (y) + (1 - \theta ) \lambda '}_\infty \). Observe that \(\lambda (y), \lambda '\in \Lambda (y,i)\) implies \(\Vert y \Vert ^{\lambda (y)}_\infty =\lambda _i(y) y_i\ge \lambda _k(y) y_k\) and \(\Vert y \Vert ^{\lambda '}_\infty =\lambda '_i y_i\ge \lambda '_k y_k\) for all indices \(k=1,\dots ,p\). Fix \(\theta \in (0,1)\). It is now straightforward to show that for all \(k=1,\dots ,p\),

$$\begin{aligned} (\theta \lambda _i(y) + (1-\theta ) \lambda '_i)y_i&= \theta \lambda _i(y) y_i + (1-\theta ) \lambda '_i y_i \nonumber \\&\ge \theta \lambda _k(y) y_k + (1-\theta ) \lambda '_k y_k= (\theta \lambda _k(y) + (1-\theta ) \lambda '_k)y_k, \end{aligned}$$

and, so,

$$\begin{aligned} \Vert y \Vert ^{\theta \lambda (y) + (1 - \theta ) \lambda '}_\infty =(\theta \lambda _i(y) + (1-\theta ) \lambda '_i)y_i. \end{aligned}$$
(2.2)

Let \({\bar{y}}\in Y\). For some index j, it must be that \(\lambda '_j {\bar{y}}_j = \Vert {\bar{y}} \Vert ^{\lambda '}_\infty \). By Assumption 1, \(\Vert {\bar{y}} \Vert ^{\lambda '}_\infty >0\) and, thus, \(\lambda '_j > 0\). Since \(\lambda ' \in \Lambda (y)\), it holds that \(\Vert y \Vert ^{\lambda '}_\infty \le \Vert {\bar{y}} \Vert ^{\lambda '}_\infty \) and therefore \(\lambda '_iy_i=\Vert y \Vert ^{\lambda '}_\infty \le \lambda '_j {\bar{y}}_j\). Furthermore, \(\lambda '_j y_j \le \lambda '_i y_i\) by the definition of \(\Vert y \Vert ^{\lambda '}_\infty \). Thus, \(\lambda '_j y_j \le \lambda '_j {\bar{y}}_j\). Since \(\lambda '_j > 0\), this implies \(y_j \le {\bar{y}}_j\) and, hence, \(\lambda _j(y)y_j \le \lambda _j(y){\bar{y}}_j\). But \(\lambda _j(y) y_j = \frac{1}{\sum _{k = 1}^p \frac{1}{y_k}} = \lambda _i(y) y_i\), so it is also the case that \(\lambda _i(y) y_i \le \lambda _j(y){\bar{y}}_j\). Putting it all together, we obtain

$$\begin{aligned} \Vert y \Vert ^{\theta \lambda (y) + (1 - \theta ) \lambda '}_\infty&=\theta \lambda _i(y)y_i + (1-\theta ) \lambda '_i y_i \le \theta \lambda _j(y) {\bar{y}}_j + (1-\theta )\lambda '_j{\bar{y}}_j \\&= (\theta \lambda _j(y) + (1-\theta )\lambda '_j){\bar{y}}_j \le \Vert {\bar{y}} \Vert ^{\theta \lambda (y) + (1-\theta )\lambda '}_\infty \end{aligned}$$

and, thus, \((\theta \lambda (y) + (1 - \theta ) \lambda ')\in \Lambda (y)\). From (2.2) we immediately get that \((\theta \lambda (y) + (1 - \theta ) \lambda ')\in \Lambda (y,i)\), which finishes the proof. \(\square \)

The first property of Proposition 2.11 states that the kernel weight is the only weight that is contained in all dimensional weight set components. The second justifies the name kernel weight. We immediately get the following corollary.

Corollary 2.12

Let a weakly nondominated image \(y \in Y_{wN}\) be given. Then, \(\Lambda (y)\) is a star-shaped set and \(\lambda (y)\in \ker (\Lambda (y))\).

For one-dimensional weight sets (i.e., for two objectives), star-shapedness of the weight set components is equivalent to their convexity. This explains why the dichotomic search approach used for the weighted sum scalarization can be adapted to the weighted Tchebycheff scalarization as proposed in [16, 29].
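Star-shapedness with respect to the kernel weight can also be probed numerically. The following Python sketch (a randomized check on the data of Example 2.9; the sampling scheme and number of trials are ad-hoc choices) tests Corollary 2.12 for \(y^1\) on random segments emanating from \(\lambda (y^1)\).

```python
import random

Y = [(3, 1, 2), (2, 1, 3), (2, 2, 2), (1, 2, 3)]   # Example 2.9

def tcheby(lmbda, y):
    return max(l * yi for l, yi in zip(lmbda, y))   # s = 0 by Assumption 1

def in_component(lmbda, y):
    return tcheby(lmbda, y) <= min(tcheby(lmbda, yb) for yb in Y)

def kernel_weight(y):
    total = sum(1 / yi for yi in y)
    return tuple((1 / yi) / total for yi in y)

y1, lam_y1 = Y[0], kernel_weight(Y[0])             # lambda(y^1) = (2/11, 6/11, 3/11)
random.seed(1)
for _ in range(5000):
    u = sorted(random.random() for _ in range(2))
    lam = (u[0], u[1] - u[0], 1 - u[1])            # uniform sample of the weight set
    if in_component(lam, y1):
        for theta in (0.25, 0.5, 0.75):
            seg = tuple(theta * a + (1 - theta) * b for a, b in zip(lam_y1, lam))
            assert in_component(seg, y1)           # Corollary 2.12
print("star-shapedness w.r.t. the kernel weight not refuted")
```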

We can also derive a second convexity-related property with the help of the following lemma. Note that we fix \(p-1\) entries of a weight \(\lambda \in {\mathbb {R}}^p_\ge \) and do not consider normalized weights here.

Lemma 2.13

Let an index k, a weight \(\lambda \in {\mathbb {R}}^p_{\ge }\), and a scalar \(t>0\) be given. If an image \(y\in Y\) is optimal for both \(\Pi ^{TS} (\lambda )\) and \(\Pi ^{TS} (\lambda + te_k)\), where \(e_k\) is the kth unit vector in \({\mathbb {R}}^p\), then y is also optimal for \(\Pi ^{TS} (\lambda + \theta t e_k)\) for all \(\theta \in [0,1]\).

Proof

First observe that for any \(y\in Y\), \(\theta \in [0,1]\), and k, \(\lambda \) and t as given,

$$\begin{aligned} \Vert y \Vert ^{\lambda + \theta t e_k}_\infty = \max \{ \lambda _i y_i , (\lambda _k + \theta t) y_k \}, \end{aligned}$$

where i is an index such that \(\Vert y \Vert ^{\lambda }_\infty = \lambda _i y_i\). Consider the image \(y\in Y\) that is optimal for both \(\Pi ^{TS} (\lambda )\) and \(\Pi ^{TS} (\lambda + te_k)\), fix \(i^*\) such that \(\Vert y \Vert ^{\lambda }_\infty = \lambda _{i^*} y_{i^*}\) and fix \(\theta \in (0,1)\). Let \({\bar{y}}\in Y\) and fix i such that \(\Vert {\bar{y}} \Vert ^{\lambda }_\infty = \lambda _{i} {\bar{y}}_{i}\). Thus,

$$\begin{aligned} \Vert y \Vert ^{\lambda + \theta t e_k}_\infty = \max \{ \lambda _{i^*} y_{i^*}, (\lambda _k + \theta t) y_k \} \ \ \text {and}\ \ \Vert {\bar{y}} \Vert ^{\lambda + \theta t e_k}_\infty = \max \{ \lambda _i {\bar{y}}_i, (\lambda _k + \theta t) {\bar{y}}_k \}. \end{aligned}$$

We consider two cases for \(\Vert {\bar{y}} \Vert ^{\lambda + t e_k}_\infty \). For each case, we show that \(\Vert y \Vert ^{\lambda + \theta t e_k}_\infty \le \Vert {\bar{y}} \Vert ^{\lambda + \theta te_k}_\infty \). In both cases, we use the observation that \(\lambda _{i^*}y_{i^*} \le \lambda _i{\bar{y}}_i\) since y is optimal for \(\Pi ^{TS} (\lambda )\).

(i) Suppose \(\Vert {\bar{y}} \Vert ^{\lambda + t e_k}_\infty = \lambda _i{\bar{y}}_i\). Since \(\theta <1\), \(t > 0\) and \({\bar{y}}_k > 0\), we can conclude that

$$\begin{aligned} (\lambda _k + \theta t){\bar{y}}_k < (\lambda _k + t){\bar{y}}_k \le \Vert {\bar{y}} \Vert ^{\lambda + te_k}_\infty = \lambda _i{\bar{y}}_i \end{aligned}$$

and, therefore, \(\Vert {\bar{y}} \Vert ^{\lambda + \theta te_k}_\infty = \lambda _i{\bar{y}}_i = \Vert {\bar{y}} \Vert ^{\lambda + te_k}_\infty \). Likewise,

$$\begin{aligned} (\lambda _k + \theta t)y_k < (\lambda _k + t)y_k \le \Vert y \Vert ^{\lambda + te_k}_\infty \le \Vert {\bar{y}} \Vert ^{\lambda + te_k}_\infty = \Vert {\bar{y}} \Vert ^{\lambda + \theta te_k}_\infty , \end{aligned}$$

where the last inequality follows by optimality of y for \(\Pi ^{TS} (\lambda + te_k)\). Recall that \(\lambda _{i^*}y_{i^*} \le \lambda _i{\bar{y}}_i=\Vert {\bar{y}} \Vert ^{\lambda + \theta te_k}_\infty \). Thus,

$$\begin{aligned} \Vert y \Vert ^{\lambda + \theta t e_k}_\infty =\max \{\lambda _{i^*}y_{i^*},(\lambda _k + \theta t)y_k\} \le \Vert {\bar{y}} \Vert ^{\lambda + \theta te_k}_\infty . \end{aligned}$$

(ii) Suppose \(\Vert {\bar{y}} \Vert ^{\lambda + t e_k}_\infty = (\lambda _k+t){\bar{y}}_k\). Now, \(\Vert y \Vert ^{\lambda +te_k}_\infty \le \Vert {\bar{y}} \Vert ^{\lambda +te_k}_\infty \) since y is optimal for \(\Pi ^{TS} (\lambda +te_k)\). So, \((\lambda _k+t)y_k \le (\lambda _k + t){\bar{y}}_k\). Since \(t > 0\) and \(\lambda _k\ge 0\), it must be that \(y_k \le {\bar{y}}_k\) and, hence, \((\lambda _k+\theta t)y_k \le (\lambda _k + \theta t){\bar{y}}_k \le \Vert {\bar{y}} \Vert ^{\lambda + \theta t e_k}_\infty \). Furthermore, recall that \(\lambda _{i^*}y_{i^*} \le \lambda _i{\bar{y}}_i \le \Vert {\bar{y}} \Vert ^{\lambda +\theta te_k}_\infty \). Hence, we again have

$$\begin{aligned} \Vert y \Vert ^{\lambda + \theta t e_k}_\infty =\max \{\lambda _{i^*}y_{i^*},(\lambda _k + \theta t)y_k\} \le \Vert {\bar{y}} \Vert ^{\lambda + \theta te_k}_\infty . \quad \end{aligned}$$

\(\square \)

Lemma 2.13 shows that, for any pair \(\lambda ^1\) and \(\lambda ^2\) with \(\lambda ^2-\lambda ^1\) equal to a positive multiple of a unit vector, if an image y is optimal for both \(\Pi ^{TS} (\lambda ^1)\) and \(\Pi ^{TS} (\lambda ^2)\), then y is also optimal for \(\Pi ^{TS} (\lambda )\), where \(\lambda \) is any convex combination of \(\lambda ^1\) and \(\lambda ^2\).
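On a finite instance, Lemma 2.13 lends itself to a randomized sanity check. The following Python sketch (on the data of Example 2.9; the sampling scheme is an ad-hoc choice) verifies on random segments parallel to a unit vector that joint optimality at both endpoints propagates to the intermediate weights.

```python
import random

Y = [(3, 1, 2), (2, 1, 3), (2, 2, 2), (1, 2, 3)]   # Example 2.9

def tcheby(lmbda, y):
    return max(l * yi for l, yi in zip(lmbda, y))

def optima(lmbda):
    best = min(tcheby(lmbda, y) for y in Y)
    return {y for y in Y if tcheby(lmbda, y) == best}

random.seed(0)
for _ in range(5000):
    lam = [random.random() for _ in range(3)]      # weights need not be normalized
    k, t = random.randrange(3), random.random()
    end = list(lam); end[k] += t
    for y in optima(lam) & optima(end):            # optimal at both endpoints ...
        for theta in (0.25, 0.5, 0.75):
            mid = list(lam); mid[k] += theta * t
            assert y in optima(mid)                # ... hence optimal in between
print("no counterexample to Lemma 2.13 found")
```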

In order to transfer this result to the weight set, we define, for a given index \(k \in \{1, \dots , p\}\) and a vector \(a \in {\mathbb {R}}^p\) such that \(a_i > 0\) for \( i \ne k\), the following line segments:

$$\begin{aligned} H_{k,a} {:}{=}\{ \lambda \in {\mathbb {R}}^p : a_i \lambda _i = a_j \lambda _j \text { for all } i,j \in \{1, \dots , p \} \setminus \{k\} \} \cap \Lambda . \end{aligned}$$
(2.3)

Fig. 3 shows some of these line segments. The line segments \(H_{k,a}\) emanate from one of the vertices of \(\Lambda \). This can be seen by rewriting

$$\begin{aligned} H_{k,a} = \{ \lambda \in {\mathbb {R}}^p : \lambda = e_k + (a' - e_k) t \text { for some } t \in [0,1] \}, \end{aligned}$$

where \(e_k\) denotes the kth unit vector and \(a'\) is defined by \(a'_k {:}{=}0\) and \(a'_i {:}{=}\frac{1}{a_i} \frac{1}{\sum _{j \ne k} \frac{1}{a_j}}\) for \(i \ne k\). Along these line segments, a convexity-related property holds true.
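This reparametrization is easy to verify computationally. In the following Python sketch, the index k, the vector a, and the parameter t are arbitrary hypothetical data.

```python
from fractions import Fraction as F

def segment_point(k, a, t):
    """The point e_k + (a' - e_k) t, with a' as defined after (2.3)."""
    p = len(a)
    s = sum(F(1) / a[i] for i in range(p) if i != k)
    a_prime = [F(0) if i == k else (F(1) / a[i]) / s for i in range(p)]
    e_k = [F(1) if i == k else F(0) for i in range(p)]
    return [e_k[i] + (a_prime[i] - e_k[i]) * t for i in range(p)]

k, a, t = 2, [F(2), F(3), F(5)], F(1, 4)   # hypothetical data with a_i > 0
lam = segment_point(k, a, t)
print(sum(lam) == 1 and min(lam) >= 0)     # True: lam lies in the weight set
print(a[0] * lam[0] == a[1] * lam[1])      # True: defining equalities of (2.3)
```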

Fig. 3

The convexity property of Proposition 2.14. The intersections of the line segments \(H_{k,a}\) (dashed lines, see (2.3)) with the weight set components are always convex sets. The green-gray and violet-gray checkerboard areas represent the intersections of weight set components \(\Lambda (y^1)\cap \Lambda (y^2)\) and \(\Lambda (y^1)\cap \Lambda (y^3)\), respectively. See Fig. 2 for a representation of the individual weight set components

Proposition 2.14

For any \(k\in \{1,\dots ,p\}\) and \(a\in {\mathbb {R}}^p\) such that \(a_i >0\) for \(i\ne k\), the intersection \( \Lambda (y) \cap H_{k,a} \) is convex for all \(y \in Y_{wN}\).

Proof

Without loss of generality, let \(k = p\). Since multiplication of a with a positive scalar does not change the validity of the equalities in (2.3), we may assume that the entries of a are scaled such that

$$\begin{aligned} \lambda _1 = a_i \lambda _i, \quad i = 2, \dots , p-1, \end{aligned}$$
(2.4)

holds for all \(\lambda \in H_{p,a}\). Let \(\lambda ^1\), \(\lambda ^2 \in \Lambda (y) \cap H_{p,a}\) and \(\lambda = \theta \lambda ^1 + (1- \theta ) \lambda ^2 \in \Lambda \cap H_{p,a}\) for some \(\theta \in (0,1)\). Without loss of generality, assume \(\lambda ^1_1 \le \lambda ^2_1\). By (2.4) we then get \(\lambda ^1_i \le \lambda ^2_i\) for \(i = 2, \dots , p-1\), and, therefore, \(\sum _{i = 1}^p \lambda ^{1}_i = \sum _{i = 1}^p \lambda ^2_i = 1\) implies \(\lambda ^1_p \ge \lambda ^2_p\). Since \(\lambda \) is a convex combination of \(\lambda ^1\) and \(\lambda ^2\), we summarize as follows:

$$\begin{aligned} \lambda ^1_i&\le \lambda _i \le \lambda ^2_i \text { for } i = 1, \dots , p-1, \end{aligned}$$
(2.5a)
$$\begin{aligned} \lambda ^1_p&\ge \lambda _p \ge \lambda ^2_p. \end{aligned}$$
(2.5b)

Assume first that \(\lambda ^1_1 > 0\). This implies that \(\lambda ^2_1 \ge \lambda _1 > 0\) and, therefore, we can define positive scalars \(\varepsilon _2 {:}{=}\frac{\lambda ^1_1}{\lambda ^2_1} \text { and } \varepsilon {:}{=}\frac{\lambda ^1_1}{\lambda _1}\) and Eq. (2.5a) implies

$$\begin{aligned} 0 < \varepsilon _2 \le \varepsilon \le 1. \end{aligned}$$
(2.6)

Set \({\bar{\lambda }}^2 {:}{=}\varepsilon _2 \lambda ^2\) and \({\bar{\lambda }} {:}{=}\varepsilon \lambda \). Then, it follows \({\bar{\lambda }}^2_1 = {\bar{\lambda }}_1 = \lambda ^1_1\) and, consequently, \({\bar{\lambda }}^2_i = \lambda ^1_i\) and \({\bar{\lambda }}_i = \lambda ^1_i\) holds for \(i = 2, \dots ,p-1\). Combining (2.5a) and (2.6) results in

$$\begin{aligned} {\bar{\lambda }}^2_p = \varepsilon _2 \lambda ^2_p \le \varepsilon \lambda ^2_p \le \varepsilon \lambda _p&= {\bar{\lambda }}_p \text { and } {\bar{\lambda }}_p = \varepsilon \lambda _p \le \lambda _p \le \lambda ^1_p. \end{aligned}$$

We get \({\bar{\lambda }} \in {{\,\textrm{conv}\,}}\{\lambda ^1, {\bar{\lambda }}^2 \}\). Recall that \(\alpha \Vert y \Vert ^{\lambda }_\infty = \Vert y \Vert ^{\alpha \lambda }_\infty \) holds for all \(y \in Y\) and all scalars \(\alpha > 0\). Hence, as \(\lambda ^2 \in \Lambda (y)\), we know that y is optimal for \(\Pi ^{TS} ({\bar{\lambda }}^2)\). Since \({\bar{\lambda }} \in {{\,\textrm{conv}\,}}\{\lambda ^1, {\bar{\lambda }}^2 \}\), the image y is optimal for \(\Pi ^{TS} ({\bar{\lambda }})\) by Lemma 2.13, and, therefore, the image y is optimal for \(\Pi ^{TS} (\lambda )\).

Now, assume \(\lambda ^1_1 = 0\). Since \(\lambda ^1 \in H_{p,a}\), it follows that \(\lambda ^1_i = 0\) for \(i = 2,\dots ,p-1\) and therefore \(\lambda ^1 = e_p\). We show that there is some weight \(\lambda ' \in H_{p,a} \cap \Lambda (y)\) with \(\lambda '_1 >0\) and \(\lambda \in {{\,\textrm{conv}\,}}(\{\lambda ', \lambda ^2\})\). Then, \(\lambda \in \Lambda (y) \cap H_{p,a}\) follows by the argumentation above. Choose

$$\begin{aligned} 0 < M \le \min _{y' \in Y} \min _{i = 1, \dots , p-1} \frac{y'_p}{ \frac{y'_i}{a_i} + \sum _{j=1}^{p-1} \frac{y'_p}{a_j} } \end{aligned}$$

and define \(\lambda ^M {:}{=}(\frac{M}{a_1}, \frac{M}{a_2}, \dots , \frac{M}{a_{p-1}}, (1 - \sum _{i=1}^{p-1} \frac{M}{a_i} ) )\), where \(a_1 {:}{=}1\). Then,

$$\begin{aligned} \sum _{i=1}^p \lambda ^M_i = \sum _{i=1}^{p-1} \frac{M}{a_i} + 1 - \sum _{i=1}^{p-1} \frac{M}{a_i} = 1 \end{aligned}$$

and, for each \(i= 2,\ldots ,p-1\), it holds that

$$\begin{aligned} \lambda ^M_1 = \frac{M}{a_1} = M = a_i \cdot \frac{M}{a_i} = a_i \cdot \lambda ^M_i. \end{aligned}$$

Consequently, \(\lambda ^M \in \Lambda \cap H_{p,a}\) and, for any \(y' \in Y\), it is easy to verify that \(\Vert y' \Vert ^{\lambda ^M}_\infty = \lambda ^M_p y'_p\). Since y is optimal for \(\Pi ^{TS} (e_p)\), it must be that \(y_p \le y'_p\) for all \(y' \in Y\). Then, also \(\Vert y \Vert ^{\lambda ^M}_\infty = \lambda ^M_p y_p \le \lambda ^M_p y'_p = \Vert y' \Vert ^{\lambda ^M}_\infty \) must hold for all \(y' \in Y\) and, thus, \(\lambda ^M \in \Lambda (y)\). Since \(\lambda ^M \rightarrow e_p\) for \(M \rightarrow 0\), we can find, for each \(\theta > 0\), a small scalar \(M >0\) such that \(\lambda = \theta e_p + (1 - \theta ) \lambda ^2 \in {{\,\textrm{conv}\,}}(\{\lambda ^M, \lambda ^2\})\), which concludes the proof. \(\square \)

3 The intersection of weight set components

In this section, we analyze the structure of the intersection of two weight set components. In general, the intersection of two star-shaped sets is not guaranteed to be star-shaped. However, as we show, this property does hold for intersections of weight set components. To prove this, we first define a (possibly artificial) image.

Definition 3.1

For a subset of weakly nondominated images \({\bar{Y}} \subseteq Y_{wN}\), we define the local nadir image \(y^N({\bar{Y}})\) by

$$\begin{aligned} y^N_i ({\bar{Y}}) = \max _{{\bar{y}} \in {\bar{Y}}} {\bar{y}}_i \text { for } i = 1, \dots , p. \end{aligned}$$

We say an image \({\bar{y}} \in {\bar{Y}}\) contributes to the local nadir image if \({\bar{y}}_i = y^N_i({\bar{Y}})\) for some \(i \in \{1,\dots , p\}\).

In the following, we avoid trivial cases by requiring \(|{\bar{Y}}| \ge 2\). Further, for ease of exposition, we assume that the local nadir image is contained in Y. The local nadir image \(y^N({\bar{Y}})\) is dominated and, consequently, \(\Lambda (y^N({\bar{Y}})) \ne \emptyset \) implies \(y^N({\bar{Y}}) \in Y_{wN} {\setminus } Y_N\).

Definition 3.2

We call the kernel weight \( \lambda ^N({\bar{Y}}) {:}{=}\lambda (y^N({\bar{Y}}))\) of \(y^N({\bar{Y}})\) the local nadir weight.

Clearly, \(\lambda ^N({\bar{Y}}) \in \Lambda \). Observe that, for all \(y \in {\bar{Y}}\), it holds that

$$\begin{aligned} \Vert y \Vert ^{\lambda ^N({\bar{Y}})}_\infty = \max _{i = 1, \dots , p} \frac{y_i}{\max _{{\bar{y}} \in {\bar{Y}}} {\bar{y}}_i } M \le M, \end{aligned}$$

with \(M = \left( \sum _{j=1}^p \frac{1}{\max _{{\bar{y}} \in {\bar{Y}}} {\bar{y}}_j} \right) ^{-1}\). Thus, if an image y contributes to the local nadir image, it holds that \(\Vert y \Vert ^{\lambda ^N({\bar{Y}})}_\infty = M\). In particular, all images contributing to \(y^N({\bar{Y}})\) share the same weighted Tchebycheff norm value with weight \(\lambda ^N({\bar{Y}})\).

The local nadir image is closely related to the intersection of weight set components, as shown in the following proposition.

Proposition 3.3

Let a subset of weakly nondominated images \({\bar{Y}} \subseteq Y_{wN}\) be given. Then, \( \bigcap _{y \in {\bar{Y}}} \Lambda (y) = \Lambda (y^N({\bar{Y}})). \)

Proof

Let \({\bar{Y}} \subseteq Y_{wN}\). We abbreviate \(y^N {:}{=}y^N({\bar{Y}})\). If \(\bigcap _{y \in {\bar{Y}}} \Lambda (y) = \emptyset \), the inclusion ‘\(\subseteq \)’ holds trivially. Let \(\lambda \in \bigcap _{y \in {\bar{Y}}} \Lambda (y)\). Then, there exists a constant \(c > 0\) such that \(\Vert {\bar{y}} \Vert ^{\lambda }_\infty = c\) for all \({\bar{y}} \in {\bar{Y}}\) and \(c \le \Vert y' \Vert ^{\lambda }_\infty \) for all \(y' \in Y\). We show that \(\Vert y^N \Vert ^{\lambda }_\infty = c\). Since \(\max _{i = 1, \dots , p} \lambda _i {\bar{y}}_i = c\) for all \({\bar{y}} \in {\bar{Y}}\), it is \( \lambda _i {\bar{y}}_i \le c \text { for all } i = 1,\dots , p \text { and for all } {\bar{y}} \in {\bar{Y}}. \) Thus, \(\max _{{\bar{y}} \in {\bar{Y}}} \lambda _i {\bar{y}}_i \le c\) and, therefore, \(\lambda _i y^N_i = \lambda _i \cdot \max _{{\bar{y}} \in {\bar{Y}}} {\bar{y}}_i \le c\) for all \(i = 1, \dots , p\). Conversely, \(y^N \geqq {\bar{y}}\) for any \({\bar{y}} \in {\bar{Y}}\) and \(\lambda \geqq 0\) imply \(\Vert y^N \Vert ^{\lambda }_\infty \ge \Vert {\bar{y}} \Vert ^{\lambda }_\infty = c\). Consequently, \(\Vert y^N \Vert ^{\lambda }_\infty = c\), and \(\lambda \in \Lambda (y^N)\). The reverse inclusion follows since \({\bar{y}} \leqq y^N\) for all \({\bar{y}} \in {\bar{Y}}\) by the definition of the local nadir image. \(\square \)
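These observations are easy to verify computationally. The following Python sketch computes the local nadir image and weight for \({\bar{Y}} = \{y^1, y^2\}\) from Example 2.9 and confirms that both images attain the common norm value M and are optimal at \(\lambda ^N({\bar{Y}})\), in line with Proposition 3.3.

```python
from fractions import Fraction as F

Y = [(3, 1, 2), (2, 1, 3), (2, 2, 2), (1, 2, 3)]   # Example 2.9

def tcheby(lmbda, y):
    return max(l * yi for l, yi in zip(lmbda, y))

def kernel_weight(y):
    total = sum(F(1, yi) for yi in y)
    return tuple(F(1, yi) / total for yi in y)

Ybar  = [Y[0], Y[1]]                               # subset {y^1, y^2}
y_nad = tuple(max(c) for c in zip(*Ybar))          # local nadir image (3, 1, 3)
l_nad = kernel_weight(y_nad)                       # local nadir weight (1/5, 3/5, 1/5)

print([tcheby(l_nad, y) for y in Ybar])            # both equal M = 3/5
print(min(tcheby(l_nad, y) for y in Y))            # also 3/5: y^1 and y^2 are optimal
                                                   # at l_nad, which thus lies in
                                                   # Lambda(y^1) and Lambda(y^2)
```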

Thus, Proposition 3.3 implies that Corollary 2.12 and Proposition 2.14 hold in fact for intersections of weight set components:

Corollary 3.4

Let a subset of weakly nondominated images \({\bar{Y}} \subseteq Y_{wN}\) be given. Then:

  (i) If \(\lambda ^N({\bar{Y}}) \notin \bigcap _{y \in {\bar{Y}}} \Lambda (y)\), then \(\bigcap _{y \in {\bar{Y}}} \Lambda (y) = \emptyset \).

  (ii) The intersection \(\bigcap _{y \in {\bar{Y}}} \Lambda (y)\) is a star-shaped set with \(\lambda ^N({\bar{Y}})\) in its kernel.

  (iii) For \(k \in \{1, \dots , p\}\) and \(a_i >0\), \(i \ne k\), the intersection \(\bigcap _{y \in {\bar{Y}}} \Lambda (y) \cap H_{k,a}\) is convex.

Proof

If \(\lambda ^N({\bar{Y}}) \notin \bigcap _{y \in {\bar{Y}}} \Lambda (y)\), it is \(\lambda ^N({\bar{Y}}) \notin \Lambda (y^N)\) by Proposition 3.3. Then, by the proof of Proposition 2.3, it follows \(y^N \notin Y_{wN}\) and, therefore, the weight set component of the local nadir image is empty. Thus, we get by Proposition 3.3 that \(\bigcap _{y \in {\bar{Y}}} \Lambda (y) = \emptyset \). This shows (i). If \(\bigcap _{y \in {\bar{Y}}} \Lambda (y) = \emptyset \), there is nothing to show. Otherwise, statements (ii) and (iii) follow by Corollary 2.12 and Proposition 2.14, respectively, as well as Proposition 3.3 since \(\Lambda (y^N) \ne \emptyset \) and, thus, \(y^N \in Y_{wN}\). \(\square \)

Fig. 4

The local nadir weights (a) \(\lambda ^N(\{y^1,y^2\})\), (b) \(\lambda ^N(\{y^2,y^3\}) = \lambda ^N(\{y^3,y^{4}\}) = \lambda ^N(\{y^2,y^3,y^4\})\), and (c) \(\lambda ^N(\{y^1,y^2,y^3\}) = \lambda ^N(\{y^1,y^3,y^{4}\}) = \lambda ^N(\{y^1,y^2,y^3,y^4\})\) for Example 2.9. The intersections of weight set components are always star-shaped sets. In particular, the corresponding local nadir weight is contained in the kernel

4 A polytopal subdivision of the weight set components

In the following, we construct a representation of the weight set components as unions of polytopes based on the idea of [9]: For an image \(y \in Y_N\), the weight set can be decomposed into p polytopes, where the ith polytope contains all weights such that the weighted Tchebycheff norm of y is attained in the ith component. By taking all nondominated images into account, we can refine this decomposition such that, for each polytope obtained and for any image \(y \in Y_N\), the index in which the weighted Tchebycheff norm is attained can be determined exactly. Hence, additional dividing hyperplanes based on which image is optimal can be added. In this section, we establish that this construction yields a polytopal subdivision of each weight set component, which lays a well-defined foundation for a notion of dimension.

We formally state the construction; each of its steps is illustrated in Fig. 5 based on Example 2.9. Let \(Y_N = \{y^1, \dots , y^R\}\). For \(y^1\), recall the ith dimensional weight set component for \(i\in \{1,\dots ,p\}\) (see Definition 2.10):

$$\begin{aligned} \Lambda (y^1,i) = \{ \lambda \in \Lambda (y^1): \lambda _i y^1_i \ge \lambda _k y^1_k \text { for all } k = 1, \dots , p\}. \end{aligned}$$

Using \(y^1\), we subdivide the weight set into p polytopes \(P_{(i)} {:}{=}\{ \lambda \in \Lambda : \lambda _i y^1_i \ge \lambda _k y^1_k, k\ne i\}\), \(i=1, \dots , p\) (Fig. 5a). For a weight \(\lambda \) in one of these polytopes, we can then immediately identify the index (pairs) in which the weighted Tchebycheff norm (with weight \(\lambda \)) of \(y^1\) is attained. Using \(y^2\), we can further subdivide the weight set into (at most) \(p^2\) polytopes \(P_{(i_1, i_2)} {:}{=}\{ \lambda \in \Lambda : \lambda _{i_1} y^1_{i_1} \ge \lambda _k y^1_k, k\ne i_1, \lambda _{i_2} y^2_{i_2} \ge \lambda _k y^2_k, k\ne i_2\}\), \(i_1,i_2=1, \dots , p\), to identify weights for which the weighted Tchebycheff norm of \(y^1\) is attained in index \(i_1\) and the weighted Tchebycheff norm of \(y^2\) is attained in index \(i_2\) (Fig. 5b). This reasoning can be extended to multiple images \(y^1, \dots , y^R\): For each image \(y^r\), we choose an index \(i_r \in \{1, \dots , p\}\) and consider

$$\begin{aligned} P_{(i_1, \dots , i_R)} {:}{=}\left\{ \lambda \in \Lambda : \begin{array}{ll} \lambda _{i_1} y^1_{i_1} \ge \lambda _k y^1_k &{} \text { for } k \ne i_1,\\ \lambda _{i_2} y^2_{i_2} \ge \lambda _{k} y^2_k &{} \text { for } k \ne i_2,\\ \quad \vdots &{} \\ \lambda _{i_R} y^R_{i_R} \ge \lambda _{k} y^R_k &{} \text { for } k \ne i_R \end{array} \right\} . \end{aligned}$$
(4.1)

Obviously, each set \(P_{(i_1, \dots , i_R)}\) is a polytope and each weight \(\lambda \in \Lambda \) is contained in at least one polytope of the form (4.1). If \(\lambda \in P_{(i_1, \dots , i_R)}\), we can deduce that \(\Vert y^r \Vert ^{\lambda }_\infty = \lambda _{i_r} y^r_{i_r}\) for all \(r = 1, \dots , R\) (Fig. 5e). Hence, by Proposition 2.7, deciding whether \(\lambda \in \Lambda (y^r)\) holds true reduces to checking the \(R-1\) inequalities in (4.2):

$$\begin{aligned} \Lambda (y^r) \cap P_{(i_1, \dots , i_R)} = \{ \lambda \in P_{(i_1, \dots , i_R)}: \lambda _{i_r} y^r_{i_r} \le \lambda _{i_s} y^s_{i_s} \text { for all } s = 1, \dots , R,\ s \ne r\}. \end{aligned}$$
(4.2)
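A brute-force implementation of this membership test is straightforward. The following Python sketch (on the data of Example 2.9) determines, for a given weight, a polytope of the form (4.1) containing it and then evaluates the inequalities of (4.2).

```python
from fractions import Fraction as F

Y_N = [(3, 1, 2), (2, 1, 3), (2, 2, 2), (1, 2, 3)]   # Example 2.9, p = 3

def signature(lmbda):
    """For each y^r an index i_r attaining max_i lmbda_i y^r_i, i.e., a tuple
    (i_1, ..., i_R) such that lmbda lies in P_{(i_1, ..., i_R)} of (4.1)."""
    return [max(range(3), key=lambda i: lmbda[i] * y[i]) for y in Y_N]

def components_at(lmbda):
    """All r with lmbda in Lambda(y^r), checked via the inequalities of (4.2)."""
    sig = signature(lmbda)
    norm = [lmbda[sig[r]] * y[sig[r]] for r, y in enumerate(Y_N)]
    return [r for r in range(len(Y_N))
            if all(norm[r] <= norm[s] for s in range(len(Y_N)) if s != r)]

lam = (F(1, 5), F(3, 5), F(1, 5))
print(signature(lam))       # attaining indices, e.g., [0, 1, 1, 1]
print(components_at(lam))   # [0, 1]: lam lies in Lambda(y^1) and Lambda(y^2)
```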
Fig. 5

The construction of the polytopal subdivision of the weight set \(\Lambda \) according to (4.1) for Example 2.9 with \(p = 3\). (a) The image \(y^1\) induces a decomposition of the weight set into three polytopes \(P_{(i)} = \{ \lambda \in \Lambda : \lambda _i y^1_i \ge \lambda _k y^1_k, k \ne i\}\). (b) The images \(y^1\) and \(y^2\) induce a decomposition of the weight set into (at most) \(p^2\) polytopes \(P_{(i_1,i_2)} = \{ \lambda \in \Lambda : \lambda _{i_1} y^1_{i_1} \ge \lambda _k y^1_k, k \ne i_1, \lambda _{i_2} y^2_{i_2} \ge \lambda _k y^2_k, k \ne i_2\}\). (c) Taking the other images \(y^3\) and \(y^4\) into account, the weight set can be decomposed into the polytopes \(P_{(i_1,i_2,i_3,i_4)} = \{ \lambda \in \Lambda : \lambda _{i_r} y^r_{i_r} \ge \lambda _k y^r_k, k\ne i_r, r=1,2,3,4\}\). (d) Each polytope \(P_{(i_1,i_2,i_3,i_4)}\) can further be subdivided based on the optimal image for the weighted Tchebycheff scalarizations. For example, \(P_{(i_1,i_2,i_3,i_4)}\) is divided into four polytopes \(\Lambda (y^r) \cap P_{(i_1,i_2,i_3,i_4)}\). Note that the polytopes \(\Lambda (y^3) \cap P_{(i_1,i_2,i_3,i_4)}\) and \(\Lambda (y^4) \cap P_{(i_1,i_2,i_3,i_4)}\) have dimension one, cf. Fig. 2. (e) Then, the polytopes can individually be assigned to the (in some cases multiple) weight set components. See Fig. 2 for a representation of the individual weight set components

See Fig. 5d for a zoomed-in illustration of such a further refinement. Since \(\Lambda (y^r) \cap P_{(i_1, \dots , i_R)}\) is a polytope, this motivates the following definition of a family of polytopes for an image \(y^r \in Y_N\):

$$\begin{aligned} {\tilde{{\mathcal {C}}}} (y^r)&{:}{=}\left\{ \Lambda (y^r) \cap P_{(i_1, \dots , i_R) }: (i_1, \dots , i_R) \in \{1, \dots ,p \}^R \right\} . \end{aligned}$$
(4.3)

We state some properties of the families defined in (4.3).

Proposition 4.1

The following statements hold:

  (i) It holds that \(\bigcup _{P \in {\tilde{{\mathcal {C}}}}(y^r) } P =\Lambda (y^r)\) for all \(y^r \in Y_N\).

  (ii) Let \(y^r, y^s \in Y_N\) be two nondominated images and let \(P_{(i_1, \dots , i_R )}, P_{(j_1, \dots , j_R)}\) be two polytopes. Then, \(\left( P_{(i_1, \dots , i_R )} \cap \Lambda (y^r) \right) \cap \left( P_{(j_1, \dots , j_R)} \cap \Lambda (y^s) \right) \) is a face of both \(P_{(i_1, \dots , i_R )} \cap \Lambda (y^r)\) and \(P_{(j_1, \dots , j_R)} \cap \Lambda (y^s)\). In particular, the face is inclusion-wise maximal.

Proof

Statement (i) is easy to see. Without loss of generality, let \(Y_N\) be enumerated such that \(y^r = y^1\) and \(y^s = y^2\). We define

$$\begin{aligned} H&{:}{=}\{ \lambda \in {\mathbb {R}}^p_\ge : \lambda _{i_1} y_{i_1}^1 = \lambda _{j_1} y^1_{j_1},\; \dots ,\; \lambda _{i_R} y_{i_R}^R = \lambda _{j_R} y_{j_R}^R,\ \lambda _{i_1} y_{i_1}^1 = \lambda _{j_2} y_{j_2}^2 \} \\&= \{ \lambda \in {\mathbb {R}}^p_\ge : \lambda _{i_1} y^1_{i_1} = \lambda _{j_2} y^2_{j_2} \} \cap \left( \bigcap \limits _{r = 1}^R \{ \lambda \in {\mathbb {R}}^p_\ge : \lambda _{i_r} y^r_{i_r} = \lambda _{j_r} y^r_{j_r}\} \right) . \end{aligned}$$

On the one hand,

$$\begin{aligned} \{ \lambda \in {\mathbb {R}}^p_\ge : \lambda _{i_1} y^1_{i_1} \le \lambda _{j_2} y^2_{j_2} \} \cap \left( \bigcap \limits _{r = 1}^R \{ \lambda \in {\mathbb {R}}^p_\ge : \lambda _{i_r} y^r_{i_r} \ge \lambda _{j_r} y^r_{j_r}\} \right) \end{aligned}$$

is an intersection of halfspaces induced by inequalities that are valid for \(P_{(i_1, \dots , i_R )} \cap \Lambda (y^1)\), as shown in (4.2). These inequalities define a face \(F^1 = P_{(i_1, \dots , i_R )} \cap \Lambda (y^1) \cap H\), if it is nonempty. On the other hand,

$$\begin{aligned} \{ \lambda \in {\mathbb {R}}^p_\ge : \lambda _{i_1} y^1_{i_1} \ge \lambda _{j_2} y^2_{j_2} \} \cap \left( \bigcap \limits _{r = 1}^R \{ \lambda \in {\mathbb {R}}^p_\ge : \lambda _{i_r} y^r_{i_r} \le \lambda _{j_r} y^r_{j_r}\} \right) \end{aligned}$$

is an intersection of halfspaces induced by inequalities that are valid for \(P_{(j_1, \dots , j_R)} \cap \Lambda (y^2)\). Analogously, the set \(F^2 = P_{(j_1, \dots , j_R)} \cap \Lambda (y^2) \cap H\) is a face of \(P_{(j_1, \dots , j_R)} \cap \Lambda (y^2)\), if it is nonempty. By definition, a weight \(\lambda \in \left( P_{(i_1, \dots , i_R )} \cap \Lambda (y^1) \right) \cap \left( P_{(j_1, \dots , j_R)} \cap \Lambda (y^2) \right) \) satisfies, for \(r = 1, \dots , R\):

$$\begin{aligned} \lambda _{i_r} y^r_{i_r}&\ge \lambda _k y^r_k \text { for } k \ne i_r,\\ \lambda _{j_r} y^r_{j_r}&\ge \lambda _k y^r_k \text { for } k \ne j_r. \end{aligned}$$

This implies \(\lambda _{i_r} y^r_{i_r} = \lambda _{j_r} y^r_{j_r}\). Since also \(\lambda _{i_1} y^1_{i_1} \le \lambda _{i_2} y^2_{i_2}\) and \(\lambda _{j_2} y^2_{j_2} \le \lambda _{j_1} y_{j_1}^1\) hold, we get \(\lambda _{i_1} y^1_{i_1} \le \lambda _{i_2} y^2_{i_2} = \lambda _{j_2} y^2_{j_2} \le \lambda _{j_1} y^1_{j_1} = \lambda _{i_1} y^1_{i_1}\). Thus, the equality \(\lambda _{i_1} y^1_{i_1} = \lambda _{j_2} y^2_{j_2}\) holds true. It follows that

$$\begin{aligned} \lambda \in F^1&\Leftrightarrow \lambda \in \left( P_{(i_1, \dots , i_R )} \cap \Lambda (y^1) \right) \cap \left( P_{(j_1, \dots , j_R)} \cap \Lambda (y^2) \right) \Leftrightarrow \lambda \in F^2 \end{aligned}$$

holds. Consequently, \( F^1 = F^2 = \left( P_{(i_1, \dots , i_R )} \cap \Lambda (y^1) \right) \cap \left( P_{(j_1, \dots , j_R)} \cap \Lambda (y^2) \right) \). Inclusion-wise maximality also follows from the latter equalities. \(\square \)

This motivates augmenting the families of polytopes \({\tilde{{\mathcal {C}}}}(y)\) by all faces of their members.

Definition 4.2

For an image \(y \in Y_N\), the weight set complex of y with respect to the weighted Tchebycheff scalarization is defined by

$$\begin{aligned} {\mathcal {C}}(y) {:}{=}\{ F : \text { there exists a polytope } P \in {\tilde{{\mathcal {C}}}} (y) \text { such that } F \text { is a face of } P \}. \end{aligned}$$

Since Proposition 4.1 (ii) remains true if \(y^r\) is chosen to be equal to \(y^s\), we can conclude that polytopes in \({\tilde{{\mathcal {C}}}}(y^r)\) always intersect in common faces. Hence, Proposition 4.1(i) implies that the weight set complex of \(y^r\) is indeed a polytopal subdivision of its weight set component \(\Lambda (y^r)\).

Corollary 4.3

Let \(y \in Y_N\). Then, \({\mathcal {C}}(y)\) is a polytopal complex such that

$$\begin{aligned} \bigcup _{P \in {\mathcal {C}}(y)} P = \Lambda (y). \end{aligned}$$

Fig. 4 shows the subdivision for Example 2.9. Moreover, Proposition 2.8 can be adapted: the knowledge of all polytopal complexes \({\mathcal {C}}(y)\) is sufficient to cover the weight set, that is, \(\Lambda = \bigcup _{y \in Y_N} \bigcup _{P \in {\mathcal {C}}(y)} P\). Note a slight but important difference here: for a nondominated image \(y \in Y_N\), full knowledge of all polytopes in the weight set complex \({\mathcal {C}}(y)\) (and, hence, of \(\Lambda (y)\)) is no longer required, since individual polytopes can belong to multiple weight set complexes. Nevertheless, the knowledge of all nondominated images is still required, and the underlying set of the union of all known polytopes must cover the complete weight set.
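To make this covering property concrete, the following minimal Python sketch samples the weight simplex on a grid and records, for each image, the weights at which it is optimal for the weighted Tchebycheff scalarization — a discrete stand-in for the components \(\Lambda (y)\). It assumes that weights are normalized to the standard simplex; the image data, helper names, and the tolerance are illustrative assumptions only.

```python
import itertools

def tcheb(lam, y):
    """Weighted Tchebycheff norm ||y||_inf^lam = max_i lam_i * y_i."""
    return max(l * yi for l, yi in zip(lam, y))

def optimal_images(lam, Y):
    """Indices of images in Y minimizing the weighted Tchebycheff norm at lam."""
    vals = [tcheb(lam, y) for y in Y]
    best = min(vals)
    return [r for r, v in enumerate(vals) if v <= best + 1e-12]

# Illustrative image set with p = 3 objectives (hypothetical data).
Y = [(2, 1, 3), (2, 2, 2), (1, 2, 3)]

# Sample the weight simplex {lam >= 0, sum(lam) = 1} on a grid and collect,
# for each image y^r, the grid weights at which it is optimal -- a discrete
# approximation of the weight set component Lambda(y^r).
steps = 20
components = {r: [] for r in range(len(Y))}
for a, b in itertools.product(range(steps + 1), repeat=2):
    if a + b <= steps:
        lam = (a / steps, b / steps, (steps - a - b) / steps)
        for r in optimal_images(lam, Y):
            components[r].append(lam)

# Every grid weight belongs to at least one component: the components
# Lambda(y), y in Y_N, cover the whole weight set.
n_grid = (steps + 1) * (steps + 2) // 2
assert sum(len(c) for c in components.values()) >= n_grid
```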

Remark 4.4

Analogous to the construction of the weight set complex, we get a polytopal subdivision of the dimensional weight set components if we fix the inequalities \(\lambda _i y^r_i \ge \lambda _k y^r_k\), \(k \ne i\), in (4.1) and subsume all faces of the polytopes in

$$\begin{aligned} {\tilde{{\mathcal {C}}}} (y^r,i){:}{=}\{ \Lambda (y^r)&\cap P_{(i_1, \dots , i_{r-1}, i, i_{r+1}, \dots , i_R)} : \\&(i_1, \dots , i_{r-1}, i_{r+1}, \dots , i_R) \in \{1, \dots ,p\}^{R-1} \}. \end{aligned}$$

Then, by construction, the family of polytopes

$$\begin{aligned} {\mathcal {C}}(y,i) {:}{=}\{ F: \text { there exists a polytope } P \in {\tilde{{\mathcal {C}}}}(y,i) \text { such that } F \text { is a face of } P \} \end{aligned}$$

is again a polytopal complex. This construction is consistent with the definition of the dimensional weight set components: it holds that \(\Lambda (y,i) = \bigcup _{P \in {\mathcal {C}}(y,i)} P\) and, moreover, \(P \in {\mathcal {C}}(y,i)\) implies that \(P \in {\mathcal {C}}(y)\), and, vice versa, for each polytope \(P \in {\mathcal {C}}(y)\) there is an index \(i \in \{1,\dots ,p\}\) such that \(P \in {\mathcal {C}}(y,i)\). That is, \({\mathcal {C}}(y,i)\) is indeed a subcomplex of \({\mathcal {C}}(y)\).

Definition 4.5

For a nondominated image \(y \in Y_{N}\) and an index \(i \in \{1, \dots , p\}\), we call

$$\begin{aligned} {\mathcal {C}}(y,i) {:}{=}\{ F : \text { there exists a polytope } P \in {\tilde{{\mathcal {C}}}}(y,i) \text { such that } F \text { is a face of } P \} \end{aligned}$$

the ith dimensional weight set complex.
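In the same sampling spirit, membership of a weight in the ith dimensional weight set component can be tested directly from the definition: the weight must lie in \(\Lambda (y)\), and the weighted Tchebycheff norm of y must be attained in the ith objective. A minimal sketch, assuming simplex-normalized weights, 0-based indices, and a hypothetical tolerance:

```python
def in_dimensional_component(lam, y, i, Y, tol=1e-12):
    """Test lam in Lambda(y, i): y is optimal at lam and its weighted
    Tchebycheff norm is attained in objective i (0-based)."""
    terms = [l * yi for l, yi in zip(lam, y)]
    val = max(terms)
    # lam lies in Lambda(y) iff no other image has a strictly smaller norm.
    if any(max(l * zi for l, zi in zip(lam, z)) < val - tol for z in Y):
        return False
    return terms[i] >= val - tol

# Example with hypothetical data: list the dimensional components containing lam.
Y = [(2, 1, 3), (2, 2, 2), (1, 2, 3)]
lam = (1/3, 1/3, 1/3)
print([i for i in range(3) if in_dimensional_component(lam, (2, 2, 2), i, Y)])
```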

Taking images in \(Y_{wN} \setminus Y_N\) into account, the construction of the polytopal subdivision needs to be refined. This can be done by adapting (4.1) and (4.2) based on an enumeration of \(Y_{wN}\). Then, the following result can be analogously derived.

Corollary 4.6

For any weakly nondominated image \(y \in Y_{wN}\), there exists a polytopal subdivision of \(\Lambda (y)\).

Remark 4.7

Due to the construction of these polytopal subdivisions, the intersection of weight set components induces a polytopal subdivision that consists only of polytopes belonging to both subdivisions. That is, \(\Lambda (y^1) \cap \Lambda (y^2) = \bigcup _{P \in {\mathcal {C}}(y^1) \cap {\mathcal {C}}(y^2)} P\). In particular, \({\mathcal {C}}(y^1) \cap {\mathcal {C}}(y^2)\) itself is also a polytopal complex. Thus, we can compare weight set components based on the polytopes in the polytopal subdivision.

Similarly, the union of two weight set complexes is a polytopal subdivision of the union of the corresponding weight set components. This is particularly important for defining a notion of dimension for (unions or intersections of) weight set components.

5 The dimension of the weight set components

In this section, we analyze the dimension of the weight set components. First, we define the dimension with respect to the associated polytopal complex. Recall that a polytopal complex is defined via a finite set of polytopes and, for all weight set components, there exists a polytopal subdivision, see Corollary 4.3.

Definition 5.1

For an image \(y \in Y\), the dimension of its weight set component \(\Lambda (y)\) is defined by \( \dim (\Lambda (y)) {:}{=}\dim ({\mathcal {C}}(y)). \)

Note that the dimension of the dimensional weight set components as well as the intersections or unions of weight set components can be defined analogously by Remarks 4.4 and 4.7. In the following, we distinguish between images in \(Y_N\) and \(Y_{wN} {\setminus } Y_N\).

We first consider nondominated images. Due to the finite number of polytopes in \({\mathcal {C}}(y)\), Corollary 2.6 immediately implies that the dimension of the corresponding weight set complexes \({\mathcal {C}}(y)\) must be equal to \(p-1\).

Corollary 5.2

Let \(y \in Y_N\). Then, \(\dim (\Lambda (y)) = p-1\).

Since a weakly nondominated but dominated image \(y^w \in Y_{wN}{\setminus } Y_N\) is optimal for the scalarized problem \(\Pi ^{TS} (\lambda (y^w))\) with the kernel weight of \(y^w\), the corresponding weight set component \(\Lambda (y^w)\) is not empty. We have already seen that these sets also have a polytopal subdivision and fulfil the convexity properties stated in Proposition 2.14 and Corollary 2.12. Yet, Corollary 5.2 does not immediately carry over, since there is no \(\varepsilon > 0\) such that \(B_{\varepsilon }(\lambda (y^w)) \subseteq \Lambda (y^w)\) holds. The next example shows this for two objectives.

Example 5.3

Let \({\tilde{Y}} = \{ {\tilde{y}}^1 = (4,4)^\top ,\ {\tilde{y}}^2 = (2,4)^\top \}\). Clearly, \({\tilde{y}}^2 \in {\tilde{Y}}_N\) and \({\tilde{y}}^1 \in {\tilde{Y}}_{wN} {\setminus } {\tilde{Y}}_N\). For the kernel weight \(\lambda ({\tilde{y}}^1) = (\frac{1}{2},\frac{1}{2})^\top \), it holds \( \Vert {\tilde{y}}^1 \Vert ^{\lambda ({\tilde{y}}^1)}_\infty = 2 = \Vert {\tilde{y}}^2 \Vert ^{\lambda ({\tilde{y}}^1)}_\infty \). However, for any scalar \(\varepsilon > 0\) and \(\lambda = ( \lambda _1({\tilde{y}}^1) + \varepsilon , \lambda _2({\tilde{y}}^1) - \varepsilon )^\top \), we get \( \Vert {\tilde{y}}^1 \Vert ^{\lambda }_\infty = 2 + 4\varepsilon> 1 + 2\varepsilon = \lambda _1 {\tilde{y}}^2_1 \text { as well as }\Vert {\tilde{y}}^1 \Vert ^{\lambda }_\infty = 2 + 4\varepsilon > 2 - 4\varepsilon = \lambda _2 {\tilde{y}}^2_2. \) Hence, \(\Vert {\tilde{y}}^1 \Vert ^{\lambda }_\infty > \Vert {\tilde{y}}^2 \Vert ^{\lambda }_\infty \). Nevertheless, we can conclude \( \Lambda ({\tilde{y}}^1) = \Lambda ({\tilde{y}}^2) \cap \{ \lambda \in \Lambda : \lambda _1 \le \lambda _1({\tilde{y}}^1) \}, \) since, for weights with \(\lambda _1 \le \frac{1}{2}\), the weighted Tchebycheff norm values of both \({\tilde{y}}^1\) and \({\tilde{y}}^2\) are attained in the second objective and, thus, they coincide. Consequently, both weight set components have dimension \(p-1 = 1\).
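The computations of this example can be verified directly; a minimal sketch, choosing \(\varepsilon = 1/64\) so that all values are exact in binary floating point (the helper name is an assumption):

```python
def tcheb(lam, y):
    """Weighted Tchebycheff norm max_i lam_i * y_i."""
    return max(l * yi for l, yi in zip(lam, y))

y1, y2 = (4, 4), (2, 4)
kernel = (0.5, 0.5)                    # kernel weight of y1
assert tcheb(kernel, y1) == tcheb(kernel, y2) == 2

eps = 1 / 64                           # exactly representable perturbation
lam = (0.5 + eps, 0.5 - eps)
assert tcheb(lam, y1) == 2 + 4 * eps   # y1 becomes strictly worse ...
assert tcheb(lam, y2) == 2 - 4 * eps   # ... than y2 for lambda_1 > 1/2,
# confirming Lambda(y1) = Lambda(y2) ∩ {lambda : lambda_1 <= 1/2}.
```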

This raises the question of whether an analogue of Corollary 5.2 holds for weakly nondominated but dominated images. This is not the case.

Example 5.4

(Example 5.3 cont.) We add another image to the image set:

$$\begin{aligned} {\tilde{Y}}^* = {\tilde{Y}} \cup \{ {\tilde{y}}^3 = (4,2)^\top \}. \end{aligned}$$

Analogously, we get \(\Vert {\tilde{y}}^1 \Vert ^{\lambda }_\infty > \Vert {\tilde{y}}^3 \Vert ^{\lambda }_\infty \) for \(\lambda = (\lambda _1({\tilde{y}}^1) - \varepsilon , \lambda _2({\tilde{y}}^1) + \varepsilon )^\top \) and any \(\varepsilon >0\). Thus, \(\Lambda ({\tilde{y}}^1) = \{ \lambda ({\tilde{y}}^1)\}\) and \(\dim (\Lambda ({\tilde{y}}^1)) = 0\).
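Scanning the weight simplex confirms that adding \({\tilde{y}}^3\) collapses \(\Lambda ({\tilde{y}}^1)\) to the kernel weight; a small sketch under the same simplex-normalization assumption, with an arbitrary grid resolution:

```python
def tcheb(lam, y):
    return max(l * yi for l, yi in zip(lam, y))

Y_star = [(4, 4), (2, 4), (4, 2)]      # y1, y2, y3
y1 = Y_star[0]

# Weights (t, 1-t) at which y1 is still optimal: only the kernel weight remains.
weights = [t / 1000 for t in range(1001)
           if tcheb((t / 1000, 1 - t / 1000), y1)
              <= min(tcheb((t / 1000, 1 - t / 1000), y) for y in Y_star)]
print(weights)                         # [0.5] -> dim(Lambda(y1)) = 0
```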

How can we characterize the dimension of the weight set components of images \(y^w \in Y_{wN} \setminus Y_N\)? We will see that it depends on the images dominating \(y^w\). If \(y^w \in Y_{wN} {\setminus } Y_N\), then there exists an image \(y \in Y_N\) such that \(y_i \le y^w_i\) for all \(i = 1,\dots , p\), and, consequently, \(\Lambda (y^w,i) \subseteq \Lambda (y,i)\) for all \(i\) satisfying \(y^w_i = y_i\).

Lemma 5.5

Let \(y^w \in Y_{wN} {\setminus } Y_N\) and \(y \in Y_N\) such that \(y \le y^w\) and \(y_{i_1} < y^w_{i_1}\), \(\dots \), \(y_{i_l} < y^w_{i_l}\) for indices \(\{i_1, \dots , i_l\} \subseteq \{1, \dots , p\}\). Then, \(\Lambda (y^w) {\setminus } \left( \bigcup _{i \notin \{i_1, \dots , i_l \} } \Lambda (y^w, i) \right) = \emptyset .\)

Proof

Without loss of generality, let \(i_1 = 1, \dots , i_l = l\). Assume that there is a weight \(\lambda \in \Lambda (y^w) {\setminus } \left( \bigcup _{i = l+1}^p \Lambda (y^w, i) \right) \). Then,

$$\begin{aligned} \lambda \in \left( \bigcup \limits _{i = 1}^l \Lambda (y^w,i) \right) \setminus \left( \bigcup \limits _{i = l+1}^p \Lambda (y^w, i) \right) , \end{aligned}$$

which, with \(y \leqq y^w\), implies that

$$\begin{aligned}&\lambda _i y_i < \lambda _i y^w_i \quad \text {for } i=1,\dots ,l, \end{aligned}$$
(5.1)
$$\begin{aligned}&\lambda _k y_k \le \lambda _k y^w_k \quad \text {for } k=l+1, \dots , p, \end{aligned}$$
(5.2)
$$\begin{aligned}&\text {there is some } i \in \{1,\dots ,l\} \text { such that } \lambda _k y^w_k < \lambda _i y^w_i \quad \text {for } k = l+1, \dots , p. \end{aligned}$$
(5.3)

If \(\Vert y \Vert ^{\lambda }_\infty = \lambda _i y_i\) for an \(i \in \{1, \dots , l\}\), Eq. (5.1) implies that \(\lambda _i y_i < \lambda _i y^w_i \le \Vert y^w \Vert ^{\lambda }_\infty \). If \(\Vert y \Vert ^{\lambda }_\infty = \lambda _k y_k\) for a \(k \in \{l+1, \dots , p\}\), Eqs. (5.2) and (5.3) imply that \(\lambda _k y_k \le \lambda _k y_k^w < \lambda _i y^w_i \le \Vert y^w \Vert ^{\lambda }_\infty \) for an \(i \in \{1, \dots , l\}\). In both cases, \(\Vert y \Vert ^{\lambda }_\infty < \Vert y^w \Vert ^{\lambda }_\infty \), contradicting \(\lambda \in \Lambda (y^w)\). \(\square \)

Thus, the dimension of the weight set component of a weakly nondominated but dominated image \(y^w\) depends on the images that dominate \(y^w\) and on the indices in which those images are strictly better.

Proposition 5.6

Let \(y^w \in Y_{wN} \setminus Y_N\). If, for all \(I \subseteq \{1,\dots , p\}\) with \(|I| = l\), there exists an image \(y \in Y_N\) such that \(y \leqq y^w\) and \(y_i < y^w_i \) for all \(i \in I\), then \(\dim (\Lambda (y^w)) \le p - 1 - l\).

Proof

This follows from Lemma 5.5: under the assumption, every weight \(\lambda \in \Lambda (y^w)\) must lie in the dimensional components \(\Lambda (y^w, i)\) for at least \(l+1\) indices \(i\), since otherwise the indices at which the norm of \(y^w\) is attained would be contained in some set \(I\) with \(|I| = l\), contradicting Lemma 5.5. Hence, each such \(\lambda \) satisfies at least \(l\) independent equations of the form \(\lambda _i y^w_i = \lambda _j y^w_j\), which bounds the dimension of \(\Lambda (y^w)\) by \(p - 1 - l\). \(\square \)

The dimension of the weight set components of images in \(Y_{wN} {\setminus } Y_N\) is thus determined by the maximal cardinality \(l\) for which the assumption of Proposition 5.6 is satisfied.
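This maximal cardinality can be found mechanically: search for the largest l such that every index set of size l is covered by some dominating image. A sketch with a hypothetical helper, applied to the data of Example 5.4:

```python
from itertools import combinations

def dim_upper_bound(y_w, dominators, p):
    """p - 1 - l for the largest l such that every index set I with |I| = l
    admits a dominating image strictly better than y_w on all of I."""
    best_l = 0
    for l in range(1, p + 1):
        if all(any(all(y[i] < y_w[i] for i in I) for y in dominators)
               for I in combinations(range(p), l)):
            best_l = l
        else:
            break
    return p - 1 - best_l

# Example 5.4: y2 = (2,4) and y3 = (4,2) dominate y1 = (4,4).
print(dim_upper_bound((4, 4), [(2, 4), (4, 2)], p=2))   # 0, matching Example 5.4
```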

Example 5.7

(Example 2.9 cont.) We augment the set Y to

$$\begin{aligned} Y' = Y \cup \{ y^5 = (3,2,2)^\top , y^6 = (2,3,3)^\top , y^7 = (3,2,3)^\top \}. \end{aligned}$$

Then, \(y^5,y^6\), and \(y^7\) are weakly nondominated but dominated images. For \(y^5\), it holds that \(y^5_3 \le y_3\) for all \(y \in Y\), so no image is strictly better than \(y^5\) in the third component and the assumption of Proposition 5.6 fails for every \(l \ge 1\). Hence, \(\dim (\Lambda (y^5)) > 3 - 1 - 1 = 1\) and, therefore, \(\dim (\Lambda (y^5)) = 2 = \dim (\Lambda )\) must hold.

The nondominated images dominating \(y^6\) are \(y^2 = (2,1,3)^\top \), \(y^3 = (2,2,2)^\top \), and \(y^4 = (1,2,3)^\top \). Thus, for every index i, there exists an image \(y \in Y_N\) such that \(y \leqq y^6\) and \(y_i < y^6_i\), so Proposition 5.6 yields \(\dim (\Lambda (y^6)) \le 3 - 1- 1 = 1\). However, for the index pair (1, 3), there does not exist an image satisfying the assumptions of Proposition 5.6. Thus, \(\dim (\Lambda (y^6)) > 3 - 1- 2 = 0\) and we get \(\dim (\Lambda (y^6)) = 1\).

For the image \(y^7\), there exists for every pair of indices an image satisfying the assumptions of Proposition 5.6. Thus, \(\dim (\Lambda (y^7)) \le 3 - 1- 2 = 0\) and, therefore, the weight set component consists only of the kernel weight. Figure 6 illustrates the weight set components of \(y^5\), \(y^6\), and \(y^7\).
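For \(y^6\), the index-set condition can be checked exhaustively against the dominating images listed above (0-based indices; only dominating images enter the assumption of Proposition 5.6, so the remaining images of Y are not needed here):

```python
from itertools import combinations

y6 = (2, 3, 3)
dominators = [(2, 1, 3), (2, 2, 2), (1, 2, 3)]   # y2, y3, y4

for l in (1, 2):
    for I in combinations(range(3), l):
        covered = any(all(y[i] < y6[i] for i in I) for y in dominators)
        print(l, I, covered)
# All singletons are covered, so dim(Lambda(y6)) <= 3 - 1 - 1 = 1; the pair
# (0, 2) is not, so the bound cannot be improved and dim(Lambda(y6)) = 1.
```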

Fig. 6  The weight set components \(\Lambda (y^r)\), \(r = 5,6,7\), of Example 5.7. Note that \(y^r \in Y_{wN} {\setminus } Y_N\) for \(r = 5,6,7\) and, thus, the interior of at least one dimensional weight set component is empty.

As a further consequence, we obtain a characterization of nondominated images.

Corollary 5.8

Let \({{\,\textrm{int}\,}}(\Lambda (y,i))\) denote the set of all weights \(\lambda \in \Lambda (y,i)\) such that there exists a scalar \(\varepsilon >0\) with \(B_\varepsilon (\lambda ) \subseteq \Lambda (y,i)\). An image \(y \in Y\) is nondominated if and only if \({{\,\textrm{int}\,}}(\Lambda (y,i)) \ne \emptyset \) for all \(i = 1, \dots , p\).

The intersection of weight set components. In Sect. 3, the intersection of weight set components \(\bigcap _{y \in {\bar{Y}}} \Lambda (y)\) for \({\bar{Y}} \subseteq Y\) is determined by the weight set component of the (dominated) local nadir image \(y^N({\bar{Y}})\). Thus, the dimension of the intersection sets is characterized by Proposition 5.6. Note that a nonempty intersection implies that all images in \({\bar{Y}}\) contribute to \(y^N({\bar{Y}})\) and that \({\bar{Y}} = \{y' \in Y_N: y' \leqq y^N({\bar{Y}})\}\). Thus, if all images in \({\bar{Y}}\) coincide in at least one index \(i\), it holds that \(\dim (\bigcap _{y \in {\bar{Y}}} \Lambda (y)) = p-1\), and the images share at least one \((p-1)\)-dimensional polytope in their weight set complexes, in particular, in their \(i\)th dimensional weight set components. Notice also that this cannot happen between different dimensional weight set components: for \(i \ne j\), it holds that \(\Lambda (y^1,i) \cap \Lambda (y^2,j) \subseteq \{ \lambda \in \Lambda : \lambda _i y^1_i = \lambda _j y^2_j\}\), and the dimension of the latter polytope is \(p-2\). Considering only two nondominated images, we can therefore define a notion of (proper) adjacency with respect to the weighted Tchebycheff scalarization.
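The role of the local nadir image can be observed numerically on the data of Example 5.4, where the local nadir image of \({\bar{Y}} = \{{\tilde{y}}^2, {\tilde{y}}^3\}\) is exactly \({\tilde{y}}^1 = (4,4)^\top \). A sketch under the simplex-normalization assumption, with illustrative helper names:

```python
def tcheb(lam, y):
    return max(l * yi for l, yi in zip(lam, y))

def is_optimal(lam, y, Y):
    return tcheb(lam, y) <= min(tcheb(lam, z) for z in Y)

Y = [(4, 4), (2, 4), (4, 2)]                         # y1, y2, y3 of Example 5.4
y2, y3 = (2, 4), (4, 2)
y_nadir = tuple(max(a, b) for a, b in zip(y2, y3))   # componentwise max: (4, 4)

grid = [(t / 1000, 1 - t / 1000) for t in range(1001)]
both = [lam for lam in grid if is_optimal(lam, y2, Y) and is_optimal(lam, y3, Y)]
nadir = [lam for lam in grid if is_optimal(lam, y_nadir, Y)]
assert both == nadir == [(0.5, 0.5)]   # Lambda(y2) ∩ Lambda(y3) = Lambda(y_nadir)
```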

Definition 5.9

Let two images \(y, {\bar{y}} \in Y\) be given.

(i) The images y and \({\bar{y}}\) are weakly adjacent (with respect to the weighted Tchebycheff scalarization) if \(\Lambda (y) \cap \Lambda ({\bar{y}}) \ne \emptyset \).

(ii) The images y and \({\bar{y}}\) are adjacent (with respect to the weighted Tchebycheff scalarization) if \(\dim (\Lambda (y) \cap \Lambda ({\bar{y}})) \ge p-2\).

(iii) The images y and \({\bar{y}}\) are properly adjacent (with respect to the weighted Tchebycheff scalarization) if \(\dim (\Lambda (y) \cap \Lambda ({\bar{y}})) = p-2\).

(iv) The (dimensional) weight set components of y and \({\bar{y}}\) overlap if \(\dim (\Lambda (y, i) \cap \Lambda ({\bar{y}}, i) ) = p-1\) for some \(i \in \{1, \dots , p\}\).
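For \(p = 2\), proper adjacency in the sense of Definition 5.9(iii) means that the two components meet in isolated weights (dimension \(p - 2 = 0\)). A grid-based sketch on the images \({\tilde{y}}^2\) and \({\tilde{y}}^3\) of Example 5.4; the grid count is only a crude numerical proxy for the dimension:

```python
def tcheb(lam, y):
    return max(l * yi for l, yi in zip(lam, y))

Y = [(2, 4), (4, 2)]
shared = []
for t in range(1001):
    lam = (t / 1000, 1 - t / 1000)
    best = min(tcheb(lam, y) for y in Y)
    if all(tcheb(lam, y) <= best for y in Y):
        shared.append(lam)

# A single shared weight indicates a 0-dimensional intersection, i.e., the
# two images are properly adjacent (and hence also weakly adjacent).
print(shared)   # [(0.5, 0.5)]
```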

Fig. 7  Two weight set complexes can share a \((p-1)\)-dimensional polytope P (a). Thus, the images are adjacent but not properly adjacent. The adjacency of the images in the image space is visualized in (b). The bold lines indicate an overlapping of the corresponding weight set components. The dotted line indicates weak adjacency.

Figure 7 visualizes this concept for Example 2.9. The definition of adjacency of nondominated images with respect to the weighted Tchebycheff scalarization that is introduced and used in [22, 23] aligns with Definition 5.9(i). Definitions 5.9(ii) and (iii) are motivated by the concept given in [28]. In conclusion, taking the dimension of the intersection set into account, the notion of adjacency can and should be refined.

6 Conclusion

Besides the weighted sum and the \(\varepsilon \)-constraint method, the weighted Tchebycheff method is a frequently applied scalarization technique in multiobjective optimization. The weighted Tchebycheff scalarization problem is closely linked to several other single-objective optimization disciplines, including robust optimization, goal programming, and location theory. It is a building block of many exact and heuristic algorithms, which systematically vary the choice of weights to obtain (a subset of) all nondominated images. In other words, these algorithms utilize elements of the weight set, while the set itself has not yet been the focus of research.

In this article, we provide the first rigorous and comprehensive theory of the set of all eligible weights for the weighted Tchebycheff scalarization. We analyze the polyhedral and combinatorial structure of the set of all weights yielding the same efficient solution as well as the composition of the weight set as a whole. To date, analogous research has mostly been published for the weighted sum method. However, there are substantial differences: the weighted Tchebycheff scalarization is able to yield all efficient solutions (i.e., not only the supported ones as in the weighted sum method). Additionally, due to the absence of convexity, the structure of the weight set of the weighted Tchebycheff method is more complex and its analysis is more technical. Through this analysis, convexity-related properties of and bounds on the dimension of the weight set components have been proven.

In contrast to the weight set decomposition of the weighted sum scalarization, the weighted Tchebycheff scalarization provides additional insights at a higher level. For the weighted sum scalarization, the decomposition describes the gradients of the nondominated part of the convex hull of the set of images as well as information about adjacent nondominated faces. However, it provides information neither about the positioning nor about the size of the convex hull in the image space. In fact, nondominated frontiers (of some multiobjective optimization problems) may vary substantially but still share the same weight set decomposition of the weighted sum scalarization (cf. Figure 8(a)).

In contrast, the weight set decomposition of the weighted Tchebycheff norm yields more information about the positioning of the nondominated images. Note that the weight set decomposition includes the knowledge of the local nadir weights. In fact, the weight set decompositions of two sets of nondominated images coincide as long as their sets of local nadir weights coincide. With the local nadir weights \(\lambda ^N\) for the weight set components known, the local nadir images must be located on the rays defined by \(D(\lambda ^N){:}{=}\{t \cdot \lambda ^N:t>0\}\). These rays narrow down the configuration of the nondominated set, since each nondominated image y must lie within the region determined by all rays of local nadir weights that are contained in the weight set component \(\Lambda (y)\). If the kernel weight for the weight set component \(\Lambda (y)\) is additionally known, the nondominated image y must be located on the ray \(D(\lambda (y)){:}{=}\{t \cdot \lambda (y):t>0\}\). Taking nondominance and the definition of local nadir images into account, the complete nondominated set can be determined up to scaling of the objectives by a multiplicative factor. Figure 8(b) illustrates these observations for a biobjective example. An analogous reasoning is not possible for the weight set decomposition of the weighted sum scalarization.

Fig. 8  Biobjective example of distinct sets of nondominated images (indicated by color), each of which has the same weight set decomposition with respect to the weighted sum scalarization (a) or the weighted Tchebycheff scalarization (b). a The gradient vectors describing the convex hull are equivalent (parallel lines are indicated by line type, e.g., solid, dashed, and dotted) even though the frontiers vary widely in overall shape. b With the local nadir weights for weight set components known, the local nadir images must be located on the associated rays (indicated by dotted lines) and, hence, these rays narrow down the possible location of the nondominated images. If the kernel weights for weight set components are additionally known, then the nondominated images must be located on the associated rays (indicated by dashed lines). In this case, the location of the nondominated set is uniquely determined up to scaling by a multiplicative factor.

Thus, an immediate idea for future research is a thorough analysis of the ‘duality’ between the weight set decomposition and the image space described informally above and illustrated in Fig. 8. Other directions of research include the algorithmic utilization of the derived properties. Star-shapedness and line convexity may be used to derive outer approximation [28] or inner approximation [2, 19] methods that iteratively shrink or augment weight set components, respectively. The properties may also be utilized for interactive approaches with a focus on the graphical exploration and presentation of solutions. The idea of weight set decomposition can further be applied to the parameter sets of other scalarizations. For example, weighted p-norm scalarizations or the augmented modified weighted Tchebycheff scalarization yield only nondominated images and theoretically connect the already studied weight set decompositions. This may provide methods for dealing algorithmically with overlapping weighted Tchebycheff weight set components and reveal additional insights into the image space of multiobjective optimization problems.