1 Introduction

1.1 Motivation

Many applications involve the simultaneous optimization of multiple conflicting objectives. The mathematical discipline concerned with the analysis and resolution of such problems is known as multiobjective optimization.

The task of multiobjective optimization is generally understood as computing (or approximating) the set of all “optimal” solutions or their objective function values, or a subset thereof. Typically, the notion of Pareto optimality is used: a solution is called efficient if there is no other solution that is strictly better in one objective and not worse in any other. The image of an efficient solution is called nondominated.

Common approaches to obtain single feasible solutions (or images) are based on the optimization of so-called scalarizations, which transform the multiobjective problem into a scalar-valued optimization problem by means of a real-valued scalarizing function and, possibly, additional constraints and variables. Two questions are of particular interest here: Can every efficient solution be obtained as an optimal solution of the scalarized problem with suitably chosen parameters? Conversely, is an optimal solution of the scalarized problem always efficient?

An important class of scalarizations are the weighted \(p\)-norm scalarizations. These scalarizations use a reference point in the image space that is component-wise better than any image. For some \(p\in {\mathbb {R}}_{\ge 1} \cup \{\infty \}\), a weighted \(p\)-norm scalarization asks for a feasible solution of the multiobjective problem whose image minimizes the weighted distance (induced by the \(p\)-norm) to the reference point among all images of feasible solutions.

Note that the class of weighted \(p\)-norm scalarizations subsumes the well-studied and widely used weighted sum scalarization and the weighted Tchebycheff scalarization for the special cases of \(p=1\) and \(p=\infty \), respectively. The weighted sum scalarization linearly combines the weighted objectives and minimizes the resulting weighted sum. By varying the weight, in general only a strict subset of the nondominated set can be obtained with the weighted sum scalarization. These so-called supported nondominated images are located on the convex hull of the nondominated set. Supported nondominated images that are also extreme points of this convex hull are called extreme supported nondominated images. For multiobjective linear optimization problems, however, all nondominated images can be obtained by the weighted sum scalarization. For \(p=\infty \), the weighted Tchebycheff scalarization minimizes the maximum weighted distance to the reference point. Unlike the weighted sum scalarization, any nondominated image can be obtained by this scalarization with an appropriately chosen weight. However, it is not guaranteed that optimal solutions are always efficient. Typically, the weighted sum and weighted Tchebycheff scalarizations have a computational advantage for multiobjective linear problems (including discrete and combinatorial problems with linear objectives), as these scalarizations preserve linearity or, at least, can be linearized. This advantage does not carry over to non-linear problems, and many practical problems are modeled as non-convex multiobjective (mixed-integer) non-linear problems, see, e.g., [9, 17]. Furthermore, for non-convex problems, the set of supported nondominated images is in general only a strict subset of the nondominated set. Thus, the weighted sum scalarization can only be used to compute a possibly very small subset of the set of all nondominated images.
In contrast, the weighted Tchebycheff scalarization may output only weakly nondominated (and hence possibly dominated) images, which must be handled by modifications or a second scalarization. Therefore, it is worth exploring more complex scalarization methods that combine the objective functions non-linearly. In particular, the weighted \(p\)-norm scalarizations for \(1<p<\infty \) are promising candidates: they output only nondominated images and allow the computation of the whole nondominated set for sufficiently large \(p\).

For any weighted \(p\)-norm scalarization and any given multiobjective problem, there is a function mapping the weight employed in the scalarized problem to the obtainable (nondominated) images. Observe that this function is not injective, since different weights may yield the same (nondominated) image. This motivates the concept of a weight set decomposition: the subdivision of the set of all eligible weights into subsets such that all weights in a subset yield the same image. These subsets are called weight set components. As a result, the structure of the weight set is linked to the structure of the nondominated set. The weight set decomposition therefore completely explains the behavior of any algorithm for solving multiobjective optimization problems that is based on the iterative solution of weighted \(p\)-norm scalarizations. Weight set decompositions have been explored for the weighted sum scalarization [1, 6, 12, 18, 28, 30, 34, 35] and for the weighted Tchebycheff scalarization [21, 25].

1.2 Related Work

Next, we place the weighted sum, the weighted Tchebycheff, and general weighted p-norm scalarizations in a broader context, and give a brief summary of the existing literature on weight set decomposition. More details on general scalarizations are given in [14, 39]. In addition, the survey [19] provides an excellent overview on exact algorithms for multiobjective (mixed-) integer linear programs that are partially based on scalarizations.

Introduced by Zadeh [42], the weighted sum scalarization is probably the most common technique to transform a multiobjective optimization problem into a single-objective optimization problem. Due to its simplicity, this scalarization is often utilized in the first phase of so-called two-phase methods [10, 31, 37]: first, an optimal solution set for the weighted sum scalarization is determined, e.g., by the well-known dichotomic search algorithm [2, 11], which is applicable to general biobjective optimization problems, or its generalizations [6, 28] to multiple objectives. Then, other scalarization techniques or algorithms based on the structure of the underlying problems are used to enumerate the “remaining” nondominated images. Beyond that, Heyde and Löhne [22] introduce a geometric duality concept for multiobjective linear programs using the weighted sum scalarization. Based on this concept, algorithms for enumerating all extreme supported nondominated images have been proposed [15, 20]. The algorithm of Bökler and Mutzel [6] adapts this dual approach to multiobjective combinatorial optimization problems.

Bowman [7] suggests using the (weighted) Tchebycheff norm to find nondominated images, even for non-linear objective functions. To ensure that optimal images are always nondominated, modifications are introduced and used to determine nondominated images, see, for example, [13, 23, 24, 36]. Two-phase approaches utilizing the weighted Tchebycheff scalarization in the second phase are proposed in [10, 33].

White [38] and Zeleny [43] study properties of general weighted p-norm scalarizations. In particular, White gives a lower bound for p such that all nondominated images can be obtained with the corresponding weighted p-norm scalarization and studies the application of weighted p-norm scalarizations to problems with special structures. Karakaya et al. [26] develop interactive algorithms that assume that the preferences of the decision makers can be modeled with p-norms. To this end, they introduce a notion of (extreme) p-supportedness and p-adjacency to reduce the search space. Since the distance to the reference point can provide useful information in the optimization process, many applications of all of these scalarization techniques can also be found in the context of interactive approaches, see [27] for an overview.

Weight set decomposition approaches for the weighted sum scalarization date back to 1975, when Yu and Zeleny [41] derived a multiobjective simplex method for multiobjective linear programs and associated efficient basic feasible solutions with the weights in the polyhedral cone defined by the corresponding basis. Benson and Sun [4, 5] extend this idea and establish a connection between extreme supported nondominated images of a multiobjective linear program and a partitioning of the weight set. Przybylski et al. [30] present structural results on the weighted sum weight set decomposition applied to multiobjective mixed-integer problems. They show that the extreme supported nondominated images are both sufficient and necessary to decompose the weight set. Furthermore, they show that weighted sum weight set components are polyhedra that intersect only in common faces, which in particular leads to a notion of adjacency. In addition, they propose an algorithm to enumerate all extreme supported nondominated images which iteratively shrinks supersets of the actual weight set components. In fact, the (generalized) dichotomic search algorithms in [2, 6, 11, 28] implicitly decompose the weight set into weighted sum weight set components and, thus, can be interpreted as prime examples of algorithms using the weighted sum weight set decomposition to enumerate all extreme supported nondominated images. Explicit examples of such algorithms are those of Alves and Costa [1] and Halffmann et al. [18], which utilize the convexity of the weight set components and iteratively enlarge subsets of weight set components to compute the set of all extreme supported nondominated images of multiobjective mixed-integer programs. Schulze et al.
[34] and Seipp [35] use weight set decomposition to show that the number of extreme supported nondominated images of unconstrained multiobjective combinatorial problems and minimum spanning tree problems, respectively, is polynomially bounded. Correia et al. [12] introduce an algorithm that computes all supported efficient minimum spanning trees based on weight set decomposition.

For general biobjective optimization problems, Eswaran et al. [16] and Ralphs et al. [32] adapt the dichotomic search method to enumerate all nondominated images with the weighted Tchebycheff scalarization. The weighted Tchebycheff weight set decomposition is thus determined implicitly. Bozkurt et al. [8] and Karakaya and Köksalan [25] decompose weighted Tchebycheff weight set components into polyhedra. This allows them to determine their volumes, which are then used to evaluate efficient solutions and efficient solution sets. A comprehensive study of the structural properties of the weighted Tchebycheff weight set decomposition is given by Helfrich et al. [21]. They utilize polytopal complexes and a weakened concept of convexity to characterize weighted Tchebycheff weight set components, give a description of the intersection of components, and examine their dimension.

1.3 Our Contribution

We explore weighted \(p\)-norm scalarizations and weighted \(p\)-norm weight set decompositions for \(p\in {\mathbb {R}}_{\ge 1}\cup \{\infty \}\) applied to discrete multiobjective optimization problems, a broad class that includes, among others, combinatorial optimization problems and multiobjective integer programs with bounded feasible solution sets. This investigation bridges the research gap between the weighted 1-norm and the weighted \(\infty \)-norm scalarization, both of which have received considerable attention in the literature. Varying the weight in the 1-norm scalarization yields only the supported nondominated images, whereas varying it in the \(\infty \)-norm scalarization yields all nondominated images. Consequently, as \(p\) increases from \(1\) to \(\infty \), the set of obtainable nondominated images grows.

In Sect. 2, we introduce the necessary notation and restate well-known results concerning the efficiency of solutions obtained via a weighted p-norm scalarization. In Sects. 3 to 6, we theoretically generalize and connect existing studies on weight set decomposition by providing a profound understanding of weighted p-norms and their weight set decompositions.

More precisely, in Sect. 3, we extend existing research on weighted \(p\)-norm scalarizations as pursued in [26]: we characterize extreme \(p\)-supportedness of nondominated images by means of (unique) optimal images of weighted \(p\)-norm scalarized optimization problems. We also interconnect weighted \(p\)-norm scalarizations for different \(p\) and show that (extreme) \(p\)-supportedness monotonically interpolates between supportedness and nondominance. This can be understood in the sense that, for \( {\tilde{p}} \in {\mathbb {R}}_{\ge 1}\), \({\tilde{p}}\)-supported nondominated images must be extreme \(p\)-supported for every \(p > {\tilde{p}}\), and, in each instance, there exists a sufficiently large \({\bar{p}}\in {\mathbb {R}}_{\ge 1}\) such that the set of extreme \({\bar{p}}\)-supported nondominated images and the nondominated set coincide.

Then, we introduce weight set decompositions for weighted \(p\)-norm scalarizations as a generalization of both the weighted sum and the weighted Tchebycheff weight set decomposition in Sect. 4. For the purpose of a detailed theoretical study of their structures for \(p\in {\mathbb {R}}_{>1}\), we show the existence of a bijection between weighted \(p\)-norm weight set components and the weighted sum weight set components of a related multiobjective optimization problem in Sect. 5. Consequently, weighted \(p\)-norm weight set components are naturally related to weighted sum weight set components. Furthermore, fundamental properties of weighted sum weight set components carry over under this bijection. In particular, this leads to adjacency concepts with respect to the weighted \(p\)-norm scalarizations that connect the already established adjacency concepts with respect to the weighted sum and weighted Tchebycheff scalarizations [21, 26, 30] and refine that of [26]. Finally, in Sect. 6, we describe similarities and substantial differences that arise between the weighted Tchebycheff and weighted \(p\)-norm weight set decompositions for large \(p\in {\mathbb {R}}_{> 1}\).

2 Preliminaries

In this section, we introduce the necessary notation and review useful basic definitions and results concerning multiobjective programming and \(p\)-norm scalarizations. For a thorough introduction to multiobjective optimization, we refer to the book of Ehrgott [14].

Formally, a multiobjective optimization problem (MOP) with \(q\ge 2\) objectives and \(n\in {\mathbb {N}}\) variables can be stated as

$$\begin{aligned} \min f(x)&=\min (f_1(x),\dots , f_q(x))^\top \\&\qquad \text {s.t.} \quad x \in X , \end{aligned}$$
(MOP)

where \(X\subseteq {\mathbb {R}}^n\) is the set of feasible solutions. The vector-valued objective function \(f(x):=(f_1(x),\dots ,f_q(x))^\top \) is composed of q objectives. Then, each feasible solution \(x \in X\) is mapped to its image f(x) and the image set \(Y:=f(X):=\{f(x):x\in X\}\) subsumes all possible images. The vector spaces \({\mathbb {R}}^n\) and \({\mathbb {R}}^q\) are called the decision space and the image space, respectively.

Since there is no canonical ordering in the image space \({\mathbb {R}}^q\), we utilize component-wise orders to define optimality: for images \(y,{\bar{y}}\in {\mathbb {R}}^q\), the weak component-wise order, the component-wise order, and the strict component-wise order are defined by

$$\begin{aligned}&y\leqq {\bar{y}} \;\text {, if and only if}\; y_i \le {\bar{y}}_i\; \text {for all}\; i=1,\dots ,q,\\&y\le {\bar{y}} \;\text {, if and only if}\; y_i \le {\bar{y}}_i\; \text {for all}\; i=1,\dots ,q \; \text {and}\; y \ne {\bar{y}},\\&y<{\bar{y}} \;\text {, if and only if}\; y_i < {\bar{y}}_i \; \text {for all}\; i=1,\dots ,q, \end{aligned}$$

respectively. Furthermore, we set \({\mathbb {R}}^q_\geqq :=\{x\in {\mathbb {R}}^q: x\geqq 0\}\), \({\mathbb {R}}^q_\ge :=\{x\in {\mathbb {R}}^q: x\ge 0\}\), and \({\mathbb {R}}^q_>:=\{x\in {\mathbb {R}}^q: x> 0\}\).

Then, a feasible solution \(x^*\in X\) is called (weakly) efficient if there exists no other feasible solution \(x\in X\) such that \(f(x)\le f(x^*)\) (\(f(x)< f(x^*)\)). The corresponding image \(y^*=f(x^*)\) is then called (weakly) nondominated. The set of (weakly) efficient solutions is denoted by \(X_E\) (\(X_{wE}\)) and the set of (weakly) nondominated images by \(Y_N\) (\(Y_{wN}\)).
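For a finite image set, the component-wise orders above translate directly into a dominance filter. The following minimal Python sketch (the instance data are our own illustrative choice, not from the paper) computes the nondominated images \(Y_N\) of a small biobjective image set:

```python
def dominates(y, ybar):
    """True if y dominates ybar, i.e., y <= ybar component-wise and y != ybar."""
    return all(a <= b for a, b in zip(y, ybar)) and y != ybar

def nondominated(Y):
    """Return the nondominated images Y_N of a finite image set Y."""
    return [y for y in Y if not any(dominates(ybar, y) for ybar in Y)]

# Toy biobjective image set: (3, 3) is dominated by (2, 2).
Y = [(1, 4), (2, 2), (3, 3), (4, 1)]
print(nondominated(Y))  # [(1, 4), (2, 2), (4, 1)]
```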

For a positive weight \(\lambda \in {\mathbb {R}}^q_>\), \(p\in {\mathbb {R}}_{\ge 1}\cup \{\infty \}\), and reference point \(s \in {\mathbb {R}}^q\) with \(s \leqq y\) for all \(y \in Y\), the weighted \(p\)-norm scalarization [40] is the scalar-valued optimization problem

$$\begin{aligned} \min&\; \left\Vert f(x)-s \right\Vert _{p}^{\lambda } \\ \text {s.t.}&\; x\in X, \end{aligned}$$
(\(\varPi ^{p}(\lambda )\))

where the weighted \(p\)-norm is defined by

$$\begin{aligned} \left\Vert y \right\Vert _{p}^{\lambda }= {\left\{ \begin{array}{ll} \root p \of {\sum _{i=1}^q (\lambda _i |y_i|)^{p}}, &{} \quad \text {if } p\in {\mathbb {R}}_{\ge 1},\\ \max _{i=1,\dots , q}\{|\lambda _i y_i|\}, &{} \quad \text {if } p=\infty . \end{array}\right. } \end{aligned}$$
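In code, the case distinction above can be sketched as follows (an illustrative helper, not part of the paper):

```python
import math

def weighted_p_norm(y, lam, p):
    """Weighted p-norm ||y||_p^lambda for p in [1, inf) or p = math.inf."""
    if p == math.inf:
        # Weighted Tchebycheff norm: maximum weighted coordinate.
        return max(abs(li * yi) for li, yi in zip(lam, y))
    # General case: p-th root of the sum of weighted p-th powers.
    return sum((li * abs(yi)) ** p for li, yi in zip(lam, y)) ** (1.0 / p)

y, lam = (3.0, 4.0), (1.0, 1.0)
print(weighted_p_norm(y, lam, 2))         # 5.0
print(weighted_p_norm(y, lam, math.inf))  # 4.0
```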

We extend this notation to nonnegative weights \(\lambda \in {\mathbb {R}}^q_{\ge }\), even though \(||\cdot ||_p^{\lambda }: {\mathbb {R}}^q \rightarrow {\mathbb {R}}\) does not constitute a norm in this case, and keep the terminology weighted \(p\)-norm for convenience. The image \(y\in Y\) of an optimal solution \({x\in X}\) of \(\varPi ^{p}(\lambda )\) is called optimal for \(\varPi ^{p}(\lambda )\). An optimal image \(y\) for \(\varPi ^{p}(\lambda )\) is called uniquely optimal for \(\varPi ^{p}(\lambda )\) if no other optimal solution of \(\varPi ^{p}(\lambda )\) with an image \({\bar{y}}\ne y\) exists. It is well known that, if \(p\in {\mathbb {R}}_{\ge 1}\) and \(\lambda \in {\mathbb {R}}_>^q\), every optimal image of \(\varPi ^{p}(\lambda )\) is nondominated. Moreover, if \(p\in {\mathbb {R}}_{\ge 1} \cup \{\infty \}\) and \(\lambda \in {\mathbb {R}}_\ge ^q\), every optimal image of \(\varPi ^{p}(\lambda )\) is at least weakly nondominated, and every uniquely optimal image of \(\varPi ^{p}(\lambda )\) is nondominated.

The weighted \(\infty \)-norm scalarization is identical to the weighted Tchebycheff scalarization [36]. Furthermore, the weighted 1-norm scalarization is equivalent to the weighted sum scalarization [42] since

$$\begin{aligned} \left\Vert f(x)-s \right\Vert _{1}^{\lambda }= \sum _{i=1}^q \lambda _i |f_i(x) - s_i | =\sum _{i=1}^q \lambda _i (f_i(x) - s_i )= \sum _{i=1}^q \lambda _i f_i(x) - \underbrace{\sum _{i=1}^{q} \lambda _i s_i}_{\text {constant}}. \end{aligned}$$
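This identity can be checked numerically; the values below are arbitrary illustrative data with \(s \leqq f(x)\):

```python
# Arbitrary illustrative values with s <= f(x) component-wise.
fx, s, lam = (3.0, 5.0), (1.0, 2.0), (0.5, 0.5)

# Weighted 1-norm distance to the reference point s ...
lhs = sum(li * abs(fi - si) for li, fi, si in zip(lam, fx, s))
# ... equals the weighted sum of the objectives minus a constant.
rhs = sum(li * fi for li, fi in zip(lam, fx)) - sum(li * si for li, si in zip(lam, s))
print(lhs, rhs)  # 2.5 2.5
```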

In the remainder of this work, we make the following assumptions:

Assumption 1

We consider MOPs with a finite set of feasible solutions, so-called multiobjective discrete optimization problems (MODO). Further, we assume that \(Y\subseteq {\mathbb {R}}^q_>\) and that, for every \(p \in {\mathbb {R}}_{\ge 1}\cup \{\infty \}\) and every weight \(\lambda \in {\mathbb {R}}_{\ge }^q\), the reference point used in the weighted \(p\)-norm scalarization coincides with the zero vector, i.e., \(s = 0\).

The positive image set and the zero-valued reference point are assumed without loss of generality and for simplicity of notation.

As a consequence of Assumption 1, optimal solutions and images of \(\varPi ^{p}(\lambda )\) always exist. Additionally, \(\varPi ^{p}(\lambda )\) simplifies to

$$\begin{aligned} \min&\; \left\Vert f(x) \right\Vert _{p}^{\lambda } \\ \text {s.t.}&\; x\in X, \end{aligned}$$

and the absolute value in the weighted p-norm can be omitted.

3 Properties of Weighted p-Norm Scalarizations

In this section, we study properties of general weighted \(p\)-norm scalarizations and interrelate weighted \(p\)-norm scalarizations for different \(p\). More specifically, we reformulate the concepts of (extreme) \(p\)-supportedness introduced in [26] based on (unique) optimal images of \(\varPi ^{p}(\lambda )\). This allows us to properly introduce general weighted \(p\)-norm weight set decompositions in Sect. 4 and serves as the foundation for their analysis as well as for the similarities and differences to the weighted sum (Sect. 5) and the weighted Tchebycheff (Sect. 6) weight set decompositions. In addition, we use our reformulation to strengthen the results of Karakaya et al. [26]: we show that \({\tilde{p}}\)-supported nondominated images must be extreme \(p\)-supported for all \(p>{\tilde{p}}\). Additionally, we prove the existence of a sufficiently large but finite \(p\in {\mathbb {R}}_{\ge 1} \) such that, for a fixed weight \(\lambda \in {\mathbb {R}}_>^q\), a unique optimal image of the weighted Tchebycheff scalarization \(\varPi ^{\infty }(\lambda )\) is also a unique optimal image of the weighted \(p\)-norm scalarization \(\varPi ^{p}(\lambda )\). As a consequence, there exists a finite \(p^*\in {\mathbb {R}}_{\ge 1}\) such that every nondominated image is extreme \(p\)-supported for every \(p \ge p^*\).

Recall that the efficient solutions that are obtainable with a weighted sum scalarization are referred to as supported efficient solutions. The corresponding images are called supported nondominated images and it is well-known that they are located on the boundary of \({{\,\textrm{conv}\,}}(Y)+{\mathbb {R}}^q_{\geqq }\) [3], where \({{\,\textrm{conv}\,}}(Y)\) denotes the convex hull of Y. All other efficient solutions and their nondominated images are called unsupported. Furthermore, the set of supported efficient solutions/nondominated images is split into two types:

  1. (i)

    The extreme supported efficient solutions for which the corresponding extreme supported nondominated images are vertices of \({{\,\textrm{conv}\,}}(Y)+{\mathbb {R}}^q_\geqq \).

  2. (ii)

    The non-extreme supported efficient solutions for which the corresponding non-extreme supported nondominated images are located in the relative interior of a face of \({{\,\textrm{conv}\,}}(Y)+{\mathbb {R}}^q_\geqq \).

Therefore, a nondominated image \(y\) is extreme supported if and only if there exists a weight \( {\lambda \in {\mathbb {R}}_>^q}\) such that \(y\) is the unique optimal image of \(\varPi ^1(\lambda )\), even though multiple solutions with image \(y\) may exist [30]. We adapt this characterization to introduce supportedness for \(p>1\):

Definition 3.1

Let \(p\in {\mathbb {R}}_{\ge 1}\cup \{\infty \}\), \(x\in X_E\) be an efficient solution, and \(y=f(x)\in Y_N\) the corresponding nondominated image.

  1. (a)

    The solution x and the image y are called p-supported if there exists a weight \(\lambda \in {\mathbb {R}}^q_{>}\) such that y is optimal for \(\varPi ^{p}(\lambda )\).

  2. (b)

    If no such weight exists, the solution \(x\) and the image \(y\) are called \(p\)-unsupported.

The set of \(p\)-supported efficient solutions is denoted by \(X_{pSE}\) and the set of \(p\)-supported nondominated images by \(Y_{pSN}\).

Furthermore, we distinguish between two types of \(p\)-supported efficient solutions:

Definition 3.2

Let \(p\in {\mathbb {R}}_{\ge 1}\cup \{\infty \}\), \(x\in X_{pSE}\) be a \(p\)-supported efficient solution and \(y:=f(x)\in Y_{pSN}\) the corresponding \(p\)-supported nondominated image.

  1. (a)

    The solution \(x\) and the image y are called extreme \(p\)-supported if there exists a weight \(\lambda \in {\mathbb {R}}_>^q\) such that \(y\) is uniquely optimal for \(\varPi ^{p}(\lambda )\).

  2. (b)

    The solution \(x\) and the image y are called non-extreme \(p\)-supported if no such weight \(\lambda \in {\mathbb {R}}_>^q\) exists.

The set of extreme \(p\)-supported efficient solutions is denoted by \(X_{pESE}\) and the corresponding set in the image space is denoted by \(Y_{pESN}\).

Karakaya et al. [26] use an extended concept of convex combinations to define (extreme) \(p\)-supportedness. The proof that the two definitions are equivalent is based on Theorem 5.1; more details can be found in “Appendix A”.

Note that it is shown in [36] that all nondominated images are unique optimal images for \(\varPi ^{\infty }(\lambda )\) for some weight \(\lambda \in {\mathbb {R}}_>^q\). Hence, the set of all nondominated images coincides with the set of all extreme \(\infty \)-supported nondominated images.

Next, we show that any \({\tilde{p}}\)-supported nondominated image is extreme \(p\)-supported for all \(p>{\tilde{p}}\). This result improves upon a result of Karakaya et al. [26] in the sense that \({\tilde{p}}\)-supported nondominated images are not only \(p\)-supported for all \(p>{\tilde{p}}\) but indeed extreme \(p\)-supported.

Theorem 3.1

Let \({\tilde{p}}\in {\mathbb {R}}_{\ge 1}\) and \(y\in Y_{{\tilde{p}}SN}\) be a \({\tilde{p}}\)-supported nondominated image. Then, for all \(p>{\tilde{p}}\) the image \(y\) is extreme \(p\)-supported.

Proof

Let \({\tilde{p}}\in {\mathbb {R}}_{\ge 1}\), \(p>{\tilde{p}}\) and \(y\in Y_{{\tilde{p}}SN}\) be a \({\tilde{p}}\)-supported nondominated image. Then there exists a weight \({\tilde{\lambda }}\in {\mathbb {R}}_{>}^q\) such that \(y\) is an optimal image of \(\varPi ^{{\tilde{p}}}({\tilde{\lambda }})\). We define a weight \(\lambda (y,p,{\tilde{\lambda }})\in {\mathbb {R}}_>^q\) such that \(y\) is the unique optimal image of \(\varPi ^p(\lambda (y,p,{\tilde{\lambda }}))\) by

$$\begin{aligned} \lambda _i(y,p,{\tilde{\lambda }}):= \root p \of {{\tilde{\lambda }}_i^{{\tilde{p}}}y_i^{{\tilde{p}}-p}} \text { for all } i=1,\dots ,q. \end{aligned}$$

Then, it holds that

$$\begin{aligned} \left( \left\Vert y \right\Vert _{p}^{\lambda (y,p,{\tilde{\lambda }})}\right) ^p=\sum _{i=1}^q {\tilde{\lambda }}_i^{{\tilde{p}}} y_i^{{\tilde{p}}-p} y_i^p=\sum _{i=1}^q {\tilde{\lambda }}_i^{{\tilde{p}}} y_i^{{\tilde{p}}} = \left( \left\Vert y \right\Vert _{{\tilde{p}}}^{{\tilde{\lambda }}}\right) ^{{\tilde{p}}}. \end{aligned}$$

Now, let an image \({\bar{y}}\in Y\backslash \{y\}\) be given. We show that \(\left\Vert y \right\Vert _{p}^{\lambda (y,p,{\tilde{\lambda }})}<\left\Vert {\bar{y}} \right\Vert _{p}^{\lambda (y,p,{\tilde{\lambda }})}\). To this end, we define the function

$$\begin{aligned} h:{\mathbb {R}}_\ge \rightarrow {\mathbb {R}}: r \mapsto \sum _{i=1}^q {\tilde{\lambda }}_i^{{\tilde{p}}}y_i^{{\tilde{p}}} \left( \frac{{\bar{y}}_i}{y_i}\right) ^r. \end{aligned}$$

The second derivative of \(h\) is \(h''(r)=\textstyle \sum _{i=1}^q {\tilde{\lambda }}_i^{{\tilde{p}}} y_i^{{\tilde{p}}}\left( \textstyle \frac{{\bar{y}}_i}{y_i}\right) ^r\left( \ln {\textstyle \frac{{\bar{y}}_i}{y_i}}\right) ^2 \) and, since \(h''\) is positive for all \(r>0\), the function \(h\) is strictly convex. Furthermore, \(y\) is an optimal image of \(\varPi ^{{\tilde{p}}}({\tilde{\lambda }})\) by assumption and, therefore, it holds that

$$\begin{aligned} h(0)=\sum _{i=1}^q{\tilde{\lambda }}_i^{{\tilde{p}}} y_i^{{\tilde{p}}} \le \sum _{i=1}^q {\tilde{\lambda }}_i^{{\tilde{p}}} {\bar{y}}_i^{{\tilde{p}}} = h({\tilde{p}}). \end{aligned}$$

Consequently, the only local and, hence, global minimum of \(h\) is attained in the interval \([0,{\tilde{p}}]\). Thus, it follows that \(h({\tilde{p}})<h(r)\) for all \(r>{\tilde{p}}\); in particular, this holds for \(r=p\). Therefore, it holds that

$$\begin{aligned} \left( \left\Vert {\bar{y}} \right\Vert _{{\tilde{p}}}^{{\tilde{\lambda }}} \right) ^{{\tilde{p}}}=h({\tilde{p}})<h(p)=\left( \left\Vert {\bar{y}} \right\Vert _{p}^{\lambda (y,p,{\tilde{\lambda }})}\right) ^p. \end{aligned}$$

This implies that

$$\begin{aligned} \left( \left\Vert y \right\Vert _{p}^{\lambda (y,p,{\tilde{\lambda }})}\right) ^p =\left( \left\Vert y \right\Vert _{{\tilde{p}}}^{{\tilde{\lambda }}}\right) ^{{\tilde{p}}} \le \left( \left\Vert {\bar{y}} \right\Vert _{{\tilde{p}}}^{{\tilde{\lambda }}}\right) ^{{\tilde{p}}} < \left( \left\Vert {\bar{y}} \right\Vert _{p}^{\lambda (y,p,{\tilde{\lambda }})}\right) ^p. \end{aligned}$$

Hence, \(\left\Vert y \right\Vert _{p}^{\lambda (y,p,{\tilde{\lambda }})}<\left\Vert {\bar{y}} \right\Vert _{p}^{\lambda (y,p,{\tilde{\lambda }})}\) follows by taking the p-th root. \(\square \)
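The weight construction in the proof can be verified numerically. The toy image set and the weights below are our own illustrative choices, not from the paper:

```python
import math

def weighted_p_norm(y, lam, p):
    """Weighted p-norm for positive images (Assumption 1, s = 0)."""
    return sum((li * yi) ** p for li, yi in zip(lam, y)) ** (1.0 / p)

Y = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
y, lam_tilde, p_tilde, p = (2.0, 2.0), (0.5, 0.5), 1.0, 2.0

# y is optimal for Pi^{p~}(lambda~) with p~ = 1: weighted sums are 2.5, 2.0, 2.5.
assert min(Y, key=lambda z: weighted_p_norm(z, lam_tilde, p_tilde)) == y

# Weight from the proof: lambda_i(y, p, lambda~) = (lambda~_i^p~ * y_i^(p~-p))^(1/p).
lam = tuple((lt ** p_tilde * yi ** (p_tilde - p)) ** (1.0 / p)
            for lt, yi in zip(lam_tilde, y))

# y is now the unique optimum of Pi^p(lambda): its norm is strictly smallest.
vals = sorted(weighted_p_norm(z, lam, p) for z in Y)
print(vals[0] < vals[1])  # True
```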

Thus, increasing \(p\) only increases the number of (extreme) \(p\)-supported nondominated images:

Corollary 3.1

For \(p,{\tilde{p}}\in {\mathbb {R}}_{\ge 1} \cup \{\infty \}\) with \(p>{\tilde{p}}\), it holds that

$$\begin{aligned} Y_{{\tilde{p}}ESN}\subseteq Y_{{\tilde{p}}SN} \subseteq Y_{pESN} \subseteq Y_{pSN}. \end{aligned}$$

We note that the choice of \(\lambda (y,p,{\tilde{\lambda }})\) used in the proof of Theorem 3.1 is motivated by the so-called kernel weight of the weighted Tchebycheff weight set components, see Theorem 6.1. This relationship is described in more detail in Remark 6.1.

Next, we show that, for any fixed weight \(\lambda \in {\mathbb {R}}^q_>\), there exists a sufficiently large \(p\) such that a unique optimal image of \(\varPi ^\infty (\lambda )\) is also a unique optimal image of \(\varPi ^{p}(\lambda )\).

Theorem 3.2

Let \(y\in Y_N\) be a nondominated image and \(\lambda \in {\mathbb {R}}_{>}^q\) a weight such that \(y\) is the unique optimal image of \(\varPi ^\infty (\lambda )\). Then there exists a \({\bar{p}}\in {\mathbb {R}}_{\ge 1}\) such that \(y\) is the unique optimal image of \(\varPi ^{p}(\lambda )\) for all \(p\ge {\bar{p}}\).

Proof

Let \(y\in Y_N\) be the unique optimal image of \(\varPi ^\infty (\lambda )\). Consider the minimal difference \(\delta \) between the weighted Tchebycheff norm of any other image and that of \(y\), i.e.,

$$\begin{aligned} \delta :=\min _{{\bar{y}}\in Y\backslash \{y\}} \left\{ \left\Vert {\bar{y}} \right\Vert _{\infty }^{\lambda }-\left\Vert y \right\Vert _{\infty }^{\lambda }\right\} . \end{aligned}$$

Since \(Y\) is finite, \(\delta \) exists and is finite as well. Since \(y\) is uniquely optimal for \(\varPi ^{\infty }(\lambda )\), it holds that \(\delta >0\). Furthermore, \(\lim _{p\rightarrow \infty } \left\Vert y \right\Vert _{p}^{\lambda } = \left\Vert y \right\Vert _{\infty }^{\lambda }\). Thus, there exists some \({\bar{p}}\in {\mathbb {R}}_{\ge 1}\) such that

$$\begin{aligned} \left| \left\Vert y \right\Vert _{\infty }^{\lambda }-\left\Vert y \right\Vert _{p}^{\lambda }\right| < \delta \text { for all } p\ge {\bar{p}}. \end{aligned}$$

Additionally, it holds that \(\left\Vert {\bar{y}} \right\Vert _{p}^{\lambda }\ge \left\Vert {\bar{y}} \right\Vert _{\infty }^{\lambda }\) for all \({\bar{y}}\in Y\) and \(p\in {\mathbb {R}}_{\ge 1}\). This implies that

$$\begin{aligned} \left\Vert y \right\Vert _{p}^{\lambda } < \left\Vert y \right\Vert _{\infty }^{\lambda }+\delta \le \left\Vert {\bar{y}} \right\Vert _{\infty }^{\lambda } \le \left\Vert {\bar{y}} \right\Vert _{p}^{\lambda } \end{aligned}$$

for all \(p\ge {\bar{p}}\) and \({\bar{y}}\in Y\backslash \{y\}\). \(\square \)
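Theorem 3.2 can be illustrated numerically. On the toy instance below (our own choice of data), the image that is uniquely optimal for the weighted Tchebycheff scalarization is not optimal for small \(p\), but becomes optimal once \(p\) is large enough:

```python
import math

def wnorm(y, lam, p):
    """Weighted p-norm for positive images; p may be math.inf."""
    if p == math.inf:
        return max(li * yi for li, yi in zip(lam, y))
    return sum((li * yi) ** p for li, yi in zip(lam, y)) ** (1.0 / p)

Y = [(1.0, 3.0), (2.0, 2.6), (3.0, 1.0)]
lam = (0.5, 0.5)

# (2.0, 2.6) is the unique Tchebycheff optimum: norms are 1.5, 1.3, 1.5.
y_inf = min(Y, key=lambda z: wnorm(z, lam, math.inf))
for p in (1, 2, 4, 8):
    print(p, min(Y, key=lambda z: wnorm(z, lam, p)) == y_inf)
# 1 False / 2 False / 4 True / 8 True
```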

Since for each nondominated image \(y\in Y_N\) there exists a weight \(\lambda \in {\mathbb {R}}^q_>\) such that \(y\) is the unique optimal image of \(\varPi ^\infty (\lambda )\) and \(Y_N\) is finite, we obtain that there exists a \(p^*\) such that, for all \(p\ge p^*\), all nondominated images are extreme \(p\)-supported:

Corollary 3.2

There exists a \(p^*\in {\mathbb {R}}_{\ge 1}\) such that all nondominated images are extreme \(p\)-supported for all \(p\ge p^*\). That is, \(Y_N=Y_{pESN}\).

White [38] also states Corollary 3.2 in a slightly weaker version: there exists a \(p^*\in {\mathbb {R}}_{\ge 1}\) such that all nondominated images are \(p\)-supported for all \(p\ge p^*\) (though the concept of \(p\)-supportedness is used only indirectly there). In the proof of this statement, the lower bound

$$\begin{aligned} p^L:=\max \left\{ \frac{\ln {q}}{\ln {\frac{y_i}{{\bar{y}}_i}}}: y,{\bar{y}}\in Y_N, i\in \{1,\dots ,q\} \text { and } \frac{y_i}{{\bar{y}}_i} >1\right\} \end{aligned}$$
(1)

such that, for all \(p\ge p^L\), all nondominated images are \(p\)-supported and, thus, by Theorem 3.1, all nondominated images are even extreme \(p\)-supported for \(p>p^L\).
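The lower bound (1) can be computed directly from a given nondominated set. The instance below is our own illustrative example; note that the maximum in (1) is attained at the smallest objective ratio greater than 1:

```python
import math
from itertools import permutations

def white_lower_bound(Y_N, q):
    """Lower bound p^L from (1): max over ratios y_i / ybar_i > 1 of ln(q) / ln(ratio)."""
    ratios = (y[i] / ybar[i]
              for y, ybar in permutations(Y_N, 2)
              for i in range(q)
              if y[i] / ybar[i] > 1)
    return max(math.log(q) / math.log(r) for r in ratios)

Y_N = [(1.0, 3.0), (2.0, 2.6), (3.0, 1.0)]
pL = white_lower_bound(Y_N, 2)
print(round(pL, 2))  # 4.84 -- driven by the smallest ratio > 1, here 3/2.6
```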

4 Weighted p-Norm Weight Set Decompositions

In this section, we introduce weighted p-norm weight set components and weighted p-norm weight set decompositions for general \(p\in {\mathbb {R}}_{\ge 1}\cup \{\infty \}\). After the formal introduction, we illustrate p-norm weight set decompositions for different p using a small triobjective problem.

Observe that, for every \({p\in {\mathbb {R}}_{\ge 1}\cup \{\infty \}}\), a weight \(\lambda \in {\mathbb {R}}_\ge ^q\) can be scaled by a positive scalar without affecting the set of optimal solutions of the weighted p-norm scalarization with weight \(\lambda \). Thus, we can restrict the set of all eligible weights to the normalized weight set [30]

$$\begin{aligned} \varLambda ^>:= \left\{ \lambda \in {\mathbb {R}}_>^q: \sum _{i=1}^q \lambda _i=1 \right\} . \end{aligned}$$

The normalized weight set is not closed and, hence, not compact. To be able to apply polyhedral theory, it is convenient to consider the weight set

$$\begin{aligned} \varLambda :=\left\{ \lambda \in {\mathbb {R}}_{\ge }^q: \sum _{i=1}^q \lambda _i=1\right\} . \end{aligned}$$

The weight set is a (compact and convex) polyhedron of dimension \(q-1\). Note that there exists a bijection between the set \(\left\{ \lambda \in {\mathbb {R}}_{\geqq }^{q-1}: \textstyle \sum _{i=1}^{q-1} \lambda _i \le 1 \right\} \) and the weight set \(\varLambda \) by setting \(\lambda _q=1-\textstyle \sum _{i=1}^{q-1} \lambda _i\). This is particularly useful for visualizing the weight set of MODOs with \(q=3\) objectives.

Fig. 1

Visualization of the image set given in Example 4.1

The weighted sum weight set component of a nondominated image y has been introduced in [30] as the set of all weights for which y is an optimal image for \(\varPi ^1(\lambda )\). Similarly, the weighted Tchebycheff weight set component of a nondominated image y has been introduced in [21] as the set of all weights for which y is an optimal image of \(\varPi ^\infty (\lambda )\). In the next definition, we generalize these definitions to weighted p-norms:

Definition 4.1

For a nondominated image \(y\in Y_N\) and \(p\in {\mathbb {R}}_{\ge 1}\cup \{\infty \}\), the weighted p-norm weight set component of \(y\) is defined as

$$\begin{aligned} \varLambda ^p(y):= \left\{ \lambda \in \varLambda :\left\Vert y \right\Vert _{p}^{\lambda }\le \left\Vert {\bar{y}} \right\Vert _{p}^{\lambda } \text { for all } {\bar{y}}\in Y\right\} . \end{aligned}$$
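Definition 4.1 translates directly into a membership test: a weight \(\lambda \) lies in \(\varLambda ^p(y)\) exactly if \(y\) minimizes the weighted \(p\)-norm over the image set. A minimal Python sketch (the biobjective image set and the weight are hypothetical choices for illustration):

```python
def weighted_p_norm(y, lam, p):
    """Weighted p-norm ||y||_p^lam = (sum_i (lam_i * y_i)^p)^(1/p)."""
    return sum((l * yi) ** p for l, yi in zip(lam, y)) ** (1.0 / p)

def in_component(y, Y, lam, p, eps=1e-9):
    """lam lies in the weighted p-norm weight set component of y iff
    ||y||_p^lam <= ||ybar||_p^lam for all ybar in Y (Definition 4.1)."""
    return all(weighted_p_norm(y, lam, p) <= weighted_p_norm(ybar, lam, p) + eps
               for ybar in Y)

Y = [(1, 3), (3, 1), (2, 2)]
lam = (0.5, 0.5)
# For p = 1 all three weighted sums equal 2, so lam lies in all three components;
# for p = 2 only (2, 2) attains the minimum.
print([in_component(y, Y, lam, 1) for y in Y])  # [True, True, True]
print([in_component(y, Y, lam, 2) for y in Y])  # [False, False, True]
```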

Before we analyze the structure of weighted p-norm weight set components and their decompositions of the weight set for different finite p, we illustrate what weighted p-norm weight set decompositions look like with the aid of a simple example. This allows us to get a rough idea of their geometry.

Example 4.1

We consider \(f=\text {id}\) and

$$\begin{aligned} X=Y&=\Big \{y^1=(5,2,1)^\top ,y^2=(2,5,2)^\top ,y^3=(2,4,3)^\top ,\\&\qquad y^4=(1,2,4)^\top ,y^5=(3,3,2)^\top ,y^6=(4,1,2)^\top \Big \}. \end{aligned}$$

The image set and different weighted \(p\)-norm weight set decompositions are visualized in Fig. 1 and Fig. 2, respectively. Clearly, all images are nondominated. Furthermore, the images \(y^1, y^2, y^4, y^5\), and \(y^6\) are \(1\)-supported nondominated images, and the image \(y^3\) is a 1-unsupported nondominated image. Additionally, the images \(y^1,y^2,y^4,y^6\) are extreme \(1\)-supported nondominated images. Moreover, the image \(y^5\) is a non-extreme 1-supported nondominated image and, thus, extreme p-supported for all \(p>1\) by Theorem 3.1. Note that the weighted sum weight set component of \(y^5\) is the intersection of the weighted sum weight set components of \(y^2\) and \(y^6\), i. e., it holds that \(\varLambda ^1(y^5)=\varLambda ^1(y^2)\cap \varLambda ^1(y^6)\). Furthermore, by Eq. (1) we get

$$\begin{aligned} p^L=\frac{\ln {3}}{\ln {\frac{5}{4}}}\approx 4.923 \end{aligned}$$

such that for all \(p\ge p^L\) all nondominated images are p-supported and thus, by Theorem 3.1, even extreme p-supported for \(p>p^L\). Indeed, since \(5>p^L\), all nondominated images are extreme 5-supported (see Fig. 2).
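The claims of Example 4.1 can be checked computationally. The following Python sketch verifies that no image is dominated, that \(y^3\) is never optimal for the weighted sum scalarization on a grid of strictly positive weights, and that \(y^3\) is the unique weighted 5-norm minimizer for a suitable weight (the weight \((6,3,4)/13\) is chosen by inspection and is not from the paper):

```python
Y = [(5, 2, 1), (2, 5, 2), (2, 4, 3), (1, 2, 4), (3, 3, 2), (4, 1, 2)]

def dominates(y, ybar):
    """y dominates ybar: y <= ybar componentwise and y != ybar."""
    return all(a <= b for a, b in zip(y, ybar)) and y != ybar

# All images are pairwise nondominated.
assert not any(dominates(y, ybar) for y in Y for ybar in Y)

def p_norm(y, lam, p):
    return sum((l * yi) ** p for l, yi in zip(lam, y)) ** (1.0 / p)

# y^3 = (2,4,3) is 1-unsupported: on a grid of strictly positive weights,
# some other image always attains a strictly smaller weighted sum.
y3 = (2, 4, 3)
for i in range(1, 20):
    for j in range(1, 20):
        if i + j < 20:
            lam = (i / 20, j / 20, (20 - i - j) / 20)
            assert min(p_norm(y, lam, 1) for y in Y if y != y3) < p_norm(y3, lam, 1)

# For p = 5 > p^L, y^3 becomes (extreme) 5-supported: with lam = (6, 3, 4)/13
# it is the unique weighted 5-norm minimizer.
lam = (6 / 13, 3 / 13, 4 / 13)
print(min(Y, key=lambda y: p_norm(y, lam, 5)))  # (2, 4, 3)
```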

Fig. 2

Different weighted p-norm weight set decompositions (for \(p=1,2,5,50\)) of Example 4.1. Note that for all four scalarizations, the restriction to weights contained in the weight set \(\varLambda =\left\{ \lambda \in {\mathbb {R}}^q_\ge : \sum _{i=1}^q \lambda _i=1\right\} \) is without loss of generality. Thus, \(\lambda _3=1-\lambda _1-\lambda _2\)

5 The Connection Between Weighted p-Norm and Weighted Sum Weight Set Decompositions

In this section we focus on the theoretical study of weighted p-norm weight set decompositions for \(p\in {\mathbb {R}}_{\ge 1}\). It turns out that properties of (intersections of) weighted sum weight set components carry over, under a bijection, to (intersections of) weighted p-norm weight set components for \(p\in {\mathbb {R}}_{\ge 1}\). Thus, the organization of this section is as follows: We review already established basic properties of the weighted sum weight set components and the weighted sum weight set decomposition. Then, we introduce a different but related multiobjective discrete optimization problem, which we call the transformed MODO (tMODO). We show that there exists a bijection between weighted sum weight set components of the tMODO and weighted p-norm weight set components of the original MODO. This allows us to establish that extreme p-supported nondominated images are sufficient and necessary to compute the weighted p-norm weight set decomposition (Proposition 5.3), to generalize the adjacency concepts introduced in [21, 30] (Definition 5.2), and to conclude that weighted p-norm weight set components are path-connected (Corollary 5.1).

For any nondominated image \(y\in Y_N\), the weighted sum weight set component of y is defined by

$$\begin{aligned} \varLambda ^1(y):=\left\{ \lambda \in \varLambda : \sum _{i=1}^q \lambda _i y_i \le \sum _{i=1}^q \lambda _i {\bar{y}}_i \text { for all } {\bar{y}}\in Y\right\} . \end{aligned}$$

Note that this definition is in line with the corresponding concept of the weighted 1-norm scalarization: the weighted sum weight set components are exactly the weighted 1-norm weight set components. We summarize basic properties of weighted sum weight set components and weight set decompositions, see Przybylski et al. [30].

Proposition 5.1

(Przybylski et al. [30]) Let \(y\in Y_{1SN}\). Then, the following statements hold:

  1. (i)

    \(\varLambda ^{1}(y)=\left\{ \lambda \in \varLambda : \sum _{i=1}^q \lambda _i y_i \le \sum _{i=1}^q \lambda _i {\bar{y}}_i \text { for all } \ {\bar{y}} \in Y_{1ESN}\right\} \).

  2. (ii)

    \(\varLambda ^{1}(y)\) is a nonempty convex polytope.

  3. (iii)

    The 1-supported nondominated image \(y\in Y_N\) is an extreme \(1\)-supported nondominated image if and only if \(\varLambda ^{1}(y)\) has dimension \(q-1\).

  4. (iv)

    \(\varLambda = \bigcup _{{\bar{y}}\in Y_{1ESN}} \varLambda ^{1}({\bar{y}})\).

  5. (v)

    Let \(S\) be a set of \(1\)-supported nondominated images. Then

    $$\begin{aligned} Y_{1ESN}\subseteq S \text {, if and only if } \varLambda =\bigcup _{{\bar{y}}\in S} \varLambda ^{1}({\bar{y}}). \end{aligned}$$

Proposition 5.1 (i) implies that it is sufficient to consider only extreme \(1\)-supported nondominated images in the definition of the weighted sum weight set components. Proposition 5.1 (ii) and (iii) state that the components are (full-dimensional) polytopes. By Statements  (iv) and (v), the components of the extreme \(1\)-supported nondominated images are sufficient and necessary to decompose the weight set. In fact, the next proposition states that any two weighted sum weight set components intersect only in common faces.

Proposition 5.2

(Przybylski et al. [30]) Let \(y,{\bar{y}}\in Y_{1SN}\) be two \(1\)-supported nondominated images and \(\varLambda ^{1}(y)\cap \varLambda ^{1}({\bar{y}})\ne \emptyset \). Then \(\varLambda ^{1}(y)\cap \varLambda ^{1}({\bar{y}})\) is the common face of maximal dimension of \(\varLambda ^{1}(y)\) and \(\varLambda ^{1}({\bar{y}})\).

This proposition implies that the following notion of adjacency is a well-defined symmetric property.

Definition 5.1

(Przybylski et al. [30]) Two extreme \(1\)-supported nondominated images \(y\) and \({\bar{y}}\) are adjacent if and only if \(\varLambda ^{1}(y)\cap \varLambda ^{1}({\bar{y}})\) is a polytope of dimension \(q-2\).

Now we define a bijection between the weighted p-norm weight set components and the weighted sum weight set components of a different but related MODO. This allows us to prove that extreme \(p\)-supported nondominated images are sufficient and necessary to decompose the weight set \(\varLambda \), and derive fundamental structural properties.

We define the function

$$\begin{aligned} g^p:{\mathbb {R}}^q_> \rightarrow {\mathbb {R}}^q_>: y \mapsto z:=(y_1^p,y_2^p,\dots , y_q^p) \end{aligned}$$

for \(p\in {\mathbb {R}}_{\ge 1}\) and consider the transformed multiobjective discrete optimization problem

$$\begin{aligned} \min&\; g^p(f(x)) \\ \text {s.t.}&\; x \in X . \end{aligned}$$
(tMODO)

The set of efficient solutions of a MODO and the related tMODO coincide as \(g^p\) is strictly monotonically increasing in every objective. However, the associated nondominated images clearly differ. For the sake of clarity, we denote the image set of tMODO by \(Z:=g^p(f(X)):=\{g^p(f(x))\in {\mathbb {R}}_>^q:x\in X\}\). Analogously, we denote the set of nondominated images by \(Z_N\). Furthermore, we call these sets transformed for a better distinction. Note that, for each \(y \in Y\), there exists an image \(z \in Z\) such that \(z = g^p(y)\), and vice versa. Hence, we call z the transformed image of y. Moreover, note that \(y\in Y_N\) holds if and only if \(z=g^p(y)\in Z_N\). The weighted sum scalarization of tMODO is the problem

$$\begin{aligned} \min&\; \sum _{i=1}^q \lambda _i z_i \\ \text {s.t.}&\; z \in Z \end{aligned}$$
(\(T^p(\lambda )\))

for which we can construct a weighted sum weight set decomposition. We call the associated weight set components transformed weighted p-norm weight set components and, for \(z=g^p(y)\in Z\), we denote them by

$$\begin{aligned} \varLambda ^1(g^p(y)):= \varLambda ^1(z)=\{\lambda \in \varLambda : \left\Vert z \right\Vert _{1}^{\lambda }\le \left\Vert {\bar{z}} \right\Vert _{1}^{\lambda } \text { for all } {\bar{z}}\in Z\}. \end{aligned}$$
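Both the transformation \(g^p\) and its effect on dominance can be explored computationally. The following Python sketch (the image set is hypothetical and includes one dominated point on purpose) checks that filtering dominated images before or after applying \(g^p\) yields the same indices, since \(g^p\) is strictly increasing in every component:

```python
def g(y, p):
    """Component-wise power transformation g^p(y) = (y_1^p, ..., y_q^p)."""
    return tuple(yi ** p for yi in y)

def nondominated_indices(Y):
    """Indices of images not dominated by any other image."""
    def dominated(y, ybar):
        return all(a <= b for a, b in zip(ybar, y)) and ybar != y
    return {i for i, y in enumerate(Y) if not any(dominated(y, ybar) for ybar in Y)}

# Hypothetical image set; (4, 4, 4) is dominated by (3, 3, 3).
Y = [(5, 2, 1), (2, 5, 2), (2, 4, 3), (3, 3, 3), (4, 4, 4)]
p = 2
Z = [g(y, p) for y in Y]  # transformed image set of the tMODO

# Efficiency is preserved: the nondominated indices coincide.
print(nondominated_indices(Y) == nondominated_indices(Z))  # True
```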

The following theorem states that there exists a bijection between the weighted p-norm weight set components \(\varLambda ^p(y)\) and the transformed weighted p-norm weight set components \(\varLambda ^1(g^p(y))\).

Theorem 5.1

Let \(y\in Y_N\) be a nondominated image, \(p\in {\mathbb {R}}_{\ge 1}\) and \(z=g^p(y)\in Z\). Then, the mapping

$$\begin{aligned} \phi ^p:\varLambda \rightarrow \varLambda : \lambda \mapsto \left( \frac{\lambda _1^p}{\sum _{i=1}^q \lambda _i^p},\dots , \frac{\lambda _q^p}{\sum _{i=1}^q \lambda _i^p}\right) \end{aligned}$$

is continuous, bijective, and maps \(\varLambda ^p(y)\) to \(\varLambda ^1(z)\), i. e., \(\phi ^p(\varLambda ^p(y))=\varLambda ^1(z)\). Furthermore, the inverse of \(\phi ^p\) is given by

$$\begin{aligned} (\phi ^p)^{-1}:\varLambda \rightarrow \varLambda : \lambda \mapsto \left( \frac{\root p \of {\lambda _1}}{\sum _{i=1}^q \root p \of {\lambda _i}},\dots , \frac{\root p \of {\lambda _q}}{\sum _{i=1}^q \root p \of {\lambda _i}}\right) . \end{aligned}$$

Proof

The function \(\phi ^p\) is obviously continuous. We show that \(\phi ^p\) indeed maps weighted p-norm weight set components to the corresponding transformed weighted p-norm weight set components. Let \(y\in Y_N\) and let \(\varLambda ^p(y)=\emptyset \). Thus, there exists no \(\lambda \in \varLambda \) such that \(y\) is an optimal image of \(\varPi ^{p}(\lambda )\). Hence, there is no \(\lambda '\in \varLambda \) such that \(z=g^p(y)\) is an optimal image of \(T^p(\lambda ')\) either. Consequently, the transformed weighted p-norm weight set component is empty, i. e., \(\varLambda ^1(z)=\emptyset \).

Let \(\varLambda ^p(y)\ne \emptyset \) and let \(\lambda \in \varLambda ^p(y)\). Then, for all \({\bar{y}}\in Y\) and the corresponding transformed image \({\bar{z}}:=g^p({\bar{y}})\), it holds that

$$\begin{aligned} \root p \of {\sum _{i=1}^q (\lambda _i y_i)^p} \le \root p \of {\sum _{i=1}^q (\lambda _i {\bar{y}}_i)^p} \text {, if and only if } \sum _{i=1}^q \phi _i^p(\lambda ) z_i \le \sum _{i=1}^q \phi _i^p(\lambda ) {\bar{z}}_i. \end{aligned}$$

This is equivalent to \(\phi ^p(\lambda )\in \varLambda ^1(z) \) if and only if \( \lambda \in \varLambda ^p(y)\). Checking that \((\phi ^p)^{-1}\) is the inverse of \(\phi ^p\) is straightforward. \(\square \)
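Theorem 5.1 lends itself to a numerical check: \(\phi ^p\) is inverted by \((\phi ^p)^{-1}\), and the weighted \(p\)-norm minimizer under \(\lambda \) coincides with the weighted sum minimizer of the tMODO under \(\phi ^p(\lambda )\). A Python sketch using the image set of Example 4.1 (the sample weights are arbitrary):

```python
def phi(lam, p):
    """phi^p: lam -> (lam_i^p / sum_j lam_j^p)."""
    s = sum(l ** p for l in lam)
    return tuple(l ** p / s for l in lam)

def phi_inv(lam, p):
    """(phi^p)^{-1}: lam -> (lam_i^{1/p} / sum_j lam_j^{1/p})."""
    s = sum(l ** (1 / p) for l in lam)
    return tuple(l ** (1 / p) / s for l in lam)

Y = [(5, 2, 1), (2, 5, 2), (2, 4, 3), (1, 2, 4), (3, 3, 2), (4, 1, 2)]
p = 3
Z = [tuple(yi ** p for yi in y) for y in Y]  # transformed images

for lam in [(0.2, 0.3, 0.5), (0.6, 0.1, 0.3), (0.25, 0.5, 0.25)]:
    # Round trip: (phi^p)^{-1}(phi^p(lam)) = lam.
    assert all(abs(a - b) < 1e-9 for a, b in zip(phi_inv(phi(lam, p), p), lam))
    # Same minimizer under the weighted p-norm and the transformed weighted sum.
    mu = phi(lam, p)
    i = min(range(len(Y)), key=lambda k: sum((l * c) ** p for l, c in zip(lam, Y[k])))
    j = min(range(len(Z)), key=lambda k: sum(l * c for l, c in zip(mu, Z[k])))
    assert i == j
print("Theorem 5.1 verified on sample weights")
```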

Since \(T^p(\lambda )\) is a weighted sum scalarization, the transformed images of extreme \(p\)-supported nondominated images are extreme points of \({{\,\textrm{conv}\,}}(Z)+{\mathbb {R}}^q_\geqq \). We denote the set of transformed \(p\)-supported nondominated images by \(Z_{pSN}\) and the set of transformed extreme \(p\)-supported nondominated images by \(Z_{pESN}\). An example of a weighted p-norm weight set decomposition and a transformed weighted p-norm weight set decomposition is depicted in Fig. 3.

Fig. 3

The weighted \(p\)-norm weight set decomposition and the transformed weighted p-norm weight set decomposition for \(p=5\) of Example 4.1. For all \(i=1,\dots ,6\), the transformed image of \(y^i\) is \(z^i\), i. e. \(z^i=g^p(y^i)\)

Theorem 5.1 allows us to transfer the fundamental properties of weighted sum weight set components stated in Proposition 5.1 to weighted p-norm weight set components:

Proposition 5.3

Let \(y\in Y_{pSN}\). Then, the following statements hold true:

  1. (i)

    \(\varLambda ^p(y)= \left\{ \lambda \in \varLambda : \Vert y \Vert _p^\lambda \le \Vert {\bar{y}} \Vert _p^\lambda \text { for all } {\bar{y}}\in Y_{pESN}\right\} \).

  2. (ii)

    \(\phi ^p(\varLambda ^p(y))\) is a nonempty convex polytope.

  3. (iii)

The p-supported nondominated image \(y\) is extreme \(p\)-supported if and only if \(\phi ^p(\varLambda ^p(y))\) has dimension \(q-1\).

  4. (iv)

    \(\varLambda = \bigcup _{{\bar{y}}\in Y_{pESN}}\varLambda ^p({\bar{y}}).\)

  5. (v)

    Let \(S\) be a set of p-supported nondominated images. Then

    $$\begin{aligned} Y_{pESN}\subseteq S\text {, if and only if } \varLambda =\bigcup _{{\bar{y}}\in S} \varLambda ^p({\bar{y}}). \end{aligned}$$

Proof

Let \(z\in Z_{pSN}\) with \(z=g^p(y)\). By Proposition 5.1 (i), it is

$$\begin{aligned} \varLambda ^1(z)=\left\{ \lambda \in \varLambda : \sum _{i=1}^{q} \lambda _i z_i \le \sum _{i=1}^{q} \lambda _i {\bar{z}}_i \text { for all } {\bar{z}}\in Z_{1ESN} \right\} . \end{aligned}$$

Thus,

$$\begin{aligned} \varLambda ^p(y)&=(\phi ^p)^{-1} (\varLambda ^1(z)) \\&= \left\{ (\phi ^p)^{-1}(\lambda ): \lambda \in \varLambda , \sum _{i=1}^{q} \lambda _i z_i\le \sum _{i=1}^{q} \lambda _i {\bar{z}}_i \text { for all } {\bar{z}}\in Z_{1ESN} \right\} \\&=\left\{ \lambda \in \varLambda : \Vert y \Vert _p^\lambda \le \Vert {\bar{y}} \Vert _p^\lambda \text { for all } {\bar{y}}\in Y_{pESN} \right\} . \end{aligned}$$

This implies Statement (i). Statements (ii) and (iii) follow immediately from Theorem 5.1 and Proposition 5.1. Furthermore, by Proposition 5.1 (iv), it holds that

$$\begin{aligned} \varLambda =\bigcup _{{\bar{z}}\in Z_{1ESN}} \varLambda ^1({\bar{z}}). \end{aligned}$$

Therefore,

$$\begin{aligned} \varLambda&= (\phi ^p)^{-1}(\varLambda ) =(\phi ^p)^{-1} \left( \bigcup _{{\bar{z}}\in Z_{1ESN}} \varLambda ^1({\bar{z}})\right) \\&= \bigcup _{{\bar{z}}\in Z_{1ESN}} (\phi ^p)^{-1}\left( \varLambda ^1({\bar{z}})\right) =\bigcup _{{\bar{y}}\in Y_{pESN}} \varLambda ^p({\bar{y}}). \end{aligned}$$

Thus, Statement (iv) holds. Let \(S\) be a set of \(p\)-supported nondominated images and \(T\) be its transformed set. Then, it again holds that

$$\begin{aligned} \varLambda =\bigcup _{{\bar{z}}\in T} \varLambda ^1({\bar{z}})\text {, if and only if } \varLambda =\bigcup _{{\bar{y}}\in S} \varLambda ^p({\bar{y}}). \end{aligned}$$

Hence, using Proposition 5.1 (v), we obtain

$$\begin{aligned} Y_{pESN}\subseteq S \text {, if and only if } \varLambda =\bigcup _{{\bar{z}}\in T} \varLambda ^1({\bar{z}}) \text {, if and only if } \varLambda =\bigcup _{{\bar{y}}\in S} \varLambda ^p({\bar{y}}). \end{aligned}$$

This yields Statement (v). \(\square \)

Proposition 5.3 implies that extreme \(p\)-supported nondominated images suffice to compute weighted p-norm weight set components. By Statement (ii), the weighted p-norm weight set components are preimages of convex polytopes under a bijective and continuous function. Thus, a weighted p-norm weight set component is connected and even path-connected. That is, for two weights \(\lambda ,{\bar{\lambda }}\in \varLambda ^{p}(y)\), there exists a path that is a subset of the weighted p-norm weight set component and connects both weights. In particular, such a path can be obtained by transforming a line segment. Hence, we obtain the following corollary.

Corollary 5.1

Let \(y\in Y_{pSN}\) and \(\lambda ,{\bar{\lambda }}\in \varLambda ^p(y)\). Then, for all \(\theta \in [0,1]\) it holds that the weight \(\lambda '(\theta )\) defined by

$$\begin{aligned} \lambda '_i(\theta )=\frac{\root p \of {\theta \lambda _i^p +(1-\theta ) {\bar{\lambda }}_i^p}}{\sum _{j=1}^q\root p \of {\theta \lambda _j^p +(1-\theta ) {\bar{\lambda }}_j^p}} \text {, for all } i\in \{1,\dots ,q\}, \end{aligned}$$

is contained in the weighted p-norm weight set component of y, i. e. \(\lambda '(\theta )\in \varLambda ^p(y)\).
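Corollary 5.1 can be illustrated with Example 4.1. The two weights below (found by inspection, not taken from the paper) both lie in \(\varLambda ^2(y^1)\), and every sampled point on the connecting path stays in that component:

```python
Y = [(5, 2, 1), (2, 5, 2), (2, 4, 3), (1, 2, 4), (3, 3, 2), (4, 1, 2)]
p = 2

def p_norm(y, lam):
    return sum((l * yi) ** p for l, yi in zip(lam, y)) ** (1 / p)

def path_weight(lam, lam_bar, theta):
    """The path of Corollary 5.1: normalized component-wise p-th mean."""
    raw = [(theta * l ** p + (1 - theta) * lb ** p) ** (1 / p)
           for l, lb in zip(lam, lam_bar)]
    s = sum(raw)
    return tuple(r / s for r in raw)

y1 = (5, 2, 1)
lam, lam_bar = (0.1, 0.2, 0.7), (0.2, 0.3, 0.5)  # both lie in the component of y^1
for theta in [0.0, 0.25, 0.5, 0.75, 1.0]:
    w = path_weight(lam, lam_bar, theta)
    assert abs(sum(w) - 1) < 1e-9               # w lies in the weight set
    assert min(Y, key=lambda y: p_norm(y, w)) == y1  # w stays in Lambda^2(y^1)
print("path stays inside the component")
```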

Proposition 5.3 lays the foundation to define the dimension of weighted p-norm weight set components. Finally, Proposition 5.3 (iv) and (v) state that the extreme \(p\)-supported nondominated images are sufficient as well as necessary to decompose the weight set into weighted p-norm weight set components.

Similarly, Proposition 5.2 can be adapted, thus characterizing the intersection of any two p-norm weight set components:

Proposition 5.4

Let \(y\) and \({\bar{y}}\) be two \(p\)-supported nondominated images and \(\varLambda ^{p}(y)\cap \varLambda ^{p}({\bar{y}})\ne \emptyset \). Then \(\phi ^p(\varLambda ^{p}(y)\cap \varLambda ^{p}({\bar{y}}))\) is the common face of maximal dimension of \(\phi ^p(\varLambda ^{p}(y))\) and \(\phi ^p(\varLambda ^{p}({\bar{y}}))\).

This allows to adapt the notion of adjacency given in Definition 5.1:

Definition 5.2

Two extreme \(p\)-supported nondominated images \(y\) and \({\bar{y}}\) are \(p\)-adjacent if and only if \(\phi ^p(\varLambda ^p(y)\cap \varLambda ^p({\bar{y}}))\) is a polytope of dimension \(q-2\).

Karakaya et al. [26] introduce a weaker concept of p-adjacency: two p-supported nondominated images y and \({\bar{y}}\) are weakly p-adjacent if and only if the intersection of their weighted p-norm weight set components is nonempty, i. e. \(\varLambda ^p(y)\cap \varLambda ^p({\bar{y}})\ne \emptyset \). Note that this definition includes non-extreme p-supported images as well. Additionally, for \(p=1\) it does not reduce to Definition 5.1 by Przybylski et al. [30].

6 The Connection Between Weighted p-Norm and Weighted Tchebycheff Weight Set Decompositions

In this section, we compare the weight set decomposition for the weighted Tchebycheff scalarization with weight set decompositions for weighted p-norm scalarizations. To this end, we first recall properties of the weight set decomposition for the weighted Tchebycheff scalarization. Then, while the weighted Tchebycheff components have a convexity-related property, and the weighted 1-norm components are convex, we show that, for \(p\in {\mathbb {R}}_{>1}\), neither holds true for weighted p-norm weight set components in general (Example 6.1). Finally, we relate weighted p-norm weight set components for large p and weighted Tchebycheff weight set components. We show that for sufficiently large p, weighted p-norm and weighted Tchebycheff weight set components differ only on the intersections of weighted Tchebycheff weight set components (Theorem 3.2).

For any nondominated image \(y\in Y_N\), the weighted Tchebycheff weight set components are defined by

$$\begin{aligned} \varLambda ^{\infty }(y):=\left\{ \lambda \in \varLambda : \left\Vert y \right\Vert _{\infty }^{\lambda }\le \left\Vert {\bar{y}} \right\Vert _{\infty }^{\lambda } \text { for all } {\bar{y}}\in Y\right\} . \end{aligned}$$

Note that this definition aligns with Definition 4.1: the weighted Tchebycheff weight set components coincide with the weighted \(\infty \)-norm weight set components.

In general, weight set decomposition for the weighted Tchebycheff scalarization is more complex than for the weighted sum scalarization, see Helfrich et al. [21]: the components are not convex, and they may overlap in the sense that full-dimensional polytopes, i. e. polytopes of dimension \(q-1\), can be a subset of two or more weighted Tchebycheff weight set components simultaneously. Since for every nondominated image y there exists a weight \(\lambda \in {\mathbb {R}}_{>}^q\) such that y is the unique optimal image of \(\varPi ^\infty (\lambda )\), all nondominated images are necessary and sufficient to decompose the weight set and, hence, methods based on the Tchebycheff weight set decomposition are capable of computing all nondominated images:

Proposition 6.1

(Helfrich et al. [21]) It holds that \(\varLambda =\bigcup _{y\in Y_{N}} \varLambda ^{\infty }(y).\)

Surprisingly, weighted Tchebycheff weight set components have a convexity-related property: they are star-shaped.

Definition 6.1

(Preparata and Shamos [29]) A set \(S\subseteq {\mathbb {R}}^q\) is star-shaped if there exists an element \(y\in S\) such that \(\theta y+(1-\theta ){\bar{y}}\in S\) for all \({\bar{y}}\in S\) and all \(\theta \in (0,1)\). The set of all such points \(y\) is called the kernel of \(S\) and is denoted by \(\ker {S}\).

Theorem 6.1

(Helfrich et al. [21]) Let \(y\in Y_{N}\). Then, \(\varLambda ^\infty (y)\) is a star-shaped set and the weight \(\lambda (y)\) defined by

$$\begin{aligned} \lambda _i(y):=\frac{1}{y_i}\frac{1}{\sum _{j=1}^q 1/y_j} \text { for all } i\in \{1,\dots ,q\} \end{aligned}$$

is in its kernel, i. e. \(\lambda (y)\in \ker {\varLambda ^\infty (y)}\).

Hence, the weight \(\lambda (y)\) is called the kernel weight of y.
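For the image \(y^3=(2,4,3)\) of Example 4.1, the kernel weight of Theorem 6.1 can be computed directly; it equalizes the weighted components and makes \(y^3\) a (here even unique) Tchebycheff minimizer. A small Python sketch:

```python
Y = [(5, 2, 1), (2, 5, 2), (2, 4, 3), (1, 2, 4), (3, 3, 2), (4, 1, 2)]

def kernel_weight(y):
    """lambda_i(y) = (1/y_i) / sum_j (1/y_j) (Theorem 6.1)."""
    s = sum(1 / yi for yi in y)
    return tuple(1 / (yi * s) for yi in y)

def tcheby(y, lam):
    """Weighted Tchebycheff norm ||y||_infty^lam = max_i lam_i * y_i."""
    return max(l * yi for l, yi in zip(lam, y))

y3 = (2, 4, 3)
lam = kernel_weight(y3)                      # (6/13, 3/13, 4/13)
# All weighted components of y^3 are equal, so ||y^3||_infty^lam = 12/13.
assert all(abs(l * yi - 12 / 13) < 1e-9 for l, yi in zip(lam, y3))
# y^3 minimizes the weighted Tchebycheff norm at its kernel weight.
print(min(Y, key=lambda y: tcheby(y, lam)))  # (2, 4, 3)
```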

Remark 6.1

Recall that Theorem 3.1 shows that a \({\tilde{p}}\)-supported nondominated image y is extreme p-supported for all \(p>{\tilde{p}}\). For its proof, we used that any weight \({\tilde{\lambda }}\in \varLambda ^{{\tilde{p}}}(y)\) can be mapped to a weight \(\lambda (y,p,{\tilde{\lambda }})\), defined by \(\lambda _i(y,p,{\tilde{\lambda }})=\root p \of {{\tilde{\lambda }}_i^{{\tilde{p}}}y_i^{{\tilde{p}}-p}}\), for \(i=1,\dots ,q\), such that y uniquely minimizes \(\varPi ^p(\lambda (y,p,{\tilde{\lambda }}))\). Hence, it is \(\frac{\lambda (y,p,{\tilde{\lambda }})}{\left\Vert \lambda (y,p,{\tilde{\lambda }}) \right\Vert _{1}^{}} \in \varLambda ^p(y)\). In fact, the weight \(\frac{\lambda (y,p,{\tilde{\lambda }})}{\left\Vert \lambda (y,p,{\tilde{\lambda }}) \right\Vert _{1}^{}}\) converges to the kernel weight of y for increasing p, since, for all \(i \in \{1, \dots ,q\}\), it holds that

$$\begin{aligned} \lim _{p\rightarrow \infty } \root p \of {{\tilde{\lambda }}_i^{{\tilde{p}}}y_i^{{\tilde{p}}}}=1 \end{aligned}$$

and, thus,

$$\begin{aligned} \lim _{p\rightarrow \infty } {\lambda }_i(y,p,{\tilde{\lambda }})=\lim _{p\rightarrow \infty } \root p \of {{\tilde{\lambda }}_i^{{\tilde{p}}}y_i^{{\tilde{p}}-p}}=\lim _{p\rightarrow \infty } \frac{1}{y_i} \lim _{p\rightarrow \infty }\root p \of {{\tilde{\lambda }}_i^{{\tilde{p}}} y_i^{{\tilde{p}}}}=\frac{1}{y_i}. \end{aligned}$$

While the weighted Tchebycheff weight set components are star-shaped and the weighted sum weight set components are even convex, neither holds true in general for weighted \(p\)-norm weight set components:

Fig. 4

A visualization of Example 6.1 with the weighted 5-norm weight set components for \(Y=\{y^1,y^2,y^3\}\). The definition of the set K can be motivated as follows: every weight \(\lambda \in \ker (\varLambda ^p(y^1))\) must satisfy the two inequalities induced by the tangents of \(\varLambda ^p(y^1)\cap \varLambda ^p(y^2)\) at the weights \({\bar{\lambda }}=\left( \frac{1}{2},\frac{1}{2},0\right) \) and \(\lambda '=\left( 0,\frac{1}{2},\frac{1}{2}\right) \). Both tangents can be calculated by implicit differentiation. Thus, the kernel of \(\varLambda ^p(y^1)\) must be a subset of K (otherwise there exists a convex combination of \({\bar{\lambda }}\) or \(\lambda '\) and a weight in the kernel that is not in \(\varLambda ^p(y^1)\)). Hence, as \(K\cap \varLambda ^p(y^1)=\emptyset \), \(\varLambda ^p(y^1)\) is not star-shaped

Example 6.1

Let \(p\in {\mathbb {R}}_{>1}\), \(f=\text {id}\), and

$$\begin{aligned} X=Y&=\Bigg \{ y^1=(a,b,a)^\top ,y^2=(b,a,b)^\top , \\&\qquad y^3 = \left( a \root p \of { 1 - \frac{2}{a^p}} , b \root p \of {1 + \frac{3}{b^p}}, a \root p \of {1 - \frac{2}{a^p}} \right) ^\top \Bigg \}, \end{aligned}$$

where \(a,b\in {\mathbb {R}}_>\) with \(b>a\) are chosen arbitrarily, but fixed. We prove that \(\varLambda ^p(y^1)\) is not star-shaped for any \(p\in {\mathbb {R}}_{>1}\), i. e. \(\ker {\varLambda ^p(y^1)}=\emptyset \). A visualization of the following can be found in Fig. 4. First, we show that the kernel of \(\varLambda ^p(y^1)\) must be a subset of the set

$$\begin{aligned} K:=\left\{ \lambda \in \varLambda :\lambda _2\le \frac{1}{2}-\frac{1}{2} \lambda _1 \text { and } \lambda _2 \le \lambda _1\right\} . \end{aligned}$$

Suppose \(\ker {\varLambda ^p(y^1)}\nsubseteq K\). Thus, there exists a \(\lambda \in \ker {\varLambda ^p(y^1)}\backslash K\). Then, either \(\lambda _2>\frac{1}{2}-\frac{1}{2}\lambda _1\) or \(\lambda _2>\lambda _1\). By Definition 6.1, for \(\lambda \) to be in the kernel of \(\varLambda ^p(y^1)\), for any two weights \({\bar{\lambda }},\lambda '\in \varLambda ^p(y^1)\cap \varLambda ^p(y^2)\) and all \(\theta \in [0,1]\), it must be that \(\theta \lambda +(1-\theta ){\bar{\lambda }}\in \varLambda ^p(y^1)\) and \(\theta \lambda + (1-\theta )\lambda '\in \varLambda ^p(y^1)\). In particular, this must hold for \({\bar{\lambda }}=\left( \frac{1}{2},\frac{1}{2},0\right) \) and \(\lambda '=\left( 0,\frac{1}{2},\frac{1}{2}\right) \) (it is easy to check that \({\bar{\lambda }},\lambda '\in \varLambda ^p(y^1)\cap \varLambda ^p(y^2)\)). We prove that \(\lambda _2>\lambda _1\) implies \(\theta \lambda +(1-\theta ) {\bar{\lambda }}\notin \varLambda ^p(y^1)\) for sufficiently small \(\theta \). Consider the function

$$\begin{aligned} k: [0,1]\rightarrow {\mathbb {R}}_\ge : \theta \mapsto \left( \left\Vert y^2 \right\Vert _{p}^{\theta \lambda +(1-\theta ) {\bar{\lambda }}}\right) ^p- \left( \left\Vert y^1 \right\Vert _{p}^{\theta \lambda +(1-\theta ) {\bar{\lambda }}}\right) ^p. \end{aligned}$$

Then, \(k(0)=0\) as \({\bar{\lambda }}\in \varLambda ^p(y^1)\cap \varLambda ^p(y^2)\). Furthermore, the first derivative of k at 0 is

$$\begin{aligned} p(b^p-a^p)\left( \frac{1}{2}\right) ^{p-1}(\lambda _1-\lambda _2)<0. \end{aligned}$$

Thus, \(k(\theta )<0\) for sufficiently small \(\theta \) and, hence, \(\theta \lambda +(1-\theta ) {\bar{\lambda }} \notin \varLambda ^p(y^1) \). That \(\lambda _2>\frac{1}{2}-\frac{1}{2}\lambda _1\) implies \(\theta \lambda + (1-\theta )\lambda '\notin \varLambda ^p(y^1)\) for sufficiently small \(\theta \) can be shown analogously.

However, it holds that \(K\subset \varLambda ^p(y^3)\backslash \varLambda ^p(y^1)\): first, note that

$$\begin{aligned} \left( \left\Vert y^1 \right\Vert _{p}^{\lambda }\right) ^p-\left( \left\Vert y^3 \right\Vert _{p}^{\lambda }\right) ^p&= (\lambda _1^p +\lambda _3^p) \left( a^p - a^p\left( 1 - \frac{2}{a^p}\right) \right) \\&\quad + \lambda _2^p \left( b^p - b^p \left( 1 + \frac{3}{b^p}\right) \right) \\&= 2 (\lambda _1^p + \lambda _3^p) - 3\lambda _2^p. \end{aligned}$$

Furthermore, all \(\lambda \in K\) satisfy \(\lambda _2 \le \frac{1}{3}\), and therefore

$$\begin{aligned} 2 (\lambda _1^p + \lambda _3^p) - 3 \lambda _2^p \ge 2 (\lambda _1^p + \lambda _3^p) - 3 \left( \frac{1}{3}\right) ^p \ge 4 \left( \frac{1}{3}\right) ^p - 3 \left( \frac{1}{3}\right) ^p>0, \end{aligned}$$

where the second inequality follows since \(\lambda _1^p+\lambda _3^p\) is minimal at \(\lambda _1=\lambda _3=\frac{1}{3}\) under the condition \(\lambda _1+\lambda _3=\frac{2}{3}\). Thus, \(\left\Vert y^1 \right\Vert _{p}^{\lambda } > \left\Vert y^3 \right\Vert _{p}^{\lambda }\) for all \(\lambda \in K\), and therefore \(K\cap \varLambda ^p(y^1)=\emptyset \). Hence, it must be that \(\ker {\varLambda ^p(y^1)}=\emptyset \) and \(\varLambda ^p(y^1)\) cannot be star-shaped.
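The computation in Example 6.1 can be reproduced numerically. The sketch below instantiates the example with the arbitrary choices \(a=2\), \(b=3\), \(p=2\) and checks, for sample weights in K, both the closed-form difference of the p-th powers and the resulting strict inequality:

```python
a, b, p = 2.0, 3.0, 2  # arbitrary choices with b > a and p > 1

r = (1 - 2 / a ** p) ** (1 / p)
y1 = (a, b, a)
y3 = (a * r, b * (1 + 3 / b ** p) ** (1 / p), a * r)

def p_norm_pow(y, lam):
    """(||y||_p^lam)^p = sum_i (lam_i * y_i)^p."""
    return sum((l * yi) ** p for l, yi in zip(lam, y))

def in_K(lam):
    return lam[1] <= 0.5 - 0.5 * lam[0] and lam[1] <= lam[0]

# Sample weights in K (lam_2 small, lam_2 <= lam_1).
for lam in [(0.5, 0.2, 0.3), (0.4, 0.1, 0.5), (0.6, 0.2, 0.2)]:
    assert in_K(lam)
    diff = p_norm_pow(y1, lam) - p_norm_pow(y3, lam)
    # Closed form from the example: 2(lam_1^p + lam_3^p) - 3 lam_2^p.
    closed = 2 * (lam[0] ** p + lam[2] ** p) - 3 * lam[1] ** p
    assert abs(diff - closed) < 1e-9
    assert diff > 0  # hence ||y^1||_p^lam > ||y^3||_p^lam on K
print("K is disjoint from the component of y^1")
```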

Next, we characterize the connection between the weighted p-norm weight set components for large p and the weighted Tchebycheff weight set components. By Theorem 3.2, each weight which is contained in exactly one weighted Tchebycheff weight set component \(\varLambda ^{\infty }(y)\) is also contained in the corresponding weighted p-norm weight set component \(\varLambda ^p(y)\) for sufficiently large p. As a consequence, we obtain:

Corollary 6.1

Let \(\lambda \in \varLambda \) be a weight and \(y\in Y_N\) be a nondominated image such that \(\lambda \in \varLambda ^\infty (y)\) and \( \lambda \notin \varLambda ^\infty ({\bar{y}})\) for all \({\bar{y}}\in Y_N {\setminus } \{y\}\). Then, there exists a \({\bar{p}}\in {\mathbb {R}}_{>1}\) such that \(\lambda \in \varLambda ^p(y)\) for all \(p>{\bar{p}}\) and \(\lambda \notin \varLambda ^p({\bar{y}})\) for all \({\bar{y}}\in Y_N \setminus \{y\}\).

The following example shows that this does not hold for weights in the intersection of two or more weighted Tchebycheff weight set components.

Example 6.2

We consider the image set introduced in Example 4.1 and study the intersection of the weighted Tchebycheff weight set components of the nondominated images \(y^2=(2,5,2)^\top \) and \(y^3=(2,4,3)^\top \). For each weight in the polytope

$$\begin{aligned} O=\left\{ \lambda \in \varLambda : \lambda _1 \ge \frac{3-3\lambda _2}{5},\lambda _1\ge \frac{5}{2}\lambda _2, \lambda _1\le \frac{2-2\lambda _2}{3}\right\} , \end{aligned}$$

it holds that \(\left\Vert y^2 \right\Vert _{\infty }^{\lambda }=2\lambda _1=\left\Vert y^3 \right\Vert _{\infty }^{\lambda }\). Consequently, the intersection \(\varLambda ^\infty (y^2) \cap \varLambda ^\infty (y^3)\) contains a polytope of dimension \(q-1\). By Proposition 5.4, the intersection of any two weighted p-norm weight set components cannot contain a polytope of dimension \(q-1\): Let \(y,{\bar{y}}\in Y_N\) such that \(\varLambda ^p(y)\cap \varLambda ^p({\bar{y}})\) contains a polytope P of dimension \(q-1\). Thus, there exists a \(\lambda \in P\) and a sufficiently small \(\varepsilon >0\) such that \(B_{\varepsilon }(\lambda )\cap \varLambda \subseteq P \subseteq \varLambda ^p(y) \cap \varLambda ^p({\bar{y}})\). Hereby, \(B_{\varepsilon }(\lambda )\) denotes the \(\varepsilon \)-neighborhood of \(\lambda \). Then, since \(\phi ^p\) is a continuous function, there exists an \(\varepsilon '>0\) such that \(B_{\varepsilon '}(\phi ^p(\lambda ))\cap \varLambda \subseteq \phi ^p(\varLambda ^p(y) \cap \varLambda ^p({\bar{y}}))\). Hence, we can find a \(q-1\) dimensional polytope \(P'\subseteq B_{\varepsilon '}(\phi ^p(\lambda ))\cap \varLambda \subseteq \phi ^p(\varLambda ^p(y) \cap \varLambda ^p({\bar{y}}))\) which contradicts Proposition 5.4. Thus, no result similar to Theorem 3.2 holds for all weights \(\lambda \in O\) (cf. Figure 5).

Fig. 5

Weighted p-norm weight set decompositions (for \(p=50,\infty \)) of Example 4.1. For \(p=\infty \), i.e. the weighted Tchebycheff weight set decomposition, more than six partitions are visible. The additional partitions are intersections of weighted Tchebycheff weight set components, specifically, \(\varLambda ^\infty (y^2)\cap \varLambda ^\infty (y^3)=O\). A detailed characterization of this phenomenon is given by Helfrich et al. [21]

Next, we show that for a weight \(\lambda \in \varLambda \) in the intersection of two or more weighted Tchebycheff weight set components, i. e. \(\lambda \in \bigcap _{y\in S(\lambda )} \varLambda ^\infty (y)\) for some \(S(\lambda )\subseteq Y_N\), there exists a \(p'\in {\mathbb {R}}_{\ge 1}\) such that, for all \(p\ge p'\), the weight \(\lambda \) is contained in exactly the weighted p-norm weight set components of the images in a subset \(U(\lambda ) \subseteq S(\lambda )\). The next theorem specifies \(U(\lambda )\) exactly: the weight \(\lambda \) is contained in the weighted p-norm weight set components of all nondominated images \(y\in S(\lambda )\) for which the vector \((\lambda _1 y_1, \dots , \lambda _q y_q)\) sorted in decreasing order is minimal with respect to the lexicographic order. More precisely, we denote the function that maps nondominated images \(y\in Y_N\) and weights \(\lambda \in \varLambda \) to \((\lambda _1 y_1,\dots , \lambda _q y_q)\) and sorts the components of the vector in decreasing order by

$$\begin{aligned} \psi : Y_N \times \varLambda \rightarrow {\mathbb {R}}^q_> \text { with } \left( y,\lambda \right) \mapsto \left( \lambda _{\sigma (1)} y_{\sigma (1)},\dots ,\lambda _{\sigma (q)} y_{\sigma (q)}\right) , \end{aligned}$$

where \(\sigma \) is a permutation of the components such that

$$\begin{aligned} \lambda _{\sigma (1)} y_{\sigma (1)}\ge \lambda _{\sigma (2)} y_{\sigma (2)}\ge \dots \ge \lambda _{\sigma (q)} y_{\sigma (q)}. \end{aligned}$$
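Computationally, \(\psi \) amounts to weighting the components and sorting them in decreasing order; tuple comparison in Python then coincides with \(\le _{\text {lex}}\). A minimal sketch (the helper name psi is our choice, and the images are those of Example 4.1):

```python
def psi(y, lam):
    """psi(y, lambda): the weighted components sorted in decreasing order."""
    return tuple(sorted((li * yi for li, yi in zip(lam, y)), reverse=True))

# Python compares tuples lexicographically, matching <=_lex.
y2, y3 = (2, 5, 2), (2, 4, 3)        # images from Example 4.1
lam = (0.55, 0.15, 0.30)             # a sample weight with lambda_1 y_1 maximal
print(psi(y2, lam))                  # (1.1, 0.75, 0.6)
print(psi(y2, lam) <= psi(y3, lam))  # True: psi(y^2,lam) <=_lex psi(y^3,lam)
```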

Furthermore, we denote the lexicographic order by \(\le _{\text {lex}}\). Then:

Theorem 6.2

Let \(\lambda \in \varLambda \) be a weight and \(S(\lambda )\subseteq Y_N\) be a set of nondominated images such that \(\lambda \in \bigcap _{y\in S(\lambda )} \varLambda ^{\infty }(y)\) and \(\lambda \notin \varLambda ^\infty ({\bar{y}})\) for all \({\bar{y}}\in Y_N\backslash S(\lambda )\). Define

$$\begin{aligned} U(\lambda ):=\left\{ y\in Y_N: \psi (y,\lambda )\le _{\text {lex}} \psi ({\bar{y}},\lambda ) \text { for all } {\bar{y}}\in Y_N\right\} \subseteq S(\lambda ). \end{aligned}$$

Then there exists a \(p'\in {\mathbb {R}}_{\ge 1}\) such that for all \(p\ge p'\) it holds that

$$\begin{aligned} \lambda \in \varLambda ^p(y) \text {, if and only if } y\in U(\lambda ) \end{aligned}$$

holds true for all \(y\in Y_N\).

Proof

Note that for \(y\in S(\lambda )\) and \({\tilde{y}}\in Y_N\backslash S(\lambda )\) it holds that \(\left\Vert y \right\Vert _{\infty }^{\lambda }< \left\Vert {\tilde{y}} \right\Vert _{\infty }^{\lambda }\). Thus, we have \(\psi (y,\lambda )<_{\text {lex}}\psi ({\tilde{y}},\lambda )\) and hence, \(U(\lambda )\subseteq S(\lambda )\).

First, we show that a sufficiently large \(p'\in {\mathbb {R}}_{\ge 1}\) exists. Sorting the components of a vector does not change the value of a (weighted) \(p\)-norm, hence \(\left\Vert y \right\Vert _{p}^{\lambda }=\left\Vert \psi (y,\lambda ) \right\Vert _{p}^{}\). Let \(y,{\bar{y}}\in Y_N\) be nondominated images such that \(\psi ({\bar{y}},\lambda )<_{\text {lex}} \psi (y,\lambda )\), and denote by \(j\in \{1,\dots ,q\}\) the first index in which the two vectors differ, that is,

$$\begin{aligned} j:=\min \left\{ i\in \{1,\dots ,q\}:\psi _i({\bar{y}}, \lambda )<\psi _i(y,\lambda )\right\} . \end{aligned}$$

We set

$$\begin{aligned} \delta := \psi _j(y,\lambda ) - \psi _j({\bar{y}},\lambda ) >0. \end{aligned}$$
(2)

Additionally, for \({\hat{y}}\in \{y,{\bar{y}}\}\) it holds that

$$\begin{aligned} \lim _{p\rightarrow \infty } \root p \of {\sum _{i=j}^q \psi _i({\hat{y}},\lambda )^p} = \psi _j({\hat{y}},\lambda ). \end{aligned}$$

Thus, there exists a \(p'\in {\mathbb {R}}_{\ge 1}\) such that for all \(p\ge p'\) it holds that

$$\begin{aligned} \left| \root p \of {\sum _{i=j}^q \psi _i({\hat{y}},\lambda )^p} - \psi _j({\hat{y}},\lambda ) \right| < \frac{\delta }{2}. \end{aligned}$$
(3)

Next, we show that \(y\notin U(\lambda )\) implies \(\lambda \notin \varLambda ^p(y)\) for all \(p\ge p'\). For such a \(y\), choose \({\bar{y}}\in U(\lambda )\) in the construction above; since \(Y_N\) is finite, \(p'\) can be chosen such that (3) holds for all resulting pairs simultaneously. It holds that

$$\begin{aligned} \root p \of {\sum _{i=j}^q \psi _i(y,\lambda )^p} - \root p \of {\sum _{i=j}^q \psi _i({\bar{y}},\lambda )^p}&{\mathop {>}\limits ^{(3)}} \left( \psi _j(y,\lambda ) - \frac{\delta }{2}\right) - \left( \frac{\delta }{2} + \psi _j({\bar{y}},\lambda )\right) \\&= \psi _j(y,\lambda ) - \frac{\delta }{2} - \psi _j({\bar{y}},\lambda ) - \frac{\delta }{2} \\&{\mathop {=}\limits ^{(2)}} \delta -\delta \\&= 0. \end{aligned}$$

Hence,

$$\begin{aligned} \left( \left\Vert y \right\Vert _{p}^{\lambda }\right) ^p - \left( \left\Vert {\bar{y}} \right\Vert _{p}^{\lambda }\right) ^p = \sum _{i=j}^q \psi _i(y,\lambda )^p - \sum _{i=j}^q \psi _i({\bar{y}},\lambda )^p >0 \end{aligned}$$

and thus \( \left\Vert y \right\Vert _{p}^{\lambda } > \left\Vert {\bar{y}} \right\Vert _{p}^{\lambda }\). Consequently, \(\lambda \not \in \varLambda ^p(y)\).

Finally, we show that \(y\in U(\lambda )\) implies \(\lambda \in \varLambda ^p(y)\) for all \(p\ge p'\). Since the weighted p-norm weight set components cover \(\varLambda \) and, as shown above, \(\lambda \notin \varLambda ^p({\bar{y}})\) for all \({\bar{y}}\in Y_N\backslash U(\lambda )\), there exists at least one \(y\in U(\lambda )\) with \(\lambda \in \varLambda ^p(y)\). Furthermore, for all \(y,y'\in U(\lambda )\) and \(p\in {\mathbb {R}}_{\ge 1}\) it holds that

$$\begin{aligned} \left\Vert y \right\Vert _{p}^{\lambda }=\left\Vert y' \right\Vert _{p}^{\lambda } \end{aligned}$$

since \(\psi (y,\lambda )\le _{\text {lex}}\psi (y',\lambda )\) as well as \(\psi (y',\lambda )\le _{\text {lex}}\psi (y,\lambda )\) and thus, \(\psi (y,\lambda )=\psi (y',\lambda )\). Consequently, \(\lambda \in \varLambda ^p(y)\) for all \(y\in U(\lambda )\). \(\square \)

We finish this section by illustrating the application of Theorem 6.2 to Example 6.2, specifically considering the polytope O.

Example 6.3

Recall that the intersection of the weighted Tchebycheff weight set components of \(y^2=(2,5,2)^\top \) and \(y^3=(2,4,3)^\top \) contains the polytope

$$\begin{aligned} O=\left\{ \lambda \in \varLambda : \lambda _1 \ge \frac{3-3\lambda _2}{5},\lambda _1\ge \frac{5}{2}\lambda _2, \lambda _1\le \frac{2-2\lambda _2}{3}\right\} . \end{aligned}$$

Let \(\lambda \) be a weight in the interior of O. The case that \(\lambda \) is contained in the boundary of O can be treated analogously by also considering the image \(y^4\).

Theorem 6.2 implies that, for sufficiently large p, \(\lambda \in \varLambda ^p(y^2)\) if \(\psi (y^2,\lambda )\le _{\text {lex}} \psi (y^3,\lambda )\), and \(\lambda \in \varLambda ^p(y^3)\) otherwise. Since \(y_1^2=y_1^3\) and \(\max _{i\in \{1,\dots ,q\}} \lambda _i y_i^2=\lambda _1 y_1^2=\max _{i\in \{1,\dots ,q\}} \lambda _i y_i^3\), only the second-largest component is relevant for the lexicographic minimum. Hence,

$$\begin{aligned} \psi (y^2,\lambda )\le _{\text {lex}} \psi (y^3,\lambda ) \text {, if and only if }\max \{5 \lambda _2, 2\lambda _3\}\le \max \{4 \lambda _2, 3\lambda _3\}. \end{aligned}$$

Therefore, we can conclude that

$$\begin{aligned} O(y^2)=\left\{ \lambda \in O: \lambda _1\le \frac{3-8\lambda _2}{3} \right\} \end{aligned}$$

is the set of all \(\lambda \in O\) that are contained in the weighted p-norm weight set component of \(y^2\) for a sufficiently large \(p\), and, analogously,

$$\begin{aligned} O(y^3)=\left\{ \lambda \in O: \lambda _1\ge \frac{3-8\lambda _2}{3} \right\} \end{aligned}$$

is the set of all \(\lambda \in O\) that are contained in the weighted p-norm weight set component of \(y^3\) for a sufficiently large p.

Thus,

$$\begin{aligned} \lambda \in O(y^2)\backslash O(y^3)&\text {, if and only if } U(\lambda )=\{y^2\},\\ \lambda \in O(y^3)\backslash O(y^2)&\text {, if and only if } U(\lambda )=\{y^3\}, \text { and} \\ \lambda \in O(y^2)\cap O(y^3)&\text {, if and only if } U(\lambda )=\{y^2,y^3\}. \end{aligned}$$
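The case distinction above can be validated numerically. The following sketch is our own: \(p=200\) and the margin \(10^{-2}\) are ad hoc choices, and the reference point is again assumed to be the origin. It samples weights in O away from the boundary between \(O(y^2)\) and \(O(y^3)\), where larger and larger p would be required, and checks that the weighted p-norm comparison of \(y^2\) and \(y^3\) agrees with the inequality \(\lambda _1 \lessgtr \frac{3-8\lambda _2}{3}\).

```python
import numpy as np

def in_O(lam):
    """Membership test for the polytope O from the example."""
    l1, l2 = lam[0], lam[1]
    return l1 >= (3 - 3 * l2) / 5 and l1 >= 2.5 * l2 and l1 <= (2 - 2 * l2) / 3

p = 200
rng = np.random.default_rng(0)
checked = 0
for _ in range(2000):
    lam = rng.dirichlet(np.ones(3))  # uniform sample from the weight set
    # keep only weights in O with some distance to the O(y^2)/O(y^3) boundary
    if not in_O(lam) or abs(lam[0] - (3 - 8 * lam[1]) / 3) < 1e-2:
        continue
    # lambda_1 y_1 agrees for y^2 = (2,5,2) and y^3 = (2,4,3), so we compare
    # only the remaining terms of the p-th power of the weighted p-norm; this
    # avoids losing the tiny difference to floating-point roundoff in the
    # common dominant term
    s2 = (5 * lam[1]) ** p + (2 * lam[2]) ** p   # from y^2
    s3 = (4 * lam[1]) ** p + (3 * lam[2]) ** p   # from y^3
    prefers_y2 = s2 < s3                         # ||y^2||_p < ||y^3||_p
    assert prefers_y2 == (lam[0] < (3 - 8 * lam[1]) / 3)
    checked += 1
print(f"agreement verified for {checked} sampled weights in O")
```

Comparing the p-th powers after canceling the equal leading terms mirrors the sums \(\sum _{i=j}^q \psi _i(\cdot ,\lambda )^p\) used in the proof of Theorem 6.2.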

7 Conclusion

Until recently, weight set decomposition methods have been thoroughly studied only for the weighted sum and the weighted Tchebycheff scalarization. With this work, we link the weight set analysis of these scalarization methods. Moreover, we characterize extreme p-supported nondominated images by means of the decomposition of the weight set into its weight set components. We show that fundamental properties of the weighted sum weight set decomposition for a multiobjective discrete optimization problem can be transferred to weighted p-norm weight set decompositions, in particular, establishing path-connectedness of the weight set components and a refined notion of adjacency between (subsets of) nondominated images. Further, we capture similarities and differences of (the limit of) weighted p-norm weight set decompositions and the weighted Tchebycheff weight set decomposition. Though theoretical, our research implies a pragmatic approach for sensitivity analysis of nondominated images and even sets of nondominated images. Additionally, it provides new insights into the structure of the nondominated set and may set the theoretical foundation for the development of new algorithms enumerating (subsets of) nondominated images. Specifically, if the preference function of the decision maker can be modeled by a weighted p-norm, Karakaya et al. [26] argue that only p-supported solutions are of interest for the decision maker and that, for any weight, there is always an extreme p-supported efficient solution that is most preferable. Hence, a method generating weighted p-norm weight set decompositions provides a decision maker with all solutions of interest as well as additional information that can be used to evaluate extreme p-supported efficient solutions, e.g., by employing some form of volume of the weighted p-norm weight set components (as done by Karakaya et al. [25] for the Tchebycheff norm).
Furthermore, thanks to the relation between weighted p-norm weight set decompositions and the weighted sum weight set decomposition, algorithms that generate the weighted sum weight set decomposition, see [1, 6, 18, 28, 30], can be adapted immediately to efficiently generate weighted p-norm weight set decompositions.