Adaptive piecewise linear relaxations for enclosure computations for nonconvex multiobjective mixed-integer quadratically constrained programs

In this paper, a new method for computing an enclosure of the nondominated set of multiobjective mixed-integer quadratically constrained programs without any convexity requirements is presented. In fact, our criterion space method makes use of piecewise linear relaxations in order to bypass the nonconvexity of the original problem. The method chooses adaptively which level of relaxation is needed in which parts of the image space. Furthermore, it is guaranteed that after finitely many iterations, an enclosure of the nondominated set of prescribed quality is returned. We demonstrate the advantages of this approach by applying it to multiobjective energy supply network problems.


Introduction
Multiobjective optimization problems arise if there is more than one objective of interest and the objectives are in conflict with each other, i.e. in general it is not possible to obtain a solution that is optimal w.r.t. each of the objective functions simultaneously. Therefore, the nondominated set (cf. Definition 2.1) consists of so-called optimal compromises. Having an overview of (at least some of) these optimal compromises is an advantage, as one can observe the interplay between the objective functions in a more direct way. Especially in real-world applications, it occurs rather rarely that only one objective is of interest. This is also the case in our application, where we are looking for energy supply network plans that are optimal w.r.t. the occurring costs as well as the accompanying CO2 emissions. Naturally, a network plan that is low in emissions is more costly and vice versa. Thus, having a list of optimal compromises between these two extreme solutions helps political decision-makers to get a broader picture of the situation.
In most cases, solving a multiobjective optimization problem (MOP) numerically means that one computes either an enclosure (cf. [23][24][25][26]) or an approximation (cf. [5,10,17,30,33]) of the nondominated set. This is due to the fact that the nondominated set is in general infinite. In the linear case, it is also possible to compute the entire nondominated set (cf., e.g., [47]). In this paper, we introduce a deterministic approach to compute an enclosure of the nondominated set with a prescribed quality ε > 0. Although the focus of our algorithm is to compute such an enclosure, it also returns an approximation of the nondominated set, namely a finite set of ε-nondominated points of (MOP) (see Definition 2.3).
In this work, we present two new methods to compute such enclosures. The first method (see Algorithm 2) tackles the problem directly without being too complicated. In fact, it is a slight modification of the method presented in [23] in that we use the Restricted Weighted Sum Method (cf. [31,32]) to obtain nondominated points of the problem. While being very effective for problems where single objective subproblems can be solved quickly, it is not applicable (in terms of efficiency) to more complex problems, which can happen even with a comparably small number of variables in the nonconvex mixed-integer setting. Furthermore, this approach depends heavily on the availability of solvers for the resulting single objective problems. These drawbacks are addressed with the second method, where we combine the direct approach with piecewise linear relaxations to bypass the nonlinearities of the original problems; therefore, only single objective mixed-integer linear programming (MILP) problems have to be solved, for which powerful solvers are available (cf. [14,29]). The accuracy of these relaxations is chosen adaptively by the method itself during the solution process. Consequently, it is possible to use different levels of accuracy in different parts of the criterion space, which can be of importance if even the relaxed problems are difficult to solve.
This approach is the first of its kind for a (MOP). In [24] a method is presented to solve multiobjective convex mixed-integer optimization problems via relaxations. It makes use of cuts in the criterion space in order to discard infeasible integer assignments as well as to separate parts that do not belong to the image of the feasible set. In [25] the authors present the first deterministic method being able to deal with multiobjective mixed-integer nonconvex problems. However, the method presented there is based on a branch-and-bound scheme in the decision space which is a fundamental difference from the methods presented in this paper. Note further that due to clarity and conciseness, we restrict ourselves to nonconvex quadratically constrained problems in this paper as this is of relevance to our application. However, the presented method can be straightforwardly generalized to polynomially constrained programs using reduction techniques to decompose polynomials into quadratic and bilinear terms (cf., e.g., [45] and Remark 4.3).
The manuscript is organized as follows: After introducing preliminary notions in Sect. 2 we define enclosures and related notions as well as provide a basic approach for computing such an enclosure in Sect. 3. In Sect. 4 we introduce and relate piecewise linear relaxations of (MOP) to so-called lower bound sets of the nondominated set, which results in a new procedure for computing an enclosure using adaptively chosen relaxations presented in Sect. 5. We further prove correct and finite termination of this procedure. In Sect. 6 we demonstrate the advantages of using this novel approach by applying it to different instances of an energy supply network optimization problem, which are in fact nonconvex multiobjective mixed-integer quadratically constrained problems. Finally, a conclusion is drawn in Sect. 7.

Notations and definitions
Throughout we write x ≤ y for x, y ∈ R^k if x_i ≤ y_i for all i ∈ [k] := {1, . . . , k}. In the same manner we write x < y if x_i < y_i for all i ∈ [k].
The ingredients of a general multiobjective optimization problem are the continuous components f_i : R^{n+m} → R, i ∈ [k], of the objective function f = (f_1, . . . , f_k) : R^{n+m} → R^k, the continuous components h_i : R^{n+m} → R, i ∈ [p], of the equality constraint function h = (h_1, . . . , h_p) : R^{n+m} → R^p, and the continuous components g_i : R^{n+m} → R, i ∈ [q], of the inequality constraint function g = (g_1, . . . , g_q) : R^{n+m} → R^q. For all of these functions, we only allow quadratic and bilinear terms. The corresponding multiobjective optimization problem is then given by

min f(x_C, x_I)  s.t.  g(x_C, x_I) ≤ 0,  h(x_C, x_I) = 0,  x_C ∈ X_C,  x_I ∈ X_I,   (MOP)

where X_C = [ℓ_C, u_C] ⊆ R^n and X_I = [ℓ_I, u_I] ∩ Z^m ⊆ R^m are non-empty boxes with ℓ_C, u_C ∈ R^n, ℓ_C < u_C, and ℓ_I, u_I ∈ Z^m, ℓ_I < u_I, respectively. These boxes represent the box constraints of the continuous and integer variables, respectively. If m = 0, i.e. if there are no integrality constraints for any variable, we call (MOP) a continuous multiobjective optimization problem, and if n = 0 we call it a pure integer multiobjective optimization problem. Note that the components of the occurring functions f, g, and h are neither assumed to be linear nor convex. For a survey on solution methods for general versions of (MOP) we refer the reader to [22]. Single objective versions of (MOP) we denote as MIQCP (or, more generally, MINLP) problems. Interested readers are referred to [1,6,9,36,43,52]. As we may require integrality of some variables and due to the possible nonconvexity of the occurring functions of (MOP), we obtain its possibly nonconvex feasible set

S := {x = (x_C, x_I) ∈ X_C × X_I | g(x) ≤ 0, h(x) = 0},

which we assume to be nonempty, as well as its possibly nonconvex image set f(S). We define the projection of the feasible set on R^m by S_I := {x_I ∈ Z^m | ∃ x_C ∈ R^n : (x_C, x_I) ∈ S}, which we call the set of feasible integer assignments of (MOP).
For a given integer assignment x̄_I ∈ Z^m we define the sets S_{x̄_I} := {x ∈ S | x_I = x̄_I} and X_{x̄_I} := {x ∈ X_C × X_I | x_I = x̄_I}, describing the feasible and box-feasible solutions corresponding to x̄_I, respectively. Notice that S_{x̄_I} = ∅ holds, provided that x̄_I ∈ Z^m \ S_I. Since we assume all constraint functions to be continuous and by the definition of the box constraints, the feasible set S is compact. Furthermore, by the continuity of the objective functions, we know that f(S) is also compact, and we can therefore choose a box B = [z^ℓ, z^u] with f(S) ⊆ int(B).

In general, the k objective functions of (MOP) are in conflict with each other, i.e. we cannot assume that there is a solution x̄ ∈ S minimizing all objective functions simultaneously. This motivates the concepts of (non-)dominance and efficiency (cf. [20]).
Definition 2.1 (1) Let y^1, y^2 ∈ R^k. Then y^1 dominates y^2 (and y^2 is dominated by y^1) if y^1 ≤ y^2 and y^1 ≠ y^2. If even y^1_i < y^2_i for all i ∈ [k], then y^1 strictly dominates y^2 (and y^2 is strictly dominated by y^1) and we write y^1 < y^2.
(2) Let y ∈ R k and N ⊂ R k . Then the vector y is (strictly) dominated by the set N if there exists a vectorŷ ∈ N (strictly) dominating y. Similarly, if y is not (strictly) dominated by N we call y (weakly) nondominated by N .
(3) A feasible point x̄ ∈ S is called an efficient solution of (MOP) if there is no x ∈ S such that f(x) dominates f(x̄). If there is no x ∈ S such that f(x) strictly dominates f(x̄), then x̄ is called a weakly efficient solution of (MOP). The set of (weakly) efficient solutions of (MOP) is called its (weakly) efficient set.
(4) The image ȳ = f(x̄) of a (weakly) efficient solution x̄ ∈ S is called a (weakly) nondominated point of (MOP). The set of (weakly) nondominated points of (MOP) is called its (weakly) nondominated set and is denoted by N (and N_w).
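The componentwise relations of Definition 2.1 are easy to state in code. The following is a minimal sketch (the function names are our own illustration) that also filters a finite set of image points down to its nondominated subset:

```python
def dominates(y1, y2):
    """y1 dominates y2: y1 <= y2 componentwise and y1 != y2 (Definition 2.1)."""
    return all(a <= b for a, b in zip(y1, y2)) and any(a < b for a, b in zip(y1, y2))

def strictly_dominates(y1, y2):
    """y1 strictly dominates y2: y1 < y2 componentwise."""
    return all(a < b for a, b in zip(y1, y2))

def nondominated(points):
    """Keep exactly those points of a finite set that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, among the image points (1, 3), (2, 2), (3, 1), (2, 3), only the last one is dominated (by (1, 3)) and is therefore removed by `nondominated`.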

Remark 2.2 (1) Note that N ⊆ N_w, i.e. every nondominated point of (MOP) is in particular weakly nondominated.
(2) Note that even if we assume all objective and constraint functions of (MOP) to be convex (or linear), the nondominated set of (MOP) can be disconnected and nonconvex due to integrality constraints (cf., e.g., [16, Figure 1]).
The goal of multiobjective optimization is to determine the nondominated set N of a given (MOP). As the set N can be infinite or of a very complicated structure, it can be impossible to compute it exhaustively using numerical methods. Therefore, numerical methods for multiobjective optimization mostly focus on computing a finite approximation A of N pursuing the following three goals (cf. [49]):
• Coverage: all parts of the nondominated set N should be represented in the approximation A.
• Uniformity: the points of the approximation A should be distributed uniformly along the nondominated set N.
• Cardinality: the set A should contain an appropriate number of points, naturally depending on the number and extent of the cost functions.
Another approach to numerically solving an (MOP) is, instead of computing a sole approximation, to compute a coverage of the nondominated set N as presented in, e.g., [23][24][25][26]. Besides the introduction of enclosures of the nondominated set, the authors present a branch-and-bound framework for computing such an enclosure of the nondominated set with a prescribed quality. As we are aiming at the same goal in this paper, we present the enclosure-related notions in more detail in Sect. 3. However, speaking of enclosures of a certain quality, we face the issue that numerical methods mostly converge only in the limit, i.e. after possibly infinitely many iterations. In general, this is overcome by using termination criteria together with specific tolerances: one terminates a method once it has computed a solution satisfying an a-priori chosen criterion up to a specified tolerance. In single objective optimization, one way to use such tolerances is embodied in branch-and-bound methods, which iteratively compute lower and upper bounds of the optimal value while checking whether the gap between the two becomes smaller than the prescribed tolerance. In particular, these methods terminate with so-called ε-optimal solutions. This concept can also be applied to nondominance in multiobjective optimization.

Definition 2.3
Let ε > 0. A point ȳ ∈ f(S) is called a (weakly) ε-nondominated point of (MOP) if there exists no y ∈ f(S) such that y + εe (strictly) dominates ȳ, where e ∈ R^k denotes the all-one vector. Similarly, a solution x̄ ∈ S is called (weakly) ε-efficient for (MOP) if there is no x ∈ S such that f(x) + εe (strictly) dominates f(x̄). The set of ε-nondominated points is denoted by N_ε.

Remark 2.4
Depending on the considered problem (MOP), one could also use another vector instead of e. For instance, this can be useful if the attained magnitudes of the objective functions differ greatly. For an example see the explanations of the numerical examples in Sects. 3 and 6.
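A corresponding ε-nondominance test (Definition 2.3, with the vector e made configurable as suggested in Remark 2.4) can be sketched as follows; the function name is our own illustration:

```python
def eps_nondominated(ybar, image_points, eps, e=None):
    """ybar is eps-nondominated if there is no y among the image points such
    that y + eps*e dominates ybar (Definition 2.3)."""
    k = len(ybar)
    e = e or [1.0] * k  # all-one vector by default; Remark 2.4 allows other choices
    for y in image_points:
        shifted = [yi + eps * ei for yi, ei in zip(y, e)]
        # shifted dominates ybar: shifted <= ybar componentwise and shifted != ybar
        if all(a <= b for a, b in zip(shifted, ybar)) and shifted != list(ybar):
            return False
    return True
```

For example, with image points (0, 0) and (1, −1) and ε = 0.5, the point (2, 2) fails the test (it is dominated even after shifting by εe), whereas (0.2, 0.2) passes.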

Enclosures and local bounds
In [23,24,26] the authors present methods for solving problems like (continuous or convex versions of) (MOP). The aim of these methods is to find an enclosure of the nondominated set N . Following the sandwiching idea of single objective branch-and-bound methods, an extension to the multiobjective setting is needed.
Definition 3.1 Let L, U ⊆ R^k be finite sets with N ⊆ L + R^k_+ and N ⊆ U − R^k_+. Then L is called a lower bound set, U is called an upper bound set, and the set E := E(L, U) defined as

E(L, U) := (L + R^k_+) ∩ (U − R^k_+) = ∪_{ℓ ∈ L} ∪_{u ∈ U, ℓ ≤ u} [ℓ, u]

is called enclosure of the nondominated set N of (MOP).
For transferring the gap termination criterion to the multiobjective setting, we need some sort of quality measure for the set E(L, U). In [23][24][25][26] the authors use the so-called width w(E), which is defined as the optimal value of the problem

w(E) := max { s(ℓ, u) | ℓ ∈ L, u ∈ U, ℓ ≤ u },

where s(ℓ, u) := min{u_i − ℓ_i | i ∈ [k]} represents the shortest edge length of a given box [ℓ, u]. The justification for the choice of this width measure is given by the following lemma.
Lemma 3.2 Let E = E(L, U) be an enclosure of the nondominated set N of (MOP) with w(E) < ε for some ε > 0. Then E ∩ f(S) ⊆ N_ε holds, i.e. for any feasible point x ∈ S with f(x) ∈ E we know that f(x) is an ε-nondominated point of (MOP).
Due to Lemma 3.2, finding lower and upper bound sets L, U and thus an enclosure E with w(E) < ε becomes important. In fact, computing such an enclosure is the aim of the methods from [23,24], whereas in the branch-and-bound based method from [26] the authors use the width measure in the preimage space to declare a box sufficiently branched. As we propose a criterion space method, we focus on computing an enclosure E such that w(E) < ε. In order to do so, we make use (as done in [23][24][25]) of the concept of Local Lower and Local Upper Bounds (LLBs and LUBs, respectively) which is an extension of the work in [15,34] and also [21].
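For finite sets L and U, the width w(E) can be evaluated directly from its definition as the largest shortest edge over all boxes [ℓ, u] with ℓ ≤ u; a minimal sketch (the function names are ours):

```python
def shortest_edge(l, u):
    """s(l, u) = min_i (u_i - l_i), the shortest edge length of the box [l, u]."""
    return min(ui - li for li, ui in zip(l, u))

def width(L, U):
    """w(E) for E = E(L, U): maximize s(l, u) over all pairs l in L, u in U
    with l <= u componentwise; 0.0 if no such pair (empty enclosure)."""
    candidates = [shortest_edge(l, u) for l in L for u in U
                  if all(li <= ui for li, ui in zip(l, u))]
    return max(candidates) if candidates else 0.0
```

Checking `width(L, U) < eps` is then exactly the termination test suggested by Lemma 3.2.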
In [15] the authors call a set Y ⊆ R k stable if there are no elements in Y dominating each other, i.e. for any two distinct y 1 , y 2 ∈ Y there are i, j ∈ [k] such that y 1 i < y 2 i and y 2 j < y 1 j . One can immediately see that for any (MOP) the corresponding nondominated set N is stable. We start with the definition of local upper bounds as introduced in [34].

Definition 3.3
Let N ⊆ f(S) be a finite and stable set. Then the lower search region for N is

s(N) := {z ∈ int(B) | y ≰ z for all y ∈ N},

and the lower search zone for some u ∈ R^k is given by

c(u) := {z ∈ int(B) | z < u}.

A finite and stable set U(N) ⊆ R^k with s(N) = ∪_{u ∈ U(N)} c(u) is called a local upper bound set w.r.t. N. Each point u ∈ U(N) is called a local upper bound (LUB).
In [23,24] the authors extended the concept of LUBs to so-called local lower bounds: for a finite and stable set N ⊆ f(S), the upper search region for N is s̄(N) := {z ∈ int(B) | z ≰ y for all y ∈ N}, and the upper search zone for some ℓ ∈ R^k is given by c̄(ℓ) := {z ∈ int(B) | ℓ < z}. A finite and stable set L(N) ⊆ R^k with s̄(N) = ∪_{ℓ ∈ L(N)} c̄(ℓ) is called a local lower bound set w.r.t. N. Each point ℓ ∈ L(N) is called a local lower bound (LLB).
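To build intuition for these definitions, the following brute-force sketch constructs the local upper bound set for a small stable set directly from the definition. It relies on the fact that every local upper bound takes each coordinate either from a point of N or from the upper corner of B; the function name and the enumeration strategy are our own illustration (only viable for tiny instances), not one of the referenced algorithms.

```python
from itertools import product

def strictly_less(a, b):
    return all(x < y for x, y in zip(a, b))

def local_upper_bounds(N, z_upper):
    """Brute-force local upper bound set U(N) for a finite stable set N inside
    a box with upper corner z_upper: enumerate candidate points whose
    coordinates come from N or z_upper, keep those whose lower search zone
    {z : z < u} contains no point of N, then keep the componentwise maximal
    ones."""
    k = len(z_upper)
    coords = [sorted({y[i] for y in N} | {z_upper[i]}) for i in range(k)]
    feasible = [u for u in product(*coords)
                if not any(strictly_less(y, u) for y in N)]
    return sorted(u for u in feasible
                  if not any(u != v and all(a <= b for a, b in zip(u, v))
                             for v in feasible))
```

For N = {(2, 8), (5, 5), (8, 2)} and upper corner (10, 10), this yields the four local upper bounds (2, 10), (5, 8), (8, 5), (10, 2), illustrating the uniqueness of U(N) for a given stable set.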
In [26] the authors show that given a stable set N, the corresponding local upper bound set is uniquely determined. In the same manner, one can show that also the corresponding local lower bound set is unique. The following result from [23] relates the concept of local lower/upper bounds to lower/upper bounds as introduced in Definition 3.1.

Lemma 3.5 Let N ⊆ f(S) be a finite and stable set. Then L(N) is a lower bound set and U(N) is an upper bound set in the sense of Definition 3.1.

As the local lower and upper bound sets depend on the stable set N, we have to update both if we add a new point y to the set N. These procedures come from [34, Algorithm 3] and [23, Algorithm 2]. In Algorithm 1 we provide the scheme for updating a local upper bound set. Similar to [34] we use the following notation: for y ∈ R^k, c ∈ R and i ∈ [k] we define y_{−i} := (y_1, . . . , y_{i−1}, y_{i+1}, . . . , y_k) and (c, y_{−i}) := (y_1, . . . , y_{i−1}, c, y_{i+1}, . . . , y_k).

Algorithm 1 Updating a Local Upper Bound Set
Require: Local upper bound set U(N) and update point y ∈ f(S)
1: A ← {u ∈ U(N) | y < u}
2: P ← ∅
3: for u ∈ A do
4:   for i ∈ [k] do
5:     P ← P ∪ {(y_i, u_{−i})}
6:   end for
7: end for
8: U(N ∪ {y}) ← (U(N) \ A) ∪ {u' ∈ P | there is no ū ∈ ((U(N) \ A) ∪ P) \ {u'} with u' ≤ ū}

In order to obtain an algorithm for updating a local lower bound set L(N) w.r.t. an update point y ∈ int(B), one can modify Algorithm 1 in the following way: we replace U(N) by L(N) and reverse all inequality signs, i.e. A ← {ℓ ∈ L(N) | ℓ < y}, the candidates become (y_i, ℓ_{−i}), and only the componentwise minimal instead of maximal candidates are kept. Furthermore, by analyzing the update procedure of U(N) w.r.t. a point y ∈ f(S), one can relate the local upper bounds from U(N) to the ones from U(N ∪ {y}) in the following way: for a local upper bound u ∈ U(N ∪ {y}) we have either u ∈ U(N) or u = (y_i, u'_{−i}) for some i ∈ [k] and u' ∈ U(N). In the latter case, we call u' the parent of u. Otherwise, u is its own parent. Similarly, we define the parents of a given local lower bound ℓ. Furthermore, the analogue of Lemma 3.6 holds also for any local lower bound ℓ ∈ L(N ∪ {y}).
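The candidate-based update behind Algorithm 1 can be sketched in a few lines of Python (our own condensed reading of the scheme, not a verbatim transcription of [34, Algorithm 3]): local upper bounds u with y < u are removed, the candidates (y_i, u_{−i}) are generated, and dominated candidates are discarded.

```python
def update_lubs(U, y):
    """Update a local upper bound set U (list of tuples) when a new image
    point y enters the stable set N: replace every u with y < u by the
    candidates (y_i, u_{-i}) and keep only componentwise maximal points."""
    k = len(y)
    A = [u for u in U if all(yi < ui for yi, ui in zip(y, u))]
    kept = [u for u in U if u not in A]
    cands = {tuple(y[i] if j == i else u[j] for j in range(k))
             for u in A for i in range(k)}
    pool = kept + list(cands)
    new = [c for c in cands
           if not any(c != o and all(a <= b for a, b in zip(c, o))
                      for o in pool)]
    return sorted(kept + new)
```

For example, updating U = {(2, 10), (5, 8), (8, 5), (10, 2)} with y = (4, 4) removes (5, 8) and (8, 5) and introduces (4, 8) and (8, 4), which matches the parent relation described above.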
In [15,34] the authors present a general scheme for computing an approximation of the nondominated set of a multiobjective optimization problem (cf., e.g., [15,Algorithm 2]). In [23] the authors extend this scheme for computing an enclosure E of N which, in fact, consists of boxes [ , u], where is a local lower and u a local upper bound, and satisfies w(E) < ε for given ε > 0. Both methods make use of the search zones determined by a given local upper bound u.
For this paper, we use a slightly modified approach of these as a basic scheme which is written in Algorithm 2.
In Algorithm 2, we loop through the set U_loop, i.e. the set of local upper bounds at the beginning of the iteration. If the current local upper bound u is part of a not sufficiently small box (see Step 5), we try to improve this local upper bound, i.e. find a nondominated point in the search region determined by u. In order to do so, we solve the weighted-sum problem

min α^⊤ f(x)  s.t.  x ∈ S and f(x) < u − δe,   (WSP(α; u))

Algorithm 2 General scheme for computing an enclosure of the nondominated set relying on a scalarization technique.
Require: box B = [z^ℓ, z^u] with f(S) ⊆ int(B), termination tolerance ε > 0 and off-set factor δ > 0
1: Initialize approximation N = ∅, set of local lower bounds L = {z^ℓ} and set of local upper bounds U = {z^u}
2: while w(E(L, U)) ≥ ε do
3:   U_loop ← U
4:   for u ∈ U_loop do
5:     if there exists ℓ ∈ L with ℓ ≤ u and s(ℓ, u) ≥ ε then
6:       if there exists y ∈ N with y < u − δe then
7:         Update U w.r.t. y using Algorithm 1
8:         Update L w.r.t. y using the modified Algorithm 1
9:         N ← N ∪ {y}
10:      else
11:        Update L w.r.t. u − δe using the modified Algorithm 1
12:      end if
13:    end if
14:  end for
15: end while
16: return E(L, U)

for some weight vector α ∈ int(R^k_+). Note that if the weight vector α has only strictly positive entries, we know that any global solution x̄ of (WSP(α; u)) is efficient for (MOP) (cf. [4, Lemma 1.5.2], [20, p. 214f]), i.e. ȳ = f(x̄) ∈ N and ȳ < u − δe. Furthermore, we know that there exists y ∈ N with y < u − δe if and only if (WSP(α; u)) has a solution. In sum, this means that we can decide whether to enter and execute the if-statement in Step 6 after solving (WSP(α; u)) for a strictly positive weight vector α. This weight vector could always be chosen the same, e.g., α = (1, . . . , 1)^⊤/k. But one could also compute it individually for any u, as done for the numerical examples presented later in this work. For example, one could compute the weight vector α for the problem (WSP(α; u)) using a computation technique proposed in [48]. Given the current local upper bound of interest u ∈ U(N), where N is the current approximation of N, for any i ∈ [k] let y^i ∈ N be a defining point of the i-th component of u, i.e. for any i ∈ [k] we have y^i_i = u_i and y^i_{−i} < u_{−i}. We then solve the system

(y^1, . . . , y^k)^⊤ α = e

and set α := |α|/‖α‖_2. Note that α is the normal vector of the hyperplane determined by the points y^1, . . . , y^k. Using (WSP(α; u)), Algorithm 2 can be seen in the context of the so-called Adaptive Weighted Sum Method as presented in [31,32].
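One plausible realization of the described weight computation, assuming the linear system solved is Yα = e with the defining points y^1, …, y^k as the rows of Y (this concrete form of the system, like the function name, is our assumption; it requires Y to be nonsingular, i.e. the hyperplane through the defining points must not pass through the origin):

```python
import numpy as np

def weighted_sum_weights(defining_points):
    """Weight vector alpha for (WSP(alpha; u)): solve Y alpha = e with the
    defining points as rows of Y, so that alpha is normal to the hyperplane
    {z : alpha^T z = 1} through them; componentwise absolute values keep the
    weights strictly positive, then normalize in the Euclidean norm."""
    Y = np.asarray(defining_points, dtype=float)
    alpha = np.linalg.solve(Y, np.ones(len(Y)))
    alpha = np.abs(alpha)
    return alpha / np.linalg.norm(alpha)
```

For the defining points (2, 8) and (8, 2), the hyperplane normal is proportional to (1, 1), so the sketch returns (1/√2, 1/√2).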

Remark 3.7
For a precise definition of defining points we refer to [34]. Furthermore, in [34] the authors present methods for updating local upper bound sets together with the sets of defining points, i.e. an algorithm doing the same as Algorithm 1 but also updating the corresponding defining points (see [34, Algorithm 5]). In fact, [34, Algorithm 5] is used in all implementations for the present paper instead of Algorithm 1. We refrain from presenting [34, Algorithm 5] in detail to keep things concise.
If (WSP(α; u)) is feasible, it is solved to optimality and its solution ȳ ∈ N is used to update the local upper and lower bound sets. Afterward, the method proceeds to the next iteration. If otherwise (WSP(α; u)) is infeasible, we know that there is no nondominated point y of (MOP) satisfying y < u − δe and therefore in particular none satisfying y < u − εe. Therefore, we can use u − δe to update the set of local lower bounds (see Step 10). An exemplary situation during Algorithm 2 is depicted in Fig. 1, where û ∈ U_loop denotes the current local upper bound of interest. The green boxes represent the current enclosure based on the set of local upper bounds U and local lower bounds L, which depend on the current approximation N of the nondominated set N. Note that the enclosure simultaneously functions as the leftover search zone, i.e. the area where nondominated points can still lie. The dashed red line mimics the restrictions coming from (WSP(α; û)), i.e. we are only looking for nondominated points in the lower left quadrant w.r.t. û − δe. Clearly, if there is no part of the nondominated set in this area, we can update the lower bound set w.r.t. û − δe and declare the search region determined by û as sufficiently explored. Furthermore, in that picture one can imagine the impact of enlarging δ, as the distance between the new potentially nondominated point and the old ones increases as δ gets closer to ε.

Remark 3.8 We briefly describe the method from [23] as it is closely related to ours from Algorithm 2.
The method from [23] iterates through the elements of the list of local lower bounds L in an outer loop and through the elements of the list of local upper bounds U in an inner loop. Given ℓ̂ ∈ L, it iteratively chooses û ∈ U satisfying ℓ̂ ≤ û and s(ℓ̂, û) > ε. After fixing a pair (ℓ̂, û), it solves the following optimization problem, where ℓ = ℓ̂ and u = û are set,

min t  s.t.  f(x) ≤ ℓ + t(u − ℓ),  x ∈ S,  t ∈ R.   (PSP(ℓ, u))

Problem (PSP(ℓ, u)) can be interpreted as a Pascoletti-Serafini problem (cf. [46]) with reference point ℓ̂ and direction û − ℓ̂. Subsequently, it solves an optimization problem that ensures that one obtains a nondominated point ȳ. Note that for an optimal solution (x̄, t̄) of (PSP(ℓ, u)) the point x̄ ∈ S is only guaranteed to be weakly efficient (cf. [46]). If ȳ satisfies some specific criterion (cf. [23, Section 4.3]), it is used to update the sets of local lower and upper bounds. If otherwise ȳ does not satisfy this criterion, the point ℓ̂ + t̄(û − ℓ̂) is used to update the local lower bound set, where t̄ ≥ 0.5 is computed in a way that strict nondominance of ℓ̂ + t̄(û − ℓ̂) w.r.t. N is ensured. We decided to avoid solving problems like (PSP(ℓ, u)) since preliminary numerical experiments indicated that solving (PSP(ℓ, u)) is computationally not as efficient as solving (WSP(α; u)) for our test instances. Furthermore, by doing so, one avoids the procedure for ensuring strict nondominance of the update points. After running through both loops, it is guaranteed that any new box [ℓ_new, u_new] evolving from the above-described updates satisfies (3).
We proceed with proving correctness and finiteness of Algorithm 2.

Lemma 3.9 Let U be the local upper bound set at some arbitrary point in Algorithm 2. Then U is an upper bound set in the sense of Definition 3.1.
Proof We initialize the set U = {z^u}. During Algorithm 2 the set U is updated only in Step 7. The update procedure takes place w.r.t. a point y ∈ N and therefore particularly y ∈ f(S). Thus, by correctness of Algorithm 1 we obtain that U is a local upper bound set w.r.t. some set N_1 ⊆ f(S). The claim follows by Lemma 3.5.

Lemma 3.10 Let L be the local lower bound set at some arbitrary point in Algorithm 2. Then L is a lower bound set in the sense of Definition 3.1.

Proof We initialize the set L = {z^ℓ}. During Algorithm 2 the set L is updated in Step 8 or Step 10. If updated in Step 8, L is updated w.r.t. a point y ∈ N and therefore in particular y ∉ f(S) + int(R^k_+). If otherwise updated in Step 10, it is updated w.r.t. y = û − δe for some local upper bound û ∈ U. Since we only enter Step 10 if there exists no ȳ ∈ N satisfying ȳ ≤ û − δe, we particularly know that y = û − δe ∉ f(S) + int(R^k_+). Hence, by correctness of the modified version of Algorithm 1 for local lower bounds we obtain that L is a local lower bound set w.r.t. some set N_2 with N_2 ∩ (f(S) + int(R^k_+)) = ∅. The claim follows by Lemma 3.5.
The above results show that the sets L and U are lower and upper bound sets in the sense of Definition 3.1 at any time of Algorithm 2. To show that also the output sets are sets of that type, we have to ensure that the sets N_1 and N_2 are finite. If we show that Algorithm 2 is itself finite, i.e. terminates after a finite number of steps, the finiteness of N_1 and N_2 follows immediately. In order to show finiteness of Algorithm 2 we start with a result about the decrease of certain edge lengths during one iteration of the method. The following is essentially the same as the corresponding results in [23]. However, as some adaptions have to be made, we present the adapted proofs in detail.

Theorem 3.11
Let ε > δ > 0 and z^ℓ, z^u ∈ R^k be the input parameters of Algorithm 2. Moreover, let L_start and U_start be the local lower and upper bound sets at the beginning of some iteration in Algorithm 2, i.e. at the beginning of the while-loop of some iteration. Accordingly denote by L_end, U_end the sets at the end of this iteration. Then for any ℓ^e ∈ L_end and any u^e ∈ U_end with ℓ^e ≤ u^e there exist ℓ^s ∈ L_start and u^s ∈ U_start such that the following hold:
(1) ℓ^s ≤ ℓ^e ≤ u^e ≤ u^s, i.e. the width does not increase during one iteration.
(2) There exists an index j ∈ [k] such that the j-th edge length of [ℓ^e, u^e] is strictly smaller than the j-th edge length of [ℓ^s, u^s].

Proof Let ℓ^e ∈ L_end and u^e ∈ U_end with ℓ^e ≤ u^e. We denote by P(ℓ^e), P(u^e) ⊆ R^k the sets containing the parent history of ℓ^e and u^e in the current iteration, i.e. their parents, their parents' parents and so on until the ancestors of ℓ^e and u^e belonging to L_start and U_start. By Lemma 3.6 we have that

|P(ℓ^e) ∩ L| = 1 and |P(u^e) ∩ U| = 1   (5)

at any point in the current iteration, i.e. for any assignment of L and U during the while-loop, and therefore in particular we obtain ℓ^s ∈ P(ℓ^e) ∩ L_start and u^s ∈ P(u^e) ∩ U_start. By the definition of parents and the corresponding update procedures given in Algorithm 1 and its adaption for local lower bounds, we know that ℓ^p ≤ ℓ^c and u^c ≤ u^p, where ℓ^c and u^c are the children of ℓ^p and u^p, respectively. Hence, we obtain that ℓ^s ≤ ℓ^e ≤ u^e ≤ u^s, and therefore (1) is satisfied. We now show that also (2) holds. We assume for a contradiction that there exist ℓ^e ∈ L_end and u^e ∈ U_end with ℓ^e ≤ u^e such that for any ℓ ∈ L_start and u ∈ U_start with ℓ ≤ ℓ^e ≤ u^e ≤ u and any index i ∈ [k] inequality (7) holds. In particular, (7) holds for the ancestors of ℓ^e and u^e in L_start and U_start, namely ℓ^s and u^s. We consider now the point in Algorithm 2 where u^s is chosen in the for-loop. Note that u^s might not be the first local upper bound from U_loop considered in the for-loop and therefore it might be the case that u^s ∉ U_current, where U_current is the current assignment of U. However, since for any i ∈ [k] we have that (u^e − ℓ^e)_i ≥ ε, we know that for any ℓ ∈ P(ℓ^e) ∩ L_current we have that (u^s − ℓ)_i ≥ ε for any i ∈ [k], where L_current denotes the current assignment of L. Hence, there exists ℓ ∈ L_current with ℓ ≤ u^s and s(ℓ, u^s) ≥ ε, and we therefore enter the if-statement in Step 5. Similarly, we fix u ∈ P(u^e) ∩ U_current. Note that ℓ^s ≤ ℓ ≤ ℓ^e and u^e ≤ u ≤ u^s.
We denote by L updated and U updated the assignments of L and U after the current iteration of the for-loop is executed. We have to distinguish two main cases, namely we enter the if-statement in Step 6 (case A) or we do not (case B).
(A) In that case we enter the if-statement in Step 6, i.e. there exists y ∈ N with y < u^s − δe. We have to distinguish two cases.
Case A.1: y < u. Then u would be removed during the update of U_current using Algorithm 1, i.e. u ∉ U_updated. Now, we have the candidates (y_i, u_{−i}), i ∈ [k], from which at least one belongs to U_updated by (5). Say (y_j, u_{−j}) ∈ U_updated. Using (6), we compute a contradiction to (7).
Case A.2: y ≮ u. Then there exists j ∈ [k] with u_j ≤ y_j. Again, using (6), we compute a contradiction to (7).
(B) Assume now that we do not enter the if-statement in Step 6, i.e. there exists no nondominated point strictly dominating u^s − δe =: y. We have to distinguish two cases.
Case B.1: ℓ < y. Then ℓ would be removed during the update of L_current using Algorithm 1 for local lower bounds, i.e. ℓ ∉ L_updated. Now, we have the candidates (y_i, ℓ_{−i}), i ∈ [k], from which at least one belongs to L_updated by (5). Say (y_j, ℓ_{−j}) ∈ L_updated. Using (6), we compute a contradiction to (7).
Case B.2: ℓ ≮ y. Then there exists j ∈ [k] with y_j ≤ ℓ_j and, analogously to Case A.2, using (6) we obtain a contradiction to (7).
Obtaining a contradiction in all possible cases shows that our assumption (7) cannot be true and therefore statement (2) is also true, which completes the proof.
Knowing that in every iteration of Algorithm 2 for any pair (ℓ^e, u^e) we obtain a decrease of the edge length for at least one edge j ∈ [k], we are able to prove that Algorithm 2 terminates after finitely many iterations.

Theorem 3.12 Let ε > δ > 0 and z^ℓ, z^u ∈ R^k be the input parameters of Algorithm 2, and define the iteration bound κ in terms of ε, δ and the initial box [z^ℓ, z^u]. Then the number of iterations of Algorithm 2, i.e. the number of iterations of the while-loop, is bounded by max{1, κ}. Hence, Algorithm 2 is finite.
Proof If κ ≤ 1, the claim is immediate. Therefore, let κ > 1. We write ℓ^1 = z^ℓ and u^1 = z^u and for any l ≥ 1 let L^l and U^l be the assignments of L and U at the beginning of the l-th execution of the while-loop.
By Theorem 3.11, we know that for any l ≥ 2 and any ℓ^l ∈ L^l and u^l ∈ U^l there exist ℓ^{l−1} ∈ L^{l−1} and u^{l−1} ∈ U^{l−1} with ℓ^{l−1} ≤ ℓ^l ≤ u^l ≤ u^{l−1} as well as an index i_l ∈ [k] such that (9) holds. Suppose now that Algorithm 2 has more than κ iterations. Then there exist ℓ^κ ∈ L^κ and u^κ ∈ U^κ with ℓ^κ ≤ u^κ and s(ℓ^κ, u^κ) ≥ ε. Iterative use of (9) yields that s(ℓ^κ, u^κ) < ε, a contradiction. Hence, we have that w(E^κ) < ε and Algorithm 2 terminates.

Corollary 3.13
Let ε > δ > 0 and z^ℓ, z^u ∈ R^k be the input parameters of Algorithm 2. Then after finitely many iterations of the while-loop, an enclosure E of the nondominated set N satisfying w(E) < ε is returned.
Proof Theorem 3.12 tells us that Algorithm 2 terminates after at most κ calls of the while-loop, i.e. the output set E^κ satisfies w(E^κ) < ε. By Lemma 3.9 and Lemma 3.10 we know that the set L, resp. U, is a lower, resp. upper, bound set in the sense of Definition 3.1. In particular, this holds for L^κ and U^κ. Thus, E^κ is an enclosure of the nondominated set N satisfying w(E^κ) < ε.
We conclude this section with an example together with the numerical results. Note that for the numerical realization, we did not implement Algorithm 2 exactly, but a modification in which we do not iterate through all local upper bounds in every call of the while-loop. Instead, at the beginning of each while-loop we determine a local upper bound û ∈ U such that there exists ℓ ∈ L with ℓ ≤ û and s(ℓ, û) = w(E) and then perform the loop for û only. By doing so, we avoid performing computations for local upper bounds u ∈ U_loop which do not belong to the current U anymore. However, finiteness of this procedure is not guaranteed by Theorem 3.12 anymore. But if it terminates after finitely many steps, then we still have w(E) < ε. Furthermore, we use a normalized shortest edge computation, i.e. instead of s(ℓ, u) = min_{i∈[k]} {u_i − ℓ_i} we compute it via

s(ℓ, u) = min_{i∈[k]} (u_i − ℓ_i)/(z^u_i − z^ℓ_i).

Consequently, the termination tolerance ε is not an absolute measure anymore, but a relative one. This normalized approach guarantees that differences in the magnitude of the different objective functions are taken into account.
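This modified selection rule can be sketched as follows, assuming the normalization divides each edge length by the corresponding extent of the initial box B (one plausible reading of the description above; the function names are ours):

```python
def normalized_shortest_edge(l, u, z_l, z_u):
    """Relative shortest edge of [l, u]: each edge length is scaled by the
    extent of the initial box B = [z_l, z_u] in that coordinate (assumed
    form of the normalization)."""
    return min((ui - li) / (zui - zli)
               for li, ui, zli, zui in zip(l, u, z_l, z_u))

def select_lub(L, U, z_l, z_u):
    """Pick the local upper bound u-hat belonging to the currently widest
    box, i.e. a pair realizing w(E) in the normalized sense."""
    best = None
    for u in U:
        for l in L:
            if all(li <= ui for li, ui in zip(l, u)):
                s = normalized_shortest_edge(l, u, z_l, z_u)
                if best is None or s > best[0]:
                    best = (s, u)
    return best  # (normalized width, u-hat)
```

Processing only this single û per while-loop avoids wasted work on local upper bounds that have already been removed, at the price of losing the a-priori iteration bound of Theorem 3.12.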
Example 3.14 We consider the optimization problem (Circles). We consider two different instances of (Circles):
• The first one is determined by n = 2, r = 1, m = 3 as well as m^1 = (3, 0)^⊤, m^2 = (2, 1)^⊤ and m^3 = (0, 3)^⊤. The result of Algorithm 2 with relative tolerance ε = 0.0125 (the equivalent of an absolute tolerance of ε = 0.05) and off-set factor δ = 0.95ε is depicted in Fig. 2. One can observe that the gap in the nondominated set is realized in the enclosure via the edge with corner at (2, 2)^⊤. The corresponding local upper bound is not dominated by any point in the nondominated set and therefore the method updates the local lower bounds with the point u − δe.
• The second instance is determined by n = 3, r = 1, m = 3 as well as m^1 = (1, 0, 0)^⊤, m^2 = (0, 1, 0)^⊤ and m^3 = (0, 0, 1)^⊤. The result of Algorithm 2 with relative tolerance ε = 0.05 (the equivalent of an absolute tolerance of ε = 0.1) and off-set factor δ = 0.95ε is depicted in Fig. 3. Again, similar to the two-dimensional case, we have a region in the image space with local upper bounds not dominated by any nondominated point of the problem. We observe the same behavior.
All numerical computations in this paper are realized using the Python-package (cf. [41]) of the SCIP optimization suite (cf. [7]) and are carried out on a machine with Intel Core i7-8565U processor and 32GB of RAM. In Fig. 2 (Right) and Fig. 3 the enclosure obtained by the method is depicted. One can observe a significant increase in computational time going from two dimensions to three. In Fig. 2 (Left) the approximation of the nondominated set is depicted.

Piecewise linear relaxations and lower bounding
As we have seen, using Algorithm 2 we are able to compute an enclosure of the nondominated set of a given (MOP). However, as all the weighted-sum problems that have to be solved during Algorithm 2 are in general still MINLP problems, difficulties may arise when tackling, e.g., larger instances (see Sect. 6). One idea to cope with this issue is to bypass the nonlinearity of the problems, e.g., by solving only MILP problems instead.
We have seen before that the idea of computing enclosures instead of a finite approximation of the nondominated set is lifted from the single objective setting to the multiobjective one. In the same spirit, we are going to use an idea coming from single objective optimization in regard to solving MINLP problems. In [13,40,45] several methods are presented for solving MINLP problems, e.g., a (MOP) with k = 1, by iteratively solving adaptively refined convex or linear MIP relaxations of the original problem. Note that there is plenty of literature and ongoing research in the area of efficiently computing linear or convex relaxations (or approximations) of nonlinear problems (cf., e.g., [6,12,13,28,38,44,53]). Although the basic idea of consistently refining the relaxations, and therefore guaranteeing convergence of the sequence of optimal values of the relaxed problems to the optimal value of the original problem, is the same (albeit not the realization), the respective termination criteria of the algorithms from [45] on the one hand and [13,40] on the other hand differ. In the latter, both algorithms terminate if the maximal possible constraint violation of the relaxed problem in comparison to the original problem is below an a-priori given, but arbitrarily small error bound. Loosely speaking, we say that the original MINLP problem is solved to optimality if we have computed an optimal solution of a relaxation that violates the nonlinear constraints only in an acceptable manner. The authors prove that under certain assumptions on the chosen relaxation technique, the proposed methods terminate after a finite number of steps.
In [45] the authors terminate their algorithm by the gap criterion known from classical branch-and-bound methods. The algorithm uses parts of the optimal solutions of the relaxed problems to fix variable values in the original problem, e.g., one can fix all integer variables in the MINLP problem to the corresponding values of the relaxed solution and obtain an NLP problem which, if it is feasible, might be easier to solve and then gives an upper bound on the optimal value. Fixing the integer variables of the relaxed solution (x̄_C, x̄_I) in the MINLP problem yields an NLP problem in the remaining continuous variables. Furthermore, as before, the lower bound on the optimal value is successively improved by refining the relaxations. In every step, the smallest available upper bound and the best lower bound are used to sandwich the optimal value of the original MINLP problem, and the algorithm terminates if the gap between these bounds is small enough. However, it is not obvious that the resulting NLP problems become feasible at any time during the procedure, and therefore there is no guarantee that any upper bound is available for the gap criterion at all. Nevertheless, in practice, it is rather unlikely to produce exclusively infeasible integer assignments with the solutions of the relaxed problems, in particular with increasing accuracy.
In this paper, we want to merge both ideas as the use of upper bounds together with the gap criterion may cause termination even if the maximal possible constraint violation of the relaxation is not below the tolerance and therefore accelerate the computations. In fact, we use the scheme presented in [13] to preserve the theoretically ensured convergence of the method, but also add the upper bounding part as well as the gap termination criterion from [45] to avoid unnecessary computations. Furthermore, the relaxation techniques used in [13] are able to handle general nonlinear functions, whereas the techniques in [45] are based on the so-called McCormick relaxations for bilinear and multilinear terms (cf. [42]) and are therefore only able to handle general polynomial terms. For this paper, we restrict ourselves to the explicit handling of quadratic and bilinear terms as these are the only ones appearing in our application (see Sect. 6). However, the method presented in the remainder of this paper is capable of handling general nonlinearities if an adequate relaxation technique is available. We start by introducing the necessary notions.
a relaxation of (MOP). We denote the nondominated set of (RMOPS) by N_S̃.
Note that, if k = 1, the optimal value f(x̃*) of (RMOPS) is a lower bound on the optimal value f(x*) of (MOP). For brevity, we assume without loss of generality that the objective f in (MOP) is linear. This is also motivated by our energy supply network considered in Sect. 6. Consequently, f needs no further relaxation, and for the remainder of this paper a relaxation (RMOPS) is characterized by its feasible set S̃. Given two different relaxations S̃ and Ŝ, we call the relaxation S̃ finer than Ŝ if S̃ ⊆ Ŝ. Again, if k = 1, for two relaxations S̃ ⊆ Ŝ the optimal value of the relaxation Ŝ is a lower bound on the optimal value of the relaxation S̃. There are plenty of possibilities to obtain relaxations of a given (MOP). One way is to use the concept of piecewise linear under- and overestimators (cf., e.g., [11,13,28]).

Definition 4.2 Let h(x_1, ..., x_n) be a nonlinear real-valued function with compact domain D_h ⊂ R^n appearing in (MOP), e.g., as a constraint function.
• We call a continuous piecewise linear function h^u an overestimator of h if h^u(x) ≥ h(x) for all x ∈ D_h; underestimators h^ℓ are defined analogously with h^ℓ(x) ≤ h(x) for all x ∈ D_h. We call a pair R_h = (h^ℓ, h^u) of a piecewise linear under- and overestimator a piecewise linear relaxation of h.

Thus, given (MOP) with nonlinear terms h_i, i ∈ I, for some finite index set I, together with piecewise linear relaxations R_{h_i}, i ∈ I, and proceeding as above, we obtain a relaxation of (MOP) with feasible set denoted by S_R or simply by R, where R is the collection of all R_{h_i}. For the remainder of this paper we only consider relaxations coming from piecewise linear relaxations of all appearing nonlinear terms of (MOP). One sees immediately that, given two relaxations R and R′, we have R ⊆ R′ if and only if for any nonlinear term h the estimators of R are at least as tight as those of R′ for all x ∈ D_h. Consequently, by improving the piecewise linear under- and overestimators we refine the corresponding relaxation. In accordance with [11], we measure the quality of a given piecewise linear relaxation R_h of a nonlinear term h by
• the overestimation error ε_o^(h,R_h) := max_{x ∈ D_h} (h^u(x) − h(x)),
• the underestimation error ε_u^(h,R_h) := max_{x ∈ D_h} (h(x) − h^ℓ(x)),
• and the overall relaxation error ε_rel^(h,R_h) := max{ε_o^(h,R_h), ε_u^(h,R_h)}.
Clearly, the relaxation error decreases while refining relaxations, i.e. for two relaxations R ⊆ R′ we have ε_rel^R ≤ ε_rel^R′. Naturally, we can extend this concept to measure the quality of a relaxation R of a given (MOP). In fact, the overestimation error ε_o^R of R is defined as the maximum of the overestimation errors of all its piecewise linear relaxations R_{h_i}; similarly for the underestimation error ε_u^R. The overall relaxation error is then given by ε_rel^R := max{ε_o^R, ε_u^R}. As before, if R ⊆ R′ for two relaxations R and R′, we have ε_rel^R ≤ ε_rel^R′. Let us now briefly introduce the relaxation technique of interest for this paper, namely the piecewise linear McCormick relaxations (cf. [42]).
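To make the error notions concrete, the following sketch numerically estimates the overestimation error of a piecewise linear secant interpolant of h(x) = x², assuming equidistant breakpoints; for this convex h the secants overestimate h and h itself lies below them, so the secant gap is the relevant error. The function names are ours:

```python
def secant_overestimator(x, a, b, r):
    # piecewise linear secant interpolation of h(x) = x^2 on r
    # equidistant partitions of [a, b]; overestimates the convex h
    w = (b - a) / r
    i = min(int((x - a) / w), r - 1)      # partition containing x
    x0, x1 = a + i * w, a + (i + 1) * w
    return x0 ** 2 + (x1 ** 2 - x0 ** 2) * (x - x0) / (x1 - x0)

def overestimation_error(a, b, r, samples=10001):
    # numerical estimate of max_x (h^u(x) - h(x)) by grid sampling
    grid = [a + (b - a) * j / (samples - 1) for j in range(samples)]
    return max(secant_overestimator(x, a, b, r) - x * x for x in grid)
```

Doubling the number of partitions divides the sampled error by four, illustrating the refinement behavior described above.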

Remark 4.3
As already mentioned, we put the focus in this paper on the ingredients necessary for our application. As only bilinear and quadratic constraints appear there, we restrict ourselves to the description of exactly this case. However, by decomposing general polynomials into bilinear and quadratic terms, one can apply the method from this work to general polynomially constrained problems. For example, the polynomial x^3 y can be decomposed into x_1 = x^2, x_2 = x_1 x and x_3 = x_2 y, where x_i, i = 1, 2, 3, are additional variables.
We start with the bilinear case, i.e. h(x, y) = xy, with compact domain [x_ℓ, x_u] × [y_ℓ, y_u]. The classical McCormick relaxation consists of the underestimator h^ℓ(x, y) = max{x_ℓ y + y_ℓ x − x_ℓ y_ℓ, x_u y + y_u x − x_u y_u} and the overestimator h^u(x, y) = min{x_u y + y_ℓ x − x_u y_ℓ, x_ℓ y + y_u x − x_ℓ y_u}; partitioning the variable domains and applying these estimators on each partition yields a piecewise linear McCormick relaxation. Although the quadratic case is just a special case of the bilinear case, we give the explicit formulation. Therefore, let h(x) = x^2 with x ∈ [x_ℓ, x_u]. Setting y = x above, the underestimator is given by h^ℓ(x) = max{2x_ℓ x − x_ℓ^2, 2x_u x − x_u^2} and the linear overestimator is given by the secant h^u(x) = (x_ℓ + x_u)x − x_ℓ x_u.
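These estimators can be checked numerically; the following sketch implements the classical McCormick envelopes of a bilinear term and their specialization to x² (function names are ours):

```python
def mccormick_envelope(x, y, xl, xu, yl, yu):
    # McCormick under- and overestimators of the bilinear term x*y
    # on the box [xl, xu] x [yl, yu]
    under = max(xl * y + yl * x - xl * yl,
                xu * y + yu * x - xu * yu)
    over = min(xu * y + yl * x - xu * yl,
               xl * y + yu * x - xl * yu)
    return under, over

def quadratic_envelope(x, xl, xu):
    # specialization y = x: tangent underestimators at the interval
    # endpoints and the secant overestimator
    under = max(2 * xl * x - xl ** 2, 2 * xu * x - xu ** 2)
    over = (xl + xu) * x - xl * xu
    return under, over
```

Sampling (x, y) over the box confirms that x*y always lies between the two estimators, with equality at the box corners.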

Remark 4.4
In [45] the authors provide an adaptive partitioning scheme, i.e. there is no requirement for equidistant intervals. New breakpoints are added in parts of the variable domains close to the value of the previous solution in order to refine the relaxation. The idea is to only partition regions of the variable domain that appear to influence optimality the most. In contrast, using uniformly distributed breakpoints, one runs the risk of refining in uninteresting regions of the variable domain and therefore unnecessarily blowing up the problem. The scheme presented in [45] could also be incorporated into the methods of the present paper. But for simplicity of presentation, and since the equidistant procedure works very well for our application, we restrict ourselves to the equidistant partitioning scheme presented in this paper.
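A minimal sketch contrasting the two partitioning ideas; the helper names are ours, and the bisection rule is our simplification of the adaptive scheme of [45]:

```python
def equidistant_breakpoints(a, b, r):
    # r equidistant partitions of [a, b] -> r + 1 breakpoints
    return [a + i * (b - a) / r for i in range(r + 1)]

def adaptive_refine(breakpoints, x_star):
    # sketch of the adaptive idea of [45]: bisect only the partition
    # containing the previous relaxed solution x_star, leaving the
    # rest of the variable domain untouched
    bp = sorted(breakpoints)
    for i in range(len(bp) - 1):
        if bp[i] <= x_star <= bp[i + 1]:
            bp.insert(i + 1, 0.5 * (bp[i] + bp[i + 1]))
            break
    return bp
```

The adaptive rule adds a single breakpoint per refinement, whereas halving the equidistant partition width doubles the number of breakpoints everywhere.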
After we have introduced the relaxations of interest for the present paper, we now relate relaxations (RMOPS) and lower bounds of the nondominated set N of (MOP). For ease of notation, we subsequently assume that for a given relaxation S̃ we have f(S̃) ⊆ int(B). We obtain the following lemma.

Lemma 4.5 Let f(x̄) ∈ N_S̃ for some x̄ ∈ S̃ and some relaxation S̃ of (MOP). Then f(x̄) is nondominated w.r.t. f(S).
Proof Assume for a contradiction that there is some ȳ ∈ S such that f(ȳ) dominates f(x̄). This contradicts ȳ ∈ S ⊆ S̃ together with the nondominance of f(x̄) w.r.t. (RMOPS).
The above lemma tells us that any stable set Ñ consisting of nondominated points of possibly different relaxations of a given (MOP) is nondominated w.r.t. N. Therefore, by employing Lemma 3.5, we obtain a lower bound set L(Ñ) of the nondominated set N. If we furthermore have any potentially nondominated (and therefore particularly stable) set N ⊆ f(S) available, we obtain an enclosure E(L(Ñ), U(N)) of the nondominated set in the sense of Definition 3.1, again by Lemma 3.5. That is the core idea of the method presented in the next section.
We finalize this section with some aspects concerning the sets N and Ñ. We want to make use of the idea from [45], where the optimal value is sandwiched between lower bounds coming from relaxed problems and upper bounds coming from solutions of the reduced problems. In the multiobjective setting, the lower bound is not a single value anymore, but a stable set consisting of optimal solutions of possibly different relaxations, namely Ñ. In the same spirit, also the upper bound is not a single value but a stable set consisting of potentially nondominated points, namely N. As we want to decrease this set N towards the nondominated set, it is updated w.r.t. nondominance, i.e. if we compute a new potentially nondominated point y, we add it to N if y is nondominated w.r.t. N. Furthermore, we discard any points in N which are dominated by y. This is written in Algorithm 3. By doing so, we ensure that N stays stable and consequently improves towards the nondominated set, as shown in the next lemma. Note that since N ⊆ f(S), the nondominated set is nondominated w.r.t. N, i.e. N actually approximates it from above.
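A sketch of this update of N (cf. Algorithm 3) in list form; the helper names are ours, and a point equal to one already stored is treated as no improvement:

```python
def dominates(y, z):
    # y dominates z: y <= z componentwise and y != z (minimization)
    return all(a <= b for a, b in zip(y, z)) and y != z

def update_potentially_nondominated(N, y):
    # sketch of the update described above: add y only if no point of N
    # dominates (or equals) it; discard all points of N dominated by y
    if any(dominates(z, y) or z == y for z in N):
        return list(N)
    return [z for z in N if not dominates(y, z)] + [y]
```

A nondominated newcomer is appended, a dominated one is rejected, and a dominating one sweeps out everything it dominates, so the returned list is again stable.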

Lemma 4.6
Let N_1 be a stable input set and let y ∈ f(S). Then Algorithm 3 returns a stable set N_2. Furthermore, either N_1 ⊆ N_2 or there exists y′ ∈ N_1 such that y dominates y′.
Proof We have to distinguish two base cases. Firstly, assume there exists no y′ ∈ N_1 with y ≤ y′. Then either y is not nondominated w.r.t. N_1 and we have N_1 = N_2, or y is nondominated w.r.t. N_1 and we have N_1 ⊂ N_2 = N_1 ∪ {y}. In both cases, N_2 is stable.
Secondly, assume that there exist y_i ∈ N_1 with y ≤ y_i, i ∈ [s], for some s ∈ N. Due to the stability of N_1, we have that y is nondominated w.r.t. N_1, and hence N_2 = (N_1 \ {y_1, ..., y_s}) ∪ {y} is stable. If y = y_1 (and thus s = 1), we have N_1 = N_2. If otherwise, y dominates y_1.
Let us now turn to the set Ñ, which is meant to approach N from below, i.e. we only want to use the best relaxed solutions available. In that setting, we call a nondominated point ỹ of a relaxation S̃ better than a nondominated point ŷ of a relaxation Ŝ if ŷ dominates ỹ. Note that if ŷ dominates ỹ, then Ŝ ⊆ S̃ cannot hold for the corresponding relaxations, as otherwise ỹ would be dominated within its own relaxation. In a similar but reversed way as for N, we update the set Ñ as written in Algorithm 4.
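Our reading of this reversed update (cf. Algorithm 4), sketched with helper names of our choosing: a new relaxed point replaces exactly those stored points that dominate it (i.e. that it is better than), and is discarded if it dominates or equals a stored point:

```python
def dominates(y, z):
    # y dominates z: y <= z componentwise and y != z (minimization)
    return all(a <= b for a, b in zip(y, z)) and y != z

def update_best_relaxed(N_tilde, y):
    # hedged sketch of the reversed update: if y dominates (or equals)
    # a stored point, y is the worse lower bound and is discarded;
    # otherwise y is kept and replaces the points that dominate it
    if any(dominates(y, z) or z == y for z in N_tilde):
        return list(N_tilde)
    return [z for z in N_tilde if not dominates(z, y)] + [y]
```

This mirrors the update of N with the roles of dominating and dominated points exchanged, so Ñ moves upward towards the nondominated set.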

General scheme
We have seen at the end of Sect. 3 that Algorithm 2 is able to compute an enclosure as well as an approximation of the nondominated set of a given (MOP). However, if nonlinear constraint functions are present in (MOP), the scalarized problems arising during Algorithm 2 are MINLP problems. At the end of Sect. 3 we have also seen that if the solver used, e.g., SCIP, is capable of handling the occurring nonlinear constraints, one can solve the scalarized single objective problems directly. Nevertheless, if the complexity of (MOP), and therefore that of the resulting MINLP problems, increases, the run time of solvers like SCIP for computing a solution to such an MINLP problem may increase, too. Note that for any computed nondominated point in our approximation, we have to solve at least one such MINLP problem, so even a small increase of computational time per problem may cause a tremendous increase in the run time of the whole procedure.
In Sect. 4 we presented ideas and concepts from single objective optimization of MINLP problems which are meant to reduce complexity and therefore facilitate the computations while solving a scalar MINLP problem. Now one could think of choosing a specific relaxation R and then using one of the existing algorithms for computing a representation (or enclosure) of the nondominated set of the relaxed mixed-integer linear (or convex) problem, see, e.g., [47,51] for the biobjective linear case and [24] for the multiobjective convex case, or even Algorithm 2 or [23]. One could argue that if the relaxation R satisfies some quality criterion, e.g., a small enough relaxation error ε_rel^R, the approximation (or enclosure) of N_R can be considered an approximation (or enclosure) of N, similarly to what is proposed in [13] for the single objective case. Note that convergence then relies solely on the theory of the multiobjective method used.
However, computing such a relaxation and solving the arising problem using an available solution method may be very time-consuming, as the complexity even of the relaxed problems may increase with ongoing refinement, in particular since the number of integer variables increases while tightening the relaxations. One strategy for avoiding this is to use cheap relaxations whenever possible and to refine them only when necessary, e.g., only in specific parts of the image space. This idea of adaptively refining the relaxations while computing an enclosure of the nondominated set of (MOP) is the core of this work. Note that one could additionally incorporate adaptivity in the variable domains into the refinement procedure (see Remark 4.4).
In the following, we present an algorithm similar to Algorithm 2 which makes use of these ideas in order to compute an enclosure of the nondominated set without solving scalarized MINLP problems, but only MILP and NLP problems (see Algorithm 5). Before starting the procedure, we fix a relaxation technique guaranteeing that the relaxation error quality criterion, namely ε_rel^R < ε_rel, is satisfied after a finite number of refinement steps. We introduce the set consisting of all relaxations satisfying this criterion and assume the following for the remainder of the paper.

Assumption 5.1 For any relaxation technique and any initial relaxation, let R_1 ⊋ R_2 ⊋ ... be a chain of strictly decreasing relaxations. Then for any ε_rel > 0 there exists an s ∈ N with ε_rel^{R_s} < ε_rel, i.e. every R_l with l ≥ s satisfies the relaxation error criterion.

Remark 5.2 We should mention that there is a wide range of alternative relaxation techniques in the literature, including the use of convex underestimators (cf. [2,3,37,50]) instead of piecewise linear ones. One could then either solve the resulting convex MINLP problems or combine them with outer approximation techniques (cf. [18,27,35,39,54]). However, here we restrict ourselves to the case of piecewise linear relaxations, since the relaxation error computation and especially convergence in the sense of Assumption 5.1 are straightforward. For the above-mentioned approaches, one has to ensure that both of these requirements are fulfilled.
Furthermore, if R satisfies the relaxation error criterion, we consider any feasible point x̄ ∈ S̃ of the corresponding relaxed problem (RMOPS) based on the feasible set S̃ = S_R as a feasible point of (MOP), i.e. x̄ ∈ S. Consequently, by Lemma 4.5, for any efficient point x̄ ∈ S̃ of (RMOPS) we have f(x̄) ∈ N. This means that for such a relaxation R we consider any nondominated point of (RMOPS) as a nondominated point of (MOP). For ease of notation, we write x̄ ∈ S if x̄ ∈ S_R for some relaxation R satisfying the error criterion for the remainder of this paper. Note that Assumption 5.1 holds for the piecewise linear McCormick relaxations introduced in Sect. 4. In Step 3, we initialize the set consisting of all pairs of a present local upper bound together with the relaxation R which was needed to obtain it. We say that the relaxation R caused the computation of a local upper bound u if u entered the set of local upper bounds after it was updated w.r.t. a point y whose computation relied on R. This is the case either if y is the solution of the relaxed problem corresponding to R and R satisfies the error criterion, or if y is a nondominated point of (redMOP(x̄_I)), where x̄_I was computed using the relaxation R. Furthermore, we initialize the set D_Ñ := {(ỹ, R) | ỹ ∈ Ñ, R caused the computation of ỹ}, consisting of solutions of relaxed problems together with their corresponding relaxation R. Suppose now we are at the beginning of the l-th call of the outer while-loop in Step 4 and we have that w(E) ≥ ε_encl. We fix the set U_loop to be the current assignment of the set of local upper bounds U and start the for-loop in Step 6. In that for-loop, let û ∈ U_loop be such that there exists ℓ ∈ L with ℓ ≤ û and s(ℓ, û) ≥ ε_encl, i.e. the search zone determined by ℓ and û is not yet well enough explored, and we set done = false. We use the indicator done to determine whether we achieved an improvement w.r.t. û, i.e. the inner while-loop ensures that we concentrate on û until we have made some improvement.
We say that we improved û if we entered one of the if-statements in Steps 15 and 19 or the else-statement in Step 29, as in these steps either a potentially nondominated point y with y < û − δe is found or the search region c(û) is declared to be well enough explored.
However, given the current local upper bound û, we have to choose an appropriate relaxation R_current for executing our computations. This is realized in Step 10. We choose a relaxation at least as fine as the relaxation which led to û, i.e. R_current ⊆ Rû. Furthermore, if there exists some relaxed solution ỹ ∈ Ñ with ỹ < û − ε_encl e, we choose a strictly finer relaxation than Rỹ, i.e. we ensure that R_current ⊊ Rỹ. Note that the strictness of the inclusion is not necessary for the convergence of Algorithm 5, since the method also refines the relaxations if the incumbent relaxed solution did not lead to an improvement of the lower bound set. However, for some problems it may be advantageous to refine the relaxations more aggressively instead of solving cheaper problems that do not have a high chance of leading to a significant improvement of the lower bound set. In fact, not forcing the inclusion R_current ⊆ Rỹ to be strict makes it impossible to find a relaxed solution y′ dominating ỹ, and therefore satisfying û − ε_encl e ≤ y′, as can be seen in Fig. 5. The possible negative effect of not forcing strictness can be seen in the comparison of Figs. 7 and 8, where we can observe an increase in run time as well as in the number of problems to be solved. However, refining too aggressively may also be a problem, as it may result in solving harder problems than necessary. From our first observations, it is a good strategy to take the coarsest possible relaxation without losing the strictness of the inclusion, at least with our basic refinement strategy.
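Identifying each relaxation with its number r of equidistant partitions (as done later in this section, where larger r means a finer relaxation), the choice of R_current in Step 10 can be sketched as follows; the function name and the flag for dropping strictness are our assumptions:

```python
def choose_relaxation_degree(r_u_hat, r_y_tilde=None, force_strict=True):
    # Pick the coarsest admissible degree of relaxation:
    #  - at least as fine as the relaxation that produced the current
    #    local upper bound u_hat (degree r_u_hat),
    #  - strictly finer than the relaxation of the best relaxed solution
    #    below u_hat (degree r_y_tilde), if such a solution exists and
    #    strictness is enforced.
    r = r_u_hat
    if r_y_tilde is not None:
        r = max(r, r_y_tilde + 1 if force_strict else r_y_tilde)
    return r
```

Dropping `force_strict` reproduces the cheaper but potentially unproductive choice discussed above, where the new relaxed solution cannot dominate the incumbent one.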
After that, we initialize the relaxation S̃ = S_{R_current}, solve (WSP(α; u)) with u = û for some α ∈ int(R^k_+) and feasible set S̃, and then decide whether we are able to enter the if-statement in Step 12. If (WSP(α; u)) is infeasible, we declare the current search region c(û) as well enough explored by the same arguments as in Algorithm 2, and set done = true. If, otherwise, there exists a solution ỹ of (WSP(α; u)), we check if Ñ is nondominated w.r.t. ỹ, i.e. if ỹ improves the set Ñ. If that is not the case, i.e. if there exists y′ ∈ Ñ with ỹ ≤ y′, we have to restart the inner while-loop with a finer relaxation, as the current one did not lead to any improvement. If, otherwise, Ñ is nondominated w.r.t. ỹ, it is reasonable to move on, as ỹ suggests an improvement of û. Consequently, we update the sets Ñ and L w.r.t. ỹ and save the corresponding relaxation. If now the relaxation is fine enough in the sense of [13], i.e. R_current satisfies the relaxation error criterion, we consider ỹ as a nondominated point of (MOP), update the sets N and U w.r.t. ỹ and set done = true. If, otherwise, the relaxation is not yet fine enough, we try to make use of the idea from [45], i.e. using parts of the relaxed solution to set up the reduced problem (redMOP(x̄_I)). We solve the corresponding (WSP(α; u)) and, if it has a solution, we obtain a potentially nondominated point, i.e. we update the sets N and U and set done = true. If it is infeasible, we have to restart with a finer relaxation, since R_current wrongly suggested that we would find a potentially nondominated point.

Remark 5.3
Note that it is not necessary to solve the (WSP(α; u)) corresponding to (redMOP(x̄_I)) to global optimality. Since we aim to find a feasible point x ∈ S satisfying f(x) < û − δe, and this is already incorporated in the constraints, it suffices to find a feasible point of (WSP(α; u)). Furthermore, even solving to global optimality would not guarantee finding an efficient solution with f(x) ∈ N for (MOP), since we are restricted to a specific integer assignment. However, depending on the structure and complexity of (MOP), investing the time to compute global solutions of the possibly nonconvex (WSP(α; u)) may speed up the algorithm.
We turn now to proving correctness and finiteness of Algorithm 5.

Lemma 5.4 Let Ñ be the set of best relaxed solutions at some arbitrary point in Algorithm 5. Then Ñ ⊆ int(B)\(f(S) + int(R^k_+)) and Ñ is stable.
Proof We initialize the set Ñ = ∅. During Algorithm 5, the set Ñ is updated only in Step 14.
The update takes place w.r.t. a point ỹ ∈ int(B)\(f(S) + int(R^k_+)) by Lemma 4.5. Thus, by correctness of Algorithm 4, we obtain that Ñ ⊆ int(B)\(f(S) + int(R^k_+)) and that Ñ is stable. If otherwise updated in Step 20, it is updated w.r.t. y = û − δe for some local upper bound û ∈ U. Since we only enter Step 20 if there exists no ȳ ∈ N_S̃ satisfying ȳ ≤ û − δe, we particularly know that y = û − δe ∉ f(S) + int(R^k_+). Hence, by correctness of Algorithm 1 adapted to local lower bounds, we obtain that L is a local lower bound set w.r.t. some set N′ ⊆ int(B)\(f(S) + int(R^k_+)) with Ñ ⊆ N′. The claim follows by Lemma 3.5.
After proving that at any point in Algorithm 5 the sets N, Ñ, L and U satisfy the requirements, we proceed by showing termination of Algorithm 5 after finitely many iterations of the outer while-loop. For that purpose, we first prove that the inner while-loop terminates after a finite number of iterations.

Lemma 5.8
Let ε_encl > δ > 0, ε_rel > 0, R_I and z^ℓ, z^u ∈ R^k be the input parameters of Algorithm 5. Moreover, let û ∈ U_loop be the local upper bound chosen in the for-loop at some arbitrary point of Algorithm 5. Assume that there exists ℓ ∈ L with ℓ ≤ û and s(ℓ, û) ≥ ε_encl. Then after finitely many iterations of the inner while-loop we have done = true.
Proof We have to show that after finitely many iterations we either enter one of the if-statements in Steps 15 or 19 or we enter the else-statement in Step 29. For a contradiction, we assume that none of them is entered after finitely many iterations of the inner while-loop. Then every iteration is executed with a relaxation strictly finer than the one before. This is due to the fact that any already executed iteration of the inner while-loop terminated either with Step 23 or Step 27. Thus, we have an infinite chain of relaxations, where R_l denotes the relaxation corresponding to the l-th iteration of the inner while-loop. Now, by Assumption 5.1 there exists s ∈ N such that ε_rel^{R_s} < ε_rel. Hence, in the s-th iteration of the inner while-loop we either enter the if-statement in Step 12 or the else-statement in Step 29. By our assumption, we do not enter the else-statement. By Lemma 4.5 we have that Ñ ⊆ int(B)\(f(S) + int(R^k_+)) and therefore Ñ is nondominated w.r.t. N. Since R_s satisfies the relaxation error criterion, ỹ is considered a nondominated point of (MOP), and therefore we enter the if-statement in Step 13 and subsequently the one in Step 15, a contradiction.
Similar to Algorithm 2 we can prove some decrease in some edge lengths after any iteration of the outer while-loop.

Theorem 5.9
Let ε_encl > δ > 0, ε_rel > 0, R_I and z^ℓ, z^u ∈ R^k be the input parameters of Algorithm 5. Moreover, let L_start and U_start be the local lower and upper bound sets at the beginning of some iteration in Algorithm 5, i.e. at the beginning of the outer while-loop of some iteration. Accordingly, denote by L_end, U_end the sets at the end of this iteration. Then for any ℓ^e ∈ L_end and any u^e ∈ U_end with ℓ^e ≤ u^e there exist ℓ^s ∈ L_start and u^s ∈ U_start such that the following hold: (1) ℓ^s ≤ ℓ^e ≤ u^e ≤ u^s, i.e. the width does not increase during one iteration.
(2) There exists an index j ∈ [k] such that (u^e − ℓ^e)_j < max{(u^s − ℓ^s)_j − δ, ε_encl}.

Proof The proof of part (1) works exactly the same as the proof of Theorem 3.11. We therefore directly show part (2). Assume for a contradiction that there exist ℓ^e ∈ L_end and u^e ∈ U_end such that for any ℓ ∈ L_start and u ∈ U_start with ℓ ≤ ℓ^e ≤ u^e ≤ u and any index i ∈ [k] we have that

(u^e − ℓ^e)_i ≥ max{(u − ℓ)_i − δ, ε_encl}. (10)

In particular, (10) holds for the ancestors of ℓ^e and u^e in L_start and U_start, namely ℓ^s and u^s. We consider now the point in Algorithm 5 where u^s is chosen in the for-loop in Step 6. Recall that we have introduced the sets P(ℓ^e) and P(u^e) in the proof of Theorem 3.11. Note that u^s might not be the first local upper bound from U_loop considered in the for-loop, and therefore it might be the case that u^s ∉ U_current, where U_current is the current assignment of U. However, since for any i ∈ [k] we have (u^e − ℓ^e)_i ≥ ε_encl, we know that for ℓ ∈ P(ℓ^e) ∩ L_current we have (u^s − ℓ)_i ≥ ε_encl for any i ∈ [k], where L_current denotes the current assignment of L. Hence, fixing such an ℓ ∈ P(ℓ^e) ∩ L_current, we have

s(ℓ, u^s) ≥ ε_encl (11)

and we therefore enter the if-statement in Step 7. Similarly, we fix u′ ∈ P(u^e) ∩ U_current. Note that

ℓ^s ≤ ℓ ≤ ℓ^e and u^e ≤ u′ ≤ u^s (12)

for any ℓ ∈ P(ℓ^e) ∩ L and u′ ∈ P(u^e) ∩ U and for any assignment of L and U. We denote by L_updated and U_updated the assignments of L and U at the end of that iteration of the for-loop. As the exit from the inner while-loop can happen in three different ways, we have to distinguish three main cases, namely whether we set done = true in Step 17 (case A), in Step 21 (case B), or in Step 31 (case C). Note that Lemma 5.8 guarantees that after finitely many iterations of the inner while-loop one of the three options is actually chosen.
(A) In that case we entered the if-statement in Step 15, i.e. there exists ỹ ∈ f(S) with ỹ < u^s − δe. We have to distinguish two cases.
Case A.1: ỹ < u′. Then u′ is removed during the update of U_current using Algorithm 1, i.e. u′ ∉ U_updated. We have the candidates obtained from u′ by replacing one component with the corresponding component of ỹ, from which at least one belongs to U_updated by (5). Say such a candidate belongs to U_updated for the index j. Using (12), we obtain a contradiction to (10).
Case A.2: ỹ ≮ u′. Then there exists j ∈ [k] with u′_j ≤ ỹ_j. Again, using (12), we compute (u^e − ℓ^e)_j ≤ (u′ − ℓ^e)_j ≤ ỹ_j − ℓ^s_j < u^s_j − ℓ^s_j − δ, a contradiction to (10).
(B) In that case we entered the if-statement in Step 19, i.e. there exists y ∈ f(S) with y < u^s − δe. Again, we have to distinguish two cases, which both work exactly the same as the ones from (A).
(C) In that case we entered the else-statement in Step 29, i.e. there exists no ỹ ∈ N_S̃ with ỹ < u^s − δe =: y. Therefore, the set L is updated w.r.t. y. We have to distinguish two cases.
Case C.1: ℓ < y. Then ℓ is removed during the update of L_current using Algorithm 1 adapted to local lower bounds, i.e. ℓ ∉ L_updated. We have the candidates obtained from ℓ by replacing one component with the corresponding component of y, from which at least one belongs to L_updated by (5). Say such a candidate belongs to L_updated for the index j. Using (12), we obtain a contradiction to (10).
Case C.2: ℓ ≮ y. Then there exists j ∈ [k] with ℓ_j ≥ y_j = u^s_j − δ, i.e. in particular s(ℓ, u^s) ≤ δ < ε_encl, a contradiction to (11).
Obtaining a contradiction in all possible cases shows that our assumption (10) cannot be true and therefore statement (2) is true, which completes the proof.
Similar as in the case of Algorithm 2, Theorem 5.9 enables us to prove finiteness of Algorithm 5.
Theorem 5.10 Let ε_encl > δ > 0, ε_rel > 0, R_I and z^ℓ, z^u ∈ R^k be the input parameters of Algorithm 5, and let κ be defined analogously to the iteration bound of Theorem 3.12. Then the number of iterations of Algorithm 5, i.e. the number of iterations of the outer while-loop, is bounded by max{1, κ}. Furthermore, Algorithm 5 terminates after finitely many steps.

Proof
The proof for bounding the number of calls of the outer while-loop works exactly the same as the one of Theorem 3.12. Termination after finitely many steps follows by the fact that in every iteration of the outer while-loop, the inner while-loop is only called finitely many times as shown in Lemma 5.8.

Corollary 5.11
Let ε_encl > δ > 0, ε_rel > 0, R_I and z^ℓ, z^u ∈ R^k be the input parameters of Algorithm 5. Then after finitely many iterations of the outer while-loop, an enclosure E of the nondominated set N satisfying w(E) < ε_encl is returned.
Proof Theorem 5.10 tells us that Algorithm 5 terminates after at most max{1, κ} iterations of the outer while-loop, i.e. the output set E_κ satisfies w(E_κ) < ε_encl. By Lemma 5.6 and Lemma 5.7 we know that the set L, resp. U, is a lower, resp. upper, bound set in the sense of Definition 3.1 at any point of Algorithm 5. In particular, this holds for L_κ and U_κ. Thus, E_κ is an enclosure of the nondominated set N satisfying w(E_κ) < ε_encl.

Remark 5.12
Note that one could also use two different off-set factors such that 0 < δ < δ̃ < ε_encl. The idea is that for a relaxed solution ỹ, we expect the eligible solutions y of (redMOP(x̄_I)) to satisfy ỹ + ρe < y for some sufficiently small ρ > 0. If now ỹ satisfies ỹ < u − δe too tightly, e.g., if ỹ + ρe ≮ u − δe, we do not find any point y with the desired property in Step 19 and therefore would refine the relaxation. This is not a problem in general, as we expect ρ → 0 for finer relaxations, i.e. at some point we either do not find any admissible points ỹ anymore and therefore declare the search region c(u) well enough explored, or we find admissible points y. However, this may be very time-consuming, as we might have to refine the relaxations and repeat the computations a few times. Thus, using two different off-set factors δ and δ̃ may help in terms of run time by being more restrictive for ỹ, requiring ỹ < u − δ̃e in Step 12, in comparison to y, where we require y < u − δe in Step 19, with 0 < δ < δ̃.
We conclude this section with the performance of the described method on the problem of Example 3.14. Again, as for Algorithm 2, we did not implement the procedure given in Algorithm 5 exactly, but a slight modification. In fact, we do not iterate through all local upper bounds in every iteration of the outer while-loop, but choose only one local upper bound û where the current width is attained, i.e. û ∈ {u ∈ U | ∃ ℓ ∈ L : s(ℓ, u) = w(E)}. Again, finiteness of that implementation is no longer guaranteed by Theorem 5.10. But if it terminates after finitely many steps, we still have w(E) < ε_encl. Again, we use a relative shortest edge calculation as described in Sect. 3. Furthermore, we use a relative relaxation error calculation. For example, given a quadratic term h(x) = x^2 on the interval x_ℓ ≤ x ≤ x_u, we compute the relative relaxation error in terms of r ∈ N, the number of considered equidistant partitions, by scaling the absolute relaxation error accordingly. Note that this relative relaxation error tends to 0 for r → ∞, i.e. refining relaxations by increasing the number of equidistant partitions satisfies Assumption 5.1. Furthermore, note that the relaxation R only depends on the number of equidistant partitions, i.e. we identify R by r in the following.
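For the secant-based overestimation of x^2 on r equidistant partitions, the maximal gap has the closed form ((x_u − x_ℓ)/(2r))^2, attained at the partition midpoints. Assuming the relative error coincides with this gap on a unit-length domain (our assumption; the exact normalization is not reproduced above), r = 5 partitions reach the tolerance ε_rel = 0.01 quoted below:

```python
def secant_gap(x_l, x_u, r):
    # maximal gap between h(x) = x^2 and its piecewise linear secant
    # overestimator on r equidistant partitions of [x_l, x_u]; the gap
    # is attained at the partition midpoints: ((x_u - x_l) / (2 r))^2
    return ((x_u - x_l) / (2 * r)) ** 2
```

The gap shrinks quadratically in r, which is the mechanism behind the finite refinement guarantee of Assumption 5.1 for this relaxation technique.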
We consider the same instances of Example 3.14 as in Sect. 3. In Figs. 6, 7 and 8, the respective enclosure computed by Algorithm 5 is depicted on the left together with the computational time and the number of WSM problems solved, where we count the solution of (RMOPS) and the corresponding (redMOP(x_I)) as one. On the respective right-hand side, for each of these solved relaxed problems, we count the corresponding degree of relaxation, in our case the number of equidistant partitions.
• The first instance is determined by n = 2, r = 1, m = 3 as well as m_1 = (3, 0)ᵀ, m_2 = (2, 1)ᵀ and m_3 = (0, 3)ᵀ. The enclosure computed by Algorithm 5 with relative tolerance ε_encl = 0.0125 (equivalent to an absolute tolerance of ε_encl = 0.05) and single off-set factor δ = 0.95 ε_encl is depicted in Fig. 6 (Left). In Fig. 6 (Right), the use counters of each relaxation are depicted, i.e. the number of partitions used for the corresponding intervals. One can see that the method uses at most four partitions per interval, but mostly one or two. We used a tolerance for the relative relaxation error of ε_rel = 0.01, which requires at least five partitions per variable, i.e. a degree of relaxation equal to five, to satisfy the relaxation error criterion. We can see that the method does not need to go as far as it is allowed to, since four partitions seem to suffice. Of course, if we increase the tolerance of the relative relaxation error, the number of needed partitions decreases.
• The second instance is determined by n = 3, r = 1, m = 3 as well as m_1 = (1, 0, 0)ᵀ, m_2 = (0, 1, 0)ᵀ and m_3 = (0, 0, 1)ᵀ. The enclosure computed by Algorithm 5 with relative tolerance ε_encl = 0.05 (equivalent to an absolute tolerance of ε_encl = 0.1) and single off-set factor δ = 0.95 ε_encl is depicted in Fig. 7 (Left). In Fig. 7 (Right), the use counters of each relaxation are depicted. Again we used a relative relaxation error tolerance of ε_rel = 0.01, which yields a minimum of five partitions per variable to obtain an error smaller than the tolerance. In particular, this means that if the current number of partitions is greater than five, no further refinement is needed. We can see that the method makes extensive use of that, i.e. in most of the iterations a relaxation satisfying the relaxation error criterion is chosen. In fact, the method decides early in the solution process that coarse relaxations do not suffice to terminate the method.
Together with the fact that the degree of relaxation is inherited along the parent history of local upper bounds, this implies that only a few problems are solved with relaxation degrees 1 and 2. The method then sticks for some time with relaxation degree 4, while it needs relaxation degree 8 for a large number of problems. In Fig. 8 we have the same setting as in Fig. 7, but do not require a strict inclusion in Step 10 of Algorithm 5. One can see the effect: an increase in computational time and more problems to be solved. Furthermore, on the right one can observe that the algorithm tries to stay longer with a coarse relaxation before actually performing the refinement step (see the increase in problems with relaxation degrees 1 and 4).
However, comparing the number of problems to be solved, and especially the computational time, with the results from Sect. 3, one can see that the power of available solvers like SCIP for quadratically constrained problems makes the use of relaxations redundant in some cases, such as the ones considered above. Still, with increasing problem complexity, considering only relaxations instead of the original problem may be advantageous, as can be seen in the next section.

Application to the multiobjective optimization of decentralized energy supply networks
In this section, we present numerical results of the described method on a network optimization problem. Aiming to model a decentralized energy supply network, we obtain an MIQCP problem. The general network structure is a graph where the nodes represent individual consumers and the edges connect the consumer nodes with the so-called source node, at which energy is supplied. The mixed-integer character comes from certain decision options available in the optimization process, e.g., whether a gas pipe is laid at some edge or not. Furthermore, we take stationary models of energy flow into account, namely an equation based on Ohm's law for the electricity flow as well as the Darcy-Weisbach equation for the gas flow. As both of them contain bilinear or quadratic terms, the resulting optimization problem has the mentioned MIQCP structure. As objective functions, we use the overall costs for realizing a given network plan on the one hand and the carbon emissions of that network plan on the other hand. Naturally, a cheap network plan results in high carbon emissions, and low carbon emissions can be obtained by, e.g., investing in energy-efficient house renovation, which results in higher costs. Thus, we have a classical (MOP) with two conflicting objective functions. Details regarding the modeling aspects can be found in [38] and more recently in [19]. For the present paper, we consider three network instances of such decentralized energy supply networks. If, e.g., we set up a single-objective optimization problem with cost minimization as the objective function and put the carbon emissions into the constraints, we obtain the following computational times:
• network 1: 0.57 s
• network 2: 3.35 s
• network 3: > 3 h
using the SCIP solver with the standard settings from the pyscipopt package (cf. [41]). For testing the new method we use a relative width tolerance ε_encl = 0.03 as well as two off-set factors δ̃ = 0.95 ε_encl and δ = 0.8 ε_encl.
In the network models, the present nonlinearities are of the following form:
• For modeling the low-voltage electricity flow we use, for instance,

R^e_{i,j} f^e_{in,i,j} = a^e u_j ū_{i,j},   (13)

where R^e_{i,j} > 0 denotes the resistance of the underlying cable at arc (i, j), a^e > 0 the calorific multiplier of three-phase electric power flows, and f^e_{in,i,j} is a variable representing the electric power flow on arc (i, j) into j. The variable u_i denotes the electrical voltage at node i and the variable ū_{i,j} = u_i − u_j the voltage drop on arc (i, j). Consequently, we have a quadratic term u_i^2 and a bilinear term u_i u_j appearing in (13). For the computation of the corresponding relative relaxation errors, the box constraints 360 ≤ u_i, u_j ≤ 440 are relevant. Thus, if we partition the corresponding intervals into r equidistant intervals, i.e. use the relaxation R_r, we obtain

ε_rel^(R_r) = 80^2 / (4r^2) · 1 / max{u^2 | 360 ≤ u ≤ 440},

and therefore the number of partitions of each interval needed to fall below a given tolerance ε_rel is given by

r = ⌈80 / (2 · 440 · √ε_rel)⌉.

Thus, if we require a relative relaxation error ε_rel = 0.01, we have to partition the corresponding intervals into at least r = 1 partitions, i.e. we do not have to partition at all.
• For modeling the low-pressure gas supply we use a reformulation of the Darcy-Weisbach equation that avoids the sign function, as proposed in [8]. By doing so, we obtain constraints in which R^g_{i,j} > 0 denotes the resistance constant of the underlying gas pipeline on arc (i, j), q̄_{i,j} the gas flow on arc (i, j), p_max > 0 the maximal pressure loss allowed in the network, and y_{i,j} a binary decision variable indicating whether a gas pipe is laid at arc (i, j). The relevant box constraints are −150 ≤ q̄_{i,j} ≤ 150, and partitioning into r intervals, i.e. using relaxation R_r, we obtain

ε_rel^(R_r) = 300^2 / (4r^2) · 1 / max{x^2 | −150 ≤ x ≤ 150},

and therefore the number of partitions of each interval needed to fall below a given tolerance ε_rel is given by

r = ⌈300 / (2 · 150 · √ε_rel)⌉ = ⌈1 / √ε_rel⌉.

Thus, if we require a relative relaxation error ε_rel = 0.01, we have to partition the corresponding intervals into at least r = 10 partitions. Note that even if we only require a relative relaxation error ε_rel = 0.03, we still have to partition into at least r = 6 intervals.
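The partition counts r = 1, r = 10 and r = 6 stated in the text follow from solving the relative relaxation error bound (x_u − x_ℓ)^2 / (4r^2 · max{x^2}) ≤ ε_rel for r, i.e. r = ⌈(x_u − x_ℓ) / (2 √(ε_rel · max{x^2}))⌉. A small helper (name ours, for illustration) reproduces them:

```python
import math

def required_partitions(x_l, x_u, eps_rel):
    """Smallest r with (x_u - x_l)**2 / (4 r**2 * max{x**2}) <= eps_rel,
    i.e. r = ceil((x_u - x_l) / (2 * sqrt(eps_rel * max{x**2})))."""
    h_max = max(x_l ** 2, x_u ** 2)
    return max(1, math.ceil((x_u - x_l) / (2 * math.sqrt(eps_rel * h_max))))

r_voltage = required_partitions(360, 440, 0.01)   # electricity: r = 1
r_gas_1 = required_partitions(-150, 150, 0.01)    # gas flow:    r = 10
r_gas_3 = required_partitions(-150, 150, 0.03)    # gas flow:    r = 6
```

The voltage box sits far from zero, so max{u^2} is large relative to the interval width and no partitioning is needed; the gas-flow box is symmetric around zero, making the relative error much larger for the same width.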
In sum, this yields that, if we use r equidistant partitions for every variable appearing in any nonlinear term of our problem, we fall below a relaxation error tolerance of ε_rel = 0.01 as soon as we use a relaxation R_r with r ≥ 10. Note that if we did not use the adaptive approach given in Algorithm 5, but chose a relaxation with r ≥ 10 and then used a method for solving multiobjective linear mixed-integer problems, we would have to solve a problem with at least 10 · |Edges| additional integer variables, i.e. about 400 in the case of network 3 if we only count the ones for the quadratic terms. Looking at the results for network 3 (see Fig. 9; the results for network 1 and network 2 are similar), we can see that the method uses only the relaxation R_r with r = 1, i.e. the coarsest relaxation possible using the McCormick relaxations. This shows the strength of the proposed method on the considered large network instances.
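For reference, the r = 1 relaxation of a bilinear term such as u_i u_j is the classical McCormick envelope over the variable box. The following generic sketch (not the paper's implementation) writes down its under- and overestimators and can be evaluated on the voltage box from the electricity model:

```python
def mccormick_envelope(x, y, x_l, x_u, y_l, y_u):
    """McCormick under-/overestimators of the bilinear term w = x*y on the
    box [x_l, x_u] x [y_l, y_u]; the r = 1 case of the relaxations R_r."""
    under = max(x_l * y + x * y_l - x_l * y_l,
                x_u * y + x * y_u - x_u * y_u)
    over = min(x_u * y + x * y_l - x_u * y_l,
               x_l * y + x * y_u - x_l * y_u)
    return under, over

# Sandwich property on the voltage box 360 <= u_i, u_j <= 440:
lo, hi = mccormick_envelope(400.0, 420.0, 360.0, 440.0, 360.0, 440.0)
# lo <= 400 * 420 <= hi holds, and the envelope is exact at the box corners.
```

Refining to r > 1 amounts to applying these envelopes on each of the r subintervals, with additional binary variables selecting the active piece, which is exactly the source of the 10 · |Edges| extra integer variables mentioned above.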

Conclusion
In the present work, a general MIQCP problem is considered and two novel methods for computing an enclosure of the nondominated set are presented. For both of them, we proved correctness and finite termination and demonstrated their respective advantages and disadvantages. The implementation of the second approach is currently only able to deal with bilinear and quadratic terms. However, one could handle general polynomial terms while still relying on McCormick relaxations. For general nonlinear terms, one has to resort to a more elaborate relaxation technique as presented in, e.g., [11]. However, these are only implementation issues: as long as the relaxation technique satisfies Assumption 5.1, the theoretical results presented in this paper still apply.
Funding Open Access funding enabled and organized by Projekt DEAL.

Data availability
The datasets used for computations in Sect. 6 are available from the corresponding author on reasonable request.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.