Nonconvex constrained optimization by a filtering branch and bound

A major difficulty in optimization with nonconvex constraints is to find feasible solutions. As simple examples show, the αBB algorithm for single-objective optimization may fail to compute feasible solutions even though it is a popular method in global optimization. In this work, we introduce a filtering approach motivated by a multiobjective reformulation of the constrained optimization problem. Moreover, the multiobjective reformulation makes it possible to identify the trade-off between constraint satisfaction and objective value, which is also reflected in the quality guarantee. Numerical tests confirm that we can indeed find feasible, and often optimal, solutions where the classical single-objective αBB method fails, i.e., where it terminates without ever finding a feasible solution.

of the problem at hand, it may already be a challenging task to find feasible solutions, and even more so to find feasible solutions that are provably optimal.

Constraint handling and multiobjective counterparts
In this paper, we focus on constrained optimization problems where the objective function as well as the constraints are given by twice continuously differentiable functions. Noting that attaining feasibility is a prevalent difficulty in constrained optimization, we take a multiobjective perspective and suggest relaxing all complicating constraints and reinterpreting them as additional and independent objective functions. From a practical point of view, such multiobjective counterpart models account for the fact that the right-hand-side values of constraints are often based on individual preferences and/or estimations (consider, for example, quality constraints in an engineering design problem). Indeed, slight constraint violations may be acceptable if this allows for a significant improvement of the primary (cost) objective function. Conversely, if a significantly better quality can be obtained at only slightly higher cost, the decision maker may be willing to invest more in a solution that in a sense "oversatisfies" the constraints. A multiobjective counterpart model provides trade-off information between constraint satisfaction on the one hand and the primary objective function on the other hand, and thus supports the decision-making process and the selection of a most preferred solution.
From a numerical point of view, multiobjective counterpart models have another important advantage: By relaxing all complicating constraints, feasible solutions are easily available to initiate the optimization procedure. Depending on the selected constrained programming solver, this may actually be a crucial property, see Example 1 below. Moreover, even when the original constrained optimization problem is infeasible, multiobjective counterpart models have the potential to efficiently compute "best possible" solutions that, naturally, remain infeasible for the original constrained problem, but perform as well as possible w.r.t. both constraint satisfaction and primary objective in a multiobjective sense.

The αBB method
The αBB method suggested by [1] is an example of a deterministic constrained programming algorithm that aims at the determination of a globally optimal solution. It can be interpreted as a geometric branch-and-bound algorithm that discards subregions of the feasible set (referred to as boxes) based on efficiently computed lower and upper bounds on the optimal objective value. It terminates with the best found feasible solution as soon as the difference between upper and lower bound falls below a prespecified accuracy requirement. On a specific subbox, a lower bound is obtained by solving an auxiliary convex optimization problem (referred to as the convexified problem in the following) restricted to a convex superset of the feasible set in the considered subbox. The objective function of this convexified problem is a convex underestimator of the original objective function, and thus an optimal solution yields a lower bound on the best possible objective value in the subbox in question. Upper bounds, on the other hand, are obtained from images of already known feasible solutions, and thus crucially depend on the ability to efficiently compute such feasible solutions. Usually, feasible solutions are retrieved from the optimal solutions of the convexified problems, i.e., by evaluating the respective solutions with the original constraint functions. However, since the convexified problems operate on relaxed (convex) feasible sets, their solutions do not have to be feasible for the original problem, and in this case the determination of upper bounds becomes complicated. This difficulty was described in [17] and illustrated by the following example problem:

Example 1 [17] Consider the single-objective optimization problem (1). The global optimum of problem (1) is x* = (2^{1/4}, 0) ≈ (1.1892, 0), and the optimal objective value is f(x*) = 2^{1/4} ≈ 1.1892.
As was noted in [17], the αBB method does not make any progress when applied to problem (1) since the optimal solutions of the convexified problems are always infeasible for the original problem. As a consequence, lower bounds are constructed, while upper bounds are never obtained. Therefore, the αBB method can never satisfy the accuracy requirements and thus does not terminate. In the following, we will use Example 1 as a test case to exemplify the advantages and also the difficulties when considering multiobjective counterpart models and, as a natural consequence, a multiobjective counterpart of the αBB method that is specifically tailored for the solution of constrained optimization problems.
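The failure mode just described is easy to reproduce schematically. The following Python sketch is not the authors' implementation and does not use problem (1): the toy objective, the toy constraint, and the sampling-based bound routine are our own placeholder assumptions (a rigorous αBB method would instead minimize a convex underestimator to obtain valid lower bounds). It only illustrates that the incumbent upper bound is updated exclusively at feasible points, which is exactly the step that never succeeds in Example 1.

```python
import math

def f(x):                       # toy objective (our assumption, not problem (1))
    return (x - 1.0) ** 2

def g(x):                       # toy constraint g(x) <= 0 (our assumption)
    return math.sin(x)

def lower_bound(func, a, b, n=40):
    # Placeholder for the convexified subproblem: a sampled minimum stands
    # in for minimizing a convex underestimator of func over the box [a, b].
    return min(func(a + (b - a) * i / n) for i in range(n + 1))

def branch_and_bound(a0, b0, eps=1e-3):
    best_x, best_val = None, math.inf      # incumbent comes from feasible points only
    boxes = [(a0, b0)]
    while boxes:
        a, b = boxes.pop()
        if lower_bound(f, a, b) > best_val - eps:
            continue                       # discard: box cannot improve the incumbent
        m = 0.5 * (a + b)
        if g(m) <= 0 and f(m) < best_val:  # upper bounds require feasibility
            best_x, best_val = m, f(m)
        if b - a > eps:                    # branch: split at the midpoint
            boxes += [(a, m), (m, b)]
    return best_x, best_val
```

On X = [0, 6] the feasible set of this toy instance is [π, 6], and the sketch returns a point close to x = π. If g were violated at every evaluated point, best_val would stay infinite and the gap between upper and lower bounds would never close, mirroring the nontermination observed for Example 1.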

Related literature
The close relationship between constrained optimization problems and multiobjective counterpart models has been discussed and exploited in a variety of different ways. We refer to [18] for a general survey on the interrelation between different relaxation strategies for constrained programming problems (like, for example, Lagrangian relaxation and exact penalty functions) and corresponding scalarizations of the multiobjective counterpart model (in the case of Lagrangian relaxation, this is the classical weighted-sum scalarization, while exact penalty functions correspond to so-called elastic constraint scalarizations). In [15], a multiobjective counterpart approach is introduced for constrained multiobjective optimization problems and the interrelation of constrained and unconstrained multiobjective optimization is examined. Multiobjective counterpart models have motivated a variety of solution approaches for different classes of constrained optimization problems. As a prominent example, multiobjective counterpart models naturally relate to filter methods, where the total constraint violation is interpreted as a second objective function, see, for example, [11] and the survey in [12]. Multiobjective counterpart models are also used, among others, to handle constraints in evolutionary algorithms (see, for example, [32] for a survey), and in the context of combinatorial optimization problems to efficiently compute solution alternatives for multiobjective and multidimensional knapsack problems [30]. Finally, we note that, besides constraint handling, multiobjective optimization is also a powerful tool for other algorithmic aspects; see [34] for a recent example.
Common algorithms to solve multiobjective optimization problems globally can be divided into deterministic and stochastic algorithms. Stochastic algorithms, also named evolutionary methods, construct a population of (feasible) solutions and use evolutionary techniques in order to find new feasible solutions with a better objective value. See [3,4,13] for some exemplary procedures. A drawback of such algorithms is that they cannot guarantee to find optimal solutions in a finite amount of time.
Hence, we want to make use of a deterministic algorithm. Good surveys on deterministic methods in multiobjective optimization can be found in the books [22,25]. Most of the known deterministic methods for biobjective optimization are based on the branch-and-bound approach, see, for example, [10,21,35]. The upper and lower bounds for "optimal" points in the image space are often obtained by using interval methods or Lipschitz properties of the objectives. For multiobjective optimization problems (with more than two objectives), only a few branch-and-bound algorithms exist, see [2,9,24,28,29]. In [24], an improved lower bounding procedure is introduced which uses αBB underestimators and Benson's outer approximation algorithm for convex multiobjective optimization problems, see [7,19]. Furthermore, the proposed algorithm guarantees a certain accuracy of the computed solutions in the pre-image and in the image space.

Contribution
It is well known from the literature that using multiobjective counterparts can be highly beneficial, as outlined above. In a short conference paper [8], the authors of the present paper already observed that solving the nonconvex multiobjective counterpart by a global solution technique can deliver feasible solutions for the original constrained single-objective problem, and first numerical experiments on problem (1) were performed.
In this work, after introducing the basic notations and definitions in Sect. 2, we examine the relations between constrained optimization problems and their multiobjective counterparts in detail, see Sect. 3. We also study ε-optimality as well as weak optimality notions. In Sect. 4, we present for the first time a branch-and-bound based algorithm for the specific nonconvex multiobjective counterpart problem. This algorithm is specifically tailored to find those points which deliver useful information for the original constrained optimization problem, without wasting time on a full determination of the nondominated set of the multiobjective counterpart. We prove the finiteness and the correctness of the proposed new algorithm.
In Sect. 5, we propose some possible post-processing for improving the results further. Finally, we illustrate the new algorithm on various test instances in Sect. 6. We conclude with Sect. 7.

Definitions and notations
In this paper, we focus on two variants of constrained optimization problems. Let X ⊆ R^n be a box, i.e., there are vectors x̲, x̄ ∈ R^n with x̲ ≤ x̄ and X = {x ∈ R^n | x̲ ≤ x ≤ x̄}. The inequality sign ≤ has to be understood component-wise. Furthermore, let f : R^n → R and g_k : R^n → R, k = 1, …, r, be twice continuously differentiable functions. For an arbitrary set A ⊆ R^n, we denote the image set of A under the map f by f(A). The following two constrained optimization problems (P1) and (P2) are equivalent in the sense that they have the same feasible set and objective function. We assume in the following that the feasible set S of (P1) and (P2), given by S := {x ∈ X | g_k(x) ≤ 0, k = 1, …, r}, is not empty. We refer to the elements of the feasible set of a single-objective optimization problem as feasible solutions.

Definition 1 Let ε > 0 be given.
(i) A feasible solution x̄ ∈ S is said to be minimal for (P1) (or (P2), respectively) if f(x̄) ≤ f(x) holds for all x ∈ S.
(ii) A feasible solution x̄ ∈ S is said to be ε-minimal for (P1) (or (P2), respectively) if f(x̄) < f(x) + ε holds for all x ∈ S.

Next, we formulate two box-constrained multiobjective optimization problems that relax the constraints of (P1) and (P2), respectively, and reinterpret them as additional objective functions. We refer to these problems as the multiobjective counterparts (MOP1) and (MOP2) of (P1) and (P2), respectively. The relations between the original problems and the counterpart problems are explored later.
Note that the biobjective problem (MOP2) has an objective function which is in general not differentiable. To simplify the notation, we define the mapping G : R^n → R by G(x) := max_{k=1,…,r} g_k(x). Moreover, we write (f, g_1, …, g_r)(x) and (f, G)(x) to denote the objective vector of a solution x ∈ X w.r.t. problem (MOP1) and (MOP2), respectively.

Multiobjective counterpart models
Multiobjective counterpart models fall in the class of multiobjective optimization problems. In this context, optimality concepts are based on comparing outcome vectors subject to order relations. We use the classical concept of Pareto dominance that is based on the following order relations for two vectors x, y ∈ R^m (see, for example, [6] or [22]):

x ≤ y ⇔ x_j ≤ y_j for all j = 1, …, m,
x ⪇ y ⇔ x_j ≤ y_j for all j = 1, …, m and x ≠ y,
x ≰ y ⇔ x_j > y_j for at least one j ∈ {1, …, m},
x < y ⇔ x_j < y_j for all j = 1, …, m.

The symbols ≥, ⪈, ≱, and > are defined analogously.
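In code, these order relations reduce to componentwise comparisons. The following Python helpers are an illustrative sketch; the function names are ours, not from the paper:

```python
def leq(x, y):
    """x <= y : componentwise less-than-or-equal."""
    return all(xj <= yj for xj, yj in zip(x, y))

def dominates(x, y):
    """x strictly precedes y in the Pareto sense: componentwise <= and x != y."""
    return leq(x, y) and tuple(x) != tuple(y)

def strictly_less(x, y):
    """x < y : strictly less in every component."""
    return all(xj < yj for xj, yj in zip(x, y))
```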
To simplify notation, we denote the vector-valued objective functions of (MOP1) and (MOP2) by h : R^n → R^{1+r} with h(x) = (f, g_1, …, g_r)(x) and h : R^n → R^2 with h(x) = (f, G)(x), respectively. Moreover, e = (1, …, 1)^T ∈ R^m is the m-dimensional all-ones vector, where m = 1 + r or m = 2, respectively, depending on the dimension of the currently considered problem.
If x̄ is weakly efficient, then the corresponding outcome vector h(x̄) is called weakly nondominated.
Note that for all considered optimization problems (P1), (P2), and (MOP1), (MOP2), an optimal or efficient solution, respectively, exists, because both feasible sets S and X are compact and the objective functions are continuous. Moreover, for the efficient sets of (MOP1) and (MOP2) external stability holds, since the ordering cone R^m_+ is a pointed closed convex cone and h(X) := {h(x) | x ∈ X} is a compact set, compare [27, Theorem 3.2.9]. This means that for any x ∈ X there exists some x̄ ∈ X such that x̄ is efficient and h(x̄) ≤ h(x). As a consequence, the next theorem, which is given for instance in [18], also holds in our setting.

Theorem 1
The set of minimal solutions of the constrained problem (P1) always contains an efficient solution of the associated multiobjective counterpart problem (MOP1), and all minimal solutions of (P1) are weakly efficient for (MOP1). Conversely, the set of efficient solutions of (MOP1) contains at least one minimal solution of (P1).
Since the pair ((P2), (MOP2)) is a special case of the pair ((P1), (MOP1)), see Remark 1, Theorem 1 also holds for ((P2), (MOP2)). As explained in [18], an optimal solution of (P1) can be calculated as a specific efficient solution of (MOP1); then x̄ is minimal for (P1).
In general, the determination of the complete efficient set is not practicable if only the solution of a single-objective constrained optimization problem (P 1 ) is sought. However, if an algorithm can compute an approximation of the efficient set in a region of interest, it is possible to find near-optimal solutions of the constrained optimization problem in reasonable time, and additionally provide trade-off information between objective values and constraint satisfaction.

Lower bounding by convex underestimators
Branch-and-bound algorithms heavily rely on lower bounds for the values of scalar-valued functions on subsets. One possible approach to obtain such lower bounds is the formulation of so-called convex underestimators. A convex underestimator of a function f : R^n → R on X is a convex function that does not exceed f anywhere on X, see, for example, [1]. A convex underestimator for a twice continuously differentiable function f on X can, for example, be calculated as

f_α(x) := f(x) + (α_f/2) Σ_{i=1}^{n} (x̲_i − x_i)(x̄_i − x_i),   (2)

where α_f ≥ max{0, −min_{x∈X} λ_min,f(x)}. Here, λ_min,f(x) denotes the smallest eigenvalue of the Hessian H_f(x) of f at x, see [20]. The minimal value of f_α over X, which can be calculated by standard techniques from convex optimization, then delivers a lower bound for the values of f on X. A lower bound for λ_min,f(x) over X can easily be calculated with the help of interval arithmetic, see again [20]. For that, the Matlab toolbox Intlab can be used efficiently [26]. See also [31] for improved lower bounds. The above and some other methods to obtain lower bounds for λ_min,f(x) are described and tested in [1]. There are also other possibilities for the calculation of convex underestimators. For example, in [1] special convex underestimators for bilinear, trilinear, fractional, fractional trilinear, or univariate concave functions are defined. Here, we restrict ourselves to the convex underestimator (2). The theoretical results remain true in case the above underestimators are replaced by tighter ones.
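To make (2) concrete, the following one-dimensional Python sketch builds f_α for a given f. It estimates α_f by sampling the second derivative over the box, whereas the rigorous approach described above bounds the smallest Hessian eigenvalue by interval arithmetic; the function names and the sampling shortcut are our own assumptions.

```python
import math

def alpha_bb_underestimator(f, d2f, lo, hi, samples=200):
    """1-D sketch of the alphaBB underestimator (2) on the box [lo, hi].

    alpha is bounded by *sampling* the second derivative d2f; the rigorous
    method uses interval arithmetic (e.g., Intlab) to bound the smallest
    Hessian eigenvalue, so this alpha is only approximately valid.
    """
    grid = [lo + (hi - lo) * i / samples for i in range(samples + 1)]
    lam_min = min(d2f(x) for x in grid)       # approximate min eigenvalue
    alpha = max(0.0, -lam_min)                # alpha_f >= max{0, -min lambda}

    def f_alpha(x):
        # the quadratic term (lo - x)(hi - x) is nonpositive on the box,
        # so f_alpha never exceeds f there
        return f(x) + 0.5 * alpha * (lo - x) * (hi - x)

    return f_alpha, alpha
```

For f(x) = sin(x) on [0, 2π] this yields α_f ≈ 1, and f_α stays below f on the whole box; minimizing the convex function f_α then gives a lower bound for the values of f on the box.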

Relations between constrained optimization problems and their multiobjective counterparts
From Sect. 2 we already know that (P1) and (P2) are equivalent and that all optimal solutions of (P1) and (P2) are weakly efficient for (MOP1) and (MOP2), respectively. In addition to some obvious relations, we can state a relationship between the ε-minimal solutions of (P1) and the ε-efficient solutions of (MOP1). Due to Remark 1, an equivalent result holds for ((P2), (MOP2)).
Proof Since x̄ ∈ S is ε-minimal for (P1), it holds that f(x̄) < f(x) + ε for all x ∈ S. Now assume that x̄ is not ε-efficient for (MOP1). Then there exists a solution x̃ ∈ X from which we obtain a contradiction to the ε-minimality of x̄ for (P1).
The relationship between the different solution categories for (P1) and (MOP1) according to Theorem 1 and Lemma 1 is visualized in Fig. 1.
In contrast to the equivalence between (P1) and (P2), the optimization problems (MOP1) and (MOP2) are in general not equivalent if r ≥ 2. However, we can state the following lemma.

Lemma 2
Let x̄ ∈ X be weakly efficient (or even efficient) for (MOP2). Then x̄ ∈ X is weakly efficient for (MOP1).
To see that the converse implication does not hold, one can construct instances in which all x ∈ X are weakly efficient for (MOP1), but only x = 0 is weakly efficient for (MOP2).
Note that for the special case r = 1, the two problems (MOP1) and (MOP2) coincide. The feasible solutions x̄ and x̃ are both efficient for (MOP1), and their images do not dominate each other. (iii) The assertion that x̄ is minimal for (P2) follows analogously to (i). Because of the equivalence of (P1) and (P2), the point x̄ is also minimal for (P1). (iv) The assertion that x̄ is minimal for (P2) follows analogously to (ii). Because of the equivalence of (P1) and (P2), the point x̄ is also minimal for (P1).
Note that a solution x̄ according to Lemma 3, cases (i) and (iii), does not have to exist. Consider, for example, a simple linear problem (note that x* = 0 is the unique minimal solution for (P1) in this case). Moreover, Lemma 3, cases (iii) and (iv) with x̄ ∈ X, do not hold in general. This is the reason why we focus on solving problem (MOP2) rather than problem (MOP1) in the following sections.

Multiobjective counterpart αBB algorithm
We will now focus on the constrained optimization problem (P2) and its multiobjective counterpart (MOP2), as suggested at the end of Sect. 3.
To solve (MOP2) we adapt the branch-and-bound based algorithm for nonconvex multiobjective optimization from [24]. It is based on an iterative subdivision of the feasible set X = [x̲, x̄] into successively smaller boxes (see Sect. 4.1 below), combined with a bounding scheme that prunes irrelevant areas of the decision space. While this multiobjective αBB algorithm determines an approximation of the complete nondominated set, the multiobjective counterpart αBB algorithm (MOCPαBB algorithm) directs the search towards the region of interest for the constrained problem (P2) in order to avoid unnecessary computations as much as possible. More precisely, given a required accuracy ε > 0, the multiobjective αBB algorithm from [24] terminates with an approximation of the complete nondominated set of (MOP2) that consists of ε-nondominated points (i.e., images of ε-efficient solutions) such that for every nondominated point y there exists a representative ȳ in the approximation with ȳ − (ε/2) e ≤ y. Recognizing the fact that in our case (MOP2) is a multiobjective counterpart of a constrained optimization problem (P2), we suggest appropriately adapted discarding tests (Sect. 4.2) as well as selection and termination rules (Sect. 4.3) to direct the search towards an optimal solution of the constrained problem. The overall MOCPαBB algorithm is stated in Sect. 4.4, and Sect. 4.5 verifies the existence of appropriate bounds.

Box generation and branching scheme
The subdivision of the feasible set X = [x̲, x̄] of (MOP2) in the decision space is implemented in the standard way of such subdivision algorithms. Thus, it is completely analogous to the subdivision in the multiobjective αBB algorithm of [24]: Starting from the initial box X = [x̲, x̄], a series of subboxes is generated by iteratively splitting a currently selected box X* perpendicularly to a longest edge into two subboxes X1 and X2. For the description of the selection rule for a box X*, which determines the order in which the boxes are subdivided, we refer to Sect. 4.3. We denote the list of current, also called active, boxes by L_W. This list is also referred to as the working list in the following, and it is initialized with X. The method terminates when the list L_W is empty. During the course of the algorithm, boxes that cannot contribute to the approximation of the optimal set of problem (P2) are discarded from further consideration. On the other hand, boxes that have achieved a certain accuracy w.r.t. the current bound sets and for which a further refinement is not promising are stored in a tentative solution list L_S.

Bound computation and discarding test
A central element of the MOCPαBB algorithm is the discarding test, which is used to prune boxes from the list L_W that do not contain (near) optimal solutions of (P2). For this purpose, upper and lower bounds are computed based on solutions of (MOP2) (upper bounds) and convex underestimators over selected subboxes (lower bounds), respectively. Upper Bounds During the course of the MOCPαBB algorithm and while exploring selected boxes from L_W, we generate a stable set L_PNS of feasible outcome vectors (called the provisional nondominated set) representing upper bounds for the global nondominated points of (MOP2). In this context, a set N ⊂ R^m is called stable if there are no y^1, y^2 ∈ N with y^1 ⪇ y^2. Every time a point q is a new candidate for L_PNS, we check whether this point is dominated by any other point of L_PNS. In this case, q is not included in L_PNS. Otherwise, q is added to L_PNS and all points that are dominated by q are removed. Figure 2 shows an example of a provisional nondominated set L_PNS for an instance of (MOP2). Note that in the biobjective case considered here, the points y = (f, G)(x) in L_PNS can be ordered such that their objective function values f(x) are strictly increasing while their largest constraint values G(x) = max_{k=1,…,r} g_k(x) are strictly decreasing.
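The update of L_PNS described above can be sketched as follows (illustrative Python; the list-based representation and the function name are our assumptions):

```python
def update_pns(pns, q):
    """Insert outcome vector q into the provisional nondominated set.

    q is rejected if it is dominated by (or equal to) a stored point;
    otherwise q is added and all points dominated by q are removed,
    so the returned list is again stable.
    """
    def dom(a, b):  # b dominates a: componentwise <= and b != a
        return all(bj <= aj for aj, bj in zip(a, b)) and a != b

    if any(dom(q, p) or p == q for p in pns):
        return pns                         # q brings no new information
    return [p for p in pns if not dom(p, q)] + [q]
```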
Since we want to direct the search towards the region of interest of the constrained problem (P2), we are particularly interested in the two points q^1 and q^2 from L_PNS defined in (3) and (4); see Fig. 2 for an illustration. Intuitively, these are the points of L_PNS ⊆ R^2 which lie "around" G(x) = 0. Note that q^1 = argmax_{p ∈ L_PNS} {p_1 | p_2 > 0} and q^2 = argmin_{p ∈ L_PNS} {p_1 | p_2 ≤ 0} also hold. It is possible that one of the two points does not exist. This case is considered in Sect. 4.5. For the remainder of this section, we assume that both points q^1 and q^2 exist. Note that, as the list L_PNS changes its cardinality and precision during the algorithm, the points q^1 and q^2 generally change their position during the procedure.
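Given a stable list, q^1 and q^2 can be extracted by simple filtering, as in the following Python sketch (the None convention for a missing point is our assumption):

```python
def interest_points(pns):
    """Return (q1, q2) from a stable set of outcome vectors (f, G).

    q1: point with the largest f value among those with G > 0 (in a
        stable set this is the infeasible point closest to feasibility);
    q2: point with the smallest f value among those with G <= 0, i.e.,
        the best known feasible outcome. Either may be None.
    """
    infeasible = [p for p in pns if p[1] > 0.0]
    feasible = [p for p in pns if p[1] <= 0.0]
    q1 = max(infeasible, key=lambda p: p[0]) if infeasible else None
    q2 = min(feasible, key=lambda p: p[0]) if feasible else None
    return q1, q2
```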
Lower Bounds A main ingredient of the adapted discarding test in our new MOCPαBB algorithm is to calculate lower bounds for the image set of a selected box X* from L_W, i.e., a set L B such that L B + R^2_+ is a superset of {y ∈ R^2 | ∃ x ∈ X* : y_1 = f(x), y_2 = G(x)}, and to compare them with the upper bounds q^1 and q^2 of the interesting part of the nondominated set.
As a lower bound for the image set of a box X*, we use the so-called ideal point of convex underestimators of the objective functions on the considered box X*. In general, the ideal point of a vector-valued function consists of the global minimal values of its component functions. Determining this point in the nonconvex case is not possible without applying techniques from global optimization, which are time-consuming. As we have to calculate ideal points repeatedly over many subboxes, we use convex underestimators as defined in Sect. 2.2 for both objective functions. The ideal point of the convex underestimators is denoted by (f̲, G̲)^T, where f̲ is the minimal value on X* of a convex underestimator f_α of f, and G̲ is the minimal value on X* of a convex underestimator G_β of G. While we can directly use the convex underestimator (2) to define f_α, the particular structure of G as the maximum of r functions suggests several alternative definitions based on (2). In the following lemma, we propose two such definitions for G_β.

Lemma 4
Let a box X* = [x̲*, x̄*] ⊆ X be given. Moreover, let g_k : R^n → R, k = 1, …, r, be twice continuously differentiable functions and let g_{k,β_k} : R^n → R be the corresponding convex underestimators on X* according to (2), i.e., g_{k,β_k}(x) = g_k(x) + (β_k/2) Σ_{i=1}^{n} (x̲*_i − x_i)(x̄*_i − x_i). Then, with β := max_{k=1,…,r} β_k, both G̃_β(x) := G(x) + (β/2) Σ_{i=1}^{n} (x̲*_i − x_i)(x̄*_i − x_i) and G_β(x) := max_{k=1,…,r} g_{k,β_k}(x) define convex underestimators of G on X*.

Proof Follows immediately from [20] and the fact that the maximum of convex functions is again a convex function.
While G̃_β is perhaps the more intuitive choice for a convex underestimator, since it follows directly from (2), the convex underestimator G_β is generally preferable since it provides stronger lower bounds. To illustrate this, consider an optimization problem with two constraints g_1 and g_2 where g_1(x) ≥ g_2(x) for all x ∈ X*. Additionally, assume that g_1 is a convex function and g_2 is nonconvex. Then β_1 = 0 and β_2 > 0, and hence G̃_β perturbs G = g_1 by a quadratic term with β = β_2 > 0, while G_β(x) = max{g_1(x), g_{2,β_2}(x)} = g_1(x) = G(x) on X*. The following lemma shows that this is true in general, i.e., G_β is indeed a stronger convex underestimator than G̃_β.

Lemma 5 Under the assumptions of Lemma 4, it holds that G̃_β(x) ≤ G_β(x) for all x ∈ X*.
Proof Let x ∈ X* be arbitrary but fixed, and let k* ∈ {1, …, r} be an index for which G(x) = g_{k*}(x). With q(x) := Σ_{i=1}^{n} (x̲*_i − x_i)(x̄*_i − x_i) ≤ 0 and β ≥ β_{k*}, we obtain G̃_β(x) = g_{k*}(x) + (β/2) q(x) ≤ g_{k*}(x) + (β_{k*}/2) q(x) = g_{k*,β_{k*}}(x) ≤ G_β(x). Therefore, G_β is the preferred convex underestimator and is used in our procedure. As G_β is in general not differentiable, we solve a smooth convex optimization problem to obtain the minimum of G_β on the box X*. Discarding Tests The following lemma is the basis for the first two discarding tests.
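A standard way to write this smooth problem, using the underestimators g_{k,β_k} from Lemma 4, is the epigraph form below; this is our reconstruction, not necessarily the authors' exact formulation:

```latex
\min_{(x,t) \in X^* \times \mathbb{R}} \; t
\qquad \text{s.t.} \qquad g_{k,\beta_k}(x) \le t, \quad k = 1,\dots,r,
```

whose optimal value equals min_{x ∈ X*} G_β(x), since G_β(x) = max_{k=1,…,r} g_{k,β_k}(x).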

Lemma 6
Let q^1 and q^2 be defined as in (3) and (4). Moreover, let (f̲, G̲)^T be a lower bound vector, where f̲ and G̲ are lower bounds for f and G on X*, respectively. If f̲ > q^2_1 or G̲ > q^1_2 holds, the box X* does not contain any minimal solution of (P2).

Proof If the first condition f̲ > q^2_1 holds, we obtain q^2_1 < f̲ ≤ f(x) for all x ∈ X*. Moreover, the point q^2 is the objective vector of a solution x̄ ∈ X, and by (4) it holds that G(x̄) = q^2_2 ≤ 0. Hence, q^2 is the image of a feasible solution x̄ of (P2), and the objective function value q^2_1 = f(x̄) is less than any objective function value attainable in X*. Thus, there is no minimal solution of (P2) in X* in this case.
If G̲ > q^1_2 holds, by (3) it follows that 0 < q^1_2 < G̲ ≤ G(x) = max_{k=1,…,r} g_k(x) for all x ∈ X*. Hence, for every x ∈ X* there is at least one constraint g_k with g_k(x) > 0. This means that every x ∈ X* is infeasible for (P1) and cannot be a minimal solution of (P1).
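Both conditions of Lemma 6 are cheap scalar comparisons once the bounds are available, as the following Python sketch shows (the function name and the None handling for missing points are our assumptions):

```python
def can_discard(f_lb, G_lb, q1, q2):
    """Discarding test of Lemma 6 for a box X*.

    f_lb, G_lb: lower bounds of f and G on the box; q1, q2: the two
    points of interest from the provisional nondominated set (either
    may be None if it does not exist yet).
    """
    if q2 is not None and f_lb > q2[0]:
        return True   # a feasible point with a better f value is already known
    if q1 is not None and G_lb > q1[1]:
        return True   # every point of the box violates some constraint
    return False
```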
Note that the set {(f̲, G̲)^T} + R^2_+ can be interpreted as an outer approximation of the convex set P, see (7), and that the discarding test of Lemma 6 actually tests whether p̄ := (q^2_1, q^1_2)^T ∉ {(f̲, G̲)^T} + R^2_+ (and hence p̄ ∉ P); see Fig. 2 for an illustration of p̄. In this context, an outer approximation of a set A ⊆ R^2 is given by a set L B ⊆ R^2 such that A ⊆ L B + R^2_+. While the discarding test of Lemma 6 is very simple and can easily be evaluated, it is rather weak in practice. In order to obtain a tighter outer approximation of the set (f_α, G_β)(X*) and, hence, to strengthen the discarding test, we replace the ideal point of the convex underestimators by an improved lower bound set L B that is implicitly obtained from solving a single-objective convex optimization problem (P*_{X*,p̄}) to decide whether p̄ belongs to P. A minimal solution of (P*_{X*,p̄}) is denoted by (x̄, t̄). If t̄ ≤ 0 holds, we have p̄ ∈ P. Otherwise, if t̄ > 0 holds, the point p̄ lies outside P and can thus be separated from P by a supporting hyperplane that can be constructed from a Lagrange multiplier λ* ∈ R^{1+r} for the inequality constraints of (P*_{X*,p̄}): given λ*, a normal vector of this hyperplane can be obtained, and a support vector of this hyperplane is p̄ + t̄ e. This result can be shown by using necessary and sufficient optimality conditions for nonsmooth convex optimization problems with linear constraints.
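Following the construction of lower bound sets in [24], one consistent way to state (P*_{X*,p̄}) with the underestimators f_α and g_{k,β_k} is the following; this is our reconstruction from the surrounding text, not a verbatim quote:

```latex
(P^{*}_{X^{*},\bar{p}}) \qquad
\min_{(x,t) \in X^{*} \times \mathbb{R}} \; t
\qquad \text{s.t.} \qquad
f_\alpha(x) \le \bar{p}_1 + t, \qquad
g_{k,\beta_k}(x) \le \bar{p}_2 + t, \quad k = 1,\dots,r.
```

With optimal value t̄, the case t̄ ≤ 0 certifies p̄ ∈ P, while t̄ > 0 certifies p̄ ∉ P, and p̄ + t̄e lies on the boundary of P, matching the case distinction described above.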
Note that the outer approximation of P is indeed improved by using the constructed supporting hyperplane, since the point p̄ is infeasible for this improved approximation.
The next lemma formally proves the above-mentioned fact that a box can be discarded if t̄ > 0.
Lemma 7 Let L_PNS be a stable subset of h(X), let X* ⊆ R^n be a box, let f_α and G_β be the convex underestimators of f and G on X*, and let q^1, q^2 be defined as in (3) and (4). Moreover, consider the optimization problem (P*_{X*,p̄}) with p̄ = (q^2_1, q^1_2)^T and minimal solution (x̄, t̄). If t̄ > 0 holds, then X* does not contain any minimal solution of (P1).
Proof Assume that there is a minimal solution x* of (P1) with x* ∈ X* ∩ S. The point q^2 is the objective vector of a feasible solution x̄ ∈ S, i.e., there is an x̄ ∈ S with (f, G)(x̄) = q^2. Consider the pair (x*, 0) ∈ R^{n+1}. Using the minimality and feasibility of x* and the properties of convex underestimators, we obtain that this pair is feasible for (P*_{X*,p̄}). This contradicts the minimality of t̄ > 0.
Note that the optimization problem (P*_{X*,p̄}) is also used for a termination procedure, which is discussed in Sect. 4.3 below.

The selection and termination rule
A selection rule determines within the branch-and-bound procedure which of the remaining subboxes is bisected next. The aim of every selection rule is to identify boxes whose subboxes deliver good bounds for the globally minimal value. Such rules are usually heuristic. A common rule in multiobjective optimization is to choose the box which has the smallest lower bound w.r.t. one (arbitrary but fixed) objective function, or w.r.t. a weighted sum of all objectives. In our case, we are interested in the minimization of f, but also in finding at least one feasible solution of (P1). Therefore, as long as no point q^2 exists, a box with the smallest lower bound for G is selected as the next box to be bisected, while a box with the smallest calculated lower bound for f is chosen otherwise.
The termination rule decides whether a box is discarded, whether it is stored in the solution list L_S (and thus not further bisected), or whether it is stored in the working list L_W (i.e., the list of active boxes that will be analyzed further). In [24], a termination procedure is introduced which guarantees certain accuracies of the calculated points in the decision and in the objective space. The minimal solution of the optimization problem (P*_{X*,p̄}) plays an essential role for the decision to store a box in the solution list. For our algorithm we use a simpler rule in the first part, which ensures an accuracy of q^1 and q^2: 1. Termination rule: Solve (P*_{X*,p̄}) for the box X* and p̄ = (q^2_1, q^1_2)^T, where q^1 and q^2 are defined as in (3) and (4). The minimal solution is denoted by (x̄, t̄). Store X* in L_S if 0 ≥ t̄ ≥ −ε/2 holds for a given ε > 0. Note that in the case t̄ > 0, the box is discarded because of Lemma 7.

The MOCPαBB algorithm
As mentioned in Sect. 4.1, the MOCPαBB algorithm is based on a box subdivision of X = [x̲, x̄]. For every not-discarded box X* in the working list L_W and in the solution list L_S, an outer approximation of the set P (which is an underestimation of the possible outcome vectors on X*, see (7)) is encoded in a set H. This outer approximation is initialized using the ideal point (f̲, G̲) of P and further refined whenever additional supporting hyperplanes of P become available. Every element of H is a pair consisting of the normal vector λ of a hyperplane and a scalar value, namely the scalar product of λ and one point belonging to the hyperplane. Thus, the initial outer approximation of P, obtained by minimizing the convex underestimators on the box X*, is given by H = {((1, 0)ᵀ, f̲), ((0, 1)ᵀ, G̲)}. Hence, the lower bounds f̲ and G̲ for the two objectives are stored in H as well.
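A possible encoding of H, with each hyperplane stored as a pair (λ, λᵀp): the initialization uses the ideal point, and refinement simply appends further supporting hyperplanes. This is an illustrative sketch, not the paper's data structure.

```python
# Each element of H is (lambda, c) and represents the half-space
# {y in R^2 : lambda[0]*y[0] + lambda[1]*y[1] >= c}; the set P lies in
# the intersection of these half-spaces.
def init_H(f_lb, G_lb):
    # initial outer approximation built from the ideal point (f_lb, G_lb)
    return [((1.0, 0.0), f_lb), ((0.0, 1.0), G_lb)]

def in_outer_approx(y, H):
    return all(lam[0] * y[0] + lam[1] * y[1] >= c for lam, c in H)

H = init_H(f_lb=-1.0, G_lb=-2.0)
H.append(((0.5, 0.5), -1.2))             # refine with a supporting hyperplane
print(in_outer_approx((0.0, -1.5), H))   # True
print(in_outer_approx((-3.0, 0.0), H))   # False: violates f >= f_lb
```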
The complete algorithm consists of two parts. The first part stated above is the main part and includes the discarding test as well as the selection and termination rule which are explained in the previous sections. The second part is a post processing algorithm which includes different (optional) steps in order to further improve the solution quality.
The following theorem states that the algorithm (without the post processing) terminates. We omit the proof as it is a special case of the proof of Lemma 4.1 of [24].

Theorem 3 Algorithm 1 needs finitely many subdivisions in line 7.
Next, we want to derive some properties of the points q¹ and q² computed by Algorithm 1. These properties will be used in the post processing for further improvements.
Theorem 4 Let q¹, q² be defined by (3) and (4) after line 26 of Algorithm 1, and assume that both exist. Denote the pre-images of q¹ and q² by x¹ and x², respectively. Then x¹ and x² are ε-efficient for (MOP₁) and (MOP₂). Additionally, there exist boxes X¹, X² ∈ L_S with x¹ ∈ X¹ and x² ∈ X².

Proof
The solution x¹ is the pre-image of q¹, i.e., f(x¹) = q¹₁ and max_{k=1,…,r} g_k(x¹) = G(x¹) = q¹₂ hold. Assume that x¹ is not ε-efficient for (MOP₁) or for (MOP₂). Then there is a solution x′ ∈ X with

Algorithm 1 MOCPαBB algorithm to solve (P₂)
INPUT: (P₂), ε > 0, δ > 0
OUTPUT: …
3: if a point q² exists then
4: Select a pair (X*, H) with a smallest value f̲ from L_W, delete it from L_W
5: else select a pair (X*, H) with a smallest value G̲ from L_W, delete it from L_W
6: end if
7: Bisect X* perpendicularly to a direction of maximum width → X¹, X²
8: for l = 1, 2 do
9: Compute for f and G := max_{k=1,…,r} g_k(·) their convex underestimators on X^l and the corresponding minimal solutions x_f and x_G
10: …

11: Store the images of x_f and x_G in L_PNS and update L_PNS to a stable set
12: Obtain q¹, q² and p̃ from L_PNS by (3), (4) and (11)
13: if f̲ > q²₁ or G̲ > q¹₂ then
14: Discard X^l
15: else
16: Solve (P*_{X^l,p̃}) with minimal solution (x̃, t̃) and Lagrange multiplier λ̃, and set λ ← (λ̃₁, 1 − λ̃₁)
17: if t̃ > 0 then
18: Discard X^l
19: …

and at least one inequality is strict. First assume that x′ belongs to a box X′ ∈ L_S. Then the termination rule implies that the minimal solution of (P*_{X′,p̃}) is (x̃, t̃) with −ε/2 ≤ t̃ ≤ 0. Consider the pair (x′, −ε) ∈ R^{n+1} for the optimization problem (P*_{X′,p̃}) with p̃ = (q²₁, q¹₂)ᵀ as before. Then, with (9) and (10), we obtain x′ ∈ X′ and that all constraints of (P*_{X′,p̃}) are satisfied. Thus, (x′, −ε) is feasible for (P*_{X′,p̃}), but this contradicts the minimality of t̃ ≥ −ε/2. We can conclude that, since the algorithm terminates, x′ has to belong to a box X′ which was discarded. There are two conditions based on which a box can be discarded: the test from Lemma 7 or the test from Lemma 6. As (x′, −ε) is still feasible for (P*_{X′,p̃}), we can see that X′ was not discarded because of Lemma 7.
Let (f̲′, G̲′) be the ideal point of the convex underestimators of f and G on X′. Then the two reasons for discarding X′ from Lemma 6 are still possible: (a) f̲′ > q²₁ and (b) G̲′ > q¹₂.

Fig. 3 Example where the pre-image of q² is not ε-minimal for (P₁)

For case (a), we obtain with (9) the chain of inequalities q²₁ < f̲′ ≤ f(x′) ≤ f(x¹) − ε = q¹₁ − ε ≤ q²₁ − ε. This is a contradiction to ε > 0. Case (b) leads with (10) to q¹₂ < G̲′ ≤ G(x′) ≤ G(x¹) − ε = q¹₂ − ε, which contradicts ε > 0 as well. Hence, x′ does not exist and x¹ is ε-efficient for (MOP₁) and (MOP₂). Now choose any box X*, considered during Algorithm 1, with x¹ ∈ X*. Let (f̲, G̲) be the ideal point of the convex underestimators of f and G on X*. As f̲ ≤ f(x¹) = q¹₁ ≤ q²₁ and G̲ ≤ G(x¹) = q¹₂ hold even for the final points q¹, q², X* cannot have been discarded by Lemma 6. In addition, with (x¹, 0) ∈ R^{n+1} a feasible solution for (P*_{X*,p̃}) is found and thus the minimal value t̃ of this optimization problem satisfies t̃ ≤ 0. Hence, X* does not get discarded. As the algorithm terminates (as shown in Theorem 3), there exists a subbox X¹ with x¹ ∈ X¹ ⊆ X* which is stored in L_S.
The proof for x² is analogous; note that G(x²) ≤ 0 < q¹₂ holds. Figure 3 shows that the pre-image of q² does not have to be ε-minimal for the single-objective problem (P₁). Indeed, even in cases where q²₂ = 0 holds, f(x²) = q²₁ may be arbitrarily far away from the optimal value f*.
Nevertheless, the following lemma states that we can enclose the image point of a minimal solution for (P₁) in a tube-shaped set determined by p̃, and that the ε-minimality for (P₁) of the pre-image of q² is ensured in a special case.

Lemma 8 Let x* be a minimal solution of (P₁). Then f(x*) ≥ q²₁ − ε/2 or G(x*) ≥ q¹₂ − ε/2 holds. A direct consequence from this is that if q¹₂ > ε/2 holds, the pre-image of q² is ε-minimal for (P₁) (actually (ε/2 + μ)-minimal for all μ > 0).
Proof Since x* is minimal for (P₁) and because of the definition of q¹ and q² (see (3) and (4)), it holds f(x*) ≤ q²₁. Assume now that f(x*) < q²₁ − ε/2 and G(x*) < q¹₂ − ε/2. Then we can choose a parameter μ > 0 such that f(x*) ≤ q²₁ − (ε/2 + μ) and G(x*) ≤ q¹₂ − (ε/2 + μ). Let X* be a box from the solution list L_S with x* ∈ X*. Because of Lemma 7 and the termination rule, X* was not discarded, i.e., for the minimal solution (x̃, t̃) of (P*_{X*,p̃}) it holds 0 ≥ t̃ ≥ −ε/2. But (x*, −(ε/2 + μ)) is feasible for (P*_{X*,p̃}), which contradicts the minimality of t̃ ≥ −ε/2. This shows the first part of the result, i.e., that f(x*) ≥ q²₁ − ε/2 or G(x*) ≥ q¹₂ − ε/2 holds. The second part follows immediately: G(x*) ≥ q¹₂ − ε/2 cannot hold because of the feasibility of x* for (P₁) and q¹₂ > ε/2. Hence, f(x*) ≥ q²₁ − ε/2 holds. Using the minimality of x*, it follows that f(x²) = q²₁ ≤ f(x*) + ε/2 = f* + ε/2. Consequently, the pre-image of q² is (ε/2 + μ)-minimal for (P₁) for every μ > 0 and, in particular, ε-minimal for (P₁).

Existence of q 1 and q 2
Recall the definitions of q¹ and q² from (3) and (4). The list L_PNS is not empty after a first element has been added to it at the beginning. Therefore, at least one of the two points q¹, q² exists. However, it is possible that only one of the two points exists during the whole course of the algorithm. This is, for example, the case when X = S or S = ∅ holds. Nevertheless, we can still define a suitable point p̃ which can be used for initializing the discarding test from Lemma 7: the missing component of p̃ is replaced by a constant M, i.e., p̃ = (M, q¹₂)ᵀ if q² does not exist, and p̃ = (q²₁, M)ᵀ if q¹ does not exist. Thereby, the constant real number M has to be chosen as an upper bound of f(·) + ε and of G(·) + ε on the box X, i.e., M ≥ max{f(x) + ε, G(x) + ε} for all x ∈ X. Such an upper bound can be calculated by interval arithmetic, for instance. If one of the two points q¹, q² does not exist, it is clearly not possible to apply the corresponding discarding test from Lemma 6 which depends on the missing point. Recall from Sect. 4.3 that the rule to select a box from L_W depends on the existence of q² as well: as long as q² does not exist, a box with a smallest lower bound for G is chosen; otherwise, the box with a smallest lower bound for f is selected. By the next lemma, we observe that in case feasible solutions for problem (P₂) exist, the algorithm finds at least a so-called ε/2-feasible solution for (P₂), i.e., a solution x ∈ R^n with G(x) ≤ ε/2.

Lemma 9 If S ≠ ∅ holds, Algorithm 1 finds at least an ε/2-feasible solution for (P₂) (within the main while-loop).

Proof If Algorithm 1 finds a solution x with (f, G)(x) = q², then x is feasible for (P₂) and, thus, also ε/2-feasible. We therefore have to consider the case that q² does not exist. Then q¹ exists and p̃ = (M, q¹₂)ᵀ. If q¹₂ = p̃₂ ≤ ε/2, then the pre-image of q¹ is ε/2-feasible and the statement is satisfied. Otherwise, we have q¹₂ = p̃₂ > ε/2. Every time a box X* that contains feasible solutions for (P₂) is considered, we obtain for this box a lower bound G̲ ≤ 0 for G. Define x_f ∈ argmin{f_α(x) | x ∈ X*} and x_G ∈ argmin{G_β(x) | x ∈ X*}. Then the images of x_f and x_G are possible candidates for L_PNS. If one of x_f and x_G is feasible for (P₂), then the algorithm has found a point q². Similarly, if one of x_f and x_G is ε/2-feasible for (P₂), then the image of this point is added to L_PNS and an ε/2-feasible solution is found. Otherwise, G(x_f) > ε/2 and G(x_G) > ε/2 hold, i.e., x_f and x_G are both infeasible for (P₂). In this case, problem (P*_{X*,p̃}) is equivalent to minimizing t subject to G_β(x) ≤ q¹₂ + t and x ∈ X*. Note that the first constraint induced by p̃, namely f_α(x) ≤ M + t, is satisfied for all x ∈ X* and for all t ≥ −ε if M is chosen as described above. Moreover, p̃₂ > ε/2 and G_β(x_G) ≤ 0 (recall that x_G ∈ X*) imply that an optimal solution (x̃, t̃) of problem (P*_{X*,p̃}) satisfies t̃ < −ε/2. Thus, as long as no ε/2-feasible solution is found, no box with G̲ ≤ 0 is discarded. Assume that this is the case every time a box X* with feasible solutions is considered. Recall that the convex underestimators get better for smaller box widths, see [1]. Thus, if a certain box width is reached, we get G(x) − G_β(x) ≤ ε/2 for all x ∈ X* and thus also for x = x_G. Now either G(x_G) ≤ ε/2, and x_G is ε/2-feasible, or G(x_G) > ε/2, but then with (12) we obtain G_β(x_G) ≥ G(x_G) − ε/2 > 0, a contradiction to G̲ ≤ 0.
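The choice of M via interval arithmetic can be illustrated with a minimal natural interval extension (toy helpers of our own, not the Intlab routines used later):

```python
# Enclose f(X) for the illustrative objective f(x) = x1^2 + x2 on
# X = [-2,2] x [-1,3] and set M = sup f(X) + eps, which is a valid choice
# since the interval enclosure is rigorous.
def sq_interval(lo, hi):
    # interval extension of x -> x^2 on [lo, hi]
    lo2, hi2 = lo * lo, hi * hi
    return (0.0 if lo <= 0.0 <= hi else min(lo2, hi2), max(lo2, hi2))

def add_interval(a, b):
    return (a[0] + b[0], a[1] + b[1])

X1, X2 = (-2.0, 2.0), (-1.0, 3.0)
f_range = add_interval(sq_interval(*X1), X2)  # enclosure of f(X)
eps = 0.01
M = f_range[1] + eps
print(f_range, M)   # (-1.0, 7.0) 7.01
```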

Post processing: filtering, box refinement, and local search
The post processing further improves the ε/2-efficient solutions found so far. As before, we assume that the algorithm has found two outcome vectors q¹ and q². The pre-image of q² is a feasible solution for (P₂) and thus q²₁ is an upper bound for the minimal value f* of (P₂). In the special case that q² does not exist, the algorithm continues with p̃ = (M, q¹₂) as described in Sect. 4.5. The pre-image of q¹ is infeasible for (P₂). If it is only slightly infeasible, i.e., if q¹₂ ≤ ε/2, the two procedures Box Refinement and Local Search and Adaptive Discarding aim to find better upper bounds for f* than q²₁. Otherwise, i.e., if q¹₂ is farther above 0, we can prove that the pre-image of q² is already ε-minimal for (P₂), see Lemma 8. We summarize the proposed steps in Algorithm 2.

Filtering
The first post processing step eliminates further boxes from the solution list. The reason is that boxes can be stored in the solution list at an early iteration of Algorithm 1, when the bounds q¹ and q² may still be quite weak although the termination rule is already satisfied. As the bounds q¹ and q², and thus p̃, change during Algorithm 1 until line 26 and come closer to 0 in their second component, some boxes which are already in the solution list would be discarded if they were considered later. The filtering procedure presented in Algorithm 3 aims to find such boxes in order to delete them from the solution list L_S.

Algorithm 2 Post processing
INPUT: (P₂), L_S, q¹, q², p̃, δ
OUTPUT: …
4: (x_min, UB) ← Local Search and Adaptive Discarding(L_S3, A, δ)
5: end if

Box refinement
The next procedure is a refinement procedure which decreases the widths of the boxes in the solution list. By using Lipschitz properties of f (recall that f is twice continuously differentiable and hence Lipschitz continuous on the compact box X), we can show that we thereby obtain a collection of ε-efficient solutions for (MOP₂): for every box of the solution list we find an ε-efficient solution, see the forthcoming Lemma 11. Moreover, we even get ε-efficient points which are close to a minimal solution of (P₂) in the pre-image space.
Let ω(X*) denote the box width of X* = [x̲, x̄], i.e., ω(X*) = ‖x̄ − x̲‖, where ‖·‖ is the Euclidean norm. During the refinement, a more sophisticated rule for storing boxes in the solution list is used in order to obtain additional precision in the pre-image space and to ensure the ε-efficiency of the returned solutions: 2. Termination rule: Solve (P*_{X*,p̃}) for the box X* and p̃ = (q²₁, q¹₂)ᵀ, where q¹ and q² are defined as in (3) and (4). The minimal solution is denoted by (x̃, t̃). Let ε > 0 and δ > 0 be given. Store X* in L_S if (i) ω(X*) < δ and the condition on h(x̃) and p̃ from line 6 of Algorithm 4 holds, or (ii) ω(X*) < ε/max{α, max_{k=1,…,r} β_k}. We summarize the proposed steps in Algorithm 4.

Algorithm 4 Box Refinement
INPUT: (P₂), a list L_S2 of boxes together with the minimal solutions (x̃, t̃) of (P*_{X*,p̃}), q¹, q², p̃, δ
OUTPUT: …
3: Select a triple (X*, x̃, t̃) from L_S2 and delete it from L_S2
4: if t̃ > 0 then
5: Discard X*
6: else if (ω(X*) < δ and h(x̃) p̃) or ω(X*) < ε/max{α, max_{k=1,…,r} β_k} then
7: Store (X*, x̃) in L_S3 and x̃ in A
8: else
9: Bisect X* perpendicularly to a direction of maximum width → X¹, X²
10: for l = 1, 2 do
11: Solve (P*_{X^l,p̃}) with minimal solution (x̃, t̃)
12: Store (X^l, x̃, t̃) in L_S2
13: end for
14: end if
15: end while

Let L be a Lipschitz constant for f on X. A bound for L can be calculated, for example, by applying interval arithmetic to the partial derivatives of f. Indeed, as boxes containing minimal solutions are not discarded and for each box a representative x̃ is stored in A, a lower bound for the minimal value f* of (P₁) can be derived: using the Lipschitz property of f and condition (i) from the second termination rule, we obtain min{f(x̃) − Lδ | x̃ ∈ A} as a lower bound for f*.
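The resulting lower bound on f* can be computed directly from the representatives in A; a toy sketch under the assumptions above (L a Lipschitz constant of f, every stored box of width less than δ; all names are illustrative):

```python
# min over A of f(x) - L*delta underestimates f*: a minimal solution lies in
# some surviving box, and f varies by at most L*delta within a box of width
# less than delta.
def lower_bound_fstar(f, A, L, delta):
    return min(f(x) for x in A) - L * delta

f = lambda x: (x - 1.0) ** 2      # toy objective with f* = 0 at x = 1
A = [0.9, 1.3, 2.0]               # representatives stored during Box Refinement
print(lower_bound_fstar(f, A, L=2.5, delta=0.1))   # 0.01 - 0.25 = -0.24
```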

Lemma 10
The Box Refinement, see Algorithm 4, needs finitely many subdivisions in line 9.
Proof A box X* with ω(X*) < min{δ, ε/max{α, max_{k=1,…,r} β_k}} =: δ′ is stored in L_S3 automatically. Therefore, all boxes from L_S2 and their subboxes are either discarded, stored in L_S3, or bisected until their box width is smaller than δ′.
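Box width and bisection perpendicular to a direction of maximum width, as used in line 9 of Algorithm 4 (and line 7 of Algorithm 1), can be sketched as follows (the list-of-intervals box representation is ours):

```python
import math

def width(box):
    # box: list of (lo, hi) intervals; omega(X*) = Euclidean norm of the edge lengths
    return math.sqrt(sum((hi - lo) ** 2 for lo, hi in box))

def bisect(box):
    # split perpendicular to a coordinate direction of maximum width
    j = max(range(len(box)), key=lambda i: box[i][1] - box[i][0])
    lo, hi = box[j]
    mid = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[j], right[j] = (lo, mid), (mid, hi)
    return left, right

X = [(0.0, 4.0), (0.0, 1.0)]
print(bisect(X))   # ([(0.0, 2.0), (0.0, 1.0)], [(2.0, 4.0), (0.0, 1.0)])
print(width(X))    # sqrt(17)
```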
We omit the proof of the following lemma as it follows the structure of the proofs of Lemma 4.10 and Lemma 4.11 of [24].

Lemma 11
Let A be the output of Algorithm 4. Then every x̃ ∈ A is ε-efficient for (MOP₂).

Local search with adaptive discarding
It is possible to further improve the outcome of Algorithm 1 with a local search algorithm which should be applied after Box Refinement, see Algorithm 4.
As we have proven in Lemmas 4 and 8, the pre-image of q² is ε-minimal for (P₂) if q¹₂ > ε/2 holds. In the other case, we cannot guarantee any ε-minimality for the pre-image of q². Figure 3 shows an example where q² lies far away from the minimal value f*.
For the local search, we use each solution in A, obtained from the Box Refinement (Algorithm 4), as a starting point for a local solver applied to (P₁). Here, we solve (P₁) instead of (P₂) to avoid handling the max-term in (P₂). At least one of these starting points is in a δ-neighborhood of a minimal solution of (P₁). The procedure is described in Algorithm 5. Additionally, we can skip the local search for some boxes, namely whenever the lower bound of f on the current box, which is given by f(x̃) − Lδ, is larger than the current upper bound for f*. Here, x̃ is the ε-efficient solution belonging to a box from the previous solution list and L is the Lipschitz constant of f.

Algorithm 5 Local Search for (P₁) with Adaptive Discarding
INPUT: (P₁), a list L_S3 of boxes of box width less than δ together with one ε-efficient solution for each box, A, δ
OUTPUT: …
Select a pair (X̃, x̃) from L_S3 with x̃ ∈ argmin_{x ∈ A} f(x) and delete it from L_S3
8: if UB < f(x̃) − Lδ then
9: Discard X̃
10: else
11: Apply a local solver with starting point x̃ to (P₁) and obtain a locally minimal solution x* ∈ S
12: …

The local search strategy can also be applied if the Box Refinement has not been performed before. In this case, the Adaptive Discarding is not possible because the boxes are not small enough. Nevertheless, a local search from an ε-efficient starting point can still be performed for every box of the solution list L_S2.
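The core loop of Algorithm 5 can be sketched as follows; the stand-in local solver and the data layout are ours, and boxes are processed in order of increasing f at their ε-efficient representative:

```python
def local_search_with_discarding(pairs, f, L, delta, local_solver):
    # pairs: list of (box, x_eps) from L_S3; UB is the incumbent upper bound
    UB, x_min = float("inf"), None
    for box, x in sorted(pairs, key=lambda p: f(p[1])):
        if UB < f(x) - L * delta:
            continue                    # adaptive discarding: box cannot beat UB
        x_loc = local_solver(x)         # local solver started at the representative
        if f(x_loc) < UB:
            UB, x_min = f(x_loc), x_loc
    return x_min, UB

f = lambda x: (x - 1.0) ** 2
solver = lambda x: 1.0 if abs(x - 1.0) < 0.5 else x   # toy stand-in local solver
print(local_search_with_discarding([("X1", 1.2), ("X2", 3.0)], f, 2.0, 0.05, solver))
# (1.0, 0.0): X1 yields the minimizer, X2 is then discarded adaptively
```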

Numerical results
In this section, we examine the performance of the new algorithm MOCPαBB on Example 1 and on several additional test instances. First, all steps of the algorithm, including the post processing, are presented in detail for Example 1. Some of the further test instances are scalable in the number of variables and constraints. The algorithm was implemented in MATLAB R2018a. All experiments were run on a computer with an Intel(R) Core(TM) i5-7400T CPU and 16 GB RAM under Windows 10 Enterprise. For the local search step, we obtain the Lipschitz constant by interval arithmetic using Intlab [26]. For every box X*, a valid Lipschitz constant of f on X* is L = sup(‖∇F(X*)‖), where ∇F is the natural interval extension of the gradient of f. The local search was performed with fmincon from MATLAB using the SQP algorithm and default parameters. As stated in Algorithm 5, for each selected box X̃ the starting point for the SQP algorithm is the ε-efficient solution x̃ ∈ X̃.
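The bound L = sup(‖∇F(X*)‖) can be mimicked without Intlab for simple functions; here a hand-coded interval bound on the gradient of an assumed toy objective f(x) = x₁² + 3x₂ (not one of the paper's test functions):

```python
import math

def lipschitz_bound(box):
    # natural interval extension of grad f = (2*x1, 3) over box = [l1,u1] x [l2,u2]
    (l1, u1), _ = box
    g1_sup = 2.0 * max(abs(l1), abs(u1))   # sup |2*x1| over the box
    g2_sup = 3.0                           # the second component is constant
    return math.hypot(g1_sup, g2_sup)      # bound on the gradient norm

L = lipschitz_bound([(-2.0, 1.0), (0.0, 5.0)])
print(L)   # hypot(4, 3) = 5.0
```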

Illustrative example
First, we illustrate the new algorithm on the motivating Example 1, as given in (1), cf. [17].
We have chosen ε = 10⁻⁵ and δ = 0.0001. In the following figures, the star always marks the best found minimal solution of the single-objective problem or its image w.r.t. the multiobjective counterpart. The figures representing the pre-image space always show box partitions. The dark gray boxes are those in the current solution list, i.e., the boxes which could not yet be discarded. The light gray boxes are the discarded boxes. The different color shades represent the different criteria based on which the boxes have been discarded.
In the image space, we plot some representatives of the image set as light gray points. They were obtained by discretizing the initial box and taking the function values. These points serve only to illustrate the structure of the multiobjective counterpart. The black crosses are the points of L_PNS. In the magnified picture, the first cross with positive second component is q¹, and the first with negative second component is q². The black point above q² and to the right of q¹ is p̃. Figure 4 shows the box partition and the image space, with magnifications, after the execution of the main while-loop of Algorithm 1 without the post processing step. The lightest gray boxes are those discarded by the test based on Lemma 7; the middle gray boxes are discarded because of Lemma 6. Here, it holds that q¹₂ > ε/2. Hence, by Lemma 8, the pre-image of q² is already (ε/2 + μ)-minimal for all μ > 0, and the post processing would perform only the filtering step. For illustration, we nevertheless present the other post processing steps as well. Figure 5a illustrates the box partition of the feasible set after the first post processing step, the filtering (Algorithm 3). The two large boxes on the upper left were discarded during this step. Concerning the minimal solution of Example 1, no improvement was obtained, and q² remains the best image point found so far.
In Figs. 5b and 6a, the second step of the post processing is illustrated. In this step (Algorithm 4), the boxes are usually refined until they are small enough and contain an ε-efficient solution. For this example, the two boxes are already small enough. The star-shaped markers are the ε-efficient solutions and their images.
The last step of the post processing is a local search. In Fig. 6b, the star marks the image of the solution found by the local search step. We can see that the minimal value improved slightly, because the star lies a bit further to the left of q². Table 1 states the number of iterations (# it), the number of boxes in the respective solution list (# sb), and the computational time (t) for each step. After the main loop and the local search step, a feasible solution with a (nearly) minimal value is found; those values are stated in the last column. Here, q²₁ and the minimal value obtained by the local search step are nearly the same: the difference between both values is 1.87 × 10⁻⁶, and the difference between the computed minimal value at the end and the actual minimal value 4 √ 2 is 0 (within the tolerances of MATLAB).

Test instances
In addition to Example 1, we also tested further instances. The instance KSS2Con is as in Example 1, but with one additional constraint x₁ + x₂ ≤ 2. Test instance 5 (KSS2Con) The dimension of the pre-image space is n = 2, and we have r = 2 constraints.
For HimmCon, whose objective function is the Himmelblau function, see [16], we added one or two constraints which exclude some of the known minima.
Test instance 6 (HimmCon) The dimension of the pre-image space is n = 2, and we have r ∈ {1, 2} constraints.
The objective function, minimized only with respect to the box constraints, is known to have four globally minimal solutions x¹, …, x⁴ with the minimal value 0. The first added nonconvex constraint excludes the minimal solution x³. The second nonconvex constraint excludes x².
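For reference, the Himmelblau function and its four well-known global minimizers (standard coordinates from the literature; their correspondence to the labels x¹, …, x⁴ used here is not fixed by this excerpt) can be checked numerically:

```python
def himmelblau(x1, x2):
    # f(x) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2
    return (x1 ** 2 + x2 - 11.0) ** 2 + (x1 + x2 ** 2 - 7.0) ** 2

minimizers = [(3.0, 2.0), (-2.805118, 3.131312),
              (-3.779310, -3.283186), (3.584428, -1.848126)]
for m in minimizers:
    print(round(himmelblau(*m), 6))   # 0.0 at every global minimizer
```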
The instance FF is a typical test problem for biobjective optimization, see [13]. Since the second objective has its image within the interval [0, 1] but now serves as a constraint, we adapted this function slightly.

Test instance 7 (FF)
The dimension of the pre-image space n ∈ N can be chosen arbitrarily. We have r = 1 constraint.
Since the analytical form of the efficient and nondominated sets of the original biobjective optimization problem is known, we can state the minimal solution and the minimal value explicitly. Recall that e is the all-ones vector.
Moreover, we make use of the multiobjective test instance suite DTLZ, in particular of DTLZ2 and DTLZ7 [5]. These instances can be scaled in the number of variables and the number of objectives. The first objective always serves as the objective function, and all other original objectives are turned into constraints. Again, we had to adapt the functions slightly to ensure that feasible solutions exist.

Test instance 8 (DTLZ2)
The dimension of the pre-image space n ∈ N and the number r < n of constraints can be chosen arbitrarily.
Test instance 9 (DTLZ7) The dimension of the pre-image space n ∈ N and the number r < n of constraints can be chosen arbitrarily. The involved functions contain terms of the form (1 + sin(3π x_i)).

Numerical results
For all instances we set ε = δ = 0.01 and a time limit of 6 h (21,600 s). Note that for all instances in Table 2, the main while-loop took at most 3969 s, and most instances were solved much faster. Table 2 shows the overall results for all chosen test instances which were obtained within the time limit. In the first columns, we state the number n of variables and the number r of constraints. Note that for r ≥ 2, the second objective of the multiobjective counterpart is the maximum of all constraint functions. The next block shows the computational time of the main while-loop of Algorithm 1 and the number of needed iterations. Next, the post processing is detailed. PP indicates whether the whole post processing is performed. If PP is 0, only the filtering was applied, because q¹₂ > ε/2 and Lemma 8 render the refinement and local search redundant. Then the computational time and the number of needed iterations (# it_fi) are displayed. If refinement (# it_rf) and local search (# it_ls) are applied, we report all three counts separately. The next columns state the computed nearly minimal values: first q²₁, and then the best found minimal value (best f*). If PP = 0 holds, this is the value of q²₁, since the pre-image of q² is always feasible. In case of PP = 1, the local search was done, and we state the minimal value found during the local search. For comparison, the actual minimal values are given in the last column.

Table 2 Results of MOCPαBB on different test instances
We observe that in some cases PP = 0 holds. The reason is that q¹₂ > ε/2 = 0.005, so that by Lemma 8 the pre-image of q² is already ε-minimal for the single-objective optimization problem; the additional post processing steps, i.e., refinement and local search, do not have to be executed. Even when those steps were performed, we observed for most of the cases that the value q²₁ is already close to the real minimal value and therefore its pre-image is ε-minimal for (P₁). The only instance for which the pre-image of q² is not ε-minimal is DTLZ2 with n = 4, r = 3, see the italic entry of q²₁. In the following, we give some more details on one of the instances, HimmCon. Figures 7 and 8 illustrate the results of MOCPαBB for the cases r ∈ {1, 2}, i.e., with one or two constraints. The meaning of the different shades of the boxes and points is the same as in Sect. 6.1. As the run for the optimization problem with one constraint performed the refinement and local search steps, the minimal solution and minimal value, visualized by the stars in Fig. 7, were found during the local search. In fact, the minimal solution there is x* = (−2.8051, 3.1313)ᵀ with f(x*) ≈ 0. For two constraints, q¹₂ ≈ 0.83 was large enough to skip the refinement and local search steps. Thus, the star in Fig. 8a is the pre-image of q², i.e., x** = (3.5844, −1.8481)ᵀ, and the one in Fig. 8b is q² = (0.0000, −37.5066)ᵀ itself. Both found solutions correspond to minimal solutions of the Himmelblau function without the additional constraints, i.e., x* ≈ x² and x** ≈ x⁴.

Discussion
The above analysis shows that taking a multiobjective perspective on constrained optimization in general, and on αBB methods in particular, leads to a new and promising class of algorithms. We emphasize that our implementation is prototypical and that no state-of-the-art preprocessing was used. We can thus not expect computational times that are competitive with other (commercial) state-of-the-art solvers. Indeed, the above test instances were solved within (milli-)seconds using GAMS models [14] with BARON [33] (instances KSS2Con, HimmCon, FF) and MINOS [23] (instances DTLZ2 and DTLZ7), using the built-in advanced pre-processing routines, which in most cases already returned near-optimal solutions. Moreover, our numerical results clearly show one of the main drawbacks of branch-and-bound type methods in this context: scaling the number of variables increases the computational time. This is to be expected, because the branching happens in the pre-image space, and for more variables a box has to be bisected more often to become small. Despite this shortcoming, bounding and discarding have proven to be highly effective, so that the number of active boxes remains comparably small and storage requirements were never a problem.

Conclusions
Within this paper, we proposed to use a multiobjective counterpart to find feasible solutions for nonconvex single-objective constrained optimization problems. It is well known that finding feasible solutions can be a hard task, and we proposed a new approach to find such points. For that approach, we adapted a global solution method for multiobjective optimization problems such that it concentrates on the region of interest. By using such multiobjective counterparts, one also gets additional information on the trade-off which one would obtain by relaxing the constraints slightly in favour of an improved objective function value.