A general branch-and-bound framework for continuous global multiobjective optimization

Current generalizations of the central ideas of single-objective branch-and-bound to the multiobjective setting do not seem to follow their train of thought all the way. The present paper complements the various suggestions for generalizations of partial lower bounds and of overall upper bounds by general constructions for overall lower bounds from partial lower bounds, and by the corresponding termination criteria and node selection steps. In particular, our branch-and-bound concept employs a new enclosure of the set of nondominated points by a union of boxes. On this occasion we also suggest a new discarding test based on a linearization technique. We provide a convergence proof for our general branch-and-bound framework and illustrate the results with numerical examples.


In this paper we propose a general solution approach for continuous multiobjective optimization problems of the form

$$\begin{aligned} \qquad \qquad \qquad \qquad \qquad \qquad \quad \min \,f(x) \quad \text{ s.t. }\quad g(x)\le 0,\ x\in X\qquad \qquad \qquad \qquad \qquad \qquad (MOP) \end{aligned}$$

with a vector \(f: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^m\) of continuous objective functions, a vector \(g: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^k\) of continuous inequality constraint functions, and an n-dimensional box \(X=[{\underline{x}}, {\overline{x}} ]\) with \({\underline{x}},{\overline{x}}\in \mathbb {R}^n\), \({\underline{x}}\le {\overline{x}}\). We do not impose any convexity assumptions on the entries of f or g so that, in particular, the set of feasible points

$$\begin{aligned} M=\{x \in X \mid g(x) \le 0\} \end{aligned}$$

is not necessarily convex. Nevertheless, the proposed approach will aim at the global solution of MOP, in a sense defined below.

The literature on deterministic algorithms for globally solving problems of the type MOP can be divided into two classes. One class comprises methods which use a parametric scalarization approach and then apply a single objective global optimization technique, mostly branch-and-bound, to the resulting auxiliary problems for each parameter. Examples and a discussion of the drawbacks of such approaches are given in [26].

The present paper falls into the second class of solution approaches which try to adapt the ideas of single objective branch-and-bound methods directly to the multiobjective setting, that is, they do without the intermediate step of some scalarization approach. The basic idea of any branch-and-bound algorithm for the minimization of a single function f over a set M starts by subdividing M iteratively into partial sets \(M'\) and then discarding all sets \(M'\) which cannot contain minimal points. The discarding tests rely on partial lower bounds for f on the sets \(M'\), and from them also an overall lower bound for the globally minimal value of f on M can be computed. In addition, the generation of feasible points during the discarding tests leads to overall upper bounds on the globally minimal value. A standard termination criterion for such algorithms is that the difference between the current overall upper and lower bounds drops below some prescribed tolerance. Consequently, the choice of a partial set \(M'\) which is branched into smaller sets in the next iteration is typically governed by the aim to reduce the difference between overall upper and lower bounds. As the partial sets \(M'\) may be interpreted as nodes of the underlying branch-and-bound tree, the latter step is known as node selection.

The contribution of the present paper is motivated by the fact that current generalizations of the branch-and-bound idea to the multiobjective setting do not seem to follow this train of thought all the way. In fact, while several suggestions for generalizations of partial lower bounds and of overall upper bounds are available, we are not aware of general constructions for overall lower bounds from partial lower bounds, and of the corresponding termination criteria and node selection steps. The aim of the present paper is to close this gap.

In particular, in Sect. 2 we will discuss in more detail that the single objective branch-and-bound idea focuses on constructions in the image space of the optimization problem, whereas some of the known multiobjective branch-and-bound methods invest additional effort into constructions in the decision space. While this may be useful for good approximations of the efficient set (to be defined below), it is unrelated to the basic branch-and-bound idea.

The first multiobjective branch-and-bound approach without an intermediate scalarization step, but with convergence considerations, was given in [12]. It is formulated for biobjective problems only and focuses on simple discarding tests based on monotonicity considerations and interval arithmetic. Its node selection rule chooses boxes one by one until all of them are sufficiently small for termination. Branch-and-bound methods for more than two objectives are proposed in [31] and [11]. The convergence results in [31] rely on growth conditions of image boxes in terms of decision space boxes as they hold in, e.g., interval arithmetic. The approach from [11] considers general lower bounding procedures for discarding tests, however in combination with a nonstandard notion of \(\varepsilon \)-efficient sets (to be defined below). Both in [11] and in [31] the termination criterion and node selection rule resemble the ones from [12]. In [26] significantly more efficient discarding tests for general multiobjective problems on convex feasible sets are suggested, which are based on computing partial lower bounds by convex underestimators of the objective functions instead of interval arithmetic. Their termination criterion is based on a bound on the possible improvement of the lower bounds over the upper bounds.

A different algorithm for box constrained biobjective problems is presented in [28, 37]. In combination with iterative trisections of the feasible set it makes use of the Lipschitz property of the objective functions to compute partial lower bounds via Lipschitz underestimators of the objective functions. An overall lower bound is constructed from these partial lower bounds, and they are compared to some simple overall upper bound in the node selection rule. However, the generalization of this approach to more objective functions and to more general feasible sets does not seem to be straightforward.

The remainder of this paper is structured as follows. Section 2 reviews some preliminaries from single objective branch-and-bound methods and from multiobjective optimization. Section 3 introduces enclosures of the nondominated set of MOP as well as the computation of their widths as the central tools of our approach. Based on the concept of local upper bounds, an explicit choice for an overall upper bounding set within such an enclosure is introduced in Sect. 4. The construction of corresponding overall lower bounding sets is based on the partial lower bounding sets used for discarding tests. These are discussed in Sect. 5 along with three explicit discarding techniques, where the construction of the corresponding partial lower bounding sets is based on singletons, convex underestimators and a relaxation-linearization technique, respectively, the latter being novel. Section 6 constructs corresponding overall lower bounding sets from such partial lower bounding sets, before Sect. 7 explicitly states a natural termination criterion, a related node selection rule, and the resulting multiobjective branch-and-bound framework in Algorithm 1. Section 8 provides convergence results for this algorithm, and Sect. 9 complements them with a proof of concept by some numerical illustrations. Section 10 concludes the article with final remarks.


To motivate the concrete branch-and-bound steps in the case of multiobjective optimization, let us first describe the framework for single objective optimization problems in some more detail.

Overview of single objective branch-and-bound

In single objective optimization of f over M a globally minimal point is some \(x_{min}\in M\) such that no \(x\in M\) satisfies \(f(x)<f(x_{min})=v\), where v denotes the globally minimal value. The discarding tests for subsets \(M'\) of M are performed by comparing efficiently computable partial lower bounds \(\ell b'\) of f on \(M'\) with the currently best known overall upper bound ub on the globally minimal value of f on M. The value \(ub=f(x_{ub})\) results from the evaluation of f at the currently best known feasible point \(x_{ub}\in M\). In fact, any set \(M'\) with \(\ell b'>ub\) may safely be discarded without deleting a globally minimal point of f on M, since all \(x\in M'\) then satisfy \(f(x)\ge \ell b'>ub\ge v\). Of course also any empty set \(M'\) can be discarded. The algorithm keeps a list \(\mathcal{L}\) of subsets \(M'\) which have not yet been discarded.

A branch-and-bound iteration proceeds by choosing and deleting a subset \(M'\) from \(\mathcal{L}\), splitting this subset into two or more parts, and checking if the new subsets may be discarded or if some of them must again be written to \(\mathcal{L}\). If during the latter tests a point \(x_{ub}'\in M\) with \(f(x_{ub}')<f(x_{ub})\) is generated, the currently best known feasible point and the corresponding upper bound are updated to \(x_{ub}'\) and \(f(x_{ub}')\), respectively, and possibly further subsets may be discarded from \(\mathcal{L}\) by this updated information.

These constructions not only provide the upper bound \(ub=f(x_{ub})\ge v\) on the globally minimal value v, but they also yield the overall lower bound \(v\ge \ell b:=\min _{M'\in \mathcal{L}}\ell b'\), since the partial minimal values \(v'\) of f on \(M'\) satisfy \(v=\min _{M'\in \mathcal{L}}v'\) and the lower bounding property \(v'\ge \ell b'\) thus implies \(v\ge \ell b\). Consequently, the minimal value v is sandwiched between \(\ell b\) and ub. If for a given tolerance \(\varepsilon >0\) the branch-and-bound method generates bounds satisfying the termination criterion \(ub-\ell b<\varepsilon \), then, in addition to \(v\le f(x_{ub})\), the point \(x_{ub}\) also satisfies \(f(x_{ub})=ub< \ell b+\varepsilon \le v+\varepsilon \) and is hence \(\varepsilon \)-minimal, that is, no \(x\in M\) satisfies \(f(x)< f(x_{ub})-\varepsilon \).

For any \(\varepsilon >0\) the above termination criterion will be met after finitely many branch-and-bound steps if the method chooses points \(x_{ub}^k\) such that \(ub^k=f(x_{ub}^k)\) converges to v from above, and if for the sets \(M'\in \mathcal{L}^k\) the values \(\ell b^k=\min _{M'\in \mathcal{L}^k}\ell b'\) converge to v from below. The latter is usually guaranteed by employing a node selection rule which in the hypothetical case \(\varepsilon =0\) would select a set \(M'\) with \(\ell b'=\ell b^k\) infinitely often, along with choosing a lower bounding procedure with appropriate convergence properties.

In practical implementations of such a branch-and-bound framework, the feasible set is usually assumed to be given in the form \(M=M(X)=\{x \in X \mid g(x) \le 0\}\) from (1) with a box X. Subdividing M can then be performed by subdividing X into subboxes \(X'\) and putting \(M':=M(X')=\{x\in X'\mid g(x)\le 0\}\). The list \(\mathcal{L}\) then only needs to contain the information on subboxes \(X'\) along with their corresponding partial lower bound \(\ell b'\) for f on \(M(X')\). A common subdivision step consists in choosing some box \(X'\) from \(\mathcal{L}\) and halving it along a longest edge into two subboxes \(X^1\) and \(X^2\), which corresponds to splitting the set \(M'=M(X')\) into \(M^1=M(X^1)\) and \(M^2=M(X^2)\).
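For concreteness, the iteration just described can be sketched in a few lines; this sketch is purely illustrative, not an algorithm from the literature discussed here. It assumes a box-constrained problem with a known Lipschitz constant L, so that \(f({{\,\mathrm{mid}\,}}(X'))-L\,r(X')\), with \(r(X')\) half the diagonal of \(X'\), serves as the partial lower bound \(\ell b'\):

```python
import heapq
import itertools
import math

def branch_and_bound(f, L, box, eps=1e-2):
    """Minimize f over a box via branch-and-bound.  Partial lower bounds are
    the Lipschitz-type bounds lb' = f(mid(X')) - L * r(X'), where r(X') is
    half the diagonal of the subbox X'."""
    def mid(b):
        return [(lo + hi) / 2 for lo, hi in b]

    def radius(b):
        return math.sqrt(sum((hi - lo) ** 2 for lo, hi in b)) / 2

    tie = itertools.count()                      # heap tie-breaker
    x_ub = mid(box)
    ub = f(x_ub)                                 # overall upper bound ub = f(x_ub)
    heap = [(ub - L * radius(box), next(tie), list(box))]
    while heap:
        lb, _, b = heapq.heappop(heap)           # node selection: smallest lb'
        if ub - lb < eps:                        # termination: ub - lb < eps
            return lb, ub, x_ub
        j = max(range(len(b)), key=lambda i: b[i][1] - b[i][0])
        lo, hi = b[j]                            # halve along a longest edge
        for piece in ((lo, (lo + hi) / 2), ((lo + hi) / 2, hi)):
            child = b[:j] + [piece] + b[j + 1:]
            x = mid(child)
            if f(x) < ub:                        # update best known feasible point
                ub, x_ub = f(x), x
            clb = f(x) - L * radius(child)
            if clb <= ub:                        # discard child if clb > ub
                heapq.heappush(heap, (clb, next(tie), child))
    return ub, ub, x_ub                          # list exhausted: ub is optimal
```

For instance, minimizing \((x_1-0.3)^2+(x_2-0.7)^2\) over \([0,1]^2\) with \(L=2\) and \(\varepsilon =10^{-2}\) returns bounds with \(ub-\ell b<\varepsilon \) that sandwich the minimal value 0.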

Upon termination of the branch-and-bound method the list \(\mathcal{L}\) will contain at least one subbox \(X'\) with \(x_{ub}\in M(X')\), so that the union \(\bigcup _{X'\in \mathcal{L}}M(X')\) over all remaining boxes \(X'\) in \(\mathcal{L}\), as well as its superset \(\bigcup _{X'\in \mathcal{L}}X'\), is nonempty. In addition to the information that \(x_{ub}\) is \(\varepsilon \)-minimal, the described construction also implies that both latter sets form coverings of the set of all globally minimal points \(X_{min}\) of f on M.

This approach, however, neither forces the sizes of the remaining boxes \(X'\) or of the intervals \(f(M(X'))\) to become small, nor does it guarantee that the points in the covering \(\bigcup _{X'\in \mathcal{L}}M(X')\) form a subset of the set of \(\varepsilon \)-minimal points \(X_{min}^\varepsilon \). In fact, examples show that the latter property may not even hold if the termination criterion \(ub-\ell b<\varepsilon \) is ignored, but the algorithm terminates after an arbitrarily large number of iterations.

Note that, in particular, the single objective branch-and-bound method does not focus on good approximations of minimal points \(x_{min}\), but mainly on the approximation of the minimal value v. Moreover, the discarding tests and the termination criteria solely rely on bounds on objective function values, that is, the approach mainly works in the image set f(M). Below we shall see how this carries over to the multiobjective setting. Clearly, working exclusively in the image space may be algorithmically beneficial since in many multiobjective applications its dimension is significantly smaller than the decision space dimension.

Efficient and nondominated points

In the presence of more than one objective function there is in general no feasible point \(x_{min}\in M\) which minimizes all objective functions simultaneously, that is, such that \(f(x_{min})\le f(x)\) holds for all \(x\in M\). Instead, one takes the equivalent negative formulation of optimality from the single objective case, namely, for \(x_{min}\in M\) there exists no \(x\in M\) with \(f(x)<f(x_{min})\), and transfers it to vector-valued functions f. In fact, a point \(x_{wE}\in M\) is called weakly efficient for MOP if there exists no \(x\in M\) with \(f(x)<f(x_{wE})\), where the inequality is meant componentwise (cf., e.g., [10, 25]). Since in some situations weakly efficient points allow the improvement of one objective function without any trade-off against other objectives, usually a stronger concept is employed in which the strict inequality \(f(x)<f(x_{min})\) from the single objective case is rewritten as \(f(x)\le f(x_{min})\) and \(f(x)\ne f(x_{min})\). A feasible point \(x_E \in M\) is called efficient for MOP if there exists no \(x \in M\) with \(f(x) \le f(x_E)\) and \(f(x)\ne f(x_E)\). The set of all efficient points \(X_E\) is called efficient set of MOP and forms a subset of the set \(X_{wE}\) of all weakly efficient points of MOP. We remark that under our assumptions the problem MOP possesses efficient points whenever M is nonempty [10]. They usually form a set of infinitely many alternatives from which the decision maker has to choose.

The notion of \(\varepsilon \)-minimality from the single objective case can be generalized to multiobjective problems as well. For \(\varepsilon >0\) a point \(x_E^\varepsilon \in M\) is called \(\varepsilon \)-efficient for MOP (cf. [23] with choice \(\varepsilon e\in \mathbb {R}^m_+\)) if there exists no \(x \in M\) such that \(f(x) \le f(x_E^\varepsilon )-\varepsilon e\) and \(f(x)\ne f(x_E^\varepsilon )-\varepsilon e\) hold, where e stands for the all ones vector. We denote the set of all \(\varepsilon \)-efficient points by \(X_E^\varepsilon \). It is not hard to see that the chain of inclusions \(X_E\subseteq X_{wE}\subseteq X_E^\varepsilon \) holds for any \(\varepsilon >0\) so that, in particular, under our assumptions all three sets are nonempty.

Whereas efficiency, weak efficiency and \(\varepsilon \)-efficiency are notions in the decision space \(\mathbb {R}^n\), as mentioned above the branch-and-bound idea focuses on constructions in the image space \(\mathbb {R}^m\). These are covered by the following concepts. For points \(y^1,y^2\in \mathbb {R}^m\) we say that \(y^1\) dominates \(y^2\) if \(y^1\le y^2\) and \(y^1\ne y^2\) holds. In this terminology a point \(x_E\in M\) is efficient if and only if \(f(x_E)\) is not dominated by any f(x) with \(x\in M\). Hence, the set \(Y_N=f(X_E)\) of points \(y_N\in f(M)\) which are not dominated by any \(y\in f(M)\) is called the nondominated set (also known as Pareto set) of MOP. The nondominated set \(Y_N\) plays the role of the minimal value v from the single objective case.

Analogously, the weakly nondominated set \(Y_{wN}=f(X_{wE})\) of MOP consists of the points \(y_{wN}\in f(M)\) such that no \(y\in f(M)\) satisfies \(y<y_{wN}\), that is, such that no element of f(M) strictly dominates \(y_{wN}\), and the \(\varepsilon \)-nondominated set \(Y_N^\varepsilon =f(X_E^\varepsilon )\) of MOP consists of the points \(y_N^\varepsilon \in f(M)\) such that no element of f(M) dominates \(y_N^\varepsilon -\varepsilon e\). In the single objective case \(Y_N^\varepsilon \) corresponds to the (rarely discussed) set \([v,v+\varepsilon ]\cap f(M)\) of \(\varepsilon \)-minimal values, whereas \(Y_{wN}\) behaves like \(Y_N\) and collapses to v. In view of our above remarks, the three sets satisfy \(Y_N\subseteq Y_{wN}\subseteq Y_N^\varepsilon \) for any \(\varepsilon >0\), and they are nonempty for \(M\ne \emptyset \).
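For finite image sets these dominance notions can be checked directly. The following brute-force sketch (purely illustrative, with image points represented as tuples) mirrors the definitions of \(Y_N\) and \(Y_N^\varepsilon \):

```python
def dominates(y1, y2):
    """y1 dominates y2: y1 <= y2 componentwise and y1 != y2."""
    return all(a <= b for a, b in zip(y1, y2)) and tuple(y1) != tuple(y2)

def nondominated(points):
    """Nondominated subset of a finite image set (brute force)."""
    return [y for y in points if not any(dominates(z, y) for z in points)]

def eps_nondominated(points, eps):
    """y is eps-nondominated if no element of points dominates y - eps*e."""
    return [y for y in points
            if not any(dominates(z, tuple(yi - eps for yi in y))
                       for z in points)]
```

On the set \(\{(1,3),(2,2),(3,1),(2,3),(3,3)\}\), for example, exactly the first three points are nondominated, while a larger \(\varepsilon \) enlarges the \(\varepsilon \)-nondominated subset.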

Enclosing the nondominated set

The first aim of our multiobjective generalization of the branch-and-bound framework is to sandwich the nondominated set \(Y_N\) of MOP in some sense between an overall lower bounding set LB and an overall upper bounding set UB, where LB is constructed from partial lower bounding sets. In Sect. 7 this will lead to a termination criterion and a node selection rule.

Since in set notation the single objective sandwiching condition \(\ell b\le v\le ub\) may be rewritten as \(\{v\}\subseteq (\ell b+\mathbb {R}_+)\cap (ub-\mathbb {R}_+)\) (where, e.g., the expression \(\ell b+\mathbb {R}^m_+\) is shorthand for the Minkowski sum \(\{\ell b\}+\mathbb {R}^m_+\)), let us generalize it to the requirement

$$\begin{aligned} Y_N\subseteq (LB+\mathbb {R}^m_+)\cap (UB-\mathbb {R}^m_+) \end{aligned}$$

with nonempty and compact sets \(LB,UB\subseteq \mathbb {R}^m\). This is in line with the sandwiching approaches reviewed in [30] which, however, have not been combined with branch-and-bound ideas.

The enclosing interval \([\ell b,ub]\) for v from the single objective case thus generalizes to the enclosure

$$\begin{aligned} E(LB,UB):=(LB+\mathbb {R}^m_+)\cap (UB-\mathbb {R}^m_+) \end{aligned}$$

for \(Y_N\). Since the single objective termination criterion \(ub-\ell b<\varepsilon \) may be interpreted as an upper bound on the length of the enclosing interval \([\ell b, ub]\), a natural multiobjective termination criterion is to upper bound some width \(w(LB,UB)\) of the enclosure \(E(LB,UB)\) by a tolerance \(\varepsilon \). In view of the special structure of the enclosure we suggest to measure its width with respect to the direction of the all ones vector e, that is, we define \(w(LB,UB)\) as the supremum of the problem

$$\begin{aligned} \qquad \quad \max _{y,t}\,\Vert (y+te)-y\Vert _2 /\sqrt{m}\quad \text{ s.t. }\quad t\ge 0,\ y,y+te\in E(LB,UB).\qquad ({{\widetilde{W}}}(LB,UB)) \end{aligned}$$

Thanks to the normalization constant \(\sqrt{m}\) the objective function of \(\widetilde{W}(LB,UB)\) equals t. Imposing the nonnegativity constraint on t is possible due to symmetry. Lemma 3.3 will provide a sufficient condition for the solvability of \(\widetilde{W}(LB,UB)\) which is, however, only needed later. The following lemma first justifies the choice of this width measure.

Lemma 3.1

For sets \(LB,UB\subseteq \mathbb {R}^m\) with \(Y_N\subseteq LB+\mathbb {R}^m_+\) and some \(\varepsilon >0\) let \(w(LB,UB)<\varepsilon \). Then the relation

$$\begin{aligned} E(LB,UB)\cap f(M)\subseteq Y_N^\varepsilon \end{aligned}$$

holds.

Proof

For any \({\bar{y}}\in E(LB,UB)\cap f(M)\) assume that there exists some \(y\in f(M)\) with \(y\le {\bar{y}}-\varepsilon e\) and \(y\ne {\bar{y}}-\varepsilon e\). Since our assumptions imply external stability [32, Theorem 3.2.9], the point y either lies in \(Y_N\) or is dominated by some \(y_N\in Y_N\). This implies \(y\in Y_N+\mathbb {R}^m_+\subseteq LB+\mathbb {R}^m_+\), and together with \(y\le {\bar{y}}-\varepsilon e\le {\bar{y}}\in UB-\mathbb {R}^m_+\) it shows \(y\in E(LB,UB)\). Moreover, the point \(y+\varepsilon e\) clearly lies in \(LB+\mathbb {R}^m_+\), and with \(y+\varepsilon e\le {\bar{y}}\in UB-\mathbb {R}^m_+\) we also obtain \(y+\varepsilon e\in E(LB,UB)\). Consequently \((y,\varepsilon )\) is a feasible point of \(\widetilde{W}(LB,UB)\), resulting in the contradiction \(w(LB,UB)\ge \Vert \varepsilon e\Vert _2/\sqrt{m}=\varepsilon \). \(\square \)

Lemma 3.1 states that for \(w(LB,UB)<\varepsilon \) all attainable points in the enclosure \(E(LB,UB)\) are \(\varepsilon \)-nondominated. In the single objective case (\(m=1\)) this statement collapses to the simple observation that for \(ub-\ell b<\varepsilon \) all values in the interval \([\ell b,ub]\cap f(M)\) are \(\varepsilon \)-minimal, that is, they lie in \([v,v+\varepsilon ]\cap f(M)\). Recall that, in combination with discarding tests based on ub, this does not entail that the elements of \(\bigcup _{X'\in \mathcal{L}}M(X')\) are \(\varepsilon \)-minimal points. Analogously one may not expect that the multiobjective discarding tests based on UB, as discussed in Sect. 5, will yield \(\bigcup _{X'\in \mathcal{L}}M(X')\subseteq X_E^\varepsilon \) for \(w(LB,UB)<\varepsilon \).

As the condition \(y\in (LB+\mathbb {R}^m_+)\cap (UB-\mathbb {R}^m_+)\) is equivalent to the existence of some \(\ell b\in LB\) and \(ub\in UB\) with \(\ell b\le y\le ub\), the enclosure E(LBUB) can be written as the union of the nonempty boxes which can be constructed with lower and upper bound vectors from LB and UB, respectively,

$$\begin{aligned} E(LB,UB)\ = \bigcup _{\begin{array}{c} (\ell b,ub)\in LB\times UB\\ \ell b\le ub \end{array}}[\ell b, ub]. \end{aligned}$$

This shows, in particular, that appropriate choices of LB and UB might lead to a disconnected enclosure \(E(LB,UB)\), thus correctly capturing the topological structure of \(Y_N\). Moreover, the following result relates the computation of \(w(LB,UB)\) to the description (3) of \(E(LB,UB)\) and implies that \(w(LB,UB)\) coincides with the largest value \(s(\ell b, ub)\) among the above boxes \([\ell b,ub]\), where

$$\begin{aligned} s(\ell b,ub):=\min _{j=1,\ldots ,m}(ub_j-\ell b_j) \end{aligned}$$

denotes the length of a shortest edge of \([\ell b,ub]\).
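For finite sets LB and UB this largest value can be computed by direct enumeration of the boxes in (3). The following sketch (illustrative only) returns \(-\infty \) when no box is nonempty:

```python
def shortest_edge(lb, ub):
    """s(lb, ub): length of a shortest edge of the box [lb, ub]."""
    return min(u - l for l, u in zip(lb, ub))

def width(LB, UB):
    """Largest s(lb, ub) over all pairs (lb, ub) in LB x UB with lb <= ub,
    which for finite LB, UB equals the width w(LB, UB) of the enclosure."""
    values = [shortest_edge(lb, ub)
              for lb in LB for ub in UB
              if all(l <= u for l, u in zip(lb, ub))]
    return max(values, default=float("-inf"))   # -inf: empty enclosure
```

For example, with \(LB=\{(0,0)\}\) and \(UB=\{(1,2),(3,0.5)\}\) the two boxes have shortest edges 1 and 0.5, so the width is 1.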

Lemma 3.2

For any sets \(LB,UB\subseteq \mathbb {R}^m\) the width \(w(LB,UB)\) of \(E(LB,UB)\) coincides with the supremum of

$$\begin{aligned} \qquad \qquad \qquad \qquad \qquad \quad \max _{\ell b,ub}\,s(\ell b,ub)\quad \text{ s.t. }\quad (\ell b,ub)\in LB\times UB,\ \ell b\le ub.\qquad (W(LB,UB)) \end{aligned}$$


Proof

Since the objective function of \(\widetilde{W}(LB,UB)\) satisfies \(\Vert (y+te)-y\Vert _2/\sqrt{m}=t\), in particular it does not depend on y. Hence the supremum of \(\widetilde{W}(LB,UB)\) coincides with the supremum of

$$\begin{aligned} \max _t\,t\quad \text{ s.t. }\quad t\ge 0,\ \exists \,y\in \mathbb {R}^m\!:\ y,y+te\in (LB+\mathbb {R}^m_+)\cap (UB-\mathbb {R}^m_+).\qquad ({{\overline{W}}}(LB,UB)) \end{aligned}$$

As the condition \(y\in LB+\mathbb {R}^m_+\) implies \(y+te\in LB+\mathbb {R}^m_+\) and, analogously, \(y+te\in UB-\mathbb {R}^m_+\) implies \(y\in UB-\mathbb {R}^m_+\), the latter existence constraint simplifies to

$$\begin{aligned} \exists \,y\in \mathbb {R}^m\!:\ y\in LB+\mathbb {R}^m_+,\ y+te\in UB-\mathbb {R}^m_+ \end{aligned}$$

or, equivalently,

$$\begin{aligned} \exists \,(\ell b,ub)\in LB\times UB:\ te\le ub-\ell b. \end{aligned}$$

The supremum of \(\widetilde{W}(LB,UB)\) thus coincides with the supremum of

$$\begin{aligned} \max _{\ell b, ub,t}\,t\quad \text{ s.t. }\quad 0\le te\le ub-\ell b,\ (\ell b,ub)\in LB\times UB \end{aligned}$$

which, after explicitly computing the upper bound \(s(\ell b,ub)\) for t, yields the assertion. \(\square \)

The next result immediately follows from Lemma 3.2 in view of the Weierstrass theorem.

Lemma 3.3

Let the sets \(LB,UB\subseteq \mathbb {R}^m\) be nonempty and compact, and let \(E(LB,UB)\) be nonempty. Then the problem \(W(LB,UB)\) is solvable. In particular, the width \(w(LB,UB)\) is a real number, and there exists some box \([\ell b^\star ,ub^\star ]\) with \((\ell b^\star ,ub^\star )\in LB\times UB\), \(\ell b^\star \le ub^\star \) and \(w(LB,UB)=s(\ell b^\star , ub^\star )\).

Note that under our assumptions the enclosure \(E(LB,UB)\) is nonempty whenever LB and UB satisfy (2), as then \(\emptyset \ne Y_N\subseteq E(LB,UB)\) holds.

The combination of Lemmas 3.1, 3.2 and 3.3 yields the following explicit sufficient condition for \(\varepsilon \)-nondominance of the attainable points in the enclosure \(E(LB,UB)\).

Theorem 3.4

For nonempty and compact sets \(LB,UB\subseteq \mathbb {R}^m\) with (2) and some \(\varepsilon >0\) let

$$\begin{aligned} \max \left\{ s(\ell b,ub)|\ (\ell b,ub)\in LB\times UB,\ \ell b\le ub\right\} <\varepsilon . \end{aligned}$$

Then the relation

$$\begin{aligned} E(LB,UB)\cap f(M)\subseteq Y_N^\varepsilon \end{aligned}$$

holds.
For the application of Theorem 3.4 we shall subsequently construct sequences of nonempty sets \((LB^k),(UB^k)\subseteq \mathbb {R}^m\) with (2) for all \(k\in \mathbb {N}\) as well as \(\lim _k w(LB^k,UB^k)=0\). Then, in analogy to the single objective case, (5) may be employed as the termination criterion of a multiobjective branch-and-bound method for any prescribed tolerance \(\varepsilon >0\).

Upper bounding the nondominated set

Regarding the generalization of the concept of upper bounds \(ub=f(x_{ub})\) for v with \(x_{ub}\in M\) from the single objective case, observe that the notion of a currently best feasible point does not make sense in the multiobjective setting.

The provisional nondominated set

Instead, different feasible points \(x_{ub}\in M\) may provide good approximations for different efficient points. Consequently, in the course of the algorithm one keeps a subset \(\mathcal{X}_{ub}\) of the finitely many feasible points generated so far or, as we wish to work in the image space, we rather keep the set \(\mathcal{F}:=f(\mathcal{X}_{ub})\), whose elements are to approximate different nondominated points of MOP.

It would be useless, however, to store a point \(f(x^1)\) with \(x^1\in \mathcal{X}_{ub}\) in \(\mathcal{F}\) which is dominated by \(f(x^2)\) for some \(x^2\in \mathcal{X}_{ub}\), as in this case the statement that \(x^2\) is a better feasible point than \(x^1\) does make sense. Hence, whenever some new point \(x_{ub}\in M\) is generated in the course of the algorithm, its image \(f(x_{ub})\) is only inserted into \(\mathcal{F}\) if \(f(x_{ub})\) is not dominated by any element from \(\mathcal{F}\). Moreover, all elements of \(\mathcal{F}\) which are dominated by \(f(x_{ub})\) are deleted from \(\mathcal{F}\). In the following we will refer to this procedure as updating \(\mathcal{F}\) with respect to \(f(x_{ub})\).
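A minimal sketch of this updating procedure (illustrative only, with image points as tuples) reads:

```python
def update_provisional(F, y):
    """Update the provisional nondominated set F with a new image point y:
    y is inserted only if no element of F dominates it, and all elements of
    F dominated by y are removed, so that F stays finite and stable."""
    def dominates(a, b):
        # a dominates b: a <= b componentwise and a != b
        return all(ai <= bi for ai, bi in zip(a, b)) and a != b

    if y in F or any(dominates(q, y) for q in F):
        return F                                  # y brings no new information
    return [q for q in F if not dominates(y, q)] + [y]
```

For instance, updating an empty set successively with \((1,3),(3,1),(2,2)\) keeps all three points, while a subsequent update with \((0,0)\) removes them all, since \((0,0)\) dominates each of them.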

The source of the points \(x_{ub}\) will be elements of subboxes \(X'\) of X (e.g., their midpoints \({{\,\mathrm{mid}\,}}(X')\)) which are chosen for the discarding tests described below. If such an \(x_{ub}\in X'\) is feasible for MOP the list \(\mathcal{F}\) will be updated with respect to \(f(x_{ub})\).

As a consequence of this construction, no element of \(\mathcal{F}\) dominates any other element of \(\mathcal{F}\). Any subset of \(\mathbb {R}^m\) with the latter property is called stable, so that \(\mathcal{F}\) forms a finite and stable subset of the image set f(M) of MOP. Observe that also the nondominated set \(Y_N\) of MOP is a (not necessarily finite) stable subset of f(M). This motivates to call \(\mathcal{F}\) a provisional nondominated set [12].

From the single objective construction one may expect that the choice \(UB=\mathcal{F}\) for the upper bounding set in (2) is possible. However, while the inclusion \(Y_N\subseteq \mathcal{F}\cup (\mathcal{F}+\mathbb {R}^m_+)^c\) does hold, simple examples show that the required inclusion \(Y_N\subseteq \mathcal{F}-\mathbb {R}^m_+\) may fail. Fortunately the following concept, already used in [26], allows us to construct an upper bounding set UB from the information in \(\mathcal{F}\).

Local upper bounds

The subsequent construction assumes the existence of a sufficiently large box \(Z=[{\underline{z}},{{\overline{z}}}]\) with \(f(M) \subseteq {{\,\mathrm{int}\,}}(Z)\) (where \({{\,\mathrm{int}\,}}\) denotes the topological interior). In the present setting of MOP the set f(M) is contained in the compact set f(X), so that the existence of such a box Z is no restriction. The explicit construction of some suitable Z is possible, for example, by interval arithmetic, which we shall discuss in more detail in Sect. 5.3.

Given a finite and stable set \(\mathcal{F}\subseteq f(M)\), the elements of the nondominated set \(Y_N\) can, in particular, not be dominated by any \(q\in \mathcal{F}\). This motivates to define the so-called search region

$$\begin{aligned} S({\mathcal {F}})=\{z \in {{\,\mathrm{int}\,}}(Z)\mid \forall q \in {\mathcal {F}}: q \not \le z\}, \end{aligned}$$

that is, the set of all points in \({{\,\mathrm{int}\,}}(Z)\setminus \mathcal{F}\) which are not dominated by any point from \({\mathcal {F}}\). While this clearly implies

$$\begin{aligned} Y_N\subseteq \mathcal{F}\cup S(\mathcal{F}), \end{aligned}$$

now we need some algorithmically useful description of \(S(\mathcal{F})\).

To motivate this description, first note that the set complement of \(S(\mathcal{F})\) relative to \({{\,\mathrm{int}\,}}(Z)\),

$$\begin{aligned} S(\mathcal{F})^c=\bigcup _{q\in \mathcal{F}}\{z\in {{\,\mathrm{int}\,}}(Z)\mid q\le z\} \end{aligned}$$

is the union of finitely many closed sets \(q+\mathbb {R}^m_+\), \(q\in \mathcal{F}\), in \({{\,\mathrm{int}\,}}(Z)\). In [22] it is shown that the search region \(S(\mathcal{F})\) itself is the union of finitely many open sets \((p-{{\,\mathrm{int}\,}}(\mathbb {R}^m_{+}))\cap {{\,\mathrm{int}\,}}(Z)\) where each p can be interpreted as a local upper bound, and each \(p-{{\,\mathrm{int}\,}}(\mathbb {R}^m_{+})\) covers a part of the search region. The set of these local upper bounds induced by \(\mathcal{F}\) is actually a uniquely determined, nonempty and finite set \({{\,\mathrm{lub}\,}}(\mathcal{F})\) with \(S(\mathcal{F})=\bigcup _{p\in {{\,\mathrm{lub}\,}}(\mathcal{F})}\{z\in {{\,\mathrm{int}\,}}(Z)\mid z<p\}\) (cf. Fig. 1).

Fig. 1

Local upper bounds. The circles mark \(\mathcal{F}\), the bullets mark \({{\,\mathrm{lub}\,}}(\mathcal{F})\)

The following formal definition of local upper bounds is given in [22], along with an extensive discussion and graphical illustrations.

Definition 4.1

Let \(\mathcal{F}\) be a finite and stable subset of f(M). A set \({{\,\mathrm{lub}\,}}(\mathcal{F}) \subseteq Z\) is called local upper bound set with respect to \(\mathcal{F}\) if

  1. (i)

    \(\forall z \in S({\mathcal {F}}) : \, \exists p \in {{\,\mathrm{lub}\,}}(\mathcal{F}): z <p\)

  2. (ii)

    \(\forall z \in ({{\,\mathrm{int}\,}}(Z))\setminus S({\mathcal {F}}): \, \forall p \in {{\,\mathrm{lub}\,}}(\mathcal{F}): z\nless p\)

  3. (iii)

    \( \forall p_1,p_2 \in {{\,\mathrm{lub}\,}}(\mathcal{F}): \, p_1 \nleq p_2\) or \(p_1 =p_2\)

Since at the start of a branch-and-bound method no feasible points \(x\in M\) may be known, the set \(\mathcal{F}\) then is empty. In this case the search region \(S(\emptyset )\) coincides with \({{\,\mathrm{int}\,}}(Z)\), and Definition 4.1 leads to \({{\,\mathrm{lub}\,}}(\emptyset )=\{{\overline{z}}\}\).
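In the spirit of the update algorithms from [8, 22], the following simplified sketch updates a local upper bound set when \(\mathcal{F}\) gains a new point q. It is a naive variant with a final componentwise-maximality filter and may treat degenerate ties differently from the exact algorithms:

```python
def update_lub(lub, q):
    """Update a local upper bound set after the stable set F gains a point q:
    every p with q < p (componentwise) is replaced by m candidates obtained
    by clipping one coordinate of p down to the corresponding entry of q;
    afterwards only the componentwise-maximal candidates are kept."""
    m = len(q)
    split = [p for p in lub if all(qj < pj for qj, pj in zip(q, p))]
    cand = [p for p in lub if p not in split]
    for p in split:
        for j in range(m):
            cand.append(p[:j] + (q[j],) + p[j + 1:])
    # keep only maximal candidates (none componentwise below another)
    return sorted({c for c in cand
                   if not any(d != c and all(ci <= di for ci, di in zip(c, d))
                              for d in cand)})
```

Starting from \({{\,\mathrm{lub}\,}}(\emptyset )=\{{\overline{z}}\}\), successive updates reproduce the staircase pattern of Fig. 1: with \(Z=[0,10]^2\) and the points \((2,6)\) and \((6,2)\), the local upper bounds become \((2,10),(6,6),(10,2)\).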

By [26, Lemma 3.4], see also the forthcoming Lemma 5.1, it holds \(\mathcal{F}\subseteq {{\,\mathrm{lub}\,}}(\mathcal{F})-\mathbb {R}^m_+\), and thus we obtain \(Y_N\subseteq \mathcal{F}\cup S(\mathcal{F})\subseteq {{\,\mathrm{lub}\,}}(\mathcal{F})-\mathbb {R}^m_+\). Hence, the set \(UB:={{\,\mathrm{lub}\,}}(\mathcal{F})\) is an upper bounding set in the sense of condition (2), and in the resulting sandwiching condition

$$\begin{aligned} Y_N\subseteq (LB+\mathbb {R}^m_+)\cap ({{\,\mathrm{lub}\,}}(\mathcal{F})-\mathbb {R}^m_+) \end{aligned}$$

it remains to find a suitable lower bounding set LB. We remark that, in the course of a branch-and-bound method, algorithms from [8, 22] may be employed to efficiently calculate and update the local upper bound sets \({{\,\mathrm{lub}\,}}(\mathcal{F})\) with respect to the appearing provisional nondominated sets \(\mathcal{F}\).
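For the biobjective case \(m=2\) the local upper bound set of a stable finite \(\mathcal{F}\) has an explicit staircase structure, which the following minimal Python sketch exploits. The function name and the sample data are ours; for general m the incremental update algorithms from [8, 22] should be used instead.

```python
# Local upper bounds for a finite stable set F in the biobjective case (m = 2),
# relative to an upper box corner z_hi. This direct construction works only
# for m = 2; higher dimensions require the update algorithms from the literature.

def local_upper_bounds_2d(F, z_hi):
    """Return lub(F) for a stable (pairwise nondominated) finite F, m = 2."""
    if not F:                       # no feasible points known yet:
        return [tuple(z_hi)]        # lub(emptyset) = {z_hi}
    Fs = sorted(F)                  # stability: f1 increasing <=> f2 decreasing
    lubs = [(Fs[0][0], z_hi[1])]    # leftmost local upper bound
    for q, q_next in zip(Fs, Fs[1:]):
        lubs.append((q_next[0], q[1]))
    lubs.append((z_hi[0], Fs[-1][1]))   # rightmost local upper bound
    return lubs

F = [(1.0, 4.0), (2.0, 2.5), (3.5, 1.0)]
z_hi = (5.0, 5.0)
print(local_upper_bounds_2d(F, z_hi))
# → [(1.0, 5.0), (2.0, 4.0), (3.5, 2.5), (5.0, 1.0)]
```

Note that in 2D one always obtains \(|{{\,\mathrm{lub}\,}}(\mathcal{F})|=|\mathcal{F}|+1\), as in Fig. 1.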

Lower bounding sets for partial upper image sets

Before we turn to the construction of an overall lower bounding set LB which satisfies the necessary relation \(Y_N\subseteq LB+\mathbb {R}^m_+\) from the sandwiching condition (2), recall that in the single objective case the overall lower bound \(\ell b\) for the minimal value v is defined as \(\ell b=\min _{X'\in \mathcal{L}}\ell b'\) with the partial lower bounds \(\ell b'\) for f on the subsets \(M(X')\) of M(X). In addition to their role in the definition of the overall lower bound \(\ell b\), the values \(\ell b'\) are also needed to discard sets \(M(X')\) with \(\ell b'>ub\), since due to \(\min _{x\in M(X')}f(x)=v'\ge \ell b'>ub\ge v\) those sets cannot contain minimal points. To develop a natural generalization of the partial lower bound \(\ell b'\) to a partial lower bounding set \(LB'\) in the multiobjective setting, let us first transfer the latter discarding argument.

Discarding by partial upper image sets and local upper bounds

Before lower bounding sets are introduced at all, we need a generalization of the fact that a set \(M(X')\) with \(\min _{x\in M(X')}f(x)>ub\) cannot contain minimal points. In a set formulation the latter discarding condition can be rewritten as \(\{ub\}\cap (f(M(X'))+\mathbb {R}_+)=\emptyset \) with the partial image set \(f(M(X'))\) of f on \(M(X')\) and its corresponding partial upper image set \(f(M(X'))+\mathbb {R}_+\). From the discussion on upper bounding sets in Sect. 4.2 one may expect that the appropriate generalization to the multiobjective setting claims that the condition

$$\begin{aligned} {{\,\mathrm{lub}\,}}(\mathcal{F})\cap \left( f(M(X'))+\mathbb {R}^m_+\right) =\emptyset \end{aligned}$$

rules out the existence of efficient points in \(M(X')\), so that \(X'\) can be discarded from \(\mathcal{L}\). This is indeed the case. For the proof of the correctness of this discarding condition we need the following two lemmata.

Lemma 5.1

[21, 26] Let \({\mathcal {F}}\) be a finite and stable subset of f(M). Then for every \(q\in {\mathcal {F}}\) and for every \(j \in \{1,\ldots ,m\}\) there is a point \(p \in {{\,\mathrm{lub}\,}}(\mathcal{F})\) with \(q_j=p_j\) and \(q_k<p_k\) for all \(k \in \{1,\ldots ,m\}\setminus \{j\}\).

For the present paper the main consequence of Lemma 5.1 is the relation \(\mathcal{F}\subseteq {{\,\mathrm{lub}\,}}(\mathcal{F})-\mathbb {R}^m_+\).

Lemma 5.2

For any subbox \(X'\) the discarding condition (8) implies

$$\begin{aligned} \mathcal{F}\cap \left( f(M(X'))+\mathbb {R}^m_+\right) =\emptyset . \end{aligned}$$


Proof

Assume that for some subbox \(X'\) the condition (9) is violated. Then there exist some \(q\in \mathcal{F}\) and some \(y\in f(M(X'))\) with \(y\le q\). Moreover, by Lemma 5.1 there exists a local upper bound \(p \in {{\,\mathrm{lub}\,}}(\mathcal{F})\) with \(q\le p\). This yields \(y\le q\le p\) and, thus, \(p\in f(M(X'))+\mathbb {R}^m_+\), so that \(X'\) violates (8) as well. This shows the assertion. \(\square \)

The main idea of the following result was already presented in [26]. We include its short proof for completeness.

Proposition 5.3

Let \({\mathcal {F}}\) be a finite and stable subset of f(M), and let \(X'\) be a subbox of X. Then (8) implies

$$\begin{aligned} Y_N\cap \left( f(M(X'))+\mathbb {R}^m_+\right) =\emptyset . \end{aligned}$$

In particular, \(f(M(X'))\) cannot contain any nondominated point of MOP so that \(X'\) can be discarded.


Proof

Assume that (8) holds, but some \(y_N\in Y_N\) lies in \(f(M(X'))+\mathbb {R}^m_+\). Then, on the one hand, since \(y_N\) is not dominated by any point in f(M(X)), it must lie in \(f(M(X'))\). On the other hand, by (6) the point \(y_N\) either belongs to \(\mathcal{F}\) or to the search region \(S(\mathcal{F})\). The first case implies \(y_N\in \mathcal{F}\cap f(M(X'))\) which is impossible by Lemma 5.2. In the second case, by Definition 4.1(i) there exists some local upper bound \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\) with \(p>y_N\), which yields \(p\in y_N+\mathbb {R}^m_+\subseteq f(M(X'))+\mathbb {R}^m_+\), in contradiction to (8). This shows (10), of which the second assertion is an immediate consequence. \(\square \)

Observe that Proposition 5.3 correctly covers also the case of an empty set \(M(X')\). Also note that, as in Sect. 4.1, we cannot replace the set \({{\,\mathrm{lub}\,}}(\mathcal{F})\) in (8) by \(\mathcal{F}\), since simple examples show that \(f(M(X'))\) may contain nondominated points if only the weaker condition (9) holds.

Subsequently we shall need an algorithmically tractable version of at least a sufficient condition for (8). As a preparation for this as well as for some later developments, for any \(z\in \mathbb {R}^m\) and a compact set \(A\subseteq \mathbb {R}^m\) let us consider the auxiliary optimization problem

$$\begin{aligned} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \min _t\,t \quad \text{ s.t. }\quad z+te\in A+\mathbb {R}^m_+.\qquad \qquad \qquad \qquad \qquad (D(z,A)) \end{aligned}$$

Its infimum \(\varphi _{A,e}(z)\) is known as the Tammer-Weidner functional [15]. In the present paper we will abbreviate it by \(\varphi _A(z)\), since we shall exclusively use the direction e. In the case \(A=\emptyset \) the feasible set of \(D(z,A)\) is empty, and we follow the usual convention to formally define \(\varphi _\emptyset (z):=+\infty \). The following two lemmata are based on [7, Proposition 1.41] and [16, Section 2.3].

Lemma 5.4

For any nonempty compact set \(A\subseteq \mathbb {R}^m\) and \(z\in \mathbb {R}^m\) the feasible set of \(D(z,A)\) possesses the form \([\varphi ,+\infty )\) with some \(\varphi \in \mathbb {R}\). In particular, \(D(z,A)\) is solvable, with optimal point as well as optimal value \(\varphi \), and \(\varphi \) coincides with \(\varphi _A(z)\).

Lemma 5.5

Let \(A\subseteq \mathbb {R}^m\) be compact. Then \(z\in A+\mathbb {R}^m_+\) holds if and only if the infimum \(\varphi _A(z)\) of \(D(z,A)\) satisfies \(\varphi _A(z)\le 0\).

Note that the latter result correctly covers the case \(A=\emptyset \). The following reformulation of Proposition 5.3 with the help of Lemma 5.5 allows us to check its assumption (8) by solving a finite number of one-dimensional optimization problems. In fact, the continuity of f and g together with the compactness of any subbox \(X'\) imply the compactness of the partial image set \(f(M(X'))\), so that we arrive at the following form of the discarding condition.
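For a finite set A (for example the singleton and other finite bounding sets appearing later), the problem D(z,A) can even be evaluated in closed form: \(z+te\in A+\mathbb {R}^m_+\) holds if and only if \(t\ge \max _j(a_j-z_j)\) for some \(a\in A\), so that \(\varphi _A(z)=\min _{a\in A}\max _j(a_j-z_j)\). A minimal Python sketch (the function name and data are ours):

```python
import math

def phi(z, A):
    """Tammer-Weidner value phi_A(z) of D(z, A) for a FINITE set A and
    direction e = (1, ..., 1): the smallest t with z + t*e in A + R^m_+.
    Returns +inf for empty A, following the convention in the text."""
    if not A:
        return math.inf
    # z + t*e >= a componentwise  <=>  t >= max_j (a_j - z_j)
    return min(max(a_j - z_j for a_j, z_j in zip(a, z)) for a in A)

A = [(1.0, 3.0), (2.0, 1.0)]
print(phi((2.0, 2.0), A))   # value <= 0: (2,2) lies in A + R^2_+ (Lemma 5.5)
print(phi((0.0, 0.0), A))   # value > 0:  (0,0) lies outside A + R^2_+
print(phi((5.0, 5.0), []))  # +inf: empty feasible set
```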

Proposition 5.6

Let \({\mathcal {F}}\) be a finite and stable subset of f(M), and let \(X'\) be a subbox of X. If for each \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\) the infimum \(\varphi _{f(M(X'))}(p)\) of

$$\begin{aligned} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \min _t\,t \quad \text{ s.t. }\quad p+te\in f(M(X'))+\mathbb {R}^m_+\qquad \qquad \quad (D(p,f(M(X')))) \end{aligned}$$

satisfies \(\varphi _{f(M(X'))}(p)>0\), then \(X'\) can be discarded.

Discarding tests based on Proposition 5.6 may obviously stop computing the infima \(\varphi _{f(M(X'))}(p)\), \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\), as soon as the result \(\varphi _{f(M(X'))}(p)\le 0\) occurs for some \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\), because then the set \(X'\) cannot be discarded. While in general all values \(\varphi _{f(M(X'))}(p)\), \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\), have to be checked for positivity to guarantee that \(X'\) may be discarded, in the case \(M(X')=\emptyset \) it is sufficient to stop these computations once \(\varphi _{f(M(X'))}(p)=+\infty \) occurs for the first tested \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\).

We remark that algorithmically the problem \(D(p,f(M(X')))\) should be treated in its equivalent lifted formulation

$$\begin{aligned} \min _{t,x}\,t \quad \text{ s.t. }\quad p+te\ge f(x),\ x\in M(X'). \end{aligned}$$

Discarding by relaxed partial upper image sets and local upper bounds

Unfortunately, for general continuous functions f and g and any \(p\in \mathbb {R}^m\) it may not be algorithmically tractable to determine the globally minimal value of \(D(p,f(M(X')))\) or of its lifted version (11). Instead, sufficient conditions for \(\varphi _{f(M(X'))}(p)>0\) can be obtained by checking the infimum of some tractable relaxation of \(D(p,f(M(X')))\) for positivity.

A natural approach to the construction of such a relaxation first returns to the general condition (8), which we had seen to generalize the requirement \(ub<\min _{x\in M(X')}f(x)\) from the single objective case. There, too, it is in general not algorithmically tractable to determine the value \(\min _{x\in M(X')}f(x)\), so that, instead, for an efficiently computable lower bound \(\ell b'\) of \(\min _{x\in M(X')}f(x)\) one only checks the sufficient condition \(ub<\ell b'\).

In analogy to this, in the multiobjective case we try to find a compact set \(LB'\subseteq \mathbb {R}^m\) with \(f(M(X'))+\mathbb {R}^m_+\subseteq LB'+\mathbb {R}^m_+\) and such that \(LB'+\mathbb {R}^m_+\) possesses, for example, a polyhedral or convex smooth description. Then for any \(p\in \mathbb {R}^m\) the condition \(p\notin f(M(X'))+\mathbb {R}^m_+\) is a consequence of \(p\notin LB'+\mathbb {R}^m_+\). Hence, in a branch-and-bound framework any such set \(LB'\) plays the role of a partial lower bound \(\ell b'\) from the single objective case, which motivates the following definition.

Definition 5.7

Let \(X'\) be a subbox of X. Then any compact set \(LB'\subseteq \mathbb {R}^m\) with \(f(M(X'))+\mathbb {R}^m_+\subseteq LB'+\mathbb {R}^m_+\) is called partial lower bounding set for \(f(M(X'))\).

In fact, Proposition 5.3 immediately implies the following discarding test.

Proposition 5.8

Let \({\mathcal {F}}\) be a finite and stable subset of f(M), let \(X'\) be a subbox of X, and let \(LB'\) be some partial lower bounding set for \(f(M(X'))\). If

$$\begin{aligned} {{\,\mathrm{lub}\,}}(\mathcal{F})\cap \left( LB'+\mathbb {R}^m_+\right) =\emptyset \end{aligned}$$

holds, then \(X'\) can be discarded.

By Lemma 5.5 and due to the assumed tractable structure of \(LB'\), the condition (12) can be checked efficiently via the positivity of the infima \(\varphi _{LB'}(p)\) of

$$\begin{aligned} \qquad \qquad \qquad \quad \qquad \qquad \qquad \quad \min _{t}\,t\quad \text{ s.t. }\quad p+te\in LB'+\mathbb {R}^m_+\qquad \qquad \qquad \qquad \qquad (D(p,LB')) \end{aligned}$$

for all \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\). The following tractable discarding test hence follows from Proposition 5.8.

Theorem 5.9

(General discarding test) Let \({\mathcal {F}}\) be a finite and stable subset of f(M), let \(X'\) be a subbox of X, and let \(LB'\) be some partial lower bounding set for \(f(M(X'))\). If \(\varphi _{LB'}(p)>0\) holds for all \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\), then \(X'\) can be discarded.

As mentioned before, whenever a discarding test for \(X'\) based on Proposition 5.8 or Theorem 5.9 fails, we may at least check whether some point \(x'\in X'\) lies in \(M(X')\). If this succeeds, then we update the set \(\mathcal{F}\) with respect to \(f(x')\), which also leads to an update of \({{\,\mathrm{lub}\,}}(\mathcal{F})\) for the subsequent discarding tests. Here \(x'\) may be the midpoint of \(X'\) or a point generated during the solution of the optimization problem \(D(p,LB')\).

In the following we shall describe two known and one novel possibility for the construction of partial lower bounding sets. For a construction based on Lipschitz underestimators of the objective functions in the biobjective case we refer to [28, 37].

Lower bounding via a singleton

For a subbox \(X'\subseteq X\) with \(M(X')\ne \emptyset \), the point \(a'\in \mathbb {R}^m\) with components

$$\begin{aligned} a'_j=\min _{x\in M(X')}\, f_j(x),\quad j=1,\ldots ,m, \end{aligned}$$

is called ideal point of the partial image set \(f(M(X'))\). In the case \(M(X')=\emptyset \) we formally set \(a'=+\infty e\).

For any \({{\widetilde{a}}}'\le a'\) the singleton \(LB'_{S}:=\{{{\widetilde{a}}}'\}\) is a partial lower bounding set for \(f(M(X'))\), where in the formal case \({{\widetilde{a}}}'=+\infty e\) we put \(LB'_S=\{+\infty e\}:=\emptyset \). In fact, for \(M(X')=\emptyset \) the inclusion \(f(M(X'))+\mathbb {R}^m_+\subseteq LB'_{S}+\mathbb {R}^m_+\) is trivially true, and otherwise for each \(y\in f(M(X'))+\mathbb {R}^m_+\) there exists some \(x\in M(X')\) with \(y\ge f(x)\ge a'\ge {{\widetilde{a}}}'\), so that y also lies in \(LB'_{S}+\mathbb {R}^m_+\). The compactness of \(LB'_{S}\) is clear.

The discarding test resulting from Proposition 5.8 with this partial lower bounding set is formulated in Corollary 5.10. Note that in the present case of a singleton partial lower bounding set we do not have to employ Theorem 5.9.

Corollary 5.10

(Discarding via a singleton) Let \({\mathcal {F}}\) be a finite and stable subset of f(M), let \(X'\) be a subbox of X, and let the ideal point \(a'\) of \(f(M(X'))\) be bounded below by \({{\widetilde{a}}}'\le a'\). If the set \(\{p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\mid p\ge {{\widetilde{a}}}'\}\) is empty, then \(X'\) can be discarded.

Discarding via singletons is suggested in [12, 31, 37]. The option chosen in [12] for the computation of a lower estimate \({{\widetilde{a}}}'\) of the ideal point \(a'\) is to use techniques from interval analysis (IA) [18, 27]. This requires the additional assumption of factorability of the entries of f, which means that each objective function \(f_j\) can be written as a composition of finitely many arithmetic operations and elementary functions [24]. Interval arithmetic then allows the computation of an interval \(F_j(X')\subseteq \mathbb {R}\) containing the partial image set \(f_j(X')\) for each \(j\in \{1,\ldots ,m\}\), so that \(F(X'):=\prod _{j=1}^mF_j(X')\) is an m-dimensional box containing \(f(X')\). Due to \(M(X')\subseteq X'\) this also implies \(f(M(X'))\subseteq F(X')\), and \({{\widetilde{a}}}'\) may be chosen as the vector whose entries are the lower limits of the intervals \(F_j(X')\). In the following we shall denote this point by \({{\widetilde{a}}}'_{IA}\).
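As an illustration of the estimate \({{\widetilde{a}}}'_{IA}\), the following toy Python sketch implements a natural interval extension for two simple factorable objectives on a subbox. Real implementations rely on the interval packages cited above; the class, the objectives and the box here are our own assumptions.

```python
class Interval:
    """Toy interval arithmetic (addition, multiplication, squaring), enough
    for the natural interval extension of simple factorable objectives."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        c = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(c), max(c))
    def sqr(self):   # x^2 is nonnegative, so 0 is the lower limit if 0 in X
        c = [self.lo ** 2, self.hi ** 2]
        lo = 0.0 if self.lo <= 0 <= self.hi else min(c)
        return Interval(lo, max(c))

# f1(x) = x1^2 + x2^2 and f2(x) = x1 * x2 on the subbox X' = [-1, 2] x [0, 1]
x1, x2 = Interval(-1.0, 2.0), Interval(0.0, 1.0)
F1 = x1.sqr() + x2.sqr()      # natural interval extension of f1: [0, 5]
F2 = x1 * x2                  # natural interval extension of f2: [-1, 2]
a_tilde_IA = (F1.lo, F2.lo)   # lower interval limits give the estimate
print(a_tilde_IA)             # → (0.0, -1.0)
```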

In [12] this approach is not combined with the concept of local upper bounds, but a box \(X'\) is discarded if \({{\widetilde{a}}}'_{IA}\) is dominated by some \(q\in \mathcal{F}\). Also note that, since the function g does not enter the computation of \(\widetilde{a}'_{IA}\), the identification of empty sets \(M(X')\) must be handled separately. In [31] the ideal point estimate may in general also be computed by interval arithmetic, but for the multiobjective location problems considered as applications in that paper, more explicit bounds can be computed. The ideal point estimate in [37] results from Lipschitz underestimates of the single objective functions on \(X'\).

For the subsequent sections it will be relevant that any technique T for the choice of a partial lower bounding set \(LB'_T\) also induces a specific lower estimate \({{\widetilde{a}}}'_{T}\) for the ideal point. In fact, for each partial lower bounding set \(LB'_T\) of \(f(M(X'))\) the infima of the problems

$$\begin{aligned} \min _y\,y_j \quad \text{ s.t. }\quad y\in LB'_T+\mathbb {R}^m_+ \end{aligned}$$

with \(j\in \{1,\ldots ,m\}\) form a vector \({{\widetilde{a}}}'_{T}\le a'\), that is, a specific lower estimate for the ideal point of \(f(M(X'))\). Therefore we obtain the induced singleton partial lower bounding set \(LB'_{T,S}:=\{{{\widetilde{a}}}'_T\}\).

Clearly, in view of Corollary 5.10 a subbox \(X'\) can be discarded if \(\{p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\mid p\ge {{\widetilde{a}}}'_{T}\}\) is empty. However, due to the relation \(LB'_T+\mathbb {R}^m_+\subseteq {\widetilde{a}}'_T+\mathbb {R}^m_+\) we may also modify the general discarding test from Theorem 5.9 as follows.

Corollary 5.11

(Modified general discarding test) Let \({\mathcal {F}}\) be a finite and stable subset of f(M), let \(X'\) be a subbox of X, let \(LB'_T\) be some partial lower bounding set for \(f(M(X'))\), and let \({{\widetilde{a}}}'_{T}\) be the induced lower estimate of the ideal point of f on \(M(X')\). If \(\varphi _{LB'_T}(p)>0\) holds for all \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\) with \(p\ge {\widetilde{a}}'_{T}\), then \(X'\) can be discarded.

The computation of \({{\widetilde{a}}}'_{T}\) requires the solution of m optimization problems over the set \(LB'_T+\mathbb {R}^m_+\), and also the computation of each value \(\varphi _{LB'_T}(p)\) requires the solution of an optimization problem over the same set. Therefore the modified discarding test from Corollary 5.11 is computationally beneficial whenever the current number of local upper bounds exceeds m. Since this number of local upper bounds is observed to become very large in the course of numerical experiments, the discarding test from Corollary 5.11 reduces the overall number of optimization problems to be solved significantly. As a consequence, we will subsequently only work with this modified discarding test.
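For a finite partial lower bounding set the modified general discarding test from Corollary 5.11 can be sketched in a few lines of Python; the function names and the toy data are ours and merely illustrate the two possible outcomes.

```python
import math

def phi(p, LB):
    """phi_{LB'}(p) for a FINITE partial lower bounding set LB' (direction e)."""
    return math.inf if not LB else min(
        max(a_j - p_j for a_j, p_j in zip(a, p)) for a in LB)

def can_discard(lub, LB, a_tilde):
    """Modified general discarding test of Corollary 5.11 for finite LB':
    phi only has to be computed for local upper bounds p >= a_tilde."""
    candidates = [p for p in lub
                  if all(p_j >= a_j for p_j, a_j in zip(p, a_tilde))]
    return all(phi(p, LB) > 0 for p in candidates)

lub = [(1.0, 5.0), (2.0, 4.0), (3.5, 2.5), (5.0, 1.0)]   # local upper bounds
LB_near = [(1.5, 3.0), (2.5, 2.0)]   # meets the search region: keep the box
LB_far  = [(4.0, 4.5)]               # no candidate p at all: discard the box
print(can_discard(lub, LB_near, (1.5, 2.0)),   # False
      can_discard(lub, LB_far,  (4.0, 4.5)))   # True
```

The second call returns True without solving any auxiliary problem, since the filter \(p\ge {\widetilde{a}}'_T\) already removes every local upper bound.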

Lower bounding via convex underestimators

Following the ideas presented in [26] a partial lower bounding set \(LB'\) for a subbox \(X'\) of X can be determined with the aid of convex underestimators of the functions \(f_j\), \(j=1,\ldots ,m\), and \(g_i\), \(i=1,\ldots ,k\), on \(X'\). Since in [26] the proposed construction of convex underestimators uses the \(\alpha \hbox {BB}\) technique from [1, 3], the functions \(f_j\) and \(g_i\) have to be twice continuously differentiable with factorable Hessians. Then, using interval arithmetic on the second derivatives, functions \(f_{j,\alpha }\) and \(g_{i,\alpha }\) may be constructed which, on \(X'\), are smooth, convex and satisfy \(f_{j,\alpha }\le f_j\) as well as \(g_{i,\alpha }\le g_i\). In [26] the entries of g are actually assumed to be convex functions, so that there no convex underestimators have to be computed for them.
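As a minimal illustration of the \(\alpha \hbox {BB}\) construction, the following Python sketch builds the underestimator \(f_\alpha (x)=f(x)+\alpha ({\underline{x}}-x)({\overline{x}}-x)\) for a univariate function on a box and verifies underestimation and convexity on a grid. The choice \(f=\sin \) and the value of \(\alpha \) are our toy assumptions; production codes bound the Hessian by interval arithmetic instead.

```python
import math

# alpha-BB underestimator on a one-dimensional box [lo, hi]:
#   f_alpha(x) = f(x) + alpha * (lo - x) * (hi - x),  alpha >= max(0, -min f''/2).
# For f = sin on [0, pi] we have f''(x) = -sin(x) >= -1, so alpha = 0.5 suffices.

lo, hi, alpha = 0.0, math.pi, 0.5
f = math.sin
def f_alpha(x):
    return math.sin(x) + alpha * (lo - x) * (hi - x)

xs = [lo + i * (hi - lo) / 50 for i in range(51)]
assert all(f_alpha(x) <= f(x) + 1e-12 for x in xs)   # underestimation on X'
h = (hi - lo) / 50                                   # convexity via second
assert all(f_alpha(x - h) - 2 * f_alpha(x) + f_alpha(x + h) >= -1e-9
           for x in xs[1:-1])                        # finite differences
lb = min(f_alpha(x) for x in xs)   # grid estimate of the partial lower bound
print(lb)                          # about 1 - pi^2/8, roughly -0.2337
```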

Using the convex underestimators for the entries of g, the set

$$\begin{aligned} M_\alpha (X'):=\{x\in X'\mid g_\alpha (x)\le 0\} \end{aligned}$$

is a convex relaxation of \(M(X')\) with a smooth and convex description. In particular, the individual minimization of the smooth and convex functions \(f_{j,\alpha }\) over the convex set \(M_\alpha (X')\) is efficiently possible and yields the specific lower estimate \({{\widetilde{a}}}'_{\alpha BB}\) for the ideal point of \(f(M(X'))\). Numerical tests reveal that the lower estimates \({{\widetilde{a}}}'_{IA}\) and \({{\widetilde{a}}}'_{\alpha BB}\) of the ideal point are in general unrelated. Again, in the case \(M_\alpha (X')=\emptyset \) we obtain \({\widetilde{a}}'_{\alpha BB}=+\infty e\).

However, an essential improvement over the singleton partial lower bounding set \(LB'_{\alpha BB,S}=\{{{\widetilde{a}}}'_{\alpha BB}\}\) is also possible. In fact, \(LB'_{\alpha BB}:=f_\alpha (M_\alpha (X'))\) is another partial lower bounding set, since the compactness of \(X'\) and the continuity of \(f_\alpha \) and \(g_\alpha \) yield its compactness, and for each \(y\in f(M(X'))+\mathbb {R}^m_+\) there exists some \(x\in M(X')\subseteq M_\alpha (X')\) with \(y\ge f(x)\ge f_\alpha (x)\), so that y also lies in \(f_\alpha (M_\alpha (X'))+\mathbb {R}^m_+\). Note that this construction benefits from explicitly using that all objective functions are evaluated simultaneously at the same points x, whereas the construction of \({{\widetilde{a}}}'_{\alpha BB}\) treats the behavior of the single objective functions independently of each other. In the case \(M_\alpha (X')=\emptyset \) we obtain the empty partial lower bounding set \(LB'_{\alpha BB}=f_\alpha (\emptyset )\).

Corollary 5.11 yields the following discarding test.

Corollary 5.12

(Discarding via convex underestimators) Let \({\mathcal {F}}\) be a finite and stable subset of f(M), and let \(X'\) be a subbox of X. If the infimum \(\varphi _{LB'_{\alpha BB}}(p)\) of

$$\begin{aligned} \qquad \qquad \qquad \qquad \qquad \qquad \min _{t,x}\,t \quad \text{ s.t. }\quad p+te\ge f_\alpha (x),\ g_\alpha (x)\le 0,\ x\in X'\qquad \qquad \qquad (D_{\alpha BB}(p)) \end{aligned}$$

satisfies \(\varphi _{LB'_{\alpha BB}}(p)>0\) for all \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\) with \(p\ge {{\widetilde{a}}}'_{\alpha BB}\), then \(X'\) can be discarded.

Note that \(D_{\alpha BB}(p)\) is the lifted version of the problem \(D(p,LB'_{\alpha BB})\), in analogy to the lifting (11). We remark that here the computation of \({{\widetilde{a}}}'_{\alpha BB}\) requires the solution of m convex optimization problems, and the computation of each value \(\varphi _{LB'_{\alpha BB}}(p)\) requires the solution of another convex optimization problem.

It is not hard to see that the set \(\{{{\widetilde{a}}}'_{\alpha BB}\}+\mathbb {R}^m_+\) may be essentially larger than its subset \(f_\alpha (M_\alpha (X'))+\mathbb {R}^m_+\), so that for a box \(X'\) which is discarded via the partial lower bounding set \(LB'_{\alpha BB}=f_\alpha (M_\alpha (X'))\), this may not be possible via \(LB'_{\alpha BB,S}=\{{{\widetilde{a}}}'_{\alpha BB}\}\).

In [26] this discarding test is further refined by an iterative construction of polyhedral outer approximations of the convex set \(f_\alpha (M(X'))+\mathbb {R}^m_+\) in the spirit of a cutting plane method, starting from the polyhedral outer approximation \(\{\widetilde{a}'_{\alpha BB}\}+\mathbb {R}^m_+\). More precisely, first the infimum \(\varphi _{LB'_{\alpha BB}}({\bar{p}})\) of \(D_{\alpha BB}({\bar{p}})\) is computed for some \({\bar{p}}\in \{p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\mid p\ge {{\widetilde{a}}}'_{\alpha BB}\}\). In the case that \(\varphi _{LB'_{\alpha BB}}({\bar{p}})\) is positive and finite, also the remaining elements in \(\{p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\mid p\ge {{\widetilde{a}}}'_{\alpha BB}\}\setminus \{{\bar{p}}\}\) must be checked for membership in \(LB'_{\alpha BB}+\mathbb {R}^m_+\). However, this is not necessarily done by computing their values \(\varphi _{LB'_{\alpha BB}}(p)\). Instead, information from the Lagrange multipliers corresponding to the optimal point of \(D_{\alpha BB}({\bar{p}})\) leads to an inequality \(\langle a,y\rangle \le b\) which is violated by \(y={\bar{p}}\) but satisfied for all \(y\in LB'_{\alpha BB}\) (see [26, Section 3.2] for details and illustrations). Hence, subsequently infima \(\varphi _{LB'_{\alpha BB}}(p)\) only have to be computed for the elements of the set \(\{p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\mid p\ge {{\widetilde{a}}}'_{\alpha BB},\ \langle a,p\rangle \le b\}\) (if any). This generation of cutting planes is repeated as long as local upper bounds p with \(\varphi _{LB'_{\alpha BB}}(p)>0\) are identified, and it speeds up the discarding test significantly.

Lower bounding via a linearization technique

Instead of constructing polyhedral relaxations of the partial upper image set \(f(M(X'))+\mathbb {R}^m_+\) by the generation of cutting planes to a convex relaxation, there is also a more direct method to generate polyhedral relaxations. To the best of our knowledge this has not yet been used in the framework of multiobjective branch-and-bound, and we shall present it next.

This approach assumes that all entries of f and g are factorable. By introducing auxiliary variables it first reformulates the description of \(f(M(X'))\) such that nonlinear elementary functions only appear in single equations, and then the graphs of these nonlinear functions are relaxed to polyhedral sets. This so-called reformulation-linearization technique (RLT; also known as the auxiliary variable method, AVM) is based on the reformulation analysis in [33, 34]. The complete RLT approach is explained in, e.g., [4, 35].

Example 5.13

Let us briefly illustrate the main idea of RLT for the polyhedral relaxation of the graph of the function \(\psi (x)=\sin (x_1\exp (x_2))\) on the set \(X=[0,1]^2\). This graph may be written as

$$\begin{aligned} {{\,\mathrm{gph}\,}}(\psi ,X)=\,&\{(x,x_3)\in X\times \mathbb {R}\mid x_3=\sin (x_1\exp (x_2))\}\\ =\,&\{(x,x_3)\in X\times \mathbb {R}\mid x_3=\sin (x_4)\ \text{ with }\ x_4=x_1\exp (x_2)\}\\ =\,&\{(x,x_3)\in X\times \mathbb {R}\mid x_3=\sin (x_4)\ \text{ with }\ x_4=x_1 x_5,\ x_5=\exp (x_2)\}. \end{aligned}$$

Next, enclosing intervals \(X_i\) for the new variables \(x_i\), \(i=3,4,5\) may be computed by interval arithmetic, e.g., \(x_5\in X_5:={{\,\mathrm{EXP}\,}}([0,1])=[\exp (0),\exp (1)]\), where \({{\,\mathrm{EXP}\,}}\) is the interval version of the elementary function \(\exp \). Hence, if \(x^+\) denotes the vector of auxiliary variables and \({{\widehat{x}}}:=(x,x^+)\) the vector of all variables, we may recursively compute a box \({{\widehat{X}}}=X\times X^+\) with \({{\widehat{x}}}\in {{\widehat{X}}}\). This means that \({{\,\mathrm{gph}\,}}(\psi ,X)\) may be written as the projection of the lifted graph

$$\begin{aligned} {\widehat{{{\,\mathrm{gph}\,}}}}(\psi ,X):=\{{{\widehat{x}}}\in {{\widehat{X}}}\mid x_3=\sin (x_4),\ x_4=x_1 x_5,\ x_5=\exp (x_2)\} \end{aligned}$$

to the space of the first three variables. This completes the reformulation step.

The linearization step constructs a polyhedral relaxation of the lifted graph by relaxing each set defined by an individual equality constraint. For example, a polyhedral relaxation of the individual graph \(\{(x_2,x_5)\in X_2\times X_5\mid x_5=\exp (x_2)\}\) is given by the points in \(X_2\times X_5=[0,1]\times [1,\exp (1)]\) below the secant to the function \(\exp \) through the points \((0,\exp (0))\) and \((1,\exp (1))\), and above the two tangents to \(\exp \) in \((0,\exp (0))\) and in \((1,\exp (1))\). In the factorization of a function, most of the factors whose graphs must be relaxed are usually such univariate functions. Among the few exceptions is the multiplication of two variables, as in the above expression \(x_4=x_1 x_5\). However, the convex hull of the graph of a product of two variables over a box is explicitly known, and it is actually polyhedral [2].

Proceeding in this manner we arrive at a collection of linear inequalities \(A{{\widehat{x}}}\le b\) which describe the polyhedral relaxation \(\{{{\widehat{x}}}\in {{\widehat{X}}}\mid A{{\widehat{x}}}\le b\}\) of the lifted graph \({\widehat{{{\,\mathrm{gph}\,}}}}(\psi ,X)\). The projection of this set to the first three variables then is a polyhedral relaxation of \({{\,\mathrm{gph}\,}}(\psi ,X)\). Fortunately, in our application of this technique in the framework of optimization problems it will not be necessary to compute a representation of the polyhedral relaxation in the original variables, but we will be able to work with the explicitly known lifted polyhedron.
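The two relaxation pieces of Example 5.13 can be checked numerically. The following Python sketch encodes the secant/tangent relaxation of \(\exp \) on [0, 1] and the McCormick inequalities describing the polyhedral convex hull of a bilinear graph over a box [2]; the function names and tolerances are our own assumptions.

```python
import math

# (a) exp on [lo, hi]: between the endpoint tangents and the secant;
# (b) bilinear w = x * y on a box: the four McCormick inequalities.

def exp_relaxation_ok(x, y, lo=0.0, hi=1.0, eps=1e-9):
    secant = math.exp(lo) + (math.exp(hi) - math.exp(lo)) / (hi - lo) * (x - lo)
    tan_lo = math.exp(lo) * (1 + x - lo)   # tangent at lo
    tan_hi = math.exp(hi) * (1 + x - hi)   # tangent at hi
    return tan_lo - eps <= y <= secant + eps and tan_hi - eps <= y

def mccormick_ok(x, y, w, xl, xu, yl, yu):
    """Do (x, y, w) satisfy the McCormick envelope of w = x * y on the box?"""
    return (w >= xl * y + x * yl - xl * yl and
            w >= xu * y + x * yu - xu * yu and
            w <= xu * y + x * yl - xu * yl and
            w <= xl * y + x * yu - xl * yu)

# every true graph point satisfies the corresponding relaxation
for i in range(11):
    x = i / 10
    assert exp_relaxation_ok(x, math.exp(x))
    assert mccormick_ok(x, 1 + x, x * (1 + x), 0.0, 1.0, 1.0, 2.0)
```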

With a subbox \(X'\) of X, the reformulation-linearization technique for the construction of a partial lower bounding set \(LB'\) for \(f(M(X'))\) first views the partial image set \(f(M(X'))\) as the projection of the graph

$$\begin{aligned} {{\,\mathrm{gph}\,}}(f,M(X')):=\{(x,f(x))\in X'\times \mathbb {R}^m\mid g(x)\le 0\} \end{aligned}$$

to \(\mathbb {R}^m\). Then we introduce auxiliary variables \(x_f\in \mathbb {R}^m\) and \(x_g\in \mathbb {R}^k\) to lift this graph to the set

$$\begin{aligned} {\widehat{{{\,\mathrm{gph}\,}}}}(f,M(X')):=\{(x,x_f,x_g)\in X'\times X_f'\times X_g'\mid x_g\le 0,\,x_f=f(x),\,x_g=g(x)\} \end{aligned}$$

where the boxes \(X'_f\subseteq \mathbb {R}^m\) and \(X'_g\subseteq \mathbb {R}^k\) are computed by interval arithmetic. Clearly, also the projection of \({\widehat{{{\,\mathrm{gph}\,}}}}(f,M(X'))\) to the \(x_f\)-space \(\mathbb {R}^m\) coincides with \(f(M(X'))\).

Next, each of the \(m+k\) factorable equations \(x_f=f(x)\) and \(x_g=g(x)\) is treated as in Example 5.13, yielding a further lifting step to the set

$$\begin{aligned} {\widehat{{{\,\mathrm{gph}\,}}}}_{RLT}(f,M(X')):=\{{{\widehat{x}}}\in {{\widehat{X}}}'\mid x_g\le 0,\,A'_f\,{{\widehat{x}}}\le b'_f,\, A'_g\,{{\widehat{x}}}\le b'_g\}. \end{aligned}$$

The projection of \({\widehat{{{\,\mathrm{gph}\,}}}}_{RLT}(f,M(X'))\) to the \((x,x_f,x_g)\)-space \(\mathbb {R}^n\times \mathbb {R}^m\times \mathbb {R}^k\) then is a polyhedral relaxation of the lifted graph \({\widehat{{{\,\mathrm{gph}\,}}}}(f,M(X'))\), and its further projection to the \(x_f\)-space \(\mathbb {R}^m\) constitutes a polyhedral relaxation of the original partial image set \(f(M(X'))\). This projection to \(\mathbb {R}^m\) is the desired partial lower bounding set \(LB'_{RLT}\). Note that the latter set is closed as the projection of a closed polyhedron and thus compact as a closed subset of the compact box \(X'_f\subseteq \mathbb {R}^m\).

For the formulation of the corresponding auxiliary optimization problem \(D(p,LB'_{RLT})\) an explicit description of the set \(LB'_{RLT}\) is fortunately not required, but due to the projection property we may as well solve the lifted problem

$$\begin{aligned} \qquad \qquad \min _{t,{{\widehat{x}}}}\,t \quad \text{ s.t. }\quad p+te\ge x_f,\ x_g\le 0,\, A'_f\,{{\widehat{x}}}\le b'_f,\, A'_g\,{{\widehat{x}}}\le b'_g,\,{{\widehat{x}}}\in \widehat{X}',\qquad (D_{RLT}(p)) \end{aligned}$$

that is, a box constrained linear program. Corollary 5.11 thus yields the following discarding test, where the entries of the corresponding lower estimate \({{\widetilde{a}}}'_{RLT}\) of the ideal point of f on \(M(X')\) are the optimal values of the m linear optimization problems

$$\begin{aligned} \min _{{{\widehat{x}}}}\,(x_f)_j \quad \text{ s.t. }\quad x_g\le 0,\, A'_f\,{{\widehat{x}}}\le b'_f,\, A'_g\,{{\widehat{x}}}\le b'_g,\,{{\widehat{x}}}\in \widehat{X}' \end{aligned}$$

with \(j\in \{1,\ldots ,m\}\).

Corollary 5.14

(Discarding via a linearization technique) Let \({\mathcal {F}}\) be a finite and stable subset of f(M), and let \(X'\) be a subbox of X. If the infimum \(\varphi _{LB'_{RLT}}(p)\) of \(D_{RLT}(p)\) satisfies \(\varphi _{LB'_{RLT}}(p)>0\) for all \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F})\) with \(p\ge {{\widetilde{a}}}'_{RLT}\), then \(X'\) can be discarded.

In the discarding test from Corollary 5.14 the computation of a value \(\varphi _{LB'_{RLT}}(p)\) can actually be terminated prematurely as soon as a dually feasible point with positive dual objective value has been generated, as weak duality then implies \(\varphi _{LB'_{RLT}}(p)>0\).

Lower bounding the nondominated set

After the concept of a partial lower bounding set \(LB'\) for \(f(M(X'))\) has been clarified, let us turn to the generalization of the overall lower bound \(\ell b=\min _{X'\in \mathcal{L}}\ell b'\) for v from the single objective case. Recall that for the sandwiching property (2) we wish to define a set \(LB\subseteq \mathbb {R}^m\) with \(Y_N\subseteq LB+\mathbb {R}^m_+\), that is, an overall lower bounding set LB for \(Y_N\).

Previous suggestions for overall lower bounding

The convexification based branch-and-bound method from [26] guarantees that for its termination tolerance \(\varepsilon >0\) the boxes \(X'\in \mathcal{L}\) satisfy \(({{\,\mathrm{lub}\,}}(\mathcal{F})-(\varepsilon /2) e-{{\,\mathrm{int}\,}}(\mathbb {R}^m_+))\cap (f_\alpha (M(X'))+\mathbb {R}^m_+)=\emptyset \). From this particular property one can conclude \(Y_N\subseteq ({{\,\mathrm{lub}\,}}(\mathcal{F})-(\varepsilon /2) e-{{\,\mathrm{int}\,}}(\mathbb {R}^m_+))^c\) which leads to the overall lower bounding set \(LB=\mathcal{F}-(\varepsilon /2)e\). However, in [26] this set LB is not used for node selection or a termination criterion, but solely to interpret the accuracy of the algorithm’s output. A similar construction is suggested in [11], but without an algorithmically suitable description and again only for an error estimate upon termination.

As mentioned above, [37] suggests computing partial lower bounds via Lipschitz underestimators of the objective functions. An overall lower bound is constructed from them by taking the nondominated set of the union of the partial lower bounds. While this is in the spirit of our suggestion in Sect. 6.3, it remains unclear in [37] how the latter nondominated set is computed. Furthermore, the generalization of this approach to more than two objective functions and to feasible sets other than boxes does not seem to be straightforward.

Overall lower bounding via partial lower bounding sets

The following result shows that for any lower bounding technique T we may choose \(LB=\bigcup _{X'\in \mathcal{L}}LB'_T\). Note that this set is compact as the union of the finitely many compact sets \(LB'_T\), \(X'\in \mathcal{L}\).

Lemma 6.1

For any lower bounding technique T and partial lower bounding sets \(LB'_T\) for \(f(M(X'))\) define \(LB_T:=\bigcup _{X'\in \mathcal{L}}LB'_T\). Then the inclusion

$$\begin{aligned} Y_N\subseteq LB_T+\mathbb {R}^m_+ \end{aligned}$$

holds.

Proof

Corresponding to any \(y_N\in Y_N\) there exists some efficient point \(x_E\in M(X)\) with \(y_N=f(x_E)\). Assume that the subbox \(X'_E\) with \(x_E\in M(X'_E)\) does not lie in \(\mathcal{L}\). Then it has been discarded, and since all discussed discarding tests are based on the general condition (12) from Proposition 5.8 with the set of local upper bounds \({{\,\mathrm{lub}\,}}(\mathcal{F})\) of some provisional nondominated set \(\mathcal{F}\), the box \(X'_E\) also satisfies (8) with this set \({{\,\mathrm{lub}\,}}(\mathcal{F})\). Proposition 5.3 thus yields \(Y_N\cap (f(M(X'_E))+\mathbb {R}^m_+)=\emptyset \) which contradicts \(y_N\in f(M(X'_E))\). Consequently there is some \(X'_E\in \mathcal{L}\) with \(y_N\in f(M(X'_E))\) which implies

$$\begin{aligned} y_N\in f(M(X'_E))+\mathbb {R}^m_+\subseteq LB'_{T,E}+\mathbb {R}^m_+\subseteq \left( \bigcup _{X'\in \mathcal{L}}LB'_T\right) +\mathbb {R}^m_+ \end{aligned}$$

and shows the assertion. \(\square \)

In view of \(Y_N\ne \emptyset \), Lemma 6.1 in particular guarantees \(LB_T\ne \emptyset \). More importantly, together with (7) we have shown that the desired enclosing property (2) for the nondominated set \(Y_N\) may in fact be formulated as

$$\begin{aligned} Y_N\subseteq \left( LB_T+\mathbb {R}^m_+\right) \cap \left( {{\,\mathrm{lub}\,}}(\mathcal{F})-\mathbb {R}^m_+\right) , \end{aligned}$$

that is, the enclosure \(E(LB_T,{{\,\mathrm{lub}\,}}(\mathcal{F}))\) satisfies (2).

In the single objective case it is clear that not only the function value \(v=f(x_{min})\) of any minimal point \(x_{min}\) lies in the enclosing interval \([\ell b,ub]\), but also the function value \(f(x_{ub})=ub\) of the currently best known feasible point. In the multiobjective setting this corresponds to the set \(E(LB_T,{{\,\mathrm{lub}\,}}(\mathcal{F}))\) not only enclosing the nondominated set \(Y_N\), but also the provisional nondominated set \(\mathcal{F}=f(\mathcal{X}_{ub})\). The following result verifies this.

Lemma 6.2

For any lower bounding technique T the inclusion

$$\begin{aligned} Y_N\cup \mathcal{F}\subseteq E(LB_T,{{\,\mathrm{lub}\,}}(\mathcal{F})) \end{aligned}$$

holds.

Proof

In view of (14) we only need to show \(\mathcal{F}\subseteq E(LB_T,{{\,\mathrm{lub}\,}}(\mathcal{F}))\). As external stability holds by [32, Theorem 3.2.9], we have \(\mathcal{F}\subseteq Y_N+\mathbb {R}^m_+\). By Lemma 6.1 this implies \(\mathcal{F}\subseteq LB_T+\mathbb {R}^m_+\). From Lemma 5.1 we additionally know \(\mathcal{F}\subseteq {{\,\mathrm{lub}\,}}(\mathcal{F})-\mathbb {R}^m_+\), so that the assertion is shown. \(\square \)

Since both \(LB_T\) and \({{\,\mathrm{lub}\,}}(\mathcal{F})\) are nonempty and compact sets, and since \(\mathcal{F}\subseteq f(M)\) consists of attainable points, Lemma 6.2 and Theorem 3.4 even yield the following result, which prepares the termination criterion and node selection rule suggested in Sect. 7.

Proposition 6.3

For some \(\varepsilon >0\) let

$$\begin{aligned} \max \left\{ s(\ell b,p)\mid (\ell b,p)\in LB_T\times {{\,\mathrm{lub}\,}}(\mathcal{F}),\ \ell b\le p\right\} <\varepsilon . \end{aligned}$$

Then all elements of the provisional nondominated set \(\mathcal{F}\) are \(\varepsilon \)-nondominated points of MOP.

Overall lower bounding by nondominated ideal point estimates

For efficient handling of the set \(LB_T+\mathbb {R}^m_+=\bigcup _{X'\in \mathcal{L}}\left( LB'_T+\mathbb {R}^m_+\right) \) in the definition of the enclosure \(E(LB_T,{{\,\mathrm{lub}\,}}(\mathcal{F}))\) as well as in the resulting termination criterion and node selection rule, one should take into account that many boxes \(X'\in \mathcal{L}\) may be redundant in its description. In the single objective case this corresponds to the fact that most \(X'\in \mathcal{L}\) satisfy \(\ell b'>\ell b=\min _{X'\in \mathcal{L}}\ell b'\), so that we are only interested in a box \(X'\) with \(\ell b'=\ell b\). Usually the latter box is unique. In the multiobjective setting one cannot expect the existence of a single box \(X'\) with \(LB'_T+\mathbb {R}^m_+=LB_T+\mathbb {R}^m_+\), but several boxes \(X'\) may be nonredundant, namely the ones for which the sets \(LB'_T\) are ‘nondominated’ in some sense.

A situation for which nondominance among the sets \(LB'_T\), \(X'\in \mathcal{L}\), is easily defined is the case of singleton sets \(LB'_T\). Hence, for each \(X'\in \mathcal{L}\) let us again consider the lower estimate \({{\widetilde{a}}}'_T\) for the ideal point of \(f(M(X'))\) induced by \(LB'_T\). Recall that \(LB'_{T,S}=\{{{\widetilde{a}}}'_T\}\) is a singleton partial lower bounding set for \(f(M(X'))\), so that by Lemma 6.1 the finite set

$$\begin{aligned} LB_{T,S}=\{{{\widetilde{a}}}'_{T}\mid X'\in \mathcal{L}\} \end{aligned}$$

is an overall lower bounding set, that is, \(Y_N\subseteq LB_{T,S}+\mathbb {R}^m_+\) holds.

Let us now consider only boxes \(X'\in \mathcal{L}\) which correspond to the nondominated points of \(LB_{T,S}\), that is, to the (unique) stable subset \(LB_{T,S,N}\) of \(LB_{T,S}\) so that no element of \(LB_{T,S,N}\) is dominated by some point in \(LB_{T,S}\).

Lemma 6.4

With the nondominated set \(LB_{T,S,N}\) of \(LB_{T,S}\) the inclusions

$$\begin{aligned} LB_T+\mathbb {R}^m_+\subseteq LB_{T,S}+\mathbb {R}^m_+\subseteq LB_{T,S,N}+\mathbb {R}^m_+ \end{aligned}$$

hold.

Proof

The first inclusion is valid because of

$$\begin{aligned} LB_T+\mathbb {R}^m_+&=\left( \bigcup _{X'\in \mathcal{L}}LB'_T\right) +\mathbb {R}^m_+=\bigcup _{X'\in \mathcal{L}}(LB'_T+\mathbb {R}^m_+)\subseteq \bigcup _{X'\in \mathcal{L}}(\{{{\widetilde{a}}}'_T\}+\mathbb {R}^m_+)\nonumber \\&=\left( \bigcup _{X'\in \mathcal{L}}\{\widetilde{a}'_T\}\right) +\mathbb {R}^m_+=LB_{T,S}+\mathbb {R}^m_+. \end{aligned}$$

Furthermore, for each \(y\in LB_{T,S}+\mathbb {R}^m_+\) there is some \(X'\in \mathcal{L}\) with \({{\widetilde{a}}}'_{T}\le y\). As \(|\mathcal{L}|<\infty \), either \(\widetilde{a}'_{T}\in LB_{T,S,N}\) holds or there exists some \(a\in LB_{T,S,N}\) with \(a\le {{\widetilde{a}}}'_{T}\le y\). This shows the second inclusion. \(\square \)

The first inclusion in the assertion of Lemma 6.4 means that, as expected, \(LB_{T,S}\) is a coarser overall lower bound for \(Y_N\cup \mathcal{F}\) than \(LB_T\). The second inclusion implies that the ‘relevant’ boxes for the definition of \(LB_{T,S}+\mathbb {R}^m_+\) are the ones from the sublist

$$\begin{aligned} \mathcal{L}_N:=\{X'\in \mathcal{L}\mid {{\widetilde{a}}}'_{T}\in LB_{T,S,N}\}. \end{aligned}$$
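Since \(LB_{T,S}\) is a finite set, its nondominated subset \(LB_{T,S,N}\) and hence the sublist \(\mathcal{L}_N\) can be determined by simple pairwise comparisons. A minimal Python sketch of this step, with our own illustrative names and data, could look as follows:

```python
import numpy as np

def dominates(a, b):
    """a dominates b iff a <= b componentwise with a != b."""
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated_indices(points):
    """Indices of the (unique) stable subset of a finite point list,
    i.e. of the points not dominated by any other point of the list."""
    return [i for i, a in enumerate(points)
            if not any(dominates(points[j], a)
                       for j in range(len(points)) if j != i)]

# ideal point estimates a'_T, one per box X' in the current list L
estimates = [np.array([1.0, 3.0]), np.array([2.0, 2.0]),
             np.array([3.0, 1.0]), np.array([2.0, 3.0])]

# boxes whose estimate lies in LB_{T,S,N} form the sublist L_N;
# here the fourth estimate is dominated by the first one
L_N = nondominated_indices(estimates)
```

For long lists the quadratic pairwise comparison can be replaced by the Jahn-Graef-Younes method mentioned in Sect. 7.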

The combination of Lemma 6.2 with Lemma 6.4 and (3) immediately yields the next result.

Lemma 6.5

For any lower bounding technique T the inclusion

$$\begin{aligned} Y_N\cup \mathcal{F}\subseteq E(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))=\bigcup _{\begin{array}{c} (a,p)\in LB_{T,S,N}\times {{\,\mathrm{lub}\,}}(\mathcal{F})\\ a\le p \end{array}}[a,p]. \end{aligned}$$
holds.

Termination criterion, node selection rule, and conceptual algorithm

In view of Lemma 6.5 and Theorem 3.4 we can now state the basis for the termination criterion of the multiobjective branch-and-bound method.

Theorem 7.1

In some iteration of the branch-and-bound method let \(LB_{T,S,N}\) be the nondominated set of the current set of induced ideal point estimates \(LB_{T,S}=\{{{\widetilde{a}}}'_{T}\mid X'\in \mathcal{L}\}\), let \(\mathcal{F}\) denote the current provisional nondominated set, and for some \(\varepsilon >0\) let

$$\begin{aligned} \max \left\{ s(a,p)\mid (a,p)\in LB_{T,S,N}\times {{\,\mathrm{lub}\,}}(\mathcal{F}),\ a\le p\right\} <\varepsilon \end{aligned}$$

hold. Then all \(q\in \mathcal{F}\) are \(\varepsilon \)-nondominated points of MOP.

We point out that the maximum in condition (17) is taken over finitely many choices so that checking (17) is algorithmically tractable.

Moreover, if for given \(\varepsilon >0\) the condition (17) is violated, that is,

$$\begin{aligned} w(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))= \max \left\{ s(a,p)\mid (a,p)\in LB_{T,S,N}\times {{\,\mathrm{lub}\,}}(\mathcal{F}),\ a\le p\right\} \ge \varepsilon \end{aligned}$$

holds, then we expect to reduce the width \(w(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\) of \(E(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\) by branching a box \(X^\star \in \mathcal{L}_N\) such that with the corresponding ideal point estimate \({{\widetilde{a}}}^\star _T\) and some \(p^\star \in {{\,\mathrm{lub}\,}}(\mathcal{F})\) with \({{\widetilde{a}}}^\star _T\le p^\star \) we have

$$\begin{aligned} s({{\widetilde{a}}}^\star _T,p^\star )=\max \left\{ s(a,p)\mid (a,p)\in LB_{T,S,N}\times {{\,\mathrm{lub}\,}}(\mathcal{F}),\ a\le p\right\} . \end{aligned}$$
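The termination check (17) and this node selection rule only involve the finitely many pairs \((a,p)\) with \(a\le p\). A direct sketch in Python, taking \(s(a,p)\) as the shortest edge length \(\min _j(p_j-a_j)\) of the box [a, p] in accordance with (4), could read as follows (the function and variable names as well as the data are ours):

```python
import numpy as np

def s(a, p):
    """Length of a shortest edge of the box [a, p], cf. (4)."""
    return float(np.min(p - a))

def width_and_selection(LB, LUB):
    """Compute w(LB, lub(F)) = max { s(a,p) | a in LB, p in lub(F), a <= p }
    and return it together with an argmax pair (a*, p*); the box whose
    ideal point estimate equals a* is branched next."""
    best, best_pair = -np.inf, None
    for a in LB:
        for p in LUB:
            # only pairs with a <= p span a box [a, p] of the enclosure
            if np.all(a <= p) and s(a, p) > best:
                best, best_pair = s(a, p), (a, p)
    return best, best_pair

# two ideal point estimates and one local upper bound (illustrative data)
LB = [np.array([0.0, 0.0]), np.array([0.5, -0.5])]
LUB = [np.array([1.0, 2.0])]
w, (a_star, p_star) = width_and_selection(LB, LUB)
# terminate if w < eps, otherwise branch the box belonging to a_star
```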

Observe that in single objective optimization the rule to select a box \(X^\star \) with \(\ell b^\star =\ell b=\min _{X'\in \mathcal{L}}\ell b'\) does not need information about the upper bound \(ub=f(x_{ub})\), since increasing \(\ell b\) reduces the length of the enclosing interval \([\ell b,ub]\) anyway. In the multiobjective setting decreasing \(w(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\) reduces the worst case width of the enclosure, which still naturally generalizes the single objective case, where no worst case over several instances has to be considered.

Selection rules employed in the literature so far are to choose \(X'\in \mathcal{L}\) with minimal value \(({{\widetilde{a}}}'_{LB'})_j\), for some \(j\in \{1,\ldots ,m\}\) [12, 26] or with maximal width of \(F(X')\) [31]. Recall that, as opposed to our above suggestion, such node selection rules are unrelated to the basic branch-and-bound idea. In contrast to this, [37] proposes a node selection rule in the spirit of our approach where however, as mentioned above, it remains unclear how the overall lower bounding set is computed.

The multiobjective branch-and-bound framework resulting from our considerations is stated in Algorithm 1. It does not need our previous assumption of a nonempty feasible set M. Subsequently we shall comment on the implementation of some of its lines.


In line 1 of Algorithm 1 one may choose, for example, \(Z:= F(X)\) with the interval enclosure F(X) of f(X). In the node selection rule from line 9 the function s denotes the length of a shortest box edge, as defined in (4). Possibilities for the computation of partial lower bounding sets in line 14 and of their induced ideal point estimates in line 15 are discussed in Sects. 5.3, 5.4 and 5.5. Lines 16 and 21 are based on the modified general discarding test from Corollary 5.11. We point out that, if in line 14 the infeasibility of \(M(X^{k,\ell })\) is detected by the computation of an empty partial lower bounding set \(LB_T^{k,\ell }\), then line 15 results in \(\widetilde{a}_T^{k,\ell }=+\infty \,e\), so that in line 16 there does not exist any \(p\in {{\,\mathrm{lub}\,}}(\mathcal{F}^k)\) with \(p\ge {{\widetilde{a}}}_T^{k,\ell }\), and \(X^{k,\ell }\) is discarded. In line 18 one may check, for example, whether the choice \(x^{k,\ell }={{\,\mathrm{mid}\,}}(X^{k,\ell })\) is feasible. We will discuss more general constructions for \(x^{k,\ell }\) in Sect. 8. The update in line 20 can be implemented with the algorithms from [8, 22]. Line 21 can become numerically expensive for long lists \(\mathcal{L}^k\), so that one may decide not to use it in each iteration, but only occasionally. In line 23 one can either perform a pairwise comparison to determine the nondominated set or, in case of a long list \(\mathcal{L}^k\), use for instance the Jahn-Graef-Younes method, see [17, 19].
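For illustration, a minimal sketch of the two-pass Jahn-Graef-Younes idea might look as follows; this is our own simplified rendering, not the exact procedure from [17, 19]:

```python
import numpy as np

def dominates(a, b):
    """a dominates b iff a <= b componentwise with a != b."""
    return bool(np.all(a <= b) and np.any(a < b))

def jahn_graef_younes(points):
    """Nondominated subset of a finite point list via two passes:
    the forward pass discards every point dominated by an earlier kept
    point, the backward pass over the reduced (reversed) list removes
    the remaining dominated points."""
    kept = []
    for y in points:
        if not any(dominates(u, y) for u in kept):
            kept.append(y)
    result = []
    for y in reversed(kept):
        if not any(dominates(u, y) for u in result):
            result.append(y)
    return result
```

In practice the forward pass often discards most points, so that the quadratic effort of the backward pass only applies to a much shorter list.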


Convergence

In this section we show that for any \(\varepsilon > 0\) Algorithm 1 terminates after a finite number of iterations. To this end, we extend results from the single objective case to our multiobjective branch-and-bound approach. The main observation for the following is that for a given subbox \(X'\subseteq X\) the entries \(({{\widetilde{a}}}'_T)_j\) of the induced ideal point estimate for \(f(M(X'))\) can be interpreted as the results of a lower bounding procedure for \(f_j\) on \(M(X')\).

In preparation of the convergence proof we briefly review the definition and some important properties of such lower bounding procedures in the single objective branch-and-bound framework. As common in global optimization, a sequence of boxes \((X^k)\) is called exhaustive if we have \(X^{k+1} \subseteq X^k\) for all \(k \in {\mathbb {N}}\), and \(\lim _k {{\,\mathrm{diag}\,}}(X^k) = 0\) where \({{\,\mathrm{diag}\,}}(X')\) denotes the diagonal length of a box \(X'\subseteq X\). The following definitions are taken from [20].

Definition 8.1

Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\), \(g:\mathbb {R}^n\rightarrow \mathbb {R}^k\) and \(M(X)=\{x\in X\mid g(x)\le 0\}\).

  1. (a)

    A function \(\ell \) from the set of all subboxes \(X'\) of X to \({\mathbb {R}}\cup \{+\infty \}\) is called M-dependent lower bounding procedure if \(\ell (X') \le \inf _{x \in M(X')} f(x)\) holds for all subboxes \(X' \subseteq X\) and any choice of the functions f and g.

  2. (b)

    An M-dependent lower bounding procedure is called convergent if every exhaustive sequence of boxes \((X^k)\) and all choices of f and g satisfy

    $$\begin{aligned} \lim _{k} \ell (X^k) = \lim _{k} \inf _{x \in M(X^k)} f(x). \end{aligned}$$

We remark that for any exhaustive sequence of boxes \((X^k)\) the set \(\bigcap _{k\in \mathbb {N}}X^k\) is a singleton, say \(\{{{\widetilde{x}}}\}\). In the case \({{\widetilde{x}}} \not \in M(X)\) we have \(M(X^k)=\emptyset \) for all sufficiently large k, so that with the usual convention \(\inf _{x \in \emptyset } f(x) = +\infty \) the convergence of a lower bounding procedure then requires \(\lim _k\ell (X^k)=+\infty \). Moreover, if in the case \({{\widetilde{x}}}\in M(X)\) we have \(x^k\in X^k\) for all k, then in view of \(\lim _k{{\,\mathrm{diag}\,}}(X^k)=0\) the sequence \((x^k)\) also converges to \({{\widetilde{x}}}\), and the convergence of \(\ell \) and continuity yield

$$\begin{aligned} \lim _k\ell (X^k)=\lim _k\inf _{x\in M(X^k)}f(x)=f({{\widetilde{x}}})=\lim _k f(x^k). \end{aligned}$$

Basically all lower bounding procedures that are commonly used in global optimization are convergent in the sense of Definition 8.1, in particular the ones considered throughout this article, that is, interval arithmetic [18, 27], convex relaxations via the \(\alpha \hbox {BB}\) method [1, 3] as well as lower bounds based on linearization techniques as described in [4, 33,34,35]. Note that in the above case \({{\widetilde{x}}}\not \in M(X)\) these lower bounding procedures even satisfy \(\ell (X^k)=+\infty \) for almost all k, thus implying the convergence property \(\lim _k\ell (X^k)=+\infty \). While this stronger property is employed for discarding inconsistent subproblems in line 16 of Algorithm 1, we will not use it in our convergence analysis.
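As a small illustration of Definition 8.1 and this convergence property, the following sketch computes natural interval extension lower bounds for the unconstrained model function \(f(x)=x_1^2+x_2^2\) along an exhaustive sequence of boxes shrinking to \(\widetilde{x}=(0.3,0.4)\). The example, including the hand-coded interval square in place of a library such as Intlab, is ours:

```python
def sq_lo(l, u):
    """Lower endpoint of the interval extension of t -> t**2 on [l, u]."""
    return 0.0 if l <= 0.0 <= u else min(l * l, u * u)

def ell(box):
    """Interval arithmetic lower bound ell(X') for f(x) = sum_i x_i**2
    on a box given as a list of coordinate intervals (l_i, u_i)."""
    return sum(sq_lo(l, u) for (l, u) in box)

x_tilde = (0.3, 0.4)                # singleton limit of the box sequence
bounds = []
for k in range(25):                 # exhaustive sequence X^k around x~
    r = 2.0 ** (-k)
    bounds.append(ell([(c - r, c + r) for c in x_tilde]))
# each bounds[k] is a valid lower bound for f on X^k, and
# bounds[k] converges to f(x~) = 0.25 as diag(X^k) -> 0
```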

As mentioned above, in Algorithm 1 the entries \((\widetilde{a}'_T)_j\) of the induced ideal point estimate for \(f(M(X'))\) are the results \(\ell _{T,j}(X')\) of a lower bounding procedure \(\ell _{T,j}\) for \(f_j\) on \(M(X')\). For the following let \(\ell _T(X')\) denote the vector with entries \(\ell _{T,j}(X')\), so that we may write \(\widetilde{a}'_T=\ell _T(X')\).

We start by showing that problems with empty feasible sets are handled by our branch-and-bound method as expected.

Lemma 8.2

Let the feasible set of problem MOP be empty and assume that in Algorithm 1 the entries of the induced ideal point estimates are computed by some convergent M-dependent lower bounding procedure. Then for any \(\varepsilon > 0\) Algorithm 1 terminates after a finite number of iterations.


Proof

We assume the contrary and derive a contradiction. First of all, note that due to the absence of feasible points the local upper bounds are never updated, so that we have \({{\,\mathrm{lub}\,}}({\mathcal {F}}^k) = \{{\overline{z}}\}\) in each iteration k.

Under our assumption that the algorithm does not terminate it is well known from the theory of spatial branch-and-bound methods that an exhaustive sequence of boxes \((X^{k_{\nu }})\) is generated. Since none of these boxes contains a point from M(X), the convergence of the lower bounding procedure yields \(\ell _T(X^{k_{{\bar{\nu }}}})=\widetilde{a}_T^{k_{{\bar{\nu }}}} > {\overline{z}}\) for some sufficiently large \({\bar{\nu }}\). Thus in line 16 of Algorithm 1 the box \(X^{k_{{\bar{\nu }}}}\) is discarded. For that reason, the sequence of boxes \((X^{k_{\nu }})\) actually breaks off after finitely many elements and cannot be exhaustive, which is the desired contradiction. \(\square \)

Analogously to the single objective case, we must assume that in line 18 of Algorithm 1 feasible points are eventually available to improve the local upper bounds. This is formally stated in the following assumption, which we shall discuss in some detail after the proof of convergence.

Assumption 8.3

There exist some \(\delta > 0\) and some procedure so that for all boxes \(X'\subseteq X\) created by Algorithm 1 with \({{\,\mathrm{diag}\,}}(X')<\delta \) and \(M(X')\ne \emptyset \) a feasible point \(x' \in M(X')\) can be computed.

Lemma 8.4

Let the feasible set of problem MOP be nonempty, assume that in Algorithm 1 the entries of the induced ideal point estimates are computed by some convergent M-dependent lower bounding procedure, and let Assumption 8.3 hold. Then for any \(\varepsilon > 0\) Algorithm 1 terminates after a finite number of iterations.


Proof

Assume that the algorithm does not terminate. Then it generates an exhaustive sequence of boxes \((X^{k_{\nu }})\) all of which contain a feasible point. Note that otherwise the sequence of boxes would terminate after finitely many iterations, for a similar reason as in the proof of Lemma 8.2 (see, e.g., [20] for a more detailed explanation). As the sequence \((X^{k_\nu })\) is exhaustive and due to Assumption 8.3, for all sufficiently large \(\nu \) a point \(x^{k_\nu }\in M(X^{k_\nu })\) is available in line 18 of Algorithm 1. Since the image point \(f(x^{k_{\nu }})\) is used to update the provisional nondominated set in line 19, it either becomes an element of \({\mathcal {F}}^{k_\nu }\) or it is ignored since it is dominated by some point from \({\mathcal {F}}^{k_\nu }\). In any case, there is some \(q^{k_\nu } \in {\mathcal {F}}^{k_\nu }\) with

$$\begin{aligned} q^{k_\nu } \le f(x^{k_{\nu }}). \end{aligned}$$

Due to (18), for some sufficiently large \({\overline{\nu }} \in {\mathbb {N}}\) we additionally have

$$\begin{aligned} f(x^{k_{{\overline{\nu }}}}) - \ell _T(X^{k_{{\overline{\nu }}}}) \le \frac{\varepsilon }{2} e, \end{aligned}$$

where e again stands for the all ones vector. Finally, as the termination criterion was violated in iteration \(k_{{\bar{\nu }}}-1\), the ideal point estimate \({\widetilde{a}}_T^{k_{{\bar{\nu }}}} \in LB_{T,S,N}^{k_{{\bar{\nu }}}-1}\) and the local upper bound \(p^{k_{{\bar{\nu }}}} \in {{\,\mathrm{lub}\,}}({\mathcal {F}}^{k_{{\bar{\nu }}}-1})\) with \({\widetilde{a}}_T^{k_{{\bar{\nu }}}} \le p^{k_{{\bar{\nu }}}}\), in which the maximum is attained, satisfy

$$\begin{aligned} p^{k_{{\bar{\nu }}}} - \ell _T(X^{k_{{\bar{\nu }}}}) = p^{k_{{\bar{\nu }}}} - {\widetilde{a}}_T^{k_{{\bar{\nu }}}}\ge \varepsilon e, \end{aligned}$$

and thus

$$\begin{aligned} \ell _T(X^{k_{{\bar{\nu }}}}) +\frac{\varepsilon }{2} e \le p^{k_{{\bar{\nu }}}} - \frac{\varepsilon }{2} e. \end{aligned}$$

The combination of (19), (20) and (21) yields the chain of inequalities

$$\begin{aligned} q^{k_{{\overline{\nu }}}} \le f(x^{k_{{\overline{\nu }}}}) \le \ell _T(X^{k_{{\overline{\nu }}}}) + \frac{\varepsilon }{2} e \le p^{k_{{\overline{\nu }}}}- \frac{\varepsilon }{2} e < p^{k_{{\overline{\nu }}}} \end{aligned}$$

and, thus, \(q^{k_{{\overline{\nu }}}} < p^{k_{{\overline{\nu }}}}\). This contradicts property (ii) of Definition 4.1 and, hence, the assertion is shown. \(\square \)

From Lemma 8.2 and Lemma 8.4 we immediately obtain the main convergence theorem of the present article.

Theorem 8.5

Assume that in Algorithm 1 the entries of the induced ideal point estimates are computed by some convergent M-dependent lower bounding procedure, and let Assumption 8.3 hold. Then for any \(\varepsilon > 0\) Algorithm 1 terminates after a finite number of iterations.

Since, as discussed above, convergent lower bounding procedures are commonly used in global optimization, the seemingly restrictive assumption of Theorem 8.5 is Assumption 8.3. In the remainder of this section we briefly comment on this. First note that the difficulty of finding feasible points also arises in single objective branch-and-bound methods: if no sufficiently good feasible point is found, the algorithms do not terminate, since the gap between upper and lower bound does not drop below the termination tolerance \(\varepsilon \).

Usually the objective function is evaluated at feasible points in order to obtain upper bounds on the globally minimal value. Such feasible points can be obtained by different approaches. In the simplest case one checks whether the midpoint of the currently examined box is feasible and, if its objective value is sufficiently good, uses it to update the current upper bound. In practice, commonly a local solver is applied to the original problem where, for instance, the midpoint of the current box serves as the starting point. Under Assumption 8.3 one may eventually expect the box midpoint to lie sufficiently close to the feasible set to make this local approach work. Although it in fact works well on many instances of practical interest, in general there is no guarantee that sufficiently good upper bounds are always computed in this way.
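In its simplest form, the feasible point search in line 18 thus amounts to a midpoint feasibility check, possibly followed by warm-starting a local solver at the midpoint. A minimal sketch of the check, with our own hypothetical names and a made-up constraint, is:

```python
import numpy as np

def midpoint_if_feasible(lo, hi, g):
    """Return mid(X') if it satisfies g(mid) <= 0 componentwise, else None
    (cf. line 18 of Algorithm 1; a local solver could be started here)."""
    mid = 0.5 * (np.asarray(lo, dtype=float) + np.asarray(hi, dtype=float))
    return mid if np.all(g(mid) <= 0.0) else None

# a single illustrative constraint g(x) = x_1 + x_2 - 1 <= 0;
# the midpoint of [0, 0.8]^2 is feasible, that of [0.8, 1]^2 is not
g = lambda x: np.array([x[0] + x[1] - 1.0])
```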

One possibility to transfer this approach from the single objective case to multiobjective branch-and-bound might be to apply a scalarization approach and solve the resulting single objective problem locally. However, although this attempt might work in practice, we prefer not to follow this line of research here, since the drawback from the single objective case remains and, in general, there is no proof of convergence.

Instead, in single objective optimization it is often proposed to accept so-called \(\varepsilon _M\)-feasible points, meaning that a point \(x \in X\) is accepted if \(g_i(x) \le \varepsilon _M\), \(i\in \{1,\ldots ,k\}\), holds for some predefined value \(\varepsilon _M > 0\). Unfortunately, a reasonable value for \(\varepsilon _M\) is hard to determine in advance, since even for \(\varepsilon _M\) close to zero usually one cannot be sure that an \(\varepsilon _M\)-feasible point is actually close to the feasible set.

For that reason, so-called upper bounding procedures have been developed that ensure Assumption 8.3 and, thus, termination of a branch-and-bound method by computing a convergent sequence of upper bounds, as for instance proposed in [20] for the purely inequality constrained case. For problems that also involve equality constraints we refer to [14]. These techniques can be adapted to multiobjective problems which is, however, beyond the scope of the present paper.

Numerical illustrations

The present section provides a brief proof of concept for Algorithm 1. Note that we neither aim at studying its numerical behavior for increasing dimensions m and n, nor do we intend to compare the performance of the lower bounding procedures from Sects. 5.3, 5.4 and 5.5 among each other or in comparison to other lower bounding procedures like, for example, Lipschitz based underestimators. In fact, a thorough numerical study is beyond the scope of the present paper and postponed to future research. In contrast, the aim of the article at hand is the introduction of a general framework which makes such comparisons possible in the first place.

For these reasons we illustrate the behavior of Algorithm 1 only along five biobjective examples with decision space dimensions up to \(n=4\), where for three merely box constrained instances we employ interval arithmetic as the lower bounding technique T, while for the two instances with additional constraints we test the linearization technique from Sect. 5.5. The box removal step from line 21 is performed only in every 50th iteration to speed up the computation time. We implemented the algorithm in Matlab 2019 with the Intlab Toolbox (version 6) for interval arithmetic [29], and ran it on a computer with an Intel i7 processor with 3.60 GHz and 32 GB of RAM.

Test Problem 9.1

As the first merely box constrained test instance we choose the Fonseca-Fleming problem from [13],

$$\begin{aligned} FF: \quad \min \,&\begin{pmatrix} 1-\exp \left( - {\sum }_{i=1}^n\left( x_i- \frac{1}{\sqrt{n}}\right) ^2\right) \\ 1-\exp \left( - {\sum }_{i=1}^n\left( x_i+ \frac{1}{\sqrt{n}}\right) ^2\right) \end{pmatrix} \text {s.t. }\quad -4 \le x_i \le 4,\ i=1,\ldots ,n\\ \end{aligned}$$

for the three decision space dimensions \(n\in \{2,3,4\}\). The nondominated set of this problem can be determined analytically as

$$\begin{aligned} Y_N=\left\{ \begin{pmatrix} 1-\exp (-4(t-1)^2) \\ 1-\exp (-4t^2) \end{pmatrix}\mid t\in [0,1]\right\} \end{aligned}$$

for any \(n\in \mathbb {N}\) (cf. Fig. 2a).
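This parametrization can be verified numerically: for any n, the points \(x=c\,(1,\ldots ,1)\) with \(c=(2t-1)/\sqrt{n}\) and \(t\in [0,1]\) are mapped by the objectives of FF exactly onto the curve above. A quick check with our own helper names:

```python
import math

def ff(x):
    """Objective vector of the Fonseca-Fleming problem FF."""
    n = len(x)
    s1 = sum((xi - 1.0 / math.sqrt(n)) ** 2 for xi in x)
    s2 = sum((xi + 1.0 / math.sqrt(n)) ** 2 for xi in x)
    return (1.0 - math.exp(-s1), 1.0 - math.exp(-s2))

def front(t):
    """Analytic parametrization of Y_N, t in [0, 1]."""
    return (1.0 - math.exp(-4.0 * (t - 1.0) ** 2),
            1.0 - math.exp(-4.0 * t ** 2))

# x = c*(1,...,1) with c = (2t-1)/sqrt(n) gives
# sum (x_i -+ 1/sqrt(n))^2 = 4(t-1)^2 and 4t^2, respectively
n = 3
for t in (0.0, 0.25, 0.7, 1.0):
    c = (2.0 * t - 1.0) / math.sqrt(n)
    assert all(abs(a - b) < 1e-12 for a, b in zip(ff([c] * n), front(t)))
```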

For \(n=2\) a set of image points for this problem is generated by the evaluation of the objective functions on an equidistant grid in the decision space, and illustrated in Fig. 2a. For \(n=2\) and the termination tolerance \(\varepsilon =0.1\), Fig. 2b and c depict the terminal enclosure and the terminal provisional nondominated set, respectively. The refinement of the latter for \(\varepsilon =0.05\) is shown in Fig. 2d.

Figure 2e and f provide an impression of the distribution of visited points in the decision space for \(n=2\) and \(n=3\), respectively, where also the pre-image points of the terminal provisional nondominated set (i.e., the provisional efficient points) are marked. This indicates that the suggested branch-and-bound framework promotes clustering of the decision space iterates near the efficient set, as opposed to a uniform distribution of these points, which would correspond to a worst-case algorithm [36].

Fig. 2

Test problem FF (Fonseca–Fleming)

Table 1 Computational results for test problems

Table 1 provides some more details for Test Problem 9.1 up to decision space dimension \(n=4\) and for the following test instances, namely the discarding method (T), the largest of the shortest box edge lengths (w), the number of discarded boxes (# disc. boxes), the number of iterations (# iter.), the computational time (time), and the number of points in the final provisional nondominated set (#\({\mathcal {F}}\)).

The graphical illustrations for the following four test problems are structured analogously to Test Problem 9.1. As also similar comments apply, these will not be explicitly repeated.

Test Problem 9.2

We use a slightly modified version of the merely box constrained problem DEB2DK from [6], which has the form

$$\begin{aligned} DEB2DK: \quad \min \,&\begin{pmatrix} r(x) \sin (x_1\pi /2 ) \\ r(x) \cos (x_1\pi /2 ) \end{pmatrix} \quad \text {s.t. }\quad 0 \le x_1,x_2 \le 1 \end{aligned}$$

with \(r(x)=(5+10(x_1-0.5)^2+\cos (4\pi x_1))(1+9x_2)\). Figure 3 and Table 1 provide the results.

Fig. 3

Test problem DEB2DK

Test Problem 9.3

This problem consists of two Shekel functions which are well known from evaluations of single-objective global optimization algorithms (see, e.g., [28]).

$$\begin{aligned} Shekel: \quad \min \,&\begin{pmatrix} -\frac{0.1}{0.1+(x_1-0.1)^2+2(x_2-0.1)^2}- \frac{0.1}{0.14+20((x_1-0.45)^2+(x_2-0.55)^2)} \\ - \frac{0.1}{0.15+40((x_1-0.55)^2+(x_2-0.45)^2)} - \frac{0.1}{0.1+(x_1-0.3)^2+(x_2-0.95)^2} \end{pmatrix} \\ \text {s.t. }&0 \le x_1, x_2 \le 1. \end{aligned}$$

The results are given in Figure 4 and Table 1.

Fig. 4

Test problem Shekel-functions

Test Problem 9.4

A test problem with additional constraints is given by the Constr-Ex problem from [9],

$$\begin{aligned} {\textit{Constr-Ex:}} \quad \min \,&\begin{pmatrix} x_1 \\ \frac{1+x_2}{x_1} \end{pmatrix} \\ \text {s.t. }&x_2+9x_1 \ge 6 \\&9x_1 -x_2 \ge 1 \\&0.1 \le x_1 \le 1 \\&0 \le x_2 \le 5. \end{aligned}$$

The results of Algorithm 1 are provided by Figure 5 and Table 1.

Fig. 5

Test problem Constr-Ex

Test Problem 9.5

As a second test instance with additional constraints we take the problem

$$\begin{aligned} TP5: \quad \min \,&\begin{pmatrix}x_1^2-x_2 \\ -0.5x_1-x_2-1 \end{pmatrix} \\ \text {s.t. }&6.5-\frac{x_1}{6}-x_2 \ge 0 \\&7.5-0.5 x_1-x_2 \ge 0 \\&30-5x_1-x_2 \ge 0 \\&-7 \le x_1, x_2 \le 4 \end{aligned}$$

from [5]. In Figure 6 and Table 1 the results are shown.

Fig. 6

Test problem TP5

Final remarks

In line with the approaches from [26, 31] and, partly, [11, 12], Algorithm 1 also generates a finite subset of the \(\varepsilon \)-nondominated set \(Y^\varepsilon _N\) for a prescribed tolerance \(\varepsilon >0\), namely the provisional nondominated set \(\mathcal{F}\) upon termination. As a proof of concept, the preliminary computational experience indicates that Algorithm 1 has the potential to also solve practical problems in higher dimensions and with more objective functions.

Let us mention that for \(m\ge 3\) we expect the computation time to increase quickly for at least two reasons. First, adaptively refining the boxes which form the enclosure of the nondominated set may have exponential complexity in m and, second, increasing the number of objective functions usually increases the fraction of nondominated points among all attainable points, resulting in additional algorithmic workload. A thorough study of the algorithm’s numerical behavior for increasing dimensions m and n as well as a systematic comparison of different lower bounding procedures is beyond the scope of the present paper and postponed to future research.

Examples show that for \(\varepsilon \rightarrow 0\) one cannot expect the sets \(\mathcal{F}\) to converge to the nondominated set \(Y_N\), but only to the weakly nondominated set \(Y_{wN}\), where, furthermore, the latter convergence may be arbitrarily slow in \(\varepsilon \). We have also seen in the proof of Lemma 6.1 that our discarding rules guarantee that the set \(\bigcup _{X'\in \mathcal{L}}M(X')\) as well as its superset \(\bigcup _{X'\in \mathcal{L}}X'\) form coverings of the set of efficient points \(X_E\) of MOP. Consequently the set \(\bigcup _{X'\in \mathcal{L}}f(M(X'))\), its superset \(\bigcup _{X'\in \mathcal{L}}f(X')\), and any further supersets like \(\bigcup _{X'\in \mathcal{L}}F(X')\) for an interval extension F of f, form coverings of the set of nondominated points \(Y_N\).

In the framework of our approach it is clearly also important to study approximation properties of the enclosing sets \(E(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\) for \(w(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\rightarrow 0\). Observe that \(Y_N\) is known to be a subset of the boundary of the image set f(M) [10] so that, if f(M) is a topological manifold with boundary, the dimension of \(Y_N\) can be at most \(m-1\). At the same time, for \(w(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\rightarrow 0\) all boxes [ap] in (16) ‘become flat at least in one direction’ and thus at most \((m-1)\)-dimensional. This fits well to the expected dimension of \(Y_N\).

However, in general one may not expect that the enclosures \(E(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\) uniformly approximate the nondominated set \(Y_N\) for \(w(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\rightarrow 0\). This is due to their different connectedness structures. In fact, examples like Test Problem 9.2 and Test Problem 9.3 show that, as opposed to the convex case [10], in the nonconvex case the set \(Y_N\) may be disconnected while \(E(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\) is connected.

One way to construct a subset of \(E(LB_{T,S,N},{{\,\mathrm{lub}\,}}(\mathcal{F}))\) with the same connectedness structure as \(Y_N\) may be to intersect it with the above union of interval enclosures

$$\begin{aligned} \bigcup _{X'\in \mathcal{L}}F(X')\supseteq Y_N \end{aligned}$$

for sufficiently small boxes \(F(X')\), \(X'\in \mathcal{L}\). However, in our framework this is not desirable as we wish to avoid the effort of constructing such small boxes in the first place.
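Should one nevertheless pursue this intersection idea, the geometric operation itself is cheap once the boxes are available, since the intersection of two boxes is again a box or empty. A sketch with hypothetical names:

```python
def intersect(b1, b2):
    """Intersection of two boxes given as (lower, upper) corner tuples;
    returns None if the intersection is empty."""
    a = tuple(max(l1, l2) for l1, l2 in zip(b1[0], b2[0]))
    p = tuple(min(u1, u2) for u1, u2 in zip(b1[1], b2[1]))
    return (a, p) if all(ai <= pi for ai, pi in zip(a, p)) else None

def intersect_unions(E, G):
    """Pairwise intersections of two unions of boxes."""
    out = []
    for b1 in E:
        for b2 in G:
            b = intersect(b1, b2)
            if b is not None:
                out.append(b)
    return out
```

The cost of this step is quadratic in the number of boxes; the actual expense criticized above lies in producing sufficiently small boxes \(F(X')\) in the first place.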

We would also like to mention that our branch-and-bound framework can be generalized to the presence of equality constraints in the description of the feasible set of MOP. The crucial modifications are appropriate generalizations of the lower bounding procedures, as well as ensuring that Assumption 8.3 holds for the convergence proof. As the details are mainly technical, we omit them from the present paper for clarity of presentation.

Data Availability Statement

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Change history

  • 10 February 2021

    The publisher introduced an error in the last sentence of the proof for Lemma 3.2. Its beginning “The supremum of \(\widetilde{W}(LB,UB)\)” must be replaced by “The supremum of \(\overline{W}(LB,UB)\)”.

  • 23 February 2021

    A Correction to this paper has been published: https://doi.org/10.1007/s10898-021-00998-0


1. Adjiman, C.S., Dallwig, S., Floudas, C.A., Neumaier, A.: A global optimization method, \(\alpha \)BB, for general twice-differentiable constrained NLPs: I. Theoretical advances. Comput. Chem. Eng. 22, 1137–1158 (1998)

2. Al-Khayyal, F.A., Falk, J.E.: Jointly constrained biconvex programming. Math. Oper. Res. 8, 273–286 (1983)

3. Androulakis, I.P., Maranas, C.D., Floudas, C.A.: \(\alpha \)BB: a global optimization method for general constrained nonconvex problems. J. Glob. Optim. 7, 337–363 (1995)

4. Belotti, P.: Disjunctive cuts for nonconvex MINLP. In: Lee, J., Leyffer, S. (eds.) Mixed Integer Nonlinear Programming, pp. 117–144. Springer, Berlin (2012)

5. Binh, T.: A multiobjective evolutionary algorithm: the study cases. In: Proceedings of the 1999 Genetic and Evolutionary Computation Conference, pp. 127–128 (1999)

6. Branke, J., Deb, K., Dierolf, H., Osswald, M.: Finding knees in multi-objective optimization. In: Parallel Problem Solving from Nature: PPSN VIII, pp. 722–731. Springer, Berlin (2004)

7. Chen, G., Huang, X., Yang, X.: Vector Optimization. Springer, Berlin (2005)

8. Dächert, K., Klamroth, K., Lacour, R., Vanderpooten, D.: Efficient computation of the search region in multi-objective optimization. Eur. J. Oper. Res. 260, 841–855 (2017)

9. Deb, K.: Multi-objective Optimization Using Evolutionary Algorithms. Wiley, New York (2001)

10. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2005)

11. Evtushenko, Yu.G., Posypkin, M.A.: Method of non-uniform coverages to solve the multicriteria optimization problems with guaranteed accuracy. Autom. Remote Control 75, 1025–1040 (2014)

12. Fernández, J., Tóth, B.: Obtaining the efficient set of nonlinear biobjective optimization problems via interval branch-and-bound methods. Comput. Optim. Appl. 42, 393–419 (2009)

13. Fonseca, C.M., Fleming, P.J.: Multiobjective genetic algorithms made easy: selection, sharing and mating restriction. In: Proceedings of the 1st International Conference on Genetic Algorithms in Engineering Systems: Innovations and Applications, pp. 45–52. IEEE Press, Piscataway, NJ (1995)

14. Füllner, C., Kirst, P., Stein, O.: Convergent upper bounds in global minimization with nonlinear equality constraints. Math. Program. (2020). https://doi.org/10.1007/s10107-020-01493-2

15. Gerth (Tammer), C., Weidner, P.: Nonconvex separation theorems and some applications in vector optimization. J. Optim. Theory Appl. 67, 297–320 (1990)

16. Göpfert, A., Riahi, H., Tammer, C., Zalinescu, C.: Variational Methods in Partially Ordered Spaces. CMS Books. Springer, New York (2003)

17. Günther, C., Popovici, N.: New algorithms for discrete vector optimization based on the Graef–Younes method and cone-monotone sorting functions. Optimization 67, 975–1003 (2018)

18. Hansen, E., Walster, G.W.: Global Optimization Using Interval Analysis. Marcel Dekker Inc., New York (2004)

19. Jahn, J., Rathje, U.: Graef–Younes method with backward iteration. In: Küfer, K.-H., et al. (eds.) Multicriteria Decision Making and Fuzzy Systems: Theory, Methods and Applications, pp. 75–81. Shaker, Aachen (2006)

20. Kirst, P., Stein, O., Steuermann, P.: Deterministic upper bounds for spatial branch-and-bound methods in global minimization with nonconvex constraints. TOP 23, 591–616 (2015)

21. Klamroth, K.: Personal communication (2017)

22. Klamroth, K., Lacour, R., Vanderpooten, D.: On the representation of the search region in multi-objective optimization. Eur. J. Oper. Res. 245, 767–778 (2015)

23. Loridan, P.: \(\epsilon \)-solutions in vector minimization problems. J. Optim. Theory Appl. 43, 265–276 (1984)

24. McCormick, G.P.: Computability of global solutions to factorable nonconvex programs: part I. Convex underestimating problems. Math. Program. 10, 147–175 (1976)

25. Miettinen, K.: Nonlinear Multiobjective Optimization. Springer, Berlin (1998)

26. Niebling, J., Eichfelder, G.: A branch-and-bound-based algorithm for nonconvex multiobjective optimization. SIAM J. Optim. 29, 794–821 (2019)

27. Neumaier, A.: Interval Methods for Systems of Equations. Cambridge University Press, Cambridge (1990)

28. Pardalos, P., Žilinskas, A., Žilinskas, J.: Non-convex Multi-Objective Optimization. Springer, Berlin (2017)

29. Rump, S.M.: INTLAB – interval laboratory. In: Csendes, T. (ed.) Developments in Reliable Computing, pp. 77–104. Springer, Dordrecht (1999)

30. Ruzika, S., Wiecek, M.M.: Approximation methods in multiobjective programming. J. Optim. Theory Appl. 126, 473–501 (2005)

31. Scholz, D.: The multicriteria big cube small cube method. TOP 18, 286–302 (2010)

32. Sawaragi, Y., Nakayama, H., Tanino, T.: Theory of Multiobjective Optimization. Elsevier, Amsterdam (1985)

33. Smith, E., Pantelides, C.: Global optimisation of nonconvex MINLPs. Comput. Chem. Eng. 21, 791–796 (1997)

34. Smith, E., Pantelides, C.: A symbolic reformulation/spatial branch-and-bound algorithm for the global optimisation of nonconvex MINLPs. Comput. Chem. Eng. 23, 457–478 (1999)

35. Tawarmalani, M., Sahinidis, N.V.: Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming. Springer, Dordrecht (2002)

36. Žilinskas, A.: On the worst-case optimal multi-objective global optimization. Optim. Lett. 7, 1921–1928 (2013)

37. Žilinskas, A., Žilinskas, J.: Adaptation of a one-step worst-case optimal univariate algorithm of bi-objective Lipschitz optimization to multidimensional problems. Commun. Nonlinear Sci. Numer. Simul. 21, 89–98 (2015)



Open Access funding enabled and organized by Projekt DEAL. The authors wish to thank the two anonymous referees and Anne-Sophie Reichhardt for their valuable comments and remarks on an earlier version of this manuscript.

Author information



Corresponding author

Correspondence to Oliver Stein.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Eichfelder, G., Kirst, P., Meng, L. et al. A general branch-and-bound framework for continuous global multiobjective optimization. J Glob Optim (2021). https://doi.org/10.1007/s10898-020-00984-y



Keywords

  • Multiobjective optimization
  • Nonconvex optimization
  • Global optimization
  • Branch-and-bound algorithm
  • Enclosure