# A modified NBI and NC method for the solution of N-multiobjective optimization problems

DOI: 10.1007/s00158-011-0729-5

Cite this article as: de S. Motta, R., Afonso, S.M.B. & Lyra, P.R.M. Struct Multidisc Optim (2012) 46: 239. doi:10.1007/s00158-011-0729-5

## Abstract

Multiobjective optimization (MO) techniques allow a designer to model a specific problem considering more realistic behavior, which commonly involves the satisfaction of several targets simultaneously. A fundamental concept adopted in the multicriteria optimization task is that of Pareto optimality. In this paper we test several well-known procedures for dealing with multiobjective optimization problems (MOP) and propose a novel modified procedure that, when applied to the existing Normal Boundary Intersection (NBI) and Normal Constraint (NC) methods for more than two objectives, overcomes some of their deficiencies. For the three- and four-objective applications analyzed here, the proposed scheme presents the best performance, in terms of both quality and efficiency in obtaining a set of proper Pareto points, when compared to the analyzed existing approaches.

### Keywords

Multiobjective optimization · Multidimensional Pareto frontier · NBI method · NC method

## 1 Introduction

Advances in computational and numerical capabilities allow more efficient engineering design through the solution of optimization problems. Thus, new realistic and challenging optimization applications have been tackled. One of these is the assumption that the true goal of a design is the improvement of a set of objectives. This formulation leads to the so-called multiobjective optimization problem.

A fundamental concept adopted in the multicriteria optimization task is that of Pareto optimality (Collette and Siarry 2004), in which the solution of the MOP is the set of points that have an optimal trade-off relation. Several approaches proposed in the literature to obtain Pareto points are implemented in this work. Among them are the NBI (Das and Dennis 1996) and NC (Messac et al. 2003) methods, which are efficient approaches to obtain a good Pareto point distribution for two-objective optimization problems (Macedo 2002; Bates 2003; Lalonde et al. 2009; Motta 2009). For problems involving more than two objective functions, both standard techniques may fail to cover the whole Pareto frontier.

To overcome such drawbacks, a new modification of the above-mentioned methods is presented in this work. The purpose is to obtain an even distribution of points over the whole Pareto frontier without significant additional computational cost. Another modification of the standard methods, such as that proposed by Messac and Mattson (2004) for the NC method, is also examined.

A detailed description of the proposed modification is presented and the scheme is tested and compared to other existing schemes by the solution of several model problems. For the analyzed applications the results obtained validate the proposed scheme and demonstrate its superiority against the other tested schemes.

The next two sections briefly describe the MOP and the concept of Pareto optimality. After that, in Section 4, four methods proposed in the literature are detailed and their difficulties in handling problems involving more than two objectives are illustrated, especially for the NBI and NC methods. Also, in this section, a modification to these methods is proposed. Section 5 presents some examples of MOP with three and four objectives that illustrate the behavior of the results obtained by the analyzed methods, as well as those obtained by the proposed approach. The last section draws the main conclusions of the present work.

## 2 Mathematical definition

A general MOP can be stated as

\[ \min_{\bf x}\ {\bf F}({\bf x}) = \left[ f_1({\bf x}),\, f_2({\bf x}),\, \ldots,\, f_{nobj}({\bf x}) \right]^T \qquad (1) \]

subject to

\[ h_k({\bf x}) = 0, \qquad g_i({\bf x}) \le 0, \qquad x_{lj} \le x_j \le x_{uj}, \qquad (2) \]

where **x** is the design variable vector, *n* is the dimension of **x**, **F**(**x**) is the set of objective functions, which is to be minimized, *nobj* is the number of objective functions, *h*_{k}(**x**) is an equality constraint, *g*_{i}(**x**) is an inequality constraint and *x*_{lj}, *x*_{uj} are, respectively, the lower and upper bounds of a typical design variable. Differently from scalar optimization problems, in which a unique objective function is involved, a MOP involves more than one objective and is also referred to as vector optimization.

## 3 Pareto minima concept

A point **x**^{p} is a Pareto minimum if there is no other point **x** such that

\[ f_j({\bf x}) \le f_j({\bf x}^p) \quad \mbox{for all } j = 1, \ldots, nobj, \]

with strict inequality holding for one *f*_{j} at least.

A single solution of the problem described in (1) and (2) is frequently impossible to obtain because the objective functions in general conflict with each other. Using the Pareto concept, the designer has to identify as many Pareto minimum points as possible. These points can be used to construct a point-wise approximation to the Pareto front.
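The dominance test above translates directly into a filter that extracts the non-dominated points from a cloud of candidate designs. The sketch below is an illustration only (the sample objective values are hypothetical, not from the paper):

```python
import numpy as np

def pareto_filter(F):
    """Return the indices of the non-dominated rows of F.

    F is an (nPoints x nobj) array of objective values (all minimized).
    A point p is a Pareto minimum if no other point q satisfies
    f_j(q) <= f_j(p) for every j with strict improvement in one f_j at least.
    """
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[j] <= F[i]) and np.any(F[j] < F[i]):
                keep[i] = False          # point i is dominated by point j
                break
    return np.flatnonzero(keep)

# Three candidate designs in a bi-objective space:
F = np.array([[1.0, 3.0],   # Pareto minimum
              [2.0, 2.0],   # Pareto minimum (trade-off with the first)
              [3.0, 3.0]])  # dominated by both
print(pareto_filter(F))     # -> [0 1]
```

A filter of this kind is what the *nEPp* count discussed later effectively measures: points that survive it (and are feasible and non-overlapping) are the effective Pareto points.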

## 4 Schemes for generating the Pareto front

There are several techniques to obtain the set of Pareto minima. In this work we discuss the so-called Weighting Sum method (WS) (Steuer 1985), the Min–Max method (Hwang et al. 1980), the Normal Boundary Intersection method (NBI) (Das and Dennis 1996) and the Normalized Normal-Constraint method (NC) (Messac et al. 2003). Currently, in the literature, the latter two strategies are pointed out as the most successful in obtaining Pareto curves (for bi-objective problems). The problems that arise from such methods, when more than two objectives are involved, are discussed in Section 4.5.

### 4.1 WS method

In the WS method the MOP is scalarized as

\[ \min_{\bf x}\ \sum_{k=1}^{nobj} \beta_k^j\, \frac{f_k({\bf x})}{f_{0k}}, \]

where *f*_{0k} is the *k*th objective function evaluated at the initial design **x**_{0} and the elements \(\beta _k^j \) are the weighting coefficients, which will also be used in all subsequent techniques discussed here. They represent the relative importance of each objective and are normalized according to:

\[ \sum_{k=1}^{nobj} \beta_k^j = 1, \qquad \beta_k^j \ge 0. \]

The Pareto optimum points are obtained solving many scalarizations of the MOP for various **β**^{j}, *j* = 1 ...*nPp*, where *nPp* is the total number of Pareto points required.

This technique presents the drawback that an even spread of weights rarely produces an even spread of Pareto points, and it is also unable to find solutions in non-convex regions. In certain cases, the difficulty of finding Pareto minimum points increases and the designer cannot obtain an estimate of the shape of the trade-off frontier. Recently, different adaptive WS methods have been proposed by several researchers (Kim and Weck 2004; Huang et al. 2008). Different types of weighted metrics for the objectives can be considered as alternatives to improve the standard form of this scheme (Huang et al. 2008). Kim and Weck (2004) overcame the concave-region problem and obtained uniformly spaced Pareto points through an adaptive weighted sum method that solves MOP with more than two objective functions. Their approach is not analyzed here.
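The WS scalarization can be sketched in a few lines. The quadratic test functions below and the use of scipy's default solver are assumptions for illustration only (they are not the paper's examples); each weight vector produces one scalar subproblem:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical smooth bi-objective problem (illustration only):
f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2
f2 = lambda x: x[0]**2 + (x[1] - 1.0)**2

x0 = np.zeros(2)
f0 = np.array([f1(x0), f2(x0)])           # objectives at the initial design x_0

pareto = []
for b in np.linspace(0.0, 1.0, 11):       # weights with beta_1 + beta_2 = 1
    beta = np.array([b, 1.0 - b])
    # WS scalarization: weighted sum of the objectives normalized by f_0k
    ws = lambda x: beta[0] * f1(x) / f0[0] + beta[1] * f2(x) / f0[1]
    pareto.append(minimize(ws, x0).x)

print(np.round(pareto[5], 3))             # beta = [0.5, 0.5] -> approx [0.5, 0.5]
```

For this convex, symmetric problem the even spread of weights happens to give an even spread of points; in general (and this is the drawback discussed above) it does not.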

### 4.2 Min–Max method

In the Min–Max method each objective is first normalized as

\[ \bar f_k = \frac{f_k - \min f_k}{\max f_k - \min f_k}, \]

where "min *f*_{k}" and "max *f*_{k}" are, respectively, the minimum and maximum values of *f*_{k} in the range of the scalar (or single objective) optimization solutions, and the new vector of objective functions is \({\bar{\bf F}} = [\bar f_1, \ldots, \bar f_{nobj}]\). A weighting coefficient *β*_{k} is assigned to each objective function \(\bar f_k \) and the following optimization problem is carried out:

\[ \min_{\bf x}\ \max_{k}\ \beta_k \bar f_k({\bf x}). \]
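The min–max scalarization can be rewritten with an auxiliary variable *t* bounding every weighted normalized objective, which makes it tractable for a smooth constrained solver. The sketch below uses a hypothetical bi-objective problem and scipy's SLSQP as stand-ins (both are assumptions, not the paper's setup):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical bi-objective problem (illustration only):
f = [lambda x: (x[0] - 1.0)**2 + x[1]**2,
     lambda x: x[0]**2 + (x[1] - 1.0)**2]

# "min f_k" / "max f_k" taken over the single-objective (scalar) solutions:
x_star = [minimize(fk, np.zeros(2)).x for fk in f]       # individual minima
fmin = np.array([f[k](x_star[k]) for k in range(2)])
fmax = np.array([max(f[k](xs) for xs in x_star) for k in range(2)])

def fbar(x):
    # objectives normalized over the range of the scalar-optimum values
    return (np.array([fk(x) for fk in f]) - fmin) / (fmax - fmin)

beta = np.array([0.5, 0.5])
# Min-Max scalarization via an auxiliary variable t = z[-1]:
#   minimize t   subject to   beta_k * fbar_k(x) <= t   for every k
cons = {'type': 'ineq', 'fun': lambda z: z[-1] - beta * fbar(z[:2])}
res = minimize(lambda z: z[-1], np.array([0.3, 0.3, 1.0]),
               constraints=cons, method='SLSQP')
print(np.round(res.x[:2], 3))   # compromise design, approx [0.5, 0.5]
```

The auxiliary-variable reformulation is a standard way to smooth the non-differentiable max operator; the equal weights return the balanced compromise design.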

### 4.3 The NBI method

The NBI procedure is based on the so-called Das parameterization of the Pareto curve (Das and Dennis 1996) and produces an evenly spread distribution of Pareto points. This property makes it well suited for obtaining the trade-off solutions among the various conflicting objectives. The NBI procedure is briefly described in what follows; the details of the scheme can be found in Das and Dennis (1996).

#### 4.3.1 Basic definitions

The utopia point **F**^{*} and the normalized vector of objective functions \({\bar {\bf F}}({\bf{x}})\) are defined, respectively, as

\[ {\bf F}^* = \left[ f_1({\bf x}_1^*),\, \ldots,\, f_{nobj}({\bf x}_{nobj}^*) \right]^T, \qquad {\bar{\bf F}}({\bf x}) = {\bf F}({\bf x}) - {\bf F}^*, \]

where \({\bf x}_i^*\) is the minimizer of *f*_{i}(**x**). The pay-off matrix \(\Phi\) is defined as the matrix whose *i*th column is \({\bar{\bf F}}({\bf x}_i^*)\). The set of convex combinations of the columns of \(\Phi\), i.e. \(\left\{ \Phi \beta : \sum_k \beta_k = 1,\ \beta_k \ge 0 \right\}\), is referred to as the *Convex Hull of Individual Minima* (CHIM). In addition, \(\Im\) is defined to be the set of feasible objective points. The boundary of \(\Im\) is denoted \(\partial\Im\).

#### 4.3.2 Generation of Pareto points

The point of intersection between the normal emanating from a CHIM point **p** and \(\partial\Im\) closest to the origin is the solution of the following subproblem:

\[ \max_{{\bf x},\, t}\ t \quad \mbox{subject to} \quad \Phi \beta + t{\bar{\bf n}} \;\ge\; {\bar{\bf F}}({\bf x}), \qquad (14) \]

together with the original constraints of (2), where \({\bar{\bf n}}\) is the quasi-normal direction to the CHIM pointing toward the origin.

The original method uses the additional constraint \(\Phi \beta + t{\bar{\bf n}}\; = \;{\bar{\bf F}}(x)\). The use of the inequality form of (14), instead of the equality form, gives more freedom to the optimization algorithm to find global Pareto points and avoids ill-formulated optimization subproblems, as will be discussed later in Section 4.5. Subproblem (14) is called the NBI_{β} subproblem. The parameter **β** provides an alternative parameterization of the Pareto set, called the Das parameterization (Das and Dennis 1996). In this procedure an evenly spread distribution for **β** is used. This leads to an even distribution of the points **Φβ** on the CHIM, called "CHIM points". The normals emanating from these evenly spaced points intersect the boundary of the set of feasible solutions, which contains the Pareto optimal points. This process forces the arcs joining two consecutive Pareto points to have equal projections on the CHIM; hence the points obtained are uniformly spread in this sense. The whole concept extends to more than two objectives.
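One NBI_{β} subproblem can be sketched as follows. The bi-objective test functions, the solver (scipy's SLSQP) and the sign convention adopted for the inequality in (14) are all assumptions for illustration; this is not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical bi-objective problem (illustration only):
f = [lambda x: (x[0] - 1.0)**2 + x[1]**2,
     lambda x: x[0]**2 + (x[1] - 1.0)**2]
nobj = 2

# Individual minima, utopia point F* and pay-off matrix Phi:
x_star = [minimize(fk, np.zeros(2)).x for fk in f]
F_star = np.array([f[k](x_star[k]) for k in range(nobj)])
Fbar = lambda x: np.array([fk(x) for fk in f]) - F_star   # shifted objectives
Phi = np.column_stack([Fbar(xs) for xs in x_star])
n_dir = -Phi.sum(axis=1)          # quasi-normal direction, toward the origin

def nbi_subproblem(beta):
    # max t  s.t.  Phi beta + t n >= Fbar(x)   (an inequality form of (14))
    z0 = np.array([0.5, 0.5, 0.0])            # z = [x1, x2, t]
    cons = {'type': 'ineq',
            'fun': lambda z: Phi @ beta + z[-1] * n_dir - Fbar(z[:2])}
    res = minimize(lambda z: -z[-1], z0, constraints=cons, method='SLSQP')
    return res.x[:2]

x_p = nbi_subproblem(np.array([0.5, 0.5]))
print(np.round(x_p, 3))           # Pareto point for the central CHIM point
```

Sweeping an evenly spread set of **β** vectors through `nbi_subproblem` reproduces the even projection of Pareto points onto the CHIM described above.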

### 4.4 The NC method

The NC method was introduced by Messac et al. (2003) and attempts to improve the NBI method by reducing the number of non-Pareto points generated. The method has been shown to be able to generate uniformly spread Pareto points. NC works in a similar fashion to the NBI method, and its graphical representation can be seen in Fig. 3, which illustrates the objective space for a general bi-objective optimization problem and the corresponding Pareto frontier.

The *j*th Pareto point is obtained by solving the following problem (15): minimize \(\bar f_{nobj}\) subject to the normal constraints \(\bar N_k \left( {\bar{\bf F}}({\bf x}) - {\bar{\bf X}}_{pj} \right)^T \le 0\), for *k* = 1, ..., *nobj* − 1, together with the original constraints of (2). Here \({\bar{\bf X}}_{pj}\) is the *j*th point in the utopia plane, which is obtained as a convex combination of the anchor points (the individual minima), and the directions \(\bar N_k\), defined in (16), point from the *k*th anchor point to the last one.

For more than two objectives, the projection of the utopia plane does not cover the entire Pareto front; thus Messac and Mattson (2004) provide an important improvement of the original NC method to overcome this limitation. In addition, minimizing \(\bar f_{nobj}\) does not guarantee the intersection solution between the hyperplanes NU and the Pareto front. The inequality constraints presented in (15) allow the solution to move on the Pareto front parallel to a hyperplane NU, i.e. on \(\bar N_k \left( {{\bar{\bf F}} - {\bar{\bf X}}_{pj} } \right)^T = 0\), for specific *k* and *j*. The limitations and some improvements of this method will be discussed later.
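An NC subproblem of the form (15) can be sketched for a bi-objective case as follows. The test functions and solver are assumptions for illustration, and the objective normalization step of the NC method is skipped because this toy problem is already symmetric:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical bi-objective problem (illustration only):
f = [lambda x: (x[0] - 1.0)**2 + x[1]**2,
     lambda x: x[0]**2 + (x[1] - 1.0)**2]
F = lambda x: np.array([fk(x) for fk in f])

# Anchor points (individual minima) in the objective space:
x_star = [minimize(fk, np.zeros(2)).x for fk in f]
mu1, mu2 = F(x_star[0]), F(x_star[1])
N1 = mu2 - mu1                    # utopia-line direction (the N_k of (16))

def nc_subproblem(w):
    # minimize the last objective subject to N_1 (F(x) - Xp_j)^T <= 0 (15)
    Xp = w * mu1 + (1.0 - w) * mu2            # j-th utopia-line point
    cons = {'type': 'ineq', 'fun': lambda x: -N1 @ (F(x) - Xp)}
    res = minimize(lambda x: f[-1](x), np.full(2, 0.4),
                   constraints=cons, method='SLSQP')
    return res.x

x_p = nc_subproblem(0.5)
print(np.round(x_p, 3))           # approx [0.5, 0.5]
```

The inequality keeps the search on one side of the normal plane through the utopia-line point, while the last objective is driven down to the Pareto front.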

### 4.5 NBI and NC methods for multiple dimension Pareto front

Efficient Pareto points calculation over the whole frontier, for problems involving more than two objective functions, is still an important issue of investigation (Arora et al. 2007).

As already discussed, NBI Pareto points can be obtained through the problem formulated in (14), once a **β** parameterization procedure (Das and Dennis 1996) has been selected. However, for MOP involving more than two objective functions, some of the components of **β** are not necessarily greater than zero (Das and Dennis 1996).

It can be noticed that it is possible to find Pareto points only inside the CHIM, which means that the obtained points do not cover the whole Pareto region. Moreover, the Pareto points could even be located outside the Pareto region, as can be seen in Fig. 4a, in which part of the projection of the segment \(\overline {{\bf{F}}(x_1^* )\ {\bf{F}}(x_3^* )}\) is outside the Pareto region. In this case, the original NBI method will find non-Pareto points, due to the several equality constraints imposed in its original problem formulation. The different constraints imposed here (14) may overcome this problem. Similarly, this is ameliorated in the NC method due to the more flexible constraints in its formulation (15).

The modification proposed here requires an additional definition, denoted CHIM^{ + }. CHIM^{ + } represents the whole Pareto frontier projected onto the hyperplane which contains the individual minima, as indicated in Fig. 4b. It is worth mentioning that CHIM^{ + } is not a convex hull and that for the two-objective case CHIM^{ + } = CHIM (Das and Dennis 1996).

Figure 4b also illustrates the regions not reached by the standard NBI and NC methods (hatched areas in the figure). In order to find points covering the whole Pareto frontier, it is necessary to distribute the base points over CHIM^{ + } instead of over the standard CHIM region, as illustrated in Fig. 4b. However, CHIM^{ + } is not known a priori.

### 4.6 A modified approach

In this work, the proposed methodology aims to obtain the boundary of the Pareto front, and as a consequence the CHIM^{ + } region, and then to determine points in the interior of the Pareto region. To obtain the Pareto front boundary, MOPs involving subsets of the objective functions need to be solved. In this way each border of the Pareto frontier is obtained in turn.

During the revision of the present work, the paper Mueller-Gritschneider et al. (2009) was mentioned by a reviewer. In that paper an idea similar to our methodology is described. It is important to mention that the present approach has been under development since 2008, and the first results were published in Motta (2009). The major difference between the two works is the process of generation of base points (inner points) when more than two objectives are involved. In Mueller-Gritschneider et al. (2009) the distribution of the interior points is calculated by solving a linear programming optimization problem for each point, instead of employing the CVT DOE technique considered here. The CVT procedure may not lead to an exactly uniform distribution of the interior points, but as all the points are distributed simultaneously, the process becomes more efficient when compared to the interior-point generation process used in Mueller-Gritschneider et al. (2009). The methodology proposed here also uses auxiliary optimizations in the subproblems, which improves the reliability of the procedure, as described in what follows. Moreover, numerical evidence that the proposed methodology works for MOP involving more than three objective functions is given in the present paper.

For a three-objective problem, solving a MOP considering only the *f*_{1} and *f*_{2} objectives yields the Pareto border \({\bf{F}}({\bf{x}}_{[1, 2]}^* )\); next, solving a MOP considering only the *f*_{2} and *f*_{3} objectives yields the border \({\bf{F}}({\bf{x}}_{[2, 3]}^* )\), and similarly the border \({\bf{F}}({\bf{x}}_{[1, 3]}^* )\) is attained.

During this stage (bi-objective optimizations) the third objective (the ignored one) may assume a bad value (i.e., resulting in a non-Pareto point); thus an additional (auxiliary) suboptimization must be carried out. The latter is a scalar optimization problem considering the third objective function, with the pair of functions previously optimized fixed as additional constraints. This feature will be detailed later.

In Fig. 6 an example of a Pareto point distribution is shown, in which the black points were obtained as described before. The projection of such points defines the CHIM^{ + } region. This encompasses the whole Pareto front, and the remaining CHIM^{ + } points (highlighted in gray) are distributed inside it (Motta 2009). The distribution of the base points in the interior of the CHIM^{ + } region is obtained by an appropriate CVT (Centroidal Voronoi Tessellation) design of experiments (DoE) algorithm (Du et al. 1999; Giunta et al. 2003).

The CVT method is an iterative algorithm used to redistribute an initial design so that the distances between the points are optimized by achieving the configuration of minimum energy. To characterize the minimum energy, the centers of mass of the Voronoi polygons must be determined to become the new generators. When the generators move to the centers of mass of the Voronoi regions, the relations between the new generators and each point are modified, and so the Voronoi tessellation must be rebuilt. This process is repeated as many times as necessary for a satisfactory uniformity. There are many known methods for constructing a CVT; in the present work an algorithm based on Du et al. (1999) is considered.
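The iteration described above can be sketched as a Lloyd-type loop. The Monte Carlo estimate of the cell centroids below is a simplification assumed for illustration (Du et al. 1999 discuss several centroid-update schemes), and the unit square stands in for the CHIM^{ + } region:

```python
import numpy as np

def cvt(generators, samples, iters=60):
    """Lloyd-type CVT iteration: each generator moves to the center of
    mass of its Voronoi region, estimated here from Monte Carlo samples."""
    g = np.array(generators, dtype=float)
    for _ in range(iters):
        # assign every sample to its nearest generator (its Voronoi cell)
        d = np.linalg.norm(samples[:, None, :] - g[None, :, :], axis=2)
        owner = d.argmin(axis=1)
        for k in range(len(g)):
            cell = samples[owner == k]
            if len(cell):
                g[k] = cell.mean(axis=0)   # move generator to cell centroid
    return g

rng = np.random.default_rng(0)
samples = rng.random((4000, 2))            # the region: the unit square
g = cvt(rng.random((9, 2)), samples)       # 9 generators from a random start
# after the iterations the generators are spread nearly uniformly
```

In the paper's adaptation, the projected boundary Pareto points would additionally enter the tessellation as generators whose positions are held fixed across the iterations.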

In the adapted algorithm used here, the projections of the Pareto points which form the boundaries of the CHIM^{ + } region, where the interior points are distributed, are used to build the Voronoi tessellation, but their positions are kept fixed during the CVT iterations.

- 1.
Perform an optimization for each objective function individually;

- 2.
Perform bi-objective optimizations for each possible bi-objective space;

- 3.
Once the Pareto curves have been obtained for all possible bi-objective combinations, solve the three-objective problems for all possible permutations;

- 4.
Once the Pareto surfaces have been obtained for all combinations of three objective functions, conduct the four-objective problems for all possible permutations;

- 5.
Subsequently continue with the above procedure until the total number of objective functions has been covered.

Applications considering the proposed methodology are presented in Section 5, for MOP involving three and four objectives.

The number of Pareto points obtained in each step, and consequently the number of suboptimizations performed, is determined so that *nPp* remains the same as in the original methods (and equal to the desired *nPp*). For all the MO methods described here, the number of optimization problems to be solved is equal to *nPp* plus the additional (auxiliary) scalar suboptimizations. In other words, the main difference between the MO techniques lies in their optimization problem formulations.

Thus, for a MOP considering *nobj* objectives and *k* values of *β*, a total of *nPp* = (*nobj* + *k* − 2)!/((*k* − 1)!(*nobj* − 1)!) subproblems are solved (Das and Dennis 1996). It is important to emphasize that when more objectives are considered, more Pareto points are needed to cover the entire Pareto front properly.
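The count above reduces to a single binomial coefficient, which can be verified directly (`n_subproblems` is a helper name introduced here for illustration):

```python
from math import comb

def n_subproblems(nobj, k):
    # (nobj + k - 2)! / ((k - 1)! (nobj - 1)!)  ==  C(nobj + k - 2, nobj - 1)
    return comb(nobj + k - 2, nobj - 1)

print(n_subproblems(4, 5))   # 4 objectives, 5 weight levels -> 35
print(n_subproblems(2, 5))   # 2 objectives, 5 weight levels -> 5
```

The first value matches the 35 subproblems of the four-objective example discussed next; the second shows that for two objectives the count is simply the number of weight levels.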

For example, for a MOP with four objectives, if five values of *β* are considered ([0, .25, .5, .75, 1]) (following the parameterization presented in Das and Dennis 1996), a total of 35 (7!/(4!3!)) sub-optimization problems should be solved, for all methods presented here. In the case of the modified version presented here, these 35 subproblems are divided into:

- four scalar optimizations;
- six bi-MOP (each one computing three Pareto points), totaling 18 subproblems;
- four three-MOP (each one computing three Pareto points), totaling 12 subproblems;
- one four-MOP computing only one Pareto point, corresponding to the vector **β** = [.25, .25, .25, .25], the only vector **β** without a null component (that satisfies (6)).

Although the *nPp* used for all methods in the applications analyzed here is the same, the number of significantly different feasible solutions that satisfy the Pareto concept can vary for each method. Thus, the number of Pareto points effectively obtained (excluding overlapping points, non-Pareto points and infeasible solutions) will be referred to as *nEPp* (number of effective Pareto points); note that *nEPp* ≤ *nPp*.

The suboptimizations in which some objective function is ignored may result in an optimum point that can still move and improve in some of these omitted objective functions without changing the value of the considered objective functions.

For example, consider the idealized problem involving two functions to be minimized, namely \(f_1(x,y)=x^2\) and \(f_2(x,y)=y^2\). Clearly, the unique Pareto point is (*x*, *y*) = [0, 0]. However, using the NNC or NBI methods, the results of the scalar optimizations (considering the objective functions individually) are deeply dependent on the starting point, and so is the remainder of the MO process. The optimization of function *f*_{1} starting from any point [*x*_{0}, *y*_{0}] could result in the point (*x*, *y*) = [0, *y*_{0}], i.e. in the objective function space \((f_1,f_2)=[0,y_0^2]\). Thus, the result of the scalar optimization leads to a non-Pareto point (except if the starting point is [*x*_{0}, 0]); the same observation is valid for function *f*_{2}.

Therefore, after any suboptimization in which some objective function is ignored, another auxiliary optimization is performed, starting from the optimum point obtained before. The objective function previously ignored is considered as the new goal, and the optimum value achieved is preserved (or improved) by means of an additional inequality constraint.

This process is introduced in the proposed methodology, and all MOPs solved in this paper employ it in order to guarantee robustness. Note that for suboptimizations in which some beta value is equal to zero, the corresponding objective function is ignored by the WS and Min–Max methods, as well as in the methodology presented here. It is worth emphasizing that the majority of the auxiliary optimizations performed during the solution of the examples of this paper converge very rapidly (in fewer than three iterations).
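The two-stage process can be sketched on the idealized \(f_1=x^2\), \(f_2=y^2\) example from the text; the use of scipy's solvers is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import minimize

f1 = lambda z: z[0]**2             # the idealized example from the text:
f2 = lambda z: z[1]**2             # the unique Pareto point is (0, 0)

z0 = np.array([3.0, 2.0])          # an arbitrary starting point [x0, y0]

# Scalar optimization of f1 alone lands on (0, y0): a non-Pareto point
step1 = minimize(f1, z0).x
print(np.round(step1, 3))          # approx [0., 2.]

# Auxiliary optimization: minimize the previously ignored f2, while an
# inequality constraint preserves (or improves) the f1 value achieved
cons = {'type': 'ineq', 'fun': lambda z: f1(step1) - f1(z)}
step2 = minimize(f2, step1, constraints=cons, method='SLSQP').x
print(np.round(step2, 3))          # approx [0., 0.]: the Pareto point
```

The first stage leaves *y* untouched (its gradient contribution to *f*_{1} is zero), illustrating the starting-point dependence discussed above; the auxiliary stage recovers the true Pareto point.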

General algorithm of the modified MO methodology

    1. Loop over objectives (for 1 to nobj)
       1.1 Run scalar optimization
       1.2 Run auxiliary optimization
       End loop
    2. Loop over dimension (for dimension = 2 to nobj)
       2.1 Compute permutations
       2.2 Loop over permutations (for 1 to the number of permutations)
           2.2.1 Update permutation data
           2.2.2 If (...)
                 i. Project Pareto boundary points
                 ii. Distribute interior points - CVT
                 End if
           2.2.3 Loop over interior points
                 i. Run sub-optimization
                 ii. Run auxiliary optimization (if ...)
                 End interior points loop (sub-optimization)
           End permutation loop
    End dimension loop

### Remark 1

Step 1. Scalar optimizations and the auxiliary optimizations cited above (to guarantee robustness).

Step 2. Starting with bi-objective subproblems, then tri-objective subproblems, and so on.

Step 2.1. For example, for the bi-objective subproblems of a problem with three objectives, the permutations computed are: [(*f*_{1},*f*_{2}); (*f*_{2},*f*_{3});(*f*_{1},*f*_{3})].

Step 2.2. Obtain subsets of Pareto points for all permutations of the current dimension case (boundary subproblems if dimension ≠ *nobj*). These can be considered as a subset of interior points for the current dimension, or a boundary subset for higher dimensions (if any).

Step 2.2.1. Update the Pareto points (of the boundary) needed to obtain the interior points of the current permutation. For the example cited above, the first "permutation data" needed are the results of the one-objective optimizations involving only *f*_{1} and only *f*_{2}. For the three-dimensional case (tri-objective subproblems) and the permutation (*f*_{1},*f*_{2},*f*_{3}), the "permutation data" needed are the results of the one-objective optimizations involving *f*_{1} only, *f*_{2} only and *f*_{3} only, as well as the results (Pareto curves) of the bi-objective optimizations (*f*_{1,2}), (*f*_{2,3}) and (*f*_{1,3}).

Step 2.2.2. Project these (boundary) Pareto points (CHIM^{ + } generation) and generate the interior points via CVT.

Step 2.2.3. Obtain the suboptimization result for all interior points of the current permutation. The auxiliary optimizations here are analogous to those in step 1.2 (see the remark on step 1).

## 5 Numerical examples

In this section, several academic multiobjective applications, which consider three and four objective functions, and a problem from the literature with three objectives are studied to highlight some of the difficulties of the standard methods when dealing with this class of problems. The performance of the modified methods proposed here is also presented. In addition to the examples presented here, an application of structural MOP with three objective functions was successfully solved in Motta (2009) using the proposed modification combined with a reduced order modeling approach called the Reduced Basis Method (Afonso et al. 2009). In the results of the present paper, the parameter *nFC* is the total number of function evaluations necessary for the Pareto solutions and *nEPp* is the number of effective Pareto points (described previously).

### 5.1 Three objective functions: first analytical test problem

This very simple problem results in a wide and smooth Pareto surface, which clearly exposes the regions left uncovered by the standard NBI and NC methods.

The *β*_{i} values, with *i* = 1, ..., 3, combined with each other and considering (6), lead to 120 different *β* vectors (Das and Dennis 1996) and thus to 120 optimization subproblems. In both figures (for each method) the left-hand-side graphic shows the top view (*f*_{1}(*x*) × *f*_{2}(*x*)) of the objective function space, where the gray scale refers to the value of the third objective function *f*_{3}(*x*). The right-hand-side graphics present an isometric view of the three-objective function space; the gray scale again refers to the value of *f*_{3}(*x*), for a better visualization. For each method the caption of the figure presents, within brackets, the *nFC*/*nEPp* parameters.

As can be observed, the WS and Min–Max methods lead to point concentration in some regions, empty portions in other regions and the smallest number of function evaluations. The WS method obtains just 22 effective Pareto points out of the 120 desired. The NC and NBI methods give uniformly distributed points. However, they are unable to cover the whole Pareto surface, covering solely the region that results from the CHIM projection (i.e., the triangle formed by the individual minima). By analyzing the results obtained using the modified methods NBIm and NCm (which determine the boundary of the Pareto region), an evenly distributed set of points is obtained over the whole Pareto front; see Fig. 8. The standard methods, NBI and NC, and their modified versions (NBIm and NCm) required basically the same number of function computations.

The projections of the boundary Pareto points define the CHIM^{ + }. Finally, the internal points (in black) were obtained after determination of the CHIM^{ + }, using a CVT algorithm as described in Section 4.5. The presented results were obtained using NBIm, but those obtained using the NCm (not shown) were basically the same.

### 5.2 Three objective functions: second analytical test problem

For this particular example it can be observed that \(f_1 ^*=f_2 ^* \) for \(x_1 ^*= x_2 ^*=(0,0,0)\). Due to this fact, the CHIM degenerates into a 3D straight line, and so the Pareto points obtained through the NBI and NC methods can only lie on this line.

For each method, the figure shows the top view (*f*_{1}(*x*) × *f*_{2}(*x*)) of the objective function space, with the gray scale referring to the value of the third objective function *f*_{3}(*x*). The parameters *nFC*/*nEPp* are given in the caption of each graphic.

As indicated in Fig. 10a, the WS method leads to a poor result, with basically just two distinct points. The Min–Max method leads to approximately the same result as the standard NBI and NNC methods. As mentioned, the CHIM degenerates into a 3D line, as can be seen in Fig. 10c and d, where the Pareto points were obtained through the NBI and NC methods, respectively.

The proposed procedure led to a better distribution over the Pareto frontier for the NBIm case, as shown in Fig. 10e. The CHIM^{ + } points were obtained from the Pareto curves found as the solutions of the two-objective subproblems. For this case the NCm (see Fig. 10f) was not able to achieve as good a performance as the NBIm.

When using the NC (and NCm) in this case, the directions \(\bar N_k \) (16) (NC method) have only one independent vector, and so the additional constraints of the method degenerate into a single plane. Due to the extra relaxation of the inequality constraints (see (15)), the NCm allows larger solution mobility. Thus the solutions fall to the center of the Pareto front, where *f*_{3} is minimized.

Finally, it should be noted that the NCm results would be different if the suboptimizations employed a different order for the objective functions than the one adopted here, in which *f*_{3}(*x*) = − *x*_{1}*x*_{2}*x*_{3}. For example, if the objective function order were changed so that the last function (selected to be minimized) were \(f_3\; =\; x_1 ^3 + x_2 + 2x_3\) (previously *f*_{1}), the internal solutions would move toward the point that minimizes the function previously called *f*_{1} (to the left), as can be seen in Fig. 10g. Figure 10g illustrates the results of the NCm method in which the third function was swapped with the first. This new solution should generate 78 internal points, but of those, 47 solutions are non-Pareto points (black "x" points). The NBIm solution is independent of the objective order.

### 5.3 Three objective functions: geometric test problem

Consider the two design variables *x*_{1} and *x*_{2}. A MOP is set taking three objective functions that minimize the distance of an arbitrary point **x** to the points A, B and C given in Fig. 11. Furthermore, consider that the feasible space excludes the circle presented in Fig. 11, i.e., the distance from **x** to point **D** has to be larger than 0.5.

The problem can be written as

\[ {\bf F}({\bf x}) = \left[ \left\| {\bf x} - {\bf A} \right\|,\ \left\| {\bf x} - {\bf B} \right\|,\ \left\| {\bf x} - {\bf C} \right\| \right], \qquad g({\bf x}) = 0.5 - \left\| {\bf x} - {\bf D} \right\| \le 0, \]

where **F** is the vector of the objective functions, *g* is the inequality constraint and **x** = (*x*_{1}, *x*_{2}) is the vector of design variables. The geometric problem is helpful to understand and visualize the result of the MOP.

The parameters *nFC*/*nEPp* for each method are given in the caption within brackets.

From these results, the existence of empty regions (i.e., an uneven distribution of Pareto points) and the very large number of function evaluations when the WS scheme (Fig. 12b) is adopted should be emphasized. When compared to WS, the Min–Max method (Fig. 12c) leads to a better result, determining the boundary of the optimal region properly with fewer function computations, but still with a non-uniform distribution of both boundary and interior points.

The use of the NBI (Fig. 12d) and NC (Fig. 12e) methods results in a good distribution of the Pareto solutions. However, the NC scheme gives infeasible solutions (dots depicted by black "x" in Fig. 12e) and several points that do not satisfy the Pareto criteria. Furthermore, the original NBI method (which uses the equality constraint instead of the adopted inequality (14)) may present non-Pareto points and infeasible solutions. The inequality constraint considered in the formulation of the NBI method (14) allows the solutions (related to CHIM points with no projection onto the Pareto front) to move to the Pareto front; see Fig. 12d.

When using the proposed modifications, i.e., NBIm and NCm, a very good Pareto point distribution is obtained (see Fig. 12f and g), covering the whole exact Pareto set with a well-defined boundary and an even distribution of points. Moreover, no infeasible or non-Pareto solutions were obtained. Finally, it must be stressed that all of this was achieved with a smaller number of function evaluations when compared with the other analyzed methods.

### 5.4 Four objective functions: analytical test problem

The *β*_{i} values, with *i* = 1, ..., 4, combined with each other as suggested in Das and Dennis (1996), lead to 220 different *β* vectors and, consequently, to 220 optimization subproblems.

For each method, the left hand side graphics show the top views (*f*_{1}(*x*) × *f*_{2}(*x*)) of the objective functions space. The right hand side graphics present an isometric view for the first three objective function spaces. The graphics in the center illustrate a perspective view of the first three objective function spaces. In all figures the gray scale refers to the value of the fourth objective function *f*_{4}(*x*). For each method the caption of the figure presents, within the brackets, the parameters *nFC*/*nEPp*.

As can be observed, the WS and Min–Max methods lead to point concentration in some regions and empty portions in other regions. The NC and NBI methods give uniformly distributed points. However, they are unable to cover the whole Pareto front, covering solely the region that results from the CHIM projection (i.e., the hyperplane formed by the individual minima).

By analyzing the results obtained using the modified methods NBIm and NCm, which determine the boundary (curves and surfaces) of the Pareto region, an evenly distributed set of points is obtained over the whole Pareto region, with approximately the same number of function evaluations as the respective standard methods.

### 5.5 Gear box design

The gear box (speed reducer) design problem considers three objectives:

- Minimize the speed reducer volume.
- Minimize the stress in the first shaft.
- Minimize the stress in the second shaft.

The problem is subject to a number of constraints imposed by the design variable limits and by design practices, such as: the stress of the gear teeth, the transverse displacement of the shafts, the generated torque and the stresses of the shafts themselves (also used as objectives). For more details about the problem, refer to Kupapatz and Azarm (2001), Huang et al. (2006) and Sanchis et al. (2008).

A modified version of the normalized NC (NNC) method, called the enhanced normalized normal constraint (ENNC) method, was applied to this tri-objective problem in Sanchis et al. (2008). As in example 2 (Section 5.2), only two different anchor points were obtained by the scalar optimizations. In Sanchis et al. (2008), three different pseudo anchor points were built (by assuming the worst value of the objective functions which are not minimized) to expand the Utopia hyperplane (CHIM) in order to obtain a wider region of the Pareto surface. The average number of objective function evaluations was 66 for each of the 1,275 runs (points) of the SQP algorithm, totaling 84,150 function evaluations.
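The pseudo anchor construction can be sketched as follows; this is an illustrative reading of the description above (anchor *i* keeps its own minimum while the remaining objectives take the worst value observed among the true anchors), not the exact ENNC formulation of Sanchis et al. (2008), and the helper name `pseudo_anchor_points` is hypothetical:

```python
import numpy as np

def pseudo_anchor_points(anchor_objs):
    """Build pseudo anchor points: row i keeps its own objective minimum
    f_i(x_i*) on the diagonal and takes, for every other objective, the
    worst (maximum) value observed among the true anchor points."""
    F = np.asarray(anchor_objs, dtype=float)  # row i = F(x_i*), minimizer of f_i
    worst = F.max(axis=0)                     # worst value of each objective
    pseudo = np.tile(worst, (F.shape[0], 1))
    np.fill_diagonal(pseudo, np.diag(F))      # restore each anchor's own minimum
    return pseudo

# Degenerate case as in example 2: two anchors coincide, yet the three
# pseudo anchors are distinct and span a proper plane.
F = np.array([[1.0, 5.0, 4.0],
              [6.0, 2.0, 7.0],
              [6.0, 2.0, 7.0]])
print(pseudo_anchor_points(F))
```

With coincident anchors the CHIM collapses; the expanded pseudo anchors restore a non-degenerate Utopia hyperplane from which to project.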

The parameters *nFC*/*nEPp* are given (within brackets) in the caption of each technique. A top view and an isometric view of the Pareto points in the space of the objective functions are illustrated, where the gray scale refers to the value of the third objective function *f*_{3}(*x*).

As in the other examples, the WS method obtains a poor result, and the Min-Max method performs only slightly better than the WS method.

Similarly to example 2, the CHIM degenerates to a 3D line, consequently deteriorating the results of the standard NBI and NNC methods. The NNCm also obtained results that follow the behavior presented in example 2, in which the internal Pareto points were concentrated along a 3D curve.

Finally, by using the NBIm procedure, well-distributed Pareto points were obtained, leading to the best performance among the methods presented here; the average number of objective function evaluations was 66, the same as the result from Sanchis et al. (2008) cited before.

### 5.6 Results summary

In Table 2, *FC*/*nEPp* indicates the number of function evaluations (*FC*) per effective Pareto point obtained (*nEPp*, excluding overlapping points), and *nnPp* indicates the number of non-Pareto points obtained, including infeasible solutions. The required *nPp* is shown in brackets in each example label. Finally, the parameter Evness, which appears in Table 2, indicates the quality of the distribution of the points: the closer to zero, the better.

Table 2 Performance of studied cases

| Method | Time (s) | \(\emph{FC/nEPp}\) | \(\emph{nnPp}^{\rm a}\) | \(\emph{nEPp}\) | Evness |
|---|---|---|---|---|---|
| **Example 1 (120)** | | | | | |
| WS | 9.516 | 120.2 | 0 | 22 | 4.331 |
| MinMax | 9.001 | 21.7 | 0 | 120 | 2.684 |
| NBI | 5.179 | 35.7 | 0 | 120 | 0.1819 |
| NC | 5.647 | 30.2 | 0 | 120 | 0.1819 |
| NBIm | 10.17 | 34.3 | 0 | 120 | 0.2958 |
| NCm | 10.33 | 34.4 | 0 | 120 | 0.2958 |
| **Example 2 (120)** | | | | | |
| WS | 8.393 | 540.0 | 0 | 2 | 10.93 |
| MinMax | 11.79 | 179.6 | 0 | 31 | 1.281 |
| NBI | 4.009 | 166.1 | 0 | 15 | 1.109 |
| NC | 3.713 | 120.9 | 0 | 15 | 1.119 |
| NBIm | 9.454 | 40.7 | 0 | 106 | 0.4781 |
| NCm | 10.16 | 44.6 | 0 | 105 | 1.341 |
| **Example 3 (120)** | | | | | |
| WS | 10.05 | 102.3 | 0 | 50 | 1.897 |
| MinMax | 9.734 | 28.8 | 0 | 120 | 1.71 |
| NBI | 4.43 | 21.3 | 0 | 120 | 0.5572 |
| NC | 4.758 | 18.0 | 7 | 113 | 0.5844 |
| NBIm | 9.391 | 19.0 | 0 | 120 | 0.4203 |
| NCm | 8.642 | 13.8 | 0 | 120 | 0.3758 |
| **Example 4 (224)** | | | | | |
| WS | 28.91 | 231.9 | 0 | 21 | 4.836 |
| MinMax | 30.2 | 28.1 | 0 | 205 | 2.76 |
| NBI | 17.88 | 47.5 | 0 | 224 | 0.2468 |
| NC | 23.07 | 46.5 | 0 | 224 | 0.2468 |
| NBIm | 60.92 | 49.3 | 0 | 224 | 0.3262 |
| NCm | 63.9 | 55.3 | 0 | 224 | 0.3072 |
| **Example 5 (120)** | | | | | |
| WS | 14.52 | 280.5 | 0 | 6 | 1.126 |
| MinMax | 13.93 | 169.8 | 0 | 15 | 2.296 |
| NBI | 9.719 | 425.5 | 0 | 15 | 1.051 |
| NC | 6.583 | 202.1 | 0 | 15 | 1.023 |
| NBIm | 17.04 | 65.9 | 0 | 106 | 0.5764 |
| NCm | 20.67 | 91.2 | 2 | 104 | 2.037 |

The Evness parameter is based on two distance measures, **d**_{low} and **d**_{up}. The parameter **d**_{low} refers to the minimum distance of each point to another. The parameter **d**_{up} states the maximum radius of a sphere touching each point, with no other points inside the sphere. The Evness parameter is calculated as follows:

\(\text{Evness} = \sigma / \mu\)

where *σ* is the standard deviation and *μ* is the arithmetic mean of **D**, the vector gathering the **d**_{low} and **d**_{up} values of all points.
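A minimal sketch of the Evness computation follows. The σ/μ ratio and the nearest-neighbour distance (**d**_{low}) are computed exactly; the geometric search for **d**_{up} (the largest empty touching sphere) is omitted, so the usage example simply feeds **d**_{low} twice to illustrate the formula. The function names are hypothetical:

```python
import numpy as np

def nearest_neighbour_dist(points):
    """d_low: distance from each point to its closest other point."""
    pts = np.asarray(points, dtype=float)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)  # ignore each point's zero self-distance
    return dist.min(axis=1)

def evness(d_low, d_up):
    """Evness = sigma/mu of the vector D gathering all d_low and d_up
    values; zero for a perfectly even distribution."""
    D = np.concatenate([np.asarray(d_low), np.asarray(d_up)])
    return D.std() / D.mean()

# Uniformly spaced 1-D points have identical nearest-neighbour distances,
# so the metric is exactly zero.
pts = [[0.0], [1.0], [2.0], [3.0]]
d_low = nearest_neighbour_dist(pts)
print(evness(d_low, d_low))  # 0.0
```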

In examples 1–3 and 5, a CVT algorithm was used to obtain the distribution of 78 interior points in a two-dimensional space. The CPU time consumed by the CVT algorithm in these cases was around 3.5 s. Note that the points in the three-dimensional space are projected onto a plane, so the CVT algorithm is performed in a two-dimensional space.

For example 4 (four-objective MOP), four CVT runs were necessary, each one distributing a set of 28 interior points in a two-dimensional space, plus one run distributing 60 interior points in a three-dimensional space. The CPU time consumed by the CVT for each set of 28 interior points (in a two-dimensional space) was around 1.7 s, while distributing the 60 interior points (in a three-dimensional space) took around 26 s. Thus, for example 4, the total CPU time used for the distribution of interior points via CVT was approximately 33 s.

Note that these CVT timings are independent of the functions involved in the problem. Thus, for MOPs considering (objective or constraint) functions that are computationally more expensive, such as problems involving numerical simulation, the CVT time becomes negligible.
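The paper does not detail its CVT implementation; the classical way to compute a centroidal Voronoi tessellation is Lloyd's algorithm, sketched here with Monte Carlo cell approximation on the unit square (the function name `cvt_lloyd` and all parameter values are illustrative):

```python
import numpy as np

def cvt_lloyd(n_points, n_samples=20000, n_iter=30, dim=2, seed=0):
    """CVT sketch via Lloyd's algorithm on the unit hypercube: repeatedly
    move each generator to the centroid of its Voronoi cell, with cells
    approximated by Monte Carlo sampling."""
    rng = np.random.default_rng(seed)
    gens = rng.random((n_points, dim))
    for _ in range(n_iter):
        samples = rng.random((n_samples, dim))
        # assign each sample to its nearest generator (its Voronoi cell)
        d = ((samples[:, None, :] - gens[None, :, :]) ** 2).sum(-1)
        owner = d.argmin(axis=1)
        for i in range(n_points):
            cell = samples[owner == i]
            if len(cell):
                gens[i] = cell.mean(axis=0)  # move generator to cell centroid
    return gens

pts = cvt_lloyd(n_points=8)
print(pts.shape)  # (8, 2)
```

The Monte Carlo cost grows with the sampling density and dimension, which is consistent with the much larger CPU time reported above for the three-dimensional run.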

For all analyzed cases, despite the wider coverage and better Pareto frontier quality, the NBIm maintains (or improves) the computational efficiency, as the number of function evaluations per Pareto point is approximately the same as for the standard methods.

In examples 2 and 5, two objective functions have the same individual solution. Thus, using the modified procedure presented here, the Pareto curve (bi-objective MOP) related to these two functions degenerates into one point, so the 15 Pareto points of that curve reduce to only 1 effective Pareto point. This is why, in those two examples, the NBIm does not obtain the total *nPp* (*nEPp* = *nPp* − 14). Note that this feature is predictable and does not reduce the efficiency of the method: if two functions have the same individual solution, the Pareto points between them need not be calculated.

Although the standard NBI and NNC methods show smaller Evness values, it has been shown that these methods do not cover the entire Pareto front, as the NBIm does. Finally, the uniformity of the interior point distribution might be further improved by using another distribution method instead of the CVT algorithm adopted here.

## 6 Conclusions

In this paper several multiobjective optimization techniques, namely WS, Min-Max, NBI and NC, were implemented and tested, giving results that are in agreement with the literature. It was observed that, when more than two objective functions were considered, NBI and NC were not able to capture the Pareto frontier properly and could generate badly formulated optimization subproblems, which lead to a large number of unnecessary function evaluations. Also, in some cases, they captured infeasible solutions for the Pareto frontier. In order to eliminate such bad behaviors, a modification to these methods was proposed here, referred to as NBIm and NCm.

Four examples involving three objective functions and one example with four objective functions were analyzed. The results obtained using the developed methodology were compared with those of the existing methods. For all analyzed cases NBIm produces the best solution in terms of Pareto frontier quality while maintaining (or improving) the computational efficiency, as the number of function evaluations per effective Pareto point remains approximately the same. Further, no infeasible solution was obtained.

The methodology proposed is general for any number of objective functions and is expected to behave satisfactorily, as demonstrated here for three and four objective functions.

## Acknowledgments

The authors acknowledge the financial support given by the Brazilian research agency CNPq and the Pernambuco state research agency FACEPE for the execution of the present work.

The authors also thank all the reviewers, whose comments helped to improve the current paper. Particularly important was the suggestion, during the second round of revision, of the paper by Mueller-Gritschneider et al. (2009), in which a methodology similar to the one presented here was proposed.