A modified rotation strategy for directed search domain algorithm in multiobjective engineering optimization

In real-life multiobjective optimization, it is often required to generate a well-distributed Pareto set. Only a few methods are capable of tackling such a problem in a quite general formulation. The Directed Search Domain (DSD) method proved to be efficient and, therefore, attracted much attention. In this paper, two modifications to the rotation strategy of the DSD algorithm are proposed. The first modification improves its computational efficiency. The second enhances the evenness of the Pareto set with a number of additional Pareto points, obtained at specific rotation angles calculated in a particular way. The modified approach is verified on several test cases with three objectives, including an engineering case. The proposed algorithm is compared with both the original DSD and DSD-II algorithms. It is shown that the new approach can maintain the distribution of the Pareto set at a high level with a relatively low computational cost.


Introduction
Engineering design and optimization is an active research field. A real-life engineering design and optimization problem usually includes a variety of contradictory objectives such as high performance, low cost, long life and manufacturability. These objectives should be taken into consideration simultaneously in the optimization process. Generally speaking, the solution to such a problem is usually not unique. Instead, it represents a possible trade-off between the different objectives and cannot be improved without deteriorating at least one of the objectives or violating the constraints. This leads to the notion of a Pareto solution (Miettinen 1999). Each Pareto point in the objective space represents a solution of the multiobjective optimization (MOO) problem; together, the Pareto points form the Pareto set. In general, a sufficient number of Pareto points are needed to represent the entire Pareto frontier in the objective space. However, in practice, the decision maker (DM) can only select a few possible Pareto points from the Pareto set according to additional requirements. In such a context, an even distribution of Pareto points can provide the DM with good visualization of the Pareto frontier, and substantially simplify the work of the DM. In particular, this can be exploited for local approximation of the Pareto frontier (Utyuzhnikov et al. 2008) and in ranking (Jaini and Utyuzhnikov 2017). Hence, it is of paramount importance to generate a well-distributed Pareto set to acquire maximum information on the Pareto surface at a minimal computational cost (Utyuzhnikov et al. 2009).
A variety of methods have been developed to solve multiobjective optimization problems (Siddiqui et al. 2012; Yang and Deb 2013; Yang 2013; Fogue et al. 2013; Ganesan et al. 2013; Ghosh and Chakraborty 2014; Ansary and Panda 2014; Gobbi et al. 2015; Oke and Siddiqui 2015; Siddiqui and Christensen 2016; Fouladgar and Lotfi 2016). In the literature, there are mainly two categories of MOO methods to search for Pareto solutions and generate a Pareto set approximating the Pareto frontier: evolutionary methods and classical methods. The evolutionary methods consider all objective functions simultaneously, and are usually based on stochastic algorithms and random operators. Some popular evolutionary MOO algorithms include the Niched Pareto Genetic Algorithm (NPGA) (Horn et al. 1994; Erickson et al. 2001), the Nondominated Sorting Genetic Algorithm (NSGA) (Srinivas and Deb 1994; Deb et al. 2002), the Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele 1999; Zitzler et al. 2001), and algorithms based on multiobjective particle swarm optimization (MOPSO) (Coello and Lechuga 2002; Hettenhausen et al. 2013; Dehuri et al. 2015). Evolutionary methods have several advantages: they are not sensitive to non-smoothness of objective functions, and can sometimes be efficient in finding a global extremum. However, in multiobjective optimization, there is no guarantee that an evolutionary method captures an optimal solution. Moreover, a large number of candidate solutions must be evaluated to generate an even set of optimal solutions. As a consequence, evolutionary approaches are time-consuming in multiobjective optimization. In fact, in real-life design, most of these solutions are redundant (Erfani and Utyuzhnikov 2011).
In contrast, the classical methods generally utilize deterministic algorithms. The weighted sum method and constraint-based algorithms are among the most popular approaches. However, these methods cannot guarantee a quasi-even distribution of the Pareto set; in some cases, they can only capture part of the Pareto frontier (Marler and Arora 2004). The well-known classical MOO methods that are able to approximate the whole Pareto frontier include the Normal-Boundary Intersection (NBI) method (Das and Dennis 1997; 1998), the Normal Constraint (NC) method (Messac et al. 2003; Messac and Mattson 2004), the Physical Programming method (Messac 1996; Messac and Mattson 2002), and the Directed Search Domain (DSD) method (Utyuzhnikov et al. 2009; Erfani and Utyuzhnikov 2011). All of these methods exploit the anchor points, which are the minima of the individual objectives in the objective space, as well as an aggregate objective function (AOF), also called a preference function.
In the NBI method, the Pareto frontier is constructed by the intersection of the normal to the utopia hyperplane and the boundary of the feasible objective domain. The Pareto set is acquired after a single optimization problem has been solved along each line. The evenness of the Pareto set is determined by the distribution of the lines orthogonal to the hyperplane. Nevertheless, the NBI method does not seem robust, because the feasible search domain is reduced to a line via a corresponding equality constraint. To address this problem, the NC method was proposed as a modification of NBI. The single optimization in NC is based on inequality constraints, which makes the method more flexible and stable.
In the Physical Programming (PP) method, the DM's experience is taken into consideration directly. The designer assigns each objective to one of four class functions, and the optimization is based on the minimization of an AOF determined by the class functions. However, the approach involves several free parameters and requires preliminary information on the location of the Pareto frontier.
The DSD method (Utyuzhnikov et al. 2009; Erfani and Utyuzhnikov 2011) can be considered as an alternative approach. It introduces an affine transformation of the objectives to shrink the search domain and obtain a Pareto solution in a selected area of the objective space. By evenly controlling the search domains in the objective space, DSD is capable of capturing the entire Pareto frontier and providing a well-distributed Pareto set. Another advantage is that it does not generate redundant non-Pareto solutions, which saves optimization time. It has been shown that the DSD method outperforms the NBI and NC methods on a number of test cases with respect to both the computational efficiency and the distribution of the Pareto set. Following the original DSD algorithm, a modified directed search domain algorithm called DSD-II was put forward in Erfani et al. (2013) to improve the optimization efficiency. In DSD-II, the shrinking and rotating strategies are modified.
However, in some cases, the search domains are not distributed well enough in the objective space by the rotation strategy of the DSD method, and the evenness of the Pareto set deteriorates. In addition, in DSD the rotation angle is varied by a passive step-by-step search, which reduces optimization efficiency. In DSD-II, the rotation strategy is not implemented; instead, redundant points on the utopia hyperplane are generated in order to capture the whole Pareto frontier. This may lead to many redundant solutions and negatively affect the computational efficiency. In this context, the objective of this paper is to improve both the effectiveness and the efficiency of the original DSD algorithm without generating redundant solutions. The proposed approach can be considered as an alternative to DSD-II. Two modifications are proposed. One enhances the efficiency of searching for the Pareto solutions by using dichotomy in the rotation strategy. The other improves the evenness of the Pareto set distribution on the basis of specifically calculated rotation angles.
The paper is structured as follows. In Section 2, basic concepts and definitions about multiobjective optimization problems are introduced. Section 3 provides the main principles and procedures of the DSD algorithm including DSD-II. The two new modifications to the original DSD algorithm are described in Section 4. The modified approach is evaluated in Section 5 on the basis of simulation results of three numerical cases and one engineering case. In Section 6, discussion as well as conclusions are presented.

Multiobjective optimization
A general multiobjective optimization (MOO) problem is described as follows:

  min_{x ∈ D*} F(x) = (f_1(x), f_2(x), ..., f_n(x))^T,   (1)

where f_i (i = 1, 2, ..., n) are the objective functions, which form the objective space Y ⊆ R^n. The vector x represents a design variable in the decision space D, and D* ⊆ D is the feasible space, i.e., the set of elements satisfying all the constraints.
In general, the solution of problem (1) is not unique. Therefore, a set of solutions called the Pareto optimal set is introduced on the basis of the following definition (Miettinen 1999):

Definition 1 Pareto solution: Vector x* ∈ D* is called a Pareto (optimal) solution to problem (1) if and only if there exists no vector x ∈ D* such that f_i(x) ≤ f_i(x*) for all i = 1, ..., n and f_j(x) < f_j(x*) for at least one index j. Then, in the objective space, the vector f(x*) represents a Pareto solution not dominated by any other feasible solution. The set of all Pareto points represents the Pareto frontier, the best trade-off solutions to the multiobjective optimization problem (1). If the above condition only holds in a vicinity of x*, then x* is called a local Pareto solution.
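The dominance relation of Definition 1 can be sketched, for a finite sample of objective vectors, as a simple membership test; this is a minimal illustration (minimization assumed, function names are ours, not from the paper):

```python
# Sketch of Pareto dominance and nondominated filtering for a finite
# sample of objective vectors (minimization). Function names are
# illustrative, not from the paper.

def dominates(f_a, f_b):
    """True if f_a dominates f_b: f_a <= f_b componentwise, < somewhere."""
    return all(a <= b for a, b in zip(f_a, f_b)) and any(
        a < b for a, b in zip(f_a, f_b))

def nondominated(points):
    """Return the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

sample = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(nondominated(sample))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Such a filter is only applicable to a sampled set; the DSD method itself obtains Pareto points by constrained optimization rather than by filtering.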
Definition 2 Anchor point: In the objective space Y, an anchor point μ_i represents the minimum of the ith objective function subject to all the constraints.
According to the above definition, the anchor point is sometimes not unique. In such a case, a lexicographic-based prioritization is introduced as follows (Utyuzhnikov et al.).
Definition 3 Modified anchor point: In the feasible objective space Y* = {f(x) | x ∈ D*}, the anchor point μ_i of the ith objective function is determined in the circular order: min f_i, min f_{i+1}, ..., min f_n, min f_1, min f_2, ..., min f_{i−1}. Any successive minimization problem min f_{i+m} (1 ≤ i + m ≤ n, m ≠ 0) is only considered on the set {A_{m−1}}, which is the solution set of the previous minimization problem.
Definition 4 A hyperplane that contains all the anchor points is called the utopia hyperplane.
The above definitions are illustrated in Fig. 1.
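On a finite candidate set, the lexicographic construction of Definition 3 can be sketched as follows; this is an illustration only (the tie tolerance `tol` and the wrap-around via the modulo operator are our assumptions, and in the method each step would be a constrained optimization rather than a search over a list):

```python
# Illustrative sketch of Definition 3 on a finite candidate set: the
# modified anchor point for objective i is found by minimizing the
# objectives in circular order, each step restricted to the minimizers
# of the previous step. `tol` guards against floating-point ties and is
# an assumption of this sketch.

def modified_anchor(points, i, tol=1e-12):
    n = len(points[0])
    order = [(i + m) % n for m in range(n)]   # circular order i, i+1, ..., i-1
    candidates = list(points)
    for k in order:
        best = min(p[k] for p in candidates)
        candidates = [p for p in candidates if p[k] <= best + tol]
    return candidates[0]

pts = [(0.0, 2.0, 3.0), (0.0, 1.0, 5.0), (1.0, 0.0, 0.0)]
# Both of the first two points minimize f_1; the tie is broken by f_2.
print(modified_anchor(pts, 0))  # -> (0.0, 1.0, 5.0)
```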

Review of directed search domain (DSD) algorithm
DSD is a classical algorithm for generating a quasi-even Pareto set in multiobjective optimization problems. In Erfani et al. (2013), a modified directed search domain algorithm called DSD-II is proposed to further improve the optimization efficiency.

The original DSD algorithm
The main idea of the original DSD algorithm is to utilize an affine transformation technique to shrink the search domain and to seek a Pareto solution in the resulting shrunk area. To address a multiobjective optimization problem with n objective functions, as stated in (1), by DSD, the following steps are needed.
Step 1. The objective functions should be scaled if necessary.
Step 2. Search for the anchor points or modified anchor points.

Step 3. Form the interior of the utopia hyperplane P by

  P = { Σ_{i=1}^{n} α_i μ_i | α_i ≥ 0, Σ_{i=1}^{n} α_i = 1 },   (2)

where α_i are coefficients for generating the reference points, denoted by M in the next step.
Step 4. Generate a set of evenly distributed reference points M on the utopia hyperplane by varying α i in (2).
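Step 4 can be sketched as a sweep over the barycentric coefficients α_i; the function below is a minimal illustration (variable names are ours) that enumerates coefficient combinations with a fixed step and maps them to points on the utopia hyperplane:

```python
# Sketch of Step 4: generate evenly spaced reference points
# M = sum_i alpha_i * mu_i on the utopia hyperplane by stepping the
# barycentric coefficients alpha_i with step d_alpha (sum alpha_i = 1).
from itertools import product

def reference_points(anchors, d_alpha):
    n = len(anchors)
    steps = round(1.0 / d_alpha)
    points = []
    for combo in product(range(steps + 1), repeat=n - 1):
        if sum(combo) <= steps:
            alpha = [c * d_alpha for c in combo]
            alpha.append(1.0 - sum(alpha))
            m = [sum(a * mu[k] for a, mu in zip(alpha, anchors))
                 for k in range(len(anchors[0]))]
            points.append(m)
    return points

anchors = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
print(len(reference_points(anchors, 0.5)))  # 6 points for d_alpha = 0.5
```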
Step 5. Shrink the search domain. First, for each reference point M = (M_1, ..., M_n) ∈ M, the following single-objective optimization problem is formulated, similar to physical programming (Messac 1996; Messac and Mattson 2002):

  min_{x ∈ D*} Σ_{i=1}^{n} f_i(x)  subject to  f_i(x) ≤ M_i  (i = 1, ..., n).   (3)

In problem (3), the constraints f_i(x) ≤ M_i form the search domain based on the reference point M. Therefore, different reference points result in different search domains, and the corresponding single-objective optimization problems are expected to have different solutions. However, the search domains of two nearby reference points may partially overlap. Then, redundant solutions can be generated, which deteriorates the evenness of the Pareto set. To overcome this problem, in the DSD algorithm each search domain is shrunk to be distinct. To achieve this, a new coordinate system with the origin at the reference point M is introduced to reduce the size of the search domain, and the axes of the coordinate system form a given angle with respect to a unit vector. Finally, the constraint of the single-objective optimization problem (3) is modified as

  f̂_i(x) ≤ M̂_i  (i = 1, ..., n),   (4)

where A = B^{−1} is the transformation matrix from the Cartesian coordinate system to the new local coordinate system, f̂_i(x) = Σ_{j=1}^{n} f_j(x) B_ji is the ith objective function in the new local coordinate system, and M̂_i is defined analogously. To calculate the matrix B, the reader is referred to the original paper (Utyuzhnikov et al. 2009).
Step 6. Solve the new single-objective optimization problem based on the shrunk search domain for each reference point M = (M_1, ..., M_n) ∈ M:

  min_{x ∈ D*} Σ_{i=1}^{n} f̂_i(x)  subject to  f̂_i(x) ≤ M̂_i  (i = 1, ..., n).   (5)

Sometimes no feasible solution can be found in the search domain, e.g., when the boundary of the feasible objective space is non-convex. In such a case, the search domain should be flipped to the opposite side of the utopia hyperplane in order to capture the points on the Pareto frontier, by reversing the inequality constraints in optimization problem (5):

  f̂_i(x) ≥ M̂_i  (i = 1, ..., n).   (6)

Step 7. Rotate the search domain if necessary. In some high-dimensional cases, it may happen that the orthogonal projection of the utopia hyperplane does not fully cover the entire Pareto surface. In order to capture the uncovered region, the search domain should be rotated if the reference point M is located on the edge of the hyperplane. Here, the definitions of an edge reference point and an edge Pareto point are introduced.
Definition 5 Edge reference point: In the objective space Y, the reference point located on the edge of the utopia hyperplane is defined as an edge reference point.
Definition 6 Edge Pareto point: In the objective space Y, the Pareto point located on the edge of the Pareto surface is called an edge Pareto point.
Generally speaking, for each edge reference point, an edge Pareto point can be found with the rotation strategy. To realize this, a vector s is first introduced as the outer normal, lying in the utopia hyperplane, to the edge of the polygon on the hyperplane (Fig. 2):

  s · b_i = 0,   s · n_h = 0,   (7)

where b_i = μ_i − μ_{i−1} are the edges of the polygon and n_h is the normal vector to the utopia hyperplane. Then, a new vector l is defined by

  l = n_h cos θ + s sin θ.   (8)

By changing θ from 0 to π/2, the search domain is rotated from the normal direction of the hyperplane towards the direction lying in the outer part of the polygon (Utyuzhnikov et al. 2009). The rotation continues until no new solution is captured. It can be shown (Utyuzhnikov et al. 2009) that

  B = a I + b E,   (9)

where the coefficients a and b are determined by the angle γ_c used to shrink the search domain and by cos γ_0 = 1/√n, I is the unit matrix, and all elements of the matrix E are unities.
In the rotation strategy, a rotation matrix R is needed to map the unit vector l_0 = (l_0, l_0, ..., l_0)^T onto l:

  R l_0 = l.   (10)

The matrix R is calculated by

  R = D T D^{−1},   (11)

where D is an orthogonal matrix obtained by the Gram–Schmidt orthogonalization procedure:

  D = Gram–Schmidt(l_0, l, e_3, ..., e_n).   (12)
In (12), e_k = (δ_1k, δ_2k, ..., δ_nk)^T, where δ_ij = 1 if i = j and δ_ij = 0 if i ≠ j (i, j = 1, ..., n). In (11), the matrix T describes an elementary rotation by the angle φ between l_0 and l in the hyperplane created by these two vectors:

  T = [ cos φ  −sin φ  0      ]
      [ sin φ   cos φ  0      ]
      [ 0       0      I_{n−2} ].   (13)

Thus, the matrix A in Step 5 above should be updated to the matrix A_r:

  A_r = B_r^{−1},   (14)

and the matrix B in the single-objective optimization problems of Steps 5 and 6 should be updated to the matrix B_r:

  B_r = R B.   (15)

Step 8. Eliminate local Pareto solutions by a filtering procedure.
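The rotation-matrix construction described above can be sketched in pure Python for n = 3; this is a hedged illustration (the helper functions, the clamping of the angle, and the use of D T Dᵀ for an orthogonal D are our assumptions), where we verify numerically that the constructed R indeed maps l_0 onto l:

```python
# Sketch of the rotation matrix construction R = D T D^T mapping the
# unit vector l0 onto l (both unit length): D comes from Gram-Schmidt
# on (l0, l, e1, e2, ...), and T is an elementary rotation by the angle
# between l0 and l in the plane they span. Pure-Python linear algebra
# for n = 3; helper names are illustrative.
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            c = dot(w, b)
            w = [wi - c * bi for wi, bi in zip(w, b)]
        if norm(w) > 1e-12:          # drop linearly dependent seeds
            basis.append([wi / norm(w) for wi in w])
    return basis

def rotation_matrix(l0, l):
    n = len(l0)
    seed = [l0, l] + [[1.0 if i == k else 0.0 for i in range(n)]
                      for k in range(n)]
    d = gram_schmidt(seed)                    # columns of D
    phi = math.acos(max(-1.0, min(1.0, dot(l0, l))))
    t = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    t[0][0], t[0][1] = math.cos(phi), -math.sin(phi)
    t[1][0], t[1][1] = math.sin(phi), math.cos(phi)
    # R = D T D^T  (D orthogonal, columns d[k])
    return [[sum(d[a][i] * t[a][b] * d[b][j]
                 for a in range(n) for b in range(n))
             for j in range(n)] for i in range(n)]

l0 = [1 / math.sqrt(3)] * 3
l = [0.0, 0.0, 1.0]
r = rotation_matrix(l0, l)
print([round(sum(r[i][j] * l0[j] for j in range(3)), 6) for i in range(3)])
# -> approximately [0.0, 0.0, 1.0], i.e. R l0 = l
```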
For details of the steps and strategies of the DSD algorithm, the reader is referred to the original papers (Utyuzhnikov et al. 2009; Erfani and Utyuzhnikov 2011).

DSD-II
As a modified DSD algorithm, DSD-II mainly improves the shrinking procedure by a vector-based shrinking strategy. A new vector v is defined as

  v = M_c − M,   (16)

where M_c = f(x), with x to be sought in the shrunk search domain. Thereafter, the new shrinking inequality is given as

  (v · l) / (‖v‖ ‖l‖) ≥ δ.   (17)

By setting the value of δ, the search domain based on each reference point is shrunk. This brings two advantages for the computational efficiency. On the one hand, the shrinking calculation is simplified, so that the complex coordinate-system transformation is avoided. On the other hand, the flipping procedure is eliminated, since the new shrunk search domain already covers both sides of the utopia hyperplane.
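The vector-based shrinking can be sketched as a membership test; this is a hedged illustration assuming the shrinking inequality constrains the cosine of the angle between v = f(x) − M and the search direction l by a threshold δ (the exact form used in Erfani et al. (2013) may differ):

```python
# Hedged sketch of the vector-based shrinking test: a trial point f(x)
# is accepted if the angle between v = f(x) - M and the search
# direction l stays within the cone cos(v, l) >= delta. The cone form
# is an assumption of this sketch.
import math

def in_shrunk_domain(f_x, m, l, delta):
    v = [a - b for a, b in zip(f_x, m)]
    nv = math.sqrt(sum(c * c for c in v))
    nl = math.sqrt(sum(c * c for c in l))
    if nv == 0.0:
        return True  # the reference point itself is trivially inside
    return sum(a * b for a, b in zip(v, l)) / (nv * nl) >= delta

m = [0.5, 0.5]
l = [1.0, 1.0]              # search direction (need not be unit length)
print(in_shrunk_domain([1.0, 1.0], m, l, 0.9))   # along l   -> True
print(in_shrunk_domain([1.5, 0.5], m, l, 0.9))   # 45 deg off -> False
```

Note that the cone is symmetric about M, which is consistent with the remark that the shrunk domain covers both sides of the utopia hyperplane without flipping.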
As to the rotation strategy, DSD-II proposes a modified utopia hyperplane to remove the rotation procedure and further enhance the efficiency. However, this is achieved at the cost of some increase in computational time in comparison with the original DSD algorithm. The reader is referred to Erfani et al. (2013) for details of the modified utopia hyperplane generation.

A modified rotation strategy: DSD-III
On the basis of the original DSD algorithm, two modifications are proposed to improve the efficiency as well as to generate a better-distributed Pareto set: 1. With the use of dichotomy, the efficiency of searching for the edge Pareto points is enhanced. 2. Based on specifically calculated rotation angles, the evenness of the Pareto set is improved.
The proposed approach, which is based on the original DSD algorithm with the modified rotation strategy, is called DSD-III. The method is efficient when the rotation strategy is needed: it is more effective because a dichotomous method is used rather than a passive search. The evenness can also be better, because additional Pareto points are inserted into gaps that the original DSD method is not able to fill well enough. It is worth noting that DSD-III can straightforwardly incorporate the non-flipping approach from DSD-II.

Searching for the edge Pareto points
The major steps of searching for the edge Pareto points with the use of dichotomy are illustrated in Fig. 3.
Step 1. Set the original search interval of the rotation angle θ to [0, π/2].
Step 2. Calculate the midpoint of the current search interval of θ:

  θ_mid = (θ_l + θ_u) / 2,   (18)

where θ_l and θ_u are the lower and upper bounds of the search interval, respectively.

Step 3. Calculate the matrix B_r based on θ_mid according to (7)–(15). Then, solve the single-objective optimization problem (5), or (6) if the search domain needs to be flipped.
Step 4. Update the search interval of θ. If no feasible solution is found, then the upper bound of the search interval should be updated as θ_u = θ_mid. Otherwise, the lower bound should be updated as θ_l = θ_mid.
Step 5. Calculate the length of the updated search interval: Δθ = θ_u − θ_l.
Step 6. Check convergence. If the interval length Δθ is smaller than a prescribed tolerance, then the search for the edge Pareto point is complete, and the edge rotation angle is θ_e = θ_l. The edge Pareto point based on the current reference point can then be found according to the rotation angle θ_e. Otherwise, another iteration is required.
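The dichotomous search of Steps 1–6 can be sketched as a standard bisection on the rotation angle, with the feasibility of the rotated subproblem abstracted as a black-box callable (an assumption of this sketch; in the method it would wrap the single-objective optimization of Step 3):

```python
# Sketch of the dichotomous search of Steps 1-6: `feasible(theta)`
# returns True if the rotated subproblem has a solution for rotation
# angle theta; in the method it wraps the optimization of Step 3.
import math

def edge_rotation_angle(feasible, tol=1e-3):
    theta_l, theta_u = 0.0, math.pi / 2          # Step 1
    while theta_u - theta_l > tol:               # Steps 5-6
        theta_mid = 0.5 * (theta_l + theta_u)    # Step 2
        if feasible(theta_mid):                  # Steps 3-4
            theta_l = theta_mid
        else:
            theta_u = theta_mid
    return theta_l                               # edge angle theta_e

# Toy feasibility: solutions exist up to a (hypothetical) angle of 1.0 rad.
theta_e = edge_rotation_angle(lambda th: th <= 1.0)
print(round(theta_e, 2))  # close to 1.0
```

With a tolerance of ε, the number of optimization subproblems solved is about log2((π/2)/ε), whereas a passive step-by-step search would need (π/2)/ε subproblems for the same accuracy; this is the source of the efficiency gain.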

Improve the evenness of the Pareto set
Based on an edge reference point, a Pareto point, denoted as P_so, can first be acquired without rotation (Fig. 4). Then, with the use of the rotation strategy, an edge Pareto point P_se can be obtained. Thus, another definition is introduced.

Definition 7 Edge point distance: The distance |P_se P_so| between the edge Pareto point P_se and the Pareto point P_so obtained from the same edge reference point without rotation.
In some cases, the edge point distance of an edge Pareto point is so large that the Pareto set is not quasi-evenly distributed. In this case, more Pareto points based on the same reference point are required, obtained at specifically calculated rotation angles.
First of all, the number of Pareto points to be added, denoted as n_a, should be determined as follows:

  n_a = |P_se P_so| / (η_d d_np),   (19)

where d_np is the distance between two nearby Pareto points already obtained before rotating the search domain; assuming P_s1 and P_s2 are a pair of nearby Pareto points, d_np is calculated by

  d_np = |P_s1 P_s2|,   (20)

and η_d is a tuning parameter. By varying the value of η_d, the number of added Pareto points is adjusted to guarantee the even distribution of the Pareto set. The suggested value of η_d is from 0.75 to 0.9. It is also worth noting that n_a should be rounded to an integer.

Consider the geometrical relationship among the reference point M and the Pareto points P_so and P_se depicted in Fig. 5. The specific rotation angles θ_a = (θ_a1, θ_a2, ..., θ_a(n_a−1)) can then be determined from the triangle M–P_so–P_sai by the law of sines:

  sin θ_ai = |P_sai P_so| sin β_so / |P_sai M|,   (21)

where |P_sai P_so| is defined as

  |P_sai P_so| = (i / n_a) |P_se P_so|   (i = 1, ..., n_a − 1),   (22)

β_so is the angle at P_so between the directions towards M and towards P_se, given as

  cos β_so = (M − P_so) · (P_se − P_so) / ( |M P_so| |P_se P_so| ),   (23)

and |P_sai M| is calculated by the law of cosines as

  |P_sai M|² = |M P_so|² + |P_sai P_so|² − 2 |M P_so| |P_sai P_so| cos β_so.   (24)

Finally, by rotating the search domain according to θ_a, the corresponding Pareto points can be acquired. With their addition, it becomes possible to strengthen the evenness of the Pareto set.
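A hedged sketch of how the added rotation angles might be computed from the triangle formed by M, P_so, and the added points P_sai (Fig. 5): we assume the added points are spaced evenly along the segment from P_so to P_se and apply the laws of cosines and sines; all of these geometric assumptions are ours, not verbatim from the paper.

```python
# Hedged sketch of the added-angle computation: the added points are
# assumed evenly spaced along the segment P_so P_se, and each rotation
# angle follows from the triangle M-P_so-P_sai by the law of cosines
# and the law of sines. The geometric assumptions are ours.
import math

def added_rotation_angles(d_so_se, d_m_so, beta_so, n_a):
    """d_so_se: |P_so P_se|, d_m_so: |M P_so|, beta_so: angle at P_so."""
    angles = []
    for i in range(1, n_a):
        chord = i * d_so_se / n_a                       # |P_sai P_so|
        d_m_ai = math.sqrt(d_m_so**2 + chord**2         # law of cosines
                           - 2 * d_m_so * chord * math.cos(beta_so))
        theta_ai = math.asin(chord * math.sin(beta_so)  # law of sines
                             / d_m_ai)
        angles.append(theta_ai)
    return angles

# Right-angle toy case: beta_so = pi/2, so theta_ai = atan(chord / |M P_so|).
print([round(t, 4) for t in added_rotation_angles(2.0, 1.0, math.pi / 2, 2)])
# -> [0.7854], i.e. pi/4
```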

Test cases
The DSD-III method is validated on four test cases: three numerical cases and one engineering case. The results obtained by our approach are compared against the original DSD algorithm with regard to the time required for the whole optimization process and the distribution of the Pareto set.
In order to describe the evenness of the Pareto set mathematically, a coefficient of evenness needs to be defined (Utyuzhnikov et al. 2009). For an ith Pareto point P_s^i (except the anchor points) in the Pareto set, the vector of distances between it and the other Pareto points is given by

  d^i = ( |P_s^i P_s^j| ),   j = 1, ..., n_p, j ≠ i,   (25)

where n_p is the number of Pareto points (including the n anchor points) contained in the Pareto set. Then, the effective distance vector d_eff^i for P_s^i is defined as

  d_eff^i = ( 1st min(d^i), 2nd min(d^i), ..., nth min(d^i) ),   (26)

where nth min(d^i) is the nth smallest value of d^i. Then, the coefficient of evenness E is determined as

  E = max_i max(d_eff^i) / min_i min(d_eff^i).   (27)

If E = 1, then the Pareto set is completely even. As E increases, the evenness of the Pareto set deteriorates.
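One way to compute such an evenness coefficient is sketched below; this is our simplified reading (anchor-point handling is omitted, and the effective distances are pooled over all points before taking the extreme ratio):

```python
# Sketch of an evenness coefficient: for each Pareto point, take the
# n smallest distances to the other points as its effective distances,
# then divide the largest effective distance over the set by the
# smallest. E = 1 for a perfectly even set; anchor handling omitted.
import math

def evenness(points, n):
    eff = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q)
                   for j, q in enumerate(points) if j != i)
        eff.extend(d[:n])                    # n smallest distances
    return max(eff) / min(eff)

# Collinear points: even spacing -> E = 1; uneven spacing -> E > 1.
even = [(float(k), 0.0) for k in range(5)]
print(evenness(even, 1))                                        # -> 1.0
print(evenness([(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)], 1) > 1.0)  # -> True
```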
Throughout the multiobjective optimization process, MATLAB R2015a is used for simulation. The fmincon optimization function serves as the underlying optimizer, with the active-set algorithm chosen as the optimization algorithm.

Numerical test cases
First, we consider three numerical multiobjective optimization cases.

Case 1
Case 1 is a relatively simple three-dimensional test case with a concave Pareto frontier.
The Pareto sets acquired with DSD, DSD-II, and DSD-III are depicted in Figs. 6, 7, and 8, respectively, where Δα_i is the step used for varying α_i in (2). A comparison among the three algorithms on the evenness coefficient E, as well as the required computational time t, is presented in Table 1. Here, n_p is the number of Pareto points obtained, and η_2 and η_3 are the ratios of the DSD-II and DSD-III computational times to that of DSD, respectively. To compare DSD-III with DSD-II, the parameter η_{3,2} is introduced as the ratio of the DSD-III computational time to that of DSD-II.
Compared with the DSD algorithm, the DSD-III approach drastically reduces the computing time, by about 65%–85%. Meanwhile, it also improves the coefficient of evenness E from 6–25 to only 1.5–3. Thus, a more even Pareto set is obtained by DSD-III. In DSD-III, as the step Δα_i decreases, more reference points are generated; hence, more Pareto points are needed to provide a quasi-even distribution of the Pareto set.
Compared with DSD-II, the DSD-III approach cuts the computational time by nearly 30%–55%. The efficiency advantage of DSD-III increases as the step Δα_i decreases, because more redundant solutions are generated by DSD-II. It is worth noting that the coefficient of evenness E remains at approximately the same level of about 1.5–3 for both DSD-II and DSD-III.

Case 2
Case 2 is the three-dimensional test case DTLZ2 (Abraham and Jain 2005) with a concave Pareto frontier, which is somewhat more complex than Case 1.
The Pareto sets generated with DSD, DSD-II, and DSD-III are shown in Figs. 9, 10 and 11, respectively. Comparison among the three methods on the optimization performance is presented in Table 2.
In Case 2, DSD-III decreases the optimization time by about 30%–60% compared with DSD, whilst reducing the coefficient of evenness E from 5–19 to less than 2.5. Similar to Case 1, the smaller the step Δα_i, the better the evenness of the generated Pareto set.
In this test case, when Δα_i is equal to or larger than 0.05, DSD-II performs better with respect to the computational time, mainly because not so many redundant solutions are generated under this circumstance. However, as Δα_i drops below 0.05, DSD-III works faster. As to the coefficient of evenness, DSD-III slightly outperforms DSD-II: it improves E by about 0.4–1, while both methods maintain it at a high level, from about 1.5 to 2.5.

Case 3
Case 3 is a three-dimensional test case with a number of constraints and more complex objective functions. In Case 3, DSD-II turned out to be too sensitive to the initial values of the design variables. Therefore, only the Pareto sets obtained with DSD and DSD-III are shown in Figs. 12 and 13, respectively. A comparison between these two approaches on the optimization performance is presented in Table 3.
In Case 3, the DSD-III approach drastically cuts the optimization time, by about 85%, while generating a Pareto set with better evenness than the original DSD algorithm.

Engineering design problem
In this case, the four-bar truss multiobjective optimization problem is analyzed (Koski 1988). The truss scheme is depicted in Fig. 14. The magnitude of the suspended load F is 10 kN.
subject to 0 ≤ A_i ≤ 5 cm² (i = 1, 2, 3, 4) and |σ_i| ≤ 10 kN/cm² (i = 1, 2, 3, 4). In the above problem, A_i (i = 1, 2, 3, 4) is the cross-sectional area of bar i, σ_i (i = 1, 2, 3, 4) is the stress in bar i, and V is the volume of the structure. The objective is to design a four-bar truss with optimal stresses in Bars 1 and 4 and optimal volume of the structure. The Pareto sets obtained with DSD, DSD-II, and DSD-III are shown in Figs. 15, 16, and 17, respectively. The three approaches are compared with each other in Table 4.
In this engineering design problem, DSD-III reduces the computational time by nearly 50%–80% compared with DSD. Furthermore, the evenness coefficient E is significantly improved, from about 6–15 to 3–4.5.
Compared with DSD-II, DSD-III largely cuts the computational time, by nearly 55%–70%. Both algorithms maintain the evenness coefficient E at the same level, from about 3 to 4.5.
It is worth noting that when Δα_i is equal to or smaller than 0.025, the computational efficiency of DSD-II is worse than that of DSD. This is because a fairly large number of redundant solutions are generated in this case.

Conclusion
A modified rotation strategy for the DSD algorithm, the DSD-III method, has been proposed. The first of the two modifications is to search for the Pareto points located on the edge of the Pareto frontier with the use of the dichotomy method. Instead of changing the rotation angle step by step, the DSD-III approach directly finds the edge Pareto point from the rotation angle acquired by dichotomy, which saves overall optimization time. The other modification is to insert a number of Pareto points to improve the evenness of the Pareto set; these Pareto points are obtained at specific rotation angles calculated in a particular way.
The new technique has been tested on different cases, including three numerical cases and one engineering case. Its performance has been compared with that of the original DSD and DSD-II algorithms. Compared with DSD, DSD-III always performs better with respect to both the computational time and the evenness of the Pareto set. As the density of the Pareto set increases, the benefit of DSD-III in enhancing the evenness grows. Both the DSD-II and DSD-III approaches improve the evenness of the Pareto set over DSD to approximately the same extent. As to the computational efficiency, DSD-III generally performs better than DSD-II. In particular, the DSD-III algorithm can be recommended for problems in which the orthogonal projection of the Pareto surface onto the utopia hyperplane far exceeds the utopia polygon.