A benchmark test suite for evolutionary many-objective optimization
Abstract
In the real world, it is not uncommon to face an optimization problem with more than three objectives. Such problems, called many-objective optimization problems (MaOPs), pose great challenges to the area of evolutionary computation. The failure of conventional Pareto-based multi-objective evolutionary algorithms in dealing with MaOPs has motivated various new approaches. However, in contrast to the rapid development of algorithm design, the performance investigation and comparison of algorithms have received little attention: several test suites that were designed for multi-objective optimization are still dominantly used in many-objective optimization. In this paper, we carefully select (or modify) 15 test problems with diverse properties to construct a benchmark test suite, aiming to promote the research of evolutionary many-objective optimization (EMaO) by suggesting a set of test problems that represents various real-world scenarios well. Also, an open-source software platform with a user-friendly GUI is provided to facilitate experimental execution and data observation.
Keywords
Many-objective optimization · Benchmark test suite · Test functions · Software platform
Introduction
Table 1 Main properties of the 15 test functions
Problem  Properties  Note

MaF1  Linear  No single optimal solution in any subset of objectives
MaF2  Concave  No single optimal solution in any subset of objectives
MaF3  Convex, multimodal
MaF4  Concave, multimodal  Badly scaled; no single optimal solution in any subset of objectives
MaF5  Convex, biased  Badly scaled
MaF6  Concave, degenerate
MaF7  Mixed, disconnected, multimodal
MaF8  Linear, degenerate
MaF9  Linear, degenerate  Pareto optimal solutions are similar to their image in the objective space
MaF10  Mixed, biased
MaF11  Convex, disconnected, non-separable
MaF12  Concave, non-separable, biased, deceptive
MaF13  Concave, unimodal, non-separable, degenerate  Complex Pareto set
MaF14  Linear, partially separable, large-scale  Non-uniform correlations between decision variables and objective functions
MaF15  Convex, partially separable, large-scale  Non-uniform correlations between decision variables and objective functions
The field of evolutionary multi-objective optimization has developed rapidly over the last two decades, but the design of effective algorithms for problems with more than three objectives (called many-objective optimization problems, MaOPs) remains a great challenge. First, the ineffectiveness of the Pareto dominance relation, the most important criterion in multi-objective optimization, results in the underperformance of traditional Pareto-based algorithms. Also, the aggravated conflict between convergence and diversity, along with increasing time or space requirements as well as parameter sensitivity, has become a key barrier to the design of effective many-objective optimization algorithms. Furthermore, the infeasibility of directly observing solutions in a high-dimensional objective space causes serious difficulties in investigating and comparing algorithms' performance. All of this suggests a pressing need for new methodologies for dealing with MaOPs, and for new performance metrics and benchmark functions tailored for experimental and comparative studies of evolutionary many-objective optimization (EMaO) algorithms.
Benchmark functions play an important role in understanding the strengths and weaknesses of evolutionary algorithms. In many-objective optimization, several scalable continuous benchmark suites, such as DTLZ [9] and WFG [10], have been commonly used. Recently, researchers have also designed some problem suites specifically for many-objective optimization [11, 12, 13, 14, 15, 16]. However, each of these suites represents only one or a few aspects of real-world scenarios; a set of benchmark functions with diverse properties for a systematic study of EMaO algorithms is not available in the area. On the other hand, existing benchmark functions typically have a "regular" Pareto front, overemphasize one specific property within a suite, or have properties that appear rarely in real-world problems [17]. For example, the Pareto front of most of the DTLZ and WFG functions is similar to a simplex. This may favor decomposition-based algorithms, which often use a set of uniformly distributed weight vectors in a simplex to guide the search [7, 18]. This simplex-like Pareto front shape also causes an unusual property: any subset of all objectives of the problem can reach optimality [17, 19]. This can be very problematic in the context of objective reduction, since the Pareto front degenerates into a single point when one objective is omitted [19]. Also, none of the DTLZ and WFG functions has a convex Pareto front, yet a convex Pareto front may bring more difficulty than a concave one for decomposition-based algorithms in terms of maintaining solution uniformity [20]. In addition, the DTLZ and WFG functions that are used as MaOPs with a degenerate Pareto front (i.e., DTLZ5, DTLZ6 and WFG3) have a non-degenerate part of the Pareto front when the number of objectives is larger than four [10, 21, 22]. This naturally affects the performance investigation of evolutionary algorithms on degenerate MaOPs.
This paper carefully selects/designs 15 test problems to construct a benchmark test suite for evolutionary many-objective optimization. The 15 benchmark problems have diverse properties that together represent a broad range of real-world scenarios: being multimodal, disconnected, degenerate, and/or non-separable, and having an irregular Pareto front shape, a complex Pareto set, or a large number of decision variables (as summarized in Table 1). Our aim is to promote the research of evolutionary many-objective optimization by suggesting a set of benchmark functions that represents these various real-world scenarios well. Also, an open-source software platform with a user-friendly GUI is provided to facilitate experimental execution and data observation. In the following, Sect. "Function definitions" details the definitions of the 15 benchmark functions, and Sect. "Experimental setup" presents the experimental setup for benchmark studies, including general settings, performance indicators, and the software platform.
Function definitions

D: number of decision variables

M: number of objectives

\(\mathbf {x} = (x_1, x_2, \ldots , x_D)\): decision vector

\(f_i\): ith objective function
MaF1 (modified inverted DTLZ1 [23])
MaF2 (DTLZ2BZ [19])
MaF3 (convex DTLZ3 [5])
MaF4 (inverted badly scaled DTLZ3)
MaF5 (convex badly scaled DTLZ4)
MaF6 (DTLZ5(I,M) [24])
MaF7 (DTLZ7 [9])
MaF8 (multi-point distance minimization problem [11, 12])
In this test suite, a regular polygon is used (to unify with MaF9). The center of the regular polygon (i.e., of the Pareto optimal region) is at coordinates (0, 0), and the radius of the polygon (i.e., the distance from each vertex to the center) is 1.0. The parameter setting is \(\mathbf {x}\in [-10{,}000, 10{,}000]^2\). Figure 8 shows the Pareto optimal regions of the three- and ten-objective MaF8.
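As an illustration, the distance-based structure of MaF8 can be sketched in a few lines of Python (the suite itself is implemented in MATLAB within PlatEMO; the function names here and the placement of the polygon vertexes at evenly spaced angles on the unit circle are illustrative assumptions, not part of the official definition):

```python
import math

def polygon_vertices(m, radius=1.0):
    """Vertexes of a regular m-gon centred at the origin.

    Placing the vertexes at angles 2*pi*i/m is an illustrative choice; any
    regular polygon with circumradius 1.0 matches the description above.
    """
    return [(radius * math.cos(2 * math.pi * i / m),
             radius * math.sin(2 * math.pi * i / m)) for i in range(m)]

def maf8(x, m):
    """MaF8-style objectives: the Euclidean distance from the 2-D decision
    vector x to each of the m polygon vertexes (all to be minimized)."""
    return [math.hypot(x[0] - ax, x[1] - ay) for ax, ay in polygon_vertices(m)]
```

At the polygon's center all m objectives equal the circumradius 1.0, and each vertex attains the minimum (zero) of its own objective, which makes the polygon's interior the Pareto optimal region.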
MaF9 (multi-line distance minimization problem [25])
One key characteristic of MaF9 is that the points in the regular polygon (including its boundary) and their objective images are similar in the sense of Euclidean geometry [25]. In other words, the ratio of the distance between any two points in the polygon to the distance between their corresponding objective vectors is constant. This allows a straightforward understanding of the distribution of the objective vector set (e.g., its uniformity and coverage over the Pareto front) by observing the solution set in the two-dimensional decision space. In addition, for MaF9 with an even number of objectives (\(M = 2k\), where \(k \ge 2\)), there exist k pairs of parallel target lines. Any point (outside the regular polygon) residing between a pair of parallel target lines is dominated only by the points of a line segment parallel to these two lines. This property can pose a great challenge, in terms of convergence, for EMaO algorithms that use Pareto dominance as the sole selection criterion, typically leaving their populations trapped between these parallel lines [14].
For MaF9, all points inside the polygon are Pareto optimal solutions. However, they may not be the sole Pareto optimal solutions of the problem. If two target lines intersect outside the regular polygon, there exist areas whose points are non-dominated with respect to the interior points of the polygon. Such areas clearly exist in problems with five or more objectives, given the convexity of the considered polygon. However, the geometric similarity holds only for points inside the regular polygon, and Pareto optimal solutions located outside the polygon would break this similarity property. We therefore make some regions of the search space infeasible. Formally, consider an M-objective MaF9 with a regular polygon of vertexes \((A_1,A_2,\ldots ,A_M)\). For any two target lines \(\overleftrightarrow {A_{i-1}A_i}\) and \(\overleftrightarrow {A_{n}A_{n+1}}\) (without loss of generality, assuming \(i<n\)) that intersect at a point O outside the considered regular polygon, we can construct a polygon (denoted as \(\Phi _{A_{i-1}A_iA_{n}A_{n+1}}\)) bounded by a set of \(2(n-i)+2\) line segments: \(\overline{A_iA_n'}, \overline{A_n'A_{n-1}'}, \ldots , \overline{A_{i+1}'A_i'}, \overline{A_i'A_n}, \overline{A_nA_{n-1}}, \ldots , \overline{A_{i+1}A_i}\), where the points \(A_i', A_{i+1}',\ldots , A_{n-1}', A_n'\) are the symmetric points of \(A_i, A_{i+1},\ldots , A_{n-1}, A_n\) with respect to the central point O. We constrain the search space of the problem to lie outside such polygons (excluding their boundaries). The points inside the regular polygon are then the sole Pareto optimal solutions of the problem. In the implementation of the test problem, newly produced individuals that fall into the constrained areas are simply regenerated within the given search space until they are feasible.
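A minimal sketch of the line-distance computation behind MaF9, together with a numerical check of the geometric-similarity property for interior points, might look as follows (illustrative Python with hypothetical names; target line i is taken to pass through the adjacent vertexes \(A_i\) and \(A_{i+1}\), and the vertex placement is the same assumed one as for MaF8):

```python
import math

def polygon_vertices(m, radius=1.0):
    """Vertexes of a regular m-gon centred at the origin (illustrative placement)."""
    return [(radius * math.cos(2 * math.pi * i / m),
             radius * math.sin(2 * math.pi * i / m)) for i in range(m)]

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    return abs((by - ay) * (px - ax) - (bx - ax) * (py - ay)) / math.hypot(bx - ax,
                                                                           by - ay)

def maf9(x, m):
    """MaF9-style objectives: the distance from the 2-D point x to each of the
    m target lines, where line i passes through two adjacent polygon vertexes."""
    v = polygon_vertices(m)
    return [point_line_distance(x, v[i], v[(i + 1) % m]) for i in range(m)]
```

For any two points inside the polygon, the ratio between their objective-space distance and their decision-space distance comes out the same, which is exactly the similarity property described above.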
MaF10 (WFG1 [10])
MaF11 (WFG2 [10])
MaF12 (WFG9 [10])
MaF13 (PF7 [13])
MaF14 (LSMOP3 [16])
MaF15 (inverted LSMOP8 [16])
Experimental setup
To conduct benchmark experiments using the proposed test suite, users may follow the experimental setup as given below.
General settings

Number of objectives (M): 5, 10, 15

Maximum population size:^{1} \(25\times M\)

Maximum number of fitness evaluations (FEs):^{2} \(\max \{100{,}000,\ 10{,}000\times D\}\)

Number of independent runs: 31
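These settings can be collected in a small helper so that experiments stay consistent across problems and objective numbers; the function and key names below are illustrative, not part of the suite:

```python
def suggested_settings(m, d):
    """Suggested benchmark settings from the test-suite protocol above.

    m: number of objectives (M), d: number of decision variables (D).
    The helper itself is a hypothetical convenience, not part of the suite.
    """
    return {
        "population_size": 25 * m,                    # maximum population size
        "max_evaluations": max(100_000, 10_000 * d),  # fitness-evaluation budget
        "runs": 31,                                   # independent runs
    }
```

For example, a 10-objective instance uses a maximum population of 250, and a problem with fewer than 10 decision variables still receives the 100,000-evaluation floor.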
Performance metrics
Inverted generational distance (IGD): Let \(P^*\) be a set of uniformly distributed points on the Pareto front, and let P be an approximation of the Pareto front. The inverted generational distance between \(P^*\) and P is defined as
$$\begin{aligned} \mathrm{IGD}(P^*,P) = \frac{\sum _{\mathbf {v} \in P^*}d(\mathbf {v},P)}{|P^*|}, \end{aligned}$$
(51)
where \(d(\mathbf {v},P)\) is the minimum Euclidean distance from the point \(\mathbf {v}\) to the set P. The IGD metric can measure both the diversity and the convergence of P if \(P^*\) is large enough, and a smaller IGD value indicates better performance. In this test suite, we suggest 10,000 uniformly distributed reference points sampled on the true Pareto front^{3} for each test instance.
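The IGD formula translates directly into code. The following Python sketch (with illustrative names; PlatEMO ships its own indicator implementations) computes it for small point sets:

```python
import math

def igd(reference, approximation):
    """Inverted generational distance: the mean, over the reference points,
    of the minimum Euclidean distance to the approximation set."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    # for each reference point, find its nearest approximation point
    return sum(min(dist(r, p) for p in approximation)
               for r in reference) / len(reference)
```

A perfect approximation (every reference point matched exactly) gives an IGD of zero; missing part of the front inflates the average distance.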

Hypervolume (HV): Let \(\mathbf {y}^* = (y_1^*, \ldots , y_M^*)\) be a reference point in the objective space that is dominated by all Pareto optimal solutions, and let P be an approximation of the Pareto front. The HV value of P (with regard to \(\mathbf {y}^*\)) is the volume of the region that is dominated by P and dominates \(\mathbf {y}^*\). In this test suite, the objective vectors in P are first normalized using \(f^j_i = \frac{f^j_i}{1.1 \times y_i^{\mathrm{nadir}}}\), where \(f^j_i\) is the ith dimension of the jth objective vector and \(y_i^{\mathrm{nadir}}\) is the ith dimension of the nadir point of the true Pareto front.^{4} We then use \(\mathbf {y}^* = (1,\ldots ,1)\) as the reference point for the normalized objective vectors in the HV calculation.
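For the two-objective case, the normalization step and the HV value can be sketched as follows (illustrative Python; PlatEMO provides its own HV implementation, and HV in higher dimensions requires more involved algorithms):

```python
def normalize(points, nadir):
    """Divide each objective by 1.1 times the corresponding nadir value,
    as suggested for this test suite."""
    return [[f / (1.1 * n) for f, n in zip(p, nadir)] for p in points]

def hv_2d(points, ref=(1.0, 1.0)):
    """Exact hypervolume for a two-objective minimization problem: the area
    dominated by `points` that also dominates the reference point `ref`."""
    # keep only points that dominate the reference point, sweep by first objective
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    volume, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # point is non-dominated w.r.t. those already swept
            volume += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return volume
```

With the suite's normalization, the reference point \((1,\ldots,1)\) sits 10% beyond the nadir point in every objective, so boundary solutions still contribute volume.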
Software platform
All the benchmark functions have been implemented in MATLAB and embedded in a recently developed software platform, PlatEMO.^{5} PlatEMO is an open-source MATLAB-based platform for evolutionary multi- and many-objective optimization, which currently includes more than 50 representative algorithms and more than 100 benchmark functions, along with a variety of widely used performance indicators. Moreover, PlatEMO provides a user-friendly graphical user interface (GUI), which enables users to easily configure experiments and algorithms and to obtain statistical experimental results with one-click operations.
In particular, as shown in Fig. 16, we have tailored a new GUI in PlatEMO for this test suite, so that participants can directly obtain tables and figures comprising the statistical experimental results for the test suite. To conduct the experiments, participants only need to write their candidate algorithms in MATLAB and embed them into PlatEMO. A detailed introduction on how to embed new algorithms can be found in the user's manual attached to the source code of PlatEMO [26]. Once a new algorithm is embedded in PlatEMO, the user can select and execute it from the GUI shown in Fig. 16. The statistical results are then displayed in the figures and tables on the GUI, and the corresponding experimental result of each run (i.e., the final population and its performance indicator values) is saved to a .mat file.
Footnotes
 1.
The size of the final population/archive must not exceed the given maximum population size; otherwise, a compulsory truncation is applied in the final statistics for a fair comparison.
 2.
Regardless of the number of objectives, every evaluation of the whole objective set is counted as one FE.
 3.
The specific number of reference points for IGD calculation may vary slightly due to the different geometries of the Pareto fronts. All reference point sets can be automatically generated using the software platform introduced in Sect. "Software platform".
 4.
The nadir points can be automatically generated using the software platform introduced in Sect. “Software platform”.
 5.
PlatEMO can be downloaded at http://bimk.ahu.edu.cn/index.php?s=/Index/Software/index.html.
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China under Grant 61590922, and by the Engineering and Physical Sciences Research Council of the U.K. under Grants EP/M017869/1, EP/K001310/1, and EP/K001523/1.
References
1. Li B, Li J, Tang K, Yao X (2015) Many-objective evolutionary algorithms: a survey. ACM Comput Surv 48(1):13
2. Yang S, Li M, Liu X, Zheng J (2013) A grid-based evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 17(5):721–736
3. Zhang X, Tian Y, Jin Y (2015) A knee point driven evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 19(6):761–776
4. Wang H, Jiao L, Yao X (2015) Two_Arch2: an improved two-archive algorithm for many-objective optimization. IEEE Trans Evol Comput 19(4):524–541
5. Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601
6. Li K, Zhang Q, Kwong S (2015) An evolutionary many-objective optimization algorithm based on dominance and decomposition. IEEE Trans Evol Comput 19(5):694–716
7. Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) A reference vector guided evolutionary algorithm for many-objective optimization. IEEE Trans Evol Comput 20(5):773–791
8. Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evol Comput 19(1):45–76
9. Deb K, Thiele L, Laumanns M, Zitzler E (2005) Scalable test problems for evolutionary multiobjective optimization. In: Abraham A, Jain L, Goldberg R (eds) Evolutionary multiobjective optimization. Theoretical advances and applications. Springer, Berlin, pp 105–145
10. Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506
11. Köppen M, Yoshida K (2007) Substitute distance assignments in NSGA-II for handling many-objective optimization problems. In: Evolutionary multi-criterion optimization (EMO), pp 727–741
12. Ishibuchi H, Hitotsuyanagi Y, Tsukamoto N, Nojima Y (2010) Many-objective test problems to visually examine the behavior of multiobjective evolution in a decision space. In: International Conference on Parallel Problem Solving from Nature (PPSN), pp 91–100
13. Saxena D, Zhang Q, Duro J, Tiwari A (2011) Framework for many-objective test problems with both simple and complicated Pareto-set shapes. In: Evolutionary multi-criterion optimization (EMO), pp 197–211
14. Li M, Yang S, Liu X (2014) A test problem for visual investigation of high-dimensional multi-objective search. In: IEEE Congress on Evolutionary Computation (CEC), pp 2140–2147
15. Cheung YM, Gu F, Liu HL (2016) Objective extraction for many-objective optimization problems: algorithm and test problems. IEEE Trans Evol Comput 20(5):755–772
16. Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) Test problems for large-scale multiobjective and many-objective optimization. IEEE Trans Cybern (in press)
17. Masuda H, Nojima Y, Ishibuchi H (2016) Common properties of scalable multiobjective problems and a new framework of test problems. In: IEEE Congress on Evolutionary Computation (CEC), pp 3011–3018
18. Cheng R, Jin Y, Narukawa K (2015) Adaptive reference vector generation for inverse model based evolutionary multiobjective optimization with degenerate and disconnected Pareto fronts. In: Proceedings of the International Conference on Evolutionary Multi-Criterion Optimization. Springer, pp 127–140
19. Brockhoff D, Zitzler E (2009) Objective reduction in evolutionary multiobjective optimization: theory and applications. Evol Comput 17(2):135–166
20. Li M, Yang S, Liu X (2016) Pareto or non-Pareto: bi-criterion evolution in multiobjective optimization. IEEE Trans Evol Comput 20(5):645–665
21. Saxena D, Duro J, Tiwari A, Deb K, Zhang Q (2013) Objective reduction in many-objective optimization: linear and nonlinear algorithms. IEEE Trans Evol Comput 17(1):77–99
22. Ishibuchi H, Masuda H, Nojima Y (2016) Pareto fronts of many-objective degenerate test problems. IEEE Trans Evol Comput 20(5):807–813
23. Jain H, Deb K (2014) An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: handling constraints and extending to an adaptive approach. IEEE Trans Evol Comput 18(4):602–622
24. Deb K, Saxena DK (2006) Searching for Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems. In: IEEE Congress on Evolutionary Computation (CEC), pp 3353–3360
25. Li M, Grosan C, Yang S, Liu X, Yao X (2017) Multi-line distance minimization: a visualized many-objective test problem suite. IEEE Trans Evol Comput (in press)
26. Tian Y, Cheng R, Zhang X, Jin Y (2016) PlatEMO: a MATLAB platform for evolutionary multi-objective optimization. IEEE Comput Intell Mag (under review)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.