Complex & Intelligent Systems, Volume 3, Issue 1, pp 67–81

A benchmark test suite for evolutionary many-objective optimization

  • Ran Cheng
  • Miqing Li
  • Ye Tian
  • Xingyi Zhang
  • Shengxiang Yang
  • Yaochu Jin
  • Xin Yao
Open Access
Original Article

Abstract

In the real world, it is not uncommon to face an optimization problem with more than three objectives. Such problems, called many-objective optimization problems (MaOPs), pose great challenges to the area of evolutionary computation. The failure of conventional Pareto-based multi-objective evolutionary algorithms in dealing with MaOPs has motivated various new approaches. However, in contrast to the rapid development of algorithm design, performance investigation and comparison of algorithms have received little attention, and several test problem suites originally designed for multi-objective optimization are still dominantly used in many-objective optimization. In this paper, we carefully select (or modify) 15 test problems with diverse properties to construct a benchmark test suite, aiming to promote research on evolutionary many-objective optimization (EMaO) by suggesting a set of test problems with a good representation of various real-world scenarios. Also, an open-source software platform with a user-friendly GUI is provided to facilitate experimental execution and data observation.

Keywords

Many-objective optimization | Benchmark test suite | Test functions | Software platform

Introduction

Table 1

Main properties of the 15 test functions

Problem | Properties | Note
MaF1 | Linear | No single optimal solution in any subset of objectives
MaF2 | Concave | No single optimal solution in any subset of objectives
MaF3 | Convex, multimodal |
MaF4 | Concave, multimodal | Badly scaled; no single optimal solution in any subset of objectives
MaF5 | Convex, biased | Badly scaled
MaF6 | Concave, degenerate |
MaF7 | Mixed, disconnected, multimodal |
MaF8 | Linear, degenerate |
MaF9 | Linear, degenerate | Pareto optimal solutions are similar to their image in the objective space
MaF10 | Mixed, biased |
MaF11 | Convex, disconnected, nonseparable |
MaF12 | Concave, nonseparable, biased, deceptive |
MaF13 | Concave, unimodal, nonseparable, degenerate | Complex Pareto set
MaF14 | Linear, partially separable, large scale | Non-uniform correlations between decision variables and objective functions
MaF15 | Convex, partially separable, large scale | Non-uniform correlations between decision variables and objective functions

The field of evolutionary multi-objective optimization has developed rapidly over the last two decades, but the design of effective algorithms for addressing problems with more than three objectives (called many-objective optimization problems, MaOPs) remains a great challenge. First, the ineffectiveness of the Pareto dominance relation, the most important selection criterion in multi-objective optimization, results in the underperformance of traditional Pareto-based algorithms. Second, the aggravated conflict between convergence and diversity, along with increasing time and space requirements as well as parameter sensitivity, has become a key barrier to the design of effective many-objective optimization algorithms. Furthermore, the infeasibility of directly observing solutions in a high-dimensional objective space causes serious difficulties in investigating and comparing algorithms' performance. All of these suggest a pressing need for new methodologies designed for dealing with MaOPs, as well as new performance metrics and benchmark functions tailored for experimental and comparative studies of evolutionary many-objective optimization (EMaO) algorithms.

In recent years, a number of new algorithms have been proposed for dealing with MaOPs [1], including convergence enhancement based algorithms such as the grid-dominance-based evolutionary algorithm (GrEA) [2], the knee-point-driven evolutionary algorithm (KnEA) [3], and the two-archive algorithm (Two_Arch2) [4]; decomposition-based algorithms such as NSGA-III [5], the evolutionary algorithm based on both dominance and decomposition (MOEA/DD) [6], and the reference vector-guided evolutionary algorithm (RVEA) [7]; and performance indicator-based algorithms such as the fast hypervolume-based evolutionary algorithm (HypE) [8]. Despite the various algorithms proposed for dealing with MaOPs, the literature still lacks a benchmark test suite for evolutionary many-objective optimization.
Fig. 1

The Pareto front of MaF1 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Fig. 2

The Pareto front of MaF2 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Benchmark functions play an important role in understanding the strengths and weaknesses of evolutionary algorithms. In many-objective optimization, several scalable continuous benchmark function suites, such as DTLZ [9] and WFG [10], have been commonly used. Recently, researchers have also designed some problem suites specifically for many-objective optimization [11, 12, 13, 14, 15, 16]. However, each of these problem suites represents only one or a few aspects of real-world scenarios; a set of benchmark functions with diverse properties for a systematic study of EMaO algorithms is not available in the area. On the other hand, existing benchmark functions typically have a "regular" Pareto front, overemphasize one specific property within a problem suite, or have properties that appear rarely in real-world problems [17]. For example, the Pareto front of most of the DTLZ and WFG functions is similar to a simplex. This may favor decomposition-based algorithms, which often use a set of uniformly distributed weight vectors in a simplex to guide the search [7, 18]. This simplex-like shape of the Pareto front also causes an unusual property: any subset of all the objectives of the problem can reach optimality [17, 19]. This property can be very problematic in the context of objective reduction, since the Pareto front degenerates into a single point when one objective is omitted [19]. Also, none of the DTLZ and WFG functions has a convex Pareto front, although a convex Pareto front may bring more difficulty (than a concave one) for decomposition-based algorithms in terms of maintaining solution uniformity [20]. In addition, the DTLZ and WFG functions used as MaOPs with a degenerate Pareto front (i.e., DTLZ5, DTLZ6 and WFG3) have a nondegenerate part of the Pareto front when the number of objectives is larger than four [10, 21, 22]. This naturally affects the performance investigation of evolutionary algorithms on degenerate MaOPs.

This paper carefully selects/designs 15 test problems to construct a benchmark test suite for evolutionary many-objective optimization. The 15 benchmark problems have diverse properties that provide a good representation of various real-world scenarios, such as being multimodal, disconnected, degenerate, and/or nonseparable, and having an irregular Pareto front shape, a complex Pareto set or a large number of decision variables (as summarized in Table 1). Our aim is to promote research on evolutionary many-objective optimization by suggesting a set of benchmark functions with a good representation of various real-world scenarios. Also, an open-source software platform with a user-friendly GUI is provided to facilitate experimental execution and data observation. In the following, Sect. "Function definitions" details the definitions of the 15 benchmark functions, and Sect. "Experimental setup" presents the experimental setup for benchmark studies, including general settings, performance indicators, and the software platform.

Function definitions

  • D: number of decision variables

  • M: number of objectives

  • \(\mathbf {x} = (x_1, x_2, \ldots , x_D)\): decision vector

  • \(f_i\): ith objective function

MaF1 (modified inverted DTLZ1 [23])

$$\begin{aligned} \min \left\{ \begin{array}{l} {f_1}(\mathbf {x}) = (1 - {x_1}\ldots {x_{M - 1}})(1 + g(\mathbf {x}_M)) \\ {f_2}(\mathbf {x}) = (1 - {x_1}\ldots {x_{M - 2}}(1 - {x_{M - 1}}))(1 + g(\mathbf {x}_M)) \\ \quad \quad \quad \ldots \\ {f_{M - 1}}(\mathbf {x}) = (1 - {x_1}(1 - {x_2}))(1 + g(\mathbf {x}_M)) \\ {f_M}(\mathbf {x}) = {x_1} (1 + g(\mathbf {x}_M)) \\ \end{array} \right. \end{aligned}$$
(1)
with
$$\begin{aligned} g(\mathbf {x}_M) = \sum \limits _{i = M}^{|\mathbf {x}|} (x_i - 0.5)^2 \end{aligned}$$
(2)
where the number of decision variables is \(D = M + K - 1\), and K denotes the size of \(\mathbf {x}_M\), namely \(K = |\mathbf {x}_M|\), with \(\mathbf {x}_M = (x_M,\ldots ,x_D)\). As shown in Fig. 1, this test problem has an inverted PF, while the PS is relatively simple. This test problem is used to assess whether EMaO algorithms are capable of dealing with inverted PFs. Parameter settings of this test problem are: \(\mathbf {x} \in [0, 1]^D\) and \(K = 10\) (Fig. 2).
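For concreteness, the definition above can be sketched in NumPy as follows. This is an illustrative sketch, not the suite's official code; the function name and the 0-indexed array convention (the last \(D-M+1\) entries of x form \(\mathbf {x}_M\)) are mine:

```python
import numpy as np

def maf1(x, m):
    """Illustrative sketch of MaF1 (modified inverted DTLZ1), Eqs. (1)-(2)."""
    x = np.asarray(x, dtype=float)
    g = np.sum((x[m - 1:] - 0.5) ** 2)        # g(x_M) over x_M, ..., x_D
    f = np.empty(m)
    for i in range(1, m + 1):                 # objective index i = 1, ..., M
        prod = np.prod(x[: m - i])            # x_1 * ... * x_{M-i} (empty product = 1)
        if i > 1:
            prod *= 1.0 - x[m - i]            # trailing (1 - x_{M-i+1}) factor
        f[i - 1] = (1.0 - prod) * (1.0 + g)
    return f
```

Setting the distance variables to 0.5 makes g vanish, so the returned vector lies on the inverted PF; e.g., maf1([0.2, 0.4, 0.5, 0.5], 3) gives (0.92, 0.88, 0.2).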
Fig. 3

The Pareto front of MaF3 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

MaF2 (DTLZ2BZ [19])

$$\begin{aligned} \min \left\{ \begin{array}{l} {f_1}(\mathbf {x}) = \cos (\theta _1)\ldots \cos (\theta _{M - 2})\cos (\theta _{M - 1}) (1 + g_1(\mathbf {x}_M)) \\ {f_2}(\mathbf {x}) = \cos (\theta _1)\ldots \cos (\theta _{M - 2})\sin (\theta _{M - 1}) (1 {+} g_2(\mathbf {x}_M)) \\ \quad \quad \quad \ldots \\ {f_{M - 1}}(\mathbf {x}) = \cos (\theta _1)\sin (\theta _2) (1 + g_{M - 1}(\mathbf {x}_M)) \\ {f_M}(\mathbf {x}) = \sin (\theta _1) (1 + g_M(\mathbf {x}_M))\\ \end{array} \right. \end{aligned}$$
(3)
with
$$\begin{aligned} g_{i}(\mathbf {x}_{M})= & {} \sum _{j=M+(i - 1)\cdot \lfloor \frac{D-M+1}{M}\rfloor }^{M+i\cdot \lfloor \frac{D-M+1}{M}\rfloor -1}\left( \left( \frac{x_{j}}{2}+\frac{1}{4}\right) -0.5\right) ^{2} \quad \text {for}~i = 1, \ldots , M{-}1 \nonumber \\ g_{M}(\mathbf {x}_{M})= & {} \sum _{j=M+(M - 1)\cdot \lfloor \frac{D-M+1}{M}\rfloor }^{D}\left( \left( \frac{x_{j}}{2}+\frac{1}{4}\right) -0.5\right) ^{2}\nonumber \\ \theta _{i}= & {} \frac{\pi }{2}\cdot \left( \frac{x_{i}}{2}+\frac{1}{4}\right) \quad \text { for }~i=1, \ldots , M-1 \end{aligned}$$
(4)
where the number of decision variables is \(D = M + K - 1\), and K denotes the size of \(\mathbf {x}_M\), namely \(K = |\mathbf {x}_M|\), with \(\mathbf {x}_M = (x_M,\ldots ,x_D)\). This test problem is modified from DTLZ2 to increase the difficulty of convergence. In the original DTLZ2, convergence is very likely achieved once \(g(\mathbf {x}_M) = 0\) is satisfied; by contrast, for this modified version, all the objectives have to be optimized simultaneously to reach the true PF. Therefore, this test problem is used to assess whether an MOEA is able to achieve concurrent convergence on different objectives. Parameter settings are: \(\mathbf {x} \in [0, 1]^D\) and \(K = 10\).
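The chunked distance functions \(g_i\) can be sketched as follows (an illustrative, hypothetical implementation with my own naming; each \(g_i\) covers a distinct chunk of \(\mathbf {x}_M\), which is why all objectives must be optimized together):

```python
import numpy as np

def maf2(x, m):
    """Illustrative sketch of MaF2 (DTLZ2BZ), Eqs. (3)-(4)."""
    x = np.asarray(x, dtype=float)
    d = len(x)
    c = (d - m + 1) // m                          # chunk size floor((D-M+1)/M)
    theta = 0.5 * np.pi * (x[: m - 1] / 2 + 0.25)

    def g(i):                                     # g_i over the i-th chunk of x_M
        lo = (m - 1) + (i - 1) * c                # 0-indexed start of the chunk
        hi = d if i == m else lo + c              # last chunk runs to x_D
        return np.sum((x[lo:hi] / 2 + 0.25 - 0.5) ** 2)

    f = np.empty(m)
    for i in range(1, m + 1):
        if i == 1:
            shape = np.prod(np.cos(theta))
        else:
            shape = np.prod(np.cos(theta[: m - i])) * np.sin(theta[m - i])
        f[i - 1] = shape * (1.0 + g(i))
    return f
```

At x = (0.5, ..., 0.5) every \(g_i\) vanishes and the objective vector satisfies \(\sum_i f_i^2 = 1\), i.e., it lies on the concave PF.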
Fig. 4

The Pareto front of MaF4 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

MaF3 (convex DTLZ3 [5])

$$\begin{aligned} \min \!\left\{ \begin{array}{l} {f_1}(\mathbf {x}) {=} \left[ \cos (\frac{\pi }{2}{x_1})\ldots \cos (\frac{\pi }{2}{x_{M - 2}})\cos (\frac{\pi }{2}{x_{M - 1}}) (1 {+} g(\mathbf {x}_M))\right] ^4 \\ {f_2}(\mathbf {x}) {=} \left[ \cos (\frac{\pi }{2}{x_1})\ldots \cos (\frac{\pi }{2}{x_{M - 2}})\sin (\frac{\pi }{2}{x_{M {-} 1}}) (1 {+} g(\mathbf {x}_M))\right] ^4 \\ \quad \quad \quad \ldots \\ {f_{M {-} 1}}(\mathbf {x}) {=} \left[ \cos (\frac{\pi }{2}{x_1})\sin (\frac{\pi }{2}{x_2}) (1 {+} g(\mathbf {x}_M))\right] ^4 \\ {f_M}(\mathbf {x}) {=} \left[ \sin (\frac{\pi }{2}{x_1}) (1 {+} g(\mathbf {x}_M))\right] ^2\\ \end{array} \right. \end{aligned}$$
(5)
with
$$\begin{aligned} g(\mathbf {x}_{M})=100\left[ |\mathbf {x}_{M}|+\sum \limits _{i = M}^{|\mathbf {x}|}\left( (x_{i}-0.5)^{2}-\cos (20\pi (x_{i}-0.5))\right) \right] \nonumber \\ \end{aligned}$$
(6)
where the number of decision variables is \(D = M + K - 1\), and K denotes the size of \(\mathbf {x}_M\), namely \(K = |\mathbf {x}_M|\), with \(\mathbf {x}_M = (x_M,\ldots ,x_D)\). As shown in Fig. 3, this test problem has a convex PF, and there are a large number of local fronts. This test problem is mainly used to assess whether EMaO algorithms are capable of dealing with convex PFs. Parameter settings of this test problem are: \(\mathbf {x} \in [0, 1]^D\), \(K = 10\) (Fig. 4).
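A sketch of this definition (illustrative only; naming and indexing conventions are mine) shows how the fourth power convexifies the DTLZ3 shape:

```python
import numpy as np

def maf3(x, m):
    """Illustrative sketch of MaF3 (convex DTLZ3), Eqs. (5)-(6)."""
    x = np.asarray(x, dtype=float)
    xm = x[m - 1:]                                # distance variables x_M, ..., x_D
    g = 100.0 * (len(xm)
                 + np.sum((xm - 0.5) ** 2 - np.cos(20 * np.pi * (xm - 0.5))))
    t = 0.5 * np.pi * x[: m - 1]
    f = np.empty(m)
    for i in range(1, m + 1):
        if i == 1:
            shape = np.prod(np.cos(t))
        else:
            shape = np.prod(np.cos(t[: m - i])) * np.sin(t[m - i])
        f[i - 1] = (shape * (1.0 + g)) ** (2 if i == m else 4)
    return f
```

On the PF (g = 0) the objectives satisfy \(\sqrt{f_1} + \cdots + \sqrt{f_{M-1}} + f_M = 1\), which makes the convex geometry easy to check numerically.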
Fig. 5

The Pareto front of MaF5 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

MaF4 (inverted badly scaled DTLZ3)

$$\begin{aligned} \min \left\{ \begin{array}{l} {f_1}(\mathbf {x}) = a \times \left( 1 {-} \cos \left( \frac{\pi }{2}{x_1}\right) \ldots \cos \left( \frac{\pi }{2}{x_{M - 2}}\right) \cos \left( \frac{\pi }{2}{x_{M - 1}}\right) \right) (1 {+} g(\mathbf {x}_M)) \\ {f_2}(\mathbf {x}) = a^2 \times \left( 1 \!- \cos \left( \frac{\pi }{2}{x_1}\right) \ldots \cos \left( \frac{\pi }{2}{x_{M - 2}}\right) \sin \left( \frac{\pi }{2}{x_{M - 1}}\right) \right) (1 {+} g(\mathbf {x}_M)) \\ \quad \quad \quad \ldots \\ {f_{M - 1}}(\mathbf {x}) = a^{M - 1} \times \left( 1 - \cos \left( \frac{\pi }{2}{x_1}\right) \sin \left( \frac{\pi }{2}{x_2}\right) \right) (1 + g(\mathbf {x}_M)) \\ {f_M}(\mathbf {x}) = a^{M} \times \left( 1 - \sin \left( \frac{\pi }{2}{x_1}\right) \right) \times (1 + g(\mathbf {x}_M))\\ \end{array} \right. \end{aligned}$$
(7)
with
$$\begin{aligned} g(\mathbf {x}_{M})=100\left[ |\mathbf {x}_{M}|+\sum \limits _{i = M}^{|\mathbf {x}|}\left( (x_{i}-0.5)^{2}-\cos (20\pi (x_{i}-0.5))\right) \right] \nonumber \\ \end{aligned}$$
(8)
where the number of decision variables is \(D = M + K - 1\), and K denotes the size of \(\mathbf {x}_M\), namely \(K = |\mathbf {x}_M|\), with \(\mathbf {x}_M = (x_M,\ldots ,x_D)\). The fitness landscape of this test problem is highly multimodal, containing \(3^K - 1\) local Pareto-optimal fronts. This test problem is used to assess whether EMaO algorithms are capable of dealing with badly scaled PFs, especially when the fitness landscape is highly multimodal. Parameter settings of this test problem are: \(\mathbf {x} \in [0, 1]^D\), \(K = 10\) and \(a = 2\).
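The geometric scaling by \(a^i\) can be seen directly in a sketch (illustrative, with my own conventions, not the official implementation):

```python
import numpy as np

def maf4(x, m, a=2.0):
    """Illustrative sketch of MaF4 (inverted badly scaled DTLZ3), Eqs. (7)-(8)."""
    x = np.asarray(x, dtype=float)
    xm = x[m - 1:]
    g = 100.0 * (len(xm)
                 + np.sum((xm - 0.5) ** 2 - np.cos(20 * np.pi * (xm - 0.5))))
    t = 0.5 * np.pi * x[: m - 1]
    f = np.empty(m)
    for i in range(1, m + 1):
        if i == 1:
            shape = np.prod(np.cos(t))
        else:
            shape = np.prod(np.cos(t[: m - i])) * np.sin(t[m - i])
        f[i - 1] = a ** i * (1.0 - shape) * (1.0 + g)   # objective i scaled by a^i
    return f
```

Because the i-th objective is scaled by \(a^i\), the objective ranges grow geometrically; on the PF, \(\sum_i (1 - f_i/a^i)^2 = 1\).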

MaF5 (convex badly scaled DTLZ4)

$$\begin{aligned} \min \left\{ \begin{array}{ll} {f_1}(\mathbf {x}) = a^{M} {\times } \left[ \cos \left( \frac{\pi }{2}{x_1}^\alpha \right) \ldots \cos \left( \frac{\pi }{2}{x_{M - 2}^\alpha }\right) \cos \left( \frac{\pi }{2}{x_{M - 1}^\alpha }\right) (1{+} g(\mathbf {x}_M))\right] ^4 \\ {f_2}(\mathbf {x}) = a^{M-1} {\times } \left[ \cos \left( \frac{\pi }{2}{x_1}^\alpha \right) \ldots \cos \left( \frac{\pi }{2}{x_{M - 2}^\alpha }\right) \sin \left( \frac{\pi }{2}{x_{M - 1}^\alpha }\right) (1 {+} g(\mathbf {x}_M))\right] ^4 \\ \quad \quad \quad \ldots \\ {f_{M - 1}}(\mathbf {x}) = a^{2} \times \left[ \cos (\frac{\pi }{2}{x_1^\alpha })\sin (\frac{\pi }{2}{x_2^\alpha }) (1 + g(\mathbf {x}_M))\right] ^4 \\ {f_M}(\mathbf {x}) = a \times \left[ \sin (\frac{\pi }{2}{x_1}^\alpha ) (1 + g(\mathbf {x}_M))\right] ^4\\ \end{array} \right. \nonumber \\ \end{aligned}$$
(9)
with
$$\begin{aligned} g(\mathbf {x}_M) = \sum \limits _{i = M}^{|\mathbf {x}|} (x_i - 0.5)^2 \end{aligned}$$
(10)
where the number of decision variables is \(D = M + K - 1\), and K denotes the size of \(\mathbf {x}_M\), namely \(K = |\mathbf {x}_M|\), with \(\mathbf {x}_M = (x_M,\ldots ,x_D)\). As shown in Fig. 5, this test problem has a badly scaled PF, where each objective function is scaled to a substantially different range. Besides, the PS of this test problem has a highly biased distribution, where the majority of Pareto optimal solutions are crowded in a small subregion. This test problem is used to assess whether EMaO algorithms are capable of dealing with badly scaled PFs/PSs. Parameter settings of this test problem are: \(\mathbf {x} \in [0, 1]^D\), \(\alpha = 100\) and \(a = 2\).
Fig. 6

The Pareto front of MaF6 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Fig. 7

The Pareto front of MaF7 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

MaF6 (DTLZ5(I,M) [24])

$$\begin{aligned} \min \left\{ \begin{array}{ll} {f_1}(\mathbf {x}) = \cos (\theta _1)\ldots \cos (\theta _{M - 2})\cos (\theta _{M - 1}) (1 + 100g(\mathbf {x}_M)) \\ {f_2}(\mathbf {x}) = \cos (\theta _1)\ldots \cos (\theta _{M - 2})\sin (\theta _{M - 1}) (1 + 100g(\mathbf {x}_M)) \\ \quad \quad \quad \ldots \\ {f_{M - 1}}(\mathbf {x}) = \cos (\theta _1)\sin (\theta _2) (1 + 100g(\mathbf {x}_M)) \\ {f_M}(\mathbf {x}) = \sin (\theta _1) (1 + 100g(\mathbf {x}_M))\\ \end{array} \right. \nonumber \\ \end{aligned}$$
(11)
with
$$\begin{aligned}&{\theta _i} = \left\{ \begin{array}{l} \frac{\pi }{2}{x_i}\quad {\text { for }}~i = 1,2,\ldots ,I - 1\\ \frac{\pi }{{4(1 + g({\mathbf{{x}}_M}))}}(1 + 2g({\mathbf{{x}}_M}){x_i})\quad {\text { for }}~ i = I,\ldots ,M {-} 1 \end{array} \right. \nonumber \\\end{aligned}$$
(12)
$$\begin{aligned}&g(\mathbf {x}_M) = \sum \limits _{i = M}^{|\mathbf {x}|} (x_i - 0.5)^2 \end{aligned}$$
(13)
where the number of decision variables is \(D = M + K - 1\), and K denotes the size of \(\mathbf {x}_M\), namely \(K = |\mathbf {x}_M|\), with \(\mathbf {x}_M = (x_M,\ldots ,x_D)\). As shown in Fig. 6, this test problem has a degenerate PF whose dimensionality is defined using parameter I. In other words, the PF of this test problem is always an I-dimensional manifold regardless of the specific number of decision variables. This test problem is used to assess whether EMaO algorithms are capable of dealing with degenerate PFs. Parameter settings are: \(\mathbf {x} \in [0, 1]^D\), \(I = 2\) and \(K = 10\).
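The degeneration mechanism can be sketched as follows, assuming the standard DTLZ5(I, M) angle transformation \(\theta_i = \frac{\pi}{4(1+g)}(1+2g x_i)\) for \(i \ge I\) (an illustrative implementation with my own conventions):

```python
import numpy as np

def maf6(x, m, I=2):
    """Illustrative sketch of MaF6 (DTLZ5(I,M)), Eqs. (11)-(13)."""
    x = np.asarray(x, dtype=float)
    g = np.sum((x[m - 1:] - 0.5) ** 2)
    theta = np.empty(m - 1)
    theta[: I - 1] = 0.5 * np.pi * x[: I - 1]
    # angles collapse toward pi/4 as g -> 0, degenerating the PF to I dimensions
    theta[I - 1:] = np.pi / (4.0 * (1.0 + g)) * (1.0 + 2.0 * g * x[I - 1: m - 1])
    f = np.empty(m)
    for i in range(1, m + 1):
        if i == 1:
            shape = np.prod(np.cos(theta))
        else:
            shape = np.prod(np.cos(theta[: m - i])) * np.sin(theta[m - i])
        f[i - 1] = shape * (1.0 + 100.0 * g)
    return f
```

With g = 0 and I = 2, all the collapsed angles equal π/4, so (for M = 3) the first two objectives coincide on the PF, which is the degeneracy visible in Fig. 6.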
Fig. 8

The Pareto front of MaF8 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

MaF7 (DTLZ7 [9])

$$\begin{aligned} \min \left\{ \begin{array}{ll} {f_1}(\mathbf {x}) = {x_1} \\ {f_2}(\mathbf {x}) = {x_2} \\ \quad \quad \quad \ldots \\ {f_{M - 1}}(\mathbf {x}) = {x_{M - 1}} \\ f_{M}(\mathbf {x}) = h(f_{1},\ f_{2},\ \ldots ,\ f_{M-1},\ g)\times (1+g(\mathbf {x}_{M})) \\ \end{array} \right. \nonumber \\ \end{aligned}$$
(14)
with
$$\begin{aligned} \left\{ \! \begin{array}{ll} g(\mathbf {x}_{M}) = 1+\frac{9}{|\mathbf {x}_{M}|}\sum _{i = M}^{|\mathbf {x}|} x_{i} \\ h(f_{1}, f_{2}, \ldots , f_{M-1}, g) = M-\sum _{i=1}^{M-1}\left[ \frac{f_{i}}{1+g}(1{+}\sin (3\pi f_{i}))\right] \end{array} \right. \nonumber \\ \end{aligned}$$
(15)
where the number of decision variables is \(D = M + K - 1\), and K denotes the size of \(\mathbf {x}_M\), namely \(K = |\mathbf {x}_M|\), with \(\mathbf {x}_M = (x_M,\ldots ,x_D)\). As shown in Fig. 7, this test problem has a disconnected PF where the number of disconnected segments is \(2^{M - 1}\). This test problem is used to assess whether EMaO algorithms are capable of dealing with disconnected PFs, especially when the number of disconnected segments is large in a high-dimensional objective space. Parameter settings are: \(\mathbf {x} \in [0, 1]^D\) and \(K = 20\).
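The definition is compact enough to sketch directly (illustrative code with my own conventions; the sine term in h is what splits the PF into \(2^{M-1}\) segments):

```python
import numpy as np

def maf7(x, m):
    """Illustrative sketch of MaF7 (DTLZ7), Eqs. (14)-(15)."""
    x = np.asarray(x, dtype=float)
    f = np.empty(m)
    f[: m - 1] = x[: m - 1]                       # first M-1 objectives copy x_i
    g = 1.0 + 9.0 * np.mean(x[m - 1:])            # 1 + 9/|x_M| * sum over x_M
    h = m - np.sum(f[: m - 1] / (1.0 + g)
                   * (1.0 + np.sin(3.0 * np.pi * f[: m - 1])))
    f[m - 1] = h * (1.0 + g)                      # sin term creates the segments
    return f
```

For example, with M = 3, \(\mathbf{x}_M = \mathbf{0}\) (so g = 1) and \(x_1 = 1/3\), \(x_2 = 2/3\) (where the sine terms vanish), the last objective evaluates to 5.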

MaF8 (multi-point distance minimization problem [11, 12])

This function considers a two-dimensional decision space. As its name suggests, for any point \(\mathbf {x} = (x_1,x_2)\), MaF8 calculates the Euclidean distance from \(\mathbf {x}\) to each of a set of M target points \((A_1,A_2,\ldots ,A_M)\), the vertexes of a given polygon. The goal of the problem is to minimize these M distance values simultaneously. It can be formulated as
$$\begin{aligned} \min \left\{ \begin{array}{ll} f_1(\mathbf {x}) = d(\mathbf {x},A_1)\\ f_2(\mathbf {x}) = d(\mathbf {x},A_2)\\ \quad \quad \quad \ldots \\ f_{M}(\mathbf {x}) = d(\mathbf {x},A_M)\\ \end{array} \right. \end{aligned}$$
(16)
where \(d(\mathbf {x},A_i)\) denotes the Euclidean distance from point \(\mathbf {x}\) to point \(A_i\).
One important characteristic of MaF8 is that its Pareto optimal region in the decision space is typically a 2D manifold (regardless of the dimensionality of its objective vectors). This naturally allows a direct observation of the search behavior of EMaO algorithms, e.g., the convergence of their population to the Pareto optimal solutions and the coverage of the population over the optimal region.
Fig. 9

The Pareto front of MaF9 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

In this test suite, a regular polygon is used (for consistency with MaF9). The center coordinates of the regular polygon (i.e., the Pareto optimal region) are (0, 0), and the radius of the polygon (i.e., the distance of the vertexes to the center) is 1.0. Parameter settings are: \(\mathbf {x}\in [-10{,}000, 10{,}000]^2\). Figure 8 shows the Pareto optimal regions of the three-objective and ten-objective MaF8.
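A sketch of the evaluation is only a few lines (illustrative; the vertex orientation chosen below is an assumption, since the text fixes only the center and radius):

```python
import numpy as np

def maf8(x, m):
    """Illustrative sketch of MaF8, Eq. (16): distances from a 2-D point to the
    M vertexes of a regular polygon (center (0, 0), radius 1).  The vertex
    orientation is an assumption made here for illustration."""
    ang = 2.0 * np.pi * np.arange(m) / m + 0.5 * np.pi   # assumed vertex angles
    verts = np.stack([np.cos(ang), np.sin(ang)], axis=1)
    return np.linalg.norm(verts - np.asarray(x, dtype=float), axis=1)
```

At the polygon center all M objectives equal the radius, i.e., maf8([0, 0], m) returns a vector of ones for any m.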

MaF9 (multi-line distance minimization problem [25])

This function considers a two-dimensional decision space. For any point \(\mathbf {x} = (x_1,x_2)\), MaF9 calculates the Euclidean distance from \(\mathbf {x}\) to a set of M target straight lines, each of which passes through an edge of the given regular polygon with M vertexes \((A_1,A_2,\ldots ,A_M)\), where \(M \ge 3\). The goal of MaF9 is to optimize these M distance values simultaneously. It can be formulated as
$$\begin{aligned} \min \left\{ \begin{array}{ll} f_1(\mathbf {x}) = d(\mathbf {x},\overleftrightarrow {A_1A_2})\\ f_2(\mathbf {x}) = d(\mathbf {x},\overleftrightarrow {A_2A_3})\\ \quad \quad \quad \ldots \\ f_{M}(\mathbf {x}) = d(\mathbf {x},\overleftrightarrow {A_MA_1})\\ \end{array} \right. \end{aligned}$$
(17)
where \(\overleftrightarrow {A_iA_j}\) is the target line passing through vertexes \(A_i\) and \(A_j\) of the regular polygon, and \(d(\mathbf {x},\overleftrightarrow {A_iA_j})\) denotes the Euclidean distance from point \(\mathbf {x}\) to line \(\overleftrightarrow {A_iA_j}\).

One key characteristic of MaF9 is that the points in the regular polygon (including the boundaries) and their objective images are similar in the sense of Euclidean geometry [25]. In other words, the ratio of the distance between any two points in the polygon to the distance between their corresponding objective vectors is a constant. This allows a straightforward understanding of the distribution of the objective vector set (e.g., its uniformity and coverage over the Pareto front) via observing the solution set in the two-dimensional decision space. In addition, for MaF9 with an even number of objectives (\(M = 2k\) where \(k \ge 2\)), there exist k pairs of parallel target lines. Any point (outside the regular polygon) residing between a pair of parallel target lines is dominated only by a line segment parallel to these two lines. This property can pose a great challenge, in terms of convergence, for EMaO algorithms which use Pareto dominance as the sole selection criterion, typically leaving their populations trapped between these parallel lines [14].

For MaF9, all points inside the polygon are Pareto optimal solutions. However, these points may not be the sole Pareto optimal solutions of the problem: if two target lines intersect outside the regular polygon, there exist areas whose points are nondominated with respect to the interior points of the polygon. Such areas clearly exist in the problem with five or more objectives, in view of the convexity of the considered polygon. However, the geometric similarity holds only for the points inside the regular polygon, and Pareto optimal solutions located outside the polygon would break this similarity property. We therefore set some regions of the search space infeasible. Formally, consider an M-objective MaF9 with a regular polygon of vertexes \((A_1,A_2,\ldots ,A_M)\). For any two target lines \(\overleftrightarrow {A_{i-1}A_i}\) and \(\overleftrightarrow {A_{n}A_{n+1}}\) (without loss of generality, assuming \(i<n\)) that intersect at a point O outside the considered regular polygon, we can construct a polygon (denoted as \(\Phi _{A_{i-1}A_iA_{n}A_{n+1}}\)) bounded by a set of \(2(n-i)+2\) line segments: \(\overline{A_iA_n'}, \overline{A_n'A_{n-1}'}, \ldots , \overline{A_{i+1}'A_i'}, \overline{A_i'A_n}, \overline{A_nA_{n-1}}, \ldots , \overline{A_{i+1}A_i}\), where points \(A_i', A_{i+1}',\ldots , A_{n-1}', A_n'\) are the symmetric points of \(A_i, A_{i+1},\ldots , A_{n-1}, A_n\) with respect to the central point O. We constrain the search space of the problem to lie outside such polygons (excluding their boundaries). Now the points inside the regular polygon are the sole Pareto optimal solutions of the problem. In the implementation of the test problem, newly produced individuals located in the constrained areas are simply regenerated within the given search space until they are feasible.

In this test suite, the center coordinates of the regular polygon (i.e., Pareto optimal region) are (0, 0) and the radius of the polygon (i.e., the distance of the vertexes to the center) is 1.0. Parameter settings are: \(\mathbf {x}\in [-10{,}000, 10{,}000]^2\). Figure 9 shows the Pareto optimal regions of the three-objective and ten-objective MaF9.
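The unconstrained objective computation can be sketched as follows (illustrative only: the vertex orientation is an assumption, and the infeasible-region constraint described above is omitted for brevity):

```python
import numpy as np

def maf9(x, m):
    """Illustrative sketch of MaF9, Eq. (17): distances from a 2-D point to the
    M lines through consecutive edges of a regular polygon (center (0, 0),
    radius 1)."""
    x = np.asarray(x, dtype=float)
    ang = 2.0 * np.pi * np.arange(m) / m + 0.5 * np.pi   # assumed vertex angles
    v = np.stack([np.cos(ang), np.sin(ang)], axis=1)
    f = np.empty(m)
    for i in range(m):
        a, b = v[i], v[(i + 1) % m]              # edge A_i A_{i+1} (wrapping)
        ab = b - a
        # perpendicular distance from x to the infinite line through a and b
        f[i] = abs(ab[0] * (x[1] - a[1]) - ab[1] * (x[0] - a[0])) / np.linalg.norm(ab)
    return f
```

At the polygon center, every objective equals the apothem \(\cos(\pi/M)\), e.g., 0.5 for M = 3, which gives a quick sanity check of the geometric-similarity setup.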
Fig. 10

The Pareto front of MaF10 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

MaF10 (WFG1 [10])

$$\begin{aligned} \min \left\{ \begin{array}{ll} f_1(\mathbf {x}) = y_M{+}2\left( 1{-}\cos \left( \frac{\pi }{2}y_1\right) \right) \ldots \left( 1{-}\cos \left( \frac{\pi }{2}y_{M-2}\right) \right) \left( 1{-}\cos \left( \frac{\pi }{2}y_{M-1}\right) \right) \\ f_2(\mathbf {x}) = y_M{+}4\left( 1{-}\cos \left( \frac{\pi }{2}y_1\right) \right) \ldots \left( 1{-}\cos \left( \frac{\pi }{2}y_{M-2}\right) \right) \left( 1{-}\sin \left( \frac{\pi }{2}y_{M-1}\right) \right) \\ \quad \quad \quad \ldots \\ f_{M-1}(\mathbf {x}) = y_M+2(M-1)\left( 1-\cos \left( \frac{\pi }{2}y_1\right) \right) \left( 1-\sin \left( \frac{\pi }{2}y_2\right) \right) \\ f_M(\mathbf {x}) = y_M+2M\left( 1-y_1-\frac{\cos (10\pi y_1+\pi /2)}{10\pi }\right) \\ \end{array} \right. \nonumber \\ \end{aligned}$$
(18)
with
$$\begin{aligned} z_i= & {} \dfrac{x_i}{2i} \quad \text { for } i=1,\ldots ,D\end{aligned}$$
(19)
$$\begin{aligned} t_i^1= & {} {\left\{ \begin{array}{ll} z_i, &{}\quad \text {if } i=1,\ldots ,K\\ \dfrac{|z_i-0.35|}{|\lfloor 0.35-z_i\rfloor |+0.35}, &{}\quad \text {if }~i=K+1,\ldots ,D\\ \end{array}\right. } \end{aligned}$$
(20)
$$\begin{aligned} t_i^2 \!= \left\{ \!\begin{array}{lll} t_i^1, &{}\quad \text {if }~ i=1,\ldots ,K\\ 0.8+\frac{0.8(0.75-t_i^1)\min (0,\lfloor t_i^1-0.75\rfloor )}{0.75}\\ \qquad -\frac{(1-0.8)(t_i^1-0.85)\min (0,\lfloor 0.85-t_i^1\rfloor )}{1-0.85}, &{}\quad \text {if }~ i=K+1,\ldots ,D\\ \end{array}\right. \nonumber \\ \end{aligned}$$
(21)
$$\begin{aligned} t_i^3 = {t_i^2}^{0.02} \quad \text { for }~ i=1,\ldots ,D \end{aligned}$$
(22)
$$\begin{aligned} t_i^4 = {\left\{ \begin{array}{ll} \dfrac{\sum _{j=(i-1)K/(M-1)+1}^{iK/(M-1)}2jt_j^3}{\sum _{j=(i-1)K/(M-1)+1}^{iK/(M-1)}2j}, &{}\quad \text {if } i=1,\ldots ,M-1\\ \dfrac{\sum _{j=K+1}^{D}2jt_j^3}{\sum _{j=K+1}^{D}2j}, &{}\quad \text {if } i=M\\ \end{array}\right. } \end{aligned}$$
(23)
$$\begin{aligned} y_i = {\left\{ \begin{array}{ll} (t_i^4-0.5)\max (1,t_M^4)+0.5, &{}\quad \text {if } i=1,\ldots ,M{-}1\\ t_M^4, &{}\quad \text {if } i=M\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(24)
where the number of decision variables is \(D=K+L\), with K denoting the number of position variables and L denoting the number of distance variables. As shown in Fig. 10, this test problem has a scaled PF containing both convex and concave segments. Besides, a series of transformation functions correlate the decision variables with the objective functions. This test problem is used to assess whether EMaO algorithms are capable of dealing with PFs of complicated mixed geometries. Parameter settings are: \(\mathbf {x}\in \prod _{i=1}^{D}[0,2i]\), \(K=M-1\), and \(L=10\).
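The flat-region transformation of Eq. (21) and the polynomial bias of Eq. (22) are what make MaF10 biased; they can be sketched in isolation as follows (the function name is mine, and the constants are those printed in Eqs. 21 and 22):

```python
import numpy as np

def flat_and_bias(t1):
    """Illustrative sketch of the flat-region (Eq. 21) and polynomial-bias
    (Eq. 22) transformations applied to the distance variables of MaF10/WFG1."""
    A, B, C = 0.8, 0.75, 0.85                    # flat-region parameters
    t2 = (A
          + A * (B - t1) * np.minimum(0, np.floor(t1 - B)) / B
          - (1 - A) * (t1 - C) * np.minimum(0, np.floor(C - t1)) / (1 - C))
    t2 = np.clip(t2, 0.0, 1.0)                   # guard against float round-off
    t3 = t2 ** 0.02                              # strong bias toward 1
    return t2, t3
```

Any input in [0.75, 0.85] maps to exactly 0.8 (the flat region), and the 0.02 exponent then pushes all nonzero values close to 1; this combination of a flat landscape and strong bias is part of what EMaO algorithms must cope with on MaF10.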
Fig. 11

The Pareto front of MaF11 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

MaF11 (WFG2 [10])

$$\begin{aligned} \min \left\{ \begin{array}{lll} f_1(\mathbf {x}) {=} y_M{+}2\left( 1{-}\cos \left( \frac{\pi }{2}y_1\right) \right) \ldots \left( 1{-}\cos \left( \frac{\pi }{2}y_{M-2}\right) \right) \left( 1{-}\cos \left( \frac{\pi }{2}y_{M-1}\right) \right) \\ f_2(\mathbf {x}) =\! y_M{+}4\left( 1{-}\cos \left( \frac{\pi }{2}y_1\right) \right) \ldots \left( 1{-}\cos \left( \frac{\pi }{2}y_{M-2}\right) \right) \left( 1{-}\sin \left( \frac{\pi }{2}y_{M-1}\right) \right) \\ \quad \quad \quad \ldots \\ f_{M-1}(\mathbf {x}) = y_M+2(M-1)\left( 1-\cos \left( \frac{\pi }{2}y_1\right) \right) \left( 1-\sin \left( \frac{\pi }{2}y_2\right) \right) \\ f_M(\mathbf {x}) = y_M+2M(1-y_1\cos ^2(5\pi y_1))\\ \end{array} \right. \nonumber \\ \end{aligned}$$
(25)
with
$$\begin{aligned} z_i=\frac{x_i}{2i} \quad \text { for }~i=1,\ldots ,D \end{aligned}$$
(26)
$$\begin{aligned} t_i^1 = {\left\{ \begin{array}{ll} z_i, &{}\quad \text {if }~i=1,\ldots ,K\\ \frac{|z_i-0.35|}{|\lfloor 0.35-z_i\rfloor |+0.35}, &{}\quad \text {if }~ i=K+1,\ldots ,D\\ \end{array}\right. } \end{aligned}$$
(27)
$$\begin{aligned} t_i^2 = \left\{ \begin{array}{lll} t_i^1, &{}\quad \text {if }~i=1,\ldots ,K\\ t_{K+2(i-K)-1}^1+t_{K+2(i-K)}^1\\ \quad +2|t_{K+2(i-K)-1}^1-t_{K+2(i-K)}^1|, &{}\quad \text {if }~i=K{+}1,\ldots ,(D{+}K)/2\\ \end{array}\right. \nonumber \\ \end{aligned}$$
(28)
$$\begin{aligned} t_i^3 = {\left\{ \begin{array}{ll} \frac{\sum _{j=(i-1)K/(M-1)+1}^{iK/(M-1)}t_j^2}{K/(M-1)}, &{}\quad \text {if } i=1,\ldots ,M-1\\ \frac{\sum _{j=K+1}^{(D+K)/2}t_j^2}{(D-K)/2}, &{}\quad \text {if } i=M\\ \end{array}\right. } \end{aligned}$$
(29)
$$\begin{aligned} y_i = {\left\{ \begin{array}{ll} (t_i^3-0.5)\max (1,t_M^3)+0.5, &{}\quad \text {if }~i{=}1,\ldots ,M{-}1\\ t_M^3, &{}\quad \text {if }~i=M\\ \end{array}\right. } \end{aligned}$$
(30)
where the number of decision variables is \(D=K+L\), with K denoting the number of position variables and L denoting the number of distance variables. As shown in Fig. 11, this test problem has a scaled disconnected PF. This test problem is used to assess whether EMaO algorithms are capable of dealing with scaled disconnected PFs. Parameter settings are: \(\mathbf {x}\in \prod _{i=1}^{D}[0,2i]\), \(K=M-1\), and \(L=10\).

MaF12 (WFG9 [10])

$$\begin{aligned} \min \left\{ \begin{array}{lll} f_1(\mathbf {x})= y_M+2\sin \left( \frac{\pi }{2}y_1\right) \ldots \sin \left( \frac{\pi }{2}y_{M-2}\right) \sin \left( \frac{\pi }{2}y_{M-1}\right) \\ f_2(\mathbf {x})= y_M+4\sin \left( \frac{\pi }{2}y_1\right) \ldots \sin \left( \frac{\pi }{2}y_{M-2}\right) \cos \left( \frac{\pi }{2}y_{M-1}\right) \\ \quad \quad \quad \ldots \\ f_{M-1}(\mathbf {x})= y_M+2(M-1)\sin \left( \frac{\pi }{2}y_1\right) \cos \left( \frac{\pi }{2}y_2\right) \\ f_M(\mathbf {x}) = y_M+2M\cos \left( \frac{\pi }{2}y_1\right) \\ \end{array} \right. \nonumber \\ \end{aligned}$$
(31)
with
$$\begin{aligned} z_i=\frac{x_i}{2i} \quad \text { for } i=1,\ldots ,D \end{aligned}$$
(32)
$$\begin{aligned} t_i^1 = \left\{ \begin{array}{ll} z_i^{0.02+(50-0.02)\left( 0.98/49.98-\left( 1-2\frac{\sum _{j=i+1}^{D}z_j}{D-i}\right) |\lfloor 0.5-\frac{\sum _{j=i+1}^{D}z_j}{D-i}\rfloor +0.98/49.98|\right) } , &{}\quad \text {if }~i=1,\ldots ,D-1\\ z_i, &{}\quad \text {if }~i=D\\ \end{array}\right. \end{aligned}$$
(33)
$$\begin{aligned} t_i^2 = {\left\{ \begin{array}{ll} 1+\left( |t_i^1-0.35|-0.001\right) \left( \frac{349.95\lfloor t_i^1-0.349\rfloor }{0.349}+\frac{649.95\lfloor 0.351-t_i^1\rfloor }{0.649}+1000\right) , &{}\quad \text {if }~i=1,\ldots ,K\\ \frac{1}{97}\left( 1+\cos [122\pi (0.5-\frac{|t_i^1-0.35|}{2(\lfloor 0.35-t_i^1\rfloor +0.35)})]+380\left( \frac{|t_i^1-0.35|}{2(\lfloor 0.35-t_i^1\rfloor +0.35)}\right) ^2\right) , &{}\quad \text {if }~i=K+1,\ldots ,D\\ \end{array}\right. } \end{aligned}$$
(34)
$$\begin{aligned} t_i^3 = {\left\{ \begin{array}{ll} \frac{\sum _{j=(i-1)K/(M-1)+1}^{iK/(M-1)}\left( t_j^2+\sum _{k=0}^{K/(M-1)-2}|t_j^2-t_{p}^2|\right) }{\lceil K/(M-1)/2\rceil (1+2K/(M-1)-2\lceil K/(M-1)/2\rceil )}, &{}\quad \text {if }~i=1,\ldots ,M-1\\ \frac{\sum _{j=K+1}^{D}\left( t_j^2+\sum _{k=0}^{D-K-2}|t_j^2-t_q^2|\right) }{\lceil (D-K)/2\rceil (1+2(D-K)-2\lceil (D-K)/2\rceil )}, &{}\quad \text {if }~i=M\\ \end{array}\right. } \end{aligned}$$
(35)
$$\begin{aligned} y_i = {\left\{ \begin{array}{ll} (t_i^3-0.5)\max (1,t_M^3){+}0.5, &{}\quad \text {if }~i=1,\ldots ,M-1\\ t_M^3, &{}\quad \text {if }~i=M\\ \end{array}\right. } \end{aligned}$$
(36)
Fig. 12

The Pareto front of MaF12 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

$$\begin{aligned} {\left\{ \begin{array}{ll} p = (i-1)K/(M-1)+1+(j-(i-1)K/(M-1)+k)\,\text {mod}\,(K/(M-1))\\ q = K+1+(j-K+k)\,\text {mod}\,(D-K) \\ \end{array}\right. } \end{aligned}$$
(37)
where the number of decision variables is \(D=K+L\), with K denoting the number of position variables and L the number of distance variables. As shown in Fig. 12, this test problem has a scaled concave PF. Although the PF of this test problem is simple, its decision variables are nonseparably reduced, and its fitness landscape is highly multimodal. This test problem is used to assess whether EMaO algorithms are capable of dealing with scaled concave PFs together with complicated fitness landscapes. Parameter settings are: \(\mathbf {x}\in \prod _{i=1}^{D}[0,2i]\), \(K=M-1\), and \(L=10\).

MaF13 (PF7 [13])

Fig. 13

The Pareto front of MaF13 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

$$\begin{aligned} \min \left\{ \begin{array}{l} f_1(\mathbf {x}) = \sin \left( \frac{\pi }{2}x_1\right) +\frac{2}{|J_1|}\sum \limits _{j\in J_1}y_j^2\\ f_2(\mathbf {x}) = \cos (\frac{\pi }{2}x_1)\sin \left( \frac{\pi }{2}x_2\right) +\frac{2}{|J_2|}\sum \limits _{j\in J_2}y_j^2\\ f_3(\mathbf {x}) = \cos \left( \frac{\pi }{2}x_1\right) \cos \left( \frac{\pi }{2}x_2\right) +\frac{2}{|J_3|}\sum \limits _{j\in J_3}y_j^2\\ f_{4,\ldots ,M}(\mathbf {x}) {=} f_1(\mathbf {x})^2+f_2(\mathbf {x})^{10}{+}f_3(\mathbf {x})^{10}{+}\frac{2}{|J_4|}\sum \limits _{j\in J_4}y_j^2\\ \end{array} \right. \end{aligned}$$
(38)
with
$$\begin{aligned} y_i=x_i-2x_2\sin \left( 2\pi x_1+\frac{i\pi }{D}\right) \quad \text { for }~ i=1,\ldots ,D \end{aligned}$$
(39)
$$\begin{aligned} \left\{ \begin{array}{l} J_1 = \{j|3\le j\le D, \text { and } j\text { mod }3=1\}\\ J_2 = \{j|3\le j\le D, \text { and } j\text { mod }3=2\}\\ J_3 = \{j|3\le j\le D, \text { and } j\text { mod }3=0\}\\ J_4 = \{j|4\le j\le D\}\\ \end{array} \right. \end{aligned}$$
(40)
where the number of decision variables is \(D=5\). As shown in Fig. 13, this test problem has a concave PF; in fact, the first three objectives of its PF always lie on the unit sphere \(f_1^2+f_2^2+f_3^2=1\) regardless of the number of objectives, so the PF is degenerate for \(M>3\). Although this test problem has a simple PF, its decision variables are nonlinearly linked with the first two decision variables, which leads to difficulty in convergence. This test problem is used to assess whether EMaO algorithms are capable of dealing with degenerate PFs and complicated variable linkages. Parameter setting is: \(\mathbf {x}\in [0,1]^2\times [-2,2]^{D-2}\).
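As an illustration, the objective computation of Eqs. (38)–(40) can be sketched as follows. PlatEMO itself is MATLAB-based; this standalone Python sketch only mirrors the formulas above, and the function name `maf13` is ours:

```python
import math

def maf13(x, M):
    """Sketch of the MaF13 objectives (Eqs. 38-40); indices are 1-based as in the paper."""
    D = len(x)  # suggested setting: D = 5
    # Variable linkage: y_i = x_i - 2*x_2*sin(2*pi*x_1 + i*pi/D)
    y = [x[i - 1] - 2 * x[1] * math.sin(2 * math.pi * x[0] + i * math.pi / D)
         for i in range(1, D + 1)]
    J1 = [j for j in range(3, D + 1) if j % 3 == 1]
    J2 = [j for j in range(3, D + 1) if j % 3 == 2]
    J3 = [j for j in range(3, D + 1) if j % 3 == 0]
    J4 = list(range(4, D + 1))
    g = lambda J: 2.0 / len(J) * sum(y[j - 1] ** 2 for j in J)
    f1 = math.sin(math.pi / 2 * x[0]) + g(J1)
    f2 = math.cos(math.pi / 2 * x[0]) * math.sin(math.pi / 2 * x[1]) + g(J2)
    f3 = math.cos(math.pi / 2 * x[0]) * math.cos(math.pi / 2 * x[1]) + g(J3)
    # Objectives 4..M share one expression (last line of Eq. 38)
    rest = [f1 ** 2 + f2 ** 10 + f3 ** 10 + g(J4)] * (M - 3)
    return [f1, f2, f3] + rest
```

For example, at \(\mathbf {x}=(0,\ldots ,0)\) every \(y_i\) vanishes, giving \((f_1,f_2,f_3)=(0,0,1)\), which lies on the unit sphere.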

MaF14 (LSMOP3 [16])

Fig. 14

The Pareto front of MaF14 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

Fig. 15

The Pareto front of MaF15 with three and ten objectives shown by Cartesian coordinates and parallel coordinates, respectively

$$\begin{aligned} \min \left\{ \begin{array}{l} {f_1}(\mathbf {x}) = {x^f_1}\ldots {x^f_{M - 1}}\left( 1 + \sum _{j = 1}^{M}c_{1,j} \times {\bar{g}_1}(\mathbf {x}^s_j)\right) \\ {f_2}(\mathbf {x}) = {x^f_1}\ldots (1 - {x^f_{M - 1}})\left( 1 + \sum _{j = 1}^{M}c_{2,j} \times {\bar{g}_2}(\mathbf {x}^s_j)\right) \\ \quad \quad \quad \ldots \\ {f_{M {-} 1}}(\mathbf {x}) = {x^f_1}(1 - {x^f_2})\left( 1 + \sum _{j {=} 1}^{M}c_{M - 1,j} \times {\bar{g}_{M - 1}}(\mathbf {x}^s_j)\right) \\ {f_M}(\mathbf {x}) = (1 {-} {x^f_1})\left( 1 {+} \sum _{j = 1}^{M}c_{M,j} {\times } {\bar{g}_M}(\mathbf {x}^s_j)\right) \\ \mathbf {x} \in [0, 10]^{|\mathbf {x}|} \end{array} \right. \end{aligned}$$
(41)
with
$$\begin{aligned} c_{i,j} = {\left\{ \begin{array}{ll} 1, &{}\quad \text {if}~~i = j \\ 0, &{}\quad \text {otherwise} \end{array}\right. } \end{aligned}$$
(42)
$$\begin{aligned} \left\{ \begin{array}{ll} &{} \bar{g}_{2k - 1}(\mathbf {x}^s_i) = \frac{1}{N_k} \sum _{j = 1}^{N_k} \frac{{\eta _1}(\mathbf {x}^s_{i,j})}{|\mathbf {x}^s_{i,j}|}\\ &{} \bar{g}_{2k}(\mathbf {x}^s_i) = \frac{1}{N_k} \sum _{j = 1}^{N_k} \frac{{\eta _2}(\mathbf {x}^s_{i,j})}{|\mathbf {x}^s_{i,j}|}\\ &{} k = 1,\ldots ,\left\lceil \frac{M}{2} \right\rceil \end{array} \right. \end{aligned}$$
(43)
$$\begin{aligned} \left\{ \begin{array}{ll} &{} \eta _1(\mathbf {x}) = \sum _{i = 1}^{|\mathbf {x}|}(x_i^2 - 10\cos (2\pi x_i) + 10) \\ &{} \eta _2(\mathbf {x}) = \sum _{i = 1}^{|\mathbf {x}| - 1}\left[ 100(x_i^2 - x_{i + 1})^2 + (x_i - 1)^2\right] \end{array} \right. \end{aligned}$$
(44)
$$\begin{aligned} \left\{ \begin{array}{ll} &{}{x}^s_i \leftarrow \left( 1 + \frac{i}{|\mathbf {x}^s|}\right) \times ({x}^s_{i} - l_i) - {x}^f_1\times (u_i - l_i) \\ &{}i = 1,\ldots ,|\mathbf {x}^s|\\ \end{array} \right. \end{aligned}$$
(45)
where \(N_k\) denotes the number of variable subcomponents in each variable group \(\mathbf {x}^s_i\) with \(i = 1,\ldots ,M\), and \(u_i\) and \(l_i\) are the upper and lower boundaries of the ith decision variable in \(\mathbf {x}^s\). Although this test problem has a simple linear PF, its fitness landscape is complicated. First, the decision variables are non-uniformly correlated with different objectives; second, the decision variables have mixed separability, i.e., some of them are separable while others are not. This test problem is mainly used to assess whether EMaO algorithms are capable of dealing with complicated fitness landscapes with mixed variable separability, especially in large-scale cases. Parameter settings are: \(N_k = 2\) and \(D = 20\times M\).
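The two basis landscapes in Eq. (44) are the classical Rastrigin and Rosenbrock functions, which is the source of the mixed separability noted above. A minimal Python sketch (function names are ours):

```python
import math

def rastrigin(x):
    """eta_1 in Eq. (44): highly multimodal but separable; minimum 0 at x = (0,...,0)."""
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def rosenbrock(x):
    """eta_2 in Eq. (44): nonseparable chained-variable landscape; minimum 0 at x = (1,...,1)."""
    return sum(100 * (x[i] ** 2 - x[i + 1]) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))
```

Groups transformed by \(\eta _1\) are separable, while groups transformed by \(\eta _2\) are not, so a decomposition-based optimizer cannot treat all variables uniformly.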

MaF15 (inverted LSMOP8 [16])

$$\begin{aligned} \min \left\{ \begin{array}{l} {f_1}(\mathbf {x}) = \left( 1 - \cos \left( \frac{\pi }{2}{x^f_1}\right) \ldots \cos \left( \frac{\pi }{2}{x^f_{M - 2}}\right) \cos \left( \frac{\pi }{2}{x^f_{M - 1}}\right) \right) \times \left( 1 + \sum _{j = 1}^{M}c_{1,j} \times {\bar{g}_1}(\mathbf {x}^s_j)\right) \\ {f_2}(\mathbf {x}) = \left( 1 - \cos \left( \frac{\pi }{2}{x^f_1}\right) \ldots \cos \left( \frac{\pi }{2}{x^f_{M - 2}}\right) \sin \left( \frac{\pi }{2}{x^f_{M - 1}}\right) \right) \times \left( 1 + \sum _{j = 1}^{M}c_{2,j} \times {\bar{g}_2}(\mathbf {x}^s_j)\right) \\ \quad \quad \quad \ldots \\ {f_{M - 1}}(\mathbf {x}) = \left( 1 - \cos \left( \frac{\pi }{2}{x^f_1}\right) \sin \left( \frac{\pi }{2}{x^f_2}\right) \right) \times \left( 1 + \sum _{j = 1}^{M}c_{M - 1,j} \times {\bar{g}_{M - 1}}(\mathbf {x}^s_j)\right) \\ {f_M}(\mathbf {x}) = \left( 1 - \sin \left( \frac{\pi }{2}{x^f_1}\right) \right) \times \left( 1 + \sum _{j = 1}^{M}c_{M,j} {\bar{g}_M}(\mathbf {x}^s_j)\right) \\ \mathbf {x} \in [0, 1]^{|\mathbf {x}|} \end{array} \right. \end{aligned}$$
(46)
with
$$\begin{aligned} c_{i,j} = {\left\{ \begin{array}{ll} 1, &{}\quad \text {if}~~j = i\quad \text { or }\quad j = i + 1 \\ 0, &{}\quad \text {otherwise} \end{array}\right. } \end{aligned}$$
(47)
$$\begin{aligned} \left\{ \begin{array}{ll} &{} \bar{g}_{2k - 1}(\mathbf {x}^s_i) = \frac{1}{N_k} \sum _{j = 1}^{N_k} \frac{{\eta _1}(\mathbf {x}^s_{i,j})}{|\mathbf {x}^s_{i,j}|}\\ &{} \bar{g}_{2k}(\mathbf {x}^s_i) = \frac{1}{N_k} \sum _{j = 1}^{N_k} \frac{{\eta _2}(\mathbf {x}^s_{i,j})}{|\mathbf {x}^s_{i,j}|}\\ &{} k = 1,\ldots ,\left\lceil \frac{M}{2} \right\rceil \end{array} \right. \end{aligned}$$
(48)
$$\begin{aligned} \left\{ \begin{array}{ll} &{} \eta _1(\mathbf {x}) = \sum _{i = 1}^{|\mathbf {x}|}\frac{x_i^2}{4000} - \prod \limits _{i = 1}^{|\mathbf {x}|}\cos \left( \frac{x_i}{\sqrt{i}}\right) + 1 \\ &{} \eta _2(\mathbf {x}) = \sum _{i = 1}^{|\mathbf {x}|} (x_i)^2. \end{array} \right. \end{aligned}$$
(49)
$$\begin{aligned} \left\{ \begin{array}{ll} &{}{x}^s_i \leftarrow {\left( {1 + \cos \left( 0.5 \pi \frac{i}{|\mathbf {x}^s|}\right) }\right) }\times ({{x}^s_i} - l_i) - {x}^f_1\times (u_i - l_i) \\ &{}i = 1,\ldots ,|\mathbf {x}^s| \\ \end{array} \right. \end{aligned}$$
(50)
where \(N_k\) denotes the number of variable subcomponents in each variable group \(\mathbf {x}^s_i\) with \(i = 1,\ldots ,M\), and \(u_i\) and \(l_i\) are the upper and lower boundaries of the ith decision variable in \(\mathbf {x}^s\). Although this test problem has a simple convex PF, its fitness landscape is complicated. First, the decision variables are non-uniformly correlated with different objectives; second, the decision variables have mixed separability, i.e., some of them are separable while others are not. Different from MaF14, this test problem has nonlinear (instead of linear) variable linkages on the PS, which further increases the difficulty. This test problem is mainly used to assess whether EMaO algorithms are capable of dealing with complicated fitness landscapes with mixed variable separability, especially in large-scale cases. Parameter settings are: \(N_k = 2\) and \(D = 20\times M\). The Pareto fronts of MaF14 and MaF15 are shown in Figs. 14 and 15, respectively.
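For MaF15, the basis landscapes in Eq. (49) are the Griewank and Sphere functions. A minimal Python sketch (function names are ours):

```python
import math

def griewank(x):
    """eta_1 in Eq. (49): many regularly spaced local optima; minimum 0 at x = (0,...,0)."""
    s = sum(xi ** 2 for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):  # product term uses 1-based index i
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1

def sphere(x):
    """eta_2 in Eq. (49): unimodal and separable; minimum 0 at x = (0,...,0)."""
    return sum(xi ** 2 for xi in x)
```

Both attain their minimum at the origin, so the distance-related difficulty of MaF15 comes mainly from the nonlinear variable linkage of Eq. (50) rather than from the basis functions themselves.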

Experimental setup

To conduct benchmark experiments using the proposed test suite, users may follow the experimental setup as given below.

General settings

  • Number of objectives (M)  5, 10, 15

  • Maximum population size1  \(25\times M\)

  • Maximum number of fitness evaluations (FEs)2  \(\max \{100000, 10000\times D\}\)

  • Number of independent runs  31
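For reference, the general settings above can be collected into a small helper. This is a hypothetical convenience function for scripting experiments, not part of PlatEMO:

```python
def suggested_setup(M, D):
    """General settings of this test suite for M objectives and D decision variables."""
    return {
        "max_pop_size": 25 * M,              # maximum population size
        "max_FEs": max(100000, 10000 * D),   # maximum number of fitness evaluations
        "runs": 31,                          # number of independent runs
    }
```

For instance, MaF14 with \(M=10\) has \(D=20\times M=200\), giving a budget of \(\max \{100{,}000,\,2{,}000{,}000\}=2{,}000{,}000\) FEs.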

Fig. 16

The GUI in PlatEMO for this test suite

Performance metrics

  • Inverted generational distance (IGD)  Let \(P^*\) be a set of uniformly distributed points on the Pareto front. Let P be an approximation to the Pareto front. The inverted generational distance between \(P^*\) and P can be defined as:
    $$\begin{aligned} \mathrm{IGD}(P^*,P) = \frac{\sum _{\mathbf {v} \in P^*}d(\mathbf {v},P)}{|P^*|}, \end{aligned}$$
    (51)
    where \(d(\mathbf {v},P)\) is the minimum Euclidean distance from point \(\mathbf {v}\) to set P. The IGD metric is able to measure both the diversity and the convergence of P if \(|P^*|\) is large enough, and a smaller IGD value indicates better performance. In this test suite, we suggest using 10,000 uniformly distributed reference points sampled on the true Pareto front3 for each test instance.
  • Hypervolume (HV)  Let \(\mathbf {y}^* = (y_1^*, \ldots , y_m^*)\) be a reference point in the objective space that is dominated by all Pareto optimal solutions. Let P be an approximation to the Pareto front. The HV value of P (with regard to \(\mathbf {y}^*\)) is the volume of the region which is dominated by P and dominates \(\mathbf {y}^*\). In this test suite, the objective vectors in P are normalized using \(f^j_i = \frac{f^j_i}{1.1 \times y_i^{\mathrm{nadir}}}\), where \(f^j_i\) is the ith dimension of the jth objective vector, and \(y_i^{\mathrm{nadir}}\) is the ith dimension of the nadir point of the true Pareto front.4 Then \(\mathbf {y}^* = (1,\ldots ,1)\) is used as the reference point for the normalized objective vectors in the HV calculation.
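The IGD metric and the HV pre-processing step above can be sketched in Python. The IGD part follows Eq. (51) directly; `normalize_for_hv` only performs the normalization, the HV computation itself is omitted; function names are ours:

```python
import math

def igd(ref, approx):
    """Eq. (51): average, over reference points v in P*, of the minimum
    Euclidean distance d(v, P) from v to the approximation set P."""
    def dmin(v, P):
        return min(math.sqrt(sum((vi - pi) ** 2 for vi, pi in zip(v, p)))
                   for p in P)
    return sum(dmin(v, approx) for v in ref) / len(ref)

def normalize_for_hv(P, nadir):
    """Divide each objective value by 1.1 * nadir; HV is then computed
    against the reference point (1, ..., 1)."""
    return [[fi / (1.1 * ni) for fi, ni in zip(p, nadir)] for p in P]
```

For example, `igd([(0, 0), (1, 1)], [(0, 0)])` averages the distances 0 and \(\sqrt{2}\), giving \(\sqrt{2}/2\).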

Software platform

All the benchmark functions have been implemented in MATLAB and embedded in a recently developed software platform, PlatEMO.5 PlatEMO is an open-source MATLAB-based platform for evolutionary multi- and many-objective optimization, which currently includes more than 50 representative algorithms and more than 100 benchmark functions, along with a variety of widely used performance indicators. Moreover, PlatEMO provides a user-friendly graphical user interface (GUI), which enables users to easily perform experimental settings and algorithmic configurations, and to obtain statistical experimental results with a one-click operation.

In particular, as shown in Fig. 16, we have tailored a new GUI in PlatEMO for this test suite, such that participants are able to directly obtain tables and figures comprising the statistical experimental results for the test suite. To conduct the experiments, participants only need to implement their candidate algorithms in MATLAB and embed them into PlatEMO. A detailed introduction to embedding new algorithms in PlatEMO can be found in the user manual included with the PlatEMO source code [26]. Once a new algorithm is embedded in PlatEMO, the user will be able to select and execute it via the GUI shown in Fig. 16. The statistical results will then be displayed in the figures and tables on the GUI, and the corresponding experimental result (i.e., the final population and its performance indicator values) of each run will be saved to a .mat file.

Footnotes
1

The size of the final population/archive must not exceed the given maximum population size; otherwise, a compulsory truncation will be applied in the final statistics for fair comparison.

 
2

Regardless of the number of objectives, every evaluation of the whole objective set is counted as one FE.

 
3

The specific number of reference points for IGD calculation may vary slightly owing to the different geometries of the Pareto fronts. All reference point sets can be automatically generated using the software platform introduced in Sect. “Software platform”.

 
4

The nadir points can be automatically generated using the software platform introduced in Sect. “Software platform”.

 

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 61590922, and by the Engineering and Physical Sciences Research Council of the U.K. under Grants EP/M017869/1, EP/K001310/1 and EP/K001523/1.

Copyright information

© The Author(s) 2017

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Ran Cheng (1)
  • Miqing Li (1)
  • Ye Tian (2)
  • Xingyi Zhang (2)
  • Shengxiang Yang (3)
  • Yaochu Jin (4)
  • Xin Yao (1, 5)
  1. CERCIA, School of Computer Science, University of Birmingham, Birmingham, UK
  2. School of Computer Science and Technology, Anhui University, Hefei, China
  3. School of Computer Science and Informatics, De Montfort University, Leicester, UK
  4. Department of Computer Science, University of Surrey, Surrey, UK
  5. Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
