Abstract
Real-world optimization applications in complex systems often involve multiple factors to be optimized, which can be formulated as multi-objective optimization problems. These problems have been solved by many evolutionary algorithms, such as MOEA/D, NSGA-III, and KnEA. However, when the numbers of decision variables and objectives increase, the computation costs of those algorithms become unaffordable. To reduce the high computation cost on large-scale many-objective optimization problems, we propose a two-stage framework. The first stage of the proposed algorithm combines a multitasking optimization strategy with a bidirectional search strategy, where the original problem is reformulated as a multitasking optimization problem in the decision space to enhance convergence. To improve diversity, in the second stage, the proposed algorithm applies multitasking optimization to a number of subproblems based on reference points in the objective space. To show the effectiveness of the proposed algorithm, we test it on the DTLZ and LSMOP problems and compare it with existing algorithms; it outperforms the compared algorithms in most cases and shows advantages in both convergence and diversity.
Introduction
Conventional gradient-based methods cannot solve non-analytic optimization problems. Researchers in the past two decades have confirmed that evolutionary algorithms (EAs) are effective on black-box optimization problems. Moreover, the need to simultaneously optimize multiple indicators is very common in engineering optimization, which can be modeled as a multi-objective optimization problem (MOP) with M objectives as follows:

$$\min _{\mathbf{x} \in X} \; F(\mathbf{x}) = \big (f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots , f_M(\mathbf{x})\big ) \qquad (1)$$

Here, X is the feasible region of the D decision variables (\(\mathbf{{x}} = (x_1,\dots ,x_D)\)) [1].
Solving MOPs involves two difficulties. First, traditional mathematical methods cannot solve MOPs by gradients. Second, the optimal solution of an MOP is a solution set, due to the conflicts among the multiple objective functions. The Pareto dominance relationship is a classical and widely used concept in multi-objective optimization. For any two solutions of the MOP in Eq. (1), solution \(S_A\) is said to Pareto dominate solution \(S_B\) if and only if \(f_i(S_A) \le f_i(S_B)\) for all \(i \in \{1, 2, \ldots ,M\}\) and there exists at least one objective \(f_j\) (\(j\in \{1, 2, \ldots ,M\}\)) satisfying \(f_j(S_A)<f_j(S_B)\). The set of all Pareto optimal solutions in the decision space is called the Pareto set (PS), and the projection of the PS into the objective space is called the Pareto front (PF).
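As an illustration, the Pareto dominance test just defined can be sketched in a few lines (a minimal Python sketch for minimization; the helper name `dominates` is ours, not from the paper):

```python
def dominates(a, b):
    """Pareto dominance for minimization: a dominates b iff a is no worse
    in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# S_A = (1, 2) dominates S_B = (2, 2); (1, 3) and (2, 2) are incomparable.
print(dominates([1.0, 2.0], [2.0, 2.0]))  # True
print(dominates([1.0, 3.0], [2.0, 2.0]))  # False
```

Note that two solutions can be mutually non-dominating, which is exactly why the optimum of an MOP is a set rather than a single point.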
So far, multi-objective evolutionary algorithms (MOEAs) have been a successful tool for solving MOPs [2], because they can output an approximation of the Pareto set in a single run. MOEAs using Pareto dominance for their selection are popular, and many Pareto-based MOEAs have their own selection methodology. For example, NSGA-II [3] is based on a fast non-dominated sorting approach for selecting good solutions, and SPEA2 [4] is based on non-dominated fitness allocation. Aggregation-based algorithms, such as MOEA/D [5], transform an MOP into a number of single-objective optimization problems and optimize those subproblems simultaneously. Furthermore, performance measurement can be employed in the selection process of MOEAs: IBEA [6], SMS-EMOA [7], and HypE [8] define their optimization goals in terms of different performance indicators.
As the number of objectives increases, MOPs become many-objective optimization problems (MaOPs), and the ability of Pareto dominance to filter out good individuals degenerates. To remedy this, many algorithms combine the Pareto dominance-based criterion with an additional convergence-related metric: solutions are selected first by Pareto dominance and then by the convergence-related metric. Representative approaches include the grid-based evolutionary algorithm (GrEA) [9], the preference-inspired co-evolutionary algorithm (PICEA-g) [10], the knee-point-driven evolutionary algorithm (KnEA) [11], and the many-objective evolutionary algorithm based on directional diversity and favorable convergence (MaOEA-DDFC) [12].
MOPs with a large number of decision variables (termed large-scale MOPs, LSMOPs) pose challenges to existing MOEAs, because the high-dimensional search space exponentially increases the computation cost. The most straightforward way to deal with LSMOPs is to use an optimizer that is effective in high-dimensional search; for example, CMOPSO [13, 14] uses a competitive particle swarm optimizer [15], which is good at high-dimensional optimization problems. To shrink the search space, grouping strategies have been widely used in the cooperative co-evolutionary (CC) framework for large-scale optimization. In the CC framework, decision variables are decomposed into groups that are treated as different subproblems for cooperative co-evolution. However, most CC algorithms are applied to single-objective optimization, except for a few algorithms for MOPs [16]. In fact, detected information about an LSMOP can further assist the grouping strategies. For example, DRMOS [17, 18], MOEA/DVA [19], LMEA [20], and S3-CMA-ES [21] divide the decision variables into different categories and apply different search strategies to them. In addition, problem transformation is an alternative way to reduce the high complexity of LSMOPs. For example, LSMOF [22] greatly reduces the number of decision variables by transforming the original LSMOP into a number of weight optimization subproblems. WOF [23] is another problem transformation method, combined with a grouping strategy.
As mentioned above, the high-dimensional decision and objective spaces together increase the hardness and computation cost of solving large-scale MaOPs with existing MOEAs, which are specially designed for either MaOPs or large-scale MOPs. To address both issues, we borrow the parallelism of multifactorial optimization (MFO) [24], where a single population is employed to optimize multiple optimization problems simultaneously, for solving large-scale many-objective optimization problems (LSMaOPs). Our proposed algorithm is a two-stage framework combined with multitasking optimization (termed TSMTF).
The rest of this paper is organized as follows: the next section introduces related work on LSMaOPs and the MFO idea. The proposed algorithm TSMTF is then described in detail, followed by the experimental results. Finally, the last section concludes the paper.
Related work
Large-scale multi-objective optimization
To address problems with a large number of decision variables, some representative MOEAs that reduce the large search space (CMOPSO [13, 14], S3-CMA-ES [21], MOEA/DVA [19], LMEA [20], and WOF [23]) have been proposed. We discuss them in detail below.
CMOPSO [13] employs a competitive swarm optimizer [15]. Thanks to the good search ability of the competitive swarm optimizer in high-dimensional spaces, CMOPSO outperforms other MOEAs, but it still cannot solve large-scale MOPs. Therefore, it is necessary to further reduce the decision space.
The algorithm in [14] adopts a two-stage framework. Unlike conventional particle swarm optimization algorithms, which focus on updating velocities, it proposes a new strategy to update the particles’ positions with a new competitive mechanism: each particle first pre-updates its position by its previous velocity and then learns from the leader’s position. The experimental results show that this position updating strategy is effective on LSMOPs.
MOEA/DVA [19], LMEA [20], and S3-CMA-ES [21] are three examples of decision variable analysis-based algorithms. MOEA/DVA [19] uses a clustering method based on the dominance relationship to divide the decision variables into the following three groups:

Convergence-related variables: the decision variables that contribute to convergence.

Diversity-related variables: the decision variables that contribute to diversity.

Mixed variables: the decision variables that contribute to both convergence and diversity.

With those groups, MOEA/DVA restructures the problem into a set of subproblems. The convergence-related variables are optimized first to obtain candidate solutions close to the PF. Then, MOEA/DVA treats both the diversity-related and mixed variables as diversity-related variables to enhance the diversity.
In LMEA, the decision variables are divided into two classes, convergence-related and diversity-related, based on a disturbance test on each decision variable. Then, the two classes of decision variables are optimized separately within one population to improve convergence and diversity. LMEA also employs a fast non-dominated sorting method [25] to further decrease the computation cost.
S3-CMA-ES [21] divides the decision variables into diversity-related and convergence-related groups by a clustering strategy. After that, the convergence-related variables are divided into subgroups, and each subgroup is optimized independently using the covariance matrix adaptation evolution strategy (CMA-ES). S3-CMA-ES shows good performance on LSMOPs. However, CMA-ES is invoked repeatedly for each subgroup, which makes its cost high.
However, the decision variable clustering methods in the above three MOEAs require a large number of function evaluations, which makes them hard to apply to real-world problems.
Furthermore, problem transformation methods are an alternative way to deal with LSMOPs. In both [26] and [22], the algorithms change the original problem by introducing a set of weight variables in the decision space, and the experimental results show the effectiveness of this idea on convergence. For example, a solution \({\varvec{X}}=(x_1,x_2,\ldots ,x_D)\) of the problem \(f({\varvec{X}})\) can be transformed into subproblems \(f(\phi (\omega ,{\varvec{X}}))\), where \(\phi (\omega ,{\varvec{X}})\) applies one weight to each group of decision variables; with the variables divided into k equal consecutive groups of size \(l=D/k\), it can be defined as below:

$$\phi (\omega ,{\varvec{X}}) = (\omega _1 x_1,\ldots ,\omega _1 x_l,\ \omega _2 x_{l+1},\ldots ,\omega _2 x_{2l},\ \ldots ,\ \omega _k x_D)$$

Thus, the problem with D decision variables is transformed into several k-dimensional problems. Thanks to the reduction of the decision space, the search cost is significantly reduced. However, such a transformation can be viewed as a lossy dimension reduction, which causes stability issues. In particular, the transformed objective function of LSMOF [22] is based on the hypervolume (HV) indicator, which is computationally expensive for MaOPs. Therefore, LSMOF cannot solve LSMaOPs.
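The dimension reduction behind this transformation can be sketched as follows (our own illustrative Python, assuming equal-size consecutive groups; the grouping in WOF/LSMOF is more flexible, and the function name `transform` is ours):

```python
def transform(weights, x):
    """Sketch of phi(omega, x): the D variables are split into k = len(weights)
    equal consecutive groups, and every variable in group i is scaled by
    weights[i]. Optimizing the k weights replaces optimizing all D variables."""
    k, D = len(weights), len(x)
    size = D // k
    return [weights[min(i // size, k - 1)] * xi for i, xi in enumerate(x)]

# Six decision variables, but only three weights need to be optimized:
print(transform([2.0, 0.5, 1.0], [1, 1, 1, 1, 1, 1]))
# [2.0, 2.0, 0.5, 0.5, 1.0, 1.0]
```

The search then runs in the k-dimensional weight space, which is why the evaluation budget needed to make progress drops so sharply, at the price of the lossy reduction discussed above.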
By combining a grouping method with a problem transformation method, WOF [23] also shows good performance on LSMOPs. It divides the decision variables into groups with its grouping strategies, and each group is assigned a weight. Unlike other grouping-based algorithms, WOF does not optimize subpopulations separately; instead, like LSMOF [22], it transforms the original LSMOP into weight optimization problems and optimizes the transformed problems with a single population.
Multitasking optimization
MFEA [24] is a parallel method to solve multiple problems at the same time. It defines the following properties for each individual \(p_i\) in the population P:

Factorial cost: the factorial cost of \(p_i\) on task j is \(T_i^j\).

Factorial rank: In the population, each individual is assigned a rank for each task; for example, in terms of task j, \(p_i\)’s rank is \(R_i^j\).

Scalar fitness: each individual is assigned a fitness \(\varphi _i\) based on its best-performing task, i.e., \(\varphi _i = 1/\min _j{R_i^j}\).

Skill factor: each individual is marked with its best-performing task by \(\tau _i = \text {argmin}_j{R_i^j}\), where \(j \in \{1,\ldots ,M\}\).
Thus, multiple tasks can be turned into a single-objective problem in which all the tasks use the scalar fitness \(\varphi \) of each individual as the selection criterion. In other words, the algorithm cares most about the best individual for each task. Offspring generation is also based on the skill factor: two parents \(P_a\) and \(P_b\) are randomly chosen from the population; if their skill factors are different, they execute the crossover operation only with a low probability; otherwise, they execute the crossover or mutation operations. More details of MFEA can be found in Algorithms 1 and 2.
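The bookkeeping above (factorial cost, factorial rank, skill factor, scalar fitness) can be sketched on a toy cost matrix (our own illustration; the cost values are made up, and the scalar fitness follows the \(1/\min _j R_i^j\) definition of MFEA):

```python
import numpy as np

# Toy factorial-cost matrix: rows are individuals, columns are tasks
# (lower cost is better). Values are illustrative only.
costs = np.array([[3.0, 1.0],
                  [1.0, 2.0],
                  [2.0, 3.0]])

# Factorial rank R_i^j: 1-based rank of individual i on task j.
ranks = costs.argsort(axis=0).argsort(axis=0) + 1

# Skill factor tau_i: the task on which individual i ranks best.
skill = ranks.argmin(axis=1)

# Scalar fitness phi_i = 1 / min_j R_i^j, as defined in MFEA.
fitness = 1.0 / ranks.min(axis=1)

print(skill.tolist())    # [1, 0, 0]
print(fitness.tolist())  # [1.0, 1.0, 0.5]
```

Individuals 0 and 1 are each the best on one task, so both receive the maximal scalar fitness even though neither is best on both tasks; this is what lets one population serve all tasks at once.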
Proposed algorithm
Framework
To address the issue of LSMOF [22] on LSMaOPs, we borrow the main idea of KnEA [11] and optimize the distance from knee points to the hyperplane, which replaces the HV calculations of LSMOF in the proposed algorithm. To further reduce the required number of function evaluations, we employ MFEA in TSMTF.
The structure of the proposed algorithm contains two main stages, as highlighted in Algorithm 3. The first stage aims to find solutions with good convergence; the second stage focuses on improving diversity; then, the population with good convergence and diversity is further evolved by NSGA-III [27]. When the computation budget runs out, the proposed algorithm outputs the obtained optimal solutions.
Stage 1: bidirectional weight-variable-associated multitasking strategy
This stage aims to find better individuals by reducing the dimension of the search space. An individual \({\varvec{A}}\) in the decision space in Fig. 1 can be expressed as \(\varvec{A_o} = (x_1,\ldots ,x_n)\). Then, two directions from the extreme points of the decision space can be generated as \(\varvec{A_u}={\varvec{O}}+\varvec{A_o}\) and \(\varvec{A_d}={\varvec{T}}-\varvec{A_o}\), where \({\varvec{O}}\) and \({\varvec{T}}\) denote the lower and upper extreme points of the decision space. Then, we search for the best solutions in both directions by optimizing the weights \(\lambda \) of the two subproblems, as shown in Fig. 2.
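One plausible parameterization of the two search directions is sketched below (our own illustration, not the paper's exact formulation: we assume \({\varvec{O}}\) and \({\varvec{T}}\) are the lower and upper bounds, and that the weight \(\lambda \) scales the displacement from each bound):

```python
import numpy as np

O = np.zeros(4)                     # lower extreme point of the decision space
T = np.ones(4)                      # upper extreme point of the decision space
A = np.array([0.2, 0.4, 0.6, 0.8])  # an individual A_o

def from_lower(lam):
    # move from the lower bound along the direction of A_o
    return O + lam * (A - O)

def from_upper(lam):
    # move from the upper bound along the direction T - A_o
    return T - lam * (T - A)

# lam = 1 recovers the original individual from either end:
print(from_lower(1.0))  # [0.2 0.4 0.6 0.8]
print(from_upper(1.0))  # [0.2 0.4 0.6 0.8]
```

Each individual thus yields two one-dimensional subproblems over \(\lambda \), which is what makes the stage-1 search so much cheaper than searching the full n-dimensional space.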
In this stage, the proposed algorithm chooses \(2R_1\) directions, which are generated from \(R_1\) diverse individuals on the current PF. Since HV is very computationally expensive for MaOPs, it cannot be employed as the objective function of these \(2R_1\) subproblems. In this work, we set the distance to the estimated ideal point as the objective function of the \(2R_1\) subproblems.
To simultaneously find the optimal \(\lambda \) of each subproblem, we employ MFEA as the optimizer. In this method, after normalization, the directions can be expressed as \({\varvec{D}} = (d_1,d_2,\ldots ,d_{2R_1})\). We use a weight vector \(\varvec{V_0} = (\lambda _1,\lambda _2,\ldots ,\lambda _{2R_1})\) to represent the solutions \(P_1,P_2,\ldots ,P_{2R_1}\) via \(\varvec{V_0}\cdot {\varvec{D}}\).
We consider each subproblem as one task: \(P_i\), generated from direction \(d_i\) and weight \(\lambda _i\), can be seen as an instance of task \(T_i\). Thus, \(P_1,P_2,\ldots ,P_{2R_1}\) correspond to instances of tasks \(T_1,T_2,\ldots ,T_{2R_1}\). For each instance, there is a function evaluating its performance, called the factorial cost as in MFEA; here it is the distance from \(P_i\) to the estimated ideal point. In this case, the weight vectors are taken as the individuals to be optimized by MFEA. After sorting the obtained factorial costs, MFEA solves the \(2R_1\) subproblems efficiently, as shown in Algorithm 4.
Stage 2: Diversity improvement with multitasking strategy
This stage aims to improve the diversity. First, we choose \(R_2\) individuals as the reference points: one of them is the individual with the shortest distance to the obtained ideal point, and the rest are chosen from the most dispersed individuals. As in NSGA-III, each solution is associated with its nearest reference point, and the most dispersed individuals are those associated with the reference points that have the fewest associated solutions.
Second, we expect the population to evolve toward those different reference points to improve diversity. We use the PBI value [5] as the criterion to evaluate the similarity to each reference point in the objective space. To push the population toward those reference points, we consider the PBI distance to each direction as a task. Taking the \(R_2\) reference individuals \(L = (l_1,l_2,\ldots ,l_{R_2})\) as an example, the multitasking population P is to minimize \(\text {PBI}(P,l_i)\), \(i \in \{1,\ldots ,R_2\}\), simultaneously; each individual’s factorial cost is the set of PBI values to those \(R_2\) reference individuals. Similar to the first stage, we employ MFEA to optimize those \(R_2\) subproblems.
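The PBI value used here can be computed as in MOEA/D (a self-contained Python sketch; the function name and the example vectors are ours, and \(\theta = 5\) is the common default rather than a value stated in this paper):

```python
import numpy as np

def pbi(f, z, direction, theta=5.0):
    """Penalty-based boundary intersection (PBI) value of objective vector f
    with respect to the ideal point z and a reference direction. d1 measures
    convergence along the direction, d2 the deviation from it."""
    l = np.asarray(direction, dtype=float)
    diff = np.asarray(f, dtype=float) - np.asarray(z, dtype=float)
    unit = l / np.linalg.norm(l)
    d1 = float(np.dot(diff, unit))            # distance along the direction
    d2 = float(np.linalg.norm(diff - d1 * unit))  # distance to the direction
    return d1 + theta * d2

# A point lying exactly on the direction has d2 = 0, so its PBI value
# equals its distance to the ideal point:
print(pbi([2.0, 2.0], [0.0, 0.0], [1.0, 1.0]))  # ≈ 2.8284 (= 2*sqrt(2))
```

Minimizing PBI toward a reference point therefore rewards both closeness to the ideal point and alignment with that point's direction, which is why one task per reference point spreads the population across the front.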
Experiments and discussion
Parameter settings
To test the performance of the proposed algorithm, the DTLZ [28] and LSMOP [29] problems are chosen as the test problems, because their numbers of decision variables and objective functions are scalable. In the experiments, the number of decision variables is set to 100, 200, and 500, while the number of objectives is set to 3, 5, and 10. To verify the effectiveness of the proposed algorithm, several popular existing algorithms are employed for comparison: MOEA/D [5], NSGA-II [3], and NSGA-III [27]. In addition, we choose algorithms designed for MaOPs and large-scale MOPs: KnEA [11] and LMEA [20]. For the DTLZ problems, in addition to the algorithms mentioned above, we also choose S3-CMA-ES and CMOPSO for comparison. The settings of those algorithms follow the recommended values in their original papers. The source codes of the compared algorithms are from PlatEMO [30].
For all the experiments, each algorithm is run 30 times independently on each test problem. We use the inverted generational distance (IGD) [5] to assess the performance of the compared algorithms, as it has been widely used as a performance indicator. In short, the smaller the IGD is, the better the algorithm performs.
The parameters of the proposed framework are set as follows. For the DTLZ problems with 100 decision variables, the population sizes for the 3- and 5-objective problems are set to 120 and 150, while that for the 10-objective problems is 200, and the corresponding stopping criteria are 30,000, 50,000, and 50,000 function evaluations. With 200 decision variables, the population sizes are set to 200, 300, and 300, with 50,000, 60,000, and 80,000 function evaluations for the 3-, 5-, and 10-objective problems, respectively. With 500 decision variables, the population sizes are set to 300, 500, and 500, with 100,000, 200,000, and 200,000 function evaluations, respectively. As for the LSMOP problems, the population size is set to 200 in all cases; the stopping criterion is 50,000 function evaluations for the 3-objective problems and 100,000 for the 5- and 10-objective problems. Stage 1 uses 60% of the function evaluations, and the number of directions is 10. For stage 2, the number of reference points is set to 11. TSMTF adopts simulated binary crossover (SBX) [31] for crossover and polynomial mutation [32] for mutation. The mutation probability is 1/D, where D is the number of decision variables, the crossover probability is 1, and both distribution indices are set as recommended in [31].
Effects of stage 1
In this subsection, we discuss the effect of the first stage on the proposed algorithm. Stage 1 aims to improve the convergence of the population with a limited number of function evaluations, using the bidirectional search method combined with MFEA to improve efficiency. We compare TSMTF with a variant without MFEA in stage 1 (termed TSMTF1) on the 3-, 5-, and 10-objective DTLZ problems with 100, 200, and 500 decision variables. TSMTF1 optimizes the weights serially, handling the directions one by one: for each direction, it searches for the best parameter \(\lambda \) instead of the weight vector \({\varvec{V}} = (\lambda _1,\lambda _2,\ldots ,\lambda _H)\).
As shown in Tables 1, 2 and 3, TSMTF behaves better than TSMTF1 in most cases. In general, the framework embedded with MFEA has only tiny advantages in terms of the IGD values on DTLZ1 and DTLZ3, while for the other problems the difference is easy to distinguish. As the dimension of the decision space increases, the overall advantage of TSMTF is kept. When the number of decision variables increases and the number of objectives remains unchanged, the experimental results do not show any clear trend. The main reason for this is that MFEA in this stage reduces the number of function evaluations; in other words, the proposed algorithm is efficient.
Comparative experiments
In this section, we compare the proposed algorithm with MOEA/D, NSGA-II, NSGA-III, KnEA, LMEA, S3-CMA-ES, and CMOPSO on the large-scale DTLZ problems. To make the comparison clear, we use the Wilcoxon rank-sum test [33], with TSMTF as the control algorithm. The symbols “+”, “−”, and “\(\approx \)” indicate results that are better than, worse than, and statistically similar to those of TSMTF, respectively.
As shown in Tables 4, 5 and 6, the overall results indicate that the proposed framework TSMTF converges and performs markedly better than the other algorithms on DTLZ1 and DTLZ3. On DTLZ2 and DTLZ4, TSMTF also shows good convergence, but it does not perform as well as KnEA; we speculate that this is caused by the characteristics of DTLZ2 and DTLZ4. Relatively speaking, LMEA has no advantage in this experiment because of the large number of evaluations spent on classifying the decision variables; the evaluation budget in this experiment may be too small for it to show better performance. S3-CMA-ES is in the same situation: its computation cost is too high to search each subpopulation within a limited budget, and its clustering strategy also requires a large number of evaluations. The performance of CMOPSO is similar to that of the other compared algorithms, and it performs better than TSMTF on DTLZ2 and DTLZ4 occasionally. On these problems, MOEA/D and NSGA-III perform best among the compared algorithms, while the other algorithms also obtain small IGD values, which may be owing to their advantages in diversity: in NSGA-III and MOEA/D, the diversity of the population is always kept well by the uniform weight vectors. On DTLZ6, the results show that none of the algorithms performs well within a limited number of evaluations. From the above analysis and the results in Table 4, we conclude that TSMTF has a significant advantage on DTLZ1 and DTLZ3 as MaOPs.
At a glance, for all the algorithms, DTLZ2 and DTLZ4 are easier than DTLZ1 and DTLZ3. On DTLZ2 and DTLZ4 with 100 decision variables, S3-CMA-ES can converge and performs better than half of the compared algorithms, while it cannot converge on DTLZ1 and DTLZ3. As the number of decision variables increases, the experimental results show that the performance of S3-CMA-ES degenerates due to the limited number of function evaluations, because its grouping strategy and its search within subpopulations require a large number of function evaluations. As for CMOPSO, it shows little advantage on MaOPs in these experiments due to its particle learning strategy. Similar to the other compared algorithms, it performs best on DTLZ2 and DTLZ4, while DTLZ1 and DTLZ3 are so complex that CMOPSO is unable to converge. As shown in Tables 4, 5 and 6, LMEA cannot perform well by the time the other algorithms have reached their convergent state; it seems to need extra function evaluations, and it behaves worse as the number of decision variables increases. NSGA-II has the same issue: within a limited number of evaluations, NSGA-II cannot converge on most MaOPs, although it occasionally performs better than some compared algorithms, while TSMTF can converge. As for NSGA-III, it occasionally performs better than TSMTF on DTLZ2, and on DTLZ4 with 100 and 200 decision variables it nearly always performs better than TSMTF, except in a few cases; the difference grows as the number of decision variables increases. The performance of MOEA/D is similar to that of NSGA-III: both occasionally perform better than TSMTF on DTLZ2 with 200 decision variables, while TSMTF shows advantages on DTLZ4 with 100 decision variables. As for KnEA, its IGD results are better than those of TSMTF in several experiments on DTLZ2 and DTLZ4. TSMTF shows the best performance on DTLZ1 and DTLZ3 in every experiment.
On DTLZ6, all algorithms show similar performance.
From the above results on the DTLZ problems in Tables 4, 5 and 6, KnEA, LMEA, and TSMTF are good at solving LSMaOPs. Therefore, we further compare them on the large-scale benchmark problems LSMOP [34] with different numbers of decision variables and objective functions. The IGD results are shown in Tables 7, 8 and 9.
As shown in Tables 7, 8 and 9, the proposed framework generally performs better than the other two algorithms in most cases, especially on the 10-objective problems in Table 9, although TSMTF shows disadvantages in some situations. The 3-objective LSMOP4 is an easy-to-converge problem: all three algorithms converge to the true PF, and TSMTF cannot outperform the other two algorithms. However, for the other 3-objective LSMOP problems, neither KnEA nor LMEA can converge, while TSMTF has better convergence ability, which makes TSMTF the best-performing algorithm.
KnEA performs well on the 3-objective LSMOP problems: the results on LSMOP1, LSMOP2, and LSMOP4 show that it converges in almost all cases. As the number of objectives increases, Tables 7, 8 and 9 show that the advantage of KnEA decreases, but it still performs best on the 10-objective LSMOP4 compared with the other two algorithms. As for LMEA, the results show that it can converge on LSMOP2 and LSMOP4, while LSMOP6 is too hard for it; LMEA is able to obtain solutions near the true PF on the other four LSMOP problems. As the number of decision variables increases, the performance of LMEA worsens. The results of TSMTF show that it converges toward the true PF in almost all cases. When TSMTF is not better than the other two algorithms, the performance difference is tiny; when TSMTF performs best, the other two algorithms are far from the true PF.
The results in Tables 1, 2, 3, 4, 5, 6, 7, 8 and 9 show that TSMTF performs the best among all the algorithms. The main reason for the good performance is the contribution of stage 1. In this stage, we transform the high-dimensional search space into a low-dimensional space using a number of weight vectors, and the population in TSMTF then evolves by optimizing those weight vectors. In this way, with the decreased number of decision variables, the search cost (number of function evaluations) is greatly reduced. To further improve the diversity, TSMTF specifically increases the diversity of a small number of individuals with good convergence; stage 2 can increase the diversity and convergence simultaneously. After stage 2, we employ NSGA-III to make the population more evenly distributed. LMEA spends too many evaluations on decision variable grouping, and S3-CMA-ES spends too many on searching within subpopulations, so their performance degenerates when the computation budget is limited. Since KnEA, NSGA-III, CMOPSO, and MOEA/D easily get trapped in a high-dimensional decision space without any dimension reduction, they cannot find the true PS for most large-scale MaOPs.
Conclusions and remarks
In this work, we propose a two-stage framework combined with MFEA to address large-scale MaOPs. In stage 1, we use a bidirectional search strategy combined with MFEA; in other words, we transform the MaOP into a multitasking optimization problem. Stage 2 then aims to improve the diversity of the population by applying multitasking to a number of subproblems derived from the multi-objective optimization problem. Finally, the population is optimized by NSGA-III. As the experiments show, with a limited number of function evaluations, the proposed algorithm outperforms the compared algorithms on MaOPs, especially on complex problems such as DTLZ1 and DTLZ3. The results verify the effectiveness of the framework.
In general, TSMTF performs well, but both stages can still be improved, and there is much room for improvement. For example, for the transformed objective function in stage 1 we adopt the bidirectional strategy, and other effective methods are worth considering. In addition, the diversity maintenance in stage 2 still needs to be improved.
References
Tian Y, Wang H, Zhang X, Jin Y (2017) Effectiveness and efficiency of nondominated sorting for evolutionary multi and manyobjective optimization. Complex Intell Syst 3(4):247–263
Coello CAC, Brambila SG, Gamboa JF, Tapia MGC, Gómez RH (2020) Evolutionary multiobjective optimization: open research areas and some challenges lying ahead. Complex Intell Syst 6(2):221–236
Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
Zitzler E, Laumanns M, Thiele L (2001) SPEA2: improving the strength Pareto evolutionary algorithm. TIK-Report 103
Zhang Q, Li H (2007) MOEA/D: A multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731
Zitzler E, Künzli S (2004) Indicator-based selection in multiobjective search. In: International conference on parallel problem solving from nature. Springer, Berlin, Heidelberg, pp 832–842
Beume N, Naujoks B, Emmerich M (2007) SMS-EMOA: multiobjective selection based on dominated hypervolume. Eur J Oper Res 181(3):1653–1669
Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolumebased manyobjective optimization. Evol Comput 19(1):45–76
Yang S, Li M, Liu X, Zheng J (2013) A gridbased evolutionary algorithm for manyobjective optimization. IEEE Trans Evol Comput 17(5):721–736
Wang R, Purshouse RC, Fleming PJ (2012) Preferenceinspired coevolutionary algorithms for manyobjective optimization. IEEE Trans Evol Comput 17:474–494
Zhang X, Tian Y, Jin Y (2014) A knee point driven evolutionary algorithm for manyobjective optimization. IEEE Trans Evol Comput 19(6):761–776
Cheng J, Yen GG, Zhang G (2015) A manyobjective evolutionary algorithm with enhanced mating and environmental selections. IEEE Trans Evol Comput 19(4):592–605
Zhang X, Zheng X, Cheng R, Qiu J, Jin Y (2017) A competitive mechanism based multiobjective particle swarm optimizer with fast convergence. Inf Sci 427:63–76
Tian Y, Zheng X, Zhang X, Jin Y (2019) Efficient largescale multiobjective optimization based on a competitive swarm optimizer. IEEE Trans Cybern 50(8):3696–3708
Cheng R, Jin Y (2014) A competitive swarm optimizer for large scale optimization. IEEE Trans Cybern 45(2):191–204
Gong M, Li H, Luo E, Liu J, Liu J (2016) A multiobjective cooperative coevolutionary algorithm for hyperspectral sparse unmixing. IEEE Trans Evol Comput 21(2):234–248
Wang H, Jiao L, Shang R, He S, Liu F (2015) A memetic optimization strategy based on dimension reduction in decision space. Evol Comput 23(1):69–100
Wang H, Jin Y (2017) Efficient nonlinear correlation detection for decomposed search in evolutionary multiobjective optimization. In: 2017 IEEE Congress on evolutionary computation (CEC). IEEE, New York, pp 649–656
Ma X, Liu F, Qi Y, Wang X, Li L, Jiao L, Yin M, Gong M (2016) A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with largescale variables. IEEE Trans Evol Comput 20(2):275–298
Cheng R, Zhang X, Tian Y (2016) A decision variable clusteringbased evolutionary algorithm for largescale manyobjective optimization. IEEE Trans Evol Comput 22(1):99–112
Chen H, Cheng R, Wen J, Li H, Weng J (2020) Solving largescale manyobjective optimization problems by covariance matrix adaptation evolution strategy with scalable small subpopulations. Inf Sci 509:457–469
He C, Li L, Tian Y, Zhang X, Cheng R, Jin Y, Yao X (2019) Accelerating large-scale multiobjective optimization via problem reformulation. IEEE Trans Evol Comput 23(6):949–961
Zille H, Ishibuchi H, Mostaghim S, Nojima Y (2018) A framework for largescale multiobjective optimization based on problem transformation. IEEE Trans Evol Comput 22(2):260–275
Gupta A, Ong YS, Feng L (2016) Multifactorial evolution: toward evolutionary multitasking. IEEE Trans Evol Comput 20(3):343–357
Tian Y, Wang H, Zhang X, Jin Y (2017) Effectiveness and efficiency of non-dominated sorting for evolutionary multi- and many-objective optimization. Complex Intell Syst 3(4):247–263
Zille H, Ishibuchi H, Mostaghim S, Nojima Y (2017) A framework for largescale multiobjective optimization based on problem transformation. IEEE Trans Evol Comput 22(2):260–275
Deb K, Jain H (2013) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601
Deb K, Thiele L, Laumanns M, Zitzler E (2005) Scalable test problems for evolutionary multiobjective optimization. In: Evolutionary multiobjective optimization. Springer, London, pp 105–145
Cheng R, Jin Y, Olhofer M, Sendhoff B (2017) Test problems for largescale multiobjective and manyobjective optimization. IEEE Trans Cybern 47(12):4108–4121
Tian Y, Cheng R, Zhang X, Jin Y (2017) PlatEMO: a MATLAB platform for evolutionary multiobjective optimization [educational forum]. IEEE Comput Intell Mag 12(4):73–87
Deb K (2001) Multiobjective optimization using evolutionary algorithms, vol 16. Wiley, New York
Deb K, Agrawal RB et al (1995) Simulated binary crossover for continuous search space. Complex Syst 9(2):115–148
Wilcoxon F, Katti SK, Wilcox RA (1970) Critical values and probability levels for the Wilcoxon rank sum test and the Wilcoxon signed rank test. Selected tables in mathematical statistics 1:171–259
Cheng R, Jin Y, Olhofer M, Sendhoff B (2016) Test problems for large-scale multiobjective and many-objective optimization. IEEE Trans Cybern 47(12):4108–4121
This work was supported in part by the National Natural Science Foundation of China (Grant No. 61976165).
Chen, L., Wang, H. & Ma, W. Two-stage multitasking transform framework for large-scale many-objective optimization problems. Complex Intell. Syst. 7, 1499–1513 (2021). https://doi.org/10.1007/s40747-021-00273-5