Multiobjective optimization problems (MOPs) are common in real-world applications, and many conventional single-objective optimization problems can also be reformulated as MOPs [1]. Specifically, an MOP involves multiple, often conflicting, objectives and can be mathematically formulated as

$$\begin{aligned} \text {minimize}\quad&\textbf{f}(\textbf{x})=\left( f_1(\textbf{x}),f_2(\textbf{x}),\dots ,f_m(\textbf{x})\right) \\ \text {subject to}\quad&\textbf{x}\in \mathbb {R}^d. \end{aligned}$$

Notably, m and d are the numbers of objectives and decision variables, respectively. The optimum of an MOP is a set of non-dominated solutions trading off between the different objectives [2]. To be more specific, all the optimal solutions in the decision space form the Pareto optimal set (PS), and the projection of the PS onto the objective space is the Pareto optimal front (PF). Thanks to the population-based nature of evolutionary algorithms (EAs), many researchers have turned to evolutionary multiobjective optimization to obtain a whole solution set in a single run [3].
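The Pareto dominance relation underlying the PS/PF definitions can be illustrated with a short sketch (our own minimal code, assuming minimization of all objectives):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = non_dominated(pts)
# (3.0, 3.0) is dominated by (2.0, 2.0); the rest trade off between objectives
```

The returned `front` is exactly the kind of trade-off set an MOEA approximates in the objective space.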

Multiobjective EAs (MOEAs) for solving MOPs with different properties have been designed in recent decades. Since the mid-1980s, most MOEAs have emphasized simplicity [4], such as the non-dominated sorting-based genetic algorithms (the multiobjective GA (MOGA [5]) and NSGA [6]). These MOEAs effectively solved simple MOPs with a handful of decision variables (usually fewer than ten) and two to three objectives. Two decades ago, the strength Pareto EA (SPEA [7]), along with the elitist NSGA (NSGA-II [8]), the decomposition-based MOEA (MOEA/D [9]), and the indicator-based EA (IBEA [10]), shifted the emphasis to efficiency. These MOEAs have shown promising performance in obtaining evenly distributed solution sets on problems with fewer than 100 decision variables and two to three objectives [11].

With the increase in the number of objectives, MOPs with more than three objectives (known as many-objective optimization problems or MaOPs) cause a loss of selection pressure for conventional MOEAs [12]. Thus, some environmental selection strategies were designed to distinguish the qualities of candidate solutions in the high-dimensional objective space [13]. For instance, some many-objective EAs have been proposed by modifying the dominance relationship [14], introducing additional reference information [15], or using effective performance indicators [16], e.g., the knee point-based EA (KnEA [17]), the reference vector guided EA (RVEA [18]), and the fast hypervolume-based EA (HypE [19]). The MOEAs above have shown promising performance in solving problems with up to 15 objectives and fewer than 100 decision variables [20].

In recent years, MOPs with more than 100 decision variables (known as large-scale MOPs or LSMOPs) [21] have appeared more frequently in complex industries, e.g., community detection in complex social networks [22] and ratio error estimation of voltage transformers (TREE) [23]. Due to the rapid growth in the number of decision variables, the curse of dimensionality [24] makes LSMOPs even more challenging for MOEAs than MaOPs [25]. Some MOEAs tailored for large-scale multiobjective optimization have been proposed to handle this challenge. Generally, they can be roughly classified into four categories [26]: (1) the cooperative co-evolutionary (CC)-based MOEAs group the decision variables without specific analysis and solve LSMOPs in a divide-and-conquer manner [27]; (2) the decision variable analysis-based MOEAs group the decision variables by linking them to convergence or diversity properties [28, 29], e.g., the decision variable analysis-based MOEA (MOEA/DVA [30, 31]), the EA with cluster-based grouping (LMEA [32]), and the subpopulations-based approach (S\(^3\)-CMA-ES [33]); (3) the problem reformulation-based approaches form the third category, including the weighted optimization-based framework (WOF [34]) and the large-scale multiobjective optimization framework (LSMOF [35]); and (4) the MOEAs of the last category use efficient offspring generation strategies for large-scale multiobjective optimization, e.g., the competitive swarm optimizer-based EA (LMOCSO [36]) and the adaptive direction-guided EA (DGEA [37, 38]).

The aforementioned large-scale MOEAs have shown promising performance in solving LSMOPs, but they can be further improved by addressing their inherent drawbacks. For instance, the CC-based approaches may group the decision variables incorrectly; the decision variable analysis-based approaches could waste a huge number of function evaluations (FEs); the problem reformulation-based approaches could ignore diversity maintenance to some extent; and the efficient offspring generation-based algorithms could overlook local search when generating candidate offspring solutions. Thus, we propose a sampling-based MOEA that takes advantage of the ideas of decision variable analysis, problem reformulation, and efficient offspring generation. The main contributions of this study are summarized as follows:

  1.

    An efficient variable classification technique is proposed to guide the search in the decision space. Compared with existing exact decision variable analysis methods, the proposed classification approach is applicable to problems with different properties at a significantly lower computational cost.

  2.

    Local search is emphasized in the proposed SLSEA, aiming to enhance the capability of MOEAs in searching for local/global optima and enabling the modification of conventional MOEAs for efficient large-scale multiobjective optimization.

The rest of this paper is organized as follows. The details of the proposed sampling-based large-scale EA, termed SLSEA, are presented in section “Method”. Next, the settings of the different algorithms, the tested problems, and the adopted performance indicators are given in “Experimental settings”. Empirical results achieved by SLSEA in comparison with some state-of-the-art large-scale MOEAs are presented in “Comparative studies”, and conclusions are drawn in “Conclusion”.


In this part, we first demonstrate the main idea of this study, followed by the principles of the proposed variable division approach. Then, the sampling strategies guided by the variable division are elaborated.

Main idea

Fig. 1

An illustrative example of the three sampling operations in SLSEA on an MOP with five decision variables, where the shaded region is the feasible decision space in the parallel coordinates plot and \(\textbf{b}\) is a binary vector

Inspired by the decision variable analysis-based divide-and-conquer strategy and the efficient offspring generation approaches, we propose an optimization-based variable classification approach to handle LSMOPs. Instead of categorizing the decision variables into many groups, the proposed method divides all the decision variables into two groups. Generally, the first group consists of decision variables related to convergence, and the second of those related to diversity. All the convergence-related/diversity-related decision variables are optimized independently without considering the pairwise variable interactions. Accordingly, three different sampling strategies, following the idea of efficient offspring generation, are designed for convergence enhancement, diversity maintenance, and local search. An illustrative example of the proposed offspring generation strategies on an MOP with five decision variables is presented in Fig. 1, which visually shows the principles of the three sampling strategies in SLSEA. Assume the binary vector \(\textbf{b}=(0,1,0,1,1)\) indicates the division result of an MOP with five decision variables: decision variables \(x_2,x_4,x_5\) are convergence-related, while \(x_1,x_3\) are diversity-related.

  1.

    In Fig. 1a, the convergence-related sampling perturbs variables \(x_2\), \(x_4\), and \(x_5\) using a standard Gaussian distribution \(\mathcal {N}(0,1)\).

  2.

    The diversity-related sampling perturbs the remaining variables (i.e., \(x_1\) and \(x_3\), which may be related to diversity) using a uniform distribution \(\mathcal {U}(0,1)\), as displayed in Fig. 1b. Sampling the diversity-related decision variable(s) from a uniform distribution is expected to enhance the exploration capability of the population.

  3.

    As for the local search-related sampling, we use a Gaussian distribution to restrict the sampling density around the current population, aiming to encourage the exploitation of the population, as shown in Fig. 1c.

With these ad hoc sampling strategies, we are motivated to solve LSMOPs via efficient sampling under the guidance of variable classification. Note that the standard Gaussian distribution is used in two of the designed sampling strategies due to its symmetry, coverage, and predefined center. The symmetry ensures unbiased sampling when no prior knowledge of the problem is available, and the coverage enables the sampling of all possible solutions. Moreover, the predefined center controls the sampling, which is expected to guide it towards the Pareto optimal set. Other distributions with similar properties can also be used in the three sampling strategies. If prior knowledge of the problem is given, a multivariate Gaussian distribution can be used instead, though it could be computationally expensive.
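As a concrete illustration of Fig. 1, the sketch below (our own simplified code, not the paper's implementation) perturbs a five-variable solution under the three sampling modes; box constraints and boundary repair are omitted for brevity:

```python
import random

def sample(x, b, mode, rng):
    """Perturb solution x (list of floats) guided by binary vector b.
    mode 'convergence': resample variables with b_i == 1 from N(0, 1);
    mode 'diversity':   resample variables with b_i == 0 from U(0, 1);
    mode 'local':       perturb every variable around x_i via N(x_i, 1)
                        (b is unused in this mode)."""
    y = list(x)
    for i in range(len(x)):
        if mode == "convergence" and b[i] == 1:
            y[i] = rng.gauss(0.0, 1.0)
        elif mode == "diversity" and b[i] == 0:
            y[i] = rng.uniform(0.0, 1.0)
        elif mode == "local":
            y[i] = rng.gauss(x[i], 1.0)
    return y

rng = random.Random(1)
x = [0.5] * 5
b = [0, 1, 0, 1, 1]  # x2, x4, x5 convergence-related; x1, x3 diversity-related
yc = sample(x, b, "convergence", rng)  # only positions with b_i == 1 change
yd = sample(x, b, "diversity", rng)    # only positions with b_i == 0 change
yl = sample(x, b, "local", rng)        # all positions move slightly around x
```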

Schema of SLSEA

Algorithm 1

Algorithm 1 presents the schema of the proposed SLSEA. It mainly consists of five components, i.e., convergence sampling, variable classification, diversity sampling, local sampling, and environmental selection. First, a candidate solution set P of size n and a binary vector set B of size \(n_b\) are randomly initialized (Steps 1–2 in Algorithm 1). Next, B is used to sample a set of convergence-related solutions \(P_c\) (Steps 5–6 in Algorithm 1). Notably, B is updated using the proposed variable classification approach, during which the newly generated convergence-related solutions are merged into population \(P_c\). Then, the diversity-related sampling strategy (Step 7 in Algorithm 1) is adopted to sample a set of diversity-related candidate solutions \(P_d\) under the guidance of the updated B. Afterwards, a local search-related sampling is applied to sample local solutions \(P_l\) around the current population P. Finally, the combination of all the sampled candidate solutions is updated using environmental selection. Notably, different environmental selection strategies from conventional MOEAs can be used in SLSEA; here, we use the environmental selection of NSGA-II due to its efficiency and simplicity. The procedures above are repeated until the maximum number of iterations is reached.
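The flow of the five components can be condensed into a brief sketch. This is a heavily simplified, illustrative rendition rather than the authors' implementation: the variable classification of Algorithm 3 is omitted (B stays fixed), box constraints are assumed to be [0, 1], NSGA-II's environmental selection is replaced by a naive objective-sum truncation, and the toy bi-objective `evaluate` is ours:

```python
import random

def evaluate(x):
    # toy bi-objective function standing in for f(x); not from the paper
    return (sum(v * v for v in x), sum((v - 1.0) ** 2 for v in x))

def slsea_sketch(d=8, n=10, n_b=4, iters=30, seed=0):
    rng = random.Random(seed)
    # Steps 1-2: random initialization of population P and binary vector set B
    P = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.randint(0, 1) for _ in range(d)] for _ in range(n_b)]
    for _ in range(iters):
        off = []
        for b in B:  # convergence-related sampling (P_c); B kept fixed here
            x = list(rng.choice(P))
            for i, bi in enumerate(b):
                if bi:
                    x[i] = min(1.0, max(0.0, rng.gauss(0.5, 0.3)))
            off.append(x)
        best = min(P, key=lambda s: sum(evaluate(s)))
        for b in B:  # diversity-related sampling (P_d) around the best solution
            x = list(best)
            for i, bi in enumerate(b):
                if not bi:
                    x[i] = rng.random()
            off.append(x)
        x = rng.choice(P)  # local search-related sampling (P_l)
        off.append([min(1.0, max(0.0, v + rng.gauss(0.0, 0.05))) for v in x])
        # placeholder environmental selection: keep the n best by objective sum
        # (SLSEA uses NSGA-II's environmental selection instead)
        P = sorted(P + off, key=lambda s: sum(evaluate(s)))[:n]
    return P
```

Because the placeholder selection is elitist, the best objective sum in the population never worsens over iterations.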

Convergence-related sampling

The detailed procedures of the convergence-related sampling are given in Algorithm 2. In B, the ith binary vector \(\textbf{b}_i\) denotes an approximation of the accurate variable classification. It is used to sample some convergence-related solutions by perturbing only the convergence-related variables. A total of \(n_s\times n_b\) candidate solutions are sampled during the sampling, where \(n_s\) candidate solutions are sampled for each binary vector.

Algorithm 2
Algorithm 3

In addition to the convergence-related sampling in the main loop of SLSEA, this sampling is also conducted for updating the binary vectors (Step 3 in Algorithm 3). Thus, a total of \(2n_s\times n_b\) convergence-related solutions are sampled in one generation of SLSEA.

Fig. 2

An illustrative example of the variable classification in an MOP with five decision variables, where the size of the binary vector set \(n_b\) is set to 3. Each binary vector shows the role of each decision variable, where the values 1 and 0 indicate that the decision variable is related to convergence enhancement and diversity maintenance, respectively. For instance, \(\textbf{b}_1\) indicates that decision variables \(x_1,x_3,x_4,x_5\) are related to convergence enhancement, while \(x_2\) is related to diversity maintenance. Particularly, the quality of a binary vector is assessed by the quality of the solutions sampled using this binary vector

Variable classification

In the proposed SLSEA, variable classification is used to update the binary vectors for better approximating the accurate variable classification (refer to Algorithm 3). An illustrative example of an MOP with five decision variables is given in Fig. 2 to show the detailed procedures. During the variable classification, the binary vector set B of size \(n_b\) is regarded as the parent population, and the quality of each binary vector is assessed based on the quality of the sampled solutions using Algorithm 4. The mating selection strategy in NSGA-II is applied to select the mating pool for generating the offspring \(B'\). Notably, the single-point crossover and bit-wise mutation in the canonical GA are used for the generation of offspring binary vectors, and readers can refer to the literature [8, 39] for more details. Next, a set of convergence-related solutions is sampled for each offspring binary vector using Algorithm 2, followed by the quality assessment of each offspring binary vector. Finally, \(n_b\) binary vectors are selected from \(B\cup B'\) using the environmental selection strategy in NSGA-II. Besides, the convergence-related solution set is also updated.
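The binary-vector variation step can be illustrated as follows; this is a generic GA sketch of single-point crossover and bit-wise mutation [8, 39], not code taken from the paper:

```python
import random

def single_point_crossover(p1, p2, rng):
    """Canonical GA single-point crossover on two binary vectors."""
    c = rng.randrange(1, len(p1))  # cut point in [1, d-1]
    return p1[:c] + p2[c:], p2[:c] + p1[c:]

def bitwise_mutation(b, rng, pm=None):
    """Flip each bit independently with probability pm (default 1/d)."""
    pm = 1.0 / len(b) if pm is None else pm
    return [1 - bit if rng.random() < pm else bit for bit in b]

rng = random.Random(42)
b1, b2 = [0, 1, 0, 1, 1], [1, 1, 0, 0, 0]
c1, c2 = single_point_crossover(b1, b2, rng)  # offspring binary vectors
m1 = bitwise_mutation(c1, rng)                # mutated offspring
```

Each offspring bit comes from one of the two parents, so crossover preserves the per-position bit counts; mutation then injects new classification hypotheses.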

As a crucial part of the variable classification, the quality assessment determines the accuracy of variable classification results for guiding the sampling of convergence- and diversity-related solutions. Its detailed procedures are displayed in Algorithm 4 using two criteria:

Algorithm 4
  (1)

    Instead of using the average Euclidean distance to measure the convergence degree of a candidate solution set, the grid-based distance given in (2) is used. It could eliminate the effect of neighboring points [40] and thus tolerate some inaccuracy in the quality assessment.

    $$\begin{aligned} d_i&= \left\lfloor N_d\times \frac{\left\Vert {\textbf{f}(\textbf{x}_i) - \textbf{l}}\right\Vert -\left\Vert {\textbf{l}}\right\Vert }{\left\Vert {\textbf{u}}\right\Vert -\left\Vert {\textbf{l}}\right\Vert }\right\rfloor ,\end{aligned}$$
    $$\begin{aligned} q_1&= \sum _{i=1}^{N_s}{\text {sign}\left( \min {(\textbf{f}(\textbf{x}_i) - \textbf{l})}\right) \times d_i}, \end{aligned}$$

    where \(\textbf{l}\) and \(\textbf{u}\) are the lower and upper boundaries of the objective vectors, sign(\(*\)) is the sign function, \(\lfloor *\rfloor \) is the floor function, \(\left\Vert {*}\right\Vert \) obtains the length of a vector, and \(N_d\) is the number of solutions in the current solution set. If the ideal point dominates a solution, its grid-based distance will be positive; otherwise, its grid-based distance will be negative. Thus, such a solution will reduce the average distance of the entire solution set to encourage the generation of convergence-related binary vectors.

  (2)

    The use of \(q_2\) is inspired by the decision variable analysis method in LMEA [32]; a smaller \(q_2\) value is likely to indicate a higher correlation with diversity, and it can be calculated using the non-dominated sorting method [2].

The functions \(q_1\) and \(q_2\) are intuitively designed, aiming to demonstrate the effectiveness of the reformulation process in a relatively straightforward manner. The first objective reflects the average convergence degree of the solution set, which is a simplified version of the convergence degree in LMEA. The second objective is expected to minimize the number of convergence-related decision variables, aiming to reduce the dimensionality of the sub-problem for accelerating large-scale multiobjective optimization. Nonetheless, the framework is compatible with other tailored functions of similar purpose.
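The grid-based distance and \(q_1\) can be sketched as a direct transcription of the two formulas above; this is our own reading of the symbols (with \(N_d\) treated simply as the scaling constant in the floor term), so the roles of the constants are an assumption rather than the authors' code:

```python
import math

def grid_distance(fx, l, u, N_d):
    """Signed grid-based distance of one objective vector fx, following
    d_i and the sign(min(f(x_i) - l)) factor of q_1. l, u are the lower
    and upper boundaries of the objective vectors."""
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    diff = [fi - li for fi, li in zip(fx, l)]
    d = math.floor(N_d * (norm(diff) - norm(l)) / (norm(u) - norm(l)))
    s = (min(diff) > 0) - (min(diff) < 0)  # sign(min(f(x) - l))
    return s * d

def q1(F, l, u, N_d):
    """q_1: summed signed grid distance over a sampled solution set F."""
    return sum(grid_distance(fx, l, u, N_d) for fx in F)
```

A point dominated by the ideal point \(\textbf{l}\) contributes positively, while a point with some objective below \(\textbf{l}\) contributes negatively, as described in the text.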

Diversity-related sampling

During the variable classification, a set of binary vectors is optimized simultaneously. We take advantage of these binary vectors to sample some candidate solutions for diversity maintenance, as presented in Algorithm 5. We first obtain the best-converged solution \(\textbf{p}\in P\) as the baseline to ensure the convergence of the candidate solutions to be sampled (Step 1 in Algorithm 5). Here, \(\left\Vert \textbf{f}(\textbf{p})\right\Vert \) denotes the Euclidean norm of solution \(\textbf{p}\) in the objective space, which reflects its convergence degree. By selecting the solution with the best convergence degree, Step 1 of Algorithm 5 is expected to guide the sampling of diversity-related solutions with good convergence. Then, we sample n candidate solutions for each binary vector by perturbing the diversity-related decision variables (i.e., elements with zero value in a binary vector). Specifically, a uniform distribution \(\mathcal {U}(0,1)\) is used to generate a random number k for perturbing the decision vector (Step 3 in Algorithm 5), encouraging the sampling of diversity-related candidate solutions. Finally, all the sampled candidate solutions are merged as population \(P_d\).
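The first steps of Algorithm 5 can be sketched as follows; this is our simplified code, and the toy `evaluate` function and parameter values are illustrative only:

```python
import math
import random

def diversity_sampling(P, evaluate, B, n, rng):
    """Sketch of Algorithm 5: start from the best-converged solution
    (smallest Euclidean norm in the objective space) and, for each binary
    vector, resample the diversity-related variables (b_i == 0) from U(0, 1)."""
    p = min(P, key=lambda x: math.hypot(*evaluate(x)))  # Step 1: baseline solution
    out = []
    for b in B:
        for _ in range(n):
            y = list(p)
            for i, bi in enumerate(b):
                if bi == 0:  # diversity-related variable
                    y[i] = rng.uniform(0.0, 1.0)
            out.append(y)
    return out

rng = random.Random(7)
P = [[0.9] * 5, [0.1] * 5]
evaluate = lambda x: (x[0], sum(x) / len(x))  # toy bi-objective function
B = [[0, 1, 0, 1, 1]]
Pd = diversity_sampling(P, evaluate, B, n=4, rng=rng)
```

The convergence-related variables of the baseline are left untouched, so the sampled solutions spread out in the objective space while inheriting its convergence.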

Algorithm 5

Local search-related sampling

While the variable classification and diversity-related sampling aim to deal with convergence enhancement and diversity maintenance, respectively, the local search-related sampling is expected to conduct a local search around existing solutions. The detailed procedures of the local search-related sampling are given in Algorithm 6. Specifically, we conduct a local search around \(n_s\) solutions randomly selected from population P. Notably, this selection can also choose some dominated solutions to encourage diversity maintenance, since they could be important in global optimization [40, 41]. Then, a value randomly drawn from a Gaussian distribution is used to perturb each chosen solution. Mathematically, for a candidate solution \(\textbf{x}\), the ith variable of the perturbed solutions follows the Gaussian distribution \(\mathcal {N}(x_i,1)\) (refer to Fig. 1c). Finally, a total of \(n_s\times n\) candidate solutions are merged as population \(P_l\) during the local search-related sampling.

Algorithm 6

Experimental settings

We first briefly introduce the adopted performance indicators, followed by the details of the involved parameter settings. Each algorithm is run 20 times on each test problem independently for the Wilcoxon rank-sum test [42] at a significance level of 0.05. We use the symbols ‘−’, ‘\(+\)’, and ‘\(\approx \)’ to indicate that the compared algorithm performs significantly worse than, significantly better than, and statistically similar to SLSEA, respectively. All the compared algorithms are implemented in PlatEMO v2.6 [43] on a PC with two 2.1 GHz Intel(R) Xeon(R) Gold 6130 CPUs (32 CPU cores) and 128 GB RAM running the Windows 10 64-bit operating system.
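For reference, the rank-sum comparison can be sketched with the large-sample normal approximation; this hand-rolled version (no tie handling) only illustrates the idea, and in practice a statistics library routine would be used:

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test using the large-sample normal
    approximation, without tie correction -- a minimal sketch of the
    procedure, not a replacement for a statistics library. Returns (z, p)."""
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    # rank sum of sample a (1-based ranks; ties are assumed absent)
    w = sum(pooled.index(v) + 1 for v in a)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    # two-sided p-value via the standard normal CDF
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p
```

With 20 independent runs per algorithm, a p-value below 0.05 marks the difference between two IGD/HV samples as statistically significant.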

Performance indicator

To assess the performance of the compared algorithms on large-scale multiobjective optimization, two widely used indicators, namely the IGD indicator [44] and the hypervolume (HV) indicator [45], are used. Both indicators can assess the convergence degree and distribution uniformity of a solution set in multiobjective optimization. Nevertheless, these two performance indicators are different due to the different requirements in reference points.

For the IGD indicator, a set of uniformly distributed points located precisely on the PF is required to give an accurate assessment of the quality of a solution set. By contrast, only a single point close to the nadir point of the PF is needed for HV calculation [46]. Generally, IGD calculation requires prior knowledge about the PF, making it suitable for mathematically formulated benchmark problems with known PFs, e.g., the DTLZ [48], WFG [49], and LSMOP [50] problems. The IGD\(+\) indicator can also be used for a better quality assessment, as it is weakly Pareto compliant, whereas IGD is Pareto incompliant [47]. However, for a real-world MOP such as a TREE problem [23], obtaining a set of uniformly distributed points on its PF is impractical. Thus, we adopt the HV indicator to assess the quality of a solution set using an approximation of the real nadir point.
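The two indicators can be illustrated for the bi-objective case; the code below is a minimal sketch of ours (exact IGD and a sweep-line HV for two minimization objectives), not the implementation used in PlatEMO:

```python
import math

def igd(ref, approx):
    """Inverted generational distance: mean distance from each reference
    point (ideally uniform on the PF) to its nearest obtained solution."""
    return sum(min(math.dist(r, a) for a in approx) for r in ref) / len(ref)

def hv_2d(points, nadir):
    """Hypervolume for a bi-objective minimization problem: area dominated
    by the non-dominated points and bounded by the reference (nadir) point."""
    pts = sorted(p for p in points if p[0] < nadir[0] and p[1] < nadir[1])
    area, prev_f2 = 0.0, nadir[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # skip dominated points during the sweep
            area += (nadir[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area
```

A smaller IGD and a larger HV both indicate a better solution set; only HV is computable without knowing the PF, which is why it is used for the TREE problems.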

Test problems

In the following experiments, some test instances are selected from three benchmark test suites, i.e., DTLZ [48], LSMOP [50], and TREE [23]. In all these test instances, the number of objectives m is set to two, and the detailed settings are given as follows.

  (1)

    Three representative DTLZ problems with fully separable variable interactions are tested for the ablation studies, i.e., DTLZ1, DTLZ4, and DTLZ7 [48]. DTLZ1 is a multimodal MOP on which conventional MOEAs can hardly obtain converged solutions. DTLZ4 may hinder MOEAs from achieving a well-distributed set of solutions, and DTLZ7 tests the ability of an algorithm to maintain subpopulations in different Pareto optimal regions. Besides, the number of decision variables is set to 5000, and the maximum number of FEs is set to \(1\times 10^6\).

  (2)

    Nine LSMOP problems are tested in this study, involving fully separable (LSMOP1, LSMOP5, and LSMOP9), partially separable (LSMOP2 and LSMOP6), or mixed (LSMOP3, LSMOP4, LSMOP7, and LSMOP8) variable interactions [50]. Generally, these LSMOP problems are tailored for examining the performance of EAs in large-scale multiobjective optimization, and we set the maximum number of FEs to \(200\times d\) for test instances with d decision variables (d ranges in \(\{1000, 2000, 5000\}\)).

Parameter settings

We adopt the recommended parameter settings for the compared algorithms that have achieved the best performance reported in the literature for fair comparisons. Moreover, to give an insight into the performance of SLSEA in large-scale multiobjective optimization, different variants of SLSEAs are designed as well.

Fig. 3

Convergence profiles of different variants of SLSEA on three DTLZ problems with 5000 decision variables in terms of IGD values

Variants of SLSEA

Several ablation studies are conducted to investigate the effect of the different sampling strategies in SLSEA on its performance in large-scale multiobjective optimization. Five variants of SLSEA using different combinations of sampling strategies are tested on three DTLZ problems [48], three LSMOP problems [50], and two TREE problems [23]. The first variant, without diversity sampling and local search-related sampling, is called SLSEA\(\setminus \)DL; the second one, with only local search-related sampling, is called SLSEA\(\setminus \)CD; the third one, without diversity sampling, is called SLSEA\(\setminus \)D; the fourth one, without local search-related sampling, is called SLSEA\(\setminus \)L; and the last one, with all the functions, is named SLSEA. In these five variants, the binary vector set size \(n_b\) is set to 10 and the sampling size \(n_s\) is set to five according to our empirical results.

Fig. 4

Convergence profiles of different variants of SLSEA on three LSMOP problems with 5000 decision variables in terms of IGD values

Fig. 5

Convergence profiles of different variants of SLSEA on two TREE problems with 15,000 and 30,000 decision variables in terms of HV values, respectively

SOTA large-scale MOEAs

The proposed SLSEA is compared with four representative large-scale MOEAs, including LSMOF [35], MOEA/DVA [30], DGEA [37], and LMOCSO [36]. In LSMOF, NSGA-II is embedded as the optimizer for the second stage using 50\(\%\) of FEs for fair comparisons, and the population size n is set to 100 for all the compared algorithms. To be more specific, the number of reference solutions r is set to ten. The population size \(n_s\) and the scale factor \(F_m\) of the single-objective differential evolution algorithm are set to 30 and 0.8, respectively. For MOEA/DVA, the number of sampling solutions in control variable analysis NIC is set to 20, the maximum number of tries required to judge the interaction NIA is set to six, and the MOEA/D based on differential evolution (MOEA/D-DE) [51] is used for uniformity optimization. The number of reference solutions r is set to ten in DGEA, and there is no specific parameter in LMOCSO. Regarding SLSEA, the population size for variable classification \(n_b\) is set to ten and the sampling size \(n_s\) is set to five.

Ablation studies

In our proposed SLSEA, two additional sampling strategies, i.e., diversity-related sampling and local search-related sampling, are involved. It is essential to investigate the contribution of each strategy to the performance of SLSEA. We have conducted a series of ablation studies by comparing the performance of different variants of SLSEA in solving LSMOPs. Empirically, we compare the original algorithm (termed SLSEA) with its four variants (refer to “Variants of SLSEA” in “Experimental settings”) on eight test instances selected from three benchmark test suites, including three DTLZ problems from conventional artificial MOPs, three LSMOP problems from artificial LSMOPs, and two TREE problems extracted from real-world applications.

The convergence profiles of the five compared algorithms on DTLZ1, DTLZ4, and DTLZ7 with 5000 decision variables are displayed in Fig. 3 in terms of IGD values. To be more specific, the mean IGD values are marked by different symbols, and the corresponding lower and upper boundaries are given by the color bars. SLSEA\(\setminus \)CD has achieved the smallest mean IGD values on the DTLZ1 problem, SLSEA\(\setminus \)D has performed the best on the DTLZ4 problem, and SLSEA\(\setminus \)DL has performed the best on the DTLZ7 problem. All in all, SLSEA achieves average performance across these three test problems.

The convergence profiles on three representative LSMOP problems with 5000 decision variables are given in Fig. 4 in terms of IGD values. Unlike the comparison results on the DTLZ problems, SLSEA\(\setminus \)CD has performed the best in all three cases, followed by SLSEA\(\setminus \)DL, while SLSEA\(\setminus \)D, SLSEA\(\setminus \)L, and SLSEA performed similarly. For the LSMOP problems, the local search-based sampling is capable of handling LSMOPs with complex variable interactions.

Since TREE problems are constructed from real-world applications whose true PFs are unknown, we use HV values to show the convergence profiles on TREE1 with 15,000 decision variables and TREE4 with 30,000 decision variables (as shown in Fig. 5). Different from the results on DTLZ and LSMOP problems, SLSEA has performed the best while SLSEA\(\setminus \)DL and SLSEA\(\setminus \)L performed the worst.

In summary, the local search-related sampling has contributed significantly to the performance of SLSEA. Due to the specific properties of each tested problem, different combinations of sampling strategies have led to different performances. For instance, on the three LSMOP problems, SLSEA without the diversity and local search sampling strategies performed the second best, indicating that these LSMOPs pose little difficulty for diversity maintenance. By contrast, DTLZ1 is a multimodal MOP where diversity maintenance is essential, and thus SLSEA\(\setminus \)DL has performed the worst there due to the loss of diversity maintenance.

The different variants of SLSEA have performed differently on the artificial test instances, where local search-related sampling has contributed the most to their performance. Nevertheless, SLSEA has performed the best on the two real-world test instances. It can be concluded that local search-related sampling is effective in large-scale optimization. Besides, existing artificial benchmark problems may be too regular, involving biased preferences and missing real-world properties. Thus, we still use the original version of SLSEA in the following experiments due to its comprehensive consideration of problem properties and its robustness.

Comparative studies

The general performance of SLSEA is compared to four SOTA large-scale MOEAs on LSMOP and TREE test suites. The computational complexity of the proposed SLSEA and the computation time for the compared algorithms are also presented to indicate its efficiency.

Three different experimental results indicate the overall performance, dynamic behavior, and final results of all the compared algorithms, respectively.

Table 1 The IGD results of CCGDE3, LCSA, IM-MOEA, LSMOF, MOEA/DVA, DGEA, LMOCSO and SLSEA on 54 test instances
Table 2 The HV results of CCGDE3, LCSA, IM-MOEA, LSMOF, MOEA/DVA, DGEA, LMOCSO and SLSEA on 54 test instances
Fig. 6

Convergence profiles of five compared large-scale MOEAs on nine LSMOP problems with 5000 decision variables in terms of IGD values

Fig. 7

The non-dominated solutions achieved by the five compared algorithms on LSMOP1 to LSMOP9 with 5000 decision variables in the run associated with the best IGD values

Performance on LSMOP problems

The overall performance is presented in Tables 1 and 2, which show the IGD and HV results achieved by the five compared MOEAs on nine LSMOP problems with 1000, 2000, and 5000 decision variables, respectively. It can be observed from the first table that SLSEA has achieved the largest number of best results, followed by DGEA, LMOCSO, and LSMOF, while MOEA/DVA has failed to achieve any best result. Notably, MOEA/DVA has consumed four times the maximum number of FEs for decision variable analysis. Thus, it has no additional resources for further optimization, which could explain its unsatisfactory performance. To be more specific, SLSEA has achieved the best results mainly on LSMOP4, LSMOP5, LSMOP6, and LSMOP8; DGEA has performed the best on LSMOP1 and LSMOP2; LSMOF has shown superior performance mainly on LSMOP7; and LMOCSO has shown its advantages on LSMOP3 and LSMOP9. For LSMOP3, LSMOP5, LSMOP7, and LSMOP8, the differences in the achieved IGD results are significant (e.g., over 100 times on LSMOP3 and LSMOP7), since these problems are challenging for large-scale MOEAs to converge to the PFs. For the remaining test problems, the compared algorithms have achieved competitive results, since the difficulty of these problems lies mainly in maintaining diversity to obtain evenly distributed results. As for Table 2, LSMOF has achieved the largest number of best results, which can be attributed to the fact that LSMOF has achieved widespread solution sets in the objective space, leading to better HV values in comparison with SLSEA (refer to Fig. 7).

To visually show the dynamic behaviors of the compared algorithms in large-scale multiobjective optimization, Fig. 6 presents their convergence profiles in terms of IGD values. The convergence profiles of LSMOF and SLSEA are similar on LSMOP1, LSMOP3, LSMOP5, LSMOP6, LSMOP7, and LSMOP8; DGEA converges similarly to SLSEA on LSMOP1, LSMOP2, LSMOP4, and LSMOP6; and SLSEA has shown overall superiority over LMOCSO and MOEA/DVA on all the test instances. It can be concluded that the search behavior of SLSEA is similar to that of DGEA and LSMOF, since they all have effective strategies for dealing with a large number of decision variables. Nevertheless, due to its balanced consideration of convergence enhancement, diversity maintenance, and local search, SLSEA has shown better versatility in solving different LSMOPs, whereas LSMOF and DGEA have shown biased performance.

Moreover, the final non-dominated solutions achieved by the five compared algorithms on the nine LSMOP problems in the run associated with the best IGD value are displayed in Fig. 7. It can be seen that SLSEA has behaved similarly to DGEA on LSMOP1, LSMOP2, LSMOP6, and LSMOP8, and similarly to LSMOF on LSMOP2, LSMOP6, LSMOP7, and LSMOP8, which largely agrees with the convergence profiles in Fig. 6. Both LSMOF and DGEA have used direction vectors in the decision space, while SLSEA has used the guidance of variable classification; thus, all three have shown effectiveness in large-scale multiobjective optimization. Nevertheless, due to the differences in offspring generation, SLSEA tends to combine the advantages of LSMOF and DGEA to achieve balanced performance, leading to its superiority over both.

In the Supplementary Materials, we also provide the convergence profiles of the compared algorithms on five representative LSMOP problems with a maximum number of \(5\times 10^6\) FEs (25\(\times \) the original setting). The results indicate that SLSEA tends to converge fast within the first \(1\times 10^6\) FEs and to stagnate after around \(4\times 10^6\) FEs. Thus, SLSEA is more suitable for solving LSMOPs with limited FEs.

Table 3 The computation time consumed by five compared algorithms on four test instances

Complexity analysis

As indicated in Algorithm 1, the computational complexity of SLSEA is mainly contributed by four components, i.e., variable classification, diversity-related sampling, local search-related sampling, and environmental selection. For the first component, the computational complexity in terms of big O notation, as indicated in Algorithm 3, is \(O(2nm)+2\times O(n_b\times (n_s+mn_s+mn^2+mn_b^2)+n_b^2+2\times (2n_b)^2)\), i.e., \(O(mn+mn^2n_b+mn_b^3+n_bn_s+mn_sn_b+n_b^2)\) after eliminating the constant factors, which can be further simplified to \(O(mn^2n_b+mn_b^3+mn_bn_s+n_b^2)\). Algorithms 5 and 6 are similar, and their total computational complexity is \(O(mn+n\log n+n_b(n+1))\) \(+\) \(O(n_s+n_s(n+1))\). As for the last component, the computational complexity of the environmental selection in NSGA-II is \(O(m(n+nn_b+nn_s)^2)\) [8]. In summary, the computational complexity of SLSEA is \(O(mn^2n_b+mn_b^3+mn_bn_s+n_b^2)\) \(+\) \(O(mn+n\log n+n_b(n+1))\) \(+\) \(O(n_s+n_s(n+1))\) \(+\) \(O(m(n+nn_b+nn_s)^2)\), which can be simplified to \(O(m(n^2n_b+n_b^3+(n+nn_b+nn_s)^2)+n(\log n+n_b+n_s)+n_b^2)\), i.e., \(O(mn^2(n_b+(1+n_b+n_s)^2)+mn_b^3)\). The main computational cost thus comes from the non-dominated sorting based environmental selection, which could be reduced by divide-and-conquer strategies, e.g., selecting non-dominated solutions from each subpopulation instead of from the combination of all solutions. Besides, if \(n_b\) and \(n_s\) are treated as constants, the final computational complexity of SLSEA reduces to \(O(mn^2)\), the same as that of NSGA-II. Thus, the efficiency of SLSEA in terms of computational complexity is acceptable.
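To illustrate where the dominant \(O(mn^2)\) term comes from, consider that identifying the non-dominated solutions in a population requires pairwise dominance comparisons: each comparison inspects all \(m\) objectives, and there are \(O(n^2)\) pairs. The sketch below (illustrative only, not the actual NSGA-II fast non-dominated sorting implementation) makes this cost explicit.

```python
import numpy as np

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b if it is no
    worse in every objective and strictly better in at least one.
    A single check costs O(m)."""
    return np.all(a <= b) and np.any(a < b)

def nondominated(objs):
    """Return indices of the non-dominated solutions in a population.
    The double loop over n solutions yields the O(m n^2) term that
    dominates the overall complexity of the environmental selection."""
    objs = np.asarray(objs, dtype=float)
    n = len(objs)
    keep = []
    for i in range(n):
        if not any(dominates(objs[j], objs[i]) for j in range(n) if j != i):
            keep.append(i)
    return keep
```

The divide-and-conquer idea mentioned above would apply this procedure within each subpopulation first, shrinking the \(n\) in the quadratic term before combining the survivors.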

Table 3 further displays the computation time of the five compared algorithms on two LSMOP problems and two TREE problems. It can be observed that SLSEA has consumed acceptable computation time for large-scale multiobjective optimization, which validates its efficiency in terms of computation time.

Conclusion
In this work, we have proposed efficient sampling strategies for large-scale multiobjective optimization. Three different sampling strategies are designed for convergence enhancement, diversity maintenance, and local search, respectively. Generally, the sampling strategies are computationally cheap, robust, and versatile for scaling up evolutionary multiobjective optimization, for three main reasons. First, the sampling operation is efficient compared with generic crossover and mutation operations: even when the number of decision variables increases rapidly, the cost of generating promising candidate solutions increases only linearly, as validated by the computation time comparison on LSMOPs with a large number of decision variables. Second, two features ensure the robustness of SLSEA in large-scale optimization. On the one hand, the comprehensive consideration of convergence enhancement, diversity maintenance, and local search enables SLSEA to exhibit diverse offspring generation behaviors; on the other hand, using probability distributions enhances the randomness of the generated offspring solutions for better global optimization. Third, since different probability distributions serve different purposes, the behavior of SLSEA can be easily adjusted for handling different types of LSMOPs. In other words, if an expert has prior knowledge about the landscape of an LSMOP, a specific sampling strategy can be preferred to achieve tailored behaviors for that problem. In summary, efficient sampling strategies are promising for large-scale multiobjective optimization, and more effective sampling strategies are highly desirable.
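The linear scaling claimed above can be seen in a generic distribution-based sampling operator: producing one offspring touches each of the \(d\) decision variables exactly once. The sketch below is purely illustrative (it uses a simple normal distribution centred on a parent and does not reproduce SLSEA's actual sampling strategies).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_offspring(parent, sigma=0.1, lower=0.0, upper=1.0):
    """Generate one offspring by sampling from a normal distribution
    centred on the parent solution. The cost is O(d) in the number of
    decision variables, unlike pairwise recombination schemes that
    must align two parents. Illustrative only; SLSEA uses different,
    purpose-specific distributions."""
    child = parent + rng.normal(0.0, sigma, size=parent.shape)
    # Repair: keep the offspring inside the box constraints.
    return np.clip(child, lower, upper)

parent = rng.random(1000)       # d = 1000 decision variables
child = sample_offspring(parent)
```

Swapping the distribution (e.g., widening sigma for diversity, narrowing it for local search) changes the operator's behavior without changing its linear cost, which is the adjustability argued for above.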

In addition to the two convergence- and diversity-related sampling strategies, we have also emphasized the role of local search in this work. Ablation studies using different variants of SLSEA have indicated the importance of local search, which has been largely ignored in most existing large-scale MOEAs. Notably, the variant of SLSEA with local search-related sampling has behaved similarly to LSMOF: both algorithms can converge to a certain degree at the early stage of evolution (refer to [35]). Although the local search-related sampling has contributed significantly to the performance of SLSEA, it is the combination of the three sampling strategies that is expected to perform robustly for large-scale multiobjective optimization. All in all, the experimental results on various LSMOPs have validated the effectiveness and efficiency of SLSEA.