Introduction

In many real-world industrial applications, we often face complex decision-making problems that require optimizing several (often conflicting) objectives simultaneously. These problems are called multiobjective optimization problems (MOPs). Let \(\Omega\) denote the decision space (i.e., the feasible search space), let \(x = (x_{1}, x_{2}, \ldots, x_{n})\) denote the decision vector, and let F denote the objective vector. An unconstrained minimization MOP can be defined as follows:

$$ \begin{gathered} {\text{minimize }}F(x) = (f_{1} (x),f_{2} (x), \ldots ,f_{m} (x)) \hfill \\ {\text{subject to }}x \in \Omega , \hfill \\ \end{gathered} $$
(1)

where \(F:\Omega \to R^{m}\) is the objective function and m is the number of objectives. Unlike in a single-objective problem (SOP), the multiple objectives of an MOP conflict with one another. Thus, solving an MOP usually yields a set of optimal solutions called the Pareto optimal set (PS), and the objective vectors corresponding to the PS form the Pareto front (PF). MOPs with more than three objectives are often called many-objective optimization problems (MaOPs).

MOPs widely exist in practical applications, such as short-term wind forecasting [1], autonomous control of unmanned aerial vehicles (UAVs) [2], aero-engine calibration [3], and optimization of deep neural networks [4]. MOPs are difficult to solve with conventional mathematical tools, but evolutionary algorithms, owing to their inherent parallelism, are well suited to them [5]. As a classic heuristic technique, evolutionary algorithms (EAs) have been demonstrated to be a powerful framework for solving MOPs and MaOPs and have been studied intensively for many years. According to their selection strategies, they can be roughly divided into three categories. The first category comprises multiobjective evolutionary algorithms (MOEAs) based on modified Pareto dominance, such as the evolutionary algorithm based on grid dominance [6], preference order ranking [7], and other new dominance relations [8]. These improved dominance ranking methods significantly increase the selection pressure among nondominated solutions and improve the efficiency of searching for the true Pareto front. The second category comprises indicator-based MOEAs, which replace the Pareto dominance relation with performance indicators that measure solution quality. SMS-EMOA [9] and IBEA [10] are two representative algorithms in this category. The third category comprises decomposition-based MOEAs, in which the original MOP is decomposed into multiple subproblems that are solved collaboratively through a population-based search [11]. Representatives of this type include the EA based on a localized weighted sum [12], the constrained decomposition approach with grids [13], the reference vector-guided EA (RVEA) [14], and the MOEA/D with an ensemble of neighborhood sizes (ENS-MOEA/D) [15]. Existing MOEAs are very effective at solving MOPs with two or three objectives. However, it has been found in practice that their performance declines severely on MaOPs. The main reason is that, as the number of objectives increases, most of the generated solutions become mutually nondominated, leaving the MOEAs without sufficient selection pressure toward the true PF [16]-[18].

As another branch of heuristic techniques, swarm intelligence algorithms have also been designed for MOPs and MaOPs. Several studies have shown that particle swarm optimization (PSO), inspired by the social behavior of birds, is a strong competitor of the genetic algorithm (GA) [19]. Although it cannot be concluded that PSO outperforms GA on MOPs, PSO has the advantages of easier implementation, efficient storage, and effective maintenance of solution diversity [20]-[22]. PSO converges quickly on single-objective optimization problems (SOPs), which rests on the premise that the personal- and global-best particles can be clearly identified. For example, AWPSO is a novel PSO algorithm that efficiently solves SOPs by means of a sigmoid function and clearly identified optimal particles [23]. On MaOPs, however, no particle in a multiobjective particle swarm optimizer (MOPSO) performs best on all objectives; the notional best is replaced by a set of tradeoff solutions. Most of the generated particles are mutually nondominated, which makes it difficult to choose the personal- and global-best particles. Since these particles guide the search direction of the swarm, they have a considerable impact on the performance of a PSO algorithm. Therefore, how to define the personal- and global-best particles has become the most important issue that MOPSOs need to solve. Many PSO methods attempt to solve this problem. The first approach uses a Pareto-based ranking scheme to define the personal- and global-best particles; CPSO [24], OMOPSO [25], and SMPSO [26] are three typical algorithms, and they usually choose less crowded nondominated solutions as the global-best particle. In addition, MOPSOs based on enhanced ranking schemes have been proposed, such as global marginal ranking [27] and preference order ranking [28]. In these algorithms, the particles in the external archive are first sorted according to the corresponding criteria, and then elite particles are selected as candidates for the personal- and global-best particles according to their rankings. The second approach is decomposition-based MOPSOs, in which the original MOP is decomposed into a number of single-objective optimization problems to which single-objective PSO algorithms can be applied directly. For example, AgMOPSO [29] uses a novel decomposition approach to select the personal- and global-best particles during the evolutionary search, and HMOPSO-ARA [30] introduces the position information of the local-best particle through the decomposition method to improve the search efficiency of PSO.
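For reference, the canonical PSO update that the MOPSOs above build on can be sketched in a few lines of NumPy. This is a generic textbook sketch with illustrative parameter values, not the implementation of any cited algorithm; it makes the central difficulty concrete, since pbest and gbest are the only attractors in the update, and the scheme therefore presupposes that both can be identified unambiguously.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0):
    """One canonical PSO update: every particle is pulled toward its own
    personal-best position and the single global-best position."""
    r1 = np.random.rand(*x.shape)   # per-dimension uniform random factors
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new

# Toy usage: 5 particles in a 3-dimensional decision space.
x = np.random.rand(5, 3)
v = np.zeros_like(x)
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0])
```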

Recently, the competitive swarm optimizer (CSO) has become increasingly popular owing to its high search efficiency in solving MOPs. CSO is a variant of PSO that adopts a pairwise competition strategy to update the population. Cheng et al. were the first to introduce a competition mechanism into PSO, applying it to SOPs [31]. In their method, the dynamic system is driven by a random competition mechanism in which any particle can be a potential leader. Afterwards, Zhang proposed a competitive mechanism-based multiobjective PSO algorithm (CMOPSO), which effectively enhances swarm diversity for tackling MOPs with multimodal landscapes [32]. Moreover, a large-scale multiobjective CSO (LMOCSO) was proposed for solving large-scale multiobjective optimization problems (large-scale MOPs). Unlike existing algorithms that focus on modifying the velocity update, LMOCSO adopts a two-stage strategy to update the positions of the particles, which significantly improves search efficiency [20]. In addition, some recently proposed particle update strategies also provide research ideas for CSO [33].
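For concreteness, the pairwise competition underlying CSO can be sketched as follows for a single-objective problem. The loser-learns-from-winner update, with a phi-weighted pull toward the swarm mean, follows Cheng et al.'s formulation [31], but the code itself is an illustrative sketch rather than their implementation.

```python
import numpy as np

def cso_generation(x, v, fitness, phi=0.1):
    """One CSO generation for a single-objective problem: particles are paired
    at random, the winner of each pair survives unchanged, and the loser
    learns from the winner (and, weighted by phi, from the swarm mean)."""
    n, d = x.shape
    perm = np.random.permutation(n)
    x_next, v_next = x.copy(), v.copy()
    mean = x.mean(axis=0)
    for a, b in zip(perm[: n // 2], perm[n // 2 :]):
        win, lose = (a, b) if fitness(x[a]) < fitness(x[b]) else (b, a)
        r1, r2, r3 = np.random.rand(3, d)
        v_next[lose] = r1 * v[lose] + r2 * (x[win] - x[lose]) + phi * r3 * (mean - x[lose])
        x_next[lose] = x[lose] + v_next[lose]
    return x_next, v_next

# Toy usage on the sphere function (minimization).
x = np.random.rand(10, 5)
v = np.zeros_like(x)
x, v = cso_generation(x, v, fitness=lambda p: (p ** 2).sum())
```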

PSOs and CSOs perform well on many types of benchmark MOPs and MaOPs, but several noticeable problems remain. First, because no single particle can perform best on all objectives, MOPSOs are incapable of clearly determining the swarm leaders when solving MaOPs [34]. Second, the fast convergence of PSO on SOPs rests on the premise that the search direction can be clearly identified [21], whereas in MOPSOs several objectives must be considered at once. Third, after the particles' positions are updated, some of them perform invalid searches. Moreover, traditional CSOs face problems similar to those of MOPSOs: they also perform poorly on MaOPs, and the performance of CSO degrades dramatically as the number of objectives increases.

To address the abovementioned issues, we propose a competitive swarm optimizer with probabilistic criteria, termed MOCSOP. The main contributions of this paper are summarized as follows.

  • To clearly define the leaders of the swarm when solving MaOPs, we propose a probability criterion to estimate the quality of each particle in the population and select the leaders according to the value of the joint probability. The proposed probability estimation method shows good robustness, as its performance is not affected by the number or range of the objectives.

  • To address the low convergence efficiency of MOPSOs on some MaOPs, we design a competitive mechanism with a winner pool to update the particles' positions. Compared with the velocity- and position-update strategies of existing MOPSOs, this learning strategy based on the competitive mechanism with winner pool achieves better convergence on MaOPs.

  • To enable the swarm to distribute evenly along the PF, we construct an environmental selection scheme with a mixed probability criterion. The proposed criterion, built on a diversity mechanism, not only effectively promotes the diversity of the population but also strengthens the selection pressure to some extent in the early stage.

  • After the positions of the particles are updated, a number of invalid particles may be generated, and some particles even move away from the PF, which decreases the convergence efficiency. To address this issue, we propose a swarm update strategy that uses the particles in the external elite archive to update the current particle swarm. This ensures that all particles entering the next generation are valid, which effectively improves the convergence of the algorithm.

The rest of this article is organized as follows. The next section introduces the relevant background of MOPSO and the motivation of this article. Details of MOCSOP are given in Sect. "Proposed algorithm". In Sect. "Experimental results and analysis", experimental studies are carried out to examine the performance of the MOCSOP algorithm in detail. Finally, Sect. "Conclusions" provides our conclusions and some possible directions for future work.

Related background and motivations

PSO has been widely used for SOPs [31] and other applications [4]. Recent reports show that PSO is a powerful competitor of GA in solving MOPs, and many MOPSOs have been successfully applied to MOPs. Although MOPSOs are very effective at solving MOPs with two or three objectives [32, 26], most existing MOPSOs still perform poorly on MaOPs. Several significant challenges restrict their performance.


(1) Swarm leader.

In MOPs, no single particle performs best on all objectives; the notional best is replaced by a set of tradeoff solutions, which makes it difficult to choose the swarm leader [34]. Since the swarm leader particles guide the search direction of the particle swarm, they have a considerable impact on the performance of a PSO algorithm. Especially when solving high-dimensional MaOPs, particles will oscillate repeatedly in the objective space, which slows convergence [20]. Therefore, how to define the swarm leader has become the most important issue that MOPSOs need to solve. In current studies, several novel leader selection strategies have been proposed [22, 24, 34, 35]. However, these methods have complicated selection procedures and cannot completely solve the leader selection problem, which means that MOPSOs still lack sufficient selection pressure toward the true PFs.


(2) Convergence efficiency in solving MaOPs.

Unlike in SOPs, the conflicting nature of the multiple objectives means that no single search direction can be clearly identified. Therefore, MOPSOs do not show a convergence advantage over MOEAs. As shown in Fig. 1, we selected five MOPSO algorithms [32, 26, 19, 36, 22] for comparison with MOEA/D [11], each with a population size of 105 and 30,000 function evaluations (FEs). It is obvious from the figure that the convergence of MOEA/D on three-objective DTLZ3 is significantly better than that of the state-of-the-art MOPSOs. Consequently, the convergence efficiency of PSO is not high enough to find a set of Pareto optimal solutions within a limited number of generations.

Fig. 1

Nondominated solution set obtained by (a) CMOPSO, (b) SMPSO, (c) IDMOPSO, (d) MMOPSO, (e) MaOPSO/vPF and (f) MOEA/D on DTLZ3 with three objectives


(3) Invalid search.

Although PSO has been applied to MaOPs and other real-life applications, little work has considered the invalid search of particles in the objective space. In most existing MOPSOs, the velocities and positions of the particles are updated using the positional information of the personal- and global-best particles. After all the particles are updated, they pass directly into the next-generation population. However, not all updated particles are valid, which may leave insufficient selection pressure for the population to approach the true PFs. To illustrate this fact, Fig. 2 shows the positions of eight particles updated by the MMOPSO, IDMOPSO, and SMPSO strategies on two-objective DTLZ5. As shown in Fig. 2, the updated particles do not always move toward the PF; some even move away from it, which reduces search efficiency.

Fig. 2

a MMOPSO on 2-objective DTLZ5. b IDMOPSO on 2-objective DTLZ5. c SMPSO on 2-objective DTLZ5

To solve the abovementioned issues, we propose a competitive swarm optimizer with probabilistic criteria for MaOPs, termed MOCSOP. On the one hand, MOCSOP guarantees the convergence efficiency of the algorithm through the winner pool and the particle swarm update strategy. On the other hand, to produce well-distributed Pareto fronts, we use an environmental selection scheme with a mixed probability criterion to select the particles that enter the external archive. The specifics of MOCSOP are described in detail in Sect. "Proposed algorithm".

Proposed algorithm

Probability estimation method

MOPSOs guide the search direction of the particles in the swarm through appropriate swarm leaders, so choosing the swarm leaders is very important. The choice directly affects the performance of MOPSO algorithms, especially when solving MaOPs, and an inappropriate swarm leader selection method will increase the particles' invalid exploration of the objective space.

In view of the above, we use a probability estimation method to find the swarm leader particles in the current swarm and form the winner pool. We first compute, for each particle, the probability of winning a pairwise comparison on each objective:

$$ P_{k} (x_{i} ) = \frac{{D_{k} (x_{i} )}}{|S| - 1}. $$
(2)

Here, \(P_{k} (x_{i} )\) is defined, in probabilistic terms, as the probability that \(x_{i}\) wins a comparison on the k-th objective; that is, the probability that \(x_{i} \in S\) beats another solution randomly selected from S according to the k-th objective (\(f_{k}\), \(k = 1,2,\ldots,m\), where m is the number of objectives). If \(P_{k} (x_{i} ) > P_{k} (x_{j} )\), the probability of \(x_{i}\) winning a comparison on the k-th objective is higher than that of \(x_{j}\); in other words, \(x_{i}\) performs better than \(x_{j}\) on the k-th objective. Here, S is the finite set of feasible solutions under consideration, \(|\cdot|\) denotes set cardinality, so \(|S|\) is the population size, and \(D_{k} (x_{i} )\), calculated using the competition strategy, is the number of times that \(x_{i}\) has won the competition against the other particles in the population on the k-th objective. For a minimization problem, the comparison rule is as follows:

$$ com(j) = \begin{cases} 1 & {\text{if }}f_{k} (x_{i} ) < f_{k} (x_{j} ) \\ 0 & {\text{if }}f_{k} (x_{i} ) \ge f_{k} (x_{j} ), \end{cases} $$
(3)
$$ D_{k} (x_{i} ) = \sum\limits_{j = 1,j \ne i}^{|S|} {com(j)} . $$
(4)

For example, consider a problem involving the minimization of three objectives, \(f_{1}\), \(f_{2}\) and \(f_{3}\). The population contains four particles, a, b, c and d. Assume that the corresponding values of (\(f_{1}\), \(f_{2}\), \(f_{3}\)) are a ≡ (0.5, 1, 1), b ≡ (4, 4, 3), c ≡ (3, 3, 1.5) and d ≡ (1, 1.5, 2). We then construct a probability matrix M of dimensions \(|S| \times m\), where S is the population and m is the number of objectives. Each row of M corresponds to one individual and each column to one objective. In this example, a 4 × 3 matrix is constructed, as shown in Fig. 3a. Next, as shown in Fig. 3b, the number of times each particle wins the competition on the k-th objective is calculated by Eq. (4). Finally, as shown in Fig. 3c, the probability of each particle winning a comparison on the k-th objective is calculated according to Eq. (2).

Fig. 3

Illustrative example to show the process of probability estimation
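The worked example of Fig. 3 can be reproduced with a short NumPy sketch; the vectorized pairwise comparison below is our own illustration of Eqs. (2)-(4), not the paper's implementation.

```python
import numpy as np

# Objective values (f1, f2, f3) of particles a, b, c, d: the matrix M.
M = np.array([[0.5, 1.0, 1.0],    # a
              [4.0, 4.0, 3.0],    # b
              [3.0, 3.0, 1.5],    # c
              [1.0, 1.5, 2.0]])   # d

# D[i, k]: how many opponents particle i beats on objective k (Eqs. (3)-(4)).
# Entry [i, j, k] of the comparison tensor is True iff i beats j on k.
D = (M[:, None, :] < M[None, :, :]).sum(axis=1)

# P[i, k]: probability of winning a random pairwise comparison on k (Eq. (2)).
P = D / (len(M) - 1)
print(P[0])   # particle a wins every comparison: [1. 1. 1.]
```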

However, \(P_{k} (x_{i} )\) only reflects the probability that \(x_{i}\) wins a comparison on the k-th objective. To estimate the quality of the particles across all objectives, we use the joint probability:

$$ P(x_{i} ) = P_{1} (x_{i} ) \times P_{2} (x_{i} ) \times \cdots \times P_{m} (x_{i} ), $$
(5)

where \(P (x_{i} )\) is the probability that \(x_{i}\) wins a comparison on all objectives against another solution randomly selected from the current population S. As shown in Fig. 3d, if \(P (x_{i} ) = 1\), then \(x_{i}\) is the best for all objective functions and dominates all other particles in the swarm; if \(P (x_{i} ) = 0\), the convergence of \(x_{i}\) is worse than that of the other particles. The joint probability reflects the quality of particles in the population, but its calculation suffers from the "curse of dimensionality". For instance, consider a problem involving the minimization of five objectives \(f_{1}, \ldots, f_{5}\), and assume that the corresponding values of (\(P_{1} (y)\), \(P_{2} (y)\), \(P_{3} (y)\), \(P_{4} (y)\), \(P_{5} (y)\)) for a particle y are (0.1, 0.2, 0.5, 0.9, 0.4). By Eq. (5), \(P(y) = 0.1 \times 0.2 \times 0.5 \times 0.9 \times 0.4 = 3.6 \times 10^{ - 4}\). When the number of objectives increases significantly, \(P(y)\) may underflow. In addition, once a particle performs poorly on one objective, the joint probability value goes directly to 0. To obtain an easy-to-calculate equivalent of the joint probability, we transform the product into a summation through a logarithmic operation, a device often used in machine learning [37]. The transformed value can be computed as follows:

$$ PV(x_{i} ) = - \log (P_{1} (x_{i} )) - \log (P_{2} (x_{i} )) - \cdots - \log (P_{m} (x_{i} )). $$
(6)

It is worth noting that when \(P_{k} (x_{i} ) = 0\), we set \(P_{k} (x_{i} ) = 10^{ - 6}\). A smaller \(PV(x_{i} )\) indicates that \(x_{i}\) has a higher probability of winning comparisons on all objectives. Algorithm 1 gives the entire probability estimation process: each particle is assigned a joint probability value that reflects its quality in the swarm.

Algorithm 1: Probability estimation (pseudocode)
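A direct sketch of Eq. (6), including the \(10^{-6}\) floor, is given below; the P matrix repeats the values derived from the four-particle example of Fig. 3.

```python
import numpy as np

def joint_probability_value(P, eps=1e-6):
    """Eq. (6): negative-log transform of the joint probability. Zero
    probabilities are floored at eps so the logarithm stays finite."""
    return -np.log(np.maximum(P, eps)).sum(axis=1)

# P from the four-particle example (rows a, b, c, d; columns f1, f2, f3).
P = np.array([[1.0, 1.0, 1.0],
              [0.0, 0.0, 0.0],
              [1/3, 1/3, 2/3],
              [2/3, 2/3, 1/3]])
print(joint_probability_value(P))   # a scores 0; the dominated b scores worst
```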

The competition mechanism with winner pool

Recent literature reports that the competitive swarm optimizer (CSO), compared with traditional PSO, improves swarm diversity and avoids premature convergence when solving MOPs and MaOPs [20, 32]. Specifically, in the competitive swarm optimizer, two particles are randomly selected at a time; the velocity of the particle with the poorer fitness is updated according to the position of the particle with the better fitness, and the winner passes directly to the next generation of the swarm. Our method also uses the CSO strategy of guiding the whole swarm by learning from winners, but it makes three new contributions. First, we clearly define the swarm leaders by the probability criterion. Second, we form the winner pool by selecting the particles with the best joint probability values from the current population instead of selecting winners through random competition. Third, MOCSOP does not use the personal- and global-best particles mentioned in [19, 36, 38]; instead, the particles of the winner pool directly guide all particles toward the true PFs of MaOPs. In summary, the particle velocity is updated in the proposed MOCSOP by:

$$ v_{i,j} (t + 1) = \omega v_{i,j} (t) + c_{1} r_{1} (x_{w,j} (t) - x_{i,j} (t)), $$
(7)

where each particle has an n-dimensional position \(x_{i} (t) = (x_{i,1} (t), x_{i,2} (t), \ldots, x_{i,n} (t))\), t is the iteration number, \(\omega\) is the inertia weight, \(c_{1}\) is the learning factor, \(r_{1}\) is a random number generated uniformly in the range [0, 1], \(x_{w}\) denotes the position of the winner assigned to \(x_{i} (t)\), and the velocity of \(x_{i} (t)\) is the n-dimensional vector \(v_{i} (t) = (v_{i,1} (t), v_{i,2} (t), \ldots, v_{i,n} (t))\).

It is worth noting that the winner pool is formed from the top 10% of the particles in the current swarm ranked by joint probability value. The winner for \(x_{i}\) is randomly selected from the winner pool, and the position of \(x_{i}\) is then updated on the basis of the new velocity:

$$ x_{i} (t + 1) = x_{i} (t) + v_{i} (t + 1). $$
(8)

Furthermore, like most existing MOPSOs, MOCSOP also applies polynomial mutation [39].
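The update described by Eqs. (7) and (8) can be sketched as follows. The winner pool is taken as the 10% of the swarm with the smallest joint-probability values; the parameter values and the choice of drawing \(r_{1}\) once per particle (rather than per dimension) are our own illustrative reading. Polynomial mutation would follow this step and is omitted here.

```python
import numpy as np

def winner_pool_update(x, v, pv, w=0.4, c1=2.0, pool_frac=0.1):
    """Sketch of Eqs. (7)-(8): every particle learns from a winner drawn at
    random from the pool of swarm leaders, i.e., the pool_frac of the swarm
    with the smallest joint-probability values."""
    n = len(x)
    pool = np.argsort(pv)[: max(1, int(pool_frac * n))]  # indices of leaders
    winners = np.random.choice(pool, size=n)             # one winner per particle
    r1 = np.random.rand(n, 1)
    v_new = w * v + c1 * r1 * (x[winners] - x)           # Eq. (7)
    return x + v_new, v_new                              # Eq. (8)
```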

To further examine the positions of the particles in the winner pool, Fig. 4 presents an illustrative example. In the early stage of the evolution, the leaders lie closer to the PF than the rest of the population; these particles have better convergence quality and are regarded as swarm leaders to guide the CSO-based search. As evolution proceeds, most of the generated solutions become mutually nondominated, and the leaders spread over various positions on the PF, which enhances the diversity of the search to some extent.

Fig. 4

Example to illustrate how to identify the position of leaders in MOCSOP for DTLZ7 with three objectives, where the population size is set to 275 (the red points represent the leaders and the blue points represent the other particles in the swarm). a Solution set obtained by MOCSOP after 759 FEs. b Solution set obtained by MOCSOP after 12,903 FEs

Algorithm 2: Environmental selection (pseudocode)

Environmental selection

Similar to existing MOEAs [39], MOCSOP uses a set of predefined reference points to ensure the diversity of the obtained solutions. As presented in Algorithm 2, the combined population \(R_{t}\) is divided into different layers (\(F_{1}, F_{2}\), and so on) by a nondominated sorting procedure, where \(F_{j}\) is the j-th Pareto nondomination level of \(R_{t}\), and the last layer to be accepted, the critical front \(F_{l}\), is determined. Let \(S_{t} = F_{1} \cup \cdots \cup F_{l}\). If \(|S_{t} | = N\), the archive \(A^{\prime} = S_{t}\) is returned directly. Otherwise, when \(|S_{t} | > N\), we first estimate the joint probability of the particles in \(S_{t}\); the members of \(F_{1}, \ldots, F_{l-1}\) are accepted into \(A^{\prime}\), and the remaining \(K = N - |A^{\prime}|\) swarm members are chosen from the last front \(F_{l}\) by the association and niching operation with the mixed probability criterion (line 15 of Algorithm 2). We describe these steps in more detail in the following subsections.
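The skeleton below gives a deliberately simplified, runnable sketch of this selection: whole fronts are accepted while they fit, and the critical front is truncated purely by the joint-probability value. The normalization and the reference-vector association and niching of the actual Algorithm 2 are omitted, so this is a structural illustration only.

```python
import numpy as np

def nondominated_fronts(F):
    """O(m n^2) nondominated sorting; returns a list of index arrays F1, F2, ..."""
    n = len(F)
    # dom[i, j] is True iff solution i Pareto-dominates solution j (minimization).
    dom = (np.all(F[:, None, :] <= F[None, :, :], axis=2)
           & np.any(F[:, None, :] < F[None, :, :], axis=2))
    count = dom.sum(axis=0)               # how many solutions dominate each one
    fronts, assigned = [], np.zeros(n, dtype=bool)
    while not assigned.all():
        front = np.flatnonzero((count == 0) & ~assigned)
        assigned[front] = True
        count = count - dom[front].sum(axis=0)
        fronts.append(front)
    return fronts

def environmental_selection(F, pv, N):
    """Simplified sketch of Algorithm 2: accept whole fronts while they fit,
    then truncate the critical front by joint-probability value (smaller PV
    preferred). Reference-vector niching is omitted for brevity."""
    keep = []
    for front in nondominated_fronts(F):
        if len(keep) + len(front) <= N:
            keep.extend(front.tolist())
        else:
            k = N - len(keep)
            keep.extend(front[np.argsort(pv[front])[:k]].tolist())
            break
    return np.array(keep)   # indices of the retained archive members
```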

Objective space normalization

In general, different objectives have different ranges, which directly affects the diversity estimation of the population. Therefore, we perform an adaptive normalization procedure on the critical set \(S_{t}\). Several normalization methods have been proposed [40, 41]; we utilize the adaptive normalization method proposed in [39]. Specifically, the objective functions are normalized as follows:

$$ f_{i}^{\prime} (x) = \frac{{f_{i} (x) - z_{i}^{\min } }}{{b_{i} }}, $$
(9)

where the ideal point \(z^{\min } = (z_{1}^{\min } ,z_{2}^{\min } , \ldots ,z_{m}^{\min } )\) is constructed from the minimum value of each objective function \(f_{i}\), and \(b_{i}\) is the intercept of the i-th objective axis.
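A sketch of this normalization is given below. The paper adopts NSGA-III's adaptive intercept construction [39]; as a labeled simplification, the fallback here divides by the observed per-objective range when intercepts are not supplied.

```python
import numpy as np

def normalize_objectives(F, intercepts=None):
    """Eq. (9): shift by the ideal point and divide by the axis intercepts.
    Falls back to the observed per-objective range when the NSGA-III
    hyperplane intercepts b_i are not supplied (a simplification)."""
    z_min = F.min(axis=0)                          # ideal point
    b = intercepts if intercepts is not None else F.max(axis=0) - z_min
    b = np.where(b > 1e-12, b, 1e-12)              # guard degenerate axes
    return (F - z_min) / b
```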


1. Association and Niche-Preservation Operation

The proposed MOCSOP has association and niching operations similar to those of [39], except that the probability criterion is added to the niche-preservation procedure. In our method, when a reference vector already has one member of \(S_{t} /F_{l}\) associated with it, the particles with the best joint probability value are preferentially selected to enter the archive. A simple example is shown in Fig. 5, where A, B and C are nondominated solutions, and D, E, F and G are dominated solutions. Assume that five of the seven candidate solutions must be selected for the next archive. Since A, B and C are in the first layer, they are selected first; at that point, every reference vector has an associated particle in the first layer, and two more particles must still be selected from the last layer. For reference vector 2, NSGA-III randomly chooses a particle from D and E to enter the archive. However, random selection is uncertain, and particles of good quality may be missed. In MOCSOP, E is passed to the archive because its joint probability value is better than that of D according to Eq. (6); by the same reasoning, F is chosen to enter the archive.

Fig. 5

Illustrative example showing the difference between the archive update of our method and that of NSGA-III

There are two main reasons for adding the probability criterion to the association and niching operation. On the one hand, the evolutionary search and the swarm update strategy are applied to the external archive (see Sects. "Evolutionary search on the external archive" and "Swarm update strategy"), so having more particles with better joint probability values in the archive effectively improves search efficiency, especially in the early stage. On the other hand, during the experiments we found that the proposed method is beneficial for solving large-scale MOPs; see Sect. "Further discussion" for details.

Evolutionary search on the external archive

To further enhance the solution quality in the external archive and to compensate for the potential insufficiency of the CSO search on some MaOPs, we use an evolutionary search to further explore the archive. Recent work [36, 38] has shown that this hybrid scheme not only effectively improves the search ability of MOPSOs but also enhances the robustness of the algorithm on various complex PFs. In this paper, the evolutionary search framework is the same as in NMPSO [38], and we use simulated binary crossover (SBX) and polynomial mutation (PM) [39] to extend the search capabilities of the CSO. Owing to space limitations, the specific details of using the evolutionary search to assist the CSO can be found in [36, 38].

Swarm update strategy

After the velocities and positions of the particle swarm are updated, the swarm contains a large number of invalid particles, mostly concentrated in crowded areas or far from the Pareto front. In response, this article proposes a simple and efficient particle swarm update strategy that lets the particles search the Pareto front efficiently while avoiding repeated searches of invalid areas that would harm the convergence and balance of the algorithm. The specific details are shown in Algorithm 3. After the environmental selection, if \(a_{i}\) comes from the updated particle swarm S (line 14 of Algorithm 4) and still survives, then \(a_{i}\) inherits the updated velocity (line 10 of Algorithm 4); otherwise, the velocity of \(a_{i}\) is set to 0. The external archive is then used directly as the next-generation particle swarm to ensure the validity of the particles in the swarm. The reasoning is as follows. First, the archive after environmental selection retains the elite individuals of this generation, and these individuals are obviously valid particles. Second, the external archive stores all the elite individuals found so far; using them as the next generation of particles ensures the effectiveness of the entire population and avoids invalid particle searches.

Algorithm 3: Swarm update strategy (pseudocode)
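A sketch of the velocity-inheritance rule is shown below. Tracking particles through environmental selection with an explicit id array is our own bookkeeping device; the paper's Algorithm 3 expresses the same survivor test in terms of archive membership.

```python
import numpy as np

def swarm_update(archive_x, archive_ids, swarm_ids, swarm_v):
    """Sketch of Algorithm 3: the external archive becomes the next swarm.
    An archive member that came from this generation's updated swarm keeps
    its updated velocity; every other member restarts from zero velocity."""
    row_of = {pid: r for r, pid in enumerate(swarm_ids)}
    v_next = np.zeros_like(archive_x)
    for r, pid in enumerate(archive_ids):
        if pid in row_of:                      # survivor of the updated swarm S
            v_next[r] = swarm_v[row_of[pid]]   # inherit the updated velocity
    return archive_x.copy(), v_next
```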

Figure 6 illustrates the advantage of the swarm update strategy over traditional PSO. Traditional PSO usually passes the updated particles into the next generation of the swarm [24, 38]. As shown in Fig. 6, the particle swarm {d, e, f} would be the next generation of the swarm. We can see that the positions and motion directions of d and f are far away from the PF; this situation increases the invalid search of PSO in the objective space and harms convergence. By contrast, the particle swarm {a, e, g} is selected by the proposed swarm update strategy; these are the best candidate particles in terms of convergence and diversity. In addition, e retains its velocity direction, which ensures that the particles keep moving toward the PF.

Fig. 6

Illustration of the swarm update strategy on a bi-objective problem

Algorithm 4: Complete framework of MOCSOP (pseudocode)

Complete algorithm of MOCSOP

Like most existing MOPSOs, the proposed MOCSOP has a very simple framework. Algorithm 4 presents the pseudocode of the complete framework, which consists of the following steps. It begins with the initialization of a random population S and a set of uniformly distributed reference vectors Z: for each particle in S, the position is randomly generated and the velocity is set to 0, and we use Das and Dennis's [42] method to generate uniformly distributed reference points. In line 3, the external archive A is initialized, and all nondominated solutions in S are identified and added to A. During the evolutionary phase, we first use the probability estimation method to find the swarm leader particles in S and then select the particles with the better joint probability values to create the winner pool. After that, for each particle in S, the velocity and position are updated using Eqs. (7) and (8) in lines 10-11. Then, to enhance the search ability of the CSO, polynomial mutation is performed. In line 16, we update the archive A by executing the environmental selection. Then, in line 17, the evolutionary search strategy is applied to A to obtain new solutions. For the new population S', we execute the environmental selection to update the archive A again. Finally, the swarm update strategy is performed to ensure that the next-generation particles are all valid. The main loop repeats until a termination criterion is reached, and the archive A is reported as the final approximation of the PF.
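The main loop can be summarized by the structural sketch below, which reuses winner_pool_update and environmental_selection from the earlier sketches. The evolutionary search on the archive (SBX and PM), the reference-vector niching, and the velocity inheritance of Algorithm 3 are stubbed or simplified, so this illustrates only the control flow of Algorithm 4, not the full method.

```python
import numpy as np

def probability_estimation(F, eps=1e-6):
    """Eqs. (2)-(4) and (6) in one step (see the earlier sketches)."""
    D = (F[:, None, :] < F[None, :, :]).sum(axis=1)
    return -np.log(np.maximum(D / (len(F) - 1), eps)).sum(axis=1)

def mocsop(evaluate, n_var, n_obj, N=100, max_fes=20000):
    """Structural sketch of Algorithm 4; `evaluate` maps a decision vector
    in [0, 1]^n_var to an n_obj-dimensional objective vector."""
    rng = np.random.default_rng(0)
    X = rng.random((N, n_var)); V = np.zeros_like(X)
    F = np.array([evaluate(x) for x in X]); fes = N
    # External archive (initialized with the whole swarm for simplicity;
    # the paper keeps only the nondominated solutions).
    A_x, A_f = X.copy(), F.copy()
    while fes < max_fes:
        pv = probability_estimation(F)
        X, V = winner_pool_update(X, V, pv)              # Eqs. (7)-(8)
        X = np.clip(X + 0.01 * rng.standard_normal(X.shape), 0, 1)  # PM stand-in
        F = np.array([evaluate(x) for x in X]); fes += N
        R_x, R_f = np.vstack([A_x, X]), np.vstack([A_f, F])
        keep = environmental_selection(R_f, probability_estimation(R_f), N)
        A_x, A_f = R_x[keep], R_f[keep]
        # Swarm update: the archive becomes the next swarm (velocity
        # inheritance of Algorithm 3 omitted in this sketch).
        X, V, F = A_x.copy(), np.zeros_like(A_x), A_f.copy()
    return A_x, A_f
```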

Computational complexity analysis

The computational complexity of MOCSOP is mainly determined by the probability estimation and environmental selection operations. For a population of size N on an M-objective problem, the computational complexity of the probability estimation is \(O(MN^{2} )\). In the population update stage, all particles are updated in the worst case, which requires \(O(N)\) operations. The evolutionary search strategy performs two operations, simulated binary crossover (SBX) and polynomial mutation, which require \(O(M\frac{N}{2})\) time. The environmental selection requires \(O(MN^{2} )\) computations for the nondominated sorting and the niching operation. After the archive is updated, the swarm update strategy requires \(O(N)\) in the worst case. In summary, the worst-case time complexity of one generation of MOCSOP is \(O(MN^{2} )\).

Compared with existing MOPSOs and MOEAs, MOCSOP is computationally efficient in solving MaOPs. In the experimental section, we compare the average runtime of MOCSOP with those of the other evaluated approaches on MaOPs.

Experimental results and analysis

In this section, to demonstrate the effectiveness of our algorithm, we first compare it with five typical MOPSOs, namely CMOPSO [32], NMPSO [38], IDMOPSO [19], MMOPSO [36] and MaOPSO/vPF [22]. CMOPSO is a recently proposed competitive mechanism-based PSO; MMOPSO is an improved MOPSO with multiple search strategies; and NMPSO, IDMOPSO and MaOPSO/vPF are three novel PSO algorithms designed for MaOPs. These methods have shown excellent performance on both MOPs and MaOPs with various types of Pareto fronts. We then compare our approach with five state-of-the-art MaOEAs, MaOEA/IGD [43], NSGA-II/SDR [8], VaEA [44], MOEA/D-CMA [45] and A-NSGA-III [46], which have shown a good balance between diversity and convergence on MOPs and MaOPs.

In the experiments, we selected 16 test problems, DTLZ1-DTLZ7 [47] and WFG1-WFG9 [48], which are widely used to evaluate algorithm performance. Based on the types of their Pareto fronts, these test problems can be roughly divided into three groups. The first group is DTLZ1, which has a linear PF. The second group consists of DTLZ2-DTLZ4 and WFG4-WFG9, which have concave PFs; a problem with a concave PF may have a great number of local optima, which poses a great challenge for algorithms to push the population toward the PF. The third group consists of DTLZ5-DTLZ7 and WFG1-WFG3. These instances have discontinuous (DTLZ7 and WFG2), degenerate (DTLZ5, DTLZ6 and WFG3) and other complex PFs (WFG1), which challenge an algorithm's ability to maintain population diversity. In this paper, we use DTLZ and WFG with 3 to 15 objectives because they can be scaled to any number of objectives and decision variables. The number of decision variables for the DTLZ test suite is set to n = k + m − 1, where m is the number of objectives and n is the number of decision variables. As recommended in [43], we set k = 5 for DTLZ1, k = 10 for DTLZ2 to DTLZ6, and k = 20 for DTLZ7. For the WFG1-WFG9 test instances, the number of decision variables is set to n = k + l as suggested in [44], where k is set to m − 1 and the distance-related variable l = 10.

Experimental settings

  • Reference points and population size:

    NSGA-III, MOEA/D-CMA, MaOEA/IGD, VaEA, IDMOPSO, MaOPSO/vPF and MOCSOP all used Das and Dennis's [42] approach with two layers to generate uniformly distributed reference points. Following the suggestion of [16], for the test suites with 3, 5, 6, 8, 10 and 15 objectives, we set the number of weight vectors to 105, 126, 132, 156, 275 and 135, respectively. In addition, for quantitative comparisons, the population size of each compared method is set equal to the number of reference points.

  • Experimental settings of all compared algorithms:

    For fair comparisons, the parameters of all compared methods were set according to their original references. Table 1 lists the related parameters used in the experiments for each algorithm, where D is the dimension of the decision space, \(\omega\) is the inertia weight, \(c_{1}\) and \(c_{2}\) are two learning factors, \(r_{1}\) and \(r_{2}\) are two uniformly distributed random numbers, and \(\eta_{c}\) and \(\eta_{m}\) are the distribution indexes of SBX and PM, respectively; \(p_{c}\) and \(p_{m}\) are the crossover and mutation probabilities used in the evolutionary operators. For MOEA/D-CMA, the number K of Gaussian models is set to 5 and T is the neighborhood size. In IDMOPSO, k is set to 0.005 according to [45]. For MOCSOP, no additional parameters need to be specified.

  • Performance metrics:

Table 1 Parameter settings of all the compared algorithms

To assess the convergence and diversity of our method, we utilized the inverted generational distance (IGD) [49] and hypervolume (HV) [50] to evaluate the performance of the various approaches on MOPs and MaOPs. Both IGD and HV measure the convergence and diversity of the nondominated solutions generated by an algorithm relative to the true PF. In the calculation of IGD and HV, the reference points were sampled following the suggestion of [51]. Moreover, for a comprehensive evaluation, the Wilcoxon rank-sum test was employed to compare the performance of the evaluated models [52]. In the experiments, the symbols "+", "−" and "\(\approx\)" indicate that the results obtained by a compared algorithm are significantly better than, worse than, and similar to those obtained by MOCSOP, respectively. In this paper, the number of evaluations is adopted as the termination criterion: for DTLZ1-DTLZ7 and WFG1-WFG9, the maximal number of evaluations is set to M × 30,000. All experiments were run 20 independent times for each algorithm on each test instance on a PC with an Intel Core i7-8750H CPU and an Nvidia GeForce GTX 1060 GPU.
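For reference, the IGD indicator used throughout the experiments has a compact definition: the average distance from points sampled on the true PF to their nearest obtained solution. The sketch below is the standard formulation; in the experiments, the sampling of the PF reference set follows [51].

```python
import numpy as np

def igd(pf_samples, obtained):
    """Inverted generational distance: mean Euclidean distance from each
    point sampled on the true PF to its nearest obtained solution. Smaller
    values indicate better convergence and coverage."""
    d = np.linalg.norm(pf_samples[:, None, :] - obtained[None, :, :], axis=2)
    return d.min(axis=1).mean()
```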

Comparisons of MOCSOP with five competitive MOPSOs for solving MOPs and MaOPs

We first discuss the convergence of MOCSOP. To investigate the convergence of the proposed approach during the search process, we use three test functions, DTLZ1, DTLZ3 and WFG1, in a comparative experiment. For a quantitative evaluation, all compared models use the same initial population, and each test suite is run 20 times. The convergence profiles of the IGD values obtained by MOCSOP and the compared methods are plotted in Fig. 7. As shown in Fig. 7a, for DTLZ1 with its linear Pareto front, MOCSOP converges to the PF significantly faster than the other evaluated approaches. Especially at the beginning of optimization, MOCSOP converges rapidly, which means that the proposed approach is effective on problems with linear PFs. It is known that PSO encounters great challenges on DTLZ3 [36], mainly because DTLZ3 contains a large number of local optima that hinder existing MOPSOs from obtaining nondominated solutions. Figure 7b shows the IGD curves of all the compared algorithms on DTLZ3. We find that CMOPSO, IDMOPSO and MaOPSO/vPF perform poorly on DTLZ3; one possible reason for their poor convergence is that the PSO-based search loses efficiency on problems with a great number of local optima. Although NMPSO and MMOPSO reach the best final IGD values, their convergence speed is significantly slower than that of MOCSOP throughout the evolutionary process. Thus, it is unsurprising that the convergence of the proposed MOCSOP is better than that of NMPSO and MMOPSO on DTLZ3. To further observe the convergence of MOCSOP on complex PFs, Fig. 7c depicts the evolutionary trends of the compared models on WFG1. The WFG1 instance has an irregular PF, which makes it very challenging for MOPSOs to push the population toward the PF. As can be seen from the figure, MMOPSO, CMOPSO, IDMOPSO and MaOPSO/vPF have trouble converging. As for MOCSOP and NMPSO, they obtain similar IGD values in the early stage; however, the IGD value of NMPSO increases after 70,000 FEs. This phenomenon may be due to the fact that NMPSO only reaches a local optimum and does not move toward the true PF. In contrast, as the iterations proceed, the solutions obtained by MOCSOP get closer and closer to the true PF. By comparing the convergence of MOCSOP with those of traditional PSO algorithms, we conclude that the proposed method has a gratifying convergence capability.

Fig. 7

Evolutionary trajectories of IGD for MOCSOP, CMOPSO, NMPSO, MMOPSO, IDMOPSO and MaOPSO/vPF on a DTLZ1 with three objectives. b DTLZ3 with three-objectives. c WFG1 with six objectives

To make a visual comparison, Fig. 8 shows the nondominated sets obtained by MOCSOP and the other competitive MOPSOs on three-objective WFG2. WFG2 has a disconnected PF, which challenges an algorithm's ability to maintain population diversity. As shown in Fig. 8, all compared methods exhibit good convergence on WFG2 and have successfully converged to the true PF. However, these algorithms perform poorly at maintaining population diversity. NMPSO obtains a sparse nondominated set on the Pareto front, which indicates that the balanceable fitness estimation method is not suitable for problems with disconnected PFs. Although CMOPSO, MMOPSO, IDMOPSO and MaOPSO/vPF have dense populations, their solutions are not uniformly distributed over the disconnected parts. In contrast, the nondominated solution set obtained by our method is significantly better than those of the other evaluated approaches, as evidenced by our solution set being closer to the shape of the true Pareto front.

Fig. 8

Pareto nondominant solutions with the median IGD values among 20 runs obtained by CMOPSO, NMPSO, IDMOPSO, MMOPSO, MaOPSO/vPF and MOCSOP on three-objective WFG2

To conduct a comprehensive comparison of the various methods, Table 2 summarizes the median IGD results of MOCSOP and five current MOPSOs on DTLZ1-DTLZ7 and WFG1-WFG9 with 3-15 objectives. As can be seen from Table 2, the proposed MOCSOP wins 46 of the 80 comparisons, demonstrating its efficiency in handling general MaOPs. Specifically, MOCSOP achieves the best IGD values on DTLZ1 with 3 to 15 objectives, which demonstrates that it performs promisingly on problems with a linear PF. On instances with concave Pareto fronts, such as DTLZ2-DTLZ4, MOCSOP also produces results that are competitive with those of the other state-of-the-art approaches, especially at higher numbers of objectives. It is worth noting that the proposed MOCSOP performs worse than NMPSO on DTLZ5-DTLZ7 in terms of IGD. This finding is unsurprising: the reference points in MOCSOP have poor distributions on those irregular PFs, which may mislead the search efforts of the algorithm. Furthermore, CMOPSO obtains the worst IGD values on WFG3 because its conventional particle swarm update strategy lacks sufficient selection pressure to approach the true PF of a problem with a disconnected PF; this phenomenon also appears in MMOPSO and MaOPSO/vPF. Although the performance of MOCSOP is slightly worse than that of NMPSO, it performs competitively on WFG3. As far as IGD is concerned, the values obtained by the proposed MOCSOP on WFG3 with 3 to 15 objectives are better than those of MMOPSO and MaOPSO/vPF, which means that the nondominated solutions obtained by MOCSOP are closer to the true PFs than those obtained by MMOPSO and MaOPSO/vPF. For the other test instances with irregular PFs, such as WFG2, the proposed MOCSOP also achieves good performance in terms of both convergence and uniformity. Therefore, the comparison results in Table 2 demonstrate that MOCSOP can solve MaOPs with various types of PFs.

Table 2 Median IGD values among 20 independent runs obtained by CMOPSO, NMPSO, IDMOPSO, MMOPSO, MaOPSO/vPF and MOCSOP on DTLZ1-7 and WFG1-9 with 3, 5, 8, 10 and 15 objectives

Comparisons of MOCSOP with other state-of-the-art MaOEAs

To verify the capability of MOCSOP for handling MaOPs, we compare our approach with several state-of-the-art approaches: A-NSGA-III, MOEA/D-CMA, MaOEA/IGD, NSGA-II/SDR and VaEA. These are typical methods from different categories that use different techniques. NSGA-II/SDR is a variant of NSGA-II for MaOPs that uses a new dominance relation. A-NSGA-III applies an adaptive reference point scheme to the original NSGA-III. MaOEA/IGD is an IGD indicator-based evolutionary algorithm for MaOPs. Finally, VaEA is a vector angle-based evolutionary algorithm with significant advantages in diversity and convergence. All of these algorithms are very competitive at addressing MaOPs, which makes the comparisons more comprehensive.

Table 3 summarizes the HV values obtained by MOCSOP and the five state-of-the-art MaOEAs on DTLZ1-DTLZ7 and WFG1-WFG9 with 3, 5, 8, 10 and 15 objectives. As shown in Table 3, the proposed method yields superior performance on most of the test instances. Although it produces suboptimal results on some concave problems, it performs well on the test instances containing complex, degenerate and disconnected Pareto fronts. For instance, our method generates the best HV values on DTLZ1, WFG5 and WFG7 and produces competitive results on DTLZ2, DTLZ3 and WFG2. It is widely recognized that PSO operators do not perform well on WFG1 [36]; however, MOCSOP obtains competitive HV values on WFG1, which means that the proposed method preserves the diversity of the generated solutions and avoids local optima. Moreover, MOCSOP maintains consistently good performance as the number of objectives increases, which indicates that it is a competitive model for handling MaOPs. For a visual comparison of the evaluated approaches, Fig. 9 plots, by parallel coordinates, the final nondominated solutions with the median HV value among 20 independent runs of the various models on the 10-objective DTLZ1 instance. As shown in Fig. 9, both MOEA/D-CMA and VaEA show worse convergence. Owing to its effective reference point adaptation strategy, A-NSGA-III shows very competitive diversity, but it does not converge well over the entire PF. Although the solutions of NSGA-II/SDR reach the PF region, they fail to cover all the objectives because the modified Pareto dominance may misguide the search. Furthermore, the solution sets obtained by MaOEA/IGD show a good distribution on DTLZ1, but Table 3 shows that they have a smaller HV value than those of MOCSOP, which indicates that they may not actually reach the true PF. As shown in Fig. 9 and Table 3, MOCSOP is the only algorithm that achieves both good diversity and convergence to the Pareto front on DTLZ1.

Table 3 Quantitative comparison of median HV values among 20 independent runs of various models on DTLZ1-7 and WFG1-9 with 3, 5, 8, 10 and 15 objectives
Fig. 9

Nondominated solution set with the average HV values among 20 runs on the 10-objective DTLZ1, shown by parallel coordinates. a A-NSGA-III. b MOEA/D-CMA. c MaOEA/IGD. d NSGA-II/SDR. e VaEA. f MOCSOP

To further analyze the robustness of each algorithm, we use performance scores [53] to evaluate the overall performance of the compared algorithms. Specifically, the performance score counts how many other algorithms are significantly better than the selected algorithm on the considered problem instance. Figure 10 summarizes the average performance scores for the different numbers of objectives and the different test problems; a smaller value means better performance. As shown in Fig. 10, MOCSOP performs best overall on 10 of the 16 test problems, which demonstrates its excellent performance across the test instances. Furthermore, the line chart of MOCSOP does not show large fluctuations compared with the other methods, which shows that our model has good robustness and strong search ability on different PF shapes.

Fig. 10

Average performance score over all dimensions for the different normalized test problems, namely DTLZ (Dx) and WFG (Wx). The smaller the score, the better the PF approximation in terms of the HV metric

Runtimes

To investigate the computational efficiency of MOCSOP, we record the actual running times of the compared algorithms on DTLZ1-DTLZ7 and WFG1-WFG9. For a fair comparison, all algorithms were implemented on an identical platform (MATLAB R2019a). Figure 11a shows the average runtimes of the evaluated MOPSOs on all instances with 8 objectives. In this figure, we find that MMOPSO achieves the best computational efficiency because its simple swarm leader selection strategy has a significant benefit in runtime. However, the proposed MOCSOP produces the second-best result and performs better than MMOPSO in terms of IGD. It is worth noting that, although both CMOPSO and MOCSOP use a competition mechanism, MOCSOP is significantly more computationally efficient than CMOPSO. This is because, in MOCSOP, the worst-case computational complexity of the proposed probability estimation method is \(O(MN^{2} )\), and the implementation of the competition mechanism with winner pool is simple, with negligible computational cost. By contrast, CMOPSO requires \(O(MN^{2} )\) and \(O(MN\log N)\) to compute the nondominated sorting and the crowding distance, respectively, and its competition mechanism performs two operations, leader selection and the computation of angles between vectors, costing \(O(MN)\). As a result, the competition mechanism of CMOPSO requires a maximum total runtime of \(O(MN^{2} ) + O(MN\log N) + O(MN)\). As can be further observed from Fig. 11b, the proposed MOCSOP presents competitive computational efficiency relative to the other state-of-the-art MOEAs.

Fig. 11

Average runtimes of the various evaluated approaches tested on all the benchmark instances. a 8-objective. b 15-objective

Discussions on probability estimation method

To further observe the differences between the leaders obtained by our method and those obtained by other fitness assignment methods, we use two common fitness estimation methods to select leaders. The first fitness assignment method considers the L2-norm of the objective values [16], which can be formulated as follows:

$$ Fitness(x) = \sqrt {\sum\limits_{i = 1}^{m} {f_{i} (x)^{2} } } . $$
(10)

The second fitness estimator is calculated by the sum of all the objectives [24] as:

$$ Fitness(x) = \sum\limits_{i = 1}^{m} {f_{i} (x)} , $$
(11)

where \(f_{i} (x)\) denotes the i-th objective value of x. Figure 12 presents an example showing the positions of the leaders when the winner pool is formed with the above fitness estimators. As shown in Fig. 12a, in the early stages the leaders are scattered across the objective space, whereas the leaders obtained by the proposed probability estimation method are closer to the ideal point, as illustrated in Fig. 4a. This means that the leaders obtained by the proposed method can guide the entire population toward the true PF in the early stages. As can be further observed from Fig. 12b, as the nondominated solutions increase, the leaders concentrate on local regions of the PF instead of the entire PF; by contrast, in Fig. 4b the leaders are distributed over the entire PF. The second fitness estimator encounters the same problem. In summary, the proposed probability estimation method adaptively adjusts the positions of the leaders at different stages and guides the entire population toward the Pareto front.
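For comparison with the probability criterion, both baseline estimators are one-liners; the code below simply restates Eqs. (10) and (11) over an objective matrix.

```python
import numpy as np

def fitness_l2(F):
    """Eq. (10): Euclidean norm of each particle's objective vector."""
    return np.sqrt((F ** 2).sum(axis=1))

def fitness_sum(F):
    """Eq. (11): plain sum of each particle's objectives."""
    return F.sum(axis=1)

# Both estimators collapse the objectives into a single scalar, so the
# selected leaders cluster near whichever region happens to minimize that
# scalar, rather than spreading over the whole front as in Fig. 4b.
```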

Fig. 12

Illustrative example to show the position of leaders for DTLZ7 with three objectives, where the population size is set to 275 (the red points represent the leaders and the blue points represent the other particles in the swarm). a Solution set obtained by the first fitness estimator after 759 FEs. b Solution set obtained by the first fitness estimator after 12,903 FEs. c Solution set obtained by the second fitness estimator after 759 FEs. d Solution set obtained by the second fitness estimator after 12,903 FEs

Ablation experiment

To demonstrate the benefits of the proposed swarm update strategy for convergence efficiency, we conduct an ablation experiment on four 10-objective instances. Figure 13 plots the evolutionary trajectories of the IGD values obtained by MOCSOP and MOCSOP-SU averaged over 20 runs, where MOCSOP-SU denotes MOCSOP without the proposed swarm update strategy. As shown in Fig. 13, for problems with concave PFs (DTLZ4), both MOCSOP and MOCSOP-SU guide the entire particle swarm to converge to the PF quickly and finally obtain similar IGD values. However, for DTLZ6 with its degenerate PF, the swarm update strategy improves the convergence speed significantly. The comparison between MOCSOP and MOCSOP-SU indicates that the proposed swarm update strategy is beneficial for convergence speed, especially on MaOPs with complex PFs.

Fig. 13

Evolutionary trajectories of IGD values obtained by MOCSOP and its variant on four 10-objective test problems. (a) DTLZ4. (b) DTLZ6. (c) WFG1. (d) WFG9

Discussion of free parameters

In the proposed MOCSOP, the size of the winner pool has a certain influence on performance because the winner pool guides the particle swarm toward the true PF. To achieve the desired nondominated solution set, the size of the winner pool needs a reasonable value. During the evolutionary process, we expect most of the particles in the population to be updated; if the winner pool is too large, too few particles in the swarm will be updated, which may degrade the performance of the proposed method. Therefore, we suggest keeping the winner pool small. We chose 5%, 10%, 15% and 20% as candidate values for an experimental comparison. To investigate the influence of these values on the performance of the algorithm, we use DTLZ as the benchmark test suite. Table 4 shows the results of MOCSOP for the different winner pool sizes on DTLZ1-DTLZ7 with six objectives, where MOCSOP-1, MOCSOP, MOCSOP-2 and MOCSOP-3 use winner pools of 5, 10, 15 and 20 percent of the population size, respectively. For a fair comparison, the other parameters remain unchanged. As seen in Table 4, MOCSOP significantly outperforms MOCSOP-1, MOCSOP-2 and MOCSOP-3 on 5 of the 7 instances. In particular, MOCSOP achieves relatively good results on DTLZ5, mainly because inappropriate candidate values increase the randomness of the competition, which inevitably harms performance on irregular problems; this phenomenon is also observed on DTLZ3. Therefore, based on the statistical IGD results on the DTLZ test suite, we recommend setting the size of the winner pool to 10 percent of the population size, even though this value may not be the best choice for other evaluation datasets.

Table 4 The median values of IGD obtained by different sizes of winner pools on DTLZ1-DTLZ7 with 6 objectives

Further discussion

During the experiments, we found that MOCSOP shows promising potential on large-scale MOPs. Table 5 lists the median IGD values obtained by MOCSOP and other state-of-the-art MOEAs on three-objective LSMOP1-LSMOP9 [20] with 300 decision variables, where the population size is set to 496. For fair comparisons, all compared algorithms are executed for 20 independent runs with 4,500,000 FEs. LMEA [54] and S3-CMA-ES [55] are two typical MOEAs that have shown good performance on large-scale MOPs. As shown in Table 5, MOCSOP obtains the best IGD results on 7 of the 9 test instances, which demonstrates that our method performs competitively on large-scale MOPs. Interestingly, the potential of MOCSOP for solving large-scale MOPs is related to our proposed environmental selection strategy. MOCSOP has an environmental selection strategy similar to that of NSGA-III, except that the probability criterion is added: when a reference vector has several associated particles, the particles with the best joint probability value are preferentially selected to enter the archive. This procedure is beneficial for large-scale MOPs mainly because their large numbers of decision variables require greater selection pressure than general MOPs. Figure 4a shows that particles with better joint probability values are closer to the ideal point in the early stages. In MOCSOP, the evolutionary search and swarm update strategies are applied to the external archive, so having more particles with better joint probability values in the archive effectively improves search efficiency, especially in the early stage. In general, the proposed environmental selection scheme indirectly strengthens the selection pressure to some extent and improves the performance of MOCSOP on large-scale MOPs. To verify this hypothesis, we compared the proposed method with a variant in which the proposed environmental selection strategy is replaced by the environmental selection of NSGA-III. Figure 14 displays the experimental result: MOCSOP obtains a solution set with good convergence and diversity on three-objective LSMOP1 with 300 decision variables, whereas the solution set obtained by the variant is not satisfactory. These results demonstrate that the proposed environmental selection strategy is beneficial for improving the performance of MOCSOP on large-scale MOPs.

Table 5 The median values of IGD obtained by LMEA, S3-CMA-ES and MOCSOP on three-objective LSMOP1-LSMOP9 with 300 decision variables
Fig. 14

Nondominated solutions obtained by MOCSOP and its variant on three-objective LSMOP1 with 300 decision variables, where the number of evaluations is set to 4,500,000 and the population size is set to 496. (a) MOCSOP. (b) MOCSOP(NSGA-III)

In addition, to further examine the capability of the proposed MOCSOP in dealing with large-scale MOPs, we compare it with LMOCSO [20] on the DTLZ test problems. LMOCSO is an efficient competitive swarm optimizer (CSO)-based search method that performs well on large-scale MOPs. Table 6 lists the IGD values of MOCSOP and LMOCSO on three-objective DTLZ1-DTLZ7 with 300 decision variables. The comparison results indicate that the proposed MOCSOP performs better than LMOCSO on most of the test functions. For a visual comparison, Fig. 15 illustrates the nondominated solution sets obtained by MOCSOP and LMOCSO on DTLZ7 with different numbers of decision variables. As shown in Fig. 15, MOCSOP achieves competitive performance on three-objective DTLZ7 with 300 decision variables compared with LMOCSO. Notably, LMOCSO yields uniform distributions on DTLZ7 with 300 decision variables but fails to maintain population diversity on DTLZ7 with 22 decision variables, which indicates that LMOCSO generalizes poorly to problems with low-dimensional decision variables. By contrast, our method shows good versatility across different numbers of decision variables. These results demonstrate that the proposed MOCSOP is effective for dealing with large-scale MOPs.

Table 6 The median values of IGD obtained by LMOCSO and MOCSOP on three-objective DTLZ1-DTLZ7 with 300 decision variables, where the number of evaluations is set to 4,500,000 and the population size is set to 105
Fig. 15

Nondominated solutions obtained by MOCSOP and LMOCSO on three-objective DTLZ7 with different numbers of decision variables

Conclusions

In this paper, we proposed a competitive swarm optimizer with probabilistic criteria, termed MOCSOP, to tackle MaOPs. First, we estimated the joint probability of the particles in the population and selected the swarm leaders according to the joint probability values. Second, we utilized a competition mechanism with a winner pool to update the particle positions, which improves the efficiency of searching for the true Pareto front. Then, we exploited a diversity mechanism with the mixed probability criterion to ensure the diversity of the swarm. Finally, we designed a swarm update strategy that uses the particles in the external elite archive to update the current particle swarm, which effectively improves the convergence of the algorithm. The experimental results on the DTLZ1-7 and WFG1-9 test instances demonstrated that the proposed method is robust and superior to other MOPSOs and MaOEAs for tackling MOPs and MaOPs. Furthermore, the comparison between MOCSOP and other state-of-the-art large-scale MOEAs indicated that MOCSOP has promising potential on large-scale MOPs.

In the future, we will further investigate the performance of MOCSOP for large-scale many-objective optimization problems (large-scale MaOPs) and apply it to some real-world problems.