Heterogeneous differential evolution particle swarm optimization with local search

To develop a high-performance and widely applicable particle swarm optimization (PSO) algorithm, a heterogeneous differential evolution particle swarm optimization (HeDE-PSO) is proposed in this study. HeDE-PSO adopts two differential evolution (DE) mutants to construct learning exemplars with different characteristics for PSO: one DE mutant enhances exploration and the other enhances exploitation. To further improve search accuracy in the late stage of optimization, BFGS (Broyden-Fletcher-Goldfarb-Shanno) local search is employed. To assess the performance of HeDE-PSO, it is tested on the CEC2017 test suite and the industrial refrigeration system design problem. The test results are compared with seven recent PSO algorithms, JADE (adaptive differential evolution with optional external archive) and four meta-heuristics. The comparison results show that with two DE mutants to construct learning exemplars, HeDE-PSO balances exploration and exploitation and adapts well to different kinds of optimization problems. On 10-dimensional and 30-dimensional functions, HeDE-PSO is outperformed by the most competitive PSO algorithm on only seven and six functions, respectively. HeDE-PSO obtains the best performance on sixteen 10-dimensional functions and seventeen 30-dimensional functions. Moreover, HeDE-PSO outperforms the other compared PSO algorithms on the industrial refrigeration system design problem.


Introduction
Since the introduction of the genetic algorithm [1], computing intelligence has attracted widespread attention and become a powerful tool for handling complex theoretical and real-life optimization problems. Nowadays, many population-based stochastic optimization algorithms are proposed every year. Compared with traditional optimization methods, population-based algorithms are easy to implement and can provide near-optimal solutions on most problems. Furthermore, they are applicable to non-continuous and non-differentiable problems, and remain applicable when a mathematical model is absent. Population-based algorithms provide an effective approach to solving NP-hard problems.
As one of the most widely used population-based algorithms, particle swarm optimization (PSO) [2] is simple in concept and converges fast. Due to the effectiveness of PSO in solving complex optimization problems, many theoretical and real-life application PSO algorithms have been reported since its inception. The original PSO suffers from diversity loss and premature convergence; on complex multimodal problems, PSO is apt to fall into local optima. To mitigate the premature convergence problem, a series of excellent PSO algorithms have been proposed in the past two decades. These improved versions of PSO can be roughly divided into four classes, namely, parameter adjustment algorithms, neighborhood topology algorithms, hybridization with other evolutionary algorithms (EAs) and learning strategy algorithms. The representative literature for each class is as follows: 1. Parameter adjustment: Adjusting the control parameters (inertia weight and acceleration coefficients) can control an algorithm's exploration and exploitation capability. Linear [3], non-linear [4], fuzzy system [5,6], chaotic [7], and adaptive [8] adjustment algorithms have been proposed to balance exploration and exploitation. Parameter adjustment can improve performance to some extent, but on complex multimodal problems its impact is limited. Nowadays dynamic parameter adjustment methods are employed to enhance the performance of novel learning strategies [9]. 2. Topology: Neighborhood topology determines the information exchange among the particle swarm. Common topologies include ring, gbest, wheel, random, star and von Neumann [10]. Besides static topologies, a series of dynamic topologies have been proposed. For example, Liang et al. [11] proposed a dynamic multi-swarm topology, where the whole swarm is divided into many sub-swarms and these sub-swarms are regrouped periodically. Li et al. [12] introduced a dynamic pyramid topology based on the particles' fitness ranking. Lim et al.
[13] developed an increasing topology connectivity to achieve better control of exploration/exploitation. Lin et al. [14] proposed a tournament topology particle swarm optimization. The previous literature indicates that high topology connectivity converges quickly, while low topology connectivity performs well on multimodal problems. Nowadays dynamic topology with fitness selection is the primary development trend. 3. Hybridization: PSO is widely hybridized with other evolutionary algorithms to obtain better performance. There are three major hybrid methods: (1) employing sub-swarms of different algorithms to perform different functions [15]; (2) adopting another EA's operators to overcome premature convergence [16]; and (3) utilizing another EA's operators to construct learning exemplars [17]. In recent years, many hybrid PSO algorithms have been proposed. For example, Abdulhameed et al. [18] proposed a hybrid algorithm based on PSO and the crow search algorithm (CSA) for feature selection; the CSA is adopted to enhance global search. Yang [19] presented a hybrid algorithm based on PSO and cuckoo search (CS) for tuning PID parameters; CS with random walk was employed to enhance the swarm diversity of PSO. Abdülkadir et al. [20] proposed a hybrid firefly and PSO algorithm with chaotic local search. Sama et al. [21] used PSO to explore the global search area and utilized fast simulated annealing to refine the visited search area. Zhen et al. [22] developed a hybrid wolf pack algorithm and PSO for parameter estimation. To develop a high-performance hybrid PSO, the component algorithms should be excellent and complementary. 4. Learning strategy: Novel learning strategies can improve the performance of PSO by constructing promising learning exemplars [23] or introducing effective competition and cooperation mechanisms [24]. Some representative learning exemplar-based PSO algorithms are as follows: Gong et al.
[17] proposed genetic learning particle swarm optimization by employing the crossover, mutation and selection operators of genetic algorithms to breed exemplars. Cheng et al. [25] introduced a "learning from any better particles" mechanism and proposed social learning particle swarm optimization. Xu et al. [26] proposed a dimensional learning strategy to discover and exploit promising particles' information. Wang et al. [27] proposed an adaptive learning strategy to adjust the self-learning component and the competitive-learning component. To deal with large-scale optimization, Kaucic et al. [28] proposed a level-based learning swarm optimizer with a hybrid constraint-handling technique for large-scale portfolio selection problems. To obtain a good balance between exploration and exploitation on large-scale problems, Li et al. [29] proposed a learning structure that decouples exploration and exploitation for large-scale optimization. Sheng et al. [30] employed dynamic p-learning mechanisms and a multi-level population to improve the performance of PSO on large-scale optimization.
To help the particle swarm converge to optima rapidly, local search strategies are employed. For example, Liang et al. [31] employed a quasi-Newton method to regularly refine a portion of the top local bests. Wu et al. [32] conducted comparison experiments and pointed out that the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method performs better than Nelder-Mead simplex search, Davidon-Fletcher-Powell (DFP) and pattern search (PS) on complex optimization problems. Chen et al. [33] used BFGS to regularly enhance a portion of the local bests in a dynamic multiswarm differential learning particle swarm optimizer. Hu et al. [34] merged sub-gradient local search into the PSO iteration. Cao et al. [35] adopted the quasi-entropy index to trigger local search. The aforementioned literature indicates that local search is an effective auxiliary approach to enhance the exploitation of PSO.
Novel learning strategies are an active research direction of PSO. To enhance the adaptability of learning exemplar-based algorithms, scholars have employed two or more types of exemplars with different characteristics to guide the motion of the particle swarm. For example, Fu et al. [36] developed an adjustable driving force-based particle swarm optimization by employing two types of exemplars to update particle velocities. Chen et al. [37] employed two DE mutations to generate learning exemplars for PSO, so the hybrid algorithm enhances exploration at the start and gradually changes over to enhancing exploitation. Lynn et al. [38] adopted an ensemble mechanism to adjust the sizes of sub-swarms with different learning exemplars. Wang et al. [39] proposed a multiple-strategy learning particle swarm optimization for large-scale optimization problems by utilizing different learning strategies in different stages. Ning et al. [40] employed three particle swarms and three velocity update methods for high-dimensional problems. Lynn et al. [41] proposed heterogeneous comprehensive learning particle swarm optimization (HCLPSO) by employing two sub-swarms to enhance exploration and exploitation, respectively. The test results show that HCLPSO yields high performance on different problems and outperforms seven state-of-the-art PSO variants on the CEC2005 test suite.
Due to the robust performance of HCLPSO, this study further extends the heterogeneous theory by employing two DE mutants to construct diversified exploration and exploitation learning exemplars, respectively. The resultant algorithm, referred to as heterogeneous differential evolution particle swarm optimization with local search (HeDE-PSO), is the major contribution of this study. The rest of this paper is organized as follows: the next section reviews related works, the subsequent section presents the methodology, the following section reports the experimental results and the last section summarizes the paper.

Particle swarm optimization
PSO imitates the foraging behavior of a bird flock to find the global optimum. In PSO, each solution is regarded as a particle. The particles are guided by their own best position (Pbest) and the global best position (Gbest) according to Eqs. (1) and (2) [42]:

Vel_i,d = w·Vel_i,d + c_1·r_1,d·(P_i,d − Pos_i,d) + c_2·r_2,d·(G_d − Pos_i,d) (1)

Pos_i,d = Pos_i,d + Vel_i,d (2)

Vel_i,d denotes the particle's velocity, where the subscripts i and d stand for the particle index and the dimension, respectively. Pos and P stand for the position and Pbest of the ith particle, respectively, and G denotes the global best found by the swarm. w, c_1 and c_2 are the inertia weight and two acceleration coefficients, respectively. r_1,d and r_2,d are two uniformly distributed random numbers in the range [0, 1]. In the learning process, the particles oscillate in the neighborhood of Pbest and Gbest. Greedy selection is employed to update Pbest and Gbest. PSO converges fast while suffering from premature convergence. To acquire better exploration, some exemplar-based learning strategies have been proposed. For example, Wu et al. [32] proposed a superior solution guided PSO (SSG-PSO) framework to fully utilize the valuable information of superior solutions found in the optimization process. References [9,43] employed crossover, mutation and selection operators to construct high-quality learning exemplars for PSO. Chen et al. [44] presented a biogeography-based learning strategy for PSO. Lu et al. [45] employed valid historical information to guide the behavior of particles through a reinforcement learning strategy. Zhang et al. [46] merged a Bayesian iteration method into comprehensive learning particle swarm optimization. Zhang et al. [47] employed a self-organizing topological structure and self-adaptively adjustable parameters to improve the performance of PSO. Cheng et al. [48] have reviewed the developments of particle swarm optimization over the past quarter century.
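As an illustration, the single-particle update of Eqs. (1) and (2) can be sketched in Python. The function name and the default coefficient values are illustrative, not taken from the paper:

```python
import random

def pso_update(pos, vel, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445, rng=random):
    """One velocity/position update for a single particle."""
    new_pos, new_vel = [], []
    for d in range(len(pos)):
        r1, r2 = rng.random(), rng.random()
        v = (w * vel[d]
             + c1 * r1 * (pbest[d] - pos[d])   # cognitive pull toward Pbest
             + c2 * r2 * (gbest[d] - pos[d]))  # social pull toward Gbest: Eq. (1)
        new_vel.append(v)
        new_pos.append(pos[d] + v)             # Eq. (2): position update
    return new_pos, new_vel
```

After each update, greedy selection would compare the new position's fitness with Pbest and Gbest, as described above.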

Differential evolution
Differential evolution (DE) [49] is a versatile evolutionary optimization technique. Various DE mutants have been proposed to suit different optimization problems. The representative DE mutations are as follows:

DE/rand/1: V_i = X_r1 + F·(X_r2 − X_r3) (3)

DE/rand/2: V_i = X_r1 + F·(X_r2 − X_r3) + F·(X_r4 − X_r5) (4)

DE/best/1: V_i = X_best + F·(X_r1 − X_r2) (5)

DE/best/2: V_i = X_best + F·(X_r1 − X_r2) + F·(X_r3 − X_r4) (6)

V_i = [v_i,1, …, v_i,D] and X_i = [x_i,1, …, x_i,D] are the mutant vector and the target vector, respectively. F is the scale factor. r_1, r_2, r_3, r_4 and r_5 are mutually exclusive random integers within the range [1, NP], and NP denotes the population size of DE.
DE generates new candidate solutions in three steps: 1. Mutation: Generate a mutant vector according to one of the DE mutants among Eqs. (3)-(6). 2. Crossover: Generate a random number for each dimension and compare it with the crossover probability according to Eq. (7) to determine whether the corresponding dimension of the trial vector U_i = [u_i,1, …, u_i,D] is taken from the mutant vector:

u_i,j = v_i,j, if rand_j ≤ cr_i or j = k; otherwise u_i,j = x_i,j (7)

cr_i is the crossover probability, and k is a random dimension that guarantees at least one dimension is chosen from the mutant vector V_i.
3. Selection: The fitness value of the trial vector U_i is calculated and compared with that of the target vector X_i; if the trial vector U_i is better than the target vector X_i, the target vector is replaced by the trial vector.
DE/rand/1 and DE/rand/2 bear strong exploration properties, while DE/best/1 and DE/best/2 possess high exploitation properties. To balance exploration and exploitation, Zhang adopted DE/current-to-pbest/1 to generate the trial vector according to Eq. (9) and proposed adaptive differential evolution with optional external archive (JADE) [50]:

DE/current-to-pbest/1: V_i = X_i + F·(X_rp − X_i) + F·(X_r1 − X_r2) (9)

X_rp denotes a target vector randomly selected from the top p% individuals in Eq. (9). DE/current-to-pbest/1 is widely used in today's DE variants for its excellent performance [51]. For more research on DE variants, please refer to reference [52].
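A minimal sketch of one DE generation with DE/rand/1 mutation, binomial crossover and greedy selection might look as follows; all names and parameter defaults are illustrative:

```python
import random

def de_step(pop, fitness, f_obj, F=0.5, cr=0.9, rng=random):
    """One DE generation: DE/rand/1 mutation (Eq. (3)), binomial crossover
    (Eq. (7)) and greedy selection, for a real-valued minimization problem."""
    n, dim = len(pop), len(pop[0])
    new_pop, new_fit = [], []
    for i in range(n):
        # three mutually exclusive indices, all different from i
        r1, r2, r3 = rng.sample([j for j in range(n) if j != i], 3)
        v = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
        k = rng.randrange(dim)  # guarantees one dimension comes from the mutant
        u = [v[d] if (rng.random() <= cr or d == k) else pop[i][d]
             for d in range(dim)]
        fu = f_obj(u)
        if fu < fitness[i]:     # greedy selection
            new_pop.append(u); new_fit.append(fu)
        else:
            new_pop.append(pop[i]); new_fit.append(fitness[i])
    return new_pop, new_fit
```

Because selection is greedy, each individual's fitness is non-increasing across generations.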
Hybrid DE-PSO
The exploration sub-swarm constructs exemplars by applying DE/rand/1 to the Pbests (Eq. (10)), and the remaining particles generate learning exemplars according to DE/current-to-pbest/1 (Eq. (11)). Epbest_r3 and Epbest_r5 denote Pbests randomly selected from the union of the current swarm and the archive. Ps_1 stands for the sub-swarm size of DE/rand/1.
After mutation, the crossover operation and out-of-bounds treatment are carried out according to Eqs. (7), (12) and (13), respectively. Then the fitness value of the exemplar is evaluated and compared. If the fitness of the exemplar is better than the corresponding particle's Pbest, the exemplar replaces the previous Pbest, and the previous Pbest challenges the worst solution in the archive. Otherwise, the generated exemplar is abandoned.
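The exemplar acceptance rule described above can be sketched as follows. This is a simplified illustration rather than the paper's exact procedure: it builds a DE/rand/1 exemplar from the Pbests only (the archive-union terms Epbest_r3/Epbest_r5 and the crossover step are omitted), and shows how an accepted exemplar replaces a Pbest while the displaced Pbest challenges the worst archive member:

```python
import random

def update_exemplar(i, pbests, pbest_fit, archive, f_obj, F=0.7, rng=random):
    """Build a DE/rand/1 exemplar from the Pbests; accept it greedily and let
    the displaced Pbest challenge the worst archive entry (minimization)."""
    dim = len(pbests[i])
    r1, r2, r3 = rng.sample([j for j in range(len(pbests)) if j != i], 3)
    exemplar = [pbests[r1][d] + F * (pbests[r2][d] - pbests[r3][d])
                for d in range(dim)]
    fe = f_obj(exemplar)
    if fe < pbest_fit[i]:
        old, old_f = pbests[i], pbest_fit[i]
        pbests[i], pbest_fit[i] = exemplar, fe
        if archive:
            w = max(range(len(archive)), key=lambda j: archive[j][1])
            if old_f < archive[w][1]:
                archive[w] = (old, old_f)  # old Pbest defeats the worst entry
        else:
            archive.append((old, old_f))
    return pbests, pbest_fit, archive
```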

Accompanying learning
To avoid wasting computing resources, the accompanying learning strategy is introduced. In the PSO learning stage, each particle randomly selects an accompanying particle. If the learning particle's fitness is better than the accompanying particle's, the PSO learning process is executed according to Eq. (14). Otherwise, the PSO learning is skipped in this iteration. With accompanying learning, the best particles win more computing resources, which improves the algorithm's performance.
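The accompanying gate can be sketched in a few lines, assuming minimization and using the "no worse than" form from Step 5 of the Methodologies; the function name is illustrative:

```python
import random

def accompany_gate(i, fitness, rng=random):
    """Accompanying-learning gate (minimization): particle i performs the PSO
    update only if its fitness is no worse than a random companion's."""
    j = rng.choice([k for k in range(len(fitness)) if k != i])
    return fitness[i] <= fitness[j]
```

The swarm's best particle always passes the gate, and the worst always skips, which is how the strategy concentrates function evaluations on high-quality particles.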

BFGS
In the late stage of PSO, the swarm diversity is insufficient to maintain a high convergence speed. Hence, employing local search in the late stage can improve search accuracy. BFGS is a popular quasi-Newton method whose core is to approximate the inverse Hessian matrix. The major steps of BFGS [32] are given in Algorithm 1.
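A minimal BFGS sketch, maintaining the inverse-Hessian approximation with a backtracking (Armijo) line search, is shown below. This is a textbook-style illustration assuming NumPy is available, not the paper's exact Algorithm 1:

```python
import numpy as np

def bfgs(f, grad, x0, iters=50, tol=1e-8):
    """Minimal BFGS: quasi-Newton steps with an updated inverse Hessian H."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        t = 1.0                        # backtracking line search (Armijo)
        while t > 1e-12 and f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:                 # curvature condition; skip update otherwise
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

In HeDE-PSO, such a routine would be started from the promising area located by the swarm, spending the FEs reserved for the final phase.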

Methodologies
The major steps of HeDE-PSO are illustrated in Fig. 1. In each iteration, DE/rand/1 and DE/current-to-pbest/1 work in parallel to generate complementary exemplars for PSO. Then the fitness value of a particle is compared with that of its accompanying particle to determine whether to execute the PSO learning stage. In the late stage of optimization, the promising area is located and BFGS local search is employed to improve search accuracy. The details of HeDE-PSO are as follows: Step 1: Initialize the population and the control parameters, and evaluate the fitness of the initial population.
Step 2: Generate the trial vector through two DE mutants with different characteristics. DE/rand/1 is employed for exploration and updates individuals according to Eq. (10). DE/current-to-pbest/1 is employed to enhance exploitation and generates new individuals according to Eq. (11).
Step 3: Conduct crossover according to Eq. (7) and out-of-bounds treatment according to Eqs. (12) and (13). Step 4: Evaluate the fitness value of the trial vector and compare it with the corresponding particle's Pbest. If the fitness of the trial vector is better, replace the Pbest with the trial vector. Update FEs.
Step 5: Randomly select an accompanying particle. If the learning particle's fitness value is no worse than the accompanying particle's, execute the PSO learning according to Eq. (14). Otherwise, skip the PSO learning part (update the particle index and FEs, and jump to Step 2).
Step 6: Compare the fitness of the new position with its Pbest. If the former is better, replace the Pbest with the new position.
Fig. 1 The flowchart of HeDE-PSO

Step 7: Update the particle index and FEs (the number of function evaluations); if the index is bigger than the population size, reset it to 1.
Step 8: Compare FEs with the PSO learning budget (M_FEs − k·D, where M_FEs is the maximum allowable FEs and k is an integer). If FEs is no bigger than the PSO learning budget, return to Step 2; otherwise execute the following step.
Step 9: Conduct BFGS local search with the remaining FEs to improve accuracy.
The characteristics of HeDE-PSO are analyzed in "Ablation experiments".
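Putting the steps above together, the overall loop can be sketched as below. This is a structural illustration only: exemplar construction uses a single DE/rand/1 mutant over the Pbests, the archive and crossover are omitted, and the final BFGS phase (Step 9) is stood in for by a simple Gaussian polish around the best solution:

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def hede_pso_sketch(f_obj, dim=5, swarm=20, max_fes=4000, ls_fes=200, seed=1):
    """Structural sketch of the HeDE-PSO loop (Steps 1-9), minimization."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pfit = [f_obj(p) for p in pos]
    g = min(range(swarm), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    fes, i = swarm, 0
    while fes < max_fes - ls_fes:      # Steps 2-8: exemplars + gated PSO learning
        # Steps 2-4 (simplified): DE/rand/1 exemplar from Pbests, greedy acceptance
        r1, r2, r3 = rng.sample([j for j in range(swarm) if j != i], 3)
        ex = [pbest[r1][d] + 0.7 * (pbest[r2][d] - pbest[r3][d])
              for d in range(dim)]
        fe = f_obj(ex); fes += 1
        if fe < pfit[i]:
            pbest[i], pfit[i] = ex, fe
        j = rng.choice([k for k in range(swarm) if k != i])
        if pfit[i] <= pfit[j]:         # Step 5: accompanying gate
            for d in range(dim):       # Step 6: PSO move toward Pbest/Gbest
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f_obj(pos[i]); fes += 1
            if fi < pfit[i]:
                pbest[i], pfit[i] = pos[i][:], fi
        if pfit[i] < gfit:
            gbest, gfit = pbest[i][:], pfit[i]
        i = (i + 1) % swarm            # Step 7: next particle, wrap around
    while fes < max_fes:               # Step 9 stand-in: polish the best solution
        cand = [v + rng.gauss(0, 0.01) for v in gbest]
        fc = f_obj(cand); fes += 1
        if fc < gfit:
            gbest, gfit = cand, fc
    return gbest, gfit
```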

Experimental works
To test the performance of HeDE-PSO, 29 CEC2017 functions [66] are adopted (f2 is excluded from this test). Seven recent PSO variants, four meta-heuristics and JADE are employed for the comparison tests. Among the comparison algorithms, biogeography-based learning particle swarm optimization (BLPSO) [31] updates each particle using a combination of its own personal best position and the personal best positions of all other particles through biogeography-based optimization (BBO) migration. Particle swarm optimization using dynamic tournament topology (DTT-PSO) [14] employs several better solutions chosen from the entire swarm to guide the evolution of each particle. Expanded particle swarm optimization based on multiple exemplars and forgetting ability (XPSO) [9] utilizes the locally best solution and the globally best solution to construct social learning exemplars and assigns different forgetting abilities to different particles. Fitness-based multi-role PSO (FMPSO) [60] divides the swarm into leaders, ramblers and followers based on their fitness in each generation, and employs different learning strategies for the different roles. Phasor particle swarm optimization (PPSO) [61] adjusts the control parameters of PSO based on phasor angle theory. The dynamic multiswarm differential learning particle swarm optimizer (DMSDL-PSO) [33] employs DE mutation to construct exploration-enhanced exemplars and adopts BFGS local search in the late stage to improve accuracy. The adaptive particle swarm optimizer with decoupled exploration and exploitation (APSO-DEE) [29] employs two learning components to enhance exploration and exploitation, respectively; a multi-swarm strategy with adaptive sub-swarm size regulation is adopted to achieve better performance. The honey badger algorithm (HBA) [62] employs honey badgers' digging and honey-finding approaches to construct the exploration phase and the exploitation phase, respectively. The salp swarm algorithm (SSA) [63] divides the swarm into leaders and
followers according to the swarm behavior of salp chains; the leader guides the swarm, and the followers follow each other during the optimization process. The Archimedes optimization algorithm (AOA) [64] imitates the principle of the buoyant force exerted upward on an object partially or fully immersed in fluid; AOA updates the density, volume and acceleration of every object to determine new positions in every generation. The dwarf mongoose optimization algorithm (DMO) [65] divides the swarm into the alpha group, babysitters and the scout group; each group contributes to compensatory behavioral adaptation, which leads to a seminomadic way of life in a territory large enough to support the entire group. Adaptive differential evolution with optional external archive (JADE) [42] improves the performance of DE by employing the mutation strategy DE/current-to-pbest/1 with an optional external archive and adaptively updated control parameters. All the comparison algorithms adopt the parameter configurations recommended by their authors.
10-dimensional and 30-dimensional function experiments are conducted with the same parameter settings to evaluate the algorithms' scalability. Each function is run for 30 independent runs, and the Wilcoxon signed-rank test [67] with a significance level of 0.05 is adopted for comparing the test results. The parameter configurations of the involved algorithms are given in Table 1.
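The Wilcoxon signed-rank comparison used here can be sketched in pure Python with the normal approximation, which is adequate for 30 paired runs; a production implementation (e.g. a statistics library) would also apply tie and continuity corrections:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank test via the normal approximation.
    Zero differences are dropped; tied |differences| share averaged ranks."""
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                       # assign averaged ranks over ties
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1          # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```

With 30 runs per function, a p-value below 0.05 yields the ">", "≈" or "<" verdicts reported in the comparison tables.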

Comparing with PSO algorithms
Table 2 shows the statistical results of HeDE-PSO and the other PSO algorithms. The symbols ">", "≈" and "<" denote that the performance of HeDE-PSO is significantly better than, tied with or significantly worse than the compared algorithm according to the Wilcoxon signed-rank test, respectively. For example, in the first cell "> (7.62 ± 7.70)E + 02", ">" means the performance of HeDE-PSO is significantly better than BLPSO on f1, and "7.62" and "7.70" are the mean error and standard deviation achieved by BLPSO on f1, respectively; "E + 02" is the order of magnitude in scientific notation. In the last two rows, "Best" denotes the number of times the best mean performance is achieved by the relevant algorithm, and "w/t/l" stands for the number of times HeDE-PSO performs significantly better than, tied with, or significantly worse than the compared algorithm, respectively. In the 10D test, HeDE-PSO achieves the best performance on 16 functions, while DMSDL-PSO, BLPSO, DTT-PSO and FMPSO yield the best performance on 9, 5, 4 and 1 functions, respectively. XPSO, PPSO and APSO-DEE generate no best performances. HeDE-PSO performs significantly better than BLPSO, DTT-PSO, XPSO and the other compared algorithms on most 10D functions (the per-algorithm counts are given in the w/t/l row of Table 2). On 30D functions, Table 3 indicates that HeDE-PSO has the best performance on 17 functions, while DTT-PSO, BLPSO and DMSDL-PSO have the best performance on 6, 4 and 4 functions, respectively. XPSO, FMPSO, PPSO and APSO-DEE yield no best performances. HeDE-PSO generates the best performance on all ten hybrid functions (f11-f20), while on the seven simple multimodal functions (f4-f10), HeDE-PSO only wins the best performance on one function. HeDE-PSO outperforms BLPSO, DTT-PSO, XPSO, FMPSO, PPSO, DMSDL-PSO and APSO-DEE on 20, 22, 27, 28, 29, 21 and 29 functions, respectively. Figure 2 shows that the average rank of HeDE-PSO is lower than those of the other PSO algorithms. The advantages of HeDE-PSO on 30D functions are more significant than on 10D functions. DMSDL-PSO and BLPSO perform better than the remaining PSO algorithms. With two
DE mutations to construct exploration and exploitation learning exemplars, HeDE-PSO successfully mitigates premature convergence and exhibits better adaptability than the other PSO algorithms. HeDE-PSO outperforms APSO-DEE on both the 10D and 30D CEC2017 test suites, indicating that employing two sub-swarms to enhance exploration and exploitation, respectively, performs better in this study than utilizing two learning components to enhance exploration and exploitation.

Comparison with other meta-heuristics
The comparison test results with the other meta-heuristics on 30D functions in Table 4 show that HeDE-PSO has the best performance on 19 functions. JADE, DMO and SSA yield the best performance on 7, 3 and 1 functions, respectively. HeDE-PSO outperforms HBA, SSA, AOA, DMO and JADE on 29, 28, 29, 25 and 16 functions, respectively. The average ranks in Fig. 3 show that HeDE-PSO ranks first, while JADE and HBA occupy the second and third places, respectively. HeDE-PSO outperforms the four recent meta-heuristics and JADE in this test.

Convergence analysis
To analyze the convergence speed of HeDE-PSO, the convergence curves of HeDE-PSO and the other PSO algorithms on four different types of functions are given in Fig. 4. Figure 4a indicates that on the unimodal function f1, HeDE-PSO achieves the highest accuracy, DMSDL-PSO ranks second, and the rest of the PSO algorithms yield almost the same mean errors. Figure 4b shows that on the simple multimodal function f6, DMSDL-PSO generates the lowest mean error, and HeDE-PSO and BLPSO rank second and third, respectively; the rest of the PSO algorithms yield relatively larger mean errors. On the composition function f16, Fig. 4c shows that HeDE-PSO yields the lowest mean error, BLPSO ranks second, and the other algorithms yield relatively larger mean errors. Figure 4d reveals that DTT-PSO converges faster and yields the lowest mean error, and DMSDL-PSO ranks second; the mean error of HeDE-PSO is relatively larger. HeDE-PSO has significant advantages on unimodal and hybrid functions, while on simple multimodal and composition functions, HeDE-PSO yields moderate performance.

Ablation experiments
In this section, ablation experiments are carried out to show the effectiveness of each strategy employed by HeDE-PSO. Five contrast algorithms, denoted NoLS-PSO, Noapy-PSO, NoDE-PSO, DEr1-PSO and DEpbest-PSO, are modified from HeDE-PSO. Each contrast algorithm is developed by removing one strategy from the HeDE-PSO described in "Methodologies". Details of the five contrast algorithms are given in Table 5.
The test results are given in Table 6. The Wilcoxon signed-rank test shows that HeDE-PSO outperforms every contrast algorithm on at least twenty functions. HeDE-PSO outperforms NoLS-PSO on 23 functions, which indicates that the BFGS local search can improve search accuracy in the late stage of HeDE-PSO. HeDE-PSO performs better than Noapy-PSO on 20 functions, showing that the accompanying operation can allocate more computing resources to high-quality particles to achieve high performance. HeDE-PSO defeats NoDE-PSO, DEr1-PSO and DEpbest-PSO on 25, 24 and 21 functions, respectively, indicating that employing both DE/rand/1 and DE/current-to-pbest/1 to generate learning exemplars (heterogeneous DE exemplars) achieves better performance than employing a single DE mutant or not using DE mutation to construct learning exemplars. HeDE-PSO performs better than all the contrast algorithms and yields the best performance on 21 functions, indicating that the BFGS local search, the accompanying operation and the heterogeneous DE exemplars are all indispensable for HeDE-PSO.
To analyze the proposed strategies' ability to balance exploitation and diversity, the convergence curves and diversity curves on the unimodal f1 and the multimodal f17 are given in Fig. 5. The diversity is evaluated by the mean Euclidean distance between all the particles and their centroid.
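This diversity measure, the mean Euclidean distance from each particle to the swarm centroid, can be computed as follows (the function name is illustrative):

```python
import math

def swarm_diversity(positions):
    """Mean Euclidean distance from each particle to the swarm centroid."""
    n, dim = len(positions), len(positions[0])
    centroid = [sum(p[d] for p in positions) / n for d in range(dim)]
    return sum(math.sqrt(sum((p[d] - centroid[d]) ** 2 for d in range(dim)))
               for p in positions) / n
```

A large value indicates a spread-out, exploring swarm; a value near zero indicates convergence to a single region.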
Figure 5a shows that with both DE/rand/1 and DE/current-to-pbest/1 to construct learning exemplars, HeDE-PSO outperforms NoDE-PSO, DEr1-PSO and DEpbest-PSO. DEpbest-PSO converges fast in the early stage and performs better than NoDE-PSO and DEr1-PSO, indicating that employing DE/current-to-pbest/1 to generate learning exemplars (DE/current-to-pbest/1 exemplars) can achieve a high convergence speed on the unimodal f1. Figure 5b shows that HeDE-PSO keeps moderate diversity in the early stage and converges to the high-quality area in the later stage. p% denotes that the top p% particles are selected for the DE/current-to-pbest/1 mutation in Eq. (11). Reducing p% means selecting a small portion of high-quality particles for the DE/current-to-pbest/1 mutation to enhance exploitation, while increasing p% means more elite particles can be employed for the DE/current-to-pbest/1 mutation to enhance exploration. Table 8 shows that p% = 20% yields relatively better performance than the other values; therefore, p% = 20% is adopted by HeDE-PSO.
FE_BFGS denotes the total number of function evaluations consumed by the BFGS local search in the later stage of HeDE-PSO. BFGS local search converges faster than population-based algorithms, but it is apt to fall into local optima; the initial point is therefore very important for BFGS local search. Table 9 indicates that FE_BFGS = 100·D performs better than the other parameter settings.

Application to industrial refrigeration system design
To test the performance of HeDE-PSO on real-life applications, the industrial refrigeration system design problem is used. The industrial refrigeration system design problem provided in CEC2020 is a real-world single-objective constrained optimization problem, where g_j(x) ≤ 0 and h_j(x) = 0 denote the inequality constraints and equality constraints, respectively. Let f_A, Φ_A and f_B, Φ_B denote the fitness values and constraint violation values at points A and B, respectively. The ε-level comparison is defined accordingly. In this study, θ = 0.5, T_λ = 0.95 × T_c, cp = 3 and T_c = 0.2 × M_FEs, where M_FEs is the maximum allowable number of function evaluations.
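The paper's exact ε-level definition is not reproduced here; the sketch below shows the standard ε-constrained comparison rule (compare by fitness when both constraint violations are within ε or are equal, otherwise compare by violation), which the description above appears to follow:

```python
def eps_less(fa, phi_a, fb, phi_b, eps):
    """Standard epsilon-level comparison for constrained minimization:
    A precedes B if A's fitness is lower when both violations are within
    eps (or equal); otherwise the smaller constraint violation wins."""
    if (phi_a <= eps and phi_b <= eps) or phi_a == phi_b:
        return fa < fb
    return phi_a < phi_b
```

With eps = 0, this reduces to the common feasibility rule: feasible solutions always precede infeasible ones, and infeasible solutions are ranked by violation.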
The test results of 30 independent runs are given in Table 10. MF, MV, FR and SR denote the mean fitness, mean constraint violation, feasibility rate and success rate, respectively. The success rate is the proportion of runs in which an algorithm obtains a feasible solution whose error is no bigger than 1e−8 (f(x) − f(x*) ≤ 10^−8).
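The FR and SR statistics can be computed from per-run errors and constraint violations as follows (function and variable names are illustrative):

```python
def feasibility_and_success(errors, violations, tol=1e-8):
    """FR: fraction of runs ending with zero total constraint violation;
    SR: fraction of runs that are feasible AND reach f(x) - f(x*) <= tol."""
    runs = len(errors)
    feasible = [v == 0 for v in violations]
    fr = sum(feasible) / runs
    sr = sum(1 for e, ok in zip(errors, feasible) if ok and e <= tol) / runs
    return fr, sr
```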

Fig. 5 Convergence and diversity curves of the proposed strategies

Table 1
Parameter configuration of algorithms

With DE to construct learning exemplars and BFGS local search, DMSDL-PSO yields high performance on the composition functions (f21-f30).

Table 2
Comparison with PSO algorithms on 10D CEC2017 functions

Table 3
Comparison with PSO algorithms on 30D CEC2017 functions

Table 4
Comparison with other meta-heuristics on 30D CEC2017 functions

In this section, three parameters of HeDE-PSO are analyzed, namely, Ps1, p% and FE_BFGS (the function-evaluation budget of the BFGS local search). In each test, only one parameter is varied, while the other parameters are set according to Table 1. f1, f7, f11, f17, f21 and f27 are employed as representative functions. Ps1 stands for the sub-swarm size of DE/current-to-pbest/1, and the remaining particles employ DE/rand/1 to construct learning exemplars. Increasing Ps1 means enlarging the exploitation-enhanced sub-swarm and reducing the exploration-enhanced sub-swarm. Table 7 indicates that Ps1 = 15 obtains the best performance on f1, f7, f11 and f21; hence, Ps1 = 15 is the best choice for HeDE-PSO. Table 10 indicates that HeDE-PSO achieves 100% FR and 100% SR, while BLPSO achieves 100% FR but 0% SR.

Table 5
Modification of contrast algorithm

Table 6
Test results of the contrast algorithms. The best test results are highlighted in bold.

APSO-DEE, XPSO, FMPSO and DMSDL-PSO yield high FRs, while their SRs are not high. The SRs of DTT-PSO and XPSO are 3.33% and 16.7%, respectively. The SRs of BLPSO, FMPSO and PPSO are 0%. HeDE-PSO has significant advantages over the other PSO algorithms.

Table 7
The effects of Ps 1

Table 8
The effects of p%

Table 10
Test results on industrial refrigeration system design

Conclusions

This study proposes a heterogeneous differential evolution particle swarm optimization (HeDE-PSO) method. HeDE-PSO adopts two DE mutants to construct learning exemplars for PSO to improve adaptability and employs BFGS local search to increase search accuracy in the late stage. DE/rand/1 is employed for enhancing exploration and DE/current-to-pbest/1 is employed for enhancing exploitation. The test results on 10-dimensional and 30-dimensional functions show that HeDE-PSO defeats the comparison algorithms on most of the tested functions. On 30-dimensional functions, HeDE-PSO outperforms the other PSO algorithms on at least 20 out of 29 functions. HeDE-PSO obtains high performance on the 10-dimensional and 30-dimensional CEC2017 functions without parameter tuning. On the industrial refrigeration system design problem, HeDE-PSO is the sole algorithm generating a 100% feasibility rate and a 100% success rate. The test results indicate that adopting heterogeneous DE learning exemplars can improve the performance and adaptability of HeDE-PSO. HeDE-PSO is applicable to both benchmark functions and real-life application problems.