Abstract
Developing new meta-heuristic algorithms and evaluating them on benchmark functions is a challenging task. In this paper, the performance of various meta-heuristic algorithms is evaluated on the recently developed CEC 2021 benchmark functions. The objective functions are parameterized by the inclusion of operators such as bias, shift and rotation, and different combinations of these binary operators applied to the objective functions yield the CEC 2021 benchmark functions. Different meta-heuristic algorithms are then used to solve the benchmark functions in different dimensions. The performance of basic and advanced meta-heuristic algorithms, as well as the algorithms that participated in the CEC 2021 competition, has been experimentally investigated, and many observations, recommendations and conclusions have been reached. The experimental results show the performance of the meta-heuristic algorithms on the different combinations of the binary parameterized operators.
1 Introduction
Metaheuristic algorithms represent a class of derivative-free, nature-inspired algorithms and provide robust optimization tools. Analytical methods often fail to solve complex, real-world optimization problems, and meta-heuristic algorithms have proven very efficient and effective for these types of problems. Applications of meta-heuristic algorithms are found in numerous fields of science, machine learning, engineering and operations research [1,2,3,4,5,6,7].
The performance evaluation and comparison of algorithms depend heavily on benchmarking. Benchmarking experiments are designed to predict and select the best algorithm for solving real-world problems [8]. Over the past two decades, proposing new benchmark real-parameter single-objective optimization problems with novel characteristics to evaluate and analyze the performance of meta-heuristic algorithms has been a cornerstone of research in the optimization field. Furthermore, it has attracted computer science and operations research practitioners and specialists, as well as mathematicians and engineers. There are several important reasons for developing benchmark problems:
1.
They are considered the basis for developing more complex optimization problems, such as single-objective computationally expensive numerical optimization problems, single-objective multi-niche optimization problems, constrained real-parameter single-objective optimization problems, and constrained/bound-constrained multi/many-objective optimization problems.
2.
They must simulate the degree of difficulty of real-world optimization problems.
3.
They must be able to detect the weaknesses and strengths of novel optimization algorithms, which have improved significantly during the past few years.
Two benchmark series are common in the evaluation of real-parameter evolutionary algorithms: the IEEE Congress on Evolutionary Computation (CEC) competitions and the Comparing Continuous Optimizers (COCO) platform. The COCO benchmark suite provides a platform for comparing the performance of a large number of algorithms on unconstrained continuous optimization problems; its suites cover single-objective noiseless and noisy problems and bi-objective noiseless problems [9,10,11]. The CEC benchmarks, on the other hand, represent the most elaborate platform for the comparison of stochastic search algorithms. The CEC benchmark suites include single-objective, multi-objective, noiseless, noisy, large-scale, real-world and constrained optimization problems. Moreover, they also provide performance metrics and a test environment for assessment and comparison. To evaluate the performance of state-of-the-art algorithms, the CEC competition functions are the most frequently used for benchmarking.
Since 2005, a new generation of benchmark problems has been developed to keep pace with the new era of nature-inspired, or meta-heuristic, algorithms. The 2005 IEEE Congress on Evolutionary Computation introduced the first benchmark suite to overcome the aforementioned shortcomings, named CEC’05 [12]. The CEC’05 report included 25 benchmark functions with different properties: separable, non-separable, rotated, unrotated, unimodal and multimodal functions; shifted functions; multiplicative noise in fitness; composition of functions; continuous and non-continuous functions; a global optimum on or off the bounds; functions with no clear structure in the fitness landscape; narrow global basins of attraction; and so on.
Eight years later, the CEC’13 test suite, which includes 28 benchmark functions, was proposed [13]. In the CEC’13 test suite, the previously proposed composition functions [14] were improved and additional test functions were included. In the same context, the CEC’14 test suite, which includes 30 benchmark functions, was proposed [15]. CEC’14 developed benchmark problems with several novel features, such as novel basic problems, test problems composed by extracting features dimension-wise from several problems, graded levels of linkage, rotated trap problems, and so on. Three years later, the CEC’17 test suite, which includes 30 benchmark functions, was proposed [16]. In the CEC’17 test suite, similar to CEC’14, new basic functions with different features were added. These benchmark functions are discussed in detail in the next section. The same benchmark suite as CEC’17 was used in CEC’18, CEC’19 and CEC’20 [17]. As algorithms improve, ever more challenging functions are developed; this interplay between methods and problems drives progress, and the CEC’20 [17] and CEC’21 [18] Special Sessions on Real-Parameter Optimization were developed to promote this symbiosis. In the CEC’20 competition, the objective function is transformed into another function through the inclusion of a rotation matrix. Moreover, new benchmark objective functions were set in the CEC’21 competition by including different combinations of the bias, shift and rotation operators. These benchmark functions pose a new challenge for researchers to develop meta-heuristic algorithms that handle all the complexity of the functions.
Additionally, in order to statistically compare and analyse the solution quality of different algorithms and to verify the behaviour of stochastic algorithms [19], the results are compared using two nonparametric statistical hypothesis tests: (i) the Friedman test, to determine the final rankings of the different algorithms over all functions, and (ii) the multi-problem Wilcoxon signed-rank test, to check for differences between all algorithms over all functions. Besides, algorithm complexity is taken into consideration by evaluating the computation time of a specific function for a predefined number of evaluations at a certain dimension. Furthermore, CEC’17 proposed a new performance measure called the score metric: each algorithm is evaluated on a score out of 100, based on two criteria in which higher weights are given to higher dimensions. This calculated score is used instead of a statistical test [16].
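As an illustration, both nonparametric tests are available in common statistics libraries. The sketch below applies them to hypothetical per-function mean errors of three algorithms using SciPy; the data are invented for the example and are not results from this paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical mean errors of three algorithms on ten benchmark functions.
alg_a = rng.random(10)
alg_b = alg_a + 0.1 + rng.random(10) * 0.1   # consistently worse than alg_a
alg_c = alg_a + rng.normal(0.0, 0.05, 10)    # comparable to alg_a

# Friedman test: are there ranking differences among all algorithms?
f_stat, p_friedman = stats.friedmanchisquare(alg_a, alg_b, alg_c)

# Multi-problem Wilcoxon signed-rank test: pairwise comparison over functions.
w_stat, p_wilcoxon = stats.wilcoxon(alg_a, alg_b)
print(p_friedman, p_wilcoxon)
```

Since `alg_b` is worse on every function, the pairwise Wilcoxon test yields a small p-value, while the Friedman test assesses all three algorithms jointly.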
In this paper, the CEC’21 benchmark functions are considered to evaluate the performance of meta-heuristic algorithms. We divided the algorithms into three categories: basic algorithms, advanced algorithms and CEC2021’s algorithms. The basic algorithms comprise the basic or standard versions of older and recent meta-heuristic algorithms: differential evolution (DE) [20], the gaining-sharing knowledge-based algorithm (GSK) [21], the grey wolf optimizer (GWO) [22], particle swarm optimization (PSO) [23], and the teaching-learning-based optimization algorithm (TLBO) [24]. The advanced algorithms are adaptive and self-adaptive versions of algorithms that have won CEC competitions. They include the following: LSHADE, the winner of the CEC 2014 real-parameter single-objective optimization competition [25]; AGSK, ranked second in the CEC2020 real-parameter single-objective optimization competition [26]; EBOwithCMAR, the winner of the CEC2017 single-objective bound-constrained competition [27]; IMODE, the winner of the CEC2020 single-objective bound-constrained competition [28]; and ELSHADE_SPACMA, ranked third in the CEC 2018 real-parameter single-objective optimization competition and an enhanced version of the LSHADE-SPACMA algorithm [29]. The CEC2021’s algorithms are the set of algorithms that participated in the CEC2021 competition.
These algorithms include the Self-organizing Migrating Algorithm with CLustering-aided migration and adaptive Perturbation vector control (SOMA-CLP) [30]; a Multi-start Local Search Algorithm with L-SHADE (MLS-LSHADE); L-SHADE based on ordered and roulette-wheel-based mutation (L-SHADE-OrdRw); an LSHADE algorithm with Adaptive Archive and Selective Pressure (NL-SHADE-RSP) [31]; a Self-adaptive Differential Evolution Algorithm with Population Size Reduction for Single Objective Bound-Constrained Optimization (j21) [32]; a New Step-Size Adaptation Rule for CMA-ES Based on the Population Midpoint Fitness (RB-IPOP-CMAES) [33]; Improved DE through Bayesian Hyperparameter Optimization (MadDE) [34]; the Gaining-Sharing Knowledge Algorithm with Adaptive Parameters Hybridized with an Improved Multi-operator DE algorithm (APGSK-IMODE) [35]; and DE with Distance-based Mutation-selection (DEDMNA) [36].
The objective functions of the benchmarks are parameterized using operators such as bias, shift and rotation. The primary goal of parameterization is to evaluate the influence of every operator combination on each benchmark function. Parametric benchmarking is a first step towards gaining a comprehensive understanding of algorithmic performance and optimization concerns [37]. To this end, ten scalable benchmark problems utilising these binary operators are proposed.
The contributions of the paper are summarized as follows:
-
Evaluating the performance of the previously proposed swarm and evolutionary algorithms in the recently proposed CEC2021 competition on Single Objective Bound Constrained Numerical Optimization;
-
Discussing the algorithms that participated in the CEC2021 competition on Single Objective Bound Constrained Numerical Optimization;
-
Comparing the performance of those algorithms based on different criteria, such as Wilcoxon test and Friedman ranking test;
-
Proposing some future directions that would benefit the evolutionary computation research community.
The organization of the paper is as follows: Sect. 2 includes the literature review of the benchmark functions and algorithms. Section 3 presents a detailed description of the parameterized benchmark for the CEC 2021 competition. Section 4 presents the numerical experiments with comparative results, and Sect. 5 concludes the paper.
2 Related work
Liang et al. [13] stated that many classical, popular benchmark functions have features that some algorithms have exploited to achieve excellent results. According to their experience, some of the shortcomings associated with the existing benchmark problems are:
-
A global optimum having the same parameter values across different dimensions/variables. In this case, the global optimum may be reached quickly if an operator exists that copies the value of one dimension to the other dimensions.
-
The global optimum lies at the origin. As a result, numerous mutation operators may easily be built to take advantage of this common trait of a large number of benchmark functions.
-
The global optimum is located at the centre of the search range. When a uniform initial population is generated randomly, mean-centric techniques tend to lead the population to the centre of the search range.
-
The global optimum is located on the bounds of the search space that are easily discovered by the majority of methods.
-
Local optima are located along the coordinate axes, or there is no linkage between the variables/dimensions. In this situation, the information in the local optima might be used to determine the global optimum.
By analyzing these problems, they proposed a general framework to construct novel and challenging composition test functions possessing many desirable properties.
Recently, there have been several attempts to investigate the relationship between benchmark sets and the performance of optimization algorithms. Mersmann et al. [38] argued that the main objective of benchmarking optimization algorithms is to answer two questions: first, which algorithm is the best, and second, which algorithm should be used to solve a specific real-world problem? They suggested that the first question can be answered using consensus ranking procedures. Regarding the second question, they proposed a new term, Exploratory Landscape Analysis (ELA), which is based on developing ways to cheaply and automatically extract problem properties from a concrete problem instance. However, this procedure was not applied in their study. On the other hand, they suggested many problem properties, such as multi-modality, global structure, separability, variable scaling, search space homogeneity, basin size homogeneity, global-to-local optima contrast and plateaus. Then, the real-parameter black-box optimization benchmarking 2009 suite, named BBOB’09 [39], was used to analyze the performance of 30 optimization algorithms. They applied consensus ranking instead of individual rankings. Thus, in order to gain insights into these questions, many classical statistical exploratory data analyses were used.
Firstly, the expected running time of each algorithm was calculated for each test function and dimension, and consensus ranking was applied. Then, a distance measure was used to calculate the distance between all rankings. Besides, to retrieve groups or clusters from the distance matrix, multidimensional scaling (MDS, [40]) was used to visualize the relationship between observations, and the clustering algorithm PAM [41] to find clusters or groups in the data. Finally, to describe these groups, decision trees were applied to model the unknown cluster boundaries. In the same context, in order to answer the second question, Mersmann et al. [42] extended their work by applying Exploratory Landscape Analysis (ELA). They suggested several low-level features of functions: convexity, y-distribution, level set, meta-model, local search and curvature. Then, they tried to relate these features to the high-level features of Mersmann et al. [38]. The relationship between problem properties and algorithm performance can be estimated using a small sample of function values combined with statistical and machine learning techniques. In order to simultaneously optimize feature sets according to quality and cost, the multi-objective evolutionary algorithm SMS-EMOA [43] was employed. Later on, in Mersmann et al. [8], the 2009 and 2010 BBOB benchmark results [9, 44] were used to analyze more than 50 optimization algorithms. The compared algorithms were divided into 11 groups such that all optimization algorithms based on the same base mechanism were put into the same group. Then, in order to choose the best optimizer from each group as its representative, the Borda [45] consensus over all algorithms in each group for the accuracy levels \(10^{-3}\) and \(10^{-4}\) was used. Then, they applied the same approach used in Mersmann et al. [38]. Altogether, no consistent results were reached.
Thus, they concluded that the proposed features of the benchmark problems are not enough or not adequate to describe the groups of different algorithms.
Morgan and Gallagher [46] proposed a novel framework based on a length scale, the change of the objective function value with respect to a change of points in the search space. The main objective was to study the structural features of problem landscapes independently of any particular algorithm. They discussed some important properties of the length scale concept and its distribution. Then, the proposed framework was applied to the 2010 BBOB benchmark (BBOB’10) [44]. Experimental analysis and results showed that the proposed framework can easily differentiate between uni-modal and multi-modal functions. On the other hand, many researchers have focused on the benchmarking process that must be followed to perform a fair comparison of optimization algorithms. Opara and Arabas [47] presented an overview of benchmarking procedures. Based on some functions from the CEC and BBOB benchmarks, they discussed the main points in this field, such as theoretical aspects of algorithm comparison, available benchmarks, evaluation criteria, standard methods of presenting and interpreting results, and the appropriate related statistical procedures. Finally, they proposed a novel concept of parallel implementation of test problems, which is considered a qualitative improvement to benchmarking procedures. Beiranvand et al. [48] presented complete and detailed standards and guidelines on how to benchmark optimization algorithms. They reviewed the benchmark test suites for all types of optimization problems. Besides, they discussed the main performance measures for evaluating the efficiency of optimization algorithms. Moreover, pitfalls to avoid and the main issues that should be taken into account to conduct a fair and systematic comparison were highlighted. Muñoz et al. [49] introduced a survey of selected methods for algorithm selection in black-box continuous optimization problems.
Firstly, the algorithm selection framework of Rice [50] was described, together with its four component spaces: problem, algorithm, performance and characteristic. The fitness landscape concept was then discussed, followed by the different classes of search algorithms. Besides, the methods used to measure algorithm performance were introduced, and a classification and summary of well-known exploratory landscape analysis methods were presented. Finally, they presented implementations of the algorithm selection framework for solving continuous optimization problems. Opara et al. [37] extended the CEC’17 benchmark problems by applying different parameterizations to the problems. Each function is parameterized by interpretable, high-level characteristics (rotation vs non-rotation, noise vs noise-less, etc.), which are used in a multiple regression model to explain algorithmic performance.
2.1 Description of used algorithms
In this subsection, all the algorithms used in this paper to solve the parameterized CEC2021 test functions are briefly discussed. Comparisons between these algorithms are conducted at the end of the paper. The algorithms are classified into basic, advanced and recently developed ones.
2.1.1 Basic algorithms
-
Differential evolution (DE)
DE is an evolutionary computing method belonging to the broader family of evolutionary algorithms. The DE algorithm, developed by Storn and Price in 1997 [20], is a popular direct search technique, like genetic algorithms and evolution strategies, that starts with a population of initial solutions. These initial solutions are then iteratively improved by introducing mutations into the population. It is one of the most popular evolutionary algorithms and has been applied to various nonlinear, high-dimensional and complex optimization problems. Moreover, different variants of the DE algorithm, such as self-adaptive, binary and multi-objective versions, have been introduced.
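To make the procedure concrete, the following is a minimal sketch of the classic DE/rand/1/bin scheme on the sphere function; the parameter values are common illustrative defaults, not the tuned settings of any algorithm compared in this paper.

```python
import numpy as np

def de(f, bounds, pop_size=20, F=0.5, CR=0.9, max_gen=200, seed=1):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    low, high = np.array(bounds).T
    pop = rng.uniform(low, high, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(max_gen):
        for i in range(pop_size):
            # Mutation: three distinct random individuals, all different from i.
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), low, high)
            # Binomial crossover with at least one component from the mutant.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial if it is not worse.
            f_trial = f(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

best_x, best_f = de(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)
```

On a simple 5-dimensional sphere function this sketch converges close to the optimum at the origin within the small budget shown.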
-
Gaining sharing knowledge-based algorithm (GSK)
The GSK algorithm [21] is based on the human behaviour of gaining and sharing knowledge, and consists of two phases: the junior and the senior gaining-and-sharing phases. In the junior phase, initial solutions are produced; later, they are sent to the senior phase to interact with others. This is the key idea behind the GSK algorithm. Many variants of the GSK algorithm have been developed to demonstrate its capability in solving real-world optimization problems [51,52,53].
-
Grey Wolf optimizer (GWO)
The GWO algorithm, proposed by Mirjalili et al. [22], is modelled after the natural leadership structure and hunting mechanism of grey wolves. For modelling the leadership structure, four sorts of grey wolves are used: alpha, beta, delta and omega. Furthermore, the three primary phases of hunting are implemented: searching for prey, encircling prey and attacking prey. Grey wolves have the capacity to locate and surround prey; the alpha is generally in charge of the hunt, and the beta and delta may occasionally hunt as well. The wolves separate to search for prey and then converge to attack it. The proposed technique was evaluated on 29 benchmark test functions and outperformed the comparison algorithms. Researchers have since developed variants of GWO to solve different problems.
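A minimal sketch of the GWO position update follows, assuming the standard formulation in [22]: each wolf moves to the average of three positions estimated from the alpha, beta and delta wolves, with the coefficient a decreasing linearly from 2 to 0 to shift from exploration to exploitation.

```python
import numpy as np

def gwo(f, bounds, n_wolves=20, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    low, high = np.array(bounds).T
    pos = rng.uniform(low, high, (n_wolves, dim))
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in pos])
        order = np.argsort(fit)
        alpha, beta, delta = pos[order[:3]]       # three best wolves
        if fit[order[0]] < best_f:
            best_x, best_f = alpha.copy(), float(fit[order[0]])
        a = 2 - 2 * t / iters                     # decreases linearly 2 -> 0
        new = np.empty_like(pos)
        for i, x in enumerate(pos):
            estimates = []
            for leader in (alpha, beta, delta):
                A = a * (2 * rng.random(dim) - 1)  # exploration coefficient
                C = 2 * rng.random(dim)
                estimates.append(leader - A * np.abs(C * leader - x))
            new[i] = np.mean(estimates, axis=0)    # average of the three pulls
        pos = np.clip(new, low, high)
    return best_x, best_f

best_x, best_f = gwo(lambda x: float(np.sum(x**2)), [(-5, 5)] * 3)
```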
-
Particle swarm optimization (PSO)
PSO is a nature-inspired, stochastic optimisation technique developed by James Kennedy and Russell Eberhart in 1995 [23] to solve computationally hard optimisation problems. Since then, it has been applied to a wide variety of search and optimisation problems. It abstracts the collective behaviour of swarms, such as flocks of birds or schools of fish. It is a population-based algorithm in which each particle represents a candidate solution. Each particle updates its current position via its velocity vector and attempts to find the optimal solution.
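The velocity and position updates can be sketched as follows; this is a minimal global-best PSO with an inertia weight, and the constants are commonly used defaults rather than the settings of any experiment in this paper.

```python
import numpy as np

def pso(f, bounds, n_particles=30, w=0.7, c1=1.5, c2=1.5, iters=200, seed=1):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    low, high = np.array(bounds).T
    x = rng.uniform(low, high, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + cognitive + social components.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, low, high)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

best_x, best_f = pso(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)
```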
-
Teaching Learning based optimization algorithm (TLBO)
This technique models the influence of a teacher on students. TLBO, like other nature-inspired algorithms, is a population-based approach that progresses toward the global solution through a population of solutions [24]. The population is defined as a group or class of students. The TLBO procedure is divided into two parts, the ‘Teacher Phase’ and the ‘Learner Phase’, which correspond to learning from a teacher and learning through interaction between learners, respectively. It has been successfully applied to several numerical optimization problems and has proved its superiority in solving them.
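The two phases can be sketched as follows; this is a minimal illustrative implementation of TLBO on the sphere function, not the exact code used in the experiments.

```python
import numpy as np

def tlbo(f, bounds, pop_size=20, iters=100, seed=1):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    low, high = np.array(bounds).T
    pop = rng.uniform(low, high, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        # Teacher phase: move each learner toward the teacher (best solution).
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)               # teaching factor in {1, 2}
        for i in range(pop_size):
            new = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean),
                          low, high)
            fn = f(new)
            if fn < fit[i]:                   # greedy acceptance
                pop[i], fit[i] = new, fn
        # Learner phase: each learner interacts with a random peer.
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])
            step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
            new = np.clip(pop[i] + rng.random(dim) * step, low, high)
            fn = f(new)
            if fn < fit[i]:
                pop[i], fit[i] = new, fn
    return pop[fit.argmin()], float(fit.min())

best_x, best_f = tlbo(lambda x: float(np.sum(x**2)), [(-5, 5)] * 3)
```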
2.1.2 Advanced algorithms
-
LSHADE [25]
LSHADE is an extended version of the previously developed success-history-based adaptive DE (SHADE) algorithm [54]. The DE algorithm has three main control parameters: population size, scaling factor F and crossover rate CR. The SHADE algorithm employs historical memories \(M_{CR}\) and \(M_F\) that store sets of CR and F values that have worked well in the past. Building on SHADE, Tanabe and Fukunaga [25] developed the LSHADE algorithm, which additionally controls the population size: a linear population size reduction formula gradually decreases the population size during a DE run. LSHADE was applied to the benchmark suite of the special session on real-parameter single-objective optimization, outperformed the other algorithms, and was ranked the winner of the CEC2014 competition.
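The linear population size reduction rule itself is simple; the sketch below shows the interpolation between the initial and minimum population sizes as the evaluation budget is consumed (the parameter values are illustrative).

```python
def lshade_pop_size(nfe, max_nfe, np_init=100, np_min=4):
    """Linear population size reduction: interpolate from np_init at the
    start of the run (nfe = 0) down to np_min when the budget is spent
    (nfe = max_nfe)."""
    return round(np_init + (np_min - np_init) * nfe / max_nfe)

# Full population at the start, the minimum at the end of the budget.
sizes = [lshade_pop_size(n, 10000) for n in (0, 5000, 10000)]
```

Whenever the computed size drops below the current population size, the worst individuals are removed from the population.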
-
AGSK [26]
AGSK is an adaptive version of the gaining-sharing knowledge-based (GSK) optimization algorithm. The GSK algorithm has two main parameters, the knowledge factor \(k_f\) and the knowledge ratio \(k_r\), which control the junior and senior phases during optimization. An adaptation process has therefore been included for these parameter settings. Moreover, a population size reduction scheme is employed to decrease the population size gradually. The performance of the resulting AGSK algorithm was evaluated on the CEC2020 benchmark functions. It performed significantly better than other comparative algorithms due to its remarkable balance between exploration and exploitation, and was ranked second among all competing algorithms in CEC2020.
-
EBOwithCMAR [27]
This algorithm combines a global optimizer with a local optimizer. The global optimizer, the effective butterfly optimizer (EBO), is a self-adaptive version of the butterfly optimization algorithm that uses success-history-based adaptation and linear population size reduction to increase the diversity of the population. It also incorporates a covariance matrix adapted retreat (CMAR) phase to produce new solutions, which increases the local search capability of the EBO algorithm. This hybrid algorithm was tested on the single-objective CEC 2017 benchmark problems and was ranked the winner of CEC2017.
-
IMODE [28]
IMODE is an improved multi-operator differential evolution algorithm. It begins by segmenting the initial population into several sub-populations, each of which is evolved using a different DE variant. The size of each sub-population is continually adjusted according to two indicators: solution quality and sub-population diversity. The algorithm was tested on the 10 benchmark functions of the CEC2020 competition and was ranked the winner among all competing algorithms.
-
ELSHADE_SPACMA [29]
Many researchers and practitioners have introduced different variants of the DE algorithm. In DE, the crossover rate reflects the chance that the trial solution inherits a certain gene. Montgomery and Chen [55] showed that CR is a very sensitive parameter when solving optimization problems. To tackle this problem, Mohamed et al. [56] proposed the LSHADE_SPACMA algorithm. Hadi et al. [29] then enhanced the performance of LSHADE_SPACMA by developing the ELSHADE_SPACMA algorithm with two improvements. The first was a hybridization of LSHADE_SPACMA and adaptive guided differential evolution (AGDE), in which the whole population is assigned to LSHADE_SPACMA for one generation and then to the AGDE algorithm. The second improvement modified the mutation parameter to balance exploration and exploitation. The proposed technique was applied to the CEC2017 benchmark problems and obtained third rank among all algorithms.
2.1.3 CEC2021’s algorithms
In this subsection, we discuss and summarize the algorithms that participated in the CEC2021 competition.
-
APGSK-IMODE [35]
APGSK-IMODE is a hybrid algorithm based on the adaptive-parameters gaining-sharing knowledge algorithm (APGSK) [57] and the improved multi-operator differential evolution algorithm (IMODE) [28]. For a predetermined number of generations (a cycle), two sub-populations are evolved toward an optimal or near-optimal solution using APGSK and IMODE. Each algorithm then evolves its sub-population in the next cycle with a probability that is adjusted according to each algorithm’s solution quality. It is worth mentioning that APGSK-IMODE was the winner of the CEC2021 Single Objective Bound Constrained Numerical Optimization competition for the non-shifted cases.
-
SOMA-CLP [30]
Kadavy et al. [30] introduced a new Self-organizing Migrating Algorithm (SOMA) variant with clustering-aided migration and adaptive perturbation vector control (SOMA-CLP). SOMA-CLP is a recent variation of SOMA that improves upon SOMA-CL [58]. SOMA-CLP promotes the global transition from exploration to exploitation by linearly adapting the prt control parameter. Its workflow is split into three phases: search space mapping, mapped space clustering, and further screening of the areas of interest revealed in the first phase.
-
L-SHADE-OrdRw [59]
L-SHADE-OrdRw, an improved L-SHADE algorithm based on ordered and roulette-wheel-based mutation, was proposed in [59]. To further improve the performance of L-SHADE-OrdRw, an adaptive and non-adaptive ensemble sinusoidal method is used to automatically adjust the scaling factor values. Moreover, a selection pressure based on the roulette-wheel strategy, which puts more weight on better solutions, is utilized to select random solutions in the mutation strategy. To further enhance its efficiency, L-SHADE-OrdRw uses a local search based on Gaussian walks [60].
-
NL-SHADE-RSP [31]
Stanovov et al. [31] proposed an improved LSHADE algorithm, called NL-SHADE-RSP. NL-SHADE-RSP incorporates different novel parameter control methods including nonlinear population size reduction, rank-based selective pressure in mutation strategy to select one of two mutation operators (with and without archive), adaptive archive set usage, and adapting the crossover rate control based on some rules. It also utilizes a mechanism to adapt the p value in the current-to-pbest mutation operator.
-
j21 [32]
The j21 algorithm is built on the self-adaptive jDE100 [61] and j2020 [62] algorithms, which use similar processes. It uses two populations, a restart mechanism in both populations, a crowding mechanism, and a system to choose vectors from both sub-populations for the mutation process. Unlike the prior two algorithms, j21 uses a mechanism to reduce the population size throughout evolution, and its self-adaptive control parameter CR has a wider range of values.
-
RB-IPOP-CMAES [33]
RB-IPOP-CMAES [33] is an extension of IPOP-CMAES in which the previous population midpoint fitness (PPMF) is used to adapt the 1/5th success rule to the CMA-ES algorithm. PPMF adjusts the step-size multiplier \(\sigma\) by comparing the current population’s fitness values to the previous population’s midpoint fitness. The step-size is changed to make the success probability fluctuate around a reference target value.
-
MadDE [34]
In MadDE [34], the control settings and search techniques are adapted simultaneously during the optimization process. It builds on self-adaptive DE algorithms such as JADE, SHADE [54] and LSHADE [25], whose core structure has been used to build modern DE algorithms [28]. Similar to IMODE [28], it uses several mutation and crossover operators to build trial vectors; using various search operators makes it more likely to deliver consistent results across a wide range of objective function landscapes. The MadDE algorithm has several characteristics. First, it mixes existing powerful mutation strategies and selects among them probabilistically, where the likelihood of choosing a mutation strategy depends on its historical success rate. Second, it uses probabilistic crossover to choose between binomial and q-best binomial crossover. Third, it adapts the DE control parameters NP, Cr and F using the LSHADE mechanism [25].
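The first characteristic, success-history-based probabilistic strategy selection, can be illustrated generically. The sketch below is not MadDE’s exact update rule; it shows the principle with hypothetical per-strategy success probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)
strategies = ["rand/1", "best/1", "current-to-pbest/1"]
success = np.ones(len(strategies))   # success counts (uniform prior)
trials = np.ones(len(strategies))    # attempt counts (uniform prior)

for step in range(1000):
    # Selection probability proportional to the historical success rate.
    rates = success / trials
    p = rates / rates.sum()
    k = rng.choice(len(strategies), p=p)
    trials[k] += 1
    # Hypothetical feedback: strategy 2 succeeds most often in this example.
    if rng.random() < (0.2, 0.4, 0.8)[k]:
        success[k] += 1
```

Over time, selection concentrates on the strategy with the highest observed success rate, while the others are still tried occasionally.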
-
DEDMNA [36]
Bujok and Kolenovsky [36] propose and test a new DE algorithm with distance-based mutation selection, population size reduction, and an archive for good old solutions, called DEDMNA. It is an improved variant of the DE algorithm with mutation-strategy selection based on the mutant point distance (DEMD) [63]. DEDMNA uses a linear population size reduction approach and an archive to store old solutions.
In summary, this section has discussed the algorithms used to solve the CEC2021 benchmark functions, categorized as basic algorithms, advanced algorithms and the recently developed algorithms of the CEC2021 competition. In this paper, a comparison between these algorithms is conducted to assess their performance on the parameterized CEC2021 problems; to the best of the authors’ knowledge, no such comparison exists in the literature. As the results and analysis section shows, the recently developed algorithms perform better than the others.
3 Parametrized benchmark
Based on the preceding discussion, it can be seen that from CEC'05 to CEC'2020, regardless of the type and features of the benchmark problems, the same experimental analysis approach has been used. All benchmark problems have been used with fixed parameter settings for all features, i.e. no different values for different features have been experimentally investigated. All benchmark problems have been treated as black boxes, with no provision for changes that would allow evaluating the performance of algorithms under different combinations of parameter values. Therefore, to the best of our knowledge, this is the first study that opens this black box for all benchmark problems and tests different DE-based algorithms with different predetermined levels of the controlled features.
Benchmarks play a very important role in the improvement of global meta-heuristics. The two benchmark series CEC [64] and COCO [39] were proposed to check the performance of real-parameter meta-heuristic algorithms. In these competitions, the benchmark functions are transformed using operators such as bias, rotation and shift in the objective function [14, 37, 65]. Various combinations of these operators can be formed; in total, eight combinations are possible:
-
None of the operators exists (basic function).
-
Only bias exists.
-
Only shift exists.
-
Only rotation exists.
-
Bias and shift exist but rotation does not.
-
Bias and rotation exist but shift does not.
-
Shift and rotation exist but bias does not.
-
Bias, shift and rotation exist simultaneously.
The main aim of this benchmarking is to find the best transformation by checking the effect of every possible combination of the operators. The resulting set is known as the parameterized benchmark.
In the CEC'20 [17] benchmark, a new objective function is defined by including a shift vector, a rotation matrix and a bias in the original objective function. The variable is shifted by the vector \(s_i\), multiplied by the rotation matrix R, and the bias \(F_i^*\) is added to the original objective function. Therefore, the mathematical formulation of the new benchmark is given as:

\(F_i(X) = f_i\big(R\,(X - s_i)\big) + F_i^*\)
\(F_i(X)\) is known as the parameterized benchmark function, on which the effect of the operators is tested. There are some detailed variations for hybrid and composition functions, which make the full pattern slightly more complicated. The decomposition makes it possible to define binary parameters that indicate which transformations are applied, and it ensures that predictors are standardized to the same scale. The values taken by the parameters are presented in Table 1, with the reference equation number from which each value can be obtained. The bias, shift and rotation operators can be controlled, i.e. activated or deactivated. There are also other characteristics, such as problem type, separability, symmetry and the number of local optima, which can only be observed: the problem type can be unimodal, simple multimodal, hybrid or composition; the problem may be separable or non-separable; a few or a huge number of local optima may exist; and the shape of the problem may be symmetric or asymmetric. Since the values of these characteristics are fixed, they can only be observed. Thus, a total of 8 combinations are possible for each function. One example illustrates the binary parameters: if only the shift operator exists, and the rotation and bias operators do not, then \(R=I\) (the identity matrix) and \(F_i^*=0\) must be set. A detailed description of each binary operator applied to function \(F_i\) is shown in Table 2. To show the effect of these configurations on the benchmark set, \(F_9\) has been selected as an example. The 3-D maps of the 2-dimensional \(F_9\) in all 8 configurations are shown in Fig. 1. Subfigure (a) shows the basic 3-D map of the function, i.e. with no parameterization. Subfigures (b), (c) and (d) present the function with only the shift parameter, only rotation and only bias, respectively.
In subfigures (e), (f) and (g), two operators are used simultaneously: shift with rotation, shift with bias and rotation with bias, respectively. Subfigure (h) shows all three parameters applied to the original function. These figures illustrate the effect of all parameters on the original benchmark functions. Moreover, the corresponding contour maps for \(F_9\) are drawn in Fig. 2. Interested readers can find full details in the CEC'21 technical report [18].
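The parametrization described above can be sketched in a few lines of NumPy. The base function below is a toy ellipsoid, not one of the actual CEC2021 functions, and the rotation matrix is a random orthogonal matrix; the sketch only demonstrates how the binary flags switch the shift, rotation and bias operators on and off.

```python
import itertools
import numpy as np

def ellipsoid(z):
    """Toy base function f_i (the real CEC functions are more involved)."""
    d = len(z)
    return float(np.sum(np.arange(1, d + 1) * z ** 2))

def parametrized(f, x, use_bias, use_shift, use_rotation, s, R, bias):
    """F_i(X) = f_i(R (X - s_i)) + F_i^*; a deactivated operator is
    replaced by s = 0, R = I or F_i^* = 0, respectively."""
    x = np.asarray(x, dtype=float)
    z = x - s if use_shift else x
    if use_rotation:
        z = R @ z
    return f(z) + (bias if use_bias else 0.0)

rng = np.random.default_rng(0)
dim = 2
s = rng.uniform(-1.0, 1.0, dim)                      # shift vector s_i
R, _ = np.linalg.qr(rng.standard_normal((dim, dim))) # random orthogonal matrix
x = np.array([0.5, -0.3])

# Enumerate all 2^3 = 8 configurations, labelled (bias, shift, rotation).
for flags in itertools.product([0, 1], repeat=3):
    label = "".join(map(str, flags))
    val = parametrized(ellipsoid, x, *flags, s=s, R=R, bias=100.0)
    print(label, round(val, 4))
```

With all flags set to zero the function reduces to the basic landscape; setting only the bias flag simply lifts every value by \(F_i^*\), which is why bias alone changes the reported errors but not the shape of the landscape.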
4 Numerical experiments
This section presents numerical experiments on the performance of meta-heuristic algorithms on the CEC 2021 benchmark functions. The experiments are conducted on two types of meta-heuristic algorithms, i.e. basic algorithms and advanced algorithms. The basic algorithms include the standard versions of differential evolution (DE) [20], the gaining-sharing knowledge-based algorithm (GSK) [21], the grey wolf optimizer (GWO) [22], particle swarm optimization (PSO) [23], and the teaching-learning-based optimization algorithm (TLBO) [24]. The advanced algorithms are the adaptive and self-adaptive variants that have won previously held CEC competitions:
-
LSHADE [25] was the winner of the CEC 2014 real-parameter single-objective optimization competition.
-
AGSK [26] was ranked second in the CEC 2020 real-parameter single-objective optimization competition.
-
EBOwithCMAR [27] is the winner of CEC2017 single objective with bound constraints competition.
-
IMODE [28] is the winner of CEC2020 single objective with bound constraints competition.
-
ELSHADE_SPACMA [29] was ranked third in the CEC 2018 real-parameter single-objective optimization competition; it is an enhanced version of the LSHADE-SPACMA algorithm [56], which was ranked fourth in the CEC 2017 competition. IMODE, LSHADE and ELSHADE_SPACMA are all DE-based algorithms.
Moreover, the performance of the state-of-the-art algorithms is evaluated using the evaluation criteria of [18].
The parameter values for the basic algorithms are taken from the GSK algorithm paper [21], and those for the advanced algorithms are taken from their original papers. The performance of the algorithms for 10D and 20D is compared using two criteria: (i) among the parameterized vectors for each algorithm, and (ii) among the results obtained by all algorithms for each parameterized vector.
4.1 Evaluation criteria
Algorithms are evaluated with a score composed of two parts, SE and SR, both of which assign equal weight to the 10- and 20-dimensional results. SE is based on sums of normalized error values, while SR is based on sums of ranks. Each part contributes 50% to the total score, which has a maximum value of 100.
In particular, SE begins with an average of the two sums of normalized function error values:

\(SNE = 0.5 \sum ne_{10D} + 0.5 \sum ne_{20D}\)

where ne is an algorithm's normalized error value for a given function, configuration and dimension, and SNE is the weighted sum of normalized error values over all functions, configurations and dimensions. For this competition, ne is defined as:

\(ne = \dfrac{f(x_{best}) - f(x^*)}{f(x_{best})_{max} - f(x^*)}\)

where \(f(x_{best})\) is the algorithm's best result out of 30 trials, \(f(x^*)\) is the function's known optimal value, and \(f(x_{best})_{max}\) is the largest \(f(x_{best})\) among all algorithms for the given function/dimension combination. Once SNE has been determined for all algorithms, SE is computed as:

\(SE = \left(1 - \dfrac{SNE - SNE_{min}}{SNE}\right) \times 50\)

where \(SNE_{min}\) is the minimal sum of normalized errors among all algorithms. Likewise, SR1 begins with an average of the two sums of ranks:

\(SR1 = 0.5 \sum rank_{10D} + 0.5 \sum rank_{20D}\)

where rank is the algorithm's rank among all algorithms for a given function, configuration and dimension, based on its mean error value (not normalized). Once SR1 has been determined for all algorithms, SR is computed as:

\(SR = \left(1 - \dfrac{SR1 - SR1_{min}}{SR1}\right) \times 50\)

where \(SR1_{min}\) is the minimal sum of ranks among all algorithms. The final score is the sum of SE and SR:

\(Score = SE + SR\)
The entries are ranked based on this score.
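The scoring procedure above can be sketched in a few lines of Python. The numbers below are hypothetical, and both parts are computed assuming the equivalent form \(S = (S_{min}/S) \times 50\), which follows algebraically from \((1 - (S - S_{min})/S) \times 50\).

```python
def normalized_errors(best_by_algo, f_star):
    """ne = (f(x_best) - f*) / (f(x_best)_max - f*) for one
    function/dimension; best_by_algo maps algorithm -> best-of-30-trials."""
    fmax = max(best_by_algo.values())
    denom = fmax - f_star
    return {a: 0.0 if denom == 0 else (v - f_star) / denom
            for a, v in best_by_algo.items()}

def score_from_sums(sums):
    """S = (S_min / S) * 50, applied to both the SNE and SR1 sums."""
    s_min = min(sums.values())
    return {a: 50.0 * s_min / s for a, s in sums.items()}

# Toy example with two algorithms (hypothetical averaged sums).
sne = {"algA": 0.2, "algB": 0.5}   # averaged sums of normalized errors
sr1 = {"algA": 1.5, "algB": 2.5}   # averaged sums of ranks
se = score_from_sums(sne)
sr = score_from_sums(sr1)
total = {a: se[a] + sr[a] for a in sne}
print(total)   # the algorithm minimizing both sums scores exactly 100
```

An algorithm that attains both \(SNE_{min}\) and \(SR1_{min}\) receives 50 points from each part, i.e. the maximum total score of 100.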
Also, as the results in this paper are presented as best, worst, median, mean and standard deviation values for each problem over the same number of runs, nonparametric tests such as the Friedman ranking test can be used to compare the competing algorithms.
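As an example of such a nonparametric comparison, the following sketch applies SciPy's Friedman test to hypothetical mean-error data for three algorithms on ten functions, and also computes the average ranks of the kind reported in the paper's Friedman tables.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical mean errors of 3 algorithms on 10 benchmark functions
# (in the paper, each cell would be the mean over 30 runs).
rng = np.random.default_rng(1)
alg_a = rng.uniform(0.0, 1.0, 10)
alg_b = alg_a + rng.uniform(0.0, 0.5, 10)   # consistently worse than alg_a
alg_c = alg_a + rng.uniform(0.0, 1.0, 10)

stat, p = friedmanchisquare(alg_a, alg_b, alg_c)
print(f"Friedman statistic={stat:.3f}, p={p:.4f}")

# Average rank per algorithm (lower error -> better rank). With continuous
# random data there are no ties, so double argsort gives ranks 1..3.
errors = np.vstack([alg_a, alg_b, alg_c])
ranks = errors.argsort(axis=0).argsort(axis=0) + 1
print("mean ranks:", ranks.mean(axis=1))
```

A small p-value indicates that at least one algorithm's ranking differs systematically across the functions, which justifies comparing the mean ranks themselves.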
4.2 Results of basic algorithms
This section describes the performance of all basic algorithms on the CEC2021 benchmark functions. The statistical results of all algorithms are given in Tables S1-S5 in the supplementary file, and Tables 3, 4, 5 and 6 show the results for the above-mentioned criteria. Table 3 presents the rank, evaluation scores (SE and SR) and total score of each algorithm for each parameterized vector for 10 dimensions. It shows that the DE, GSK and PSO algorithms obtain the first rank for the basic functions, i.e. when no parameterized vector is applied. The GWO and TLBO algorithms perform similarly for the basic functions and when only the bias operator is applied. Moreover, when the three binary operators are applied simultaneously, the DE and PSO algorithms obtain the last rank, which implies that this combination has a significant effect on both algorithms. Similarly, the GSK, GWO and TLBO algorithms are affected equally by bias, shift and rotation together (111) and by shift and rotation (011), as they achieve the same rank for both parameterized vectors.
For 20 dimensions, however, Table 4 shows that the PSO and TLBO algorithms obtain the first rank when the bias operator is applied. The PSO algorithm is significantly affected by the shift and rotation operators applied simultaneously, whereas the TLBO algorithm is affected similarly by shift and rotation (011) and by bias, shift and rotation (111), performing the same in both cases. The DE and GWO algorithms perform similarly for the basic functions and for the bias operator only. The GSK algorithm obtains the first rank for the basic functions.
According to criterion (ii), the comparison is made among the algorithms for each parameterized vector. Table 5 shows the comparative results for dimension 10. It can be observed that the GWO algorithm outperforms the others for the basic functions and for the bias operator only. The TLBO algorithm takes first place for the rotation-only operator (001) and for bias and rotation (101). The DE algorithm obtains the first rank in 4 out of 8 cases, with a total score of almost 100; moreover, it takes second place in two cases ((001) and (101)). For 20 dimensions, however, the TLBO algorithm obtains the first rank in 4 cases ((000), (011), (100) and (111)) and second place in the other cases, with a total score above 90 in 6 out of 8 cases. This implies that in higher dimensions the TLBO algorithm performs better than the other algorithms. When the three operators (bias, shift and rotation) are applied to the basic functions, the TLBO algorithm outperforms the others and the worst performance comes from the GWO algorithm, whose total score of 41.30 is very low.
An overall comparison is made among the basic algorithms and among the advanced algorithms. Table 7 shows the overall comparative results obtained by all algorithms. Among the basic algorithms, TLBO performs better than the others in each parameterized vector and obtains the first rank, followed by DE, GSK, PSO and GWO. The score of the second-ranked DE algorithm is only 81.01 out of 100.
Table 8 presents the results obtained by the basic algorithms DE, GSK, GWO, PSO and TLBO based on the Friedman ranking test. For the 10D test problems, the DE algorithm obtained the best rank for the problems with shift, with shift and rotation, with bias and shift, and with bias, shift and rotation, while the TLBO algorithm achieved the best rank for the problems with the other operators.
For the 20D test problems, DE obtained the best rank for the problems with shift and those with bias and shift. GWO achieved the best rank for the basic and bias-only test problems, while GSK obtained the best rank for the remaining categories.
4.3 Results of Advanced Algorithms
This section covers the performance of the advanced algorithms on the CEC2021 benchmark functions. The statistical results of all algorithms are given in Tables S6-S10 in the supplementary file. Tables 9, 10, 11 and 12 show the results obtained by the advanced algorithms. Table 9 shows the rank obtained by each algorithm for each parameterized vector for dimension 10. The LSHADE algorithm performs best when the bias operator is applied, obtaining the first rank in this case; the shift and rotation operators have a significant effect on it, so it does not perform well in the (011) case. AGSK and EBOwithCMAR obtain the first rank only for the basic functions, i.e. when no operator is applied, whereas the rotation-only operator has a significant effect on AGSK and the combination of shift and rotation affects EBOwithCMAR. The IMODE algorithm performs similarly for the basic functions and with the bias operator, so it obtains the same rank in both cases. The ELSHADE_SPACMA algorithm obtains the first rank for the basic functions only and does not perform well when all three operators are applied.
Table 10 presents the results of the advanced algorithms for dimension 20. When the dimension increases, the ELSHADE_SPACMA and IMODE algorithms obtain the first rank in the (100) case only. The AGSK and EBOwithCMAR algorithms excel only on the basic functions, and the LSHADE algorithm obtains the first rank only when the bias operator is included. According to Table 10, the LSHADE and ELSHADE_SPACMA algorithms are significantly affected by all three operators applied simultaneously.
Next, the comparison is made among the advanced algorithms for each operator. Table 11 presents the ranks and scores for dimension 10. It shows that the IMODE algorithm obtains the first rank in 6 out of 8 cases, and the EBOwithCMAR algorithm takes the first rank in the remaining two; IMODE does not perform as well when shift and rotation (011) or all three operators (111) are applied. For 20 dimensions, the IMODE algorithm outperforms the other advanced algorithms in 5 cases, as depicted in Table 12. The AGSK algorithm obtains the first rank in only one case, (010), and the EBOwithCMAR algorithm takes the first rank in the (011) and (111) cases.
For the overall comparison, the results are shown in Table 13. The IMODE algorithm outperforms the others in the advanced-algorithm category and takes first place, followed by ELSHADE_SPACMA, LSHADE, EBOwithCMAR and AGSK. It can be said that TLBO and IMODE are the most efficient and capable of the mentioned algorithms for solving the CEC2021 benchmark problems.
Table 14 presents the results obtained by the advanced algorithms LSHADE, AGSK, EBOwithCMAR, IMODE and ELSHADE_SPACMA based on the Friedman ranking test. For the 10D test problems, the EBOwithCMAR algorithm obtained the best rank for the problems with the (011) and (111) operators, while the IMODE algorithm achieved the best rank for the remaining problems.
For the 20D test problems, AGSK obtained the best rank for the (010) and (110) problems. EBOwithCMAR achieved the best rank based on the Friedman test for the (011) and (111) test problems, while IMODE attained the best rank for the remaining test functions.
4.4 Results and discussion of CEC2021 algorithms
In this section, the performance of the algorithms that participated in the CEC2021 competition is presented and analyzed. The algorithms are listed below:
1. SOMA-CLP: Self-organizing Migrating Algorithm with CLustering-aided migration and adaptive Perturbation vector control [30];
2. MLS-LSHADE: a Multi-start Local Search Algorithm with L-SHADE;
3. L-SHADE-OrdRw: L-SHADE based on ordered and roulette-wheel-based mutation;
4. NL-SHADE-RSP: L-SHADE algorithm with Adaptive Archive and Selective Pressure [31];
5. j21: Self-adaptive Differential Evolution Algorithm with Population Size Reduction for Single Objective Bound-Constrained Optimization [32];
6. RB-IPOP-CMAES: a New Step-Size Adaptation Rule for CMA-ES Based on the Population Midpoint Fitness [33];
7. MadDE: Improved DE through Bayesian Hyperparameter Optimization [34];
8. APGSK-IMODE: Gaining-Sharing Knowledge Algorithm with Adaptive Parameters Hybrid with Improved Multi-operator DE algorithm [35];
9. DEDMNA: DE with Distance-based Mutation-selection [36].
Detailed results (best, worst, median, mean and standard deviation) obtained from the above-mentioned algorithms for 10D and 20D are presented in Tables S11-S19 of the supplementary material.
Tables 15, 16, 17 and 18 present the results produced by the CEC2021 algorithms. Table 15 presents the ranks obtained by the competing algorithms for every parameterized vector on the 10D problems. The APGSK_IMODE and MadDE algorithms obtain the first rank for the basic and bias cases. They also give better results for rotation and for the combination of bias and rotation. Their performance deteriorates when the shift operator is applied.
A comparison is conducted among the CEC2021 algorithms for every parameterized operator. Table 17 shows the ranks and scores for problems with 10 variables. The results show that APGSK_IMODE achieves the first rank in 5 out of 8 cases, DEDMNA in 2 cases, and NL-SHADE-RSP in the remaining case. APGSK_IMODE is ranked 4th on the (010) and (110) problems and 3rd when all three operators are used (111).
Table 18 shows the ranks and scores for problems with 20 variables. The results show that APGSK_IMODE and MadDE each obtain the first rank in 4 out of 8 cases; however, APGSK_IMODE obtains a better rank than MadDE in the other 4 cases. J21 and NL-SHADE-RSP each obtain the first rank in 2 of the 8 cases.
5 Conclusions and future directions
This paper has presented the performance of various meta-heuristic algorithms on the CEC 2021 benchmark functions. The benchmark objective functions are parameterized by including different combinations of the bias, shift and rotation operators, and the parameterized functions are used to evaluate several basic and advanced meta-heuristic algorithms. Performance is measured with an evaluation metric composed of two parts, the sum of normalized error values and the sum of ranks, each contributing 50% to a total score of 100. Ten benchmark functions are considered, with 10 and 20 dimensions, and two types of meta-heuristic algorithms, basic and advanced, are evaluated. The results show that, among the basic algorithms, the teaching-learning-based optimization algorithm outperforms the others, obtaining the first rank in the overall comparison. It gives the best results with the bias operator for 20 dimensions, although it is significantly affected by both the shift-and-rotation vector (011) and the bias-shift-rotation vector (111). Among the advanced algorithms, IMODE performs better than the others and takes first place. It performs best with the bias-only case for 20 dimensions, gives the best results among all algorithms in the (000), (001), (010), (100), (101) and (110) cases for 10 dimensions, and obtains the total score of 100 for the (000), (001), (100), (101) and (110) parameterized vectors for 20 dimensions. Among the algorithms that participated in the CEC2021 competition, APGSK-IMODE obtains the best results for both 10D and 20D.
This paper helps researchers by bringing various meta-heuristic algorithms together under one roof, presenting the results of standard and advanced algorithms for 10 and 20 dimensions. The study can be extended to higher dimensions such as 30, 50, 100, 500 and 1000. Besides the mentioned algorithms, more algorithms should be included to check their performance on the benchmark functions, and other evaluation criteria or performance metrics should be developed to evaluate the algorithms. Interested researchers can consider these objectives in further study.
References
Michalewicz Z, Dasgupta D, Le Riche RG, Schoenauer M (1996) Evolutionary algorithms for constrained engineering problems. Comput Ind Eng 30(4):851–870
Gusel L, Rudolf R, Brezocnik M (2015) Genetic based approach to predicting the elongation of drawn alloy. Int J Simul Modell 14(1):39–47
Zhang J, Zhan Z-H, Lin Y, Chen N, Gong Y-J, Zhong J-H, Chung HS, Li Y, Shi Y-H (2011) Evolutionary computation meets machine learning: a survey. IEEE Comput Intell Mag 6(4):68–75
Collange G, Delattre N, Hansen N, Quinquis I, Schoenauer M (2010) Multidisciplinary optimization in the design of future space launchers
Mora AM, Squillero G (2015) Applications of evolutionary computation. In: Proceedings 18th European Conference, EvoApplications 2015, Copenhagen, Denmark, April 8-10, 2015,. Springer, vol. 9028
Forestiero A (2021) Metaheuristic algorithm for anomaly detection in internet of things leveraging on a neural-driven multiagent system. Knowl-Based Syst 228:107241
Cicirelli F, Forestiero A, Giordano A, Mastroianni C (2016) Transparent and efficient parallelization of swarm algorithms. ACM Trans Autonom Adap Syst (TAAS) 11(2):1–26
Mersmann O, Preuss M, Trautmann H, Bischl B, Weihs C (2015) Analyzing the bbob results by means of benchmarking concepts. Evol Comput 23(1):161–185
Hansen N, Auger A, Ros R, Finck S, Pošík P (2010) Comparing results of 31 algorithms from the black-box optimization benchmarking bbob-2009. In: Proceedings of the 12th annual conference companion on Genetic and evolutionary computation, pp. 1689–1696
Hansen N, Auger A, Finck S, Ros R (2012) Real-parameter black-box optimization benchmarking: experimental setup,” Orsay, France: Université Paris Sud, Institut National de Recherche en Informatique et en Automatique (INRIA) Futurs, Équipe TAO. Tech, Rep
Hansen N, Auger A, Mersmann O, Tusar T, Brockhoff D (2016) Coco: a platform for comparing continuous optimizers in a black-box setting. arXiv preprint arXiv:1603.08785
Suganthan PN, Hansen N, Liang JJ, Deb K, Chen Y-P, Auger A, Tiwari S (2005) Problem definitions and evaluation criteria for the cec 2005 special session on real-parameter optimization. KanGAL Rep 2005005(2005):2005
Liang J, Qu B, Suganthan P, Hernández-Díaz AG (2013) Problem definitions and evaluation criteria for the CEC 2013 special session and competition on real-parameter optimization. Technical Report 201212
Liang J-J, Suganthan PN, Deb K (2005) Novel composition test functions for numerical global optimization. In: Proceedings 2005 IEEE swarm intelligence symposium, SIS 2005. IEEE, pp. 68–75
Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Technical Report, Nanyang Technological University, Singapore 635
Wu G, Mallipeddi R, Suganthan PN (2017) Problem definitions and evaluation criteria for the cec 2017 competition on constrained real-parameter optimization. National University of Defense Technology, Changsha, Hunan, PR China and Kyungpook National University, Daegu, South Korea and Nanyang Technological University, Singapore, Technical Report
Yue C, Price K, Suganthan P, Liang J, Ali M, Qu B, Awad N, Biswas P (2019) Problem definitions and evaluation criteria for the cec 2020 special session and competition on single objective bound constrained numerical optimization. Comput Intell Lab, Zhengzhou Univ., Zhengzhou, China, Tech. Rep, vol. 201911
Mohamed A, Hadi A, Mohamed A, Agrawal P, Kumar A, Suganthan P (2020) Problem definitions and evaluation criteria for the cec 2021 special session and competition on single objective bound constrained numerical optimization. In: Tech. Rep
García S, Molina D, Lozano M, Herrera F (2009) A study on the use of non-parametric tests for analyzing the evolutionary algorithms’ behaviour: a case study on the cec’2005 special session on real parameter optimization. J Heurist 15(6):617
Storn R, Price K (1997) Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359
Mohamed AW, Hadi AA, Mohamed AK (2020) Gaining-sharing knowledge based algorithm for solving optimization problems: a novel nature-inspired algorithm. Int J Mach Learn Cybern 11:1501–1529
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Kennedy J, Eberhart RC (1997) A discrete binary version of the particle swarm algorithm. In: IEEE international conference on systems, man, and cybernetics. Computational cybernetics and simulation, vol. 5. IEEE, pp. 4104–4108
Rao RV, Savsani VJ, Vakharia D (2012) Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems. Inf Sci 183(1):1–15
Tanabe R, Fukunaga AS (2014) Improving the search performance of shade using linear population size reduction. In: IEEE congress on evolutionary computation (CEC). IEEE, 2014, pp. 1658–1665
Mohamed AW, Hadi AA, Mohamed AK, Awad NH (2020) Evaluating the performance of adaptive gaining sharing knowledge based algorithm on CEC 2020 benchmark problems. In: 2020 IEEE congress on evolutionary computation (CEC). IEEE, pp. 1–8
Kumar A, Misra RK, Singh D (2017) Improving the local search capability of effective butterfly optimizer using covariance matrix adapted retreat phase. In: IEEE congress on evolutionary computation (CEC). IEEE, 2017, pp. 1835–1842
Sallam KM, Elsayed SM, Chakrabortty RK, Ryan MJ (2020) Improved multi-operator differential evolution algorithm for solving unconstrained problems. In: IEEE congress on evolutionary computation (CEC). IEEE, 2020, pp. 1–8
Hadi AA, Mohamed AW, Jambi KM (2021) Single-objective real-parameter optimization: enhanced lshade-spacma algorithm. In: Heuristics for optimization and learning. Springer, pp. 103–121
Kadavy T, Pluhacek M, Viktorin A, Senkerik R (2021) Soma-clp for competition on bound constrained single objective numerical optimization benchmark: a competition entry on bound constrained single objective numerical optimization at the genetic and evolutionary computation conference (gecco) 2021. In: Proceedings of the genetic and evolutionary computation conference companion, pp. 11–12
Stanovov V, Akhmedova S, Semenkin E (2021) NL-SHADE-RSP algorithm with adaptive archive and selective pressure for CEC 2021 numerical optimization. In: 2021 IEEE congress on evolutionary computation (CEC). IEEE, pp. 809–816
Brest J, Maučec MS, Bošković B (2021) Self-adaptive differential evolution algorithm with population size reduction for single objective bound-constrained optimization: algorithm j21. In: IEEE congress on evolutionary computation (CEC). IEEE, 2021, pp. 817–824
Warchulski E, Arabas J (2021) A new step-size adaptation rule for cma-es based on the population midpoint fitness. In: IEEE congress on evolutionary computation (CEC). IEEE, 2021, pp. 825–831
Biswas S, Saha D, De S, Cobb AD, Das S, Jalaian BA (2021) Improving differential evolution through Bayesian hyperparameter optimization. In: IEEE congress on evolutionary computation (CEC). IEEE, 2021, pp. 832–840
Mohamed AW, Hadi AA, Agrawal P, Sallam KM, Mohamed AK (2021) Gaining-sharing knowledge based algorithm with adaptive parameters hybrid with IMODE algorithm for solving CEC 2021 benchmark problems. In: 2021 IEEE congress on evolutionary computation (CEC). IEEE, pp. 841–848
Bujok P, Kolenovsky P (2021) Differential evolution with distance-based mutation-selection applied to CEC 2021 single objective numerical optimisation. In: 2021 IEEE congress on evolutionary computation (CEC). IEEE, pp. 849–856
Opara KR, Hadi AA, Mohamed AW (2020) Parametrized benchmarking: an outline of the idea and a feasibility study. In: Proceedings of the 2020 genetic and evolutionary computation conference companion, pp. 197–198
Mersmann O, Preuss M, Trautmann H (2010) Benchmarking evolutionary algorithms: towards exploratory landscape analysis. In: International conference on parallel problem solving from nature. Springer, pp. 73–82
Hansen N, Finck S, Ros R, Auger A (2009) Real-parameter black-box optimization benchmarking 2009: noiseless functions definitions
Friedman J, Hastie T, Tibshirani R (2001) The elements of statistical learning, vol 1. Springer, New York
Kaufman L, Rousseeuw PJ (2009) Finding groups in data: an introduction to cluster analysis. Wiley, New York, p 344
Mersmann O, Bischl B, Trautmann H, Preuss M, Weihs C, Rudolph G (2011) Exploratory landscape analysis. In: Proceedings of the 13th annual conference on genetic and evolutionary computation, pp. 829–836
Beume N, Naujoks B, Emmerich M (2007) Sms-emoa: Multiobjective selection based on dominated hypervolume. Eur J Oper Res 181(3):1653–1669
Auger A, Finck S, Hansen N, Ros R (2010) Bbob 2009: comparison tables of all algorithms on all noiseless functions
Borda Jd (1784) “Mémoire sur les élections au scrutin,” Histoire de l’Academie Royale des Sciences pour 1781 (Paris, 1784)
Morgan R, Gallagher M (2012) Length scale for characterising continuous optimization problems. In: International conference on parallel problem solving from nature. Springer, pp. 407–416
Opara K, Arabas J (2011) Benchmarking procedures for continuous optimization algorithms. J Telecommun Inf Technol pp. 73–80
Beiranvand V, Hare W, Lucet Y (2017) Best practices for comparing optimization algorithms. Optim Eng 18(4):815–848
Muñoz MA, Sun Y, Kirley M, Halgamuge SK (2015) Algorithm selection for black-box continuous optimization problems: a survey on methods and challenges. Inf Sci 317:224–245
Rice JR (1976) The algorithm selection problem. Adv Comput 15:65–118
Agrawal P, Ganesh T, Mohamed AW (2021) A novel binary gaining-sharing knowledge-based optimization algorithm for feature selection. Neural Comput Appl 33(11):5989–6008
Agrawal P, Ganesh T, Mohamed AW (2021) Chaotic gaining sharing knowledge-based optimization algorithm: an improved metaheuristic algorithm for feature selection. Soft Comput 25:9505–9528
Agrawal P, Ganesh T, Oliva D, Mohamed AW (2021) S-shaped and v-shaped gaining-sharing knowledge-based algorithm for feature selection. Appl Intell pp. 1–32
Tanabe R, Fukunaga A (2013) Success-history based parameter adaptation for differential evolution. In: IEEE congress on evolutionary computation. IEEE, pp. 71–78
Montgomery J, Chen S (2010) An analysis of the operation of differential evolution at high and low crossover rates. In: IEEE congress on evolutionary computation. IEEE, pp. 1–8
Mohamed AW, Hadi AA, Fattouh AM, Jambi KM (2017) LSHADE with semi-parameter adaptation hybrid with CMA-ES for solving CEC 2017 benchmark problems. In: 2017 IEEE congress on evolutionary computation (CEC). IEEE, pp. 145–152
Mohamed AW, Abutarboush HF, Hadi AA, Mohamed AK (2021) Gaining-sharing knowledge based algorithm with adaptive parameters for engineering optimization. IEEE Access, 9: 934–946
Kadavy T, Pluhacek M, Viktorin A, Senkerik R (2020) Self-organizing migrating algorithm with clustering-aided migration. In: Proceedings of the 2020 genetic and evolutionary computation conference companion, pp. 1441–1447
Mousavirad SJ, Moghadam MH, Saadatmand M, Chakrabortty RK (2021) An ordered and roulette-wheel-based mutation incorporated l-shade algorithm for solving cec2021 single objective numerical optimisation problems. In: Proceedings of the genetic and evolutionary computation conference companion, pp. 1–2
Awad NH, Ali MZ, Suganthan PN, Reynolds RG (2016) An ensemble sinusoidal parameter adaptation incorporated with L-SHADE for solving CEC 2014 benchmark problems. In: 2016 IEEE congress on evolutionary computation (CEC). IEEE, pp. 2958–2965
Brest J, Maučec MS, Bošković B (2019) The 100-digit challenge: algorithm jDE100. In: 2019 IEEE congress on evolutionary computation (CEC). IEEE, pp. 19–26
Brest J, Maučec MS, Bošković B (2020) Differential evolution algorithm for single objective bound-constrained optimization: algorithm j2020. In: 2020 IEEE congress on evolutionary computation (CEC). IEEE, pp. 1–8
Bujok P (2016) Improving the convergence of differential evolution. In: International conference on numerical analysis and its applications. Springer, pp. 252–260
Awad N, Ali M, Liang JJ, Qu B, Suganthan P (2016) Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. Technical report
Liang JJ, Baskar S, Suganthan PN, Qin AK (2006) Performance evaluation of multiagent genetic algorithm. Nat Comput 5(1):83–96
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Mohamed, A.W., Sallam, K.M., Agrawal, P. et al. Evaluating the performance of meta-heuristic algorithms on CEC 2021 benchmark problems. Neural Comput & Applic 35, 1493–1517 (2023). https://doi.org/10.1007/s00521-022-07788-z