Evaluating the performance of meta-heuristic algorithms on CEC 2021 benchmark problems

Developing new meta-heuristic algorithms and evaluating them on benchmark functions is a challenging task. In this paper, the performance of various meta-heuristic algorithms is evaluated on the recently developed CEC 2021 benchmark functions. The objective functions are parameterized by the inclusion of operators such as bias, shift and rotation. Different combinations of these binary operators are applied to the objective functions, which yields the CEC 2021 benchmark functions. Different meta-heuristic algorithms are then used to solve the benchmark functions at different dimensions. The performance of basic and advanced meta-heuristic algorithms, as well as the algorithms that participated in the CEC2021 competition, has been experimentally investigated, and several observations, recommendations and conclusions are drawn. The experimental results show how meta-heuristic algorithms perform across the different combinations of the parameterized binary operators.


Introduction
Meta-heuristic algorithms represent a class of derivative-free, nature-inspired algorithms and provide robust optimization tools. Analytical approaches often fail to solve complex, real-world optimization problems; in such cases, meta-heuristic algorithms prove very efficient and effective. Applications of meta-heuristic algorithms are found in numerous fields of science, machine learning, engineering and operations research [1][2][3][4][5][6][7].
The performance evaluation and comparison of algorithms depend heavily on benchmarking. Benchmarking experiments are designed to predict/select the best algorithm for solving real-world problems [8]. Over the past two decades, proposing new benchmark real-parameter single-objective optimization problems with novel characteristics to evaluate and analyze the performance of meta-heuristic algorithms has become a cornerstone of research in the optimization field. It has attracted computer science and operations research practitioners and specialists as well as mathematicians and engineers. There are several important reasons for developing benchmark problems: 1. They form the basis for developing more complex optimization problems, such as single-objective computationally expensive numerical optimization problems, single-objective multi-niche optimization problems, constrained real-parameter single-objective optimization problems, and constrained/bound-constrained multi/many-objective optimization problems. 2. They must simulate the degree of difficulty of real-world optimization problems. 3. They must be able to detect the weaknesses and strengths of novel optimization algorithms, which have improved significantly during the past few years.
Two benchmark series are common in the evaluation of real-parameter evolutionary algorithms, namely the IEEE Congress on Evolutionary Computation (CEC) competitions and the Comparing Continuous Optimizers (COCO) platform. The COCO benchmark suite provides a platform to compare the performance of a large number of algorithms on unconstrained continuous optimization problems; its suites cover single-objective noiseless and noisy problems as well as bi-objective noiseless problems [9][10][11]. The CEC benchmarks, on the other hand, represent the most elaborate platform for the comparison of stochastic search algorithms. The CEC benchmark suites include single- and multi-objective, noiseless, noisy, large-scale, real-world and constrained optimization problems. Moreover, they also provide performance metrics and a test environment for assessment and comparison. CEC competition functions are among the most frequently used benchmarks for evaluating state-of-the-art algorithms. Since 2005, a new generation of benchmark problems has been developed alongside the new era of nature-inspired algorithms, or meta-heuristics. The 2005 IEEE Congress on Evolutionary Computation marked the inception of the first benchmark set designed to overcome the aforementioned shortcomings, named CEC'05 [12]. The CEC'05 report included 25 benchmark functions with different properties: separable and non-separable, rotated and unrotated, unimodal and multimodal functions with shifted dimensionality, multiplicative noise in fitness, composition functions, continuous and non-continuous functions, global optima on or off the bounds, functions with no clear structure in the fitness landscape, narrow global basins of attraction, and so on.
Eight years later, the CEC'13 test suite, which includes 28 benchmark functions, was proposed [13]. In the CEC'13 test suite, the previously proposed composition functions [14] were improved and additional test functions were included. In the same vein, the CEC'14 test suite, which includes 30 benchmark functions, was proposed [15]. CEC'14 introduced benchmark problems with several novel features, such as novel basic problems, test problems composed by extracting features dimension-wise from several problems, graded levels of linkages, rotated trap problems, and so on. Three years later, the CEC'17 test suite, which includes 30 benchmark functions, was proposed [16]. In the CEC'17 test suite, similar to CEC'14, new basic functions with different features were added. These benchmark functions are discussed in detail in the next section. The same benchmark suite as CEC'17 was used in CEC'18, CEC'19 and CEC'20 [17]. As algorithms improve, ever more challenging functions are developed; this interplay between methods and problems drives progress, and the CEC'20 [17] and CEC'21 [18] Special Sessions on Real-Parameter Optimization were developed to promote this symbiosis. In the CEC'20 competition, the objective function is transformed into another function through the inclusion of a rotation matrix. Moreover, new benchmark objective functions were defined in the CEC'21 competition by including different combinations of the bias, shift and rotation operators. These benchmark functions pose a new challenge for researchers to develop meta-heuristic algorithms that can handle all the complexity of the functions.
Additionally, in order to compare and analyse the solution quality of different algorithms statistically and to verify the behaviour of stochastic algorithms [19], the results are compared using two non-parametric statistical hypothesis tests: (i) the Friedman test (to determine the final rankings of the different algorithms over all functions) and (ii) the multi-problem Wilcoxon signed-rank test (to check the differences between all algorithms over all functions). Besides, algorithm complexity is taken into consideration by evaluating the computation time of a specific function for a predefined number of evaluations at a certain dimension. In addition, CEC'17 proposed a new performance measure called the score metric: each algorithm is evaluated with a score out of 100 based on two criteria, with higher weights given to higher dimensions. This calculated score is used instead of a statistical test [16].
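As an illustration, the two non-parametric tests can be run with SciPy. The error values below are purely hypothetical, not results from the paper:

```python
import numpy as np
from scipy import stats

# Hypothetical best-error values: rows = benchmark functions, columns = algorithms.
# These numbers are illustrative only, not results from the paper.
errors = np.array([
    [1.2e-3, 4.5e-2, 3.1e-1],
    [2.0e-4, 9.9e-3, 5.5e-2],
    [7.7e-5, 1.3e-2, 8.8e-2],
    [3.3e-3, 2.1e-2, 6.0e-1],
    [9.1e-4, 5.6e-3, 2.2e-1],
    [4.4e-4, 8.0e-3, 1.1e-1],
])

# (i) Friedman test: ranks the algorithms per function and tests whether
# their mean ranks differ significantly.
chi2, p_friedman = stats.friedmanchisquare(*errors.T)

# (ii) Multi-problem Wilcoxon signed-rank test between two algorithms,
# paired over the benchmark functions.
w_stat, p_wilcoxon = stats.wilcoxon(errors[:, 0], errors[:, 1])

# Average ranks (lower error = better rank) give the final ordering.
avg_ranks = stats.rankdata(errors, axis=1).mean(axis=0)
print(avg_ranks)  # algorithm 0 dominates every function here: [1. 2. 3.]
```

The Friedman test answers "do the algorithms differ at all?", while the pairwise Wilcoxon test localizes the difference between two specific algorithms.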
In this paper, the CEC'21 benchmark functions are considered to evaluate the performance of meta-heuristic algorithms. We divide the algorithms into three categories: basic algorithms, advanced algorithms and CEC2021's algorithms. The basic algorithms comprise the basic or standard versions of older and recent meta-heuristic algorithms: differential evolution (DE) [20], the gaining-sharing knowledge-based algorithm (GSK) [21], the grey wolf optimizer (GWO) [22], particle swarm optimization (PSO) [23], and the teaching-learning-based optimization algorithm (TLBO) [24]. The advanced algorithms are adaptive and self-adaptive versions of algorithms that have won CEC competitions. These include the following: LSHADE, the winner of the CEC 2014 real-parameter single-objective optimization competition [25]; AGSK, ranked second in the CEC2020 real-parameter single-objective optimization competition [26]; EBOwithCMAR, the winner of the CEC2017 single-objective bound-constrained competition [27]; IMODE, the winner of the CEC2020 single-objective bound-constrained competition [28]; and the ELSHADE_SPACMA algorithm, ranked third in the CEC 2018 real-parameter single-objective optimization competition and an enhanced version of the LSHADE-SPACMA algorithm [29]. The CEC2021's algorithms are the set of algorithms that participated in the CEC2021 competition.
These algorithms include: the Self-organizing Migrating Algorithm with CLustering-aided migration and adaptive Perturbation vector control (SOMA-CLP) [30]; a Multi-start Local Search Algorithm with L-SHADE (MLS-LSHADE); L-SHADE based on ordered and roulette-wheel-based mutation (L-SHADE-OrdRw); the L-SHADE algorithm with Adaptive Archive and Selective Pressure (NL-SHADE-RSP) [31]; the Self-adaptive Differential Evolution Algorithm with Population Size Reduction for Single Objective Bound-Constrained Optimization (j21) [32]; a New Step-Size Adaptation Rule for CMA-ES Based on the Population Midpoint Fitness (RB-IPOP-CMAES) [33]; Improved DE through Bayesian Hyperparameter Optimization (MadDE) [34]; the Gaining-Sharing Knowledge Algorithm with Adaptive Parameters Hybridized with an Improved Multi-operator DE algorithm (APGSK-IMODE) [35]; and DE with Distance-based Mutation-selection (DEDMNA) [36].
The objective functions of the benchmarks are parameterized using operators such as bias, shift (translation) and rotation. The primary goal of this parameterization is to evaluate the influence of all operator combinations on all benchmark functions. Parametric benchmarking is a first step towards gaining a comprehensive understanding of algorithmic performance and optimization concerns [37]. To this end, ten scalable benchmark problems utilising these binary operators are proposed.

Related work
Liang et al. [13] stated that many of the classical, popular benchmark functions have features that some algorithms have exploited to achieve excellent results. According to their experience, some of the shortcomings associated with the existing benchmark problems are:
• A global optimum having the same parameter values for different dimensions/variables. In this case, the global optimum may be reached quickly if operators exist that copy the value of one dimension to the other dimensions.
• A global optimum at the origin. As a result, numerous mutation operators can be easily built to exploit this common trait of a large number of benchmark functions.
• A global optimum located at the centre of the search range. When a uniform initial population is generated randomly, mean-centric techniques tend to lead the population towards the centre of the search range.
• A global optimum located on the bounds of the search space, which is easily discovered by the majority of methods.
• Local optima located along the coordinate axes, or a lack of linkage between the variables/dimensions. In this situation, information from the local optima might be used to determine the global optimum.
By analyzing these problems, they proposed a general framework to construct novel and challenging composition test functions possessing many desirable properties.
During the past few years, there have been a few attempts to investigate the relationship between benchmark sets and the performance of optimization algorithms. Mersmann et al. [38] argued that the main objective of benchmarking optimization algorithms is to answer two questions. First: which algorithm is the best? Second: which algorithm can be used to solve a specific real-world problem? They suggested that the first question can be answered using consensus ranking procedures. Regarding the second question, they proposed a new term, Exploratory Landscape Analysis (ELA), based on developing ways to cheaply and automatically extract problem properties from a concrete problem instance. However, this procedure was not applied in their study. They also suggested many problem properties, such as multi-modality, global structure, separability, variable scaling, search space homogeneity, basin size homogeneity, global-to-local optima contrast, and plateaus. Then, the real-parameter black-box optimization benchmarking 2009 suite, named BBOB'09 [39], was used to analyze the performance of 30 optimization algorithms. They applied consensus ranking instead of individual rankings. Thus, to gain insights into these questions, many classical statistical exploratory data analyses were used.
Firstly, the expected running time of each algorithm was calculated for each test function and dimension, and consensus ranking was applied. Then, a distance measure was used to calculate the distance between all rankings. Besides, to retrieve groups or clusters from the distance matrix, multidimensional scaling (MDS) [40] was used to visualize the relationships between observations, and the clustering algorithm PAM [41] was used to find clusters or groups in the data. Finally, to describe these groups, decision trees modelling the unknown cluster boundaries were applied. In the same context, to answer the second question, Mersmann et al. [42] extended their work by applying Exploratory Landscape Analysis (ELA). They suggested several low-level features of functions: convexity, y-distribution, level set, meta-model, local search, and curvature. They then tried to relate these features to the high-level features of Mersmann et al. [38]. The relationship between problem properties and algorithm performance can be estimated using a small sample of function values combined with statistical and machine learning techniques. To simultaneously optimize feature sets according to quality and cost, the multi-objective evolutionary algorithm SMS-EMOA [43] was employed. Later, in Mersmann et al. [8], the 2009 and 2010 BBOB benchmark results [9,44] were used to analyze more than 50 optimization algorithms. The compared algorithms were divided into 11 groups such that all optimization algorithms based on the same base mechanism were put into the same group. Then, to choose the best optimizer from each group as its representative, the Borda [45] consensus over all algorithms in each group at the accuracy levels 10^-3 and 10^-4 was used. Finally, they applied the same approach used in Mersmann et al. [38]. Altogether, no consistent results were reached.
Thus, they concluded that the proposed features of the benchmark problems are not sufficient to describe the groups of different algorithms.
Morgan and Gallagher [46] proposed a novel framework based on length scale, i.e., the change in objective function value with respect to a change in the points of the search space. The main objective was to study the structural features of problem landscapes independently of any particular algorithm. They discussed some important properties of the length scale concept and its distribution, and then applied the proposed framework to the 2010 BBOB benchmark (BBOB'10) [44]. Experimental analysis and results showed that the proposed framework can easily differentiate between unimodal and multimodal functions. On the other hand, many researchers have focused on the benchmarking process that must be followed to perform a fair comparison of optimization algorithms. Opara and Arabas [47] provided an overview of benchmarking procedures. Based on some functions from the CEC and BBOB benchmarks, they discussed the main points in this field, such as theoretical aspects of algorithm comparison, available benchmarks, evaluation criteria, standard methods of presenting and interpreting results, and the related statistical procedures. Finally, they proposed a novel concept of parallel implementation of test problems, which is considered a qualitative improvement to benchmarking procedures. Beiranvand et al. [48] presented complete and detailed standards and guidelines on how to benchmark optimization algorithms. They reviewed benchmark test suites for all types of optimization problems, discussed the main performance measures for evaluating the efficiency of optimization algorithms, and highlighted the pitfalls to avoid and the main issues to take into account when conducting a fair and systematic comparison. Muñoz et al. [49] introduced a survey of selected methods for algorithm selection in black-box continuous optimization problems.
Firstly, the algorithm selection framework by Rice [50] is described, including its four component spaces: problem, algorithm, performance, and characteristic. The fitness landscape concept is then discussed, followed by the different classes of search algorithms. Besides, the methods used to measure algorithm performance are introduced, and a classification and summary of well-known exploratory landscape analysis methods are presented. Finally, they presented implementations of the algorithm selection framework for solving continuous optimization problems. Opara et al. [37] extended the CEC'17 benchmark problems by applying different parameterizations to the problems. Each function is parameterized by interpretable, high-level characteristics (rotation vs. non-rotation, noisy vs. noiseless, etc.), which are used in a multiple regression model to explain algorithmic performance.

Description of used algorithms
In this section, all the algorithms that are used in the paper to solve the parameterized CEC2021 test functions are briefly discussed and explained. Comparisons between these algorithms are conducted at the end of the paper. These algorithms are classified into basic, advanced and recently developed ones.

Basic algorithms

• Differential evolution (DE)
It is a type of evolutionary computing method that belongs to the broader family of evolutionary algorithms. The DE algorithm, developed by Storn and Price in 1997 [20], is a popular direct search technique, like genetic algorithms and evolution strategies, which starts with a population of initial solutions. These initial solutions are then iteratively improved by introducing mutations into the population. It is one of the most popular evolutionary algorithms and has been applied to various nonlinear, high-dimensional and complex optimization problems to obtain optimal solutions. Moreover, different variants of the DE algorithm, such as self-adaptive, binary and multi-objective versions, have been introduced.
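A minimal sketch of the classic DE/rand/1/bin scheme described above. The sphere objective and all parameter values are illustrative choices, not part of any CEC suite:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_generation(pop, fitness, f, F=0.5, CR=0.9):
    """One generation of classic DE/rand/1/bin: mutation, binomial
    crossover, and greedy one-to-one selection."""
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover; one forced index guarantees at least one mutant gene
        mask = rng.random(D) < CR
        mask[rng.integers(D)] = True
        u = np.where(mask, v, pop[i])
        # Greedy selection: keep the trial vector only if it is no worse
        fu = f(u)
        if fu <= fitness[i]:
            new_pop[i], new_fit[i] = u, fu
    return new_pop, new_fit

# Illustrative run on the sphere function (not a CEC benchmark):
sphere = lambda x: float(np.sum(x ** 2))
pop = rng.uniform(-5.0, 5.0, size=(20, 10))
fit = np.array([sphere(x) for x in pop])
init_best = fit.min()
for _ in range(200):
    pop, fit = de_generation(pop, fit, sphere)
print(fit.min() < init_best)  # greedy selection never worsens the best solution
```

The greedy selection step is what distinguishes DE from many other evolutionary algorithms: a trial vector replaces its parent only if it is at least as good.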
• Gaining sharing knowledge-based algorithm (GSK)
The GSK algorithm [21] is based on the human behavior of gaining and sharing knowledge and consists of two phases: the junior and the senior gaining-and-sharing phases. In the junior phase, initial solutions are produced; later, they are sent to the senior phase to interact with others. This is the key idea behind the GSK algorithm. Many different variants of the GSK algorithm have been developed to show its capability in solving real-world optimization problems [51][52][53].

• Grey Wolf optimizer (GWO)
The GWO algorithm, proposed by Mirjalili et al. [22], is modelled after the natural leadership structure and hunting mechanism of grey wolves. To model the leadership hierarchy, four sorts of grey wolves are used: alpha, beta, delta, and omega. Furthermore, the three primary phases of hunting are implemented: searching for prey, encircling prey, and attacking prey. Grey wolves have the capacity to locate and surround prey. The alpha is generally in charge of the hunt, while the beta and delta may occasionally participate. The wolves separate to search for prey and then converge to attack it. The proposed technique was evaluated on 29 benchmark test functions and outperformed its competitors. Researchers have developed variants of GWO to solve different problems.

• Particle swarm optimization (PSO)
It is one of the nature-inspired, stochastic optimization techniques, developed by James Kennedy and Russ Eberhart in 1995 [23] to solve computationally difficult optimization problems. Since then, it has been applied to a wide variety of search and optimization problems. It abstracts the collective behaviour of swarms, such as flocks of birds or schools of fish. It is a population-based algorithm in which each member of the swarm (a particle) represents a candidate solution. Each particle updates its current position via its velocity vector and attempts to find the optimal solution.
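The canonical global-best PSO velocity and position updates can be sketched as follows. The inertia and acceleration coefficients are common textbook choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(f, D=10, NP=30, iters=300, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best PSO: each particle is pulled towards its own
    best position (pbest) and the swarm's best position (gbest)."""
    x = rng.uniform(lo, hi, (NP, D))          # positions
    v = np.zeros((NP, D))                     # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((NP, D)), rng.random((NP, D))
        # Velocity update: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)            # position update, kept in bounds
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Illustrative run on the sphere function (not a CEC benchmark):
best, best_f = pso(lambda p: float(np.sum(p ** 2)))
```

The inertia weight w balances exploration (large w, velocities persist) against exploitation (small w, particles settle near the best positions).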
• Teaching learning-based optimization algorithm (TLBO)
This technique models the impact of a teacher's influence on students. TLBO, like other nature-inspired algorithms, is a population-based approach that progresses towards the global solution through a population of solutions [24]. The population is defined as a group or class of students. The TLBO procedure is divided into two parts, the 'Teacher Phase' and the 'Learner Phase', which correspond to learning from a teacher and learning through interaction between learners, respectively. It has been successfully applied to several numerical optimization problems and has proved its superiority in solving them.

Advanced algorithms
• LSHADE [25] It is an extended version of the previously developed success-history-based adaptive DE (SHADE) algorithm [54]. The DE algorithm has three main control parameters: the population size, the scaling factor F, and the crossover rate CR. The SHADE algorithm employs historical memories M_CR and M_F that save sets of CR and F values which have worked successfully in the past. Building on SHADE, Tanabe and Fukunaga [25] developed the LSHADE algorithm, which additionally adapts the population size parameter of DE: a linear population size reduction formula gradually decreases the population size during a DE run. LSHADE was applied to the real-parameter single-objective optimization benchmark suite of the special session, outperformed the other algorithms, and was ranked the winner of the CEC2014 competition.
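The linear population size reduction used by L-SHADE can be sketched as follows. The evaluation budget below is illustrative; the minimum size of 4 matches the smallest population DE/current-to-pbest mutation still supports:

```python
def lpsr(nfes, max_nfes, np_init=100, np_min=4):
    """Linear population size reduction (LPSR): the target population size
    shrinks linearly from np_init down to np_min as the number of consumed
    function evaluations nfes approaches the budget max_nfes."""
    return round(np_init + (np_min - np_init) * nfes / max_nfes)

# Full population at the start, minimal population at the end of the budget.
print(lpsr(0, 10_000), lpsr(5_000, 10_000), lpsr(10_000, 10_000))  # 100 52 4
```

Whenever the target size drops below the current population size, the worst-ranked individuals are deleted, concentrating the remaining evaluations on the most promising region.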
• AGSK [26] It is the adaptive version of the gaining-sharing knowledge-based (GSK) optimization algorithm. The GSK algorithm has two main parameters, the knowledge factor k_f and the knowledge ratio k_r, which control the junior and senior phases during optimization. To accommodate this, an adaptation process for these parameter settings has been included. Moreover, a population size reduction scheme is employed to gradually decrease the population size. This leads to the adaptive gaining-sharing knowledge-based algorithm, whose performance was evaluated on the CEC2020 benchmark functions. It performed significantly better than other comparative algorithms due to its remarkable balance between exploration and exploitation. In CEC2020, the AGSK algorithm was ranked second among all competing algorithms.
• EBOwithCMAR [27] This algorithm is a combination of a global optimizer and a local optimizer. The global optimizer, the effective butterfly optimizer (EBO), is a self-adaptive version of the butterfly optimization algorithm which uses success-history-based adaptation and linear population size reduction to increase the diversity of the population. It also incorporates a covariance matrix adapted retreat (CMAR) phase to produce new solutions, which increases the local search capability of the EBO algorithm. This hybrid algorithm was tested on the single-objective CEC 2017 benchmark problems and was ranked the winner of CEC2017.
• IMODE [28] It introduces an improved multi-operator differential evolution algorithm (IMODE). It begins by segmenting the initial population into several sub-populations, each of which is evolved using a different DE variant. The size of each sub-population is continually adjusted depending on two indicators: solution quality and sub-population diversity. The proposed algorithm was tested on the 10 benchmark functions of the CEC2020 competition and was ranked the winner among all competing algorithms.
• ELSHADE_SPACMA [29] Many researchers and practitioners have introduced different variants of the DE algorithm. In DE, the crossover rate reflects the chance that the trial solution inherits a certain gene. Montgomery and Chen [55] claimed that CR is a very sensitive parameter when solving optimization problems. To tackle this problem, Mohamed et al. [56] proposed the LSHADE_SPACMA algorithm. Furthermore, Hadi et al. [29] enhanced the performance of LSHADE_SPACMA by developing the ELSHADE_SPACMA algorithm with two improvements. The first was a hybridization of LSHADE_SPACMA and adaptive guided differential evolution (AGDE), in which the whole population is assigned to LSHADE_SPACMA for one generation and then to the AGDE algorithm. The second improvement was made in the mutation parameter to balance exploration and exploitation. The proposed technique was applied to the CEC2017 benchmark problems and obtained third rank among all algorithms.

CEC2021's algorithms
In this subsection, we discuss and summarize the algorithms that participated in the CEC2021 competition.
• APGSK-IMODE [35] A hybrid algorithm based on the adaptive-parameters gaining-sharing knowledge algorithm (APGSK) [57] and the improved multi-operator differential evolution algorithm (IMODE) [28]. For a predetermined number of generations (a cycle), two sub-populations are evolved towards an optimal or near-optimal solution using APGSK and IMODE. Each algorithm then evolves its sub-population for the next cycle probabilistically, with the probability adjusted based on each algorithm's solution quality. It is worth mentioning that APGSK-IMODE was the winner of the CEC2021 Single Objective Bound Constrained Numerical Optimization competition for the non-shifted cases.
• SOMA-CLP [30] Kadavy et al. [30] introduced a new Self-organizing Migrating Algorithm (SOMA) variant with clustering-aided migration and adaptive perturbation vector control (SOMA-CLP). SOMA-CLP is a recent variation of SOMA that improves upon SOMA-CL [58]. SOMA-CLP promotes a global transition from exploration to exploitation by linearly adapting the prt control parameter. Its workflow is split into three phases: search space mapping, mapped space clustering, and further screening of the areas of interest revealed in the first phase.
• L-SHADE-OrdRw [59] L-SHADE-OrdRw, an improved L-SHADE algorithm based on ordered and roulette-wheel-based mutation, was proposed in [59]. To further improve its performance, an adaptive and non-adaptive ensemble sinusoidal method is used to automatically adjust the scaling factor values. Moreover, a selection pressure based on the roulette-wheel strategy, which puts more weight on the better solutions, is utilized to select random solutions in the mutation strategy. To further enhance its efficiency, the L-SHADE-OrdRw algorithm uses a local search based on Gaussian walks [60].
• NL-SHADE-RSP [31] Stanovov et al. [31] proposed an improved LSHADE algorithm, called NL-SHADE-RSP. NL-SHADE-RSP incorporates several novel parameter control methods, including non-linear population size reduction, rank-based selective pressure in the mutation strategy to select one of two mutation operators (with and without archive), adaptive archive usage, and rule-based adaptation of the crossover rate. It also utilizes a mechanism to adapt the p value in the current-to-pbest mutation operator.
• j21 [32] The j21 algorithm is built on the self-adaptive jDE100 [61] and j2020 [62] algorithms, which use similar mechanisms. It uses two populations, a restart mechanism in both populations, a crowding mechanism, and a scheme to choose vectors from both sub-populations for the mutation process. Unlike the prior two algorithms, j21 uses a mechanism to reduce the population size throughout the evolution. The self-adaptive control parameter CR also has a wider range of values.
• RB-IPOP-CMAES [33] RB-IPOP-CMAES [33] is an extension of IPOP-CMAES in which the previous population midpoint fitness (PPMF) is used as an adaptation of the 1/5th success rule for the CMA-ES algorithm. PPMF is utilized to adjust the step-size multiplier by comparing the current population's fitness values to the previous population's midpoint fitness. The step size is changed to ensure that the success probability fluctuates around a reference target value.
• MadDE [34] In MadDE [34], the control settings and search techniques are adapted simultaneously during the optimization process. It is based on self-adaptive DE algorithms such as JADE, SHADE [54] and LSHADE [25], whose core structure has been used to build modern DE algorithms [28]. Similar to IMODE [28], it uses several mutation and crossover processes to build trial vectors.
MadDE uses various search strategies because they are likely to deliver consistent results across a wide range of objective function landscapes. The MadDE algorithm has several characteristics. First, it mixes existing powerful mutation strategies and selects them probabilistically, where the likelihood of choosing a mutation strategy depends on its historical success rate. Second, it uses a probabilistic crossover to choose between binomial and q-best binomial crossover. Third, it adapts the DE control parameters NP, CR, and F using the LSHADE approach [25].
• DEDMNA [36] Bujok and Kolenovsky propose and test a new DE algorithm with distance-based mutation selection, population size reduction, and an archive for good old solutions [36], called DEDMNA. It is an improved variant of the DE algorithm with mutation strategy selection based on the mutant point distance (DEMD) [63]. DEDMNA uses a linear population size reduction approach and an archive to store old solutions.
In summary, this section has discussed the algorithms that are used to solve the CEC2021 benchmark functions. These algorithms are categorized as basic algorithms, advanced algorithms, and the recently developed algorithms that participated in the CEC2021 competition. In this paper, a comparison between these algorithms has been conducted to assess their performance on the parameterized CEC2021 problems; to the best of the authors' knowledge, no such comparison exists in the literature. The aim of this paper is to evaluate the performance of these algorithms on the recently proposed parameterized CEC2021 problems. As can be concluded from the results and analysis section, the recently developed algorithms perform better than the others.

Parametrized benchmark
Based on the aforementioned discussion, it can be seen that from CEC'05 to CEC'20, regardless of the type and features of the benchmark problems, the same experimental analysis approach has been used. All benchmark problems have been used with fixed parameter settings for all features, i.e., no different values for different features have been experimentally investigated. All benchmark problems have been treated as black boxes, with no possibility of changing them to evaluate the performance of the algorithms under different combinations of parameter values. Therefore, to the best of our knowledge, this is the first study that opens this black box for all benchmark problems and tests different meta-heuristic algorithms with different predetermined levels of the controlled features.
Benchmarks play a very important role in the improvement of global meta-heuristics. The two benchmark series CEC [64] and COCO [39] were proposed to check the performance of real-parameter meta-heuristic algorithms. In these competitions, the benchmark functions are transformed by using different operators, such as bias, rotation and shift, in the objective function [14,37,65]. Various combinations of these operators can be formed; in total, eight combinations of the three binary operators are possible. The main aim of benchmarking is to find the best transformation by checking the effect of all possible combinations of the operators. The resulting set is known as the parameterized benchmark.
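The three binary operators give 2^3 = 8 parameterized vectors per function. As a quick illustration (a minimal sketch of our own, not taken from the benchmark code), they can be enumerated as follows:

```python
from itertools import product

def parameter_vectors():
    """Return all 2**3 = 8 combinations of the bias/shift/rotation flags,
    each written as a 3-bit string in (bias, shift, rotation) order."""
    return [''.join(map(str, bits)) for bits in product((0, 1), repeat=3)]

vectors = parameter_vectors()
print(vectors)  # ['000', '001', '010', '011', '100', '101', '110', '111']
```

These eight strings are exactly the configuration labels, such as (011) and (111), used throughout the results tables below.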
In the CEC'20 [17] benchmark, a new objective function is defined by including a shift vector, a rotation matrix and a bias in the original objective function. The variable x is shifted by the vector s_i, multiplied by the rotation matrix R, and the bias F*_i is added to the original objective function. Therefore, the mathematical formulation of the new benchmark is given as

F_i(X) = f_i(R(X - s_i)) + F*_i,

where F_i(X) is known as the parameterized benchmark function on which the performance of the operators will be tested. There are some detailed variations for the hybrid and composition functions, which make the full pattern slightly more complicated. The decomposition allows defining binary parameters that indicate which transformation should be applied, and ensures that predictors are standardized to the same scale. The values taken by the parameters are presented in Table 1, with the reference equation number from which each value can be obtained. The bias, shift and rotation operators can be controlled, i.e. activated or deactivated, while some other parameters, such as problem type, separability, symmetry, and number of local optima, can only be observed.
The type of problem can be unimodal, simple multimodal, hybrid, or composition, and the optimization problem may be separable or non-separable. A few or a huge number of local optima may exist, and the shape of the problem may be symmetric or asymmetric. The values of these parameters are fixed; therefore, they can only be observed. Thus, in total 8 combinations are possible for each function. One example illustrates these binary parameters: if only the shift operator exists and the rotation and bias operators do not, then R = I (the identity matrix) and F*_i = 0 must be set. The detailed description of each binary operator applied to function F_i is shown in Table 2. To show the effect of these configurations on the benchmark set, F_9 has been selected as an example. The 3-D maps of the 2-dimensional F_9 in all 8 configurations are shown in Figs. 1 and 2. Interested readers can find full details in the CEC'21 technical report [18].
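The toggling described above can be sketched in a few lines of Python (a hypothetical illustration: `sphere`, `matvec` and `parameterized` are our own names, not part of the CEC2021 suite). Deactivating an operator falls back to s = 0, R = I, or F*_i = 0:

```python
import math

def sphere(z):
    """A simple base objective f(z) = sum of squares, for illustration only."""
    return sum(v * v for v in z)

def matvec(R, v):
    """Plain matrix-vector product R @ v."""
    return [sum(r_ij * v_j for r_ij, v_j in zip(row, v)) for row in R]

def parameterized(f, x, s, R, F_star, use_bias, use_shift, use_rotation):
    """Evaluate F(x) = f(R(x - s)) + F*, with each operator switched on or off.
    An inactive operator is equivalent to s = 0, R = I, or F* = 0."""
    z = [xi - si for xi, si in zip(x, s)] if use_shift else list(x)
    if use_rotation:
        z = matvec(R, z)
    return f(z) + (F_star if use_bias else 0.0)

# A 2-D rotation matrix by 45 degrees, used when the rotation flag is active.
theta = math.pi / 4
R = [[math.cos(theta), -math.sin(theta)], [math.sin(theta), math.cos(theta)]]

# Shift-only configuration, i.e. the vector (010) in bias-shift-rotation order:
# R and F* are ignored, and the optimum moves to x = s.
val = parameterized(sphere, [1.0, 2.0], [1.0, 2.0], R, 300.0,
                    use_bias=False, use_shift=True, use_rotation=False)
print(val)  # 0.0 at the shifted optimum
```

Note that for a rotation-invariant base function like the sphere, activating rotation alone does not change the function values; the operator only matters for non-separable or asymmetric landscapes, which is why the benchmark pairs it with functions such as F_9.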

Numerical experiments
This section presents numerical experiments on the performance of meta-heuristic algorithms on the CEC 2021 benchmark functions. The experiments are conducted on two types of meta-heuristic algorithms, i.e. basic algorithms and advanced algorithms. The basic algorithms include the standard versions of differential evolution (DE) [20], the gaining-sharing knowledge-based algorithm (GSK) [21], the grey wolf optimizer (GWO) [22], particle swarm optimization (PSO) [23], and the teaching learning-based optimization algorithm (TLBO) [24]. The advanced algorithms include adaptive and self-adaptive versions of those algorithms which have been winners of previously held CEC competitions:
• LSHADE [25] was ranked as the winner in the real-parameter single objective optimization competition, CEC 2014.
• AGSK [26] was ranked second in the real-parameter single objective optimization competition, CEC 2020.
• EBOwithCMAR [27] is the winner of the CEC 2017 single objective with bound constraints competition.
• IMODE [28] is the winner of the CEC 2020 single objective with bound constraints competition.
• ELSHADE_SPACMA [29] was ranked third in the real-parameter single objective optimization competition CEC 2018, and is an enhanced version of the LSHADE-SPACMA algorithm [56], which was ranked fourth in the real-parameter single objective optimization competition CEC 2017.
Besides, IMODE, LSHADE and ELSHADE_SPACMA are DE-based algorithms.
Moreover, the performance of the state-of-the-art algorithms is assessed using the evaluation criteria of [18]. The parameter values used for the basic algorithms are taken from the GSK algorithm paper [21], and the parameter values for the advanced algorithms are taken from their original papers. The performance of the algorithms for 10D and 20D is compared with two criteria: (i) among the parameterized vectors for each algorithm, and (ii) among the obtained results of all algorithms for each parameterized vector.

Evaluation criteria
Algorithms are evaluated with a score that is composed of two parts, SE and SR, both of which assign equal weights to the 10- and 20-dimensional results. SE is based on sums of normalized error values, while SR is based on sums of ranks. Each part contributes 50% to the total Score, which has a maximum value of 100. In particular, SE begins from an average of the two per-dimension sums of normalized functional error values:

SNE = 0.5 × Σ_{D=10} ne + 0.5 × Σ_{D=20} ne,

where ne is an algorithm's normalized error value for a given function, configuration and dimension, and each sum runs over all functions and configurations for that dimension. For this competition, ne is defined as:

ne = (f(x_best) - f(x*)) / (f(x_best)_max - f(x*)),

where f(x_best) is the algorithm's best result out of 30 trials, f(x*) is the function's known optimal value, and f(x_best)_max is the largest f(x_best) among all algorithms for the given function/dimension combination. Once SNE has been determined for all algorithms, SE is computed as:

SE = 50 × (SNE_min / SNE),

where SNE_min is the minimal sum of normalized errors among all algorithms. Analogously, SR1 begins as an average of the two per-dimension sums of ranks:

SR1 = 0.5 × Σ_{D=10} rank + 0.5 × Σ_{D=20} rank,

where rank is the algorithm's rank among all algorithms for a given function, configuration and dimension, based on its mean error value (not normalized). Once SR1 has been determined for all algorithms, SR is computed as:

SR = 50 × (SR1_min / SR1),

where SR1_min is the minimal sum of ranks among all algorithms. The final Score is the sum of the two parts:

Score = SE + SR.

The entries are ranked based on this Score. The results in this paper are presented as the best, median, worst, mean and standard deviation values for each problem over the same number of runs.
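Assuming SNE and SR1 have already been aggregated per algorithm as described above, the final Score can be sketched as follows (a minimal illustration; the function and variable names are ours, not from the competition code):

```python
def score(sne_by_alg, sr1_by_alg):
    """Compute the total Score per algorithm.
    sne_by_alg: dict mapping algorithm name -> aggregated SNE.
    sr1_by_alg: dict mapping algorithm name -> aggregated SR1.
    The best algorithm on both parts reaches the maximum Score of 100."""
    sne_min = min(sne_by_alg.values())
    sr1_min = min(sr1_by_alg.values())
    total = {}
    for alg in sne_by_alg:
        se = 50.0 * sne_min / sne_by_alg[alg]   # SE = 50 * SNE_min / SNE
        sr = 50.0 * sr1_min / sr1_by_alg[alg]   # SR = 50 * SR1_min / SR1
        total[alg] = se + sr
    return total

# Toy example with two algorithms: A has the smallest SNE and SR1,
# so it scores the full 100; B is twice as large on both parts.
scores = score({'A': 10.0, 'B': 20.0}, {'A': 30.0, 'B': 60.0})
print(scores)  # {'A': 100.0, 'B': 50.0}
```

Because both parts are ratios against the best algorithm, the Score is scale-free: multiplying all SNE values by a constant leaves the ranking unchanged.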

Results of basic algorithms
This section describes the performance of all basic algorithms on the CEC2021 benchmark functions. The statistical results of all algorithms are given in Tables S1-S5 in the supplementary file. Tables 3, 4, 5 and 6 show the results for the above-mentioned criteria. Table 3 presents the obtained rank, evaluation scores (SE and SR) and the total score of each algorithm for each parameterized vector in the case of 10 dimensions; some algorithms achieve the same rank in both parameterized vectors. In the case of 20 dimensions, however, Table 4 shows that the PSO and TLBO algorithms get the first rank when the bias operator is applied. The PSO algorithm is significantly affected by the shift and rotation operators applied simultaneously, whereas the TLBO algorithm is affected similarly by the shift and rotation operators (011) and by the bias, shift and rotation operators (111): it performs similarly in both cases. The DE and GWO algorithms perform similarly in the case of the basic functions and the bias-only operator. The GSK algorithm gets the first rank for the basic functions.
According to criterion (ii), the comparison is made among the algorithms for each parameterized vector. In Table 5, the comparative results are shown for dimension 10. It can be observed that the GWO algorithm outperforms the others in the case of the basic functions and the bias-only operator. The TLBO algorithm takes the first position for the rotation-only operator (001) and the bias and rotation operators (101). The DE algorithm obtains the first rank in 4 out of 8 cases, and its total score is almost equal to 100. Moreover, the DE algorithm is in second place in 2 cases ((001) and (101)). In the case of 20 dimensions, however, the TLBO algorithm obtains the first rank in 4 cases ((000), (011), (100), and (111)) and is in second place in the other cases. The algorithm obtains a total score above 90 in 6 out of 8 cases. This implies that in the higher dimensions the TLBO algorithm performs better than the other algorithms. When the three operators (bias, shift and rotation) are applied to the basic functions, the TLBO algorithm outperforms the others, and the worst performance is that of the GWO algorithm, whose total score of 41.30 is very low.
The overall comparison is made among the basic algorithms and among the advanced algorithms. Table 7 shows the overall comparative results obtained by all algorithms. Among the basic algorithms, TLBO performs better than the others for each parameterized vector. It gets the first rank, and DE, GSK, PSO and GWO take the next places; DE, which ranked second, scores only 81.01 out of 100. Table 8 presents the results obtained by the basic algorithms DE, GSK, GWO, PSO and TLBO based on the Friedman ranking test. For the 10D test problems, the DE algorithm obtained the best rank for the problems with the shift, shift and rotation, bias and shift, and bias, shift and rotation operators, while the TLBO algorithm achieved the best rank for the problems with the other operators.
For the 20D test problems, DE obtained the best rank for the problems with the shift operator and the ones with the bias and shift operators. GWO achieved the best rank for the basic and bias-only test problems, while GSK obtained the best rank for the remaining categories.

Results of Advanced Algorithms
This section presents the performance of the advanced algorithms on the CEC2021 benchmark functions. The statistical results of all algorithms are given in Tables S6-S10 in the supplementary file. Tables 9, 10, 11 and 12 show the results obtained by the advanced algorithms. Table 9 shows the rank obtained by the algorithms for each parameterized vector for dimension 10. The LSHADE algorithm performs better when the bias operator is applied and obtains the first rank in this case. There is a significant effect of the shift and rotation operators on the LSHADE algorithm; therefore, it does not perform well in the (011) case. AGSK and EBOwithCMAR get the first rank only on the basic functions, i.e. when no operator has been applied, whereas the rotation-only operator has a significant effect on the AGSK algorithm. The combination of the shift and rotation operators affects the EBOwithCMAR algorithm. The IMODE algorithm performs similarly on the basic functions and with the bias operator; therefore, it obtains the same rank in both cases. The ELSHADE_SPACMA algorithm gets the first rank on the basic functions only and does not perform well when all three operators are applied. Table 10 presents the results of the advanced algorithms for dimension 20. When the dimension increases, the ELSHADE_SPACMA and IMODE algorithms obtain the first rank in the (100) case only. The AGSK and EBOwithCMAR algorithms perform best on the basic functions only, and the LSHADE algorithm gets the first rank only when the bias operator is included. According to Table 10, the LSHADE and ELSHADE_SPACMA algorithms are significantly affected by all three operators applied simultaneously. Next, the comparison is made among the advanced algorithms for each operator. Table 11 presents the obtained rank and score for dimension 10. It shows that the IMODE algorithm obtains the first rank in 6 out of 8 cases and the EBOwithCMAR algorithm gets the first rank in the remaining two cases.
When the shift and rotation operators (011) and all three operators (111) are applied, IMODE does not perform best. In the case of 20 dimensions, however, the IMODE algorithm outperforms the other advanced algorithms in 5 cases. The results are depicted in Table 12. The AGSK algorithm obtains the first rank in only 1 case, namely (010), and the EBOwithCMAR algorithm gets the first rank in the (011) and (111) cases.
For the overall comparison, the results are shown in Table 13. The IMODE algorithm outperforms the others in the advanced-algorithm category and obtains first place; ELSHADE_SPACMA, LSHADE, EBOwithCMAR and AGSK take the next places. It can be said that the TLBO and IMODE algorithms are the most efficient and capable among the mentioned algorithms for solving the CEC2021 benchmark problems. Table 14 presents the results obtained by the advanced algorithms LSHADE, AGSK, EBOwithCMAR, IMODE and ELSHADE_SPACMA based on the Friedman ranking test. For the 10D test problems, the EBOwithCMAR algorithm obtained the best rank for the problems with the (011) and (111) operators, while the IMODE algorithm achieved the best rank for the remaining problems. For the 20D test problems, AGSK obtained the best rank for the (010) and (110) problems, and EBOwithCMAR achieved the best rank based on the Friedman test for the (011) and (111) test problems, while IMODE attained the best rank for the remaining test functions.

Results and discussion of CEC2021 algorithms
In this section, the performance of the algorithms that participated in the CEC2021 competition is presented and analyzed. These algorithms include APGSK_IMODE, MadDE, DEDMNA [36], NL-SHADE-RSP with an adaptive archive and selective pressure [31], and j21, a self-adaptive differential evolution algorithm with population size reduction for single objective bound-constrained optimization [32]. The statistical results of all algorithms are given in Tables S11-S19. Tables 15, 16, 17 and 18 present the results produced by the CEC2021 algorithms. Table 15 presents the rank obtained by the competing algorithms for every parameterized vector on the 10D problems. The APGSK_IMODE and MadDE algorithms obtain the first rank for the basic and bias cases. They also give better results for rotation and when the combination of bias and rotation is applied; their performance is affected when the shift operator is applied. A comparison is conducted among the CEC2021 algorithms for every parameterized operator. Table 17 shows the obtained rank and score for problems with 10 variables. The results show that APGSK_IMODE achieves the first rank in 5 cases out of 8, DEDMNA obtains the first rank in 2 cases, while NL-SHADE-RSP obtains the first rank in the remaining case. APGSK_IMODE is ranked 4th when used to solve (010) and (110), and 3rd when all three operators are used (111). Table 18 shows the obtained rank and score for problems with 20 variables. The results show that APGSK_IMODE and MadDE obtain the first rank in 4 cases out of 8.
However, APGSK_IMODE obtains better ranks than MadDE in the other 4 cases. J21 and NL-SHADE-RSP each obtain the first rank in 2 of the 8 cases.

Conclusions and future directions
This paper has presented the performance of various meta-heuristic algorithms on the CEC 2021 benchmark functions. The benchmark objective functions are parameterized by including different combinations of the bias, shift and rotation operators. These parameterized functions are used to evaluate some basic and advanced meta-heuristic algorithms. The performance of the meta-heuristic algorithms is assessed with an evaluation metric based on a score composed of two parts, i.e. the sum of normalized error values and the sum of ranks, each of which contributes 50% to the total score of 100. 10 benchmark functions are considered with 10 and 20 dimensions. Two types of meta-heuristic algorithms, basic and advanced, are considered. The obtained results show that the teaching learning-based optimization algorithm outperforms the other basic algorithms and gets the first rank in the overall comparison. It presents the best results with the bias operator in the case of 20 dimensions, although it is significantly affected by the parameterized vectors with shift and rotation (011) and with bias, shift and rotation (111). Moreover, IMODE performs better than the other advanced algorithms and places itself in the first position. It performs best in the bias-only case for 20 dimensions. The IMODE algorithm presents the best results among the other algorithms for the cases (000), (001), (010), (100), (101) and (110) for 10 dimensions, and it gets a total score of 100 for the (000), (001), (100), (101) and (110) parameterized vectors in the case of 20 dimensions. Among the algorithms that participated in the CEC2021 competition, APGSK-IMODE obtains the best results for both 10D and 20D. This paper benefits researchers by bringing the various meta-heuristic algorithms under one roof and presenting the results of the standard and advanced algorithms for 10 and 20 dimensions.
This work can be extended to higher dimensions such as 30, 50, 100, 500, and 1000. Besides the mentioned algorithms, more algorithms could be included to check their performance on the benchmark functions. Moreover, other evaluation criteria or performance metrics could be developed to evaluate the performance of the algorithms. Interested researchers can consider these objectives for further study.

Declarations
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.