Abstract
Recently, the Honey Badger Algorithm (HBA) was proposed as a metaheuristic algorithm inspired by the hunting behaviour of honey badgers. In the exploitation phase, HBA performs poorly and stagnates at the local best solution. The sand cat swarm optimization (SCSO) algorithm, on the other hand, is highly competitive with other common metaheuristic algorithms because of its outstanding performance in the exploitation phase. Hence, the purpose of this paper is to hybridize HBA with SCSO so that SCSO, which can effectively exploit optimal solutions, overcomes the deficiencies of HBA and improves the quality of the solution. The resulting hybrid metaheuristic algorithm is called HBASCSO. The proposed approach was evaluated on challenging instances taken from the CEC2015, CEC2017, and CEC2019 benchmark suites. The HBASCSO is evaluated against the original HBA and SCSO, as well as several other recently proposed algorithms. To demonstrate that the proposed method performs significantly better than its competitors, 30 independent runs of each algorithm were evaluated to determine the best, worst, and mean fitness values and their standard deviation. In addition, the Wilcoxon rank-sum test is used as a non-parametric comparison, and it shows that the proposed algorithm outperforms the others. Hence, the HBASCSO achieves an optimum solution that is better than the original algorithms.
1 Introduction
A typical optimization problem involves finding the largest or smallest value of a goal function. The obtained values are known as an optimum solution. Cost and time complexity are important when an algorithm tries to find the optimum solution: when the problem dimension and the search space grow, the complexity of the problem also increases [1]. In recent decades, researchers have used metaheuristic algorithms to find optimal solutions while avoiding excessive computational complexity [2,3,4,5,6,7]. Complexity theory classifies the difficulty of optimization problems into polynomial (P), non-deterministic polynomial (NP), and NP-hard [8, 9]. In NP-hard problems, the run time grows exponentially with the input size. Metaheuristic algorithms [10] can find satisfying solutions for NP-hard problems within a reasonable time. Their authors draw on observations of animal behaviour, physical phenomena, and evolutionary theory to build the algorithms [11]. Metaheuristic algorithms determine the best possible solution in each iteration by initializing a search space according to the algorithm's rules and satisfying the cost function. Additionally, the algorithms must balance the exploration and exploitation phases to avoid being trapped in a local optimum [12,13,14,15].
In general, metaheuristic algorithms are classified as single-solution-based or population-based. Researchers have shown that population-based algorithms perform better in solving optimization problems [10]. There are three types of population-based algorithms. Swarm intelligence (SI) algorithms are the first type, imitating the natural behaviours of humans, animals, and plants [16,17,18,19,20,21,22]. Evolutionary algorithms (EA) are the second type, imitating natural genetic mechanisms and evolution [23,24,25]. The last type is the natural phenomenon (NP) algorithms, which imitate the universe's physical or chemical rules [26]. However, some optimization algorithms have limitations, such as local optimum traps, trade-offs between exploration and exploitation, and time complexity [27]. New metaheuristic algorithms have been proposed to address these weaknesses. Researchers often exploit the advantages and disadvantages of existing metaheuristic algorithms to propose hybrid metaheuristic algorithms. For example, the tendency of the honey badger algorithm (HBA) [21] to get trapped in local optima can be remedied by the convergence behaviour of the sand cat swarm optimization (SCSO) algorithm [22] in the early stages of evolution. Moreover, SCSO offers swarm diversity and convergence speed, which compensate for HBA's insufficient balance between exploration and exploitation. Thus, the main goal is to use the strengths of one algorithm to minimize another algorithm's weaknesses [28].
A wide range of hybrid metaheuristic algorithms have been proposed; the following is a review of some of them. In [13, 29,30,31,32], the authors present hybrid algorithms that address problems such as low convergence speed, balancing the exploration and exploitation stages, and preventing the algorithm from falling into a local optimum. Other hybrid algorithms [33, 34] addressed problems such as early convergence, improving approximation quality, and approaching the global optimum. In [35], the authors developed a new hybrid metaheuristic algorithm to find the best route for collecting bins in a smart waste collection system. The hybrid algorithm decides the collection time and the shortest path to each bin. They applied the proposed hybrid algorithm to a real case study in Portugal, and the results showed that the new algorithm increases the company's profit by 45%. The authors of [36] proposed a new hybrid algorithm by combining the crow search algorithm (CSA) and the symbiotic organisms search (SOS) algorithm. The new CSA-SOS algorithm is applied in industrial settings to solve the load-sharing optimization problem. The work in [37] proposed a new hybrid algorithm that combines the iterated local search (ILS), variable neighbourhood descent (VND), and threshold acceptance (TA) metaheuristics to find proper routing for pickup and delivery operations. The ILS is used as a mainframe, while the VND and TA serve as the local search mechanism and acceptance criterion, respectively. The new algorithm generates initial solutions using the nearest neighbour heuristic. Then, the VND intensifies the search by ordering the neighbourhood structures randomly. Finally, a perturbation mechanism explores different regions of the search space.
In [38], a new hybrid metaheuristic algorithm is introduced to solve various engineering problems without dependence on parameter tuning. The new hybrid algorithm merges particle swarm optimization (PSO), the gravitational search algorithm (GSA), and the grey wolf optimizer (GWO) into one hybrid algorithm known as the HGPG algorithm. The HGPG has strong control over exploration and exploitation, achieved by using the gravity law from GSA, the top three search agents from GWO, and the velocity update from PSO. This increases the exploitation rate and guides the exploration, producing a high convergence rate compared with other heuristic algorithms. Within the past few years, different hybrid metaheuristic algorithms have been proposed to enhance feature selection in human-computer interaction. In [39], a hybrid channel-ranking procedure was developed for multichannel electroencephalography (EEG)-based brain-computer interface (BCI) systems using Fisher information and an objective firefly algorithm (FA). The authors aimed to reduce the high-dimensional features of the EEG. In [40], a new hybrid algorithm based on the dynamic butterfly optimization algorithm (DBOA) with a mutual-information-based feature interaction maximization (FIM) scheme was proposed for solving the problems of hybrid feature selection methods. The new method, IFS-DBOIM, maximized classification accuracy on different datasets. In [41], a multiobjective X-shaped binary butterfly optimization algorithm (MX-BBOA) was developed to select the most informative channels from BCI system signals. The new algorithm increased classification accuracy and reduced computation time. In [42], a logistic S-shaped binary Jaya optimization algorithm (LS-BJOA), which combines a logistic map with the Jaya optimization algorithm, was proposed. The new approach aimed to alleviate the computational burden caused by the many channels involved in extracting neural signals from the brain in BCI systems.
A new generation of hybrid metaheuristic algorithms has emerged recently that uses machine-learning strategies to enhance the efficiency of metaheuristics. In [43], the authors combined the Q-learning method, a reinforcement learning technique, with three classical metaheuristic algorithms to produce three new hybrid algorithms. The Q-learning method is responsible for finding the global solution and avoiding local traps by guiding the search agent with a reward-and-penalty system. The authors hybridized the I-GWO, Ex-GWO, and WOA with Q-learning to produce the RLI-GWO, RLEx-GWO, and RLWOA hybrid algorithms. The results showed that the new algorithms explore new areas more successfully and perform better in the exploration and exploitation phases. Another work that used reinforcement learning to enhance a classical metaheuristic was introduced in [14]. In this work, the sand cat swarm optimization (SCSO) algorithm is hybridized with reinforcement learning techniques to produce the RLSCSO algorithm, which uses reinforcement learning agents to explore the problem's search space efficiently. The results showed that the RLSCSO algorithm explores and exploits the search space better than the standard SCSO. Additionally, the RLSCSO algorithm is superior to other metaheuristic algorithms since the agent can switch between the aforementioned phases depending on the reward-and-penalty system.
This study makes the following significant contributions:

1. This paper proposes a hybrid algorithm called HBASCSO that combines HBA and SCSO characteristics to improve search efficiency.
2. The HBASCSO has the ability to transition from exploration to exploitation efficiently.
3. Local optimum avoidance is achieved through the trade-off between the exploration and exploitation phases.
4. The performance of the HBASCSO is evaluated on three different sets of benchmark functions: CEC2014-2015, CEC2017, and CEC2019.
5. Statistical analysis is carried out to evaluate the experimental results and compare them with other state-of-the-art algorithms.
The rest of the paper is organized as follows: Section 2 gives a background about both the honey badger algorithm (HBA) and sand cat swarm optimization (SCSO) algorithms. In Section 3, the proposed hybrid algorithm is explained, and in Section 4, we explain the performance analysis of the HBASCSO algorithm on the different sets of benchmark functions. Finally, Section 5 presents a discussion of the results and Section 6 concludes the paper.
2 Fundamentals
This section describes the honey badger algorithm (HBA) and the sand cat swarm optimization (SCSO) algorithm, discussing in depth their mathematical models as well as the natural behaviours that inspired them.
2.1 Honey Badger Algorithm (HBA)
The honey badger algorithm (HBA) was proposed by Hashim et al. [21]. The HBA is inspired by the behaviour of honey badgers in nature. The honey badger is a fearless mammal with white fur found in Africa and Asia. Honey badgers weigh between 7 and 13 kg and measure 77 cm in length. Honey badgers love honey and beehives but cannot detect the hives' location. The badger solves this problem by following the honeyguide, a bird that can locate hives but cannot reach the honey, which leads it to the beehives. Moreover, the honey badger preys on some sixty species using its sense of smell. It starts by estimating the location of the prey; then, by moving around in the vicinity of the prey, it finds a suitable spot to catch it. After locating the correct spot, the badger begins digging and catching. The HBA mimics the badger's feeding behaviour in two modes: the first, smelling and digging, is known as digging mode; the second, following the honeyguide, is known as honey mode.
The HBA starts by initializing the search space, which is formed by the representation of the candidate solutions. The search space is initialized using Eq. (1).
The number of honey badgers N and the position of each one are initialized. The next step is calculating the intensity (I), which relies on both the smell of the prey and the distance to it. The honey badger's speed depends on the strength of the smell. The smell intensity is calculated by the inverse square law and is defined in Eqs. (2), (3), and (4). Here, \(r_1\) is selected randomly between 0 and 1, \(x_i\) is a candidate solution in the population, and \(lb_i\) and \(ub_i\) are the lower and upper bounds of the search space, respectively. \(I_i\) is the prey's smell intensity, \(r_2\) is selected randomly between 0 and 1, S is the prey's location (concentration strength), \(d_i\) is the distance between the badger and the prey, and \(x_{prey}\) refers to the prey's position. The third step is updating the density factor \(\alpha\). The density factor ensures a smooth transition between the searching (exploration) phase and the exploitation phase by controlling the time-varying randomization; it decreases with time to reduce randomization according to Eq. (6), where C is a constant (set to 2) and \(t_{max}\) is the maximum number of iterations.
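To make these steps concrete, the following is a minimal Python sketch of Eqs. (1)-(6) as defined above, assuming the standard HBA formulation [21]; the function names (init_population, smell_intensity, density_factor) are our own illustrative choices, not identifiers from the original paper.

```python
import numpy as np

def init_population(n, dim, lb, ub, rng):
    # Eq. (1): x_i = lb_i + r1 * (ub_i - lb_i), with r1 ~ U(0, 1)
    return lb + rng.random((n, dim)) * (ub - lb)

def smell_intensity(pop, prey, rng):
    # Eq. (3): S = (x_i - x_{i+1})^2, the concentration strength
    S = np.sum((pop - np.roll(pop, -1, axis=0)) ** 2, axis=1)
    # Eq. (4): d_i = x_prey - x_i; its squared norm is used directly below
    d2 = np.sum((prey - pop) ** 2, axis=1) + 1e-30  # guard against division by zero
    # Eq. (2): inverse square law, I_i = r2 * S / (4 * pi * d_i^2)
    return rng.random(len(pop)) * S / (4.0 * np.pi * d2)

def density_factor(t, t_max, C=2.0):
    # Eq. (6): alpha = C * exp(-t / t_max), shrinking randomization over time
    return C * np.exp(-t / t_max)
```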
One of the important steps in metaheuristic algorithms is avoiding local optimum traps. The HBA changes the search direction using a flag F, which allows agents to discover areas of the search space not yet visited. The HBA updates positions in two phases, the digging phase and the honey phase. In the digging phase, the badger updates its position by moving along a cardioid-like path; this motion is simulated by Eq. (7), where \(x_{new}\) is the badger's new position, F is the direction-altering flag, \(\beta\) represents the ability to get food, and the r values are selected randomly between 0 and 1. The flag F is calculated using Eq. (8). In the honey phase, the badger updates its position by following the guide bird; this motion is simulated by Eq. (9), where \(r_7\) is selected randomly between 0 and 1 (Fig. 1).
Fig. 1 Prey intensity and the inverse square law [21]
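The two phases can be sketched in Python as follows, assuming the standard HBA update equations [21]; the 50/50 phase switch and the default value of \(\beta\) follow the original paper, while hba_update is a hypothetical name of our own.

```python
import numpy as np

def hba_update(x, prey, I_i, alpha, rng, beta=6.0):
    """One HBA position update for a single badger: a sketch of Eqs. (7)-(9).
    beta (the ability to get food) defaults to 6 as in [21]."""
    r3, r4, r5, r6, r7 = rng.random(5)
    F = 1.0 if r6 <= 0.5 else -1.0         # Eq. (8): direction-altering flag
    d = prey - x                           # distance vector from badger to prey
    if rng.random() < 0.5:
        # Eq. (7), digging phase: cardioid-like motion around the prey
        return (prey + F * beta * I_i * prey
                + F * r3 * alpha * d * abs(np.cos(2 * np.pi * r4)
                                           * (1 - np.cos(2 * np.pi * r5))))
    # Eq. (9), honey phase: follow the honeyguide toward the hive
    return prey + F * r7 * alpha * d
```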
2.2 Sand cat swarm optimization (SCSO)
The sand cat swarm optimization (SCSO) algorithm, proposed by Seyyedabbasi and Kiani [22], was inspired by the behaviour of sand cats in nature. The sand cat is a mammal of the genus Felis that lives in the deserts of Asia, environments known to be harsh for animals. This small, clever cat exhibits various behaviours in its daily activities, such as hunting and escaping. Despite the great similarity in appearance between a sand cat and a domestic cat, their living behaviours are very different; one of the most important differences is that sand cats do not live in groups. Sand cats also have features that enable them to survive in these harsh environments. Their fur colour is close to that of the desert, which makes hiding from other animals easier. Moreover, the sand cat's paws are covered with dense fur that acts as an insulator, protecting it from high soil temperatures. Finally, the sand cat's ears are larger than those of the domestic cat, and its tail accounts for half of its length. Sand cats range between 45 and 57 cm in length, with a body weight between 1 and 3.5 kg. As clawed animals, sand cats use their paws to hunt snakes, reptiles, desert rodents, small birds, and insects. During hunting, the sand cat first detects the prey by using its ears to hear low-frequency noises (below 2 kHz). It then tracks the prey until it finds the right moment to attack, or digs when the prey is underground. To imitate this behaviour, the SCSO algorithm has two stages: searching and attacking.
To realize the swarm intelligence concept, the SCSO algorithm maintains a swarm of sand cats. The population is an array of sand cats, and each cat (a 1D array) holds values for all variables of the problem. The definition phase creates a candidate matrix whose size equals the number of sand cats, with variable values specified between the lower and upper boundaries. The fitness function gives each cat's cost, Eq. (10). Once an iteration is done, the cat with the best cost function output is the best solution for that iteration, and the other cats move their positions toward it. However, if the best solution is not better than the previous iteration's solution, the SCSO algorithm ignores it.
where f is the fitness function value for each cat in the population. The SCSO algorithm imitates the sand cat's behaviours in two phases: searching for the prey (exploration) and attacking the prey (exploitation). The search phase relies on the fact that the sand cat hears low frequencies. Each solution (cat) has a sensitivity range, which decreases linearly after each iteration to ensure that the cat does not move away from the prey. This range starts at 2 and decreases to 0; the value 2 is chosen because sand cats can hear low frequencies below 2 kHz. Mathematically, the sensitivity range decreases according to Eq. (12), where \(\overrightarrow{r_G}\) is the sensitivity range, \(S_M\) is the cat's hearing level (assumed to be 2), \(iter_c\) is the current iteration, and \(iter_{Max}\) is the maximum number of iterations. This equation is flexible and adaptable; for example, the \(S_M\) value can represent the agent's action speed in another problem. Moreover, the range adapts to the iteration number: over 100 iterations, the value is greater than 1 in the first fifty iterations and less than 1 in the last fifty. Equation (13) defines \(\overrightarrow{R}\), the main parameter that decides the transition from exploration to exploitation and ensures the balance between these two phases. To avoid the local optimum problem, each cat in the population has its own sensitivity range, calculated by Eq. (14), while \(\overrightarrow{r_G}\) is the general sensitivity range that decreases linearly as mentioned before. Finally, each cat updates its position depending on its sensitivity range, its current position, and the best-candidate position, Eq. (15), where \(\overrightarrow{Pos_{bc}}\) and \(\overrightarrow{Pos_c}\) are the best-candidate and current positions, respectively.
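The search phase can be sketched as follows, assuming the standard SCSO formulation [22]; scso_search_step is a hypothetical name of our own.

```python
import numpy as np

def scso_search_step(pos, best, iter_c, iter_max, rng, S_M=2.0):
    """One SCSO exploration step: a sketch of Eqs. (12)-(15)."""
    # Eq. (12): general sensitivity range, shrinking linearly from S_M to 0
    r_G = S_M - (S_M * iter_c / iter_max)
    # Eq. (13): R controls the exploration/exploitation transition
    R = 2.0 * r_G * rng.random() - r_G
    # Eq. (14): this cat's own sensitivity range
    r = r_G * rng.random()
    # Eq. (15): move toward a point defined by the best candidate and a
    # randomly weighted copy of the current position
    new_pos = r * (best - rng.random(pos.shape) * pos)
    return new_pos, R
```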
The cat's sensitivity range takes a circular shape. In the attack phase, the direction of movement is determined by a random angle \(\theta\) on this circle. The distance between the current solution \(\overrightarrow{Pos_c}\) and the best solution \(\overrightarrow{Pos_b}\), together with the other movement parameters, is calculated by Eq. (16), where \(\overrightarrow{Pos_{rnd}}\) is a random position calculated by Eq. (17). The random position guides cats away from local optimum traps. Since the movement direction is determined on a circle, each cat in the population moves in a different direction between 0° and 360° (between −1 and 1 in the search space). The angle of the hunting position for each cat is chosen using the roulette wheel selection algorithm. Figure 2 shows the position-updating procedure for two consecutive iterations of the SCSO algorithm.
Fig. 2 Position updating between iteration \(i\) (a) and iteration \(i+1\) (b) [22]
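A sketch of the attack phase follows, again assuming the standard SCSO equations [22]; for brevity, a uniform random draw stands in for the roulette wheel selection of the angle.

```python
import numpy as np

def scso_attack_step(pos, best, r, rng):
    """One SCSO attack (exploitation) step: a sketch of Eqs. (16)-(17)."""
    # Eq. (17): random position derived from the best solution and the current cat
    pos_rnd = np.abs(rng.random(pos.shape) * best - pos)
    # random angle in [0, 360); the paper selects it with roulette wheel selection
    theta = np.deg2rad(rng.uniform(0.0, 360.0))
    # Eq. (16): circle the best solution along the chosen direction
    return best - r * pos_rnd * np.cos(theta)
```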
3 HBASCSO Algorithm
The exploration phase of an algorithm plays a very important role in optimization performance, speeding up convergence and avoiding local optima. Likewise, exploitation contributes significantly to the performance of algorithms. It is common to build hybrid algorithms that combine the advantages of metaheuristic algorithms while compensating for their disadvantages. This study hybridizes two algorithms: the honey badger algorithm (HBA) and the sand cat swarm optimization (SCSO) algorithm, both described in depth in the previous section. Both algorithms are inspired by animal behaviour in nature, and each has strong abilities to find the optimal solution, but both also have limitations. Furthermore, the no free lunch (NFL) theorem [44] indicates that no single algorithm can solve all optimization problems. Both of these algorithms are simple to implement and reasonable in terms of cost and time complexity, characteristics that allow them to find the optimal solution in a reasonable amount of time.
As a first step, the search space is filled with a uniform distribution of randomly generated solutions representing both sand cats and honey badgers. Then, the search space boundary is enforced by checking the population: if search agents are found outside the boundary, they are amended, and the fitness function is then calculated. To ensure that the solutions are feasible and optimal, the fitness function must be satisfied. A parameter \(a\), a random value between 0 and 1, controls the switch between digging and attacking. When \(a\) is smaller than 0.5, the HBASCSO algorithm uses Eq. (15.a), the digging phase, whose motion is similar to a cardioid [19]. Otherwise, the HBASCSO algorithm uses Eq. (15.b) to attack the prey. This equation also uses cos(\(\beta\)), where \(\beta\) is the search agents' ability to attack the food. The pseudocode and flowchart are given in Algorithm 1 and Fig. 3, respectively.

Algorithm 1. Proposed hybrid optimization algorithm pseudocode
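Since Algorithm 1 is presented as a figure, the following Python sketch illustrates the hybrid loop it describes. The exact forms of Eqs. (15.a) and (15.b) are not reproduced in the text, so the digging and attacking updates below reuse the HBA and SCSO updates from Section 2; treat this as an illustrative sketch under those assumptions, not the authors' exact formulation.

```python
import numpy as np

def hbascso(fitness, dim, lb, ub, n=30, max_iter=500, beta=6.0, C=2.0, seed=0):
    """Minimal sketch of the HBASCSO loop: uniform initialization, boundary
    amendment, and a random switch 'a' between digging and attacking."""
    rng = np.random.default_rng(seed)
    pop = lb + rng.random((n, dim)) * (ub - lb)          # uniform initialization
    costs = np.apply_along_axis(fitness, 1, pop)
    best, best_cost = pop[np.argmin(costs)].copy(), costs.min()
    for t in range(1, max_iter + 1):
        alpha = C * np.exp(-t / max_iter)                # HBA density factor
        r_G = 2.0 - 2.0 * t / max_iter                   # SCSO sensitivity range
        # per-badger smell intensity toward the best solution (cf. Eq. (2))
        S = np.sum((pop - np.roll(pop, -1, axis=0)) ** 2, axis=1)
        d2 = np.sum((best - pop) ** 2, axis=1) + 1e-30
        I = rng.random(n) * S / (4.0 * np.pi * d2)
        for i in range(n):
            a = rng.random()                             # digging/attacking switch
            d = best - pop[i]
            if a < 0.5:
                # digging phase (Eq. 15.a): cardioid-like HBA motion
                r3, r4, r5, r6 = rng.random(4)
                F = 1.0 if r6 <= 0.5 else -1.0
                pop[i] = (best + F * beta * I[i] * best
                          + F * r3 * alpha * d
                          * abs(np.cos(2 * np.pi * r4) * (1 - np.cos(2 * np.pi * r5))))
            else:
                # attacking phase (Eq. 15.b): SCSO-style motion around the best
                r = r_G * rng.random()
                pos_rnd = np.abs(rng.random(dim) * best - pop[i])
                theta = np.deg2rad(rng.uniform(0.0, 360.0))
                pop[i] = best - r * pos_rnd * np.cos(theta)
            pop[i] = np.clip(pop[i], lb, ub)             # boundary amendment
            c = fitness(pop[i])
            if c < best_cost:                            # greedy best retention
                best, best_cost = pop[i].copy(), c
    return best, best_cost

# usage example on the sphere function
if __name__ == "__main__":
    best, cost = hbascso(lambda x: float(np.sum(x ** 2)), dim=30, lb=-100.0, ub=100.0)
    print(cost)
```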
4 Results and Analysis
The purpose of this section is to evaluate the performance of the HBASCSO algorithm using benchmark functions. The proposed HBASCSO is compared with seventeen popular algorithms: the honey badger algorithm (HBA) [21], sand cat swarm optimization (SCSO) [22], grey wolf optimizer (GWO) [45], whale optimization algorithm (WOA) [17], Harris hawks optimization (HHO) [20], sine cosine algorithm (SCA) [46], particle swarm optimization (PSO) [47], salp swarm algorithm (SSA) [48], gravitational search algorithm (GSA) [49], Fick's law algorithm (FLA) [50], Henry gas solubility optimization (HGS) [51], moth-flame optimization (MFO) [52], bonobo optimizer (BO) [53], artificial ecosystem-based optimization (AEO) [54], multi-verse optimizer (MVO) [55], seagull optimization algorithm (SOA) [56], and the slime mould algorithm (SMA) [57].
This study utilizes a greater number of metaheuristic algorithms than is usually the case. To analyze the performance and variety of the proposed algorithm, different metaheuristic algorithms were selected for each group of benchmark functions. The benchmark functions used in this study are those from CEC2015 [58, 59], CEC2017 [60], and CEC2019 [61]; three sets were used to increase the accuracy of the analysis. All experiments were conducted in the same environment, and all algorithms were simulated under the same settings: 30 independent runs with 30 search agents and 500 iterations. Independent runs must be conducted to monitor the effects generated by random parameters. The parameter values of each metaheuristic algorithm are presented in Table 1.
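Under these settings, the reported statistics can be reproduced with a small harness like the following sketch; the algorithm interface (returning a (position, cost) pair and accepting a seed), is a hypothetical convention matching the hbascso sketch above.

```python
import numpy as np

def benchmark(algorithm, fitness, runs=30, **kwargs):
    """Best / worst / mean / std of the final cost over independent runs,
    matching the protocol above (30 runs, 30 agents, 500 iterations)."""
    finals = np.array([algorithm(fitness, seed=s, **kwargs)[1] for s in range(runs)])
    return {"best": finals.min(), "worst": finals.max(),
            "mean": finals.mean(), "std": finals.std(ddof=1)}
```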
Benchmark functions allow metaheuristic algorithms to be evaluated for effectiveness and efficiency. The CEC2014 and 2015 benchmark functions are presented in Table 2. This group consists of three types of functions: unimodal, multimodal, and fixed-dimension multimodal. A unimodal benchmark function has only one global optimum (maximum or minimum). A multimodal function has both a global optimum and local optima, as the name implies. Unlike the other two categories, the dimension of a fixed-dimension multimodal function cannot be modified. Table 3 presents the second set of benchmark functions, used to measure metaheuristic algorithms in the Congress on Evolutionary Computation [60]; the benchmark functions in this set are more challenging and fall into four groups: unimodal, simple multimodal, hybrid, and composition functions. The third set of benchmark functions (CEC-C06 2019), listed in Table 4, is examined to demonstrate the algorithm's ability to handle large-scale optimization problems.
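For intuition about the first two categories, the sphere function is a standard unimodal benchmark and the Rastrigin function a standard multimodal one; these plain forms are illustrative only, as the CEC suites use shifted and rotated variants.

```python
import numpy as np

def sphere(x):
    # unimodal: a single global minimum of 0 at the origin
    return float(np.sum(x ** 2))

def rastrigin(x):
    # multimodal: one global minimum surrounded by a lattice of local minima
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```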
4.1 Result Analysis for the Benchmark Functions CEC2014-2015
The analysis of each algorithm on the different sets of benchmark functions is presented in Tables 5–8, which summarize the average, worst, best, and standard deviation of the results. As mentioned before, this paper uses three different sets of benchmark functions as well as several recently proposed metaheuristic algorithms to compare and evaluate the proposed algorithm. Table 5 presents the results for the first set of benchmark functions. The results obtained for the unimodal functions with the different metaheuristic algorithms show that the HBASCSO algorithm performs well on functions F1, F2, F3, F4, and F7. In this type of benchmark function, there is only one optimal solution. The HHO algorithm performs better than the others on functions F5 and F6. Table 6 illustrates the results for the multimodal functions. The results obtained for functions F8 to F13 demonstrate that the HBASCSO algorithm provides optimal results for F9, F10, and F11. While the HBA algorithm matched the proposed algorithm on those functions, the SCSO algorithm did not. As a result, the proposed hybrid algorithm improves on the HBA and SCSO algorithms; hybrid metaheuristics are used precisely to capitalize on the constituent algorithms' advantages, and it can be observed that the HBA and SCSO algorithms explore and exploit efficiently when hybridized.
For functions F14-F23, which are fixed-dimension functions, the obtained results are presented in Table 7. Based on that table, the HBASCSO finds the global optimum for F15, F16, F17, F18, F19, and F20. It was observed that most metaheuristic algorithms can find the global optimum of the fixed-dimension benchmark functions, but for function F20 the proposed algorithm's mean result is better than the others'. Besides, for functions F21 and F23, the GWO algorithm always finds the global optimum. Table 8 ranks all algorithms statistically based on their mean values. Figure 4 identifies the most successful optimization algorithm based on the total rank summary of all the optimization algorithms.
4.2 Result Analysis for the Benchmark Functions CEC2017
To perform the numerical validation analysis, 29 benchmark functions from CEC2017 were used; these functions are commonly used to evaluate the performance of metaheuristic algorithms. The CEC2017 functions fall into four types: unimodal (F1 and F3), multimodal (F4–F10), hybrid (F11–F20), and composition (F21–F30). Table 3 presents the specifications of these functions. These benchmarks have been used to evaluate the HBASCSO against algorithms such as HBA, SCSO, SSA, GSA, FLA, HGS, and MFO. Table 9 summarizes the experimental results, including the average, worst, best, and standard deviation.
In terms of average results, HBASCSO is superior on these benchmarks. The analysis of the average ranking values of the algorithms involved shows that the HBASCSO obtains the optimum results on most benchmark functions. A comprehensive analysis of the HBASCSO algorithm was conducted, including an examination of its exploration and exploitation capabilities on the CEC2017 test functions. A better balance between exploration and exploitation is possible after the hybridization of two metaheuristic algorithms: the hybrid benefits from the main advantages of both the HBA and SCSO algorithms while retaining operators to control the trade-off. During both exploration and exploitation, all HBASCSO search agents maintain their characteristics and activity, which allows efficient coverage of the search area. Table 10 summarizes the ranking results for the HBASCSO algorithm and the other algorithms. In this table, the algorithm with the lowest overall ranking is the one that finds values closest to the global optimum; since eight algorithms are compared, the algorithm with the lowest rank finds results very close to the optimum, while the algorithm with the highest value finds the worst results.
4.3 Result Analysis for the Benchmark Functions CEC2019
The HBASCSO algorithm is examined on the CEC2019 benchmark functions, and its results are compared with those of other well-known metaheuristics. This benchmark suite is referred to as CEC-C06, also known as "The 100-digit challenge" [63]. The 10 functions of this modern benchmark suite are listed in Table 4; they are used to evaluate metaheuristic algorithms on large-scale optimization problems. As Table 4 shows, the first three functions have different dimensions, while functions 4 to 10 are shifted and rotated within [−100, 100] to simulate minimization problems in 10-dimensional space. All of the functions in CEC2019 have a global optimum value of 1, and all of them are scalable.
As shown in Table 11, the HBASCSO algorithm performs well on the CEC01, CEC02, CEC03, and CEC10 functions, which are designed to analyze the exploration and exploitation capabilities of metaheuristic algorithms. Table 12 shows that the HBASCSO algorithm is ranked first in the rank summary and is competitive in the rank for optimum values. The SCSO and SMA algorithms also share the first rank in the total summary: the SCSO algorithm achieved optimal results on CEC04 and CEC05, while the SMA algorithm obtained optimal results on CEC02, CEC07, and CEC09. A comparison of the performance of the HBASCSO algorithm with newly proposed algorithms shows that it is very competitive. At the same time, the trade-off between the exploration and exploitation phases is clearly observed.
5 Discussion
The numerical results show that the HBASCSO algorithm works better than many other metaheuristic and hybrid metaheuristic algorithms, namely HBA, SCSO, GWO, WOA, HHO, SCA, PSO, SSA, GSA, FLA, HGS, MFO, BO, AEO, MVO, SOA, and SMA. In the first part of the performance analysis, 23 test functions are used to compare the proposed algorithm with HBA, SCSO, GWO, WOA, HHO, SCA, and PSO. Table 8 presents the rank summary of the proposed algorithm against these algorithms on the CEC2014-2015 benchmark functions: the proposed algorithm took first place overall, while the SCSO algorithm ranked second. The second comparison contrasts the proposed algorithm with the HBA, SCSO, SSA, GSA, FLA, HGS, and MFO algorithms on the CEC2017 test functions (F1-F30), using the mean, worst, best, and standard deviation metrics; Table 10 summarizes these results. The HBASCSO algorithm took first place in this table, with the SCSO ranked second by a small margin. The last part of the analysis compares the proposed algorithm with the HBA, SCSO, BO, AEO, MVO, SOA, and SMA algorithms on the CEC2019 benchmark functions. In this comparison, the proposed algorithm, the SCSO, and the SMA all perform excellently on many benchmark functions and, as summarized in Table 12, share first place in total performance. The HBASCSO analysis presented above demonstrates its effectiveness compared with several optimization algorithms. The advantages and disadvantages of HBASCSO can be summarized as follows:
- To improve performance, the HBASCSO maintains a trade-off between the exploration and exploitation phases. This balance is a direct result of hybridizing the HBA and SCSO algorithms.
- Considering disturbances and uncertainties is crucial for designing robust optimization algorithms that fit better into real-world systems.
- An analysis of the mean, worst, best, and standard deviation values of the obtained results shows that the HBASCSO algorithm gets as close as possible to the optimal solution; consequently, there is no significant difference among the mean, worst, and best results.
5.1 Wilcoxon Rank-Sum Test Analysis
The Wilcoxon rank-sum test [62] is a nonparametric statistical procedure based solely on the order in which the observations appear in the samples. In this comparison, the algorithm with the lowest ranking is determined to be the best. In this section, the Wilcoxon rank-sum test is carried out at a significance level of 5%. Table 13 presents the p-values calculated by the nonparametric Wilcoxon rank-sum test for the pairwise comparison over two independent samples (HBASCSO vs. HBA, SCSO, GWO, WOA, HHO, SCA, and PSO) on CEC2017. Tables 14 and 15 present the corresponding p-values (HBASCSO vs. HBA, SCSO, GWO, WOA, HHO, SCA, PSO, BO, AEO, MVO, SOA, and SMA) on CEC2019. The p-values are generated by the Wilcoxon test at the 0.05 significance level over 30 independent runs.
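A comparison of this kind can be reproduced with SciPy's rank-sum implementation; the run samples below are synthetic placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
a_runs = rng.normal(1.00, 0.05, 30)   # e.g., 30 final costs of HBASCSO (synthetic)
b_runs = rng.normal(1.10, 0.05, 30)   # e.g., 30 final costs of a rival (synthetic)

stat, p = ranksums(a_runs, b_runs)
print(f"p = {p:.3g}")                 # p < 0.05 -> significant at the 5% level
```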
5.2 Computational Complexity
A fundamental metric for assessing algorithmic performance is time complexity, expressed here using big-O notation. This paper gives a detailed assessment of the complexity of the HBA, SCSO, and HBASCSO. The computational complexity of the algorithm can be divided into three primary segments: initialization, fitness evaluation, and the population update method. The HBA, SCSO, and HBASCSO algorithms initialize the position of each search agent in O(N × D) time, where N is the number of search agents and D is the problem's dimensionality. Consequently, the overall computational cost of the HBASCSO is proportional to O(N × D × Max_iter) for a total of Max_iter iterations. Assuming that N and D are of the same order, the general computational complexity of the HBASCSO algorithm is O(N² × Max_iter).
5.3 Examination of the Convergence Curve
The HBASCSO algorithm has a well-defined convergence behaviour. To avoid local optima, exploration and exploitation must be balanced, and the control parameters are effective in this respect. Additionally, premature convergence is prevented by hybridizing the two metaheuristic algorithms. In the early steps of optimization, which involve exploring the search space, sudden changes in the search agents' movement are necessary to identify the most promising regions of the search space. When the exploitation phase begins, search agents converge around the best solutions found and refine them.
The obtained results show abrupt changes in the movement of the search agents during the initial iterations, while the movement decreases in the final iterations, as it ideally should; such movements are considered essential [64]. The convergence curve of the HBASCSO is shown in Fig. 5. The convergence behaviour of the HBASCSO algorithm on functions F1, F2, F3, F4, and F7 indicates that the proposed algorithm exhibits a typical convergence pattern. It is also clear that the HBASCSO algorithm balances the exploration and exploitation phases, and that, through hybridization, the HBA and SCSO algorithms demonstrate efficient exploration and exploitation capabilities.
6 Conclusion
In this study, a hybrid metaheuristic algorithm based on the Honey Badger Algorithm (HBA) and the Sand Cat Swarm Optimization (SCSO) algorithm has been developed. The HBASCSO algorithm aims to improve on the original HBA and SCSO algorithms by covering the weaknesses of each. One of these weaknesses is the poor performance of the HBA in the exploitation phase; SCSO, on the other hand, is a very competitive algorithm whose exploitation performance has outperformed many other algorithms. The results obtained on well-known benchmark functions, namely the CEC2015, CEC2017, and CEC2019 functions, show that the HBASCSO algorithm has a smooth position-updating mechanism. On CEC2015, the HBASCSO algorithm was compared with seven well-known metaheuristic algorithms (HBA, SCSO, GWO, WOA, HHO, SCA, and PSO); according to the rank summary, the HBASCSO ranked first on 14 functions, making it the best of the algorithms compared. On CEC2017, the HBASCSO algorithm was compared with several recent well-known metaheuristic algorithms (SSA, GSA, FLA, HGS, and MFO); the proposed algorithm outperformed the others on 9 of the 30 test functions and ranked first overall. On CEC2019, the HBASCSO algorithm was compared with the HBA, SCSO, BO, AEO, MVO, SOA, and SMA algorithms; this benchmark includes 10 functions, and the proposed algorithm ranked first on 3 of them, alongside SCSO and SMA. The performance of the proposed algorithm exceeds that of the other metaheuristic algorithms and proves its utility for solving many engineering and real-world problems.
Below are a few directions planned for future work:

- Extending the algorithm to solve multi-objective problems in concurrent or parallel systems.
- Using the proposed algorithm to define optimized fitness functions for artificial neural networks.
- Applying the proposed algorithm to feedback controller design for nonlinear systems.
- Analyzing bioinformatics applications to determine the best method for extracting and filtering features.
- Applying the HBASCSO to real-world problems, including feature selection and robot path planning.
References
Mohammed H, Rashid T (2020) A novel hybrid GWO with WOA for global numerical optimization and solving pressure vessel design. Neural Comput Appl 32(18):14701–14718
Mafarja M, Mirjalili S (2018) Whale optimization approaches for wrapper feature selection. Appl Soft Comput 62:441–453
Huang CL, Dun JF (2008) A distributed PSO–SVM hybrid system with feature selection and parameter optimization. Appl Soft Comput 8(4):1381–1391
Bianchi L, Gambardella LM, Dorigo M (2002) Solving the homogeneous probabilistic traveling salesman problem by the ACO metaheuristic. Ant Algorithms 2463:176–187
Azizi M, Aickelin U, Khorshidi HA, Shishehgarkhaneh MB (2022) Shape and size optimization of truss structures by Chaos game optimization considering frequency constraints. J Adv Res 41:89–100
Tavakol Aghaei V, Onat A, Yıldırım S (2018) A Markov chain Monte Carlo algorithm for Bayesian policy search. Systems Science & Control Engineering 6(1):438–455
Aghaei VT, Ağababaoğlu A, Yıldırım S, Onat A (2022) A real-world application of Markov chain Monte Carlo method for Bayesian trajectory control of a robotic manipulator. ISA Trans 125:580–590
Manson SM (2001) Simplifying complexity: a review of complexity theory. Geoforum 32(3):405–414
Li W et al (2020) Parameterized algorithms of fundamental NP-hard problems: A survey. Hum Centric Comput Inf Sci 10(1):1–24
Talbi EG (2009) Metaheuristics: from design to implementation. John Wiley & Sons
Dokeroglu T, Sevinc E, Kucukyilmaz T, Cosar A (2019) A survey on new generation metaheuristic algorithms. Comput Ind Eng 137:106040
Abdollahzadeh B, Gharehchopogh FS, Khodadadi N, Mirjalili S (2022) Mountain gazelle optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Adv Eng Softw 174:103282
Seyyedabbasi A (2022) WOASCALF: A new hybrid whale optimization algorithm based on sine cosine algorithm and levy flight to solve global optimization problems. Adv Eng Softw 173:103272
Seyyedabbasi A (2023) A reinforcement learning-based metaheuristic algorithm for solving global optimization problems. Adv Eng Softw 178:103411
Talbi EG (2009) Metaheuristics: from design to implementation, vol 74. Wiley, New York, pp 5–39
Mirjalili S (2015) Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl Based Syst 89:228–249
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Wang G-G, Deb S, Gao X-Z, Coelho LDS (2016) A new metaheuristic optimisation algorithm motivated by elephant herding behaviour. Int J Bio-Inspired Comput 8(6):394–409
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: Algorithm and applications. Futur Gener Comput Syst 97:849–872
Hashim FA et al (2022) Honey badger algorithm: New metaheuristic algorithm for solving optimization problems. Math Comput Simul 192:84–110
Seyyedabbasi A, Kiani F (2022) Sand cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Engineering with Computers, pp 1–25
Holland JH (1975) Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. MIT Press, Cambridge, Mass, USA
Rechenberg I (1978) Evolutionsstrategien. Springer, Berlin Heidelberg, pp 83–114
Hansen N, Ostermeier A (2001) Completely derandomized self-adaptation in evolution strategies. Evol Comput 9(2):159–195
Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680
Ting TO, Yang XS, Cheng S, Huang K (2015) Hybrid metaheuristic algorithms: past, present, and future. Recent advances in swarm intelligence and evolutionary computation. pp 71–83
Abdel-Basset M, Abdel-Fatah L, Sangaiah AK (2018) Metaheuristic algorithms: A comprehensive review. Computational intelligence for multimedia big data on the cloud with engineering applications. pp 185–231
Barshandeh S, Haghzadeh M (2021) A new hybrid chaotic atom search optimization based on tree-seed algorithm and Levy flight for solving optimization problems. Engineering with Computers 37:3079–3122
Wang Z, Luo Q, Zhou Y (2021) Hybrid metaheuristic algorithm using butterfly and flower pollination base on mutualism mechanism for global optimization problems. Engineering with Computers 37:3665–3698
Houssein EH, Hosney ME, Elhoseny M et al (2020) Hybrid Harris hawks optimization with cuckoo search for drug design and discovery in chemoinformatics. Sci Rep 10:14439
Gao Z-M et al (2020) The hybrid grey wolf optimization-slime mould algorithm. J Phys: Conf Ser 1617(1). IOP Publishing
Houssein EH et al (2021) Hybrid slime mould algorithm with adaptive guided differential evolution algorithm for combinatorial and global optimization problems. Expert Syst Appl 174:114689
Ficarella E, Lamberti L, Degertekin SO (2021) Comparison of three novel hybrid metaheuristic algorithms for structural optimization problems. Comput Struct 244:106395
Jorge D et al (2022) A hybrid metaheuristic for smart waste collection problems with workload concerns. Comput Oper Res 137:105518
Rodrigues LR (2022) A hybrid multi-population metaheuristic applied to load-sharing optimization of gas compressor stations. Comput Electr Eng 97:107632
Öztaş T, Tuş A (2022) A hybrid metaheuristic algorithm based on iterated local search for vehicle routing problem with simultaneous pickup and delivery. Expert Syst Appl 202:117401
Biabani F, Saeed S, Hamzehei-Javaran S (2022) A new insight into metaheuristic optimization method using a hybrid of PSO, GSA, and GWO. Structures 44. Elsevier
Tiwari A, Chaturvedi A (2023) Automatic EEG channel selection for multiclass brain-computer interface classification using multiobjective improved firefly algorithm. Multimedia Tools and Applications 82(4):5405–5433
Tiwari A, Chaturvedi A (2022) A hybrid feature selection approach based on information theory and dynamic butterfly optimization algorithm for data classification. Expert Syst Appl 196:116621
Tiwari A, Chaturvedi A (2022) Automatic channel selection using multiobjective X-shaped binary butterfly algorithm for motor imagery classification. Expert Syst Appl 206:117757
Tiwari A (2023) A logistic binary Jaya optimization-based channel selection scheme for motor-imagery classification in brain-computer interface. Expert Syst Appl 223:119921
Seyyedabbasi A et al (2021) Hybrid algorithms based on combining reinforcement learning and metaheuristic methods to solve global optimization problems. Knowl-Based Syst 223:107044
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl Based Syst 96:120–133
Eberhart R, Kennedy J (1995) Particle swarm optimization. Proceedings of the IEEE international conference on neural networks, Vol. 4. pp 1942–1948
Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248
Hashim FA, Mostafa RR, Hussien AG, Mirjalili S, Sallam KM (2023) Fick’s Law Algorithm: A physical law-based algorithm for numerical optimization. Knowl-Based Syst 260:110146
Hashim FA, Houssein EH, Mabrouk MS, Al-Atabany W, Mirjalili S (2019) Henry gas solubility optimization: A novel physics-based algorithm. Futur Gener Comput Syst 101:646–667
Mirjalili S (2015) Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249
Das AK, Pratihar DK (2022) Bonobo optimizer (BO): an intelligent heuristic with self-adjusting parameters over continuous spaces and its applications to engineering problems. Appl Intell 52(3):2942–2974
Zhao W, Wang L, Zhang Z (2020) Artificial ecosystem-based optimization: a novel nature-inspired meta-heuristic algorithm. Neural Comput Appl 32:9383–9425
Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27:495–513
Dhiman G, Kumar V (2019) Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl-Based Syst 165:169–196
Li S, Chen H, Wang M, Heidari AA, Mirjalili S (2020) Slime mould algorithm: A new method for stochastic optimization. Futur Gener Comput Syst 111:300–323
Liang JJ, Qu BY, Suganthan PN, Chen Q (2014) Problem definitions and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization. Technical Report 201411A, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore 29:625–640
Helbig M, Engelbrecht A (2015) Benchmark functions for CEC 2015 special session and competition on dynamic multi-objective optimization. Technical Report, Department of Computer Science, University of Pretoria, Pretoria, South Africa
Wu G, Mallipeddi R, Suganthan PN (2017) Problem definitions and evaluation criteria for the CEC 2017 competition on constrained real-parameter optimization. National University of Defense Technology, Changsha, Hunan, PR China and Kyungpook National University, Daegu, South Korea and Nanyang Technological University, Singapore, Technical Report
Liang JJ, Qu BY, Gong DW, Yue CT (2019) Problem definitions and evaluation criteria for the CEC 2019 special session on multimodal multiobjective optimization. Zhengzhou University, Computational Intelligence Laboratory
Woolson RF (2007) Wilcoxon signed-rank test. Wiley Encyclopedia of Clinical Trials, pp 1–3
Price KV, Awad NH, Ali MZ, Suganthan PN (2018) Problem definitions and evaluation criteria for the 100-digit challenge special session and competition on single objective numerical optimization. Technical Report. Nanyang Technological University, Singapore
Van den Bergh F, Engelbrecht AP (2006) A study of particle swarm optimization particle trajectories. Inf Sci 176(8):937–971
Funding
Open access funding provided by the Scientific and Technological Research Council of Türkiye (TÜBİTAK).
Ethics declarations
Conflict of Interest
The authors declare that they have no conflict of interest.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Seyyedabbasi, A., Tareq Tareq, W.Z. & Bacanin, N. An Effective Hybrid Metaheuristic Algorithm for Solving Global Optimization Algorithms. Multimed Tools Appl 83, 85103–85138 (2024). https://doi.org/10.1007/s11042-024-19437-9