1 Introduction

A typical optimization problem involves finding the largest or smallest value of an objective function. The obtained values are known as an optimum solution. Cost and time complexity are important when an algorithm tries to find the optimum solution. As the problem dimension and the search space increase, so does the complexity of the problem [1]. In recent decades, researchers have used metaheuristic algorithms to find optimal solutions while avoiding computational complexity [2,3,4,5,6,7]. Complexity theory classifies the difficulty of optimization problems into Polynomial (P), Non-deterministic Polynomial (NP), and NP-hard [8, 9]. In NP-hard problems, the run time grows exponentially with the input size. Metaheuristic algorithms [10] can find satisfying solutions for NP-hard problems within a reasonable time. Authors build such algorithms by observing animal behaviour, physical phenomena, and evolutionary theory [11]. Metaheuristic algorithms determine the best possible solution in each iteration by initializing a search space according to the algorithm's rules and satisfying the cost function. Additionally, these algorithms must balance the exploration and exploitation phases to avoid becoming trapped in a local optimum [12,13,14,15].

In general, metaheuristic algorithms are classified into single solution-based and population-based. Researchers have shown that algorithms in the population-based class have better performance in solving optimization problems [10]. There are three types of population-based algorithms. Swarm intelligence (SI) algorithms are the first type, imitating the natural behaviours of humans, animals, and plants [16,17,18,19,20,21,22]. Evolutionary algorithms (EA) are the second type, imitating natural genetic mechanisms and evolution [23,24,25]. The last type is the natural phenomenon (NP) algorithms, which imitate the universe's physical or chemical rules [26]. However, some optimization algorithms have limitations, such as local optimum traps, the trade-off between exploration and exploitation, and time complexity [27]. New metaheuristic algorithms have been proposed to address these weaknesses. Typically, researchers exploit the advantages of one metaheuristic algorithm to offset the disadvantages of another in a hybrid metaheuristic algorithm. For example, the tendency of the honey badger algorithm (HBA) [21] to get trapped in local optima can be remedied by the convergence rates in the early stages of evolution in the sand cat swarm optimization (SCSO) algorithm [22]. Moreover, the SCSO offers swarm diversity and convergence speed, which address the insufficient balance between exploration and exploitation in the HBA. Thus, the main goal is to use the strengths of some algorithms to minimize another algorithm's weaknesses [28].

A wide range of hybrid metaheuristic algorithms have been proposed, and the following is a review of some of them. In [13, 29,30,31,32], the authors present hybrid algorithms that address problems such as low convergence speed, balancing the exploration and exploitation stages, and preventing the algorithm from falling into a local optimum. Other hybrid algorithms [33, 34] address problems such as early convergence, improving solution approximation, and approaching the global optimum. In [35], the authors developed a new hybrid metaheuristic algorithm to find the best route for collecting bins in a smart waste collection system. The hybrid algorithm decides the collection time and the shortest path to each bin. They applied the proposed hybrid algorithm to a real case study in Portugal, and the results showed that the new algorithm increases the company's profit by 45%. The authors of [36] proposed a new hybrid algorithm by combining the crow search algorithm (CSA) and the symbiotic organisms search (SOS) algorithm. The new CSA-SOS algorithm is applied in industrial applications to solve the load-sharing optimization problem. The work in [37] proposed a new hybrid algorithm that combines the iterated local search (ILS), variable neighbourhood descent (VND), and threshold acceptance (TA) metaheuristic algorithms to find proper routing for pickup and delivery operations. The ILS is used as a mainframe, while the VND and TA are used for the local search mechanism and the acceptance criterion, respectively. The new algorithm generates initial solutions using the nearest neighbour heuristic. Then, the VND concentrates the search by ordering the neighbourhood structures randomly. Finally, the perturbation mechanism explores different regions of the search space.

In [38], a new hybrid metaheuristic algorithm is introduced to solve various engineering problems without any dependence on parameter tuning. The new hybrid algorithm merges particle swarm optimization (PSO), the gravitational search algorithm (GSA), and the grey wolf optimizer (GWO) into one hybrid algorithm known as the HGPG algorithm. The HGPG has high control over exploration and exploitation, achieved by using the gravity law from GSA, the top three search agents from GWO, and the velocity calculation from PSO. This increases the exploitation rate and guides the exploration, producing a high convergence rate compared with other heuristic algorithms. Within the past few years, different hybrid metaheuristic algorithms have been proposed to enhance the feature selection process in human-computer interaction. In [39], a hybrid channel-ranking procedure was developed for multichannel electroencephalography-based brain-computer interface (BCI) systems using Fisher information and the objective Firefly Algorithm (FA). The authors aimed to minimize the high-dimensional features of the EEG. In [40], a new hybrid algorithm based on the Dynamic Butterfly Optimization Algorithm (DBOA) with a mutual information-based Feature Interaction Maximization (FIM) scheme was proposed to address the problems of hybrid feature selection methods. The new method, IFS-DBOIM, maximized classification accuracy on different datasets. In [41], a Multiobjective X-shaped Binary Butterfly Optimization Algorithm (MX-BBOA) was developed to select the most informative channels from BCI system signals. The new algorithm increased classification accuracy and reduced computation time. In [42], a Logistic S-shaped Binary Jaya Optimization Algorithm (LS-BJOA), which combines a logistic map with the Jaya optimization algorithm, was proposed.
The new approach aimed to alleviate the computational burden caused by many channels in extracting neural signals from the brain in the BCI systems.

A new generation of hybrid metaheuristic algorithms has emerged recently. This generation uses machine-learning strategies to make metaheuristic algorithms more efficient. In [43], the authors combined the Q-learning method, a reinforcement learning technique (reinforcement learning being a subfield of machine learning), with three classical metaheuristic algorithms to produce three new hybrid algorithms. The Q-learning method is responsible for finding the global solution and avoiding local traps by guiding the search agent with a reward and penalty system. The authors hybridized the I-GWO, Ex-GWO, and WOA with the Q-learning method to produce the RLI-GWO, RLEx-GWO, and RLWOA hybrid algorithms. The results showed that the new algorithms explore new areas more successfully and perform better in the exploration and exploitation phases. Another work that used reinforcement learning to enhance a classical metaheuristic algorithm was introduced in [14]. In this work, the sand cat swarm optimization (SCSO) algorithm is hybridized with reinforcement learning techniques to produce the RLSCSO algorithm. The RLSCSO uses reinforcement learning agents to explore the problem's search space efficiently. The results showed that the RLSCSO algorithm explores and exploits the search space better than the standard SCSO. Additionally, the RLSCSO algorithm is superior to other metaheuristic algorithms since its agent can switch between the aforementioned phases depending on the reward and penalty system.

This study makes the following significant contributions:

  1. This paper proposes a hybrid algorithm called HBASCSO that uses HBA and SCSO characteristics to improve search efficiency.

  2. The HBASCSO has the ability to transition from exploration to exploitation efficiently.

  3. Local optimum avoidance is achieved through a trade-off between the exploration and exploitation phases.

  4. The performance of the HBASCSO is evaluated on three different sets of benchmark functions: CEC 2014-2015, CEC 2017, and CEC 2019.

  5. Statistical analysis is carried out to evaluate the experimental results and compare them with other state-of-the-art algorithms.

The rest of the paper is organized as follows: Section 2 gives a background about both the honey badger algorithm (HBA) and sand cat swarm optimization (SCSO) algorithms. In Section 3, the proposed hybrid algorithm is explained, and in Section 4, we explain the performance analysis of the HBASCSO algorithm on the different sets of benchmark functions. Finally, Section 5 presents a discussion of the results and Section 6 concludes the paper.

2 Fundamentals

The purpose of this section is to describe the honey badger algorithm (HBA) and the sand cat swarm optimization (SCSO) algorithm. We discuss in depth the mathematical models of these algorithms as well as the natural behaviours that inspired them.

2.1 Honey Badger Algorithm (HBA)

The honey badger algorithm (HBA) was proposed by Hashim et al. [21]. The HBA is inspired by honey badger behaviors in nature. The honey badger is a fearless mammal with white fur found in Africa and Asia. Honey badgers weigh between 7 and 13 kg and measure 77 cm in length. Honey badgers love honey and beehives but cannot detect the hives' location. This problem is solved by following the honeyguide, a bird that can locate the hives but cannot reach the honey, which leads the badger to the beehives. Moreover, the honey badger preys on sixty species using its sense of smell. It starts by estimating the location of the prey, then moves around the prey's vicinity to find a suitable spot to catch it. After locating the correct spot, the badger begins digging and catching. The HBA mimics the badger's feeding behavior in two modes: the smell-and-dig mode, known as the digging mode, and the honeyguide mode, known as the honey mode.

The HBA algorithm starts by initializing the search space, which is formed by the representation of the candidate solutions. The search space is initialized using Eq. (1).

$$\text{candidate solutions}=\begin{bmatrix}{x}_{11}& \cdots & {x}_{1D}\\ \vdots & \ddots & \vdots \\ {x}_{n1}& \cdots & {x}_{nD}\end{bmatrix}$$
(1)

The number of honey badgers N and the position of each one are initialized. The next step is calculating the intensity (I), which relies on both the smell of the prey and the distance to it. The honey badger's speed depends on the strength of the smell. The smell intensity is calculated by the inverse square law, as defined in Eqs. (2)-(5). Here, \({r}_{1}\) is selected randomly between 0 and 1, \({x}_{i}\) is a candidate solution in the population, and \({lb}_{i}\) and \({ub}_{i}\) refer to the lower and upper bounds of the search space, respectively. \({I}_{i}\) is the prey's smell intensity, \({r}_{2}\) is selected randomly between 0 and 1, S is the prey's location (concentration strength), \({d}_{i}\) is the distance between the badger and the prey, and \({x}_{prey}\) refers to the prey's position. The third step is updating the density factor \(\left(\alpha \right)\). The density factor ensures a smooth transition between the searching (exploration) phase and the exploitation phase by controlling the time-varying randomization. It decreases with time to reduce randomization according to Eq. (6), where C is a constant equal to 2 and \({t}_{max}\) is the maximum number of iterations.

One of the important steps in metaheuristic algorithms is avoiding local optimum traps. The HBA changes the search direction by using a flag (F). This flag allows agents to discover new, not-yet-visited areas of the search space. The HBA updates positions in two phases, the digging phase and the honey phase. In the digging phase, the badger updates its position by moving in a cardioid-like path. This motion is simulated by Eq. (7), where \({x}_{new}\) refers to the badger's new position, \(F\) is the direction-altering flag, \(\beta\) represents the ability to get food, and the r values are selected randomly between 0 and 1. The \(F\) flag is calculated using Eq. (8). In the honey phase, the badger updates its position by following the guide bird; this motion is simulated by Eq. (9), where \({r}_{7}\) is selected randomly between 0 and 1 (see Fig. 1).

Fig. 1
Prey intensity and inverse square law [21]

$${x}_{i}={lb}_{i}+{r}_{1}\times \left({ub}_{i}-{lb}_{i}\right)$$
(2)
$${I}_{i}={r}_{2}\times \frac{S}{4\pi {d}_{i}^{2}}$$
(3)
$$S={\left({x}_{i}-{x}_{i+1}\right)}^{2}$$
(4)
$${d}_{i}={x}_{prey}-{x}_{i}$$
(5)
$$\alpha =C\times exp\left(\frac{-t}{{t}_{max}}\right)$$
(6)
$${x}_{new}={x}_{prey}+F\times \beta \times I\times {x}_{prey}+F\times {r}_{3}\times \alpha \times {d}_{i}\times \cos\left(2\pi {r}_{4}\right) \times \left[1-\cos\left(2\pi {r}_{5}\right)\right]$$
(7)
$$F=\begin{cases}1& \text{if } {r}_{6}\le 0.5\\ -1& \text{otherwise}\end{cases}$$
(8)
$${x}_{new}={x}_{prey}+F\times {r}_{7}\times \alpha \times {d}_{i}$$
(9)
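The digging and honey updates above can be sketched in NumPy as follows. This is a minimal reading of Eqs. (2)-(9), not the authors' reference implementation: the function name `hba_step`, the neighbour used for the concentration strength S, the scalarization of \(d_i^2\), and the default \(\beta = 6\), C = 2 are our assumptions.

```python
import numpy as np

def hba_step(X, x_prey, t, t_max, beta=6.0, C=2.0):
    """One HBA position update (sketch of Eqs. (2)-(9)).

    X      : (n, D) population of candidate solutions
    x_prey : (D,) best position found so far (the "prey")
    """
    n, D = X.shape
    alpha = C * np.exp(-t / t_max)                 # density factor, Eq. (6)
    X_new = np.empty_like(X)
    for i in range(n):
        d = x_prey - X[i]                          # distance to prey, Eq. (5)
        x_next = X[(i + 1) % n]                    # neighbour used for S (assumption)
        S = np.sum((X[i] - x_next) ** 2)           # concentration strength, Eq. (4)
        r2 = np.random.rand()
        I = r2 * S / (4 * np.pi * np.sum(d ** 2) + 1e-30)  # smell intensity, Eq. (3)
        F = 1 if np.random.rand() <= 0.5 else -1   # direction flag, Eq. (8)
        r3, r4, r5, r7 = np.random.rand(4)
        if np.random.rand() < 0.5:                 # digging phase, Eq. (7)
            X_new[i] = (x_prey + F * beta * I * x_prey
                        + F * r3 * alpha * d * np.cos(2 * np.pi * r4)
                        * (1 - np.cos(2 * np.pi * r5)))
        else:                                      # honey phase, Eq. (9)
            X_new[i] = x_prey + F * r7 * alpha * d
    return X_new
```

One such step would be called once per iteration, with `x_prey` refreshed to the best solution found so far.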

2.2 Sand cat swarm optimization (SCSO)

The sand cat swarm optimization (SCSO) algorithm by Seyyedabbasi et al. was inspired by sand cat behaviors in nature [22]. The sand cat is a felid mammal that lives in the deserts of Asia, environments known to be harsh for animals. This smart, small cat exhibits various behaviors in its daily activities, such as hunting and escaping. Despite the great similarity in appearance between a sand cat and a domestic cat, their living behaviors are very different; one of the most important differences is that sand cats do not live in groups. Sand cats also have several features that enable them to live in these harsh environments. Their fur color is close to that of the desert, which makes hiding from other animals easier. Moreover, the sand cat's paws are covered with dense fur that acts as an insulator, protecting them from high soil temperatures. Finally, the sand cat's ears are larger than those of the domestic cat. The tail of the sand cat represents half of the cat's length; its length ranges between 45 and 57 cm, with a body weight between 1 and 3.5 kg. As clawed animals, sand cats use their paws to hunt snakes, reptiles, desert rodents, small birds, and insects. During hunting, the sand cat first detects the prey by using its ears to hear low-frequency noises (below 2 kHz). It then tracks the prey until it finds the right moment to attack, or digs when the prey is underground. To imitate this behavior, the SCSO algorithm has two stages: searching and attacking.

To realize the swarm intelligence concept, the SCSO algorithm contains a swarm of sand cats. The population is an array of sand cats, and each cat (a 1D array) holds values for all variables of the problem. The definition phase creates a candidate matrix with a size equal to the number of sand cats, with variable values specified between the lower and upper boundaries, Eq. (10). The fitness of each cat is then computed by the cost function in Eq. (11). Once an iteration is done, the cat with the best cost function output is the best solution for that iteration, and the other cats move toward the best solution. However, if the best solution is not better than the previous iteration's solution, the SCSO algorithm ignores it.

$$\text{candidate solutions}=\begin{array}{c}{cat}_{1}\\ \vdots \\ {cat}_{n}\end{array}\begin{bmatrix}{x}_{11}& \cdots & {x}_{1d}\\ \vdots & \ddots & \vdots \\ {x}_{n1}& \cdots & {x}_{nd}\end{bmatrix}$$
(10)
$$Fitness=f\left(\text{Sand Cat}\right)=f\left({x}_{1},{x}_{2},\cdots ,{x}_{d}\right);\quad \forall {x}_{i}\ \left(\text{calculated for each of the } n \text{ cats}\right)$$
(11)

where f is the fitness function value for each cat in the population. The SCSO algorithm imitates the sand cat's behaviors in two phases: searching for the prey (exploration) and attacking the prey (exploitation). The search phase relies on the fact that the sand cat hears low frequencies. Each solution (cat) has a sensitivity range, and this range decreases linearly after each iteration to ensure that the cat does not move away from the prey. This range starts at 2 and decreases toward 0; the value 2 was chosen because sand cats can hear low frequencies below 2 kHz. Mathematically, the sensitivity range decreases according to Eq. (12), where \(\overrightarrow{{r}_{G}}\) is the general sensitivity range, \({S}_{M}\) is the cat's hearing level, assumed to be 2, \({iter}_{c}\) is the current iteration, and \({iter}_{Max}\) is the maximum number of iterations. This equation is flexible and adaptable; for example, the \({S}_{M}\) value can represent the agent's action speed in another problem. Moreover, the range value adapts to the iteration number: over 100 iterations, the value is greater than 1 in the first fifty iterations and less than 1 in the last fifty. Equation (13) defines \(\overrightarrow{R}\), the main parameter that controls the transition from exploration to exploitation and ensures the balance between these two phases. To avoid the local optimum problem, each cat in the population has its own sensitivity range, calculated by Eq. (14), while \(\overrightarrow{{r}_{G}}\) is the general sensitivity range that decreases linearly as mentioned before. Finally, each cat updates its position depending on its sensitivity range, its current position, and the best-candidate position, Eq. (15), where \(\overrightarrow{{Pos}_{bc}}\) and \(\overrightarrow{{Pos}_{c}}\) are the best-candidate and current positions, respectively.

The cat's sensitivity range takes a circular shape. In the attack phase, the direction of movement is determined by a random angle \(\left(\theta \right)\) on the circle. The distance between the current position \(\overrightarrow{{Pos}_{c}}\) and the best solution \(\overrightarrow{{Pos}_{b}}\), together with the other movement parameters, is calculated by Eq. (16), where \(\overrightarrow{{Pos}_{rnd}}\) represents the random position calculated by Eq. (17). The random position guides the cats away from local optimum traps. Since the movement direction is determined on a circle, each cat in the population moves in a different direction between 0° and 360° (−1 and 1 in the search space). The angle of the hunting position for each cat is determined using the roulette wheel selection algorithm. Figure 2 shows the position-updating procedure for two consecutive iterations of the SCSO algorithm.

Fig. 2
Position updating between iteration \(i\) (a) and iteration \(i+1\) (b) [22]

$$\overrightarrow{{r}_{G}}={S}_{M}-\left(\frac{{S}_{M}\times {iter}_{c}}{{iter}_{max}}\right)$$
(12)
$$\overrightarrow{R}=2\times \overrightarrow{{r}_{G}}\times rand\left(\text{0,1}\right)-\overrightarrow{{r}_{G}}$$
(13)
$$\overrightarrow{r}=\overrightarrow{{r}_{G}} \times rand(\text{0,1})$$
(14)
$$\overrightarrow{Pos}\left(t+1\right)= \overrightarrow{r}\bullet \left(\overrightarrow{{Pos}_{bc}}\left(t\right)-rand\left(\text{0,1}\right)\bullet \overrightarrow{{Pos}_{c}}(t)\right)$$
(15)
$$\overrightarrow{Pos}\left(t+1\right)=\overrightarrow{{Pos}_{b}}\left(t\right)-\overrightarrow{r}\bullet \overrightarrow{{Pos}_{rnd}}\bullet cos \left(\theta \right)$$
(16)
$$\overrightarrow{{Pos}_{rnd}}= \left|rand\left(\text{0,1}\right)\bullet \overrightarrow{{Pos}_{b}}\left(t\right)-\overrightarrow{{Pos}_{c}}(t)\right|$$
(17)
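The two SCSO phases above can be sketched in NumPy as follows. This is a minimal reading of Eqs. (12)-(17): the function name `scso_step` and the switch on \(|\overrightarrow{R}| > 1\) follow the usual SCSO description, and for brevity we draw \(\theta\) uniformly instead of via roulette wheel selection.

```python
import numpy as np

def scso_step(X, pos_best, iter_c, iter_max, S_M=2.0):
    """One SCSO position update (sketch of Eqs. (12)-(17)).

    X        : (n, D) swarm of sand cats
    pos_best : (D,) best position found so far
    """
    n, D = X.shape
    r_G = S_M - (S_M * iter_c / iter_max)          # general sensitivity range, Eq. (12)
    X_new = np.empty_like(X)
    for i in range(n):
        R = 2 * r_G * np.random.rand() - r_G       # phase-switch parameter, Eq. (13)
        r = r_G * np.random.rand()                 # per-cat sensitivity, Eq. (14)
        if abs(R) > 1:                             # search phase (exploration), Eq. (15)
            X_new[i] = r * (pos_best - np.random.rand(D) * X[i])
        else:                                      # attack phase (exploitation), Eqs. (16)-(17)
            pos_rnd = np.abs(np.random.rand(D) * pos_best - X[i])
            theta = np.random.uniform(0, 2 * np.pi)  # uniform angle (roulette wheel in the paper)
            X_new[i] = pos_best - r * pos_rnd * np.cos(theta)
    return X_new
```

Because `r_G` shrinks linearly, early iterations mostly take the exploration branch and later iterations the attack branch, which is exactly the balance the text describes.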

3 HBASCSO Algorithm

The exploration phase of an algorithm plays a very important role in optimization performance, speeding up convergence and avoiding local optima. Exploitation likewise contributes significantly to an algorithm's performance. Hybrid algorithms are commonly used to combine the advantages of metaheuristic algorithms while compensating for their disadvantages. This study hybridizes two algorithms: the honey badger algorithm (HBA) and the sand cat swarm optimization (SCSO) algorithm. The previous section provided in-depth descriptions of these algorithms. Both are inspired by animal behavior in nature, and each has strong abilities to find the optimal solution, but both have limitations. Furthermore, the no free lunch (NFL) theorem [44] indicates that no algorithm can solve all optimization problems. Both of these algorithms are simple to implement and reasonable in terms of cost and time complexity. Due to these characteristics, they are able to find the optimal solution in a reasonable amount of time.

As a first step, the search space is filled with randomly generated solutions drawn from a uniform distribution, representing both sand cats and honey badgers. Then, the search-space boundary is enforced by checking the population: if search agents fall outside the boundary, they are amended. A fitness function is then calculated for each agent; satisfying the fitness function is imperative to ensure that the solutions are feasible and optimal. A parameter called \(a\), a random value between 0 and 1, controls the switch between digging and attacking. When \(a\) is smaller than or equal to 0.5, the HBASCSO algorithm uses Eq. (15.a), which is the digging phase; this equation is similar to a cardioid motion [19]. Otherwise, the HBASCSO algorithm uses Eq. (15.b) to attack the prey. In this equation, cos(\(\beta\)) is used, where \(\beta\) represents the ability of the search agents to attack the food. The pseudocode and flowchart are given in Algorithm 1 and Fig. 3.

Fig. 3
The flowchart of the proposed algorithm

$$\overrightarrow{X}\left(t+1\right)=\begin{cases}\overrightarrow{{Pos}_{c}}\left(t\right)+F\times \beta \times I\times \overrightarrow{{Pos}_{c}}\left(t\right)+F\times {r}_{3}\times \alpha \times {d}_{i}\times \cos\left(2\pi {r}_{4}\right)\times \left[1-\cos\left(2\pi {r}_{5}\right)\right]& a\le 0.5\quad (15.\text{a})\\ \overrightarrow{r}\cdot \left(\overrightarrow{{Pos}_{bc}}\left(t\right)-\cos\left(\beta \right)\cdot \overrightarrow{{Pos}_{c}}\left(t\right)\right)& a>0.5\quad (15.\text{b})\end{cases}$$

Algorithm 1. Proposed hybrid optimization algorithm pseudocode
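The switching rule of the proposed algorithm can be sketched as follows. This is our minimal NumPy reading of Eqs. (15.a) and (15.b), not the authors' reference implementation: the function name `hbascso_step`, the neighbour used for the concentration strength, and the defaults \(\beta = 6\), C = 2, \(S_M = 2\) are assumptions carried over from the component algorithms.

```python
import numpy as np

def hbascso_step(X, pos_best, t, t_max, beta=6.0, C=2.0, S_M=2.0):
    """One HBASCSO update, switching between digging (15.a) and attack (15.b)."""
    n, D = X.shape
    alpha = C * np.exp(-t / t_max)                 # HBA density factor
    r_G = S_M - (S_M * t / t_max)                  # SCSO general sensitivity range
    X_new = np.empty_like(X)
    for i in range(n):
        a = np.random.rand()                       # switch parameter
        if a <= 0.5:                               # digging phase, Eq. (15.a)
            d = pos_best - X[i]                    # distance to the best solution
            x_next = X[(i + 1) % n]                # neighbour for S (assumption)
            S = np.sum((X[i] - x_next) ** 2)
            I = np.random.rand() * S / (4 * np.pi * np.sum(d ** 2) + 1e-30)
            F = 1 if np.random.rand() <= 0.5 else -1
            r3, r4, r5 = np.random.rand(3)
            X_new[i] = (X[i] + F * beta * I * X[i]
                        + F * r3 * alpha * d * np.cos(2 * np.pi * r4)
                        * (1 - np.cos(2 * np.pi * r5)))
        else:                                      # attack phase, Eq. (15.b)
            r = r_G * np.random.rand()             # per-cat sensitivity
            X_new[i] = r * (pos_best - np.cos(beta) * X[i])
    return X_new
```

Note the single random draw `a` realizes the 50/50 split between the HBA-derived digging motion and the SCSO-derived attack motion described above.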

4 Result and Analysis

The purpose of this section is to evaluate the performance of the HBASCSO algorithm using benchmark functions. The proposed HBASCSO algorithm is compared with seventeen popular algorithms: the honey badger algorithm (HBA) [21], sand cat swarm optimization (SCSO) [22], grey wolf optimizer (GWO) [45], whale optimization algorithm (WOA) [17], Harris hawks optimization (HHO) [20], sine cosine algorithm (SCA) [46], particle swarm optimization (PSO) [47], salp swarm algorithm (SSA) [48], gravitational search algorithm (GSA) [49], Fick's law algorithm (FLA) [50], Henry gas solubility optimization (HGS) [51], moth-flame optimization (MFO) [52], bonobo optimizer (BO) [53], artificial ecosystem-based optimization (AEO) [54], multi-verse optimizer (MVO) [55], seagull optimization algorithm (SOA) [56], and slime mould algorithm (SMA) [57].

This study utilizes a greater number of metaheuristic algorithms than is usually the case. Different metaheuristic algorithms were selected to analyze the performance and variety of the proposed algorithm on each group of benchmark functions. The benchmark functions used in this study are those from CEC2014-2015 [58, 59], CEC2017 [60], and CEC2019 [61]. Three sets of benchmark functions were used to increase the accuracy of the analysis. All experiments are conducted in the same environment, and all algorithms are simulated under the same settings, using 30 independent runs with 30 search agents and 500 iterations. Independent runs must be conducted to monitor the effects of the random parameters. The parameter values of each metaheuristic algorithm are presented in Table 1.

Table 1 Algorithm parameter settings for the compared algorithms
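The experimental protocol above (independent runs, fixed agent count and iteration budget, mean/best/worst/standard deviation statistics) can be reproduced with a small harness. The `optimizer` callback signature and the boundary-clipping step here are our assumptions, sketched for illustration:

```python
import numpy as np

def evaluate(optimizer, fitness, runs=30, agents=30, iters=500,
             dim=10, lb=-100.0, ub=100.0, seed=0):
    """Run `optimizer` independently `runs` times and report the statistics
    tabulated in this study (mean, best, worst, standard deviation).

    optimizer(X, t) -> new population; fitness(x) -> scalar cost.
    """
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(runs):
        X = rng.uniform(lb, ub, (agents, dim))     # fresh random population per run
        best = min(fitness(x) for x in X)
        for t in range(iters):
            X = optimizer(X, t)                    # one iteration of the algorithm
            X = np.clip(X, lb, ub)                 # amend agents outside the boundary
            best = min(best, min(fitness(x) for x in X))
        finals.append(best)
    finals = np.asarray(finals)
    return {"mean": finals.mean(), "best": finals.min(),
            "worst": finals.max(), "std": finals.std()}
```

Each row of the result tables then corresponds to one `evaluate` call per algorithm per benchmark function.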

Using benchmark functions, metaheuristic algorithms can be evaluated for their effectiveness and efficiency. The CEC 2014 and 2015 benchmark functions are presented in Table 2. This group consists of three types of functions: unimodal, multimodal, and fixed-dimension multimodal. A unimodal benchmark function has only one global optimum (maximum or minimum). A multimodal function has both a global and local optima, as the name implies. Unlike the other two categories, the dimension of a fixed-dimension multimodal function cannot be changed. Table 3 presents the second set of benchmark functions; metaheuristic algorithms are measured using this type of benchmark in the Congress on Evolutionary Computation [60]. The benchmark functions in this set are more challenging and fall into four groups: unimodal, simple multimodal, hybrid, and composition functions. The third set of benchmark functions (CEC-C06 2019), listed in Table 4, tests the algorithm's ability to handle large-scale optimization problems.

Table 2 Benchmark functions (CEC14, 15)
Table 3 Review of CEC2017 benchmark function problems
Table 4 Modern 10 benchmark test functions from CEC2019 (CEC-C06)
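As a concrete illustration of the first two categories, the classical sphere and Rastrigin functions are standard unimodal and multimodal examples; the CEC variants additionally shift and rotate such base functions, so these definitions are illustrative rather than the exact benchmark code.

```python
import numpy as np

def sphere(x):
    """Unimodal: a single global optimum at the origin, f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal: one global optimum at the origin plus many local optima."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

The dense grid of local optima in the Rastrigin landscape is what makes exploration (escaping local basins) as important as exploitation (refining the best basin).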

4.1 Result Analysis for the Benchmark Functions CEC2014-2015

The analysis of each algorithm on the different sets of benchmark functions is presented in Tables 5, 6, 7 and 8. An overview of the results, including the average, worst, best, and standard deviation, can be found in the corresponding tables. As mentioned before, this paper uses three different sets of benchmark functions as well as some recently proposed metaheuristic algorithms to compare and evaluate the proposed algorithm. Table 5 presents the results for the first set of benchmark functions. The results obtained for the unimodal functions show that the HBASCSO algorithm performs well on functions F1, F2, F3, F4, and F7. In this type of benchmark function, there is only one optimal solution. The HHO algorithm performs better than the others on functions F5 and F6. Table 6 illustrates the results for the multimodal functions. The results obtained for functions F8 to F13 demonstrate that the HBASCSO algorithm provides optimal results for functions F9, F10, and F11. While the HBA algorithm's results for those functions matched the proposed algorithm's, the SCSO algorithm's did not. Hence, the proposed hybrid algorithm improves on the HBA and SCSO algorithms. Hybrid metaheuristic algorithms are used to capitalize on the advantages of their constituent algorithms, and it can be observed that the HBA and SCSO algorithms explore and exploit efficiently when hybridized.

Table 5 Results for F1-F7 Algorithm (CEC2014-2015)
Table 6 Results for F8-F13 Algorithm (CEC2014-2015)

For functions F14-F23, which are fixed-dimension functions, the obtained results are presented in Table 7. As the table shows, the HBASCSO finds the global optimum for F15, F16, F17, F18, F19, and F20. Most metaheuristic algorithms can find the global optimum of the fixed-dimension benchmark functions, but for function F20, the proposed algorithm's mean result is better than the others. Besides, for functions F21 and F23, the GWO algorithm always finds the global optimum. Table 8 ranks all algorithms statistically based on the mean value of each algorithm. Figure 4 shows the most successful optimization algorithm based on the total rank summary of all optimization algorithms.

Table 7 Results for F14-F23 Algorithm (CEC2014-2015)
Table 8 The rank summary for each metaheuristic algorithm on benchmark functions (CEC2014-2015)
Fig. 4
Total obtained rank for each metaheuristic algorithm

4.2 Result Analysis for the Benchmark Functions CEC2017

To perform the numerical validation analysis, 29 benchmark functions from CEC2017 were used. These functions are widely used to evaluate the performance of metaheuristic algorithms. The CEC2017 functions fall into four types: unimodal (F1 and F3), simple multimodal (F4-F10), hybrid (F11-F20), and composition (F21-F30). Table 3 presents the specifications of these functions. These benchmarks were used to evaluate the HBASCSO against algorithms such as HBA, SCSO, SSA, GSA, FLA, HGS, and MFO. Table 9 summarizes the experimental results, including the average, worst, best, and standard deviation.

Table 9 Results for F1-F30 Algorithms D = 10 (CEC17)

In terms of average results, the HBASCSO is superior on these benchmarks. Analysis of the average ranking values of the algorithms involved shows that the HBASCSO obtains the optimum results on most benchmark functions. A comprehensive analysis of the HBASCSO algorithm was conducted, along with an examination of its exploration and exploitation capabilities on the CEC2017 test functions. A better balance between exploration and exploitation is possible after the hybridization of two metaheuristic algorithms. The hybrid algorithm benefits from the main advantages of both the HBA and SCSO algorithms, each of which has operators to control the exploration-exploitation trade-off. During exploration and exploitation, all HBASCSO search agents maintain their characteristics and activity, which allows for efficient optimization of the search area. Table 10 summarizes the ranking results for the HBASCSO algorithm and the other algorithms. According to this table, the algorithm with the lowest overall ranking is the one that finds values closest to the global optimum. As eight algorithms are compared, the algorithm with the lowest ranking finds results that are very close to the optimum, while the algorithm with the highest value finds the worst results.

Table 10 The rank summary for CEC2017

4.3 Result Analysis for the Benchmark Functions CEC2019

The CEC2019 benchmark functions are examined using the HBASCSO algorithm, and its results are compared with those of other well-known metaheuristics. This benchmark test set is referred to as CEC-C06, also known as "The 100-digit challenge" [63]. The 10 modern benchmark test functions are listed in Table 4. These functions are used for evaluating metaheuristic algorithms on large-scale optimization problems. As Table 4 shows, the first three functions have different dimensions. Functions 4 to 10 are shifted and rotated within [−100, 100] to simulate a minimization problem in 10-dimensional space. All of the functions in CEC2019 have a global optimum value of 1, and all of the functions are scalable.

As shown in Table 11, the HBASCSO algorithm performs well on the CEC01, CEC02, CEC03, and CEC10 functions. These functions are designed to analyze the exploration and exploitation capabilities of metaheuristic algorithms. Table 12 shows that the HBASCSO algorithm is ranked first in the rank summary and is competitive for optimum values. The SCSO and SMA algorithms also share first place in the overall rank summary. Specifically, the SCSO algorithm achieved optimal results on CEC04 and CEC05, while the SMA algorithm obtained optimal results on CEC02, CEC07, and CEC09. A comparison with newly proposed algorithms shows that the HBASCSO algorithm is very competitive. At the same time, the trade-off between the exploration phase and the exploitation phase is clearly demonstrated.

Table 11 Results for F1-F10 Algorithms (CEC19)
Table 12 The rank summary for CEC2019

5 Discussion

The numerical results show that the HBASCSO algorithm performs better than many other metaheuristic and hybrid metaheuristic algorithms, including HBA, SCSO, GWO, WOA, HHO, SCA, PSO, SSA, GSA, FLA, HGS, MFO, BO, AEO, MVO, SOA, and SMA. In the first part of the performance analysis, 23 test functions are used to compare the proposed algorithm with HBA, SCSO, GWO, WOA, HHO, SCA, and PSO. Table 8 presents the rank summary of the proposed algorithm and these algorithms on the CEC 2014–2015 benchmark functions. The proposed algorithm took first place in the total result, while the SCSO algorithm ranked second. The second comparison contrasts the proposed algorithm with the HBA, SCSO, SSA, GSA, FLA, HGS, and MFO algorithms on the CEC2017 test functions (F1–F30). The mean, worst, best, and standard deviation metrics are used to compare the proposed algorithm with the other optimization algorithms; Table 10 summarizes the results. The HBASCSO algorithm took first place in this table, with the SCSO a close second. The last part of the analysis compares the proposed algorithm with the HBA, SCSO, BO, AEO, MVO, SOA, and SMA on the CEC2019 benchmark functions. In this comparison, the proposed algorithm, the SCSO, and the SMA all perform strongly on many benchmark functions and, as summarized in Table 12, share first place in total performance. The analysis in the previous section demonstrated the effectiveness of the HBASCSO in comparison with these optimization algorithms. The advantages and disadvantages of the HBASCSO can be summarized as follows:

  • To improve performance, the HBASCSO maintains a trade-off between the exploration and exploitation phases; this balance is a direct result of hybridizing the HBA and SCSO algorithms.

  • Considering disturbances and uncertainties is crucial for designing robust optimization algorithms that fit better into real-world systems.

  • An analysis of the mean, worst, best, and standard deviation values of the obtained results shows that the HBASCSO algorithm consistently gets as close as possible to the optimal solution; consequently, there is no significant difference among the mean, worst, and best results.
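The four comparison metrics used throughout (mean, worst, best, and standard deviation over independent runs) can be computed as sketched below. The run values are illustrative placeholders, not the paper's 30-run results.

```python
import statistics

# The four metrics used in the comparisons above, computed over the final
# fitness values of independent runs. For a minimization problem, "best"
# is the minimum and "worst" the maximum final fitness.

def run_metrics(final_fitness):
    return {
        "mean": statistics.mean(final_fitness),
        "best": min(final_fitness),
        "worst": max(final_fitness),
        "std": statistics.stdev(final_fitness),  # sample standard deviation
    }

runs = [0.12, 0.10, 0.15, 0.11, 0.12]  # illustrative final fitness values
metrics = run_metrics(runs)
```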

5.1 Wilcoxon Rank Sum Test Analysis

The Wilcoxon signed-rank test was developed by Wilcoxon and his colleagues (1970) as a statistical procedure based solely on the order of the observations in the sample [62]. In this case, the algorithm with the lowest ranking is determined to be the best. In this section, the Wilcoxon rank-sum test is carried out at a significance level of 5%. Table 13 presents the p-values calculated by the nonparametric Wilcoxon rank-sum test for the pair-wise comparison of two independent samples (HBASCSO vs. HBA, SCSO, GWO, WOA, HHO, SCA, and PSO) on CEC2017. Tables 14 and 15 present the corresponding p-values for the pair-wise comparisons (HBASCSO vs. HBA, SCSO, GWO, WOA, HHO, SCA, PSO, BO, AEO, MVO, SOA, and SMA) on CEC2019. The p-values are generated by the Wilcoxon test at the 0.05 significance level over 30 independent runs.
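The pair-wise rank-sum comparison above can be sketched as follows. This is a minimal two-sided rank-sum test using the normal approximation without tie correction; in practice a library routine such as `scipy.stats.ranksums` would be used, and the two samples below are illustrative, not the paper's 30-run results.

```python
import math

# Minimal sketch of the two-sided Wilcoxon rank-sum test used in the
# pair-wise comparisons (normal approximation, no tie correction).

def rank_sum_test(sample_a, sample_b):
    combined = sorted((v, i) for i, v in enumerate(sample_a + sample_b))
    ranks = {}
    pos = 0
    while pos < len(combined):
        end = pos
        # Group tied values so they receive their average rank.
        while end + 1 < len(combined) and combined[end + 1][0] == combined[pos][0]:
            end += 1
        avg_rank = (pos + end) / 2 + 1  # ranks are 1-based
        for k in range(pos, end + 1):
            ranks[combined[k][1]] = avg_rank
        pos = end + 1
    n1, n2 = len(sample_a), len(sample_b)
    w = sum(ranks[i] for i in range(n1))            # rank sum of sample_a
    mean = n1 * (n1 + n2 + 1) / 2                   # expected rank sum under H0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mean) / sd
    # Two-sided p-value from the standard normal distribution.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p = rank_sum_test([0.10, 0.20, 0.15, 0.12], [0.90, 1.10, 0.95, 1.20])
significant = p < 0.05  # reject the null hypothesis at the 5% level
```

A p-value below 0.05 indicates that the difference between the two algorithms' result distributions is statistically significant, which is how the entries in Tables 13–15 are interpreted.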

Table 13 P-values at α = 0.05 by Wilcoxon test for CEC2017 with HBASCSO
Table 14 P-values at α = 0.05 by Wilcoxon test for CEC2019 with HBASCSO
Table 15 P-values at α = 0.05 by Wilcoxon test for CEC2019 with HBASCSO

5.2 Computational Complexity

A fundamental metric for assessing algorithmic performance is time complexity, expressed here in big-O notation. This paper provides a detailed assessment of the complexity of the HBA, SCSO, and HBASCSO. The computational complexity of the algorithm can be divided into three primary parts: initialization, fitness evaluation, and the population update method. The HBA, SCSO, and HBASCSO algorithms initialize the positions of the search agents in O(N × D) time, where N is the number of search agents and D is the problem's dimensionality. The overall computational cost of the HBASCSO is therefore proportional to O(N × D × Max_iter) for a total of Max_iter iterations. Assuming N and D are of the same order, the general computational complexity of the HBASCSO algorithm is O(N²) per iteration.
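The O(N × D × Max_iter) cost can be illustrated by counting the elementary position updates in a generic population-based loop of the kind HBASCSO uses; the loop below is a counting sketch, not the algorithm itself.

```python
# Counting sketch for the complexity argument above: each of Max_iter
# iterations updates N agents in D dimensions, so the number of
# elementary position updates is exactly N * D * Max_iter.

def update_count(n_agents, dim, max_iter):
    count = 0
    for _ in range(max_iter):          # main optimization loop
        for _ in range(n_agents):      # every search agent is updated
            for _ in range(dim):       # each coordinate of its position
                count += 1
    return count

total = update_count(30, 10, 500)  # typical N = 30, D = 10, Max_iter = 500
```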

5.3 Examination of the Convergence Curve

The HBASCSO algorithm exhibits a distinctive convergence behaviour. To avoid local optima, exploration and exploitation should be balanced, and the control parameters are effective in managing this balance. Additionally, hybridizing the two metaheuristic algorithms prevents premature convergence. In the early steps of optimization, which involve exploring the search space, sudden changes in the search agents' movement are necessary to identify the most promising regions of the search space. Once the exploitation phase begins, the search agents converge toward a promising solution and refine it in a more localized manner.

The obtained results clearly show abrupt changes in the movement of the search agents during the initial iterations, while the movement in the final iterations ideally decreases; such movements are considered essential [64]. The convergence curve of the HBASCSO is shown in Fig. 5. The behaviour of the curve on functions F1, F2, F3, F4, and F7 indicates that the proposed algorithm exhibits a typical convergence pattern. It is also clear that the HBASCSO algorithm balances the exploration and exploitation phases, and through hybridization the HBA and SCSO demonstrate efficient exploration and exploitation capabilities.
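A convergence curve like those in Fig. 5 plots the best fitness found so far against the iteration number. The bookkeeping can be sketched with a toy random-search optimizer on the sphere function; the optimizer here is an illustrative stand-in for HBASCSO, used only to show how the curve is recorded.

```python
import random

# Sketch of recording a convergence curve: after every iteration the
# best-so-far fitness is appended to a history list, which is later
# plotted against the iteration index.

def sphere(x):
    return sum(v * v for v in x)

def convergence_curve(max_iter=50, dim=3, seed=1):
    rng = random.Random(seed)      # fixed seed for reproducibility
    best = float("inf")
    history = []
    for _ in range(max_iter):
        candidate = [rng.uniform(-5, 5) for _ in range(dim)]
        best = min(best, sphere(candidate))  # keep the best-so-far fitness
        history.append(best)
    return history

curve = convergence_curve()
# By construction the curve is non-increasing: large improvements appear
# early (exploration) and progressively smaller ones later (exploitation).
```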

Fig. 5

The convergence curve for some benchmark functions

6 Conclusion

In this study, a new hybrid metaheuristic algorithm based on the Honey Badger Algorithm (HBA) and the Sand Cat Swarm Optimization (SCSO) algorithm has been developed. The HBASCSO algorithm aims to improve the performance of the original HBA and SCSO algorithms by covering the weaknesses of each. One of these weaknesses is the poor performance of the HBA in the exploitation phase; the SCSO, by contrast, is a very competitive algorithm whose exploitation-phase performance has outperformed many other algorithms. The results obtained on the well-known CEC2015, CEC2017, and CEC2019 benchmark functions show that the HBASCSO algorithm has a smooth position-updating mechanism. For the CEC2015, the HBASCSO algorithm was compared with seven well-known metaheuristic algorithms (HBA, SCSO, GWO, WOA, HHO, SCA, and PSO). According to the rank summary, the HBASCSO ranked first in 14 functions, making it the best among all seven algorithms. For the CEC2017, the HBASCSO algorithm was compared with several newer well-known metaheuristic algorithms (SSA, GSA, FLA, HGS, and MFO); the proposed algorithm outperformed the others on 9 of the 30 test functions and ranked first in total. For the CEC2019, the HBASCSO algorithm was compared with the HBA, SCSO, BO, AEO, MVO, SOA, and SMA algorithms. This benchmark includes 10 functions, and the proposed algorithm ranked first in 3 of them alongside the SCSO and SMA. The performance of the proposed algorithm exceeds that of the other metaheuristic algorithms and demonstrates its utility in solving many engineering and real-world problems.

A few directions planned for future work are listed below.

  • The proposed algorithm can be extended to solve multi-objective problems in concurrent or parallel systems.

  • The proposed algorithm can be used to define optimized fitness functions for artificial neural networks.

  • Feedback controller design for nonlinear systems can benefit from the proposed algorithm.

  • Bioinformatics applications can be analyzed using these algorithms to determine the best method for extracting and filtering features.

  • The HBASCSO can be effectively applied to real-world application problems, including feature selection and robot path planning.