Abstract
Harris hawks optimization (HHO) is a recent meta-heuristic algorithm that builds a model by imitating the predation process of Harris hawks. Basic HHO converges slowly in the exploration stage, because its position update formula relies on a uniform random choice, and tends to fall into local optima in the later stage because of insufficient population diversity. To solve these problems, a Harris hawks optimization based on global cross-variation and tent mapping (CRTHHO) is proposed in this paper. Firstly, tent mapping is introduced in the exploration stage to optimize the random parameter q and speed up early convergence. Secondly, a crossover mutation operator is introduced to cross and mutate the global optimal position in each iteration, and a greedy strategy is used for selection, which prevents the algorithm from skipping the optimal solution and falling into a local optimum, and improves convergence accuracy. To investigate the performance of CRTHHO, experiments are carried out on ten benchmark functions and the CEC2017 test set. Experimental results show that CRTHHO performs better than HHO and is competitive with five advanced meta-heuristic algorithms.
1 Introduction
Optimization refers to searching for the optimal solution among all possible solutions for a particular problem [1]. In scientific computing, medical testing, financial analysis, and other fields, optimization problems exist in large numbers and are becoming increasingly complex [2,3,4,5]. Traditional optimization methods include gradient descent, Newton's method, quasi-Newton methods, conjugate gradient, and the Lagrange multiplier method for constrained optimization problems. Although traditional methods have the advantages of relatively complete theory and low computational cost, they show obvious limitations when applied to complex and challenging optimization problems. Compared with conventional methods, meta-heuristic algorithms are simple to operate, highly flexible, and require no gradient information, which gives them an advantage in solving large-scale and complicated optimization problems [6, 7].
Meta-heuristic algorithms can be divided into two categories according to their inspiration mechanisms. The first category comprises algorithms based on biological evolution and physical laws in nature, such as the genetic algorithm (GA) [8], the differential evolution algorithm (DE) [9], multi-verse optimization (MVO) [10], and atom search optimization (ASO) [11]. The second category comprises algorithms based on the social behaviors of living organisms, such as particle swarm optimization (PSO) [12] and the artificial bee colony algorithm (ABC) [13]; these are also known as swarm intelligence optimization algorithms. In recent years, scholars have proposed many novel meta-heuristic algorithms, such as biogeography-based optimization (BBO) [14], the dragonfly algorithm (DA) [15], the sine–cosine algorithm (SCA) [16], moth flame optimization (MFO) [17], the arithmetic optimization algorithm (AOA) [18], weighted mean of vectors (INFO) [19], the coronavirus optimization algorithm (COVIDOA) [20], the Runge–Kutta optimizer (RUN) [21], and the artificial hummingbird algorithm (AHA) [22]. These novel optimization algorithms have been applied to many problems, such as fault diagnosis and feature selection. In 2019, Heidari et al. [23] proposed a new swarm intelligence optimization algorithm, Harris hawks optimization (HHO). The algorithm simulates the hunting behavior of Harris hawks, which are regarded as among the most intelligent birds. It integrates the concepts of a population center, random population division, and escape energy, and solves most continuous unconstrained optimization problems well [24]. Moreover, Harris hawks optimization has few parameters to adjust and is easy to operate and implement, so it attracted extensive attention as soon as it was proposed. As an excellent new swarm intelligence optimization algorithm, HHO has been used to solve optimization problems in many fields. For example, Abbasi et al.
[25] applied the HHO algorithm to minimizing the entropy generation of a microchannel, with superior performance. Mali Satya et al. [26] used Harris hawks optimization to develop a maximum power point tracking (MPPT) technique for photovoltaic (PV) powered e-vehicles. Mashaleh et al. [27] used the HHO algorithm to improve a machine learning algorithm for spam detection. In addition, HHO and its improved variants have achieved good results in image recognition [28], support vector machine optimization [29], and neural network optimization [30] in the computer field, as well as in photovoltaic system modeling [31], constrained engineering optimization [32,33,34], rainfall modeling [35], and other engineering problems.
However, the HHO algorithm still has some problems: its convergence rate is not fast enough in the exploration stage, and it easily falls into local optima in the later stage. In view of these problems, scholars have proposed many improvements to the HHO algorithm. Song Shiming et al. [36] introduced the Gaussian mutation strategy and the dimension decision strategy of cuckoo search into the HHO algorithm, which helps to fully mine the solutions in the search area. Li Chenyang et al. [37] proposed an exploration strategy based on a logarithmic spiral and opposition-based learning, and integrated it into the HHO algorithm to improve its exploration ability. Shijie Zhao et al. [38] proposed an optimized HHO algorithm combining a regulation mechanism of periodic energy decline of the prey with Newton's local enhancement strategy, which enhanced the exploitation ability of the algorithm. Yuxin Guo et al. [39] introduced an elite reverse learning mechanism and the golden sine algorithm to improve population diversity and reduce convergence time. Xiaolong Liu et al. [40] balanced exploration and exploitation through a multi-subgroup square neighborhood topology and a fixed permutation probability. Qian Fan et al. [41] proposed a new quasi-reflection Harris hawks optimization (QRHHO), which combines the HHO algorithm with a quasi-reflection-based learning mechanism (QRBL) to improve optimization accuracy. Hussain et al. [42] introduced long-term memory into the HHO algorithm, referring to past individual location information to increase population diversity during the search. Yiming Ma et al. [43] used maximum likelihood estimation to improve the algorithm's fitness function and raise the solving accuracy.
Although these improved algorithms can improve exploration ability or avoid premature convergence to some extent, according to the No Free Lunch theorem, no algorithm can achieve the best effect on all optimization problems. Therefore, given the difficulties of slow early convergence and local optimal stagnation in the HHO algorithm, it is still necessary to continue studying and improving its strategies. In this work, two strategies are added to the Harris hawks optimization algorithm to better balance exploration and exploitation. Tent mapping is used to adjust the value of a random parameter, so that the ergodicity and regularity of chaotic mapping can reduce unnecessary consumption during the hawks' exploration and thereby accelerate convergence. A global cross-mutation strategy is added to make full use of the information of the hawk population, avoid skipping the optimal solution, and enhance the local exploration ability of the population in the later stage. By modifying the search equations of HHO with these two strategies, the search mechanism of HHO is improved and exploration and exploitation are better balanced. This paper proposes a Harris hawks optimization based on global cross-variation and tent mapping (CRTHHO), and its performance is tested on ten benchmark functions and the CEC2017 test set. The algorithm's improvement strategies and performance tests are discussed in detail in Sects. 3 and 4.
2 Harris hawks optimization (HHO)
Harris hawks optimization (HHO) is a novel meta-heuristic algorithm, proposed by Heidari in 2019, for solving complicated optimization problems. It is inspired by the hunting and attacking behavior of Harris hawks, and its typical feature is swarm intelligence. The algorithm consists of two phases, exploration and exploitation, each of which mimics a different behavior of the hawks during predation. The parameter E represents the escape energy of the prey, which decreases over time, and is calculated as follows:

\(E = 2{E_0}\left( {1 - \frac{t}{T}} \right)\)
where \({E_0} = 2{r_1} - 1\) represents the initial state of energy, \({r_1}\) is a random number in (0, 1), and \({E_0}\) varies randomly in \((-1,1)\) during each iteration. T is the maximum iteration number, and t is the current iteration number.
The Harris hawks choose different capture strategies depending on the escape energy E of the prey. When \(|E |> 1\), Harris hawks are in the exploration phase. Otherwise, they are in the exploitation phase.
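As a minimal illustration, the escape-energy schedule can be sketched in Python (the function name and structure are ours, not taken from the original implementation):

```python
import random

def escape_energy(t, T):
    """Escape energy E = 2*E0*(1 - t/T), where the initial energy
    E0 = 2*r1 - 1 is redrawn from (-1, 1) at every iteration."""
    e0 = 2 * random.random() - 1          # E0 in (-1, 1)
    return 2 * e0 * (1 - t / T)

# |E| shrinks on average as t grows, moving the swarm from
# exploration (|E| >= 1) toward exploitation (|E| < 1).
E = escape_energy(0, 500)
```

Note that at t = T the energy is exactly zero, so the final iterations are always spent in the exploitation phase.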
2.1 Exploration phase
During the exploration phase, Harris hawks search for prey using two strategies, updating their position based either on a randomly selected flock member or on the prey and the flock's average position. Its mathematical model is as follows:

\(x(t + 1) = \begin{cases} {x_{rand}}(t) - {r_1}\left| {{x_{rand}}(t) - 2{r_2}x(t)} \right|, & q \ge 0.5 \\ \left( {{x_{prey}}(t) - {x_m}(t)} \right) - {r_3}\left( {LB + {r_4}(UB - LB)} \right), & q < 0.5 \end{cases}\)
where t represents the current iteration number, T represents the maximum iteration number, N represents the population size, \({r_1},{r_2},{r_3},{r_4}\), and q are random numbers in (0, 1), updated in each iteration, UB and LB are the upper and lower bounds of the search space, \(x(t+1)\) is the position vector of a hawk in the next iteration, \({x_{prey}}(t)\) represents the current optimal position (the position of the prey), x(t) is the current position vector of the hawk, \({x_{rand}}(t)\) is a hawk selected randomly from the current population, and \({x_m}(t)\) represents the average position of the current population of hawks, calculated by the following formula:

\({x_m}(t) = \frac{1}{N}\sum\limits_{i = 1}^N {{x_i}(t)} \)
where \({x_i}(t)\) indicates the location of each hawk in iteration t.
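The exploration update for a single hawk can be sketched as follows (plain Python lists; the variable names are ours):

```python
import random

def explore(x, x_rand, x_prey, x_m, lb, ub):
    """One exploration-phase update (Eq. 2) for a single hawk.
    All positions are lists of length D; lb and ub are scalar bounds."""
    q = random.random()
    r1, r2, r3, r4 = (random.random() for _ in range(4))
    if q >= 0.5:
        # perch based on a randomly selected hawk
        return [xr - r1 * abs(xr - 2 * r2 * xi)
                for xr, xi in zip(x_rand, x)]
    # perch based on the prey and the flock's average position
    return [(xp - xm) - r3 * (lb + r4 * (ub - lb))
            for xp, xm in zip(x_prey, x_m)]
```

The parameter q decides between the two strategies with equal probability in basic HHO; Sect. 3.1 replaces this uniform draw with a tent-map sequence.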
2.2 Exploitation phase
In the game of chasing and escaping between the hawks and their prey, the escape energy of the prey decreases gradually. When \({|{E}|<1}\), the Harris hawks enter the exploitation phase, in which they further pursue the prey and try to capture it. HHO mimics this predatory behavior with four strategies: soft besiege, hard besiege, soft besiege with progressive rapid dives, and hard besiege with progressive rapid dives. HHO selects which strategy to use through the escape energy E and the escape probability r.
When \(|E| \ge 0.5\) and \(r \ge 0.5\), the hawks carry out the soft besiege. In this stage, the prey still has the energy to escape, and the hawks use a soft encircling strategy to deplete it before launching a surprise attack. This behavior is modeled by the following rules:

\(x(t + 1) = \Delta x(t) - E\left| {J{x_{prey}}(t) - x(t)} \right|\)

\(\Delta x(t) = {x_{prey}}(t) - x(t)\)

\(J = 2(1 - {r_5})\)
where \(\Delta x(t)\) represents the difference between the prey position and the current position of the Harris hawks in iteration t, \({r_5}\) is a random number between (0, 1), and J represents the random jump strength of the rabbit throughout the process of escaping. The J value changes randomly in each iteration to simulate the nature of rabbit motions.
When \(|E| < 0.5\) and \(r \ge 0.5\), the Harris hawks carry out the hard besiege. In this stage, the prey has no energy to escape, and the hawks use a hard encircling to hunt the prey for a final assault. The current position is updated as follows:

\(x(t + 1) = {x_{prey}}(t) - E\left| {\Delta x(t)} \right|\)
When \(|E| \ge 0.5\) and \(r < 0.5\), the Harris hawks carry out the soft besiege with progressive rapid dives. In this stage, the prey has enough energy to escape, and the hawks gradually dive and softly surround the target. In order to simulate the escape pattern and jumping actions of the prey, the Levy flight function is added to the position update process. This process is modeled as follows:

\(Y = {x_{prey}}(t) - E\left| {J{x_{prey}}(t) - x(t)} \right|\)

\(Z = Y + S \times Levy(D)\)

\(x(t + 1) = \begin{cases} Y, & F(Y) < F(x(t)) \\ Z, & F(Z) < F(x(t)) \end{cases}\)
where D is the problem dimension and S is the D-dimensional random row vector.
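The Levy step can be generated with the standard Mantegna scheme, which is what the original HHO paper uses (sketch only; the 0.01 scale factor follows that convention):

```python
import math
import random

def levy(dim, beta=1.5):
    """D-dimensional Levy flight step (Mantegna scheme) used in the
    rapid-dive strategies; HHO scales the step by 0.01."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
             ) ** (1 / beta)
    return [0.01 * random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)
            for _ in range(dim)]
```

The heavy-tailed steps occasionally produce long jumps, which is what lets the diving hawks probe positions far from the current encirclement.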
When \(|E |< 0.5,r < 0.5\), carry out the hard besiege with progressive rapid dive. In this stage, the prey is exhausted and has low escape energy, so the Harris hawks surround the prey through the hard besiege with progressive rapid dive. They attempt to reduce the distance between their average position and the target prey. The position update formula is as follows:
where Y and Z are updated by formulas (12) and (13):

\(Y = {x_{prey}}(t) - E\left| {J{x_{prey}}(t) - {x_m}(t)} \right|\)

\(Z = Y + S \times Levy(D)\)
The pseudocode of the basic HHO algorithm is shown in Algorithm 1.
3 Harris hawks optimization based on global cross-variation and tent mapping
In this section, the proposed CRTHHO algorithm is described in detail. Tent mapping is introduced to address the slow convergence of the exploration phase, and a global crossover mutation is proposed to address the low convergence accuracy of the later stage. These two methods are elaborated below. The HHO variants that introduce tent mapping alone, global cross-variation alone, and both strategies together are named THHO, CRHHO, and CRTHHO, respectively.
3.1 Harris hawks optimization based on tent mapping (THHO)
Chaos mapping has the characteristics of randomness, ergodicity, and regularity and is often used to generate the initial population of the algorithm or as a disturbance in the algorithm process [12]. Because of the slow convergence speed of Harris hawks optimization in the early stage, the tent mapping is introduced into the exploration phase of the algorithm, which can significantly accelerate the optimization progress in the early stage of the search. The specific introduction method is as follows:
In the original Harris hawks optimization, the exploration stage is realized by Eq. (2), where q is a random number in (0, 1) drawn from a simple uniform distribution. In the improved THHO algorithm, the tent map [44] is used to adjust the value of the random parameter q, which better exploits the ergodicity and regularity of the chaotic map so that the hawks can reduce unnecessary consumption in the exploration process and thus converge faster. Therefore, the position update formula of the exploration process in THHO is changed to:
where \(q_i\) changes constantly during the iteration process and is updated through the tent mapping. The formula for \(q_i\) is as follows [44]:

\({q_{i + 1}} = \begin{cases} 2{q_i}, & 0 \le {q_i} < 0.5 \\ 2(1 - {q_i}), & 0.5 \le {q_i} \le 1 \end{cases}\)
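A sketch of the chaotic sequence, assuming the classical tent map (in practice implementations perturb the seed away from values like 0.5 that would collapse the orbit):

```python
def tent(q):
    """One step of the classical tent map on (0, 1):
    q -> 2q for q < 0.5, otherwise 2(1 - q)."""
    return 2 * q if q < 0.5 else 2 * (1 - q)

# Chaotic sequence used in place of the uniform random parameter q
q, seq = 0.37, []
for _ in range(5):
    q = tent(q)
    seq.append(q)
```

Unlike independent uniform draws, consecutive values of the sequence sweep the interval (0, 1) ergodically, which is the property THHO exploits.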
3.2 Harris hawks optimization based on global cross-variation (CRHHO)
When Harris hawks optimization enters the later stage, the search scope shrinks and the algorithm falls into local optima more easily. Therefore, a global crossover mutation strategy is introduced. After each global iteration, the current positions \(({x_1}(t),{x_2}(t), \ldots ,{x_N}(t))\) are used to cross and mutate the optimal position \({x_{prey}}(t)\). This strategy makes full use of the information of the hawk population, avoids skipping the optimal solution, and enhances the local exploration ability of the population in the later period. After crossover and mutation, a greedy selection strategy is used to update the optimal location, which preserves exploration ability while avoiding loss of diversity. Improving the HHO algorithm with crossover mutation and greedy selection avoids local optima in the later stage and enhances the optimization effect.
The crossover mutation strategy is realized by a crossover operator, which hybridizes parent solutions to produce children that inherit the parents' characteristics. The crossover operator used here is defined as follows [45]:
where CR is the crossover rate, \({v_i}(t)\) is selected randomly from \({x_1}(t),{x_2}(t), \ldots ,{x_N}(t)\), \({x_{prey}}(t)\) is the current best position of the hawks, and \({r_j}\) is a uniformly distributed random number in (0, 1), \(j = 1,2, \ldots ,D\). In the present work, the crossover probability CR is set to 0.3.
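A DE-style binomial crossover consistent with the description above can be sketched as follows (the exact component-selection rule is our assumption; each component comes from the random individual with probability CR and from the best position otherwise):

```python
import random

CR = 0.3  # crossover rate used in the paper

def crossover(x_prey, v):
    """Binomial crossover of the global best x_prey with a randomly
    chosen individual v; each component is taken from v with
    probability CR, otherwise from x_prey."""
    return [vj if random.random() < CR else pj
            for pj, vj in zip(x_prey, v)]
```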
Greedy selection compares the fitness of the updated vector with that of the current solution vector to decide whether the updated vector survives into the next generation. After crossover and mutation, whether to update the optimal position of the prey is determined by greedy selection, operated as follows [45]:
where F is the objective function, and the above selection process describes the selection of the minimization problem. Through cross-variation and greedy selection, it can effectively avoid falling into the local optimum and reduce the possibility of stagnation.
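The greedy selection step is a simple comparison under the objective F (minimization), sketched here with a hypothetical sphere objective:

```python
def greedy_select(F, x_prey, u):
    """Greedy selection: keep the trial vector u only if it improves
    the objective F (minimization); otherwise keep x_prey."""
    return u if F(u) < F(x_prey) else x_prey

def sphere(x):
    # illustrative objective: sum of squares (global minimum at 0)
    return sum(v * v for v in x)

best = greedy_select(sphere, [1.0, 1.0], [0.5, 0.5])
```

Because the best position is only ever replaced by a strictly better trial vector, the crossover mutation can never degrade the incumbent solution.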
3.3 Harris hawks optimization combined the two strategies (CRTHHO)
The tent chaotic map is introduced into the THHO algorithm, changing the position update formula of the original algorithm in the exploration stage so that it converges faster in the early stage, which improves optimization performance. CRHHO guides the current optimal individual to mutate through the crossover mutation operator and updates the optimal position of the current population through greedy selection, which avoids falling into local optima in the later stage and improves convergence accuracy. The CRTHHO algorithm combines these two improvements in the HHO algorithm and inherits the advantages of both strategies: it is expected not only to converge faster in the early stage but also to avoid local optima and achieve better accuracy in the later stage. The pseudocode of CRTHHO is shown in Algorithm 2, and the flowchart is shown in Fig. 1. The performance of the CRTHHO algorithm is evaluated in Sect. 4.
4 Experimental simulation and result analysis
In order to verify the optimization performance of CRTHHO, two kinds of experiments are designed in this paper: (1) comparing the improved HHO variants with HHO; and (2) comparing CRTHHO with five mainstream meta-heuristic algorithms. Ten benchmark functions were selected for testing, among which F1–F4 are high-dimensional single-peak functions and F5–F10 are high-dimensional multi-peak functions. A function with only one extreme point in the feasible region is called a single-peak function and tests the algorithm's local exploitation ability and convergence speed. A function with multiple local optima in the feasible region is called a multi-peak function and better tests the algorithm's ability to jump out of local optima. The detailed expressions, dimensions, feasible regions, and target values of the functions are shown in Table 1. In addition, the CRTHHO algorithm's performance is also tested on CEC2017, which is explained in detail in Sect. 4.2.
In order to ensure the fairness of the experimental results, the population size of all algorithms was set to 30, and the parameter settings of HHO and the improved HHO variants are consistent with those in the literature [23]. To reduce randomness, the experimental results are the averages of 30 independent runs. Because this experiment is limited by the floating-point precision of the computer, the minimum achievable accuracy is 2.2251E−308; when the convergence accuracy falls below 2.2251E−308, the result is recorded as 0. For ease of observation and comparison, the best result for each test function is shown in bold in the table. The ordinate of each convergence plot is \({\log _{10}}(fitness\_value)\).
4.1 Compare improved HHOs and basic HHO
The first experiment is divided into two parts: (1) comparing convergence accuracy and speed with a fixed number of iterations, and (2) comparing the number of iterations required to reach a fixed target precision. In this experiment, the population size and maximum number of iterations of the improved algorithms THHO, CRHHO, and CRTHHO and the basic HHO are set to 30 and 500, respectively, and the dimension of the test functions F1–F10 is 30. The final experimental result is the average of 30 independent runs.
4.1.1 Compare the convergence accuracy and speed with fixed iteration times
When the number of iterations is fixed at 500, the test results of HHO, THHO, CRHHO, and CRTHHO on ten benchmark test functions are shown in Table 2. Figure 2 shows the convergence curves of these four algorithms on ten benchmark functions.
In Table 2, the first row of each function records the average of the 30 independent runs, and the second row records the standard deviation. As can be seen from the data in Table 2, on the above 10 test functions the three improved algorithms, THHO, CRHHO, and CRTHHO, are all superior to the original HHO algorithm, and except for F3, the CRTHHO algorithm combining the two improved strategies is superior to the two algorithms that introduce only one strategy. This shows that both strategies improve the performance of HHO, and that combining them improves the original HHO algorithm further. The convergence precision of the four algorithms on F5, F7, and F8 is very close to the theoretical optimal value, and they reach the same convergence precision on F5–F8. Therefore, we further analyze the performance differences of the four algorithms through the convergence curves.
As shown in Fig. 2, the convergence curves on F1, F2, F4, F9, and F10 clearly show that, compared with HHO, THHO significantly improves the convergence speed in the early stage and the convergence accuracy in the late stage. CRHHO has no apparent early-stage advantage over HHO but can jump out of local optima in the late stage and improve convergence accuracy. CRTHHO combines the advantages of both, accelerating convergence in the early stage and improving accuracy in the later stage. Similarly, the curves on F5–F8 show that although the final convergence precision is the same, CRTHHO converges significantly faster than HHO.
4.1.2 Compare iteration times with fixed target precision
In order to further study the effectiveness of the improved strategy, this section compares the number of iterations required by HHO, THHO, CRHHO, and CRTHHO with the same target accuracy (the experimental results are rounded). Table 3 shows the target accuracy of each function in the experiment and the number of iterations required by each algorithm to achieve the accuracy.
By analyzing the data in Table 3, it can be seen that, within the maximum number of iterations, the basic HHO algorithm fails to reach the target accuracy on 6 of the 10 test functions, THHO on 3, and CRHHO on 2, whereas CRTHHO, which combines the two improved strategies, reaches the target accuracy on all of them. Among the four algorithms, CRTHHO therefore has the highest success rate of reaching the target accuracy within 500 iterations. In addition, a horizontal comparison of the results for each test function shows that CRTHHO reaches the same accuracy as HHO, THHO, and CRHHO with fewer iterations. In summary, CRTHHO significantly improves the performance of the HHO algorithm.
4.2 Compare CRTHHO with 5 advanced algorithms
In order to increase the difficulty of the experiment, the second kind of experiment compares the optimization performance of CRTHHO and five mainstream meta-heuristic algorithms on different dimensions of the 10 benchmark functions and on the CEC2017 test functions. The five algorithms selected are: particle swarm optimization (PSO, 1995) [12], biogeography-based optimization (BBO, 2009) [14], the dragonfly algorithm (DA, 2016) [15], the sine–cosine algorithm (SCA, 2016) [16], and the arithmetic optimization algorithm (AOA, 2021) [18]. PSO is a classic swarm intelligence optimization algorithm. BBO, DA, and SCA are excellent optimization algorithms proposed and widely used in recent years. AOA is a novel algorithm proposed in 2021. All of them perform outstandingly on benchmark functions and real engineering problems, so the performance of the proposed CRTHHO algorithm can be demonstrated by comparison with these five algorithms. The initial parameters of these algorithms are shown in Table 4.
4.2.1 Compare CRTHHO with 5 advanced algorithms on basic test functions
In this section, the five optimization algorithms above are compared with CRTHHO to verify that the proposed algorithm performs well among meta-heuristic algorithms. The test functions are the ten basic test functions shown in Table 1. To test performance in higher dimensions, the experiment is divided into three groups, with dimensions 30, 50, and 100, respectively. The experimental results are shown in Table 5, and the convergence curves are shown in Figs. 3 and 4.
Table 5 shows the convergence accuracy of the 6 optimization algorithms in three different dimensions on the 10 benchmark functions. Figure 3 shows the convergence curves of the six optimization algorithms on the benchmark functions when the dimension is 30, and Fig. 4 shows the convergence curves when the dimension is 100. Combining the data in Table 5 with the convergence curves in Figs. 3 and 4, the following analysis is carried out.
By comparing the optimization effects of the algorithms on different functions in the same dimension, it can be seen that in dimensions 30 and 50, CRTHHO's convergence accuracy is optimal on F1 and F3–F10, and its optimization results on F5, F7, and F8 are close to the theoretical optimal value. In contrast, the convergence accuracy of CRTHHO is not as good as that of AOA on F2; however, Fig. 3 shows that the convergence speed of CRTHHO is higher than that of AOA. When the dimension is 100, CRTHHO's convergence accuracy is superior to the other five algorithms on F1–F10, and Fig. 4 shows that its convergence speed is also faster.
By comparing the optimization effects of the algorithms on the same function in different dimensions, it can be concluded that as the dimension of a function increases, the convergence accuracy of an optimization algorithm usually decreases, but the convergence accuracy of CRTHHO changes little or even improves. This can be seen by comparing the accuracy changes of AOA and CRTHHO as the dimension increases. The accuracy of CRTHHO decreased on F1, F3, F4, F9, and F10, but over a small range. On F2, as the dimension increased, the accuracy of AOA dropped significantly, from close to the theoretical optimal value to 3.3E−44, while the accuracy of CRTHHO remained stable at about 1E−110 and even increased slightly. On F6, AOA's accuracy dropped by 12 orders of magnitude as the dimension rose to 100, but CRTHHO's accuracy remained at 8.9E−16.
In conclusion, compared with the above five advanced algorithms, CRTHHO has apparent advantages in convergence accuracy and convergence speed and has a stable solution accuracy for high-dimensional problems.
4.2.2 Compare CRTHHO and 5 advanced algorithms on the CEC2017 test set
CEC2017 [46] is a set of challenging test functions, detailed in Table 6. The test functions in CEC2017 feature rotation and translation; they are complicated, and it is difficult to find their optimal solutions. Therefore, the number of iterations in this section is set to 1000. Considering the time cost and computer performance constraints, the function dimension is set to 10. We compared CRTHHO with the five advanced algorithms on the CEC2017 test set. The experimental results are shown in Table 7, and Fig. 5 shows the convergence curves of the six algorithms on the 30 test functions. In addition, the differences between the proposed algorithm and the other algorithms are analyzed with the rank-sum test.
The rank-sum test is a nonparametric statistical method used to decide whether two samples are significantly different. In this experiment, the rank-sum test is performed between each of the other five algorithms and CRTHHO, with the significance level set at 0.05. The results and ranking of each algorithm are shown in Table 8. The p value is the result of the rank-sum test; when the p value is less than 0.05, the result is significant, indicating that the algorithm differs significantly from the CRTHHO algorithm.
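For illustration, a minimal stdlib-only sketch of the two-sided rank-sum test using the normal approximation (adequate for the 30 runs per algorithm used here; ties are not corrected, unlike a full Wilcoxon implementation):

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum p value via the normal
    approximation; no tie correction."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    # rank sum of sample a (ranks start at 1)
    w = sum(i + 1 for i, (v, g) in enumerate(combined) if g == 0)
    mu = n1 * (n1 + n2 + 1) / 2                       # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # std of W under H0
    z = (w - mu) / sigma
    # two-sided p value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

In practice a library routine such as the SciPy rank-sum test would be used; this sketch only shows what the p values in Table 8 measure.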
Table 7 shows the optimization results of CRTHHO and the other five optimization algorithms on CEC2017, where mean and STD represent the mean and standard deviation of 30 independent runs, respectively. According to the data in Table 7, CRTHHO achieves the optimal convergence accuracy on the 17 functions F1, F3–F6, F8, F10–F13, F15, F17, F20, F22–F23, F25, and F30, and is suboptimal on the five functions F16, F18–F19, and F23–F24. Therefore, CRTHHO achieves superior results on more than seventy percent of the tasks in the CEC2017 test set. Figure 5 shows the convergence curves of CRTHHO and the other five advanced algorithms on CEC2017; it can be seen that CRTHHO converges fast in the early stage and is faster than the other algorithms on more than half of the functions. The rank-sum test results and rankings are shown in Table 8, where the p value is used to determine whether an algorithm differs significantly from CRTHHO, rank orders the six algorithms by convergence accuracy, + means the algorithm is significantly different from and better than CRTHHO, = means it is not significantly different from CRTHHO, and − means it is significantly different from and inferior to CRTHHO. Table 8 shows that CRTHHO ranks first on 17 functions, and in the comprehensive ranking over all 30 functions CRTHHO also ranks first. In conclusion, CRTHHO outperforms the other algorithms on half or more of the functions in the CEC2017 test set, which demonstrates the effectiveness of the proposed algorithm on complex optimization problems.
4.3 Comparison of CRTHHO with other improved HHOs
To further assess the performance of CRTHHO, it is compared with other improved HHO algorithms on CEC2017. LMHHO [41] is an improved algorithm proposed by Hussain et al. in 2019, while GCHHO [35] was proposed by Song et al. in 2021. The test results of CRTHHO, LMHHO, GCHHO, and HHO on CEC2017 are shown in Table 9, with the optimal results indicated in bold. By comparing the test results, it can be seen that CRTHHO achieves 20 optimal results and 5 suboptimal results on the 29 test functions (excluding F2). It can be concluded that the proposed algorithm is competitive with other improved versions of HHO.
4.4 Real-world optimization problems
In order to study the performance of CRTHHO in real-world engineering applications, we tested CRTHHO against PSO, SCA, BBO, DA, and AOA on four engineering design problems: compression spring design, pressure vessel design, speed reducer design, and string design. Due to the low dimensionality of these problems, we set the maximum number of iterations and the population size to 50 and 30, respectively. The final statistical result of each algorithm is the average of 30 independent runs.
4.4.1 Compression spring design
The goal of compression spring design (CSD) [47] is to minimize the spring's mass while satisfying four inequality constraints: minimum deflection, shear stress, surge frequency, and an outer diameter limit. The three design variables are the wire diameter, the mean coil diameter, and the number of active coils. The problem can be described as:
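A penalized objective for this problem can be sketched as follows, assuming the standard CSD formulation from the literature; the penalty factor is our assumption, not a value from the paper:

```python
def csd_cost(x, penalty=1e6):
    """Penalized objective for the compression spring design problem.
    x = [d, D, N]: wire diameter, mean coil diameter, active coils.
    Standard textbook formulation; penalty factor is illustrative."""
    d, D, N = x
    f = (N + 2) * D * d ** 2                      # spring mass (up to a constant)
    g = [
        1 - D ** 3 * N / (71785 * d ** 4),        # minimum deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                # shear stress
        1 - 140.45 * d / (D ** 2 * N),            # surge frequency
        (d + D) / 1.5 - 1,                        # outer diameter limit
    ]
    # static penalty: add a large cost for each violated constraint g_i > 0
    return f + penalty * sum(max(0, gi) for gi in g)
```

Any of the compared meta-heuristics can minimize this penalized function directly, which is the usual way such constrained benchmarks are handled.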
The comparison results between CRTHHO and the five algorithms are recorded in Table 10. As can be seen from Table 10, CRTHHO performs better than the other five algorithms. Therefore, CRTHHO can effectively solve the compression spring design problem.
4.4.2 Pressure vessel design
The goal of pressure vessel design [48] is to minimize the total cost while meeting production requirements. The four design variables are the thickness of the shell Ts(= x1), the thickness of the head Th(= x2), the inner radius R(= x3), and the length of the cylindrical section of the container L(= x4). In addition, x1 and x2 are restricted to integer multiples of 0.0625, while R and L are continuous variables. The problem can be described as:
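Again the formula is not reproduced here; the sketch below encodes the standard pressure vessel model as it usually appears in the literature, with a static penalty as an assumed constraint-handling scheme:

```python
import numpy as np

def pvd_objective(x):
    """Total cost of material, forming, and welding; x = (Ts, Th, R, L)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pvd_constraints(x):
    """The four standard PVD constraints in g_i(x) <= 0 form."""
    x1, x2, x3, x4 = x
    g1 = -x1 + 0.0193 * x3                                   # shell thickness
    g2 = -x2 + 0.00954 * x3                                  # head thickness
    g3 = (-np.pi * x3 ** 2 * x4 - (4.0 / 3.0) * np.pi * x3 ** 3
          + 1296000.0)                                       # minimum volume
    g4 = x4 - 240.0                                          # length limit
    return np.array([g1, g2, g3, g4])

def pvd_fitness(x, rho=1e6):
    """Static-penalty fitness; Ts and Th should additionally be snapped
    to integer multiples of 0.0625 before evaluation."""
    g = pvd_constraints(x)
    return pvd_objective(x) + rho * np.sum(np.maximum(g, 0.0) ** 2)

# Cost at the widely reported near-optimal design, about 6059.7:
print(pvd_objective(np.array([0.8125, 0.4375, 42.0984, 176.6366])))
```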
The comparison results between CRTHHO and the five algorithms are recorded in Table 11. As can be seen from Table 11, CRTHHO performs better than the other five algorithms. Therefore, CRTHHO can effectively solve the pressure vessel design problem.
4.4.3 Speed reducer design
The speed reducer is one of the key components of a gearbox in mechanical systems. In the speed reducer design problem (SRD) [49], the weight of the reducer is minimized under 11 constraints. The seven variables affecting the weight are: face width \(b\left( { = {x_1}} \right) \), module of teeth \(m\left( { = {x_2}} \right) \), number of teeth in the pinion \(z\left( { = {x_3}} \right) \), length of the first shaft between bearings \({l_1}\left( { = {x_4}} \right) \), length of the second shaft between bearings \({l_2}\left( { = {x_5}} \right) \), diameter of the first shaft \({d_1}\left( { = {x_6}} \right) \), and diameter of the second shaft \({d_2}\left( { = {x_7}} \right) \). The problem can be described as:
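As with the previous problems, the equations are not reproduced in this extract; the sketch below follows the standard SRD formulation from the literature (objective plus the 11 usual constraints), again with an assumed static penalty:

```python
import numpy as np

def srd_objective(x):
    """Reducer weight as a function of x = (b, m, z, l1, l2, d1, d2)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2
            * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

def srd_constraints(x):
    """The 11 standard SRD constraints in g_i(x) <= 0 form."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return np.array([
        27.0 / (x1 * x2 ** 2 * x3) - 1.0,                    # bending stress of teeth
        397.5 / (x1 * x2 ** 2 * x3 ** 2) - 1.0,              # surface stress
        1.93 * x4 ** 3 / (x2 * x3 * x6 ** 4) - 1.0,          # shaft 1 deflection
        1.93 * x5 ** 3 / (x2 * x3 * x7 ** 4) - 1.0,          # shaft 2 deflection
        np.sqrt((745.0 * x4 / (x2 * x3)) ** 2 + 16.9e6)
        / (110.0 * x6 ** 3) - 1.0,                           # shaft 1 stress
        np.sqrt((745.0 * x5 / (x2 * x3)) ** 2 + 157.5e6)
        / (85.0 * x7 ** 3) - 1.0,                            # shaft 2 stress
        x2 * x3 / 40.0 - 1.0,                                # dimensional limits
        5.0 * x2 / x1 - 1.0,
        x1 / (12.0 * x2) - 1.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
    ])

def srd_fitness(x, rho=1e6):
    g = srd_constraints(x)
    return srd_objective(x) + rho * np.sum(np.maximum(g, 0.0) ** 2)

# Weight at the widely reported near-optimal design, about 2994.47:
print(srd_objective(np.array([3.5, 0.7, 17.0, 7.3, 7.71532, 3.350215, 5.286654])))
```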
The experimental results of CRTHHO and the five algorithms on the speed reducer design are shown in Table 12. Comparing the algorithms, CRTHHO finds the lowest objective function value and therefore offers better optimization performance than the other algorithms on this problem.
4.4.4 Tubular column design
The goal of the tubular column design problem (TCD) [48] is to design a uniform column at minimal cost such that the column can withstand a given compressive load. The problem has two design variables, the mean diameter of the column \(d\left( { = {x_1}} \right) \) and the thickness of the tube \(t\left( { = {x_2}} \right) \). The optimization model of this problem is:
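The model is not reproduced in this extract; the sketch below follows the textbook version of the problem. The load, material data, and cost coefficients are assumptions taken from the commonly cited formulation, not from this paper:

```python
import numpy as np

# Assumed data from the textbook version of the problem:
P = 2500.0        # compressive load (kgf)
SIGMA_Y = 500.0   # yield stress (kgf/cm^2)
E = 0.85e6        # modulus of elasticity (kgf/cm^2)
L = 250.0         # column length (cm)

def tcd_objective(x):
    """Cost model 9.8*d*t + 2*d with x = (d, t)."""
    d, t = x
    return 9.8 * d * t + 2.0 * d

def tcd_constraints(x):
    """Stress and buckling constraints in g_i(x) <= 0 form."""
    d, t = x
    g1 = P / (np.pi * d * t * SIGMA_Y) - 1.0                  # compressive stress
    g2 = (8.0 * P * L ** 2
          / (np.pi ** 3 * E * d * t * (d ** 2 + t ** 2)) - 1.0)  # Euler buckling
    return np.array([g1, g2])

def tcd_fitness(x, rho=1e6):
    """Static-penalty fitness; bounds 2 <= d <= 14, 0.2 <= t <= 0.8 assumed."""
    g = tcd_constraints(x)
    return tcd_objective(x) + rho * np.sum(np.maximum(g, 0.0) ** 2)

print(tcd_objective(np.array([5.0, 0.3])))  # cost of a trial design, about 24.7
```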
The CRTHHO algorithm and the five other algorithms are applied to this problem, and the test results are shown in Table 13. The best, worst, and average values obtained by CRTHHO are all the lowest, so CRTHHO achieves the best optimization performance among the six algorithms.
5 Conclusion and prospect
This paper presents a Harris hawks optimization based on global cross-variation and tent mapping. Firstly, the tent map is introduced into the exploration stage of the algorithm, and the random parameter is adjusted by exploiting the map's ergodicity and regularity, which compensates for the slow early-stage convergence of HHO and improves its convergence speed. Secondly, a crossover mutation strategy is introduced to mutate the current global optimal solution, with greedy selection retaining the better individual; this reduces the probability of missing a potentially optimal solution and improves convergence accuracy. The proposed CRTHHO combines these two strategies. Its performance is evaluated by three different experiments in Sect. 4. On the 10 benchmark functions tested in three different dimensions, CRTHHO obtains the best results on 9 functions, and its advantage becomes more pronounced as the dimension increases. On the 30 CEC2017 test functions, CRTHHO is best on 17 functions, suboptimal on 5, and ranks first overall. Finally, CRTHHO achieves the best results on four engineering design problems. Together, these three experiments show that CRTHHO outperforms the basic HHO algorithm and five recent meta-heuristic algorithms.
There is still room to develop CRTHHO in the future. For example, new strategies could be introduced to further improve its performance, and it is worth investigating whether the proposed strategies can be transferred to multi-objective optimization algorithms and whether they remain effective there. It is also worth exploring which practical problems the proposed algorithm could be applied to.
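As a compact illustration of the first strategy recalled above, a tent map can generate the chaotic sequence that drives the random parameter q in the exploration stage. The exact map variant is not reproduced in this extract, so the canonical form below, with a coefficient of 1.99 rather than 2.0, is an assumption (2.0 exactly makes the floating-point orbit collapse to zero):

```python
import numpy as np

def tent_map(x, mu=1.99):
    """One step of the tent map on [0, 1]; mu just under 2 keeps the
    floating-point orbit chaotic instead of decaying to the fixed point 0."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

def tent_sequence(x0, n):
    """Iterate the tent map n times from seed x0 (avoid 0, 0.5, and 1)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = tent_map(x)
        seq[i] = x
    return seq

# A chaotic stand-in for the uniform random parameter q in the exploration stage:
q_values = tent_sequence(0.37, 1000)
print(q_values.min() >= 0.0, q_values.max() <= 1.0)
```

The ergodicity of such a sequence, covering the unit interval instead of clustering, is what the improved exploration stage relies on.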
Data availability
Not applicable.
References
Shabani A, Asgarian B, Salido M, Gharebaghi SA (2020) Search and rescue optimization algorithm: a new optimization method for solving constrained engineering optimization problems. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2020.113698
Chen YL, He FZ, Li HR, Zhang DJ, Wu YQ (2020) A full migration bbo algorithm with enhanced population quality bounds for multimodal biomedical image registration. Appl Soft Comput. https://doi.org/10.1016/j.asoc.2020.106335
Fateh H, Bahramara S, Safari A (2020) Modeling operation problem of active distribution networks with retailers and microgrids: a multi-objective bi-level approach. Appl Soft Comput 94:106484. https://doi.org/10.1016/j.asoc.2020.106484
Sundaramurthy S, Jayavel P (2020) A hybrid grey wolf optimization and particle swarm optimization with c4.5 approach for prediction of rheumatoid arthritis. Appl Soft Comput 94:106500. https://doi.org/10.1016/j.asoc.2020.106500
Almohammadi KM, Harby K (2020) Operational conditions optimization of a proposed solar-powered adsorption cooling system: experimental, modeling, and optimization algorithm techniques. Energy 206:118007. https://doi.org/10.1016/j.energy.2020.118007
Moazzeni AR, Khamehchi E (2020) Rain optimization algorithm (roa): a new metaheuristic method for drilling optimization solutions. J Petrol Sci Eng 195:107512. https://doi.org/10.1016/j.petrol.2020.107512
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw. https://doi.org/10.1016/j.advengsoft.2013.12.007
Holland JH (1992) Adaptation in natural and artificial systems: An introductory analysis with applications to biology, control, and artificial intelligence. https://doi.org/10.7551/mitpress/1090.001.0001
Storn R, Price K (1997) Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359. https://doi.org/10.1023/A:1008202821328
Mirjalili S, Mirjalili SM, Hatamlou A (2015) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27(2):495–513. https://doi.org/10.1007/s00521-015-1870-7
Zhao WG, Wang LY, Zhang ZX (2018) A novel atom search optimization for dispersion coefficient estimation in groundwater. Future Gener Comput Syst 91:601–610
Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS'95), pp 39–43. https://doi.org/10.1109/MHS.1995.494215
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (abc) algorithm. J Global Optim 39(3):459–471. https://doi.org/10.1007/s10898-007-9149-x
Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713. https://doi.org/10.1109/TEVC.2008.919004
Mirjalili S (2016) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Appl 27(4):1053–1073. https://doi.org/10.1007/s00521-015-1920-1
Mirjalili S (2016) Sca: a sine cosine algorithm for solving optimization problems. Knowl Based Syst. https://doi.org/10.1016/j.knosys.2015.12.022
Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249. https://doi.org/10.1016/j.knosys.2015.07.006
Abualigah L, Diabat A, Mirjalili S, Elaziz MA, Gandomi AH (2021) The arithmetic optimization algorithm. Comput Methods Appl Mech Eng 376:113609. https://doi.org/10.1016/j.cma.2020.113609
Ahmadianfar I, Heidari AA, Noshadian S, Chen H, Gandomi AH (2022) Info: An efficient optimization algorithm based on weighted mean of vectors. Expert Syst Appl 195:116516
Martínez-Álvarez F, Asencio-Cortés G, Torres JF, Gutiérrez-Avilés D, Melgar-García L, Pérez-Chacón R, Rubio-Escudero C, Troncoso A (2020) Coronavirus optimization algorithm: a bioinspired metaheuristic based on the covid-19 propagation model. Big Data 8:308–322
Abd El-Sattar H, Kamel S, Hassan MH, Jurado F (2022) Optimal sizing of an off-grid hybrid photovoltaic/biomass gasifier/battery system using a quantum model of runge kutta algorithm. Energy Convers Manag 258:115539
Zhao W, Wang L, Mirjalili S (2022) Artificial hummingbird algorithm: a new bio-inspired optimizer with its engineering applications. Comput Methods Appl Mech Eng 388:114194
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen HL (2019) Harris hawks optimization: algorithm and applications. Futur Gener Comput Syst 97:849–872. https://doi.org/10.1016/j.future.2019.02.028
Pasandideh SHR, Niaki STA, Gharaei A (2015) Optimization of a multiproduct economic production quantity problem with stochastic constraints using sequential quadratic programming. Knowl Based Syst 84(8):98–107. https://doi.org/10.1016/j.knosys.2015.04.001
Abbasi A, Firouzi B, Sendur P (2021) On the application of harris hawks optimization (HHO) algorithm to the design of microchannel heat sinks. Eng Comput 37:1409–1428. https://doi.org/10.1007/s00366-019-00892-0
Konijeti MSNK, Bharathi ML (2022) Extraction of maximum power from solar with BLDC motor driven electric vehicles based HHO algorithm. Adv Eng Softw 170:103137. https://doi.org/10.1016/j.advengsoft.2022.103137
Mashaleh AS, Ibrahim NFB, Al-Betar MA, Mustafa HMJ, Yaseen QM (2022) Detecting spam email with machine learning optimized with harris hawks optimizer (hho) algorithm. Procedia Comput Sci 201:659–664. https://doi.org/10.1016/j.procs.2022.03.087
Wunnava A, Naik MK, Panda R, Jena B, Abraham A (2020) A differential evolutionary adaptive harris hawks optimization for two dimensional practical masi entropy-based multilevel image thresholding. J King Saud Univ Comput Inf Sci. https://doi.org/10.1016/j.jksuci.2020.05.001
Houssein EH, Hosney ME, Oliva D, Mohamed WM, Hassaballah M (2020) A novel hybrid harris hawks optimization and support vector machines for drug design and discovery. Comput Chem Eng 133:106656. https://doi.org/10.1016/j.compchemeng.2019.106656
Essa FA, Elaziz MA, Elsheikh AH (2020) An enhanced productivity prediction model of active solar still using artificial neural network and harris hawks optimizer. Appl Thermal Eng. https://doi.org/10.1016/j.applthermaleng.2020.115020
Chen HL, Jiao S, Wang MJ, Heidari AA, Zhao XH (2019) Parameters identification of photovoltaic cells and modules using diversification-enriched harris hawks optimization with chaotic drifts. J Clean Prod. https://doi.org/10.1016/j.jclepro.2019.118778
Kamboj VK, Nandi A, Bhadoria A, Sehgal S (2020) An intensify harris hawks optimizer for numerical and engineering optimization problems. Appl Soft Comput. https://doi.org/10.1016/j.asoc.2019.106018
Qu CW, He W, Peng XN (2020) Harris hawks optimization with information exchange. Appl Math Model. https://doi.org/10.1016/j.apm.2020.03.024
Khalifeh S, Akbarifard S, Khalifeh V, Zallaghi E (2020) Optimization of water distribution of network systems using the harris hawks optimization algorithm (case study: Homashahr city). MethodsX. https://doi.org/10.1016/j.mex.2020.100948
Tikhamarine Y, Souag-Gamane D, Ahmed AN, Sammen SS, Kisi O, Huang YF, El-Shafie A (2020) Rainfall-runoff modelling using improved machine learning methods: Harris hawks optimizer vs particle swarm optimization. J Hydrol 589:125133. https://doi.org/10.1016/j.jhydrol.2020.125133
Song S, Wang P, Heidari AA, Wang M, Zhao X, Chen H, He W, Xu S (2021) Dimension decided harris hawks optimization with gaussian mutation: balance analysis and diversity patterns. Knowl Based Syst 215:106425
Li C, Li J, Chen H, Jin M, Ren H (2021) Enhanced harris hawks optimization with multi-strategy for global optimization tasks. Expert Syst Appl 185:115499. https://doi.org/10.1016/j.eswa.2021.115499
Zhao SJ, Gao LF, Yu DM, Tu J (2021) Improved harris hawks optimization coupling energy cycle decline mechanism and newton local enhancement strategy. Control Decis 36(3):629–636. https://doi.org/10.13195/j.kzyjc.2019.0810
Guo YX, Liu S, Gao WX, Zhang L (2022) Elite opposition-based learning golden-sine harris hawks optimization. Comput Eng Appl 58(10):153–161
Liu XL, Liang TY (2022) Harris hawk optimization algorithm based on square neighborhood and random array. Control Decis. https://doi.org/10.13195/j.kzyjc.2021.0478
Fan Q, Chen ZJ, Xia ZH (2020) A novel quasi-reflected harris hawks optimization algorithm for global optimization problems. Soft Comput 24:14825–14843. https://doi.org/10.1007/s00500-020-04834-7
Hussain K, Zhu W, Salleh MNM (2019) Long-term memory harris’ hawk optimization for high dimensional and optimal power flow problems. IEEE Access. https://doi.org/10.1109/ACCESS.2019.2946664
Ma YM, Shi ZD, Zhao K, Gong CL, Shan LH (2020) Tdoa localization based on improved harris hawk optimization algorithm. Comput Eng Appl 46(12):179–184. https://doi.org/10.19678/j.issn.1000-3428.0056965
Zawbaa HM, Emary E, Grosan C (2016) Feature selection via chaotic antlion optimization. PLoS ONE 11(3):0150652
Gupta S, Deep K (2019) Improved sine cosine algorithm with crossover scheme for global optimization. Knowl Based Syst 165:374–406. https://doi.org/10.1016/j.knosys.2018.12.008
Wu G, Mallipeddi R, Suganthan PN (2016) Problem definitions and evaluation criteria for the cec 2017 competition on constrained real-parameter optimization. Technical Report, Nanyang Technological University, Singapore
Arora J (2012) Introduction to optimum design. Elsevier, Amsterdam, The Netherlands
Bayzidi H, Talatahari S, Saraee M, Lamarche C-P (2021) Social network search for solving engineering optimization problems. Comput Intell Neurosci 2021:8548639. https://doi.org/10.1155/2021/8548639
Sattar D, Salim R (2020) A smart metaheuristic algorithm for solving engineering problems. Eng Comput 37:2389–2417
Acknowledgements
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions.
Funding
This study was supported by the National Natural Science Foundation of China (No. 61535008), the Natural Science Foundation of Tianjin (No. 20JCQNJC00430), the National Natural Science Foundation of China (No. 62203332) and the Science and Technology Research Team in Higher Education Institutions of Hebei Province (No. ZD2018045).
Author information
Contributions
CL was responsible for methodology, writing and reviewing, and supervision. SN was involved in data collation and tabulation, writing, reviewing, editing, revising the manuscript, and software. MY took part in writing, reviewing, editing, and plotting the figures.
Ethics declarations
Conflict of interest
We declare that the authors have no competing interests as defined by Springer, or other interests that might be perceived to influence the results and/or discussion reported in this paper.
Consent for publication
Not applicable.
Ethics approval and Consent to participate
Informed consent was obtained from all individual participants included in the study.
Human and animal rights
Not applicable.
Cite this article
Chen, L., Song, N. & Ma, Y. Harris hawks optimization based on global cross-variation and tent mapping. J Supercomput 79, 5576–5614 (2023). https://doi.org/10.1007/s11227-022-04869-7