1 Introduction

Optimization refers to searching for the optimal solution among all possible solutions to a particular problem [1]. Optimization problems arise in large numbers in scientific computing, medical testing, financial analysis, and other fields, and they are becoming increasingly complex [2,3,4,5]. Traditional optimization methods include gradient descent, Newton's method, quasi-Newton methods, the conjugate gradient method, and the Lagrange multiplier method, the last of which is used to solve constrained optimization problems. Although traditional optimization methods benefit from relatively complete theory and low computational cost, they show obvious limitations when applied to complex and challenging optimization problems. Compared with conventional methods, meta-heuristic algorithms are simple to operate, highly flexible, and require no gradient information, which makes them better suited to large-scale and complicated optimization problems [6, 7].

Meta-heuristic algorithms can be divided into two categories according to their inspiration mechanisms. The first category comprises algorithms based on biological evolution and physical laws in nature, such as the genetic algorithm (GA) [8], the differential evolution algorithm (DE) [9], multi-verse optimization (MVO) [10], and atom search optimization (ASO) [11]. The second category comprises algorithms based on the social behaviors of group-living organisms, such as particle swarm optimization (PSO) [12] and the artificial bee colony algorithm (ABC) [13]; these are also known as swarm intelligence optimization algorithms. In recent years, scholars have proposed many novel meta-heuristic algorithms, such as biogeography-based optimization (BBO) [14], the dragonfly algorithm (DA) [15], the sine–cosine algorithm (SCA) [16], moth flame optimization (MFO) [17], the arithmetic optimization algorithm (AOA) [18], the weighted mean of vectors algorithm (INFO) [19], the coronavirus optimization algorithm (COVIDOA) [20], the Runge–Kutta optimizer (RUN) [21], and the artificial hummingbird algorithm (AHA) [22]. These novel optimization algorithms have been applied to many problems, such as fault diagnosis and feature selection.

In 2019, Heidari et al. [23] proposed a new swarm intelligence optimization algorithm, Harris hawks optimization (HHO). The algorithm simulates the hunting behavior of Harris hawks, which are among the most intelligent birds known. It integrates the concepts of a population center, random population division, and escape energy, and it solves most continuous unconstrained optimization problems well [24]. Moreover, HHO has few parameters to adjust and is easy to operate and implement, so it attracted extensive attention as soon as it was proposed. As an excellent new swarm intelligence optimization algorithm, HHO has been used to solve optimization problems in many fields. For example, Abbasi et al. [25] used HHO to minimize the entropy generation of a microchannel, where it showed superior performance. Mali et al. [26] used HHO to build a maximum power point tracking (MPPT) technique for photovoltaic (PV)-powered e-vehicles. Mashaleh et al. [27] used HHO to improve a machine learning algorithm for spam detection. In addition, HHO and its improved variants have achieved good results in image recognition [28], support vector machine optimization [29], and neural network optimization [30] in the computer field, as well as in photovoltaic system modeling [31], constrained engineering optimization [32,33,34], rainfall modeling [35], and other engineering problems.

However, the HHO algorithm still has some problems: its convergence rate is not fast enough in the exploration stage, and it easily falls into a local optimum in the later stage. In view of these problems, scholars have proposed many improvements to the HHO algorithm. Song et al. [36] introduced the Gaussian mutation strategy and the dimension decision strategy of cuckoo search into HHO, which helps to fully mine the solutions in the search area. Li et al. [37] proposed an exploration strategy based on the logarithmic spiral and opposition-based learning and integrated it into HHO to improve its exploration ability. Zhao et al. [38] proposed an optimized HHO that combines a regulation mechanism of periodic prey-energy decline with Newton's local enhancement strategy, which enhances the exploitation ability of the algorithm. Guo et al. [39] introduced an elite opposition-based learning mechanism and the golden sine algorithm to improve population diversity and reduce convergence time. Liu et al. [40] balanced the exploration and exploitation of the algorithm through a multi-subgroup square neighborhood topology and a fixed permutation probability. Fan et al. [41] proposed quasi-reflection Harris hawks optimization (QRHHO), which combines HHO with a quasi-reflection-based learning mechanism (QRBL) to improve optimization accuracy. Hussain et al. [42] introduced long-term memory into HHO, using past individual location information to increase population diversity during the search. Ma et al. [43] used maximum likelihood estimation to improve the algorithm's fitness function and improve solution accuracy.

Although these improved algorithms enhance exploration or avoid premature convergence to some extent, according to the No Free Lunch theorem, no algorithm can achieve the best effect on all optimization problems. Therefore, given the slow early-stage convergence and local-optimum stagnation of the HHO algorithm, it is still necessary to keep studying improvement strategies. In this work, two strategies are added to the Harris hawks optimization algorithm to better balance exploration and exploitation. Tent mapping is introduced to adjust the value of a random parameter, so that the ergodicity and regularity of the chaotic map reduce unnecessary consumption in the hawks' exploration process and thereby accelerate convergence. A global cross-mutation strategy is added to make full use of the information in the hawk population, avoid skipping over the optimal solution, and enhance the population's local exploration ability in the later stage. By modifying the search equations of HHO with these two strategies, the search mechanism of HHO is improved and exploration and exploitation are better balanced. This paper thus proposes a Harris hawks optimization based on global cross-variation and tent mapping (CRTHHO), whose performance is tested on ten benchmark functions and the CEC2017 test set. The improvement strategies and performance tests are discussed in detail in Sects. 3 and 4.

2 Harris hawks optimization (HHO)

Harris hawks optimization (HHO) is a novel meta-heuristic algorithm proposed by Heidari et al. in 2019 for solving complicated optimization problems. It is inspired by the hunting and attacking behavior of Harris hawks, and its typical feature is swarm intelligence. The algorithm consists of two phases, exploration and exploitation, each of which mimics a different behavior of the hawks during predation. The parameter E represents the escape energy of the prey, which decreases over time; it is calculated as follows:

$$\begin{aligned} E = 2{E_0} \left(1 - \frac{t}{T}\right) \end{aligned}$$
(1)

where \({E_0} = 2{r_1} - 1\) represents the initial state of the energy, \({r_1}\) is a random number in (0, 1), and \({E_0}\) varies randomly in \((-1,1)\) in each iteration. T is the maximum number of iterations, and t is the current iteration number.

The Harris hawks choose different capture strategies depending on the escape energy E of the prey. When \(|E| > 1\), the hawks are in the exploration phase; otherwise, they are in the exploitation phase.
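
To make the phase switch concrete, the following minimal Python sketch implements Eq. (1) and the resulting phase selection; the function name and the sample values of t and T are illustrative only.

```python
import numpy as np

def escape_energy(t: int, T: int, rng=np.random.default_rng()) -> float:
    """Escape energy E of Eq. (1); its envelope 2(1 - t/T) shrinks linearly."""
    E0 = 2 * rng.random() - 1            # initial energy, random in (-1, 1)
    return 2 * E0 * (1 - t / T)

E = escape_energy(t=10, T=500)
phase = "exploration" if abs(E) > 1 else "exploitation"
```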

2.1 Exploration phase

During the exploration phase, Harris hawks search for prey using two strategies, updating their positions based on either a randomly selected hawk or the prey and the other flock members. The mathematical model is as follows:

$$\begin{aligned} x(t + 1) = \left\{ \begin{array}{l} {x_{rand}}(t) - {r_1}|{x_{rand}}(t) - 2{r_2}x(t)|,\quad q \ge 0.5\\ ({x_{prey}}(t) - {x_m}(t)) - {r_3}\left[ LB + {r_4}(UB - LB) \right],\quad q < 0.5 \end{array} \right. \end{aligned}$$
(2)

where t denotes the current iteration number, T the maximum iteration number, and N the population size; \({r_1},{r_2},{r_3},{r_4}\), and q are random numbers in (0, 1) that are updated in each iteration; UB and LB are the upper and lower bounds of the search space; \(x(t+1)\) is the position vector of a hawk in the next iteration; \({x_{prey}}(t)\) is the current optimal position (the position of the prey); x(t) is the current position vector of the hawk; \({x_{rand}}(t)\) is a hawk selected at random from the current population; and \({x_m}(t)\) is the average position of the current population of hawks, calculated by the following formula:

$$\begin{aligned} {x_m}(t) = \frac{1}{N}\sum \limits _{i = 1}^N {{x_i}(t)} \end{aligned}$$
(3)

where \({x_i}(t)\) indicates the location of each hawk in iteration t.
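
The exploration update can be summarized in a short NumPy sketch of Eqs. (2)–(3). All names are illustrative, and the final clipping to the search bounds is an assumption borrowed from common HHO implementations rather than something stated in the equations.

```python
import numpy as np

def explore(x, x_prey, lb, ub, rng=np.random.default_rng()):
    """One exploration-phase update (Eqs. 2-3) for the whole population.

    x      : (N, D) array of current hawk positions
    x_prey : (D,)   best position found so far
    lb, ub : (D,)   lower and upper bounds of the search space
    """
    N, _ = x.shape
    x_m = x.mean(axis=0)                      # Eq. (3): mean hawk position
    x_new = np.empty_like(x)
    for i in range(N):
        r1, r2, r3, r4, q = rng.random(5)
        x_rand = x[rng.integers(N)]           # a randomly chosen hawk
        if q >= 0.5:                          # perch relative to a random hawk
            x_new[i] = x_rand - r1 * np.abs(x_rand - 2 * r2 * x[i])
        else:                                 # perch relative to prey and mean
            x_new[i] = (x_prey - x_m) - r3 * (lb + r4 * (ub - lb))
    return np.clip(x_new, lb, ub)             # clamping is an implementation choice
```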

2.2 Exploitation phase

In the game of pursuit and escape between the hawks and their prey, the escape energy of the prey decreases gradually. When \(|E| < 1\), the Harris hawks enter the exploitation phase: the hunters pursue the prey further and try to capture it. HHO mimics this predatory behavior with four strategies: soft besiege, hard besiege, soft besiege with progressive rapid dives, and hard besiege with progressive rapid dives. Which strategy is used is determined by the escape energy E and the escape probability r.

When \(|E| \ge 0.5\) and \(r \ge 0.5\), the hawks carry out the soft besiege. In this stage, the prey still has enough energy to escape, so the hawks encircle it softly to deplete its energy before launching a surprise attack. This behavior is modeled by the following rules:

$$\begin{aligned}&x(t + 1) = \Delta x(t) - E|J{x_{prey}}(t) - x(t)| \end{aligned}$$
(4)
$$\begin{aligned}&\Delta x(t) = {x_{prey}}(t) - x(t) \end{aligned}$$
(5)
$$\begin{aligned}&J = 2(1 - {r_5}) \end{aligned}$$
(6)

where \(\Delta x(t)\) represents the difference between the prey's position and the hawk's current position in iteration t, \({r_5}\) is a random number in (0, 1), and J represents the random jump strength of the prey throughout its escape. The value of J changes randomly in each iteration to simulate the nature of the prey's movements.
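
A minimal sketch of the soft besiege for a single hawk, assuming the vectorized NumPy conventions used above:

```python
import numpy as np

def soft_besiege(x_i, x_prey, E, rng=np.random.default_rng()):
    """Soft besiege (Eqs. 4-6) for one hawk: circle the prey, then lunge."""
    J = 2 * (1 - rng.random())                   # Eq. (6): jump strength in (0, 2)
    delta = x_prey - x_i                         # Eq. (5): offset to the prey
    return delta - E * np.abs(J * x_prey - x_i)  # Eq. (4)
```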

When \(|E| < 0.5\) and \(r \ge 0.5\), the Harris hawks carry out the hard besiege. In this stage, the prey has no energy left to escape, so the hawks encircle it tightly for the final assault. The current position is updated as follows:

$$\begin{aligned} x(t + 1) = {x_{prey}}(t) - E|\Delta x(t)| \end{aligned}$$
(7)

When \(|E| \ge 0.5\) and \(r < 0.5\), the Harris hawks carry out the soft besiege with progressive rapid dives. In this stage, the prey still has enough energy to escape, so the hawks progressively dive around the target and encircle it softly. To simulate the escape pattern and leapfrog movements of the prey, the Levy flight function LF is incorporated into the position update. This process is modeled as follows:

$$\begin{aligned}&Y = {x_{prey}}(t) - E|J{x_{prey}}(t) - x(t)| \end{aligned}$$
(8)
$$\begin{aligned}&Z = Y + S \times LF(D) \end{aligned}$$
(9)
$$\begin{aligned}&x(t + 1) = \left\{ \begin{array}{l} Y, \quad \text{if} \,\, F(Y) < F[x(t)]\\ Z, \quad \text{if} \,\, F(Z) < F[x(t)] \end{array} \right. \end{aligned}$$
(10)

where D is the dimension of the problem, S is a D-dimensional random row vector, and LF is the Levy flight function.
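
The following sketch combines Eqs. (8)–(10) with a Levy flight in Mantegna's form, using \(\beta = 1.5\) and a 0.01 scaling as in common HHO implementations (an assumption here, since the text does not define LF). The hard besiege with progressive rapid dives described next (Eqs. 11–13) differs only in replacing x(t) with the population mean \(x_m(t)\) inside Eq. (12).

```python
import numpy as np
from math import gamma, pi, sin

def levy(D, beta=1.5, rng=np.random.default_rng()):
    """Levy flight step LF(D) via Mantegna's algorithm (beta = 1.5 assumed)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, D)
    v = rng.normal(0.0, 1.0, D)
    return 0.01 * u / np.abs(v) ** (1 / beta)   # 0.01 scaling: an assumption

def soft_besiege_dive(x_i, x_prey, E, f, rng=np.random.default_rng()):
    """Soft besiege with progressive rapid dives (Eqs. 8-10) for one hawk."""
    D = x_i.size
    J = 2 * (1 - rng.random())                   # jump strength, Eq. (6)
    Y = x_prey - E * np.abs(J * x_prey - x_i)    # Eq. (8)
    Z = Y + rng.random(D) * levy(D, rng=rng)     # Eq. (9): S x LF(D)
    for trial in (Y, Z):                         # Eq. (10): accept only improvements
        if f(trial) < f(x_i):
            return trial
    return x_i
```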

When \(|E| < 0.5\) and \(r < 0.5\), the hawks carry out the hard besiege with progressive rapid dives. In this stage, the prey is exhausted and has little escape energy, so the Harris hawks encircle it with a hard besiege with progressive rapid dives, attempting to reduce the distance between their average position and the target prey. The position update formula is as follows:

$$\begin{aligned} x(t + 1) = \left\{ \begin{array}{l} Y,\quad \text{if}\,\, F(Y) < F[x(t)]\\ Z,\quad \text{if}\,\, F(Z) < F[x(t)] \end{array} \right. \end{aligned}$$
(11)

where Y and Z are given by formulas (12) and (13):

$$\begin{aligned}&Y = {x_{prey}}(t) - E|J{x_{prey}}(t) - {x_m}(t)| \end{aligned}$$
(12)
$$\begin{aligned}&Z = Y + S \times LF(D) \end{aligned}$$
(13)

The pseudocode of the basic HHO algorithm is shown in Algorithm 1.

Algorithm 1 Pseudocode of the basic HHO algorithm

3 Harris hawks optimization based on global cross-variation and tent mapping

In this section, the proposed CRTHHO algorithm is described in detail. Tent mapping is introduced to address the slow convergence in the exploration phase, and a global cross-mutation strategy is proposed to address the low convergence accuracy in the later stage of the algorithm. These two methods are elaborated below. The HHO variants that introduce tent mapping alone, global cross-variation alone, and both strategies together are named THHO, CRHHO, and CRTHHO, respectively.

3.1 Harris hawks optimization based on tent mapping (THHO)

Chaotic maps are random, ergodic, and regular, and they are often used to generate an algorithm's initial population or to perturb the search during a run [12]. Because Harris hawks optimization converges slowly in the early stage, the tent map is introduced into the exploration phase of the algorithm, which significantly accelerates the early search. The specific approach is as follows:

In the original Harris hawks optimization, the exploration stage is realized by Eq. (2), where q is a random number drawn uniformly from (0, 1). In the improved THHO algorithm, the tent map [44] is used instead to update the parameter q, which exploits the ergodicity and regularity of the chaotic map so that the hawks waste less effort during exploration, thereby accelerating convergence. The position update formula of the exploration process in THHO therefore becomes:

$$\begin{aligned} {x_i}(t + 1) = \left\{ \begin{array}{l} {x_{rand}}(t) - {r_1}|{x_{rand}}(t) - 2{r_2}{x_i}(t)|,\qquad {q_i} \ge 0.5\\ ({x_{prey}}(t) - {x_m}(t)) - {r_3}[l{b_i} + {r_4}(u{b_i} - l{b_i})],\qquad {q_i} < 0.5 \end{array} \right. \end{aligned}$$
(14)

where \(q_i\) changes continually during the iterations and is updated through the tent map. The update formula for \(q_i\) is as follows [44]:

$$\begin{aligned} {q_{i + 1}} = \left\{ \begin{array}{l} {q_i}/0.6,\qquad \text{if} \,\, 0 < {q_i} \le 0.6\\ (1 - {q_i})/0.4, \qquad \text{if} \,\, {q_i} > 0.6 \end{array} \right. \end{aligned}$$
(15)
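
A few iterations of Eq. (15) in Python show how the chaotic sequence replaces the uniform random q; the seed value is arbitrary (any value in (0, 1) off the map's short periodic orbits behaves similarly).

```python
def tent(q: float) -> float:
    """One tent-map step, Eq. (15): piecewise linear with its peak at q = 0.6."""
    return q / 0.6 if 0 < q <= 0.6 else (1 - q) / 0.4

q = 0.37                      # arbitrary seed in (0, 1)
for t in range(5):
    q = tent(q)               # chaotic q_i replaces the uniform q of Eq. (2)
    branch = "random-hawk move" if q >= 0.5 else "prey/mean move"
    print(t, round(q, 4), branch)
```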

3.2 Harris hawks optimization based on global cross-variation (CRHHO)

When Harris hawks optimization enters the later stage, the search scope shrinks and the algorithm falls into local optima more easily. Therefore, a global cross-mutation strategy is introduced. After each global iteration, the current positions \(({x_1}(t),{x_2}(t), \ldots ,{x_N}(t))\) are cross-mutated with the optimal position \({x_{prey}}(t)\). This strategy makes full use of the information in the hawk population, avoids skipping over the optimal solution, and enhances the population's local exploration ability in the later stage. After crossover and mutation, a greedy selection strategy is used to update the optimal location, which preserves the exploration ability while keeping the population diverse. Improving HHO with cross-mutation and greedy selection avoids late-stage local optima and thus enhances the optimization effect.

The cross-mutation strategy is realized by a crossover operator, which hybridizes parent solutions to produce children that inherit the parents' characteristics. The crossover operator used here is defined as follows [45]:

$$\begin{aligned} u_i^j(t) = \left\{ \begin{array}{l} v_i^j(t),\quad \text{if} \,\, {r_j} < CR\\ x_{prey}^j(t), \quad \text{otherwise} \end{array} \right. \end{aligned}$$
(16)

where CR is the crossover rate, \(u_i^j(t)\) denotes the j-th component of the trial vector \({u_i}(t)\), \({v_i}(t)\) is selected at random from \({x_1}(t),{x_2}(t), \ldots ,{x_N}(t)\), \({x_{prey}}(t)\) is the current best position of the hawks, and \({r_j}\) is a uniformly distributed random number in (0, 1) drawn for each dimension \(j = 1,2, \ldots ,D\). In the present work, the crossover probability CR is set to 0.3.

Greedy selection compares the fitness of the updated vector with that of the current solution vector to decide whether the updated vector survives into the next generation. After crossover and mutation, whether to update the optimal position of the prey is determined by greedy selection, specified as follows [45]:

$$\begin{aligned} {x_{prey}}(t) = \left\{ \begin{array}{l} {u_i}(t),\quad \text{if} \,\, F({u_i}(t)) \le F({x_{prey}}(t))\\ {x_{prey}}(t),\quad \text{otherwise} \end{array} \right. \end{aligned}$$
(17)

where F is the objective function; the selection above is written for a minimization problem. Through cross-variation and greedy selection, the algorithm effectively avoids falling into local optima and reduces the possibility of stagnation.
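
A minimal sketch of Eqs. (16)–(17) follows. How many trial vectors are generated per global iteration is not pinned down by the text, so looping over N random donors is one reasonable reading, not the definitive implementation.

```python
import numpy as np

def cross_mutate_best(x, x_prey, f, CR=0.3, rng=np.random.default_rng()):
    """Global cross-mutation of the best position (Eqs. 16-17).

    x      : (N, D) current hawk positions
    x_prey : (D,)   current best position
    f      : objective function (minimization)
    """
    N, D = x.shape
    for _ in range(N):
        v = x[rng.integers(N)]            # random donor hawk v_i
        mask = rng.random(D) < CR         # per-dimension crossover, Eq. (16)
        u = np.where(mask, v, x_prey)     # trial vector u_i
        if f(u) <= f(x_prey):             # greedy selection, Eq. (17)
            x_prey = u
    return x_prey
```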

3.3 Harris hawks optimization combined the two strategies (CRTHHO)

The tent chaotic map is introduced into THHO, changing the position update formula of the original algorithm in the exploration stage so that it converges faster in the early stage and the optimization performance improves. CRHHO mutates the current optimal individual through the crossover operator and updates the optimal position of the population through greedy selection, which avoids late-stage local optima and improves convergence accuracy. CRTHHO combines these two improvements in the HHO algorithm and therefore inherits the advantages of both: it is expected to converge faster in the early stage and to escape local optima and reach better accuracy in the later stage. The pseudocode of CRTHHO is shown in Algorithm 2, and the flowchart is shown in Fig. 1. The performance of the CRTHHO algorithm is evaluated in Sect. 4.

Algorithm 2 Pseudocode of CRTHHO

Fig. 1 Flowchart of CRTHHO

4 Experimental simulation and result analysis

In order to verify the optimization performance of CRTHHO, two kinds of experiments are designed in this paper: (1) comparing the improved HHO variants with HHO, and (2) comparing CRTHHO with five mainstream meta-heuristic algorithms. Ten benchmark test functions are used, among which F1–F4 are high-dimensional unimodal functions and F5–F10 are high-dimensional multimodal functions. A function with only one extremum in the feasible region is called unimodal; it tests an algorithm's local exploitation ability and convergence speed. A function with multiple local optima in the feasible region is called multimodal; it better tests an algorithm's ability to jump out of local optima. The detailed expressions, dimensions, feasible regions, and target values of the functions are shown in Table 1. In addition, the CRTHHO algorithm's performance is also tested on CEC2017, which is explained in detail in Sect. 4.2.

In order to ensure the fairness of the experimental results, the population size of all algorithms was set to 30, and the parameter settings of HHO and the improved HHO variants are consistent with the literature [23]. To reduce randomness, each reported result is the average of 30 independent runs. Because the experiments are limited by floating-point precision, the smallest achievable positive value is 2.2251E−308; when the convergence accuracy falls below this, the result is reported as 0. For ease of observation and comparison, the best result for each test function is shown in bold, and the ordinate of each plot is \({\log _{10}}(fitness\_value)\).

Table 1 Benchmark functions

4.1 Compare improved HHOs and basic HHO

The first part of the experiment is divided into two parts: (1) comparing convergence accuracy and speed with a fixed number of iterations, and (2) comparing the number of iterations needed to reach a fixed target precision. In this experiment, the population size and maximum number of iterations of the improved algorithms THHO, CRHHO, and CRTHHO and the basic HHO are set to 30 and 500, respectively, and the dimension of the test functions F1–F10 is 30. Each reported result is the average of 30 independent runs.

4.1.1 Compare the convergence accuracy and speed with fixed iteration times

When the number of iterations is fixed at 500, the test results of HHO, THHO, CRHHO, and CRTHHO on ten benchmark test functions are shown in Table 2. Figure 2 shows the convergence curves of these four algorithms on ten benchmark functions.

In Table 2, for each function the first row records the average of the 30 independent runs and the second row records the standard deviation. As can be seen from Table 2, on the ten test functions the three improved algorithms THHO, CRHHO, and CRTHHO are all superior to the original HHO, and except on F3, the CRTHHO algorithm combining the two improved strategies is superior to the two algorithms that introduce only one strategy. This shows that each strategy improves the performance of HHO, and combining the two improves the original HHO even more. The convergence precision of the four algorithms on F5, F7, and F8 is very close to the theoretical optimal value, and they reach the same convergence precision on F5–F8. Therefore, we further analyze the performance differences of the four algorithms through the convergence curves.

Fig. 2 Convergence curves of the improved HHO algorithms and HHO with a fixed number of iterations

Table 2 Comparison of convergence precision of the four algorithms after 500 iterations

As shown in Fig. 2, the convergence curves of F1, F2, F4, F9, and F10 clearly show that, compared with HHO, THHO significantly improves the convergence speed in the early stage and the convergence accuracy in the late stage. CRHHO has no apparent early-stage advantage over HHO but can jump out of local optima in the late stage and improve the convergence accuracy. CRTHHO combines the advantages of both, accelerating convergence in the early stage and improving accuracy in the later stage. Similarly, the curves for F5–F8 show that although the final convergence precision is the same, the convergence speed of CRTHHO is significantly better than that of HHO.

4.1.2 Compare iteration times with fixed target precision

In order to further study the effectiveness of the improved strategy, this section compares the number of iterations required by HHO, THHO, CRHHO, and CRTHHO with the same target accuracy (the experimental results are rounded). Table 3 shows the target accuracy of each function in the experiment and the number of iterations required by each algorithm to achieve the accuracy.

Table 3 Comparison of iteration times under fixed target accuracy

By analyzing the data in Table 3, it can be seen that for the ten test functions, within the maximum number of iterations, the basic HHO fails to reach the target accuracy on 6 functions, THHO on 3, and CRHHO on 2, whereas CRTHHO, which combines the two improved strategies, reaches the target accuracy on all of them. Among the four algorithms, CRTHHO thus has the highest success rate of reaching the target accuracy within 500 iterations. In addition, a horizontal comparison of the results for each test function shows that CRTHHO reaches the same accuracy as HHO, THHO, and CRHHO with fewer iterations. In summary, CRTHHO significantly improves the performance of the HHO algorithm.

4.2 Compare CRTHHO with 5 advanced algorithms

In order to increase the difficulty of the experiment, the second kind of experiment compares the optimization performance of CRTHHO with five mainstream meta-heuristic algorithms on the 10 benchmark functions in different dimensions and on the CEC2017 test functions. The five algorithms are: particle swarm optimization (PSO, 1995) [12], biogeography-based optimization (BBO, 2009) [14], the dragonfly algorithm (DA, 2016) [15], the sine–cosine algorithm (SCA, 2016) [16], and the arithmetic optimization algorithm (AOA, 2021) [18]. PSO is a classic swarm intelligence optimization algorithm; BBO, DA, and SCA are excellent optimization algorithms proposed and widely used in recent years; and AOA is a novel algorithm proposed in 2021. All of them perform outstandingly on benchmark functions and real engineering problems, so the performance of the proposed CRTHHO algorithm can be demonstrated by comparison with these five algorithms. The initial parameters of these algorithms are shown in Table 4.

Table 4 Parameter settings of the algorithms

4.2.1 Compare CRTHHO with 5 advanced algorithms on basic test functions

In this section, five novel optimization algorithms are compared with CRTHHO to verify that the proposed algorithm performs excellently among meta-heuristic algorithms. The test functions are the ten basic test functions shown in Table 1. To test performance in higher dimensions, the experiments are divided into three groups, with dimensions 30, 50, and 100. The experimental results are shown in Table 5, and the convergence curves are shown in Figs. 3 and 4.

Table 5 shows the convergence accuracy of the 6 optimization algorithms on the 10 benchmark functions in three different dimensions. Figure 3 shows the convergence curves of the six algorithms when the dimension is 30, and Fig. 4 shows the corresponding curves when the dimension is 100. The following analysis combines the data in Table 5 with the convergence curves in Figs. 3 and 4.

By comparing the optimization effects of the algorithms on different functions in the same dimension, it can be seen that in dimensions 30 and 50, CRTHHO's convergence accuracy is optimal on F1 and F3–F10, and its result is close to the theoretical optimum on F5, F7, and F8. On F2, the convergence accuracy of CRTHHO is not as good as that of AOA, but Fig. 3 shows that CRTHHO converges faster than AOA. When the dimension is 100, CRTHHO's convergence accuracy is superior to the other five algorithms on F1–F10, and Fig. 4 shows that its convergence speed is also faster than the other algorithms.

Table 5 Comparison of the convergence accuracy with other algorithms on the benchmark function

By comparing the optimization effects of the algorithms on the same function in different dimensions, it can be concluded that as the dimension of a function increases, the convergence accuracy of a given optimization algorithm generally decreases, whereas the accuracy of CRTHHO changes little or even improves. This can be seen by comparing how the accuracies of AOA and CRTHHO change with dimension. The accuracy of CRTHHO decreases on F1, F3, F4, F9, and F10, but only within a small range. On F2, as the dimension increases, the accuracy of AOA drops markedly, from close to the theoretical optimum to 3.3E−44, while the accuracy of CRTHHO remains stable at about 1E−110 and even increases slightly. On F6, AOA's accuracy drops by 12 orders of magnitude as the dimension rises to 100, but CRTHHO's accuracy remains at 8.9E−16.

Fig. 3 Convergence curves of the 6 algorithms on the 10 benchmark functions (30 dimensions)

Fig. 4 Convergence curves of the 6 algorithms on the 10 benchmark functions (100 dimensions)

In conclusion, compared with the above five advanced algorithms, CRTHHO has clear advantages in convergence accuracy and convergence speed, and its solution accuracy remains stable on high-dimensional problems.

4.2.2 Compare CRTHHO and 5 advanced algorithms on the CEC2017 test set

CEC2017 [46] is a set of challenging test functions, the details of which are shown in Table 6. The CEC2017 functions are rotated and shifted, which makes them complicated and their optima difficult to find. Therefore, the number of iterations in this section is set to 1000. Considering time cost and computing constraints, the function dimension is set to 10. In this experiment, we compare CRTHHO with the five advanced algorithms on the CEC2017 test set. The experimental results are shown in Table 7, and Fig. 5 shows the convergence curves of the six algorithms on the 30 test functions. In addition, the difference between the proposed algorithm and the other algorithms is analyzed with the rank-sum test.

The rank-sum test is a nonparametric statistical method used to judge whether two samples differ significantly. In this experiment, each of the other five algorithms is compared with CRTHHO by the rank-sum test at a significance level of 0.05. The results and rankings of the algorithms are shown in Table 8. The p value is the result of the rank-sum test; when the p value is less than 0.05, the difference is significant, indicating that the algorithm differs significantly from the CRTHHO algorithm.
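
As an illustration of this procedure, the snippet below applies SciPy's two-sided Wilcoxon rank-sum test to two hypothetical arrays of 30 per-run results; the data are placeholders, not the paper's results.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
crthho_runs = rng.normal(1.0e-3, 1e-4, 30)       # hypothetical final fitness values
competitor_runs = rng.normal(2.0e-3, 1e-4, 30)   # hypothetical competitor values

stat, p_value = ranksums(crthho_runs, competitor_runs)
significant = p_value < 0.05                     # 0.05 level, as in the paper
```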

Table 6 Summary of the CEC2017 test functions

Table 7 shows the optimization results of CRTHHO and the other five optimization algorithms on CEC2017, where Mean and STD denote the mean and standard deviation of the results of 30 independent runs. According to Table 7, CRTHHO has the best convergence accuracy on the 17 functions F1, F3–F6, F8, F10–F13, F15, F17, F20, F22–F23, F25, and F30, and is suboptimal on the five functions F16, F18–F19, and F23–F24. Therefore, CRTHHO achieves superior results on more than seventy percent of the tasks in the CEC2017 test set. Figure 5 shows the convergence curves of CRTHHO and the other five advanced algorithms on CEC2017; CRTHHO converges quickly in the early stage and is faster than the other algorithms on more than half of the functions. The rank-sum test results and rankings are shown in Table 8, where the p value determines whether an algorithm differs significantly from CRTHHO; rank orders the six algorithms by convergence accuracy; + means the algorithm is significantly different from and better than CRTHHO; = means it is not significantly different from CRTHHO; and − means it is significantly different from and inferior to CRTHHO. Table 8 shows that CRTHHO ranks first on 17 functions and also ranks first in the comprehensive ranking over the 30 functions. In conclusion, CRTHHO outperforms the other algorithms on half or more of the functions in the CEC2017 test set, which fully demonstrates the effectiveness of the proposed algorithm on complex optimization problems.

Table 7 Results of 6 algorithms on CEC2017
Table 8 Rank-sum test results and ranking of each algorithm
Fig. 5 Convergence curves of CRTHHO and the five optimization algorithms on CEC2017

4.3 Comparison of CRTHHO with other improved HHOs

To further assess the performance of CRTHHO, it is compared with other improved HHO algorithms on CEC2017. LMHHO [42] is an improved algorithm proposed by Hussain et al. in 2019, and GCHHO [36] was proposed by Song et al. in 2021. The test results of CRTHHO, LMHHO, GCHHO, and HHO on CEC2017 are shown in Table 9, with the optimal results indicated in bold. Comparing the test results shows that CRTHHO obtains 20 optimal results and 5 suboptimal results on the 29 test functions (F2 excluded). It can be concluded that the proposed algorithm is competitive with other improved versions of HHO.

Table 9 Comparison of CRTHHO and other improved HHOs

4.4 Real-world optimization problems

In order to study the performance of CRTHHO in real-world engineering applications, we tested CRTHHO against PSO, SCA, BBO, DA, and AOA on four engineering design problems: compression spring design, pressure vessel design, speed reducer design, and tubular column design. Because these four problems are low-dimensional, we set the maximum number of iterations and the population size to 50 and 30, respectively. The final statistic for each algorithm is the average of 30 independent runs.

4.4.1 Compression spring design

The goal of compression spring design (CSD) [47] is to minimize the spring's mass while satisfying four inequality constraints: minimum deflection, shear stress, surge frequency, and a limit on the outer diameter. The three design variables are the wire diameter \(x_1\), the mean coil diameter \(x_2\), and the number of active coils \(x_3\). The problem can be described as:

$$\begin{aligned} \begin{array}{l} \min f(x) = ({x_3} + 2){x_2}x_1^2,\\ \\ Subject\,\, to:\\ {g_1}(x) = 1 - \frac{x_2^3{x_3}}{71785x_1^4} \le 0,\\ {g_2}(x) = \frac{4x_2^2 - {x_1}{x_2}}{12566({x_2}x_1^3 - x_1^4)} + \frac{1}{5108x_1^2} - 1 \le 0,\\ {g_3}(x) = 1 - \frac{140.45{x_1}}{x_2^2{x_3}} \le 0,\\ {g_4}(x) = \frac{{x_1} + {x_2}}{1.5} - 1 \le 0,\\ \\ Variable\,\, range:\\ 0.05 \le {x_1} \le 2,\\ 0.25 \le {x_2} \le 1.3,\\ 2 \le {x_3} \le 15. \end{array} \end{aligned}$$
(18)
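
As a sketch of how such a constrained problem can be handed to CRTHHO or any of the compared optimizers, the snippet below wraps Eq. (18) in a static penalty function. The penalty weight is an assumption, since the paper does not state its constraint-handling method; the same pattern applies to the other three design problems.

```python
import numpy as np

def csd_penalized(x, penalty=1e6):
    """Eq. (18) objective plus a static quadratic penalty for violated constraints."""
    x1, x2, x3 = x
    f = (x3 + 2) * x2 * x1**2                              # spring mass
    g = [
        1 - (x2**3 * x3) / (71785 * x1**4),                # g1: deflection
        (4*x2**2 - x1*x2) / (12566 * (x2*x1**3 - x1**4))
            + 1 / (5108 * x1**2) - 1,                      # g2: shear stress
        1 - 140.45 * x1 / (x2**2 * x3),                    # g3: surge frequency
        (x1 + x2) / 1.5 - 1,                               # g4: outer diameter
    ]
    return f + penalty * sum(max(0.0, gi)**2 for gi in g)

lb = np.array([0.05, 0.25, 2.0])
ub = np.array([2.00, 1.30, 15.0])
# minimize csd_penalized over [lb, ub] with CRTHHO or any other optimizer
```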

The comparison results between CRTHHO and the five other algorithms are recorded in Table 10. As Table 10 shows, CRTHHO outperforms the other five algorithms, so it can effectively solve the compression spring design problem.

Table 10 Comparison results of CRTHHO and other 5 algorithms on CSD problem

4.4.2 Pressure vessel design

The goal of pressure vessel design [48] is to minimize the total cost while meeting production needs. The four design variables are the thickness of the shell \(T_s (= x_1)\), the thickness of the head \(T_h (= x_2)\), the inner radius \(R (= x_3)\), and the length of the cylindrical section \(L (= x_4)\). In addition, \(x_1\) and \(x_2\) are integer multiples of 0.0625, while R and L are continuous variables. The problem can be described as:

$$\begin{aligned} \begin{array}{l} \min f(x) = 0.6224{x_1}{x_3}{x_4} + 1.7781{x_2}x_3^2 + 3.1661x_1^2{x_4} + 19.84x_1^2{x_3},\\ \\ Subject\,\, to:\\ {g_1}(x) = - {x_1} + 0.0193{x_3} \le 0,\\ {g_2}(x) = - {x_2} + 0.00954{x_3} \le 0,\\ {g_3}(x) = - \pi x_3^2{x_4} - \frac{4}{3}\pi x_3^3 + 1296000 \le 0,\\ {g_4}(x) = {x_4} - 240 \le 0,\\ \\ Variable\,\, range:\\ {x_1},{x_2} \in \left\{ 1 \times 0.0625,\, 2 \times 0.0625,\, \ldots ,\, 1600 \times 0.0625 \right\},\\ 10 \le {x_3},{x_4} \le 200. \end{array} \end{aligned}$$
(19)

The comparison results between CRTHHO and the five other algorithms are recorded in Table 11. As Table 11 shows, CRTHHO outperforms the other five algorithms, so it can effectively solve the pressure vessel design problem.

Table 11 Comparison results of CRTHHO and other 5 algorithms on PVD problem

4.4.3 Speed reducer design

In a mechanical system, the reducer is one of the important parts of the gearbox. In the speed reducer design problem (SRD) [49], the weight of the reducer is minimized under 11 constraints. The seven variables affecting the weight are: the face width \(b\left( { = {x_1}} \right) \), the module of teeth \(m\left( { = {x_2}} \right) \), the number of teeth in the pinion \(z\left( { = {x_3}} \right) \), the length of the first shaft between bearings \({l_1}\left( { = {x_4}} \right) \), the length of the second shaft between bearings \({l_2}\left( { = {x_5}} \right) \), the diameter of the first shaft \({d_1}\left( { = {x_6}} \right) \), and the diameter of the second shaft \({d_2}\left( { = {x_7}} \right) \). The problem can be described as:

$$\begin{aligned} \begin{array}{c} \min f(X) = 0.7854{x_1}x_2^2\left( 3.3333x_3^2 + 14.9334{x_3} - 43.0934 \right) \\ - 1.508{x_1}\left( x_6^2 + x_7^2 \right) + 7.4777\left( x_6^3 + x_7^3 \right) + 0.7854\left( {x_4}x_6^2 + {x_5}x_7^2 \right),\\ Subject\,\, to:\\ {g_1}(X) = \frac{27}{{x_1}x_2^2{x_3}} - 1 \le 0,\\ {g_2}(X) = \frac{397.5}{{x_1}x_2^2x_3^2} - 1 \le 0,\\ {g_3}(X) = \frac{1.93x_4^3}{{x_2}x_6^4{x_3}} - 1 \le 0,\\ {g_4}(X) = \frac{1.93x_5^3}{{x_2}x_7^4{x_3}} - 1 \le 0,\\ {g_5}(X) = \frac{\sqrt{\left( 745{x_4}/({x_2}{x_3}) \right)^2 + 16.9 \times 10^6}}{110x_6^3} - 1 \le 0,\\ {g_6}(X) = \frac{\sqrt{\left( 745{x_5}/({x_2}{x_3}) \right)^2 + 157.5 \times 10^6}}{85x_7^3} - 1 \le 0,\\ {g_7}(X) = \frac{{x_2}{x_3}}{40} - 1 \le 0,\\ {g_8}(X) = \frac{5{x_2}}{{x_1}} - 1 \le 0,\\ {g_9}(X) = \frac{{x_1}}{12{x_2}} - 1 \le 0,\\ {g_{10}}(X) = \frac{1.5{x_6} + 1.9}{{x_4}} - 1 \le 0,\\ {g_{11}}(X) = \frac{1.1{x_7} + 1.9}{{x_5}} - 1 \le 0,\\ Variable\,\, range:\\ 2.6 \le {x_1} \le 3.6,\\ 0.7 \le {x_2} \le 0.8,\\ {x_3} \in \left\{ 17,18,19, \ldots ,28 \right\},\\ 7.3 \le {x_4},{x_5} \le 8.3,\\ 2.9 \le {x_6} \le 3.9,\\ 5 \le {x_7} \le 5.5. \end{array} \end{aligned}$$
(20)

The experimental results of CRTHHO and the five other algorithms on the speed reducer design are shown in Table 12. The comparison shows that CRTHHO finds the lowest objective function value, so it has better optimization performance than the other algorithms on this problem.

Table 12 Comparison results of CRTHHO and other 5 algorithms on SRD problem

4.4.4 Tubular column design

The goal of the tubular column design problem (TCD) [48] is to design a uniform column at minimal cost such that the column can withstand a compressive load. The problem has two design variables, the mean diameter of the column \(d\left( { = {x_1}} \right) \) and the thickness of the tube \(t\left( { = {x_2}} \right) \). The optimization model, in which P denotes the compressive load, L the column length, E the elastic modulus, and \({\sigma _y}\) the yield stress, is:

$$\begin{aligned} \begin{array}{c} \min f(X) = 9.8{x_1}{x_2} + 2{x_1},\\ Subject\,\, to:\\ {g_1}(X) = \frac{P}{\pi {x_1}{x_2}{\sigma _y}} - 1 \le 0,\\ {g_2}(X) = \frac{8P{L^2}}{{\pi ^3}E{x_1}{x_2}( x_1^2 + x_2^2 )} - 1 \le 0,\\ {g_3}(X) = \frac{2.0}{{x_1}} - 1 \le 0,\\ {g_4}(X) = \frac{{x_1}}{14} - 1 \le 0,\\ {g_5}(X) = \frac{0.2}{{x_2}} - 1 \le 0,\\ {g_6}(X) = \frac{{x_2}}{8} - 1 \le 0,\\ Variable\,\, range:\\ 2 \le {x_1} \le 14,\\ 0.2 \le {x_2} \le 0.8. \end{array} \end{aligned}$$
(21)

The CRTHHO algorithm and the five other algorithms are used to solve this problem; the test results are shown in Table 13. The comparison shows that the best, worst, and average values of the CRTHHO algorithm are all the lowest, so CRTHHO has a better optimization effect than the other five algorithms.

Table 13 Comparison results of CRTHHO and other 5 algorithms on TCD problem

5 Conclusion and prospect

This paper presents a Harris hawks optimization based on global cross-variation and tent mapping. First, the tent map is introduced into the exploration stage of the algorithm, and a parameter is adjusted using its ergodicity and regularity, which compensates for the slow early-stage convergence and improves the convergence speed of the HHO algorithm. Second, a cross-mutation strategy is introduced to mutate the current global optimal solution, with greedy selection picking the best individual, which avoids missing potentially optimal solutions with a certain probability and improves convergence accuracy. The improved algorithm proposed in this paper combines these two strategies, and the performance of CRTHHO is tested in three different experiments in Sect. 4. Comparing the results across three dimensions on the 10 benchmark functions shows that CRTHHO achieves the best results on 9 functions, and its advantage becomes more pronounced as the dimension increases. On the 30 CEC2017 test functions, CRTHHO achieves the best results on 17 functions and suboptimal results on 5, and it ranks first overall. Finally, the performance of CRTHHO is further verified on four engineering problems, where it obtains the best results. These three experiments show that the CRTHHO algorithm has better optimization performance than the basic HHO algorithm and five novel meta-heuristic algorithms. CRTHHO still leaves much room for future work: whether new strategies can further improve its performance, whether the proposed improvement strategies can be transferred to multi-objective optimization algorithms, and whether such transfers would be effective. Which practical problems the proposed algorithm can address is also worth exploring.