Abstract
Since no single algorithm can provide the optimal solution for all problems, new metaheuristic methods are continually being proposed, either by combining existing algorithms or by creating adaptive variants. Metaheuristic methods should balance their exploitation and exploration stages; in some methods, one of these two abilities is strong while the other is insufficient. By integrating the strengths of two such algorithms and hybridizing them, a more efficient algorithm can be formed. In this paper, the Aquila optimizer-tangent search algorithm (AO-TSA) is proposed as a new hybrid approach that replaces the narrowed exploration stage of the Aquila optimizer (AO) with the intensification stage of the tangent search algorithm (TSA) to improve the exploitation capability of AO. In addition, the local minimum escape stage of TSA is applied in AO-TSA to avoid stagnation in local minima. The performance of AO-TSA is compared with that of other current metaheuristic algorithms on a total of twenty-one benchmark functions, consisting of six unimodal, six multimodal, six fixed-dimension multimodal, and three modern CEC 2019 benchmark functions, according to different metrics. Furthermore, two real engineering design problems are used for performance comparison. Sensitivity analysis and statistical test analysis are also performed. Experimental results show that the hybrid AO-TSA gives promising results and appears to be an effective method for global search and optimization problems.
1 Introduction
Optimization is the task of searching for the best among all candidate solutions to a problem under certain conditions. An optimization problem can be expressed as any problem that aims to find unknown variable values, provided that certain constraints are met (Murty 2003). In classical optimization algorithms, the solution method mostly depends on the type of variables (integer, real, etc.), objectives, and constraints (linear, non-linear, etc.) used in modeling the problem, while its effectiveness also depends on the solution space (concave, convex, etc.), the number of constraints, and the number of decision variables. Furthermore, classical algorithms do not provide general solution strategies that can be applied to problem formulations with different types of decision variables, objectives, and constraints; most of them solve models with only certain types of objective and constraint functions. However, the formulation of optimization problems in many different fields simultaneously requires various types of decision variables, objective functions, and constraint functions. Therefore, general-purpose metaheuristic optimization algorithms have been proposed. These techniques have gained considerable popularity in recent years because of their high computational efficiency and simplicity (Akyol and Alatas 2012, 2017; Alatas 2007). In most real-life problems, the search space is infinite or so large that all solutions cannot be evaluated, so it is necessary to find a good solution by evaluating only some of the solutions in a reasonable time. Which solutions are selected, and how, varies based on the metaheuristic technique.
General-purpose metaheuristic methods can be grouped into eight categories as biology-based, physics-based, swarm-based, music-based, social-based, chemistry-based, sports-based, and mathematics-based. There are also hybrid methods that are combinations of these (Akyol and Alatas 2012, 2017; Alatas 2007). Tunicate Swarm Algorithm (Kaur et al. 2020), Reptile Search Algorithm (Abualigah et al. 2022), Spotted Hyena Optimizer (Dhiman and Kumar 2017), Emperor Penguin Optimizer (Dhiman and Kumar 2018), Seagull Optimization Algorithm (Dhiman and Kumar 2019), and Sooty Tern Optimization Algorithm (Dhiman and Kaur 2019) are biology-based; the Parliamentary Optimization Algorithm (Borji 2007) and the Imperialist Competitor Algorithm (Atashpaz-Gargari and Lucas 2007) are social-based; Artificial Chemical Reaction Algorithm (Alatas 2011) is chemistry-based; Melody Search Algorithm (Ashrafi and Dariane 2011) is music-based; Archimedes Optimization Algorithm (Hashim et al. 2021) and Spring Search Algorithm (Dehghani et al. 2020b) are physics-based; Bonobo Optimizer (Das and Pratihar 2019) and Rat Swarm Optimizer (Dhiman et al. 2021) are swarm-based; Most Valuable Player Algorithm (Bouchekara 2020) and Darts Game Optimizer (Dehghani et al. 2020c) are sports-based and Arithmetic Optimization Algorithm (Abualigah et al. 2021a) is mathematics-based algorithms and models. Multi Leader Optimizer (Dehghani et al. 2020a) can also be classified as both a swarm-based and social-based algorithm (Akyol 2018; Akyol and Alatas 2012, 2020; Alatas 2007). In some studies, algorithms inspired by plant intelligence were examined in a separate group as plant-based methods (Akyol and Alatas 2017) and algorithms inspired by the law of reflection and refraction of light were categorized as light-based methods (Alatas and Bingol 2020).
AO (Abualigah et al. 2021b) is a swarm-based and TSA (Layeb 2021) is a mathematics-based metaheuristic algorithm. AO was developed with inspiration from the hunting skills of the Aquila, and TSA was developed based on the tangent function. In the literature, the number of studies on AO and TSA is small: only a few problem-oriented studies have been performed with AO, and there is no study on the improvement of TSA. Using AO, AlRassas et al. developed an Adaptive Neuro-Fuzzy Inference System model and used it to predict oil production for two different locations in Yemen and China (AlRassas et al. 2021). Elaziz et al. proposed a hybrid version of deep learning and AO and applied it to images of Covid-19 data (Abd Elaziz et al. 2021). Ma and Zhao proposed an improved version of AO using wavelet mutation and quasi-opposite learning strategies, and then used this improved AO together with the Bernoulli model to estimate the rural community population in China (Ma et al. 2021). Vashishtha and Kumar used AO to adjust the length of the minimum deconvolution filter used to detect bearing defects in the Francis turbine (Vashishtha and Kumar 2021).
Metaheuristic algorithms need to have exploration and exploitation capabilities. These two abilities must work in a balanced way. In some metaheuristic algorithms, while the exploitation capability works well, the exploration capability may be lacking, or the exploration capability may work well, but the exploitation capability may not be sufficient due to the stochastic nature of the method (Dhiman 2021). By hybridizing the algorithms, a more efficient algorithm can be obtained by combining the strengths of the two algorithms along with eliminating the weaknesses. Many optimization algorithms suffer from early convergence due to the local optimum in the exploitation phase (Upadhyay and Chhabra 2021). Hybrid algorithms, which are more preferred than individual metaheuristic algorithms, are used in solving broader optimization problems (Verma and Parouha 2021). When compared to classical forms of metaheuristic methods, their hybrid versions show substantial gains (Dokeroglu et al. 2019). Recent research suggests that hybrid metaheuristic algorithms can provide more efficient behavior and better flexibility (Blum et al. 2011). The fundamental purpose of hybrid algorithms is to combine the characteristics of various metaheuristic techniques and reap the benefits of synergy.
AO and TSA are two of the newest metaheuristic methods, and no hybrid version of TSA exists; only two hybrid versions of AO exist in the literature. Wang et al. proposed a new hybrid algorithm that combines the good exploration ability of AO with the exploitation phase of the Harris Hawks Optimizer (Wang et al. 2021). Mahajan et al. stated that AO converges early because the global best location is added directly in the location update, which makes the search phase strong but can cause the algorithm to remain at a local optimum. They also noted that the convergence speed of the Arithmetic Optimization Algorithm (AOA) is reported as low and its search capability as weak. To overcome this deficiency of AOA, they proposed the hybrid AOAAO algorithm (Mahajan et al. 2022).
Based on these observations, in this article, a new hybrid algorithm, AO-TSA, is proposed for complex search and optimization problems. While the exploration phase is applied during the first 2/3 of the iterations in the AO method, the exploitation phase is generally insufficient. To give more room to the exploitation capability, the effective intensification phase of TSA is applied instead of the narrowed exploration phase of AO. In addition, the local minimum escape steps of TSA are applied to avoid the local minimum stagnation problem. According to various metrics, the performance of AO-TSA is compared to that of other existing metaheuristic algorithms on a total of twenty-one benchmark functions, including six unimodal, six multimodal, six fixed-dimension multimodal, and three modern CEC 2019 benchmark functions. In addition, two real engineering design problems are used to compare performance. Statistical test analysis and sensitivity analysis are also performed. The proposed hybrid method seems to achieve better results earlier than AO and TSA. In the second part of this study, AO and TSA are explained in detail and flow diagrams are given. In the third part, the hybrid AO-TSA algorithm is explained, and the flow diagram shows how the two algorithms are combined. The fourth part introduces the benchmark functions and real engineering design problems used. In the fifth part, the performances of standard AO, standard TSA, AO-TSA, and metaheuristic algorithms with promising results in the literature are compared using the experimental results obtained on the benchmark functions. The paper is concluded along with possible future research directions in the sixth part.
2 Standard Aquila optimizer and tangent search algorithm
In this section, the AO (Abualigah et al. 2021b), which is inspired by the hunting skills of Aquila, and TSA (Layeb 2021), which is based on the mathematical tangent function, are explained in detail and flow diagrams are given.
2.1 Aquila optimizer
Aquilas are the most common species of eagle and are among the most popular birds of prey in the Northern Hemisphere. The four hunting methods used by Aquilas are described as follows. In the first method, high soar with a vertical stoop, the Aquila hunts birds while flying high above the ground; after discovering its prey, it closes in with a long, low-angled glide of rapidly increasing speed. The second method, contour flight with a short glide attack, is the flight in which the Aquila rises low above the ground. In the third method, the Aquila slowly attacks its victim, trying to land on the prey it has chosen on the ground; this method is called low flight with a slow descent attack. The last method is walking and grabbing prey, in which the Aquila tries to catch prey by walking on the ground. The AO algorithm was inspired by these four hunting methods.
As in all population-based methods, in AO, the initial population (X) starts with randomly generated values between the lower and upper limits. The optimal result is the best solution obtained from the candidate solutions as a result of iterations. \(X\) consists of a set of randomly generated candidate solutions using Eq. (1). \(N\) represents the total number of candidate solutions (population size), \({X}_{ij}\) represents \(j\)th decision values (positions) of the \(i\)th solution, and \(D\) represents the size of the problem. \(rand\) is a random number, \({UB}_{j}\) is the \(j\)th upper limit, and \({LB}_{j}\), is the \(j\)th lower limit (Abualigah et al. 2021b).
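The initialization of Eq. (1) can be sketched as follows. This is a minimal NumPy sketch under the assumption of the usual uniform-random form \(rand\times ({UB}_{j}-{LB}_{j})+{LB}_{j}\); the function and parameter names are ours, not the paper's.

```python
import numpy as np

def init_population(n, dim, lb, ub, rng=None):
    """Random initial population within [lb, ub]; one row per candidate (Eq. 1)."""
    rng = rng or np.random.default_rng()
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # X_ij = rand * (UB_j - LB_j) + LB_j, assumed uniform-random form
    return rng.random((n, dim)) * (ub - lb) + lb
```

Calling `init_population(30, 10, -100, 100)`, for instance, would produce a 30-by-10 matrix \(X\) usable as a starting population.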
Based on the four behavior patterns that the Aquila exhibits while hunting, the AO algorithm represents each behavior with a method: selection of the search area by high soar with a vertical stoop; exploration within a diverged search space by contour flight with a short glide attack; exploitation within a converged search space by low flight with a slow descent attack; and catching prey by walking. In the AO algorithm, where \(T\) is the maximum number of iterations and \(t\) is the current iteration number, the exploration steps are applied when the condition \(t\le \left(\frac{2}{3}\right)\times T\) is met; otherwise, the exploitation steps are started.
The behavior of Aquila is modeled as a mathematical optimization paradigm, which tries to find the best solution according to the certain constraints (Abualigah et al. 2021b).
2.1.1 Step 1: extended exploration \(({X}_{1})\)
In this method \({(X}_{1})\), Aquila discovers the hunting ground and chooses the best hunting area with a high glide on a vertical slope. Here, AO conducts extensive expeditions by flying high to determine the search area where prey is located. This behavior is mathematically presented as in Eq. (2).
Here, \({X}_{1}\left(t+1\right)\) represents the solution of the (\(t+1)\)th iteration produced by the first method (\({X}_{1}\)). \({X}_{best}\left(t\right)\) represents the best solution achieved up to the \(t\)th iteration and with this, the proximate location of the prey is determined. By using the \(\left(1-\frac{t}{T}\right)\) equation, it is decided whether to apply extended search steps. \({X}_{M}\left(t\right)\) calculated using Eq. (3) indicates the mean value of the positions of the existing solutions in the \(t\)th iteration. \(rand\) is a randomly generated number in [0, 1] (Abualigah et al. 2021b).
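As an illustration only, this update can be sketched as below, assuming the form commonly given for Eqs. (2)-(3) in the AO paper; the helper name and argument layout are ours.

```python
import numpy as np

def extended_exploration(X, X_best, t, T, rng=None):
    """High-soar exploration step (sketch of Eqs. 2-3): move relative to the
    population mean, with the best-solution term scaled down by (1 - t/T)."""
    rng = rng or np.random.default_rng()
    X_mean = X.mean(axis=0)                      # X_M(t), Eq. (3)
    # assumed form: X1(t+1) = X_best * (1 - t/T) + (X_M - X_best * rand)
    return X_best * (1 - t / T) + (X_mean - X_best * rng.random())
```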
2.1.2 Step 2: narrowed exploration \({(X}_{2})\)
In this method \({(X}_{2})\), the hunting ground is found with a high flight. In the short glide attack and contour flight method, the Aquila prepares the attack area by drawing circles on the prey before attacking the prey it has determined. In this method, AO searches in detail around its chosen prey before attacking. This behavior is mathematically presented as in Eq. (4).
\({X}_{2}\left(t+1\right)\) represents the solution of the \(\left(t+1\right)\)th iteration, produced by the second method \({(X}_{2})\). The Levy flight distribution \(Levy\left(\right)\) is calculated as given in Eq. (5). \({X}_{R}\left(t\right)\) is a solution chosen at random from the population (with an index in \([1, N]\)) in the \(t\)th iteration (Abualigah et al. 2021b).
Here, \(s\) is a constant with a value of 0.01. \(u\) and \(v\) are random numbers between 0 and 1. \(\sigma \) is computed by Eq. (6).
Here, \(\beta \) is a constant with a value of 1.5. In Eq. (4), \(x\) and \(y\) are utilized for representing the spiral shape in the search and are calculated using Eqs. (7), (8), (9), (10), and (11).
\({r}_{1}\) takes a value from 1 to 20 to determine the number of search cycles, and \(U\) is a constant with a value of 0.00565. \({D}_{1}\) is an integer from 1 to the length of the search field (\(D\)) and \(\omega \) is a small constant of 0.005 (Abualigah et al. 2021b).
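The Levy flight of Eqs. (5)-(6) can be sketched as follows. The gamma-function form of \(\sigma \) is the standard one used with Levy flights, and, following the text above, \(u\) and \(v\) are drawn uniformly from [0, 1]; the function name is ours.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5, s=0.01, rng=None):
    """Levy flight step (sketch of Eq. 5), with sigma per Eq. (6), beta = 1.5."""
    rng = rng or np.random.default_rng()
    # standard sigma for Levy flights: Eq. (6)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim)                          # u, v uniform in [0, 1] per the text
    v = rng.random(dim)
    # Levy(D) = s * u * sigma / |v|^(1/beta)
    return s * u * sigma / np.abs(v) ** (1 / beta)
```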
2.1.3 Step 3: extended exploitation \({(X}_{3})\)
In this method \({(X}_{3})\), when the area of the prey is correctly located and the Aquila is ready to land and attack, it descends steeply toward the prey with a frontal attack to probe its response. This method, in which the Aquila approaches and attacks by using the environment of its chosen prey, is called low flight with a landing attack. The mathematical expression of this behavior is shown in Eq. (12).
\({X}_{3}\left(t+1\right)\) is the solution of the \((t+1)\) th iteration produced by the third search method (\({X}_{3}\)). \(\alpha \) and \(\delta \) represent parameters of the exploitation tuning that are set to a small value of 0.1 (Abualigah et al. 2021b).
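A hedged sketch of the Eq. (12) update, assuming the commonly cited form \(\left({X}_{best}-{X}_{M}\right)\times \alpha -rand+\left(\left(UB-LB\right)\times rand+LB\right)\times \delta \); names are ours, and the exact expression should be checked against the original paper.

```python
import numpy as np

def extended_exploitation(X_best, X_mean, lb, ub, alpha=0.1, delta=0.1, rng=None):
    """Low-flight descent attack (sketch of Eq. 12): exploit around the best
    and mean positions, with a random point in the bounds scaled by delta."""
    rng = rng or np.random.default_rng()
    # assumed form: X3(t+1) = (X_best - X_M)*alpha - rand + ((UB-LB)*rand + LB)*delta
    return (X_best - X_mean) * alpha - rng.random() \
        + ((ub - lb) * rng.random() + lb) * delta
```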
2.1.4 Step 4: narrowed exploitation \({(X}_{4})\)
In the last method \({(X}_{4})\), when the Aquila approaches its prey on land, it attacks according to the stochastic movements of its prey. This step is called walking and catching prey. Finally, the AO goes after the prey in the final location. This phenomenon is mathematically presented as in Eq. (13).
\({X}_{4}\left(t+1\right)\) is the solution of the \((t+1)\) th iteration produced by the fourth search method (\({X}_{4}\)). \(QF\) is calculated using Eq. (14) and represents a quality function used to balance search strategies. \({G}_{1}\) calculated using Eq. (15) represents the different movements of the AO for tracking the prey during evasion. \({G}_{2}\), calculated using Eq. (16), is a decreasing value from 2 to 0, denoting the slope of the flight for the AO for tracking prey during evasion from the first position to the final position. \(X\left(t\right)\) is the current solution in the \(t\) th iteration (Abualigah et al. 2021b).
\(QF\left( t \right)\) is the quality function value of the \(t\)th iteration. The flowchart of the AO is given in Fig. 1.
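Putting Eqs. (13)-(16) together, the walk-and-catch step can be sketched as below; the exact expressions are assumed from the AO paper from memory, so they should be verified against the original, and all names are ours.

```python
import numpy as np

def narrowed_exploitation(X_cur, X_best, t, T, levy, rng=None):
    """Walk-and-catch step (sketch of Eqs. 13-16): stochastic pursuit around
    the best solution, with a Levy-flight term for the escaping prey."""
    rng = rng or np.random.default_rng()
    QF = t ** ((2 * rng.random() - 1) / (1 - T) ** 2)   # quality function, Eq. (14)
    G1 = 2 * rng.random() - 1                           # tracking motions, Eq. (15)
    G2 = 2 * (1 - t / T)                                # flight slope, 2 -> 0, Eq. (16)
    # assumed form: X4(t+1) = QF*X_best - (G1*X_cur*rand) - G2*Levy(D) + rand*G1
    return QF * X_best - (G1 * X_cur * rng.random()) - G2 * levy + rng.random() * G1
```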
2.2 Tangent search algorithm
The Tangent Search Algorithm is developed based on the tangent function used to explore the search area well. In TSA, the equations of motion are performed with a spherical step “\(step\times \mathrm{tan}(\theta )\)”. As in Levy flight, it is called tangent flight because the tangent function performs the flight function.
Having a better balance between exploration and intensification (exploitation) makes an optimization algorithm more successful. Too much intensification causes algorithms to converge quickly to a local minimum, while too much exploration slows the algorithm down and sometimes diversifies it too much. To reach a balance between intensification and exploration, TSA is built from three main components: intensification, exploration, and escape from the local minimum. The local minimum escape procedure is applied to a random individual (solution) in each iteration to avoid getting stuck in a local minimum. As in other population-based algorithms, the individuals of the initial population in TSA are calculated using Eq. (1) so that they are evenly distributed within the boundaries of the solution space (Layeb 2021).
2.2.1 Intensification phase
In this phase, TSA first performs a random local walk according to Eq. (17) or Eq. (18). Some of the decision variables of the obtained solution (20% for problems with more than four dimensions, 50% for problems with four dimensions or fewer) are then replaced with the values of the corresponding variables in the current best solution.
As a result, the resulting solution \({X}_{i}\left(t+1\right)\) has a similarity rate of less than 50% with the optimal available solution. If the values of the found solution exceed the \(LB\) and \(UB\) limits of the problem, Eq. (19) is used for correction (Layeb 2021).
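Since Eqs. (17)-(19) are not reproduced above, the following sketch implements only the verbal description: a tangent-flight local walk (the \(step\times \mathrm{tan}(\theta )\) form from Sect. 2.2), partial replacement from the best solution at the stated rates, and bound correction. All names, and the choice of clipping for the Eq. (19) correction, are our assumptions.

```python
import numpy as np

def intensification(X_i, X_best, step, theta, lb, ub, rng=None):
    """TSA intensification sketch: tangent-flight local walk, then copy a
    fraction of the best solution's variables (20% for D > 4, else 50%),
    then correct out-of-bound values by clipping."""
    rng = rng or np.random.default_rng()
    D = X_i.size
    cand = X_i + step * np.tan(theta) * rng.random(D)   # local walk (Eqs. 17-18 spirit)
    rate = 0.2 if D > 4 else 0.5
    mask = rng.random(D) < rate                         # variables taken from the best
    cand[mask] = X_best[mask]
    return np.clip(cand, lb, ub)                        # bound correction (Eq. 19 stand-in)
```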
2.2.2 Exploration phase
Unlike local search methods, this phase has a large exploration capacity due to its global random walk. TSA uses tangent flight and a variable step size for the global random walk. By using the tangent function, the search area is explored more efficiently. In fact, a \(\theta \) close to \(\pi /2\) increases the tangent value, and the resulting solution will be far from the current solution; with a \(\theta \) near 0, the tangent function takes small values, and the resulting solution will be similar to the current solution. Therefore, Eq. (20), which belongs to the exploration phase, alternates between a local and a global random walk. For the exploratory search, this equation is applied to each variable with probability \(1/D\) (Layeb 2021).
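A sketch of the per-variable tangent-flight walk described for Eq. (20); drawing \(\theta \) just below \(\pi /2\) (here from \([0, \pi /2.1)\)) is our choice to keep the tangent finite, not the paper's, and the function name is ours.

```python
import numpy as np

def tangent_exploration(X_i, step, rng=None):
    """Global tangent-flight walk sketch: each variable is perturbed by
    step * tan(theta) with probability 1/D, per the description of Eq. (20)."""
    rng = rng or np.random.default_rng()
    D = X_i.size
    out = X_i.copy()
    mask = rng.random(D) < 1.0 / D                 # each variable with prob 1/D
    theta = rng.random(mask.sum()) * np.pi / 2.1   # theta kept just below pi/2
    out[mask] = out[mask] + step * np.tan(theta)
    return out
```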
2.2.3 Escape from the local minimum
TSA includes a mechanism that uses a specific procedure to escape the local minimum stagnation problem. The procedure consists of two parts executed with the \(Pesc\) probability value. A random search agent is selected at each iteration, and then either Eq. (21) or Eq. (22) is used. In addition, new random solutions can replace the worst solutions with a probability of 1% (Layeb 2021). In Eq. (21), \(sign()\) represents the signum function that returns the sign of a given number.
\(Pswitch\), \(Pesc\), \(step\), and \(\theta \) are the basic parameters used to emphasize the intensification and exploration search. The switching parameter \(Pswitch \epsilon [0, 1]\) controls the balance between global and local random walks, and \(Pesc \epsilon [0, 1]\) is the escape procedure probability. The \(step\) parameter guides and emphasizes the intensification and exploration capability. A variable step size is used in TSA to obtain a good approximation of the best solution and avoid a lack of precision: a large step size is adopted at the beginning of the search, and the step size is decreased non-linearly during the iterations. Through this adaptive behavior, a good balance between intensification and exploration is aimed for in TSA. The step size is influenced by the tangent flights, which give an oscillating and periodic behavior to the search. The method uses a nonlinear reduction scheme with a logarithm-function-based adaptive step size to adapt the exploration and intensification search process; fine convergence is aimed for with the help of the logarithm function. On the other hand, better results are obtained when different step size functions are used (Layeb 2021). Therefore, to be more efficient, two step size variants are used in TSA: the first variant, calculated as given in Eq. (23), is used in the intensification search, and the second, calculated using Eq. (24), is used in the exploration search.
Here, \(norm()\) is a specific mathematical norm. The \(sign(-,+)\) component controls the direction of the exploration and intensification phase (Layeb 2021). The flowchart of TSA is given in Fig. 2.
3 Hybrid Aquila optimizer-tangent search algorithm
High exploration and exploitation abilities are required in search and optimization methods. The goal of the exploration phase is to thoroughly investigate the search space and identify the most promising candidate solutions, while the exploitation phase guides the search process toward the best solution found by the population. The accuracy and convergence speed of a metaheuristic method can be enhanced by properly balancing exploitation and exploration. By hybridizing an algorithm with strong exploitation but weak exploration ability with an algorithm having strong exploration but weak exploitation ability, a new algorithm stronger than either can be obtained. Thus, hybridization combines the strengths of each method in a single approach, taking advantage of their benefits while eliminating their individual disadvantages.
In the AO method, while the exploration phase is applied during the first 2/3 of the iterations, the exploitation phase is insufficient. To give more importance to the exploitation capability, the effective intensification phase of TSA is applied instead of the narrowed exploration phase of AO; the aim is to reach the optimum solution earlier by strengthening the exploitation stage of the hybrid AO-TSA. In this paper, a hybrid AO-TSA is therefore proposed by integrating the intensification steps of TSA into the narrowed exploration phase of AO. By properly balancing the exploitation and exploration phases, the proposed hybrid algorithm finds the global solution faster without getting stuck in a local optimum. Finally, in order to escape from the local minimum stagnation problem, the escape-from-local-minimum steps used by TSA are also implemented in this new hybrid algorithm. The flow diagram of the proposed hybrid AO-TSA is shown in Fig. 3.
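The control flow described in this section can be sketched as a skeleton. The four update rules below are simple placeholders rather than the paper's Eqs. (2)-(24); only the structure, namely the phase switch at \(t\le \left(\frac{2}{3}\right)\times T\), TSA intensification replacing the narrowed exploration, and a per-iteration escape step on one random agent, follows the text, and all names and constants are ours.

```python
import numpy as np

def ao_tsa_skeleton(obj, lb, ub, dim, n=30, T=200, p_esc=0.1, rng=None):
    """Control-flow skeleton of the hybrid AO-TSA (placeholder update rules)."""
    rng = rng or np.random.default_rng()
    X = rng.random((n, dim)) * (ub - lb) + lb          # Eq. (1)-style initialization
    fit = np.array([obj(x) for x in X])
    g_best, g_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(1, T + 1):
        for i in range(n):
            if t <= (2 / 3) * T:                       # AO exploration window
                if rng.random() < 0.5:                 # extended exploration (placeholder)
                    cand = g_best * (1 - t / T) + (X.mean(0) - g_best * rng.random())
                else:                                  # TSA intensification (placeholder walk)
                    cand = X[i] + 0.1 * np.tan(rng.random() * 1.5) * (g_best - X[i])
            else:                                      # AO exploitation window (placeholder)
                cand = g_best + 0.1 * (2 * (1 - t / T)) * rng.standard_normal(dim)
            cand = np.clip(cand, lb, ub)
            f = obj(cand)
            if f < fit[i]:                             # greedy replacement
                X[i], fit[i] = cand, f
                if f < g_fit:
                    g_best, g_fit = cand.copy(), f
        if rng.random() < p_esc:                       # TSA local-minimum escape (placeholder)
            j = rng.integers(n)
            X[j] = np.clip(X[j] + rng.standard_normal(dim), lb, ub)
            fit[j] = obj(X[j])
    return g_best, g_fit
```

Running this skeleton on a simple sphere function illustrates the flow; the real algorithm would substitute the equations of Sects. 2.1-2.2 for the placeholder updates.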
4 Test functions and engineering design problems
Well-defined and complex functions are generally used as standard measures of optimization methods. Standard problems and interfaces for search and optimization are already specified so that different optimization algorithms can be compared on different types of problems. Six unimodal (Sphere, Rosenbrock, Quartic, Schwefel's 1.20, Schwefel's 2.21, and Schwefel's 2.22), six multimodal (Schwefel, Levy Function, Ackley, Griewank, Penalized, and Rastrigin), and six fixed-dimension multimodal (Foxholes, Kowalik, Goldstein-Price, Shekel 7, Shekel 10, and Six Hump Camel) benchmark functions were used to compare the performances of the AO-TSA, AO, TSA, SCA (Mirjalili 2016), WOA (Mirjalili and Lewis 2016), I-GWO (Nadimi-Shahraki et al. 2021), CSA (Askarzadeh 2016), and BO (Das and Pratihar 2019) algorithms. In addition, three of the CEC 2019 benchmark functions (Storn's Chebyshev Polynomial Fitting Problem, Lennard–Jones Minimum Energy Cluster, and Inverse Hilbert Matrix Problem) were used. The equations, parameters, minimum values, and problem sizes of these twenty-one test functions are shown in Table 1.
Two engineering design problems (three bar truss design and tension/compression spring design) were also used to test the efficiency of the proposed method. In tension/compression spring design, which is a continuous constrained problem, it is aimed to find the best values of the mean coil diameter, number of active coils, and wire diameter parameters in order to minimize the tension/compression spring weight. The mathematical expression of the problem is shown in Eq. (25).
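Since Eq. (25) is not reproduced above, the classical formulation of the tension/compression spring problem from the engineering-design literature is sketched here for reference; the variable names are ours, not the paper's notation.

```python
import numpy as np

def spring_weight(x):
    """Objective of the tension/compression spring problem, classical form:
    x = (d, D, N) = (wire diameter, mean coil diameter, number of active coils)."""
    d, D, N = x
    return (N + 2) * D * d ** 2

def spring_constraints(x):
    """The four classical inequality constraints, each required to be <= 0."""
    d, D, N = x
    return np.array([
        1 - D ** 3 * N / (71785 * d ** 4),                       # deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                               # shear stress
        1 - 140.45 * d / (D ** 2 * N),                           # surge frequency
        (d + D) / 1.5 - 1,                                       # outer diameter
    ])
```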
In the problem of three bar truss design, which aims to minimize the weight of a three-bar truss, the objective function is simple. However, as with other structural design problems, there are many constraints. These constraints are buckling, deflection, and stress. Equation (26) depicts the problem’s mathematical expression.
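Similarly, here is a sketch of the classical three-bar truss formulation usually given for Eq. (26), with \(l=100\) cm and load and stress limits \(P=\sigma =2\) kN/cm\(^{2}\) as in the standard statement of the problem; this is a reference sketch, not the paper's exact notation.

```python
import numpy as np

def truss_weight(x, l=100.0):
    """Objective of the three-bar truss problem: bar cross-sections x1, x2."""
    x1, x2 = x
    return (2 * np.sqrt(2) * x1 + x2) * l

def truss_constraints(x, P=2.0, sigma=2.0):
    """Stress constraints on the three members, each required to be <= 0."""
    x1, x2 = x
    s2 = np.sqrt(2)
    denom = s2 * x1 ** 2 + 2 * x1 * x2
    return np.array([
        (s2 * x1 + x2) / denom * P - sigma,
        x2 / denom * P - sigma,
        1 / (s2 * x2 + x1) * P - sigma,
    ])
```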
5 Experimental results
In this study, for all experiments, the initial population size of the algorithms was 30. All algorithms were started and run under equal conditions. The number of function evaluations was used as the termination condition: each algorithm was terminated when the number of function evaluations reached 10,000. Each algorithm was run 20 times for each test function, and the obtained results are presented. Standard values for the parameters of AO and TSA were used in the experiments.
Tables 2 and 3 show the best, worst, mean, and standard deviation values as “Best”, “Worst”, “Mean”, and “Std” respectively obtained after the Hybrid AO-TSA, AO, TSA, WOA, SCA, I-GWO, BO, and CSA were run 20 times for each test function. According to these tables, the AO-TSA algorithm gives the best results for Sphere, Rosenbrock, Schwefel’s 2.21, Schwefel, Ackley, Griewank, Rastrigin, Foxholes, Kowalik, Shekel 7, Shekel 10, Six Hump Camel, and Lennard–Jones Minimum Energy Cluster test functions. The worst results were obtained from CSA in general. The best results according to mean values for Sphere, Rosenbrock, Quartic, Schwefel’s 1.20, Schwefel’s 2.21, Schwefel’s 2.22, Schwefel, Levy Function, Ackley, Griewank, Penalized, Rastrigin, Shekel 7, Shekel 10, Six Hump Camel, and Kowalik test functions were obtained from the hybrid AO-TSA. According to results obtained for unimodal benchmark functions, the proposed method seems to achieve the best results in 3 out of 6 functions with respect to the best values while it gives the best results in all unimodal functions concerning mean values. In multimodal benchmark functions, AO-TSA gives the best results in 4 out of 6 functions in terms of the best values, and the best results in all functions in terms of mean values are obtained by the proposed method.
As seen in Table 3, in fixed-dimension multimodal benchmark functions, AO-TSA achieves the best results in 5 out of 6 functions in terms of the best values, and the proposed technique achieves the best results in 4 out of 6 functions in terms of mean values. In CEC 2019 benchmark functions, the high performance obtained for other types of benchmark functions is not achieved by the proposed method. Generally, it is seen that the mean values obtained from AO-TSA for other test functions are better than other algorithms. When the standard deviation values were examined, AO-TSA gave the minimum value for all 6 test functions used. Considering the experiments, it is seen that AO-TSA gives promising results in terms of the standard deviation values compared to other algorithms.
Figure 4 shows the change in the mean fitness value/number of function evaluations of the test results of Hybrid AO-TSA, AO, TSA, WOA, SCA, I-GWO, BO, and CSA in Sphere, Rosenbrock, Quartic, Schwefel’s 1.20, Schwefel’s 2.21, Schwefel’s 2.22, Schwefel, Levy Function, Ackley, Griewank, Penalized, and Rastrigin test functions.
According to the convergence curves shown in Fig. 4, the hybrid AO-TSA reaches the best solution very rapidly for all test functions. Especially on the Schwefel's 2.21, Schwefel, Ackley, Griewank, and Rastrigin test functions, it gives good results in a very short time compared to the other algorithms. Again, according to Fig. 4, the algorithm that reached the best solution on the test functions used was AO-TSA.
Figure 5 shows the change in the mean fitness value versus the number of function evaluations of the test results of the hybrid AO-TSA, AO, TSA, WOA, SCA, I-GWO, BO, and CSA on the Foxholes, Kowalik, Goldstein-Price, Shekel 7, Shekel 10, Six Hump Camel, Storn's Chebyshev Polynomial Fitting Problem, Lennard–Jones Minimum Energy Cluster, and Inverse Hilbert Matrix Problem test functions. According to the convergence curves shown in Fig. 5, the convergence rate of AO-TSA is quite good. Especially for the Shekel 7 and Shekel 10 test functions, the proposed method converged much faster than the other algorithms. In general, AO-TSA converges rapidly and achieves better results.
The effect of the population size (N) is investigated on many benchmark functions. Several population sizes (20, 30, and 50) are examined over 10,000 function evaluations to properly analyze the parameter sensitivity of AO-TSA. The sensitivity analysis for the population size is shown in Table 4. The results presented in the table indicate that, in general, the performance of the method increases with higher N values.
Furthermore, the effect of the t parameter, which switches between the exploration and exploitation phases, is investigated on many benchmark functions. The values \(\left(\frac{1}{2}\right)\times T\), \(\left(\frac{2}{3}\right)\times T\), and \(\left(\frac{3}{4}\right)\times T\) are used for the parameter analysis of the algorithm. The results of the sensitivity analysis for the t parameter are shown in Table 5. The best results are obtained with the value \(\left(\frac{2}{3}\right)\times T\).
Box plots of the experimental results of the hybrid AO-TSA proposed in this study and the other seven algorithms used for comparison are shown in Figs. 6 and 7. When the box plots in Figs. 6 and 7 are examined, it is seen that the lower quartile, median, and upper quartile values obtained over 20 runs of AO-TSA on the Rosenbrock, Quartic, Schwefel's 1.20, Schwefel's 2.21, Schwefel's 2.22, Levy Function, Griewank, Penalized, Rastrigin, Foxholes, Kowalik, and Storn's Chebyshev Polynomial test functions, and especially the Schwefel test function, are smaller than the values obtained from the other algorithms. The lower quartile, median, and upper quartile values obtained from AO-TSA for the Sphere, Ackley, Goldstein-Price, and Lennard–Jones Minimum Energy Cluster test functions are close and small compared to those obtained from the other algorithms.
For the problems of three bar truss design and tension/compression spring design, all algorithms were started and run under equal conditions. The optimal results are obtained when each algorithm is run 20 times and the parameter values that give these results are listed in Tables 6 and 7. When the tables are examined, it is seen that for these two problems, the best results are obtained from the proposed AO-TSA. Afterward, BO and I-GWO algorithms give better results, respectively.
In addition, the experimental results were evaluated with the Friedman test to examine whether there is a statistically significant difference between the results obtained from the algorithms used in this study. This non-parametric test describes differences in the behavior of multiple algorithms: test cases are expressed in rows, while the compared algorithms are shown in columns. In the experiments, the null hypothesis (\({H}_{0}\)) is "There is no meaningful difference between the fitness function values of the compared algorithms and those of the proposed hybrid AO-TSA"; the alternative hypothesis (\({H}_{1}\)) is "There is a meaningful difference between the fitness function values of the compared algorithms and those of the proposed hybrid AO-TSA". The results of the Friedman test analysis are shown in Table 8. The significance level alpha was set to 0.05 and the degrees of freedom (\(Df\), the number of compared algorithms minus 1) was 7. For this alpha value and these degrees of freedom, the critical value \({x}_{F}^{2}\) from the chi-square distribution table is 14.067. Examination of Table 8 shows that the computed \({x}^{2}\) value is greater than the critical value (\({x}^{2}>{x}_{F}^{2}\)) for all test functions. Accordingly, the null hypothesis (\({H}_{0}\)) is rejected and the alternative hypothesis (\({H}_{1}\)) is accepted: there is a statistically significant difference between the fitness function values of the compared algorithms and those of the proposed hybrid AO-TSA according to the Friedman test.
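The Friedman statistic described above can be computed directly from a results table. The sketch below uses only the standard library; the fitness values are hypothetical, and with only three algorithms the degrees of freedom drop to 2, so the 0.05 critical value is 5.991 rather than the 14.067 used for the paper's eight algorithms.

```python
def friedman_statistic(results):
    """results[i][j]: fitness of algorithm j on test function i (lower is better).
    Returns the Friedman chi-square statistic over n functions and k algorithms."""
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:                       # assign average ranks, handling ties
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average of tied 1-based positions
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for jdx in range(k):
            rank_sums[jdx] += ranks[jdx]
    # chi^2 = 12 / (n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Hypothetical mean-fitness rows (6 functions x 3 algorithms; lower is better).
results = [
    [1e-9, 1e-4, 3e-2],
    [2e-8, 5e-3, 1e-1],
    [3e-3, 9e-1, 2.4],
    [1.2,  4.7,  9.1],
    [5e-2, 1.3,  3.8],
    [3e-6, 2e-2, 5e-1],
]
chi2 = friedman_statistic(results)
# chi2 = 12.0 here (ranks are identical in every row), which exceeds the
# k = 3 critical value of 5.991, so H0 would be rejected.
```

Rejection when the statistic exceeds the tabulated chi-square value is exactly the comparison performed on Table 8 with \({x}_{F}^{2}=14.067\).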
6 Conclusions
Metaheuristic approaches have grown in popularity in recent years as a result of their high computational efficiency and ease of adaptation. There is, however, no algorithm that can provide the optimal solution to all problems. As a result, new metaheuristic algorithms are being proposed and current algorithms are being improved. Metaheuristic algorithms should include both exploration and exploitation capabilities; in some algorithms one of these two capabilities is sufficient while the other is not. In this study, a new hybrid method, AO-TSA, was proposed that uses the TSA intensification phase in place of AO's limited exploration stage to improve AO's exploitation ability. In addition, the local minimum escape steps of TSA are applied in the new hybrid algorithm. By combining the strengths of these two metaheuristic algorithms, a new, more efficient general-purpose hybrid algorithm was presented as a solution search methodology for complex optimization problems.
Six unimodal, six multimodal, six fixed-dimension multimodal, and three modern CEC 2019 benchmark functions were used to compare the performance of AO-TSA, AO, TSA, WOA, SCA, I-GWO, CSA, and BO. When the mean results over 20 runs were examined, the convergence curves showed that hybrid AO-TSA generally reaches better solutions earlier than the other algorithms. On the multimodal test functions, AO-TSA found the best or near-best values for the Best, Mean, and Standard Deviation criteria. Furthermore, on the unimodal benchmark functions, the best mean values were obtained by the proposed hybrid AO-TSA, and the best results on the multimodal benchmark functions were also achieved by the proposed method. On the CEC 2019 benchmark functions, however, the proposed algorithm did not match the high performance obtained on the other types of benchmark functions. Finally, Friedman analysis of the results found statistically significant differences. Although the results show that hybrid AO-TSA is an effective method for global optimization, the need to specify the hybrid method's parameters a priori appears to be a limitation of AO-TSA. To eliminate this limitation, new approaches can be proposed for adaptively setting the algorithm parameters.
In future studies, it is planned to apply different versions of this algorithm to real-world problems and to propose a multi-objective version. Different local search methods can be integrated into the method to increase the accuracy and search power of hybrid AO-TSA. In addition, distributed and parallel versions of the algorithm can be developed.
Data deposition statement
No new data were generated for this study.
References
Abd Elaziz M, Dahou A, Alsaleh NA, Elsheikh AH, Saba AI, Ahmadein M (2021) Boosting COVID-19 image classification using MobileNetV3 and Aquila optimizer algorithm. Entropy 23(11):1383. https://doi.org/10.3390/e23111383
Abualigah L, Diabat A, Mirjalili S, Abd Elaziz M, Gandomi AH (2021a) The arithmetic optimization algorithm. Comput Methods Appl Mech Eng 376:113609. https://doi.org/10.1016/j.cma.2020.113609
Abualigah L, Yousri D, Abd Elaziz M, Ewees AA, Al-qaness MA, Gandomi AH (2021b) Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput Ind Eng 157:107250. https://doi.org/10.1016/j.cie.2021.107250
Abualigah L, Abd Elaziz M, Sumari P, Geem ZW, Gandomi AH (2022) Reptile search algorithm (RSA): a nature-inspired meta-heuristic optimizer. Expert Syst Appl 191:116158. https://doi.org/10.1016/j.eswa.2021.116158
Akyol S (2018) Güncel akıllı optimizasyon algoritmalarıyla duygu sınıflandırılması [Emotion classification with current intelligent optimization algorithms]. PhD thesis, Fırat University, Graduate School of Natural and Applied Sciences, Elazığ
Akyol S, Alatas B (2012) Güncel sürü zekası optimizasyon algoritmaları [Current swarm intelligence optimization algorithms]. Nevşehir Üniversitesi Fen Bilimleri Enstitüsü Dergisi 1(1):36–50
Akyol S, Alatas B (2017) Plant intelligence based metaheuristic optimization algorithms. Artif Intell Rev 47(4):417–462. https://doi.org/10.1007/s10462-016-9486-6
Akyol S, Alatas B (2020) Sentiment classification within online social media using whale optimization algorithm and social impact theory based optimization. Phys A 540:123094. https://doi.org/10.1016/j.physa.2019.123094
Alatas B (2007) Kaotik haritalı parçacık sürü optimizasyon algoritmaları geliştirme [Development of chaotic-map particle swarm optimization algorithms]. PhD thesis, Fırat University, Graduate School of Natural and Applied Sciences, Elazığ
Alatas B (2011) ACROA: artificial chemical reaction optimization algorithm for global optimization. Expert Syst Appl 38(10):13170–13180. https://doi.org/10.1016/j.eswa.2011.04.126
Alatas B, Bingol H (2020) Comparative assessment of light-based intelligent search and optimization algorithms. Light Eng. https://doi.org/10.33383/2019-029
AlRassas AM, Al-qaness MA, Ewees AA, Ren S, Abd Elaziz M, Damaševičius R, Krilavičius T (2021) Optimized ANFIS model using Aquila optimizer for oil production forecasting. Processes 9(7):1194. https://doi.org/10.3390/pr9071194
Ashrafi SM, Dariane AB (2011) A novel and effective algorithm for numerical optimization: Melody search (MS). In: 2011 11th International Conference on Hybrid Intelligent Systems (HIS) pp. 109–114. IEEE. https://doi.org/10.1109/HIS.2011.6122089
Askarzadeh A (2016) A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm. Comput Struct 169:1–12. https://doi.org/10.1016/j.compstruc.2016.03.001
Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition. In: IEEE Congress on Evolutionary Computation, CEC 2007, pp 4661–4667. https://doi.org/10.1109/CEC.2007.4425083
Blum C, Jakob P, Raidl GR, Roli A (2011) Hybrid metaheuristics in combinatorial optimization: a survey. Appl Soft Comput 11(6):4135–4151. https://doi.org/10.1016/j.asoc.2011.02.032
Borji A (2007) A new global optimization algorithm inspired by parliamentary political competitions. Lect Notes Comput Sci 4827(2007):61–71. https://doi.org/10.1007/978-3-540-76631-5_7
Bouchekara HREH (2020) Most valuable player algorithm: a novel optimization algorithm inspired from sport. Oper Res Int Journal 20(1):139–195. https://doi.org/10.1007/s12351-017-0320-y
Das AK, Pratihar DK (2019) A new bonobo optimizer (BO) for real-parameter optimization. In: 2019 IEEE Region 10 Symposium (TENSYMP), IEEE, pp 108–113. https://doi.org/10.1109/TENSYMP46218.2019.8971108
Dehghani M, Montazeri Z, Dehghani A, Ramirez-Mendoza RA, Samet H, Guerrero JM, Dhiman G (2020a) MLO: Multi leader optimizer. Int J Intell Eng Syst 13:364–373. https://doi.org/10.22266/ijies2020a.1231.32
Dehghani M, Montazeri Z, Dhiman G, Malik OP, Morales-Menendez R, Ramirez-Mendoza RA, Parra-Arroyo L (2020b) A spring search algorithm applied to engineering optimization problems. Appl Sci 10(18):6173. https://doi.org/10.3390/app10186173
Dehghani M, Montazeri Z, Givi H, Guerrero JM, Dhiman G (2020c) Darts game optimizer: a new optimization technique based on darts game. Int J Intell Eng Syst 13(5):286–294. https://doi.org/10.22266/ijies2020c.1031.26
Dhiman G (2021) ESA: A hybrid bio-inspired metaheuristic optimization approach for engineering problems. Eng Comput 37(1):323–353. https://doi.org/10.1007/s00366-019-00826-w
Dhiman G, Kaur A (2019) STOA: A bio-inspired based optimization algorithm for industrial engineering problems. Eng Appl Artif Intell 82:148–174. https://doi.org/10.1016/j.engappai.2019.03.021
Dhiman G, Kumar V (2017) Spotted hyena optimizer: A novel bio-inspired based metaheuristic technique for engineering applications. Adv Eng Softw 114:48–70. https://doi.org/10.1016/j.advengsoft.2017.05.014
Dhiman G, Kumar V (2018) Emperor penguin optimizer: a bio-inspired algorithm for engineering problems. Knowl-Based Syst 159:20–50. https://doi.org/10.1016/j.knosys.2018.06.001
Dhiman G, Kumar V (2019) Seagull optimization algorithm: theory and its applications for large-scale industrial engineering problems. Knowl-Based Syst 165:169–196. https://doi.org/10.1016/j.knosys.2018.11.024
Dhiman G, Garg M, Nagar A, Kumar V, Dehghani M (2021) A novel algorithm for global optimization: rat swarm optimizer. J Ambient Intell Humaniz Comput 12(8):8457–8482. https://doi.org/10.1007/s12652-020-02580-0
Dokeroglu T, Sevinc E, Kucukyilmaz T, Cosar A (2019) A survey on new generation metaheuristic algorithms. Comput Ind Eng 137:106040. https://doi.org/10.1016/j.cie.2019.106040
Hashim FA, Hussain K, Houssein EH, Mabrouk MS, Al-Atabany W (2021) Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems. Appl Intell 51(3):1531–1551. https://doi.org/10.1007/s10489-020-01893-z
Kaur S, Awasthi LK, Sangal AL, Dhiman G (2020) Tunicate swarm algorithm: a new bio-inspired based metaheuristic paradigm for global optimization. Eng Appl Artif Intell 90:103541. https://doi.org/10.1016/j.engappai.2020.103541
Layeb A (2021) The tangent search algorithm for solving optimization problems. arXiv preprint arXiv:2104.02559.
Ma L, Li J, Zhao Y (2021) Population forecast of China’s rural community based on CFANGBM and improved Aquila optimizer algorithm. Fractal Fract 5(4):190. https://doi.org/10.3390/fractalfract5040190
Mahajan S, Abualigah L, Pandit AK, Altalhi M (2022) Hybrid Aquila optimizer with arithmetic optimization algorithm for global optimization tasks. Soft Comput 26(10):4863–4881. https://doi.org/10.1007/s00500-022-06873-8
Mirjalili S (2016) SCA: a Sine Cosine Algorithm for solving optimization problems. Knowl-Based Syst 96:120–133. https://doi.org/10.1016/j.knosys.2015.12.022
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008
Murty KG (2003) Optimization models for decision making. Internet edition, vol 1. http://www-personal.umich.edu/~murty/books/opti_model/. Accessed 23 Oct 2021
Nadimi-Shahraki MH, Taghian S, Mirjalili S (2021) An improved grey wolf optimizer for solving engineering problems. Expert Syst Appl 166:113917. https://doi.org/10.1016/j.eswa.2020.113917
Upadhyay P, Chhabra JK (2021) Multilevel thresholding based image segmentation using new multistage hybrid optimization algorithm. J Ambient Intell Humaniz Comput 12:1081–1098. https://doi.org/10.1007/s12652-020-02143-3
Vashishtha G, Kumar R (2021) Autocorrelation energy and Aquila optimizer for MED filtering of sound signal to detect bearing defect in Francis turbine. Meas Sci Technol 33(1):015006. https://doi.org/10.1088/1361-6501/ac2cf2
Verma P, Parouha RP (2021) An advanced hybrid algorithm for nonlinear function optimization with real world applications. J Ambient Intell Humaniz Comput. https://doi.org/10.1007/s12652-021-03588-w
Wang S, Jia H, Abualigah L, Liu Q, Zheng R (2021) An improved hybrid Aquila optimizer and harris hawks algorithm for solving industrial engineering optimization problems. Processes 9(9):1551. https://doi.org/10.3390/pr9091551
Akyol, S. A new hybrid method based on Aquila optimizer and tangent search algorithm for global optimization. J Ambient Intell Human Comput 14, 8045–8065 (2023). https://doi.org/10.1007/s12652-022-04347-1