1 Introduction

Optimization is the task of searching for the best among all candidate solutions to a problem under given conditions. An optimization problem can be expressed as any problem that aims to find unknown variable values such that certain constraints are met (Murty 2003). In classical optimization algorithms, while the solution method depends mostly on the types of variables (integer, real, etc.), objectives, and constraints (linear, non-linear, etc.) used in modeling the problem, its effectiveness also depends on the shape of the solution space (concave, convex, etc.), the number of constraints, and the number of decision variables. Furthermore, classical algorithms do not provide general solution strategies that can be applied to formulations involving different types of decision variables, objectives, and constraints; most of them solve models with particular types of objective and constraint functions. However, the formulation of optimization problems in many fields simultaneously requires various types of decision variables, objective functions, and constraint functions. Therefore, general-purpose metaheuristic optimization algorithms have been proposed as alternatives to classical methods. These techniques have gained considerable popularity in recent years because of their high computational efficiency and the ease with which they can be adapted to new problems (Akyol and Alatas 2012, 2017; Alatas 2007). In most real-life problems, the search space is infinite or so large that not all solutions can be evaluated. Thus, a good solution must be found by evaluating only some of the solutions within a reasonable time; for such problems, assessing solutions in a reasonable time effectively means assessing a subset of the entire search space. Which solutions are selected, and how, varies from one metaheuristic technique to another.

General-purpose metaheuristic methods can be grouped into eight categories: biology-based, physics-based, swarm-based, music-based, social-based, chemistry-based, sports-based, and mathematics-based. There are also hybrid methods that combine these (Akyol and Alatas 2012, 2017; Alatas 2007). The Tunicate Swarm Algorithm (Kaur et al. 2020), Reptile Search Algorithm (Abualigah et al. 2022), Spotted Hyena Optimizer (Dhiman and Kumar 2017), Emperor Penguin Optimizer (Dhiman and Kumar 2018), Seagull Optimization Algorithm (Dhiman and Kumar 2019), and Sooty Tern Optimization Algorithm (Dhiman and Kaur 2019) are biology-based; the Parliamentary Optimization Algorithm (Borji 2007) and the Imperialist Competitive Algorithm (Atashpaz-Gargari and Lucas 2007) are social-based; the Artificial Chemical Reaction Algorithm (Alatas 2011) is chemistry-based; the Melody Search Algorithm (Ashrafi and Dariane 2011) is music-based; the Archimedes Optimization Algorithm (Hashim et al. 2021) and Spring Search Algorithm (Dehghani et al. 2020b) are physics-based; the Bonobo Optimizer (Das and Pratihar 2019) and Rat Swarm Optimizer (Dhiman et al. 2021) are swarm-based; the Most Valuable Player Algorithm (Bouchekara 2020) and Darts Game Optimizer (Dehghani et al. 2020c) are sports-based; and the Arithmetic Optimization Algorithm (Abualigah et al. 2021a) is mathematics-based. The Multi Leader Optimizer (Dehghani et al. 2020a) can be classified as both swarm-based and social-based (Akyol 2018; Akyol and Alatas 2012, 2020; Alatas 2007). In some studies, algorithms inspired by plant intelligence are examined separately as plant-based methods (Akyol and Alatas 2017), and algorithms inspired by the laws of reflection and refraction of light are categorized as light-based methods (Alatas and Bingol 2020).

AO (Abualigah et al. 2021b) is a swarm-based metaheuristic algorithm and TSA (Layeb 2021) is a mathematics-based one. AO was developed with inspiration from the hunting skills of the Aquila, while TSA is based on the tangent function. The number of studies on AO and TSA in the literature is still small: only a few problem-oriented studies have been performed with AO, and there is no study yet on improving TSA. Using AO, AlRassas et al. developed an Adaptive Neuro-Fuzzy Inference System model and used it to predict oil production at two locations in Yemen and China (AlRassas et al. 2021). Elaziz et al. proposed a hybrid of deep learning and AO and applied it to Covid-19 image data (Abd Elaziz et al. 2021). Ma and Zhao proposed an improved version of AO using wavelet mutation and quasi-opposite learning strategies and then combined this improved AO with the Bernoulli model to estimate the rural community population in China (Ma et al. 2021). Vashishtha and Kumar used AO to tune the length of the minimum deconvolution filter used to detect bearing defects in the Francis turbine (Vashishtha and Kumar 2021).

Metaheuristic algorithms need both exploration and exploitation capabilities, and these two abilities must work in balance. In some metaheuristic algorithms, exploitation works well while exploration is lacking; in others, exploration works well but exploitation is insufficient due to the stochastic nature of the method (Dhiman 2021). By hybridizing two algorithms, their strengths can be combined and their weaknesses eliminated, yielding a more efficient algorithm. Many optimization algorithms suffer from premature convergence to a local optimum in the exploitation phase (Upadhyay and Chhabra 2021). Hybrid algorithms, which are often preferred over individual metaheuristics, are used to solve a broader range of optimization problems (Verma and Parouha 2021). Compared with the classical forms of metaheuristic methods, their hybrid versions show substantial gains (Dokeroglu et al. 2019), and recent research suggests that hybrid metaheuristic algorithms can provide more efficient behavior and greater flexibility (Blum et al. 2011). The fundamental purpose of hybrid algorithms is to combine the characteristics of various metaheuristic techniques and reap the benefits of synergy.

AO and TSA are two of the newest metaheuristic methods; no hybrid version of TSA exists, and only two hybrid versions of AO exist in the literature. Wang et al. proposed a new hybrid algorithm by combining AO's good exploration ability with the exploitation phase of the Harris Hawks Optimizer (Wang et al. 2021). Mahajan et al. stated that AO converges early because the global best location is added directly in the location update, which strengthens the search phase but can trap the algorithm in a local optimum. They also noted that the convergence speed of the Arithmetic Optimization Algorithm (AOA) is reported to be low and its search capability weak. To overcome these deficiencies, they proposed the hybrid AOAAO algorithm (Mahajan et al. 2022).

Based on these observations, this article proposes a new hybrid algorithm, AO-TSA, for complex solution search and optimization problems. In the AO method, the exploration phase is applied during the first two-thirds of the iterations, so the exploitation phase is generally insufficient. To give more room to the exploitation capability, the effective intensification phase of TSA is applied in place of AO's narrowed exploration phase. In addition, TSA's local-minimum escape steps are applied to avoid stagnation in local minima. The performance of AO-TSA is compared, according to various metrics, with that of other existing metaheuristic algorithms on a total of twenty-one benchmark functions, including six unimodal, six multimodal, six fixed-dimension multimodal, and three modern CEC 2019 benchmark functions. Two real engineering design problems are also used for comparison, and statistical test analysis and sensitivity analysis are performed. The proposed hybrid method achieves better results earlier than AO and TSA. In the second part of this study, AO and TSA are explained in detail and their flow diagrams are given. In the third part, the hybrid AO-TSA algorithm is explained and the flow diagram shows how the two algorithms are combined. The fourth part introduces the benchmark functions and real engineering design problems used. In the fifth part, the performances of standard AO, standard TSA, AO-TSA, and metaheuristic algorithms with promising results in the literature are compared using the experimental results obtained on the benchmark functions. The paper is concluded, along with possible future research directions, in the sixth part.

2 Standard Aquila optimizer and tangent search algorithm

In this section, the AO (Abualigah et al. 2021b), which is inspired by the hunting skills of Aquila, and TSA (Layeb 2021), which is based on the mathematical tangent function, are explained in detail and flow diagrams are given.

2.1 Aquila optimizer

Aquilas are the most common species of eagle and are among the best-known birds of prey in the Northern Hemisphere. The four hunting methods used by Aquilas are described as follows. The first method, a high soar with a vertical stoop, is used by the Aquila to hunt birds while flying high above the ground; after discovering its prey and getting closer and closer to it, the Aquila goes into a long, low-angle glide with increasing speed. The second method, contour flight with a short glide attack, is a flight in which the Aquila rises low above the ground. In the third method, the Aquila slowly approaches its victim and tries to land on the prey it has chosen on the ground; this method is called low flight with a slow descent attack. The last method is walking and grabbing prey, in which the Aquila tries to catch its prey by walking on the ground. The AO algorithm was inspired by these four hunting methods of the Aquila.

As in all population-based methods, AO starts with an initial population (\(X\)) of randomly generated values between the lower and upper limits. The optimal result is the best solution obtained from the candidate solutions over the iterations. \(X\) consists of a set of candidate solutions generated randomly using Eq. (1), where \(N\) represents the total number of candidate solutions (population size), \({X}_{ij}\) represents the \(j\)th decision value (position) of the \(i\)th solution, and \(D\) represents the dimension of the problem. \(rand\) is a random number in [0, 1], \({UB}_{j}\) is the \(j\)th upper limit, and \({LB}_{j}\) is the \(j\)th lower limit (Abualigah et al. 2021b).

$$ X_{ij} = rand \times \left( {UB_{j} - LB_{j} } \right) + LB_{j} ,\quad i = 1, 2, \ldots ,N,\; j = 1, 2, \ldots ,D $$
(1)
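To make Eq. (1) concrete, a minimal NumPy sketch of the population initialization is given below; the function name, the array layout (one row per candidate solution), and the use of NumPy's default random generator are illustrative assumptions rather than part of the original AO description.

```python
import numpy as np

def initialize_population(N, D, lb, ub, rng=None):
    """Eq. (1): X_ij = rand * (UB_j - LB_j) + LB_j, i = 1..N, j = 1..D."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (D,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (D,))
    # One uniform random number per decision variable of every candidate solution.
    return rng.random((N, D)) * (ub - lb) + lb

# Example: 30 candidate solutions of a 10-dimensional problem bounded by [-100, 100].
X = initialize_population(30, 10, -100, 100)
```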

Based on the four hunting behaviors of the Aquila, the AO algorithm represents each behavior with a search method: selection of the search space by a high soar with a vertical stoop (extended exploration), exploration within a diverging search space by contour flight with a short glide attack (narrowed exploration), exploitation within a converging search space by low flight with a slow descent attack (extended exploitation), and catching the prey by walking on the ground (narrowed exploitation). In the AO algorithm, where \(T\) is the maximum number of iterations and \(t\) is the current iteration number, the exploration steps are applied while the condition \(t\le \left(\frac{2}{3}\right)\times T\) is met; otherwise, the exploitation steps are started.

The behavior of the Aquila is thus modeled as a mathematical optimization paradigm that tries to find the best solution subject to certain constraints (Abualigah et al. 2021b).

2.1.1 Step 1: extended exploration \(({X}_{1})\)

In this method \({(X}_{1})\), the Aquila explores the hunting ground and selects the best hunting area through a high soar with a vertical stoop. Here, AO conducts a broad search by flying high to determine the region of the search space where the prey is located. This behavior is expressed mathematically in Eq. (2).

$$ X_{1} \left( {t + 1} \right) = X_{best} \left( t \right) \times \left( {1 - \frac{t}{T}} \right) + \left( {X_{M} \left( t \right) - X_{best} \left( t \right) \times rand} \right) $$
(2)

Here, \({X}_{1}\left(t+1\right)\) represents the solution of the (\(t+1)\)th iteration produced by the first method (\({X}_{1}\)). \({X}_{best}\left(t\right)\) represents the best solution obtained up to the \(t\)th iteration and approximates the location of the prey. The term \(\left(1-\frac{t}{T}\right)\) controls the extent of the extended search as the iterations progress. \({X}_{M}\left(t\right)\), calculated using Eq. (3), is the mean position of the existing solutions at the \(t\)th iteration, and \(rand\) is a random number in [0, 1] (Abualigah et al. 2021b).

$$ X_{M} \left( t \right) = \frac{1}{N}\mathop \sum \limits_{i = 1}^{N} X_{i} \left( t \right),\quad \forall j = 1, 2, \ldots ,D $$
(3)
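A possible NumPy sketch of this extended exploration step (Eqs. (2)–(3)) follows; the function signature is an assumption, and only the update rule itself is taken from the equations above.

```python
import numpy as np

def extended_exploration(X, X_best, t, T, rng=None):
    """Eq. (2): X1(t+1) = X_best(t) * (1 - t/T) + (X_M(t) - X_best(t) * rand)."""
    rng = np.random.default_rng() if rng is None else rng
    X_M = X.mean(axis=0)              # Eq. (3): mean position of the current population
    return X_best * (1.0 - t / T) + (X_M - X_best * rng.random())
```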

2.1.2 Step 2: narrowed exploration \({(X}_{2})\)

In this method \({(X}_{2})\), the Aquila locates the hunting ground during a high flight. In the contour flight with short glide attack method, the Aquila circles above the prey it has selected and prepares the attack area before striking. In other words, AO searches in detail around the chosen prey before attacking. This behavior is expressed mathematically in Eq. (4).

$$ X_{2} \left( {t + 1} \right) = X_{best} \left( t \right) \times Levy\left( {} \right) + X_{R} \left( t \right) + \left( {y - x} \right) \times rand $$
(4)

\({X}_{2}\left(t+1\right)\) represents the solution of the \(\left(t+1\right)\)th iteration produced by the second method \({(X}_{2})\). The Levy flight distribution \(Levy\left(\right)\) is calculated as given in Eq. (5). \({X}_{R}\left(t\right)\) is a solution selected at random from the population (its index drawn from the interval \([1, N]\)) at the \(t\)th iteration (Abualigah et al. 2021b).

$$ Levy\left( {} \right) = s \times \frac{u \times \sigma }{{\left| v \right|^{{\frac{1}{\beta }}} }} $$
(5)

Here, \(s\) is a constant with a value of 0.01. \(u\) and \(v\) are random numbers between 0 and 1. \(\sigma \) is computed by Eq. (6).

$$ \sigma = \left( {\frac{{{\Gamma }\left( {1 + \beta } \right) \times \sin \frac{\pi \beta }{2}}}{{{\Gamma }\left( {\frac{1 + \beta }{2}} \right) \times \beta \times 2^{{\left( {\frac{\beta - 1}{2}} \right)}} }}} \right) $$
(6)

Here, \(\beta \) is a constant with a value of 1.5. In Eq. (4), \(x\) and \(y\) are utilized for representing the spiral shape in the search and are calculated using Eqs. (7), (8), (9), (10), and (11).

$$ y = r \times {\text{cos}}\left( \theta \right) $$
(7)
$$ x = r \times {\text{sin}}\left( \theta \right) $$
(8)
$$ r = r_{1} + U \times D_{1} $$
(9)
$$ \theta = - \omega \times D_{1} + \theta_{1} $$
(10)
$$ \theta_{1} = \frac{3 \times \pi }{2} $$
(11)

\({r}_{1}\) takes a value between 1 and 20 and determines the number of search cycles, and \(U\) is a constant with a value of 0.00565. \({D}_{1}\) consists of integers from 1 to the dimension of the search space (\(D\)), and \(\omega \) is a small constant equal to 0.005 (Abualigah et al. 2021b).
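The narrowed exploration step (Eqs. (4)–(11)) might be sketched as below. Drawing u and v uniformly from [0, 1) and taking D1 as the integer vector (1, …, D) follow the description above; where the text is not explicit (for example, whether rand in Eq. (4) is a scalar), the choices made here are assumptions.

```python
import numpy as np
from math import gamma, pi, sin

def levy_flight(D, beta=1.5, s=0.01, rng=None):
    """Eqs. (5)-(6): Levy step with s = 0.01 and beta = 1.5."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)) / (
        gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))      # Eq. (6)
    u, v = rng.random(D), rng.random(D)                            # random numbers in [0, 1)
    return s * (u * sigma) / np.abs(v) ** (1 / beta)               # Eq. (5)

def narrowed_exploration(X, X_best, D, U=0.00565, omega=0.005, rng=None):
    """Eq. (4): X2(t+1) = X_best(t) * Levy(D) + X_R(t) + (y - x) * rand."""
    rng = np.random.default_rng() if rng is None else rng
    X_R = X[rng.integers(X.shape[0])]             # a randomly selected current solution
    D1 = np.arange(1, D + 1)                      # integers 1..D
    r1 = rng.integers(1, 21)                      # number of search cycles in 1..20
    r = r1 + U * D1                               # Eq. (9)
    theta = -omega * D1 + 3 * pi / 2              # Eqs. (10)-(11)
    y, x = r * np.cos(theta), r * np.sin(theta)   # Eqs. (7)-(8)
    return X_best * levy_flight(D, rng=rng) + X_R + (y - x) * rng.random()
```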

2.1.3 Step 3: extended exploitation \({(X}_{3})\)

In this method \({(X}_{3})\), when the area of the prey has been accurately located and the Aquila is ready to land and attack, it descends vertically toward the prey with a preliminary attack to gauge its reaction. This method, in which the Aquila exploits the selected area of the prey to approach and attack, is called low flight with a slow descent attack. The mathematical expression of this behavior is shown in Eq. (12).

$$ X_{3} \left( {t + 1} \right) = (X_{best} \left( t \right) - X_{M} \left( t \right)) \times \alpha - rand + \left( {\left( {UB - LB} \right) \times rand + LB} \right) \times \delta $$
(12)

\({X}_{3}\left(t+1\right)\) is the solution of the \((t+1)\)th iteration produced by the third search method (\({X}_{3}\)). \(\alpha \) and \(\delta \) are exploitation adjustment parameters, both set to a small value of 0.1 (Abualigah et al. 2021b).
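Eq. (12) can be sketched as follows; whether each rand is drawn once per solution or once per variable is not fully specified above, so the combination used here (a scalar for the subtracted term and a vector for the random point in the domain) is an assumption.

```python
import numpy as np

def extended_exploitation(X_best, X_M, lb, ub, D, alpha=0.1, delta=0.1, rng=None):
    """Eq. (12): X3(t+1) = (X_best - X_M) * alpha - rand + ((UB - LB) * rand + LB) * delta."""
    rng = np.random.default_rng() if rng is None else rng
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (D,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (D,))
    random_point = (ub - lb) * rng.random(D) + lb      # a random point inside the bounds
    return (X_best - X_M) * alpha - rng.random() + random_point * delta
```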

2.1.4 Step 4: narrowed exploitation \({(X}_{4})\)

In the last method \({(X}_{4})\), when the Aquila approaches its prey on the ground, it attacks according to the stochastic movements of the prey. This step is called walking and grabbing prey. Finally, the Aquila attacks the prey at its last location. This behavior is expressed mathematically in Eq. (13).

$$ X_{4} \left( {t + 1} \right) = QF \times X_{best} \left( t \right) - (G_{1} \times X\left( t \right) \times rand) - G_{2} \times Levy() + rand \times G_{1} $$
(13)

\({X}_{4}\left(t+1\right)\) is the solution of the \((t+1)\)th iteration produced by the fourth search method (\({X}_{4}\)). \(QF\), calculated using Eq. (14), is a quality function used to balance the search strategies. \({G}_{1}\), calculated using Eq. (15), represents the various movements of the Aquila while tracking the prey during its escape. \({G}_{2}\), calculated using Eq. (16), decreases from 2 to 0 and denotes the flight slope of the Aquila while it follows the prey from its first to its last position. \(X\left(t\right)\) is the current solution at the \(t\)th iteration (Abualigah et al. 2021b).

$$ QF\left( t \right) = t^{{\frac{2 \times rand - 1}{{\left( {1 - T} \right)^{2} }}}} $$
(14)
$$ G_{1} = 2 \times rand - 1 $$
(15)
$$ G_{2} = 2 \times \left( {1 - \frac{t}{T}} \right) $$
(16)

\(QF\left( t \right)\) is the quality function value of the \(t\)th iteration. The flowchart of the AO is given in Fig. 1.

Fig. 1
figure 1

Flowchart of AO
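The narrowed exploitation step of Eqs. (13)–(16) might be sketched as below. The Levy step is passed in as an argument (for instance, from the levy_flight helper sketched after Eq. (11)), and iterations are assumed to be counted from t = 1 so that the power in Eq. (14) is well defined.

```python
import numpy as np

def narrowed_exploitation(X_i, X_best, t, T, levy_step, rng=None):
    """Eq. (13): X4(t+1) = QF * X_best - (G1 * X(t) * rand) - G2 * Levy(D) + rand * G1."""
    rng = np.random.default_rng() if rng is None else rng
    QF = t ** ((2.0 * rng.random() - 1.0) / (1.0 - T) ** 2)   # Eq. (14), quality function
    G1 = 2.0 * rng.random() - 1.0                             # Eq. (15), in [-1, 1)
    G2 = 2.0 * (1.0 - t / T)                                  # Eq. (16), decreases from 2 to 0
    return QF * X_best - (G1 * X_i * rng.random()) - G2 * levy_step + rng.random() * G1
```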

2.2 Tangent search algorithm

The Tangent Search Algorithm is built around the tangent function, which is used to explore the search space effectively. In TSA, the moves are performed with a step of the form “\(step\times \mathrm{tan}(\theta )\)”. By analogy with the Levy flight, this move is called a tangent flight because the tangent function drives the step.

A better balance between exploration and intensification (exploitation) makes an optimization algorithm more successful: too much intensification causes an algorithm to converge quickly to a local minimum, while too much exploration slows it down and can scatter the search excessively. To reach such a balance, TSA is built from three main components: intensification, exploration, and escape from the local minimum. The local-minimum escape procedure is applied to a randomly chosen individual (solution) in each iteration to avoid getting stuck in a local minimum. As in other population-based algorithms, the individuals of the initial TSA population are generated using Eq. (1) so that they are evenly distributed within the boundaries of the solution space (Layeb 2021).

2.2.1 Intensification phase

In this phase, TSA performs a random local walk in which each decision variable of the solution is updated according to either Eq. (17) or Eq. (18): a fraction of the variables (20% for problems with dimension larger than four, and 50% for problems with four or fewer dimensions) take the values of the corresponding variables in the current best solution (Eq. (18)), while the remaining variables are updated by the tangent walk of Eq. (17).

$$ X_{i} \left( {t + 1} \right) = X_{i} \left( t \right) + step \times \tan \left( \theta \right) \times \left( {X_{i} \left( t \right) - X_{best} \left( t \right)} \right) $$
(17)
$$ X_{i} \left( {t + 1} \right) = X_{best} \left( t \right),\;{\text{if variable }}i{\text{ is selected}} $$
(18)

As a result, the resulting solution \({X}_{i}\left(t+1\right)\) shares less than 50% of its variables with the current best solution. If any values of the new solution exceed the \(LB\) and \(UB\) limits of the problem, Eq. (19) is used for correction (Layeb 2021).

$$ X = rand \times \left( {UB - LB} \right) + LB $$
(19)
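A minimal sketch of the intensification phase (Eqs. (17)–(19)) is given below. The step and the angle theta are taken as inputs, since their generation is described later; the per-variable selection mask is one way to realize the replacement ratios stated above and is therefore an interpretation, not the reference implementation.

```python
import numpy as np

def tsa_intensification(X_i, X_best, step, theta, lb, ub, rng=None):
    """Eqs. (17)-(19): tangent local walk, partial copy from the best, bound repair."""
    rng = np.random.default_rng() if rng is None else rng
    D = X_i.size
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (D,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (D,))
    # Eq. (17): tangent-driven local random walk around the current solution.
    X_new = X_i + step * np.tan(theta) * (X_i - X_best)
    # Eq. (18): copy 20% (D > 4) or 50% (D <= 4) of the variables from the best solution.
    ratio = 0.2 if D > 4 else 0.5
    mask = rng.random(D) < ratio
    X_new[mask] = X_best[mask]
    # Eq. (19): re-sample any variable that left the [LB, UB] box.
    out = (X_new < lb) | (X_new > ub)
    if np.any(out):
        X_new[out] = rng.random(out.sum()) * (ub - lb)[out] + lb[out]
    return X_new
```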

2.2.2 Exploration phase

Unlike local search methods, TSA has a large exploration capacity thanks to its global random walk, which uses the tangent flight and a variable step size. The tangent function allows the search space to be explored more efficiently: when \(\theta \) is close to \(\pi /2\), the tangent value is large and the resulting solution is far from the current solution, whereas when \(\theta \) is close to 0, the tangent takes small values and the new solution stays close to the current one. Therefore, Eq. (20), which belongs to the exploration phase, alternates between a local and a global random walk. In the exploratory search, this equation is applied to each variable with probability \(1/D\) (Layeb 2021).

$$ X_{i} \left( {t + 1} \right) = X_{i} \left( t \right) + step \times \tan \left( \theta \right) $$
(20)
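Eq. (20), applied to each variable with probability 1/D, might look as follows; step and theta are assumed to be scalars supplied by the caller.

```python
import numpy as np

def tsa_exploration(X_i, step, theta, rng=None):
    """Eq. (20): X_i(t+1) = X_i(t) + step * tan(theta), per variable with probability 1/D."""
    rng = np.random.default_rng() if rng is None else rng
    D = X_i.size
    move = rng.random(D) < 1.0 / D        # each variable is updated with probability 1/D
    X_new = X_i.copy()
    X_new[move] = X_i[move] + step * np.tan(theta)
    return X_new
```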

2.2.3 Escape from the local minimum

TSA includes a dedicated procedure for escaping the local-minimum stagnation problem. The procedure consists of two parts and is executed with probability \(Pesc\): a random search agent is selected at each iteration, and then either Eq. (21) or Eq. (22) is applied. In addition, new random solutions can replace the worst solutions with a probability of 1% (Layeb 2021). In Eq. (21), \(sign()\) denotes the signum function, which returns the sign of the given number.

$$ X_{i} \left( {t + 1} \right) = X_{i} \left( t \right) + \left( {15 \times sign\left( {rand - 0.5} \right)/\log \left( {1 + t} \right)} \right) \times \left( {X_{best} \left( t \right) - rand \times \left( {X_{best} \left( t \right) - X_{i} \left( t \right)} \right)} \right) $$
(21)
$$ X_{i} \left( {t + 1} \right) = X_{i} \left( t \right) + \tan \left( \theta \right) \times \left( {UB - LB} \right) $$
(22)
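The escape procedure of Eqs. (21)–(22) might be sketched as below; choosing between the two equations with equal probability is an assumption, since the text only states that one of them is applied to the selected agent.

```python
import numpy as np

def tsa_escape_local_minimum(X_i, X_best, t, theta, lb, ub, rng=None):
    """Eqs. (21)-(22): applied to one randomly selected agent with probability Pesc (t >= 1)."""
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() < 0.5:
        # Eq. (21): jump guided by the best solution, damped by log(1 + t).
        coeff = 15.0 * np.sign(rng.random() - 0.5) / np.log(1.0 + t)
        return X_i + coeff * (X_best - rng.random() * (X_best - X_i))
    # Eq. (22): tangent jump proportional to the width of the search range.
    return X_i + np.tan(theta) * (np.asarray(ub, dtype=float) - np.asarray(lb, dtype=float))
```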

\(Pswitch\), \(Pesc\), \(step\), and \(\theta \) are the basic parameters used to weight the intensification and exploration searches. The switching parameter \(Pswitch\in [0, 1]\) controls the balance between the global and local random walks, and \(Pesc\in [0, 1]\) is the probability of applying the escape procedure. The \(step\) parameter guides and emphasizes the intensification and exploration capabilities. A variable step size is used in TSA to obtain a good approximation of the best solution and to avoid a lack of precision: a large step size is adopted at the beginning of the search, and the step size is decreased non-linearly as the iterations progress. Through this adaptive behavior, a good balance between intensification and exploration is sought. The step size is also influenced by the tangent flights, which give the search an oscillating, periodic behavior. TSA uses a non-linear, logarithm-based reduction scheme for the adaptive step size to adjust the exploration and intensification searches, with the logarithm function aiding fine convergence. Moreover, it has been observed that better results are obtained when different step-size functions are used (Layeb 2021); therefore, to be more efficient, TSA uses two step-size variants. The first variant, calculated as given in Eq. (23), is used in the intensification search, and the second, calculated using Eq. (24), is used in the exploration search.

$$ step1 = 10 \times sign\left( {rand - 0.5} \right) \times norm\left( {X_{best} \left( t \right)} \right) \times {\text{log}}\left( {1 + 10 \times D/t} \right) $$
(23)
$$ step2 = sign\left( {rand - 0.5} \right) \times norm\left( {X_{best} \left( t \right) - X_{i} \left( t \right)} \right)/{\text{log}}\left( {20 + t} \right) $$
(24)

Here, \(norm()\) is a specific mathematical norm. The \(sign(-,+)\) component controls the direction of the exploration and intensification phase (Layeb 2021). The flowchart of TSA is given in Fig. 2.

Fig. 2
figure 2

Flowchart of TSA
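The two adaptive step sizes of Eqs. (23)–(24) can be sketched as follows, assuming norm() is the Euclidean norm (the text only states that it is a specific mathematical norm) and that iterations are counted from t = 1.

```python
import numpy as np

def step_intensification(X_best, D, t, rng=None):
    """Eq. (23): step used by the intensification search."""
    rng = np.random.default_rng() if rng is None else rng
    return 10.0 * np.sign(rng.random() - 0.5) * np.linalg.norm(X_best) * np.log(1.0 + 10.0 * D / t)

def step_exploration(X_best, X_i, t, rng=None):
    """Eq. (24): step used by the exploration search."""
    rng = np.random.default_rng() if rng is None else rng
    return np.sign(rng.random() - 0.5) * np.linalg.norm(X_best - X_i) / np.log(20.0 + t)
```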

3 Hybrid Aquila optimizer-tangent search algorithm

Solution search and optimization methods require strong exploration and exploitation abilities. The goal of the exploration phase is to investigate the search space thoroughly and identify the most promising potential solutions, whereas the exploitation phase guides the search toward the best solution attainable by the population. The accuracy and convergence speed of a metaheuristic method can be enhanced by properly balancing exploitation and exploration. A new hybrid algorithm that is stronger than its components can be obtained by hybridizing an algorithm with strong exploitation but weak exploration ability with one that has strong exploration but weak exploitation ability; the aim is to combine the strong exploitation capability of the first algorithm with the powerful exploration capability of the second. Thus, hybridization combines the strengths of the two methods in a single approach, exploiting their advantages while eliminating their disadvantages.

In the AO method, the exploration phase is applied during the first two-thirds of the iterations, and the exploitation phase is insufficient. To give more weight to the exploitation capability, the effective intensification phase of TSA is applied instead of the narrowed exploration phase of AO; in this way, the exploitation stage of the hybrid AO-TSA is strengthened and the optimum solution is reached earlier. In this paper, a hybrid AO-TSA is therefore proposed by integrating the intensification steps of TSA into the narrowed exploration phase of AO. The proposed hybrid algorithm finds the global solution faster without getting stuck in local optima by properly balancing the exploitation and exploration phases. Finally, to escape the local-minimum stagnation problem, the escape-from-local-minimum steps of TSA are also implemented in the new hybrid algorithm. The flow diagram of the proposed hybrid AO-TSA is shown in Fig. 3.

Fig. 3
figure 3

Flowchart of hybrid AO-TSA
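To make the hybridization concrete, a high-level skeleton of AO-TSA is sketched below, reusing the helper functions sketched in Sects. 2.1 and 2.2. The greedy replacement rule, the equal-probability choice between the two methods of each phase, the sampling of the tangent angle, the bound handling, and the escape probability p_esc are illustrative assumptions; the paper itself specifies the hybrid only at the level of the flowchart in Fig. 3.

```python
import numpy as np

def hybrid_ao_tsa(objective, lb, ub, D, N=30, max_evals=10_000, p_esc=0.3, seed=None):
    """Skeleton of AO-TSA: AO with its narrowed exploration replaced by TSA
    intensification, plus TSA's escape-from-local-minimum step."""
    rng = np.random.default_rng(seed)
    lb = np.broadcast_to(np.asarray(lb, dtype=float), (D,))
    ub = np.broadcast_to(np.asarray(ub, dtype=float), (D,))
    X = initialize_population(N, D, lb, ub, rng)                  # Eq. (1)
    fitness = np.apply_along_axis(objective, 1, X)
    evals, t, T = N, 1, max_evals // N
    best = X[fitness.argmin()].copy()
    best_f = fitness.min()
    while evals < max_evals:
        for i in range(N):
            theta = rng.random() * np.pi / 2                      # assumed tangent-angle sampling
            if t <= (2 / 3) * T:                                  # AO exploration window
                if rng.random() < 0.5:
                    cand = extended_exploration(X, best, t, T, rng)            # Eq. (2)
                else:                                             # TSA intensification replaces Eq. (4)
                    step = step_intensification(best, D, t, rng)               # Eq. (23)
                    cand = tsa_intensification(X[i], best, step, theta, lb, ub, rng)
            else:                                                 # AO exploitation window
                if rng.random() < 0.5:
                    cand = extended_exploitation(best, X.mean(axis=0), lb, ub, D, rng=rng)   # Eq. (12)
                else:
                    cand = narrowed_exploitation(X[i], best, t, T,
                                                 levy_flight(D, rng=rng), rng)               # Eq. (13)
            cand = np.clip(cand, lb, ub)                          # simple bound handling (assumption)
            f = objective(cand)
            evals += 1
            if f < fitness[i]:                                    # greedy replacement (assumption)
                X[i], fitness[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f
        if rng.random() < p_esc:                                  # TSA escape on one random agent
            j = rng.integers(N)
            esc = np.clip(tsa_escape_local_minimum(X[j], best, t,
                                                   rng.random() * np.pi / 2, lb, ub, rng), lb, ub)
            f = objective(esc)
            evals += 1
            if f < fitness[j]:
                X[j], fitness[j] = esc, f
                if f < best_f:
                    best, best_f = esc.copy(), f
        t += 1
    return best, best_f
```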

4 Test functions and engineering design problems

Generally, well-defined and complex functions are used as standard measures of optimization methods. Standard problems and interfaces have already been specified so that different optimization algorithms can be compared on different types of search and optimization problems. Six unimodal (Sphere, Rosenbrock, Quartic, Schwefel’s 1.20, Schwefel’s 2.21, and Schwefel’s 2.22), six multimodal (Schwefel, Levy Function, Ackley, Griewank, Penalized, and Rastrigin), and six fixed-dimension multimodal (Foxholes, Kowalik, Goldstein-Price, Shekel 7, Shekel 10, and Six Hump Camel) benchmark functions were used to compare the performances of the AO-TSA, AO, TSA, SCA (Mirjalili 2016), WOA (Mirjalili and Lewis 2016), I-GWO (Nadimi-Shahraki et al. 2021), CSA (Askarzadeh 2016), and BO (Das and Pratihar 2019) algorithms. In addition, three of the CEC 2019 benchmark functions (Storn's Chebyshev Polynomial Fitting Problem, Lennard–Jones Minimum Energy Cluster, and Inverse Hilbert Matrix Problem) were used. The equations, parameters, minimum values, and problem dimensions of these twenty-one test functions are shown in Table 1.

Table 1 Test functions

Two engineering design problems (three-bar truss design and tension/compression spring design) were also used to test the efficiency of the proposed method. Tension/compression spring design is a continuous constrained problem whose aim is to find the best values of the wire diameter (\(x_1\)), mean coil diameter (\(x_2\)), and number of active coils (\(x_3\)) in order to minimize the weight of the tension/compression spring. The mathematical expression of the problem is shown in Eq. (25).

$$ \begin{gathered} \min f\left( x \right) = \left( {x_{3} + 2} \right)x_{2} x_{1}^{2} \hfill \\ g_{1} \left( x \right) = 1 - \frac{{x_{2}^{3} x_{3} }}{{71785x_{1}^{4} }} \le 0 \hfill \\ g_{2} \left( x \right) = \frac{{4x_{2}^{2} - x_{1} x_{2} }}{{12566\left( {x_{2} x_{1}^{3} - x_{1}^{4} } \right)}} + \frac{1}{{5108x_{1}^{2} }} - 1 \le 0 \hfill \\ g_{3} \left( x \right) = 1 - \frac{{140.45x_{1} }}{{x_{2}^{2} x_{3} }} \le 0 \hfill \\ g_{4} \left( x \right) = \frac{{x_{1} + x_{2} }}{1.5} - 1 \le 0 \hfill \\ 0.05 \le x_{1} \le 2,\; 0.25 \le x_{2} \le 1.3,\; 2 \le x_{3} \le 15 \hfill \\ \end{gathered} $$
(25)
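For reference, the objective and constraints of Eq. (25) can be coded directly as below; the static-penalty wrapper and its coefficient are assumptions introduced only so that a constrained problem can be handed to a box-constrained optimizer, since the constraint-handling scheme is not described in this section.

```python
import numpy as np

def spring_weight(x):
    """Eq. (25) objective; x1 = wire diameter, x2 = mean coil diameter, x3 = active coils."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x):
    """Eq. (25) inequality constraints, each required to be <= 0."""
    x1, x2, x3 = x
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)
    g3 = 1.0 - (140.45 * x1) / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return np.array([g1, g2, g3, g4])

def penalized_spring(x, penalty=1e6):
    """Static-penalty objective (penalty coefficient is an assumption, not from the paper)."""
    violation = np.maximum(spring_constraints(x), 0.0)
    return spring_weight(x) + penalty * np.sum(violation ** 2)
```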

The three-bar truss design problem aims to minimize the weight of a three-bar truss. Its objective function is simple but, as with other structural design problems, it is subject to several constraints, namely buckling, deflection, and stress. Equation (26) gives the mathematical expression of the problem.

$$ \begin{gathered} \min f\left( x \right) = \left( {2\sqrt 2 x_{1} + x_{2} } \right) \times l \hfill \\ g_{1} \left( x \right) = \frac{{\sqrt 2 x_{1} + x_{2} }}{{\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} }}P - \sigma \le 0 \hfill \\ g_{2} \left( x \right) = \frac{{x_{2} }}{{\sqrt 2 x_{1}^{2} + 2x_{1} x_{2} }}P - \sigma \le 0 \hfill \\ g_{3} \left( x \right) = \frac{1}{{\sqrt 2 x_{2} + x_{1} }}P - \sigma \le 0 \hfill \\ 0 \le x_{1} ,x_{2} \le 1, \hfill \\ l = 100\;{\text{cm}},\; P = 2\;{\text{kN/cm}}^{2} ,\; \sigma = 2\;{\text{kN/cm}}^{2} \hfill \\ \end{gathered} $$
(26)
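Similarly, Eq. (26) might be coded as below, using the constraints as reconstructed above; the same static-penalty treatment as for the spring problem could be applied on top of these functions.

```python
import numpy as np

def truss_weight(x, l=100.0):
    """Eq. (26) objective: (2*sqrt(2)*x1 + x2) * l, with l = 100 cm."""
    x1, x2 = x
    return (2.0 * np.sqrt(2.0) * x1 + x2) * l

def truss_constraints(x, P=2.0, sigma=2.0):
    """Eq. (26) stress constraints, each required to be <= 0 (P and sigma in kN/cm^2)."""
    x1, x2 = x
    denom = np.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (np.sqrt(2.0) * x1 + x2) / denom * P - sigma
    g2 = x2 / denom * P - sigma
    g3 = 1.0 / (np.sqrt(2.0) * x2 + x1) * P - sigma
    return np.array([g1, g2, g3])
```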

5 Experimental results

In this study, the initial population size of all algorithms was 30 in every experiment, and all algorithms were started and run under equal conditions. The number of function evaluations was used as the termination condition: each algorithm was terminated when the number of function evaluations reached 10,000. Each algorithm was run 20 times for each test function, and the obtained results are presented. The standard parameter values of AO and TSA were used in the experiments.

Tables 2 and 3 show the best, worst, mean, and standard deviation values (“Best”, “Worst”, “Mean”, and “Std”, respectively) obtained after the Hybrid AO-TSA, AO, TSA, WOA, SCA, I-GWO, BO, and CSA were each run 20 times on every test function. According to these tables, the AO-TSA algorithm gives the best results for the Sphere, Rosenbrock, Schwefel’s 2.21, Schwefel, Ackley, Griewank, Rastrigin, Foxholes, Kowalik, Shekel 7, Shekel 10, Six Hump Camel, and Lennard–Jones Minimum Energy Cluster test functions, while the worst results were generally obtained from CSA. The best mean values for the Sphere, Rosenbrock, Quartic, Schwefel’s 1.20, Schwefel’s 2.21, Schwefel’s 2.22, Schwefel, Levy Function, Ackley, Griewank, Penalized, Rastrigin, Shekel 7, Shekel 10, Six Hump Camel, and Kowalik test functions were obtained with the hybrid AO-TSA. For the unimodal benchmark functions, the proposed method achieves the best results in 3 out of 6 functions with respect to the best values and in all 6 functions with respect to the mean values. For the multimodal benchmark functions, AO-TSA gives the best results in 4 out of 6 functions in terms of the best values and in all 6 functions in terms of the mean values.

Table 2 Statistical results for unimodal and multimodal test functions
Table 3 Statistical results for fixed-dimension multimodal and CEC 2019 test functions

As seen in Table 3, for the fixed-dimension multimodal benchmark functions, AO-TSA achieves the best results in 5 out of 6 functions in terms of the best values and in 4 out of 6 functions in terms of the mean values. On the CEC 2019 benchmark functions, the high performance obtained for the other types of benchmark functions is not achieved by the proposed method. In general, the mean values obtained from AO-TSA for the other test functions are better than those of the other algorithms. When the standard deviation values are examined, AO-TSA gives the minimum value for all 6 test functions. Considering these experiments, AO-TSA gives promising results in terms of standard deviation compared with the other algorithms.

Figure 4 shows the change in the mean fitness value with respect to the number of function evaluations for Hybrid AO-TSA, AO, TSA, WOA, SCA, I-GWO, BO, and CSA on the Sphere, Rosenbrock, Quartic, Schwefel’s 1.20, Schwefel’s 2.21, Schwefel’s 2.22, Schwefel, Levy Function, Ackley, Griewank, Penalized, and Rastrigin test functions.

Fig. 4
figure 4

Change of the fitness values according to the number of function evaluations

According to the convergence curves in Fig. 4, the hybrid AO-TSA reaches the best solution very rapidly for all test functions. Especially on the Schwefel’s 2.21, Schwefel, Ackley, Griewank, and Rastrigin test functions, it obtains good results in a very short time compared with the other algorithms. Moreover, Fig. 4 shows that AO-TSA is the algorithm that reaches the best solutions on the test functions used.

Figure 5 shows the change in the mean fitness value with respect to the number of function evaluations for Hybrid AO-TSA, AO, TSA, WOA, SCA, I-GWO, BO, and CSA on the Foxholes, Kowalik, Goldstein-Price, Shekel 7, Shekel 10, Six Hump Camel, Storn’s Chebyshev Polynomial Fitting Problem, Lennard–Jones Minimum Energy Cluster, and Inverse Hilbert Matrix Problem test functions. The convergence curves in Fig. 5 show that the convergence rate of AO-TSA is quite good; especially for the Shekel 7 and Shekel 10 test functions, the proposed method converges much faster than the other algorithms. In general, AO-TSA converges rapidly and achieves better results.

Fig. 5
figure 5

Change of the fitness values according to the number of function evaluations

The effect of the population size (N) is investigated on many benchmark functions. Several population sizes (20, 30, and 50) are examined over 10,000 function evaluations to analyze the parameter sensitivity of AO-TSA. The sensitivity analysis for the population size is shown in Table 4; the results indicate that, in general, the performance of the method improves with larger N values.

Table 4 Sensitivity analysis of AO-TSA for population size (N)

Furthermore, the effect of the iteration threshold at which AO-TSA switches from exploration to exploitation is investigated on many benchmark functions, using threshold values of 1/2, 2/3, and 3/4 of \(T\). The results of this sensitivity analysis are shown in Table 5; the best results are obtained with the threshold \(\left(\frac{2}{3}\right)\times T\).

Table 5 Sensitivity analysis of AO-TSA for t parameter

Box plots of the experimental results of the hybrid AO-TSA proposed in this study and of the seven other algorithms used for comparison are shown in Figs. 6 and 7. When the box plots in Figs. 6 and 7 are examined, it is seen that the lower-quartile, median, and upper-quartile values obtained over 20 runs of AO-TSA on the Rosenbrock, Quartic, Schwefel’s 1.20, Schwefel’s 2.21, Schwefel’s 2.22, Levy Function, Griewank, Penalized, Rastrigin, Foxholes, Kowalik, and Storn’s Chebyshev Polynomial test functions, and especially on the Schwefel test function, are smaller than the values obtained from the other algorithms. For the Sphere, Ackley, Goldstein-Price, and Lennard–Jones Minimum Energy Cluster test functions, the lower-quartile, median, and upper-quartile values obtained from AO-TSA are close to one another and small compared with those obtained from the other algorithms.

Fig. 6
figure 6

Box plots for experimental results from unimodal and multimodal benchmark functions

Fig. 7
figure 7

Box plots for experimental results from fixed-dimension multimodal and CEC 2019 benchmark functions

For the three-bar truss design and tension/compression spring design problems, all algorithms were started and run under equal conditions. Each algorithm was run 20 times; the optimal results obtained and the parameter values that produced them are listed in Tables 6 and 7. The tables show that, for both problems, the best results are obtained with the proposed AO-TSA, followed by the BO and I-GWO algorithms, respectively.

Table 6 Optimal results obtained from the methods for tension/compression spring design
Table 7 Optimal results obtained from the methods for three bar truss design

In addition, the experimental results were evaluated with the Friedman test to examine whether there is a statistically significant difference between the results obtained from the algorithms used in this study. This non-parametric test is used to detect differences in the behavior of multiple algorithms; in the Friedman test, the test cases are placed in rows and the compared algorithms in columns. In the experiments, the null hypothesis (\({H}_{0}\)) is “There is no meaningful difference between the fitness function values of the compared algorithms and those of the proposed hybrid AO-TSA”, and the alternative hypothesis (\({H}_{1}\)) is “There is a meaningful difference between the fitness function values of the compared algorithms and those of the proposed hybrid AO-TSA”. The results of the Friedman test are shown in Table 8. The alpha value was set to 0.05 and the degrees of freedom (\(Df\), the number of compared samples minus 1) was 7. For this alpha value and these degrees of freedom, the critical value \({x}_{F}^{2}\) read from the chi-square distribution table is 14.067. Table 8 shows that the \({x}^{2}\) value is greater than the \({x}_{F}^{2}\) value (\({x}^{2}>{x}_{F}^{2}\)) for all test functions. Accordingly, the null hypothesis (\({H}_{0}\)) is rejected and the alternative hypothesis (\({H}_{1}\)) is accepted; in other words, according to the Friedman test, there is a statistically significant difference between the fitness function values of the compared algorithms and those of the proposed hybrid AO-TSA.

Table 8 Analysis of the results with the Friedman test
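For completeness, the mechanics of the Friedman test reported in Table 8 can be reproduced with SciPy as sketched below; the data matrix used here is a random placeholder, not the paper's measurements, and only illustrates how the x^2 statistic is compared with the critical value 14.067.

```python
import numpy as np
from scipy import stats

# Illustrative placeholder for one test function: rows are the 20 independent runs
# (blocks), columns are the eight compared algorithms; the values are random.
rng = np.random.default_rng(0)
results = rng.random((20, 8))

chi2_stat, p_value = stats.friedmanchisquare(*results.T)   # one sample per algorithm
critical = stats.chi2.ppf(1 - 0.05, df=8 - 1)              # 14.067 for alpha = 0.05, Df = 7
print(f"x^2 = {chi2_stat:.3f}, x_F^2 = {critical:.3f}, p-value = {p_value:.4f}")
if chi2_stat > critical:
    print("Reject H0: the compared algorithms' fitness values differ significantly.")
```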

6 Conclusions

Metaheuristic approaches have grown in popularity in recent years as a result of their high computational power and ease of adaptation. However, no single algorithm can provide the optimal solution to all problems; as a result, new metaheuristic algorithms are being proposed and existing ones are being improved. Metaheuristic algorithms must include both exploration and exploitation capabilities, yet in some algorithms one of these capabilities is sufficient while the other is not. In this study, a new hybrid method, AO-TSA, was proposed by replacing AO's narrowed exploration stage with the intensification phase of TSA to improve AO's exploitation ability. In addition, the local-minimum escape steps of TSA were applied in the new hybrid algorithm. By combining the strengths of these two metaheuristic algorithms, a new, more efficient general-purpose hybrid algorithm was presented as a solution search methodology for complex optimization problems.

Six unimodal, six multimodal, six fixed-dimension multimodal, and three modern CEC 2019 benchmark functions were used to compare the performances of AO-TSA, AO, TSA, WOA, SCA, I-GWO, CSA, and BO. When the mean results over 20 runs were examined, the convergence curves showed that the hybrid AO-TSA generally reaches better solutions earlier than the other algorithms. The best or near-best values for the Best, Mean, and Standard Deviation criteria on the multimodal test functions were found by AO-TSA; furthermore, on the unimodal benchmark functions the best mean values were obtained with the proposed hybrid AO-TSA, and the best results for the multimodal benchmark functions were also achieved by the proposed method. The high performance obtained on the other types of benchmark functions was not achieved on the CEC 2019 benchmark functions. Finally, the Friedman analysis of the results revealed statistically significant differences. Although the results show that hybrid AO-TSA is an effective method for global optimization, the need to specify its parameters a priori appears to be a limitation of AO-TSA; to eliminate this limitation, new approaches can be proposed for setting the algorithm parameters adaptively.

In future studies, it is planned to apply different versions of this algorithm to real-world problems and to propose a multi-objective version. Different local search methods can be integrated into hybrid AO-TSA to increase its accuracy and search power. In addition, distributed and parallel versions of the algorithm can be developed.