1 Introduction

Reaching the global optimum is a difficult task for many engineering optimization problems. For this purpose, various deterministic and meta-heuristic optimization methods have been developed in the literature. Deterministic methods mostly use gradient and sometimes Hessian information to find a search direction. Although these methods are highly efficient at finding the global optimum of convex optimization problems, they are strongly dependent on the initial solution for non-convex and multimodal problems. Furthermore, they may be inadequate for optimization problems whose objective functions are not differentiable. For this reason, meta-heuristic methods are generally preferred for these kinds of optimization problems [1, 2].

Meta-heuristic optimization methods are approaches that mimic processes observed in natural phenomena by incorporating stochastic components [3]. In the literature, these methods are classified into two categories: local search-based optimization methods and population-based optimization methods [4]. In local search-based methods, the search process starts from a single solution whose objective function value is continuously improved by means of the considered heuristic processes. In contrast, population-based methods conduct the same tasks over a population rather than a single solution [5,6,7]. Note that local search-based meta-heuristics have some advantages, including fast convergence and strong exploitation capability. However, population-based methods are usually preferred for solving complex problems, since the performance of local search-based methods is lower, especially during the global exploration process [6]. Different population-based meta-heuristic optimization methods exist in the literature, such as the Genetic Algorithm (GA) [8], Ant Colony Optimization (ACO) [9], Particle Swarm Optimization (PSO) [10], and Differential Evolution (DE) [11]. A huge number of studies employ these meta-heuristic methods to solve optimization problems in different disciplines of science.

The Ant Lion Optimization (ALO) method [7] is a population-based nature-inspired algorithm which mimics the hunting strategy of antlions. ALO has been used in different studies and applications in the literature, such as feature selection [12,13,14,15], data clustering [16,17,18], machine learning [19,20,21,22], and various engineering optimization problems [23,24,25,26,27,28,29]. Although ALO has been successfully employed for solving many complicated problems, some limitations have been reported, such as the requirement of a high number of iterations, the possibility of getting trapped in local optima, and premature convergence, especially for complex or large-scale problems [30,31,32]. To overcome these limitations, different improved versions of ALO have been proposed in the literature. Yao and Wang [33] proposed a Dynamic Adaptive Ant Lion Optimization (DAALO) approach by incorporating Lévy flights into the random walk process of ALO. The results of DAALO are compared with GA, PSO, artificial bee colony (ABC), and the original ALO. Dinkar and Deep [30] improved ALO by replacing the uniform distribution with the Laplace distribution during generation of the random numbers. Furthermore, they considered an Opposition-Based (OB) Learning model for exploring better candidate solutions. The performance of their proposed OB-L-ALO approach is evaluated on several benchmark and engineering design problems. Rajan et al. [34] replaced the elitism process of the original ALO with a weighted elitism concept by including a weight parameter to update ant positions. Their modified ALO approach (MALO) is used for solving the optimal reactive power dispatch problem. Wu et al. [35] improved the original ALO (IALO) by mapping the initial positions of ants and antlions to chaos space.
They evaluated the performance of their IALO approach by solving benchmark problems and comparing the identified results with PSO, the bat algorithm (BA), and ALO. Kilic and Yuzgec [36] improved the original ALO by modifying the process of returning the ants to the search space and used tournament selection instead of roulette wheel selection to solve the parallel machine scheduling problem. Yang et al. [37] introduced an escape strategy and an adaptive convergence criterion for improving the original ALO. Their proposed approach is used for estimating the support vector machine (SVM) parameters, and their results indicated that it provides better results than ALO-, GA-, and PSO-based approaches. Toz [38] proposed a new boundary shrinking procedure for ALO based on the inverse of the incomplete Gamma function. This improved ALO (IALO) approach handles the problem of sudden boundary changes due to the stationary points in the boundary shrinking procedure of the original ALO. Results indicated that this modification improved the performance of the original ALO on the image clustering problem. Guo et al. [31] proposed a new search pattern consisting of a spiral complex path for the random walk process of the original ALO, aiming to accelerate the convergence speed, and evaluated its performance on benchmark problems. Kilic et al. [39] applied the same modifications as Kilic and Yuzgec [36] for training the parameters of an Adaptive Neuro-Fuzzy Inference System (ANFIS) model. The performance of their Improved ALO approach with Tournament selection (IALOT) is also evaluated by solving engineering optimization problems, and the obtained results are compared with DE, PSO, ABC, simulated annealing (SA), and touring ant colony optimization (TACO). Liu et al. [40] proposed a self-adaptive ALO (saALO) by using the trigonometric sine function in the random walk process to accelerate convergence. Chaitanya et al. [41] modified the original ALO (MALO) by including a new tuning parameter in the elitism process; they used the approach to solve the optimal reactive power dispatch problem and compared the results with the original ALO, the moth-flame optimization technique (MFO), PSO, and tabu search (TS). Singh and Singh [42] proposed an improved ALO (IALO) by shrinking the radius of the ants' random walks to avoid over-exploitation. Their approach also replaces the averaging process with the blend crossover (BLX) operation to improve the exploration and exploitation capabilities. Yao et al. [43] proposed a virtual force-directed ant lion optimization algorithm (VF-IALO) by including an adaptive boundary shrinkage factor, a dynamic weight coefficient for updating ant positions, and an improved selection of elite antlions; they used VF-IALO for coverage control of wireless sensor networks. El Bakrawy et al. [44] modified the original ALO (MALO) by including a new parameter which depends on the step length of each ant for updating its position; MALO is evaluated on benchmark functions, and the identified results are compared with the original ALO. Chen et al. [32] proposed an exponential-weighted ant lion optimization algorithm (EALO) to increase the diversity of the random walks of the ants. The performance of EALO is evaluated on benchmark problems, and the obtained results are compared with the original ALO, saALO, PSO, and grey wolf optimization (GWO). Yan et al. [45] proposed a new approach which integrates the original ALO with Lévy flight and the golden sine algorithm (LGSALO). The main objective of LGSALO is to avoid trapping in local optima and to increase the convergence accuracy.
The performance of LGSALO is evaluated with benchmark problems, and the obtained results are compared with PSO and the original ALO. Lin and Ouyang [46] implemented an anti-collision factor from the seagull optimization algorithm in the random walk of ants around the antlion phase to improve the search capability of ALO. Moreover, they proposed an elimination probability for eliminating the last ants sorted by fitness and updating their positions. Shen and Liu [47] proposed an improved ALO algorithm with modifications in four phases: initialization, updating ant positions, random walk around the antlion, and elitism. The performance of their proposed approach is evaluated with the layout optimization problem of branch pipelines. Lu et al. [48] introduced a chaotic strategy in the initial stage of each iteration and applied Gaussian mutation after the antlion iteration to improve performance. They evaluated the algorithm on benchmark problems and then combined it with SVM to solve the feature selection problem.

As can be seen from the aforementioned studies, improvements and modifications are mostly conducted in the random walk around antlions process of the original ALO method. These modifications are motivated by the importance of this process for increasing the diversity of the solutions. In this context, the SHuffled Ant Lion Optimization (SHALO) approach is proposed in this study. In SHALO, the original ALO is modified through two improvements. The first improvement concerns the boundary shrinking procedure of the random walk around antlions process, for which a new exponentially weighted approach is proposed. The second improvement is a shuffling process which aims to increase the diversity of the random walk vector of the original ALO. The performance of the proposed SHALO approach is evaluated by solving four unconstrained and two constrained benchmark problems, as well as two constrained engineering design problems. All the problems are solved 100 times with different random number seeds, and the identified results are statistically compared with the ones obtained using the original ALO, saALO [40], EALO [32], GA [8], and PSO [10] based optimization approaches. The results of this analysis indicate that the proposed SHALO approach significantly improves the solution accuracy and can find the global or near-global optimum solutions with high success rates.

2 Ant Lion Optimization (ALO) method

ALO is a population-based meta-heuristic optimization method first developed by Mirjalili [7] based on the hunting strategy of antlions in nature. Note that antlions belong to the order Neuroptera (net-winged insects) and are carnivorous in both the larval and adult phases. In the larval phase, the antlion digs a cone-shaped sand pit and hides under the sand at the bottom of the pit to hunt insects, especially ants (Fig. 1). When the antlion realizes that a prey (e.g., an ant) walks inside the pit, it starts throwing sand particles at the ant to make it slide toward the bottom of the pit. After hunting and consuming the prey, the antlion repairs the shape of the pit to prepare for the next hunt. Based on this philosophy, the solution of an optimization problem using ALO can be mathematically formulated as follows:

Fig. 1

Hunting strategy of antlions in the constructed cone-shaped pit

Let \(T\) be the maximum number of iterations \((t=1,2,3,\cdots ,T)\); \({{\varvec{x}}}^{t}\) and \({\widetilde{{\varvec{x}}}}^{t}\) be the population matrices of ants and antlions, respectively; \(n\) be the number of ants or antlions (candidate solutions) in the population; \(d\) be the number of decision variables of the problem; and \({x}_{ij}^{t}\) and \({\widetilde{x}}_{ij}^{t}\) be the elements of the \({{\varvec{x}}}^{t}\) and \({\widetilde{{\varvec{x}}}}^{t}\) matrices for the \({i}^{{\text{th}}}\) candidate solution, \({j}^{{\text{th}}}\) decision variable, and \({t}^{{\text{th}}}\) iteration \((i=1,2,3,\cdots ,n\;;\;\; j=1,2,3,\cdots,d\;;\;\; t=1,2,3,\cdots,T)\), respectively. Under these definitions, the population matrices of ants and antlions at the \({t}^{{\text{th}}}\) iteration are defined as follows:

$${{\varvec{x}}}^{t}=\left[\begin{array}{cccc}{x}_{11}^{t}& {x}_{12}^{t}& \dots & {x}_{1d}^{t}\\ {x}_{21}^{t}& {x}_{22}^{t}& \dots & {x}_{2d}^{t}\\ \vdots & \vdots & \ddots & \vdots \\ {x}_{n1}^{t}& {x}_{n2}^{t}& \dots & {x}_{nd}^{t}\end{array}\right]$$
(1)
$${\widetilde{{\varvec{x}}}}^{t}=\left[\begin{array}{cccc}{\widetilde{x}}_{11}^{t}& {\widetilde{x}}_{12}^{t}& \dots & {\widetilde{x}}_{1d}^{t}\\ {\widetilde{x}}_{21}^{t}& {\widetilde{x}}_{22}^{t}& \dots & {\widetilde{x}}_{2d}^{t}\\ \vdots & \vdots & \ddots & \vdots \\ {\widetilde{x}}_{n1}^{t}& {\widetilde{x}}_{n2}^{t}& \dots & {\widetilde{x}}_{nd}^{t}\end{array}\right]$$
(2)

The elements of \({{\varvec{x}}}^{t}\) and \({\widetilde{{\varvec{x}}}}^{t}\) are generated randomly at the start of the optimization \((t=0)\) by considering the lower and upper bounds of the decision variables as follows:

$${x}_{ij}^{t=0}={x}_{{\text{min}},j}^{t=0}+\mathrm{ rand}\left(\mathrm{0,1}\right)\cdot \left({x}_{{\text{max}},j}^{t=0}-{x}_{{\text{min}},j}^{t=0}\right)$$
(3)

where \({x}_{{\text{min}},j}^{t=0}\) and \({x}_{{\text{max}},j}^{t=0}\) are the lower and upper bounds of the \({j}^{{\text{th}}}\) decision variable at the initial stage of the optimization, and \({\text{rand}}\left(0,1\right)\) is a uniform random number in the range \(\left(0,1\right)\). After \({{\varvec{x}}}^{t=0}\) and \({\widetilde{{\varvec{x}}}}^{t=0}\) are generated randomly, the corresponding fitness values are calculated as follows:

$${{\mathcal{F}}}^{{t}}=\left[\begin{array}{cccc}f\left(\left[{x}_{11}^{{t}}\right.\right.& {x}_{12}^{{t}}& \dots & \left.\left.{x}_{1{d}}^{t}\right]\right)\\ f\left(\left[{x}_{21}^{{t}}\right.\right.& {x}_{22}^{{t}}& \dots & \left.\left.{x}_{2{d}}^{{t}}\right]\right)\\ \vdots & \vdots & \ddots & \vdots \\ f\left(\left[{x}_{{n}1}^{{t}}\right.\right.& {x}_{{n}2}^{t}& \dots & \left.\left.{x}_{{nd}}^{{t}}\right]\right)\end{array}\right]$$
(4)
$${{\widetilde{\mathcal{F}}}}^{t}=\left[\begin{array}{cccc}f\left(\left[{\widetilde{x}}_{11}^{t}\right.\right.& {\widetilde{x}}_{12}^{t}& \dots & \left.\left.{\widetilde{x}}_{1d}^{t}\right]\right)\\ f\left(\left[{\widetilde{x}}_{21}^{t}\right.\right.& {\widetilde{x}}_{22}^{t}& \dots & \left.\left.{\widetilde{x}}_{2d}^{t}\right]\right)\\ \vdots & \vdots & \ddots & \vdots \\ f\left(\left[{\widetilde{x}}_{n1}^{t}\right.\right.& {\widetilde{x}}_{n2}^{t}& \dots & \left.\left.{\widetilde{x}}_{nd}^{t}\right]\right)\end{array}\right]$$
(5)

where \({\mathcal{F}}^{t}\) and \({\widetilde{\mathcal{F}}}^{t}\) represent the fitness vectors of the ants and antlions at the \({t}^{{\text{th}}}\) iteration, respectively. After this step, a roulette wheel (RW) approach is employed to mimic the hunting behavior of antlions. This is an important stage in ALO since the selection of antlions is conducted based on their calculated fitness values: by applying the roulette wheel approach, fitter antlions get a higher chance to catch and consume the ants [7]. After the antlions are selected by means of the roulette wheel approach, the sliding behavior of ants toward the bottom of the pit is simulated based on a boundary shrinking procedure in which the lower and upper bounds of the variables are adaptively decreased as follows:

$${x}_{{\text{min}},j}^{t}=\frac{{x}_{{\text{min}},j}^{t}}{I}$$
(6)
$${x}_{{\text{max}},j}^{t}=\frac{{x}_{{\text{max}},j}^{t}}{I}$$
(7)
$$I={10}^{\omega }\frac{t}{T}$$
(8)

As can be seen from Eqs. (6) and (7), the search space of the ants is decreased by dividing the corresponding lower and upper bounds by the parameter \(I\) given in Eq. (8). Note that the value of \(I\) is dynamically calculated depending on the ratio \(t/T\) and on \(\omega\). The value of \(\omega\) is used for adjusting the accuracy of exploitation and is obtained by dividing the iteration domain into several segments as follows:

$$\omega =\left\{\begin{array}{cc} 1& {\text{if}}\;\;\, t>0 \;\;\;\;\; \;\; \\ 2& {\text{if}}\;\;\, t>0.10T\\ 3& {\text{if}}\;\;\, t>0.50T\\ 4& {\text{if}}\;\;\, t>0.75T\\ 5& {\text{if}}\;\;\, t>0.90T\\ 6& {\text{if}}\;\; \,t>0.95T \end{array}\right.$$
(9)
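As a minimal sketch of Eqs. (6)–(9) (function and variable names are illustrative, not from the paper; the guard keeping \(I \ge 1\) is my assumption so that the bounds never widen in the earliest iterations):

```python
import numpy as np

def omega(t, T):
    """Exploitation-accuracy exponent of Eq. (9)."""
    if t > 0.95 * T: return 6
    if t > 0.90 * T: return 5
    if t > 0.75 * T: return 4
    if t > 0.50 * T: return 3
    if t > 0.10 * T: return 2
    return 1

def shrink_bounds(x_min, x_max, t, T):
    """Divide the current bounds by I = 10**omega * t / T (Eqs. 6-8).

    The max(..., 1.0) guard is an assumption: for very small t the raw ratio
    would be below 1 and would widen the bounds instead of shrinking them.
    """
    I = max(10.0 ** omega(t, T) * t / T, 1.0)
    return x_min / I, x_max / I
```

For example, at \(t = 500\) of \(T = 1000\) iterations, \(\omega = 2\) and \(I = 50\), so bounds of \(\pm 10\) shrink to \(\pm 0.2\).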

To simulate the entrapment process of the ants inside the cone-shaped pit, the lower and upper bounds of the ants are updated by using the position of the antlions as follows:

$${x}_{j,{\text{min}}}^{t}=\left\{\begin{array}{l}{\widetilde{x}}_{ij,{\text{RW}}}^{t}+{x}_{j,{\text{min}}}^{t} \,\;\;\;\text{if} \,\;\text{rand}\left(\mathrm{0,1}\right)<0.5\\ {\widetilde{x}}_{ij,{\text{RW}}}^{t}-{x}_{j,{\text{min}}}^{t} \,\;\;\;\text{otherwise}\end{array}\right.$$
(10)
$${x}_{j,{\text{max}}}^{t}=\left\{\begin{array}{l}{\widetilde{x}}_{ij,{\text{RW}}}^{t}+{x}_{j,{\text{max}}}^{t} \,\;\;\;\text{if}\; \,\text{rand}\left(\mathrm{0,1}\right)\ge 0.5\\ {\widetilde{x}}_{ij,{\text{RW}}}^{t}-{x}_{j,{\text{max}}}^{t} \,\;\;\;\text{otherwise}\end{array}\right.$$
(11)

where \({\widetilde{x}}_{ij,{\text{RW}}}^{t}\) is the position of the \({i}^{{\text{th}}}\) antlion selected by the RW approach at the \({t}^{{\text{th}}}\) iteration. After this step, the random walk process of the ants searching for food is modeled based on the following equations:

$${\varvec{{\mathcal{X}}}}_{j}=\left[0, {{\mathcal{X}}}_{j}^{1}, {{\mathcal{X}}}_{j}^{2}, \cdots ,{{\mathcal{X}}}_{j}^{t}, \cdots , {{\mathcal{X}}}_{j}^{T}\right]$$
(12)
$${{\mathcal{X}}}_{j}^{t} =\mathcal{C}\left(2\cdot\ r\left(t\right)-1\right)$$
(13)
$$r\left(t\right)=\left\{\begin{array}{l}1\, \;\;\text{if} \,\;\text{rand} \left(\mathrm{0,1}\right)>0.5\\ 0\,\;\; \text{otherwise}\end{array}\right.$$
(14)

where \({\varvec{{\mathcal{X}}}}_{j}\) is the random walk vector for the \({j}^{{\text{th}}}\) decision variable \((j=1,2,3,\cdots ,d)\); \({{\mathcal{X}}}_{j}^{t}\) is the \({t}^{{\text{th}}}\) element of \({\varvec{{\mathcal{X}}}}_{j}\); \(r\left(t\right)\) is a stochastic step function; and \(\mathcal{C}\left(\circ \right)\) is a function which calculates the cumulative sum of the given input. Note that the elements of the random walk vector are normalized to ensure that they satisfy the specified lower and upper bounds of the decision variables, based on the following min–max normalization equation:

$${{\mathcal{X}}}_{j}^{t}=\frac{\left({{\mathcal{X}}}_{j}^{t}-{a}_{j}\right)\cdot \left({x}_{j,{\text{max}}}^{t}-{x}_{j,{\text{min}}}^{t}\right)}{{b}_{j}-{a}_{j}}+{x}_{j,{\text{min}}}^{t}$$
(15)

where \({a}_{j}={\text{min}}\left({\varvec{{\mathcal{X}}}}_{j}\right)\) and \({b}_{j}={\text{max}}\left({\varvec{{\mathcal{X}}}}_{j}\right)\). During solution of the optimization problem, keeping the elite solution is important at every phase of the search process; thus, the best solution should be stored to guarantee improved convergence in every iteration. In ALO, the new positions of the ants are calculated by averaging the normalized random walks (Eq. (15)) around the antlion selected by the roulette wheel approach and around the elite antlion, as follows:

$${x}_{ij}^{t}=\frac{{\mathcal{R}a}_{j}^{t}+{\mathcal{R}e}_{j}^{t}}{2}$$
(16)

where \({\mathcal{R}a}_{j}^{t}\) and \({\mathcal{R}e}_{j}^{t}\) are the random walks around the selected (RW approach) and elite antlions for the \({j}^{{\text{th}}}\) decision variable at the \({t}^{{\text{th}}}\) iteration, respectively. When an ant gets trapped at the bottom of the pit, the antlion updates its position to prepare the trap for a new prey. According to the provided source code of ALO [7], this process consists of the following steps: (i) determine the fitness vector for the recently calculated ant positions; (ii) append the fitness vector of the ants to the bottom of the fitness vector of the antlions (double fitness vector); (iii) append the position matrix of the ants to the bottom of the position matrix of the antlions (double position matrix); (iv) sort the double fitness vector in ascending order and determine the corresponding sort indices; (v) re-order the double position matrix based on the sort indices; (vi) assign the new positions and fitness values of the antlions from the first half of the sorted double position matrix and double fitness vector, respectively. The computational scheme described above is iterated until the maximum number of iterations is reached in the original ALO.
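As a minimal NumPy sketch (function and variable names are mine, not from the paper; step (i), evaluating the new ants' fitness, is assumed to be done by the caller), steps (ii)–(vi) read:

```python
import numpy as np

def update_antlions(ant_pos, antlion_pos, ant_fit, antlion_fit):
    """Steps (ii)-(vi): merge ants and antlions, keep the n fittest as antlions."""
    double_fit = np.concatenate([antlion_fit, ant_fit])   # (ii) double fitness vector
    double_pos = np.vstack([antlion_pos, ant_pos])        # (iii) double position matrix
    order = np.argsort(double_fit)                        # (iv) ascending sort indices
    n = antlion_pos.shape[0]                              # (v)-(vi) first half survives
    return double_pos[order[:n]], double_fit[order[:n]]
```

For minimization, sorting the merged fitness vector in ascending order and keeping the first half guarantees that the antlion population never loses its best solutions between iterations.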

3 The proposed SHuffled Ant Lion Optimization algorithm (SHALO)

As indicated previously, the ALO method has been applied to many different optimization problems in the literature due to its interesting philosophy and simple computational structure. However, ALO has some limitations in the random walk around antlions process. Therefore, a significant number of the improved ALO approaches in the literature focus on this issue and try to improve the solution accuracy by modifying this process [30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45]. Motivated by this issue, the SHALO approach is proposed in this study for increasing the solution accuracy of the ALO approach. The computational structure of the proposed SHALO approach also focuses on modifying the random walk around antlions process through two improvements to the current ALO structure. The first improvement is related to the boundary shrinking procedure of the ALO approach, an important and essential part, since the original procedure relies on a step-by-step decrease to shrink the boundary. This procedure is controlled with the parameter \(\omega\) in Eq. (9), which adjusts the level of exploitation. Note that the value of \(\omega\) in Eq. (9) increases suddenly at some stationary points, and this change results in a sudden increase of the value of \(I\) given in Eq. (8). Although this procedure can be useful for conducting the boundary reduction around the solutions, it can reduce the random search capability of the algorithm due to the sudden increase of \(I\) at the stationary points [38]. Therefore, a new exponentially weighted boundary shrinking approach is proposed to solve this problem as follows:

$$I=\left\{\begin{array}{l}1\, \;\;{\text{if}}\, \;\;k=0 \\ \widetilde{I} \;\;{\text{if}}\,\;\; k=1, 2, 3, 4, 5\end{array}\right.$$
(17)
$$\widetilde{I}=\left({{I}_{k}^{{\text{min}}}+I}_{k}^{{\text{max}}}\right)-{I}_{k}^{{\text{max}}}\cdot \left\{{e}^{\left[{\mathcal{L}}_{k}\cdot \left(t-{t}_{k}^{{\text{min}}}\right)\right]}+\lambda \cdot {\text{rand}}\left(\mathrm{0,1}\right)\cdot \left|{\mathcal{L}}_{k}\right|\cdot \left(t-{t}_{k}^{{\text{min}}}\right)\right\}$$
(18)
$${\mathcal{L}}_{k}=\frac{{\text{ln}}\left(\frac{{I}_{k}^{{\text{min}}}}{{I}_{k}^{{\text{max}}}}\right)}{{t}_{k}^{{\text{max}}}-{t}_{k}^{{\text{min}}}}$$
(19)

where \(\widetilde{I}\) is the value of the proposed exponentially weighted function, \({I}_{k}^{{\text{min}}}\) and \({I}_{k}^{{\text{max}}}\) are the minimum and maximum \(I\) values for \({k}^{{\text{th}}}\) region in Fig. 2, \({t}_{k}^{{\text{min}}}\) and \({t}_{k}^{{\text{max}}}\) are the minimum and maximum \(t\) values for \({k}^{{\text{th}}}\) region in Fig. 2, \({\mathcal{L}}_{k}\) is a function whose value is calculated logarithmically based on Eq. (19) for \({k}^{{\text{th}}}\) region in Fig. 2, \(\left|\circ \right|\) is the absolute value, and \(\lambda\) is the parameter controlling the magnitude of the random deviations from the value of \(\widetilde{I}\). The detailed representation of the parameters used in Eqs. (18) and (19) can be seen in Fig. 2.

Fig. 2

Variation of \(I\) versus \(t\) for ALO (black lines) and SHALO (red \(\left(\lambda =0\right)\) and blue \(\left(\lambda =1\right)\) lines) approaches (color figure online)

As can be seen from Fig. 2, the iteration domain of the optimization problem is partitioned into 6 regions \(\left(k=0,1,2,\cdots ,5\right)\). According to the original ALO formulation, the value of \(I\) increases suddenly at the starting points of the 1st to 5th regions, while inside each region it increases linearly with the ratio \(\left(t/T\right)\). Figure 2 shows the variation of \(I\) for each region in the original ALO method with black lines. Note that the minimum and maximum values of \(I\) for each region are taken as \({I}_{k}^{{\text{min}}}\) and \({I}_{k}^{{\text{max}}}\) in SHALO. Similarly, for each region, the minimum and maximum values of \(t\) are taken as \({t}_{k}^{{\text{min}}}\) and \({t}_{k}^{{\text{max}}}\) in Eqs. (18) and (19). Figure 2 also includes the variation of the proposed exponentially weighted function \(\widetilde{I}\). As can be seen, for \(\lambda =0\) (red line), the proposed function produces no random deviations in the calculated \(\widetilde{I}\) values and thus behaves like the one given in the original ALO method. The key difference is that the proposed function has no stationary points where the value of \(I\) suddenly increases; instead, its value increases exponentially between the same minimum and maximum values within each region. This increases the random search capability of the approach compared with the original ALO method. Note that the random search capability of the proposed exponential function can be increased further by perturbing the calculated \(\widetilde{I}\) values with random deviations, which is shown in Fig. 2 for \(\lambda =1\) (blue line).
As can be seen, although the trend of the calculated \(\widetilde{I}\) values remains the same, the values exhibit strong local oscillations depending on the considered value of \(\lambda\). Therefore, proper selection of \(\lambda\) is very important since it adjusts the magnitude of the random deviations from the calculated value of \(\widetilde{I}\).
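Under these definitions, Eqs. (17)–(19) can be sketched as follows (a minimal version for a single region; the limits \({I}_{k}^{{\text{min}}}\), \({I}_{k}^{{\text{max}}}\), \({t}_{k}^{{\text{min}}}\), \({t}_{k}^{{\text{max}}}\) are assumed to be read from the corresponding ALO segments of Fig. 2):

```python
import math, random

def shalo_I(t, t_min, t_max, I_min, I_max, lam=1.0, rng=random):
    """Exponentially weighted shrinking factor of Eqs. (17)-(19) for one region.

    lam controls the magnitude of the random deviations; lam = 0 reproduces
    the smooth exponential curve (red line in Fig. 2).
    """
    L = math.log(I_min / I_max) / (t_max - t_min)       # Eq. (19)
    return (I_min + I_max) - I_max * (                  # Eq. (18)
        math.exp(L * (t - t_min))
        + lam * rng.random() * abs(L) * (t - t_min)
    )
```

For \(\lambda = 0\) the curve rises smoothly from \({I}_{k}^{{\text{min}}}\) at \(t={t}_{k}^{{\text{min}}}\) to \({I}_{k}^{{\text{max}}}\) at \(t={t}_{k}^{{\text{max}}}\), with no jump inside the region.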

The second improvement of the SHALO approach modifies the min–max normalization process given in Eq. (15) of the original ALO. As indicated previously, the generated random walk vector cannot be directly used for updating the positions of the ants; therefore, its values are normalized in the original ALO by using Eq. (15), which considers the minimum and maximum values of the random walk vector. In the proposed SHALO approach, the associated min–max normalization is modified as follows:

$${{\mathcal{X}}}_{j}^{t}=\frac{\left({{\mathcal{X}}}_{j}^{t}-{\widetilde{a}}_{j}\right)\cdot \left({x}_{j,{\text{max}}}^{t}-{x}_{j,{\text{min}}}^{t}\right)}{{\widetilde{b}}_{j}-{\widetilde{a}}_{j}}+{x}_{j,{\text{min}}}^{t}$$
(20)
$${\widetilde{a}}_{j}={\text{mean}}\left({\varvec{{\mathcal{X}}}}_{j}\right)-{\text{min}}\left({\varvec{{\mathcal{X}}}}_{j}\right)$$
(21)
$${\widetilde{b}}_{j}={\text{max}}\left({\varvec{{\mathcal{X}}}}_{j}\right)-{\text{mean}}\left({\varvec{{\mathcal{X}}}}_{j}\right)$$
(22)

where \({\text{mean}}\left({\varvec{{\mathcal{X}}}}_{j}\right)\) is the arithmetic mean of the random walk vector for the \({j}^{{\text{th}}}\) variable. As can be seen, the values of \({a}_{j}\) and \({b}_{j}\) in Eq. (15) are replaced with \({\widetilde{a}}_{j}\) and \({\widetilde{b}}_{j}\), which include the mean of the random walk vector as an additional statistic. This inclusion significantly increases the diversity of the random walk vector since it shuffles its content. During this process, the two cases given in Fig. 3 are observed.
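A minimal sketch contrasting the original normalization of Eq. (15) with the shuffled variant above (names are illustrative; the degenerate case \({\widetilde{b}}_{j}={\widetilde{a}}_{j}\), possible for a perfectly symmetric walk, is not handled):

```python
import numpy as np

def random_walk(T, rng):
    """Cumulative-sum random walk of Eqs. (12)-(14)."""
    steps = np.where(rng.random(T) > 0.5, 1.0, -1.0)
    return np.concatenate([[0.0], np.cumsum(steps)])

def normalise_alo(X, lo, hi):
    a, b = X.min(), X.max()                   # Eq. (15): result confined to [lo, hi]
    return (X - a) * (hi - lo) / (b - a) + lo

def normalise_shalo(X, lo, hi):
    a = X.mean() - X.min()                    # shuffled bounds: the result is no
    b = X.max() - X.mean()                    # longer confined to [lo, hi]
    return (X - a) * (hi - lo) / (b - a) + lo
```

For any walk whose mean differs from its extremes' midpoint, the shuffled variant pushes values outside the original interval, which illustrates the diversity increase described in the text.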

Fig. 3

The two cases observed during the shuffling process

As can be seen from Fig. 3, the cases where \({\text{mean}}\left({\varvec{{\mathcal{X}}}}_{j}\right)\) is lower and greater than \({\text{median}}\left({\varvec{{\mathcal{X}}}}_{j}\right)\) are named Case 1 and Case 2, respectively. Although these two cases are irrelevant in the original ALO, they are important in SHALO for understanding the nature of the shuffling process. As an example, Fig. 4a–c compares the random walks of the ALO and SHALO approaches at the 10th, 100th, and 800th iterations in terms of trends and statistics. Note that both approaches in Fig. 4 are executed with the same random number seeds to eliminate the influence of random numbers. As can be seen from Fig. 4a, the calculated random walks follow the same trend, although the corresponding values in SHALO are much greater than the ones obtained in the original ALO. The random walks in Fig. 4a represent the behavior of Case 2 for both approaches, where the calculated mean is greater than the median. In Fig. 4b, the calculated random walks also have greater values than those obtained in ALO; however, the SHALO values are all upside down compared to the ones in the original ALO. The reason for this change is associated with the conditions given in Fig. 3: ALO exhibits the behavior of Case 2, whereas SHALO exhibits Case 1. Finally, in Fig. 4c, the same outcome is observed, where the random walks in SHALO are again upside down compared to the original ALO due to their differing cases in Fig. 3. Note that the comparison of Fig. 4a–c also indicates decreasing values of the random walks in SHALO over the iterations. As an example, Fig. 5 shows the variation of the random walks through the iterations for both ALO and SHALO approaches. As can be seen, the random walk vector of SHALO takes significantly greater values than those calculated for ALO in early iterations.
However, these large differences diminish in later iterations, and the values of the random walk vector approach the optimum values. It is concluded from these results that the proposed SHALO approach significantly increases the diversity of the random walk vector, especially in early iterations, by increasing and shuffling the random walks.

Fig. 4

Comparison of the trends and statistics of random walks for both ALO and SHALO approaches at a: the \(10^{{\text{th}}}\) iteration; b: the \(100^{{\text{th}}}\) iteration; c: the \(800^{{\text{th}}}\) iteration (the figures on the right side represent the statistics of the random walks in terms of the minimum (black), maximum (green), mean (red), and median (blue) metrics) (color figure online)

Fig. 5

Variation of random walks through optimization iterations for both ALO and SHALO approaches (color figure online)

4 Comparison with other approaches

As indicated previously, various modified versions of ALO exist in the literature for improving the quality of the random walk around antlions process. Since SHALO also aims to improve this process, it is essential to compare its performance with these approaches. In this context, the identified results of SHALO are compared with the original ALO for the same initial populations, optimization parameter values, and random number seeds. Similarly, all the identified results are compared with two improved versions of ALO: the self-adaptive ALO (saALO) [40] and the Exponential-weighted ALO (EALO) [32] approaches. Both saALO and EALO aim to improve the boundary shrinking procedure by modifying the variable \(I\). These modifications are conducted by introducing sinusoidal and exponential terms in the saALO and EALO approaches, respectively, as follows:

$${\text{saALO}}:\;I={10}^{\omega }\frac{t}{T}\left(0.5+{\text{sin}}\left(\frac{\pi t}{2T}\right)\cdot {\text{rand}}\left(\mathrm{0,1}\right)\right)$$
(23)
$${\text{EALO}}:\; I={10}^{\omega }\frac{t}{T}\left({\text{exp}}\left(\frac{t}{T}\right)\cdot {\text{rand}}\left(\mathrm{0,1}\right)\cdot \eta \right)$$
(24)

As can be seen from Eqs. (23) and (24), both the saALO and EALO approaches share the structure of Eq. (8), in which the value of \(I\) increases linearly with the ratio \(t/T\) together with a dynamic change through the parameter \(\omega\). The key difference is that in saALO, an additional term is introduced into Eq. (8) to include sinusoidally changing random perturbations, whereas in EALO, the random perturbation is included in Eq. (8) by means of an exponential term together with a weight \(\eta\) which controls the magnitude of the random perturbations. Note that the performance of the proposed SHALO approach is also evaluated by solving the same problems via the GA and PSO optimization approaches using the same random number seeds as the other approaches.
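Side by side, the three shrink factors of Eqs. (8), (23), and (24) can be sketched as follows (names are illustrative; the random source is injected so the behavior is reproducible):

```python
import math, random

def I_alo(t, T, w):
    """Original ALO, Eq. (8)."""
    return 10.0 ** w * t / T

def I_saalo(t, T, w, rng=random):
    """saALO, Eq. (23): sinusoidal random perturbation."""
    return I_alo(t, T, w) * (0.5 + math.sin(math.pi * t / (2 * T)) * rng.random())

def I_ealo(t, T, w, eta, rng=random):
    """EALO, Eq. (24): exponential perturbation weighted by eta."""
    return I_alo(t, T, w) * math.exp(t / T) * rng.random() * eta
```

Both variants keep the linear \(t/T\) growth of Eq. (8) and multiply it by a bounded random factor, so the expected shrink factor still increases over the iterations.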

5 Numerical applications

The applicability of the proposed SHALO approach is evaluated by solving four unconstrained and four constrained benchmark problems. The last two constrained problems are well-known engineering design problems. These problems are also solved with the original ALO, saALO, EALO, GA, and PSO approaches. All approaches are executed with a population of 50 candidate solutions and a maximum of 1000 iterations (generations). For a fair comparison, a data set of 100 different uniform random number seed values is generated to evaluate the model results statistically. For this purpose, commonly used statistical measures (minimum, maximum, mean, median, and standard deviation) are reported together with the Success Rate \((SR)\), given in Eq. (25), which is calculated by dividing the number of successful solutions \((SS)\) by the total number of solutions \((TS)\) [49]. A model execution is considered successful according to the mathematical definition given in Eq. (26).

$$SR \left(\%\right)=\frac{\sum SS}{TS}\cdot 100$$
(25)
$$SS=\left\{\begin{array}{ll}\begin{cases}1& {\text{if}}\;\left|f\left(x\right)-f\left({x}^{*}\right)\right|\le \varepsilon \\ 0& {\text{otherwise}}\end{cases}& {\text{if}}\; f\left({x}^{*}\right)=0\\ \begin{cases}1& {\text{if}}\;\frac{\left|f\left(x\right)-f\left({x}^{*}\right)\right|}{\left|f\left({x}^{*}\right)\right|}\le \varepsilon \\ 0& {\text{otherwise}}\end{cases}& {\text{if}}\; f\left({x}^{*}\right)\ne 0\end{array}\right.$$
(26)

where \(f\left({x}^{*}\right)\) is the global optimum solution of the optimization problem, and \(\varepsilon\) is a threshold value. As can be seen from Eq. (26), the value of \(SS\) equals 1 if the absolute error (when the global optimum is zero) or the relative error (otherwise) is below the given \(\varepsilon\) value. Note that the measure of \(SR\) evaluates the model results in terms of the calculated objective function values. Evaluation of the model results in terms of the relative closeness to the optimum solution is conducted based on the accuracy metric \(\left(AC\right)\) as follows [36, 39]:

$$AC\, \left(\%\right)=\left(1-\frac{\left|{{x}_{k}^{\prime}-x}_{k}^{*}\right|}{\left|{x}_{{\text{max}},k}^{t=0}-{x}_{{\text{min}},k}^{t=0}\right|}\right)\cdot 100$$
(27)

where \({x}_{k}^{\prime}\) and \({x}_{k}^{*}\) represent the calculated and the optimum values of the \({k}^{{\text{th}}}\) decision variable \(\left(k=\mathrm{1,2},3,\cdots ,d\right)\). Note that the performance of the proposed SHALO approach is evaluated for 12 different \(\lambda\) values \(\left(\lambda \in \left[0, 1, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000\right]\right)\), where the model is executed 100 times for each \(\lambda\) value using the generated random number seed data set. The purpose of this evaluation is to assess the applicability of the proposed boundary shrinking procedure for different magnitudes of random perturbations.
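Under the definitions of Eqs. (25)–(27), the three measures can be computed as in the following sketch (the function names are ours; averaging \(AC\) over the \(d\) variables is our assumption, since Eq. (27) is stated per decision variable \(k\), and the threshold value is illustrative because the exact \(\varepsilon\) is not stated here):

```python
EPS = 1e-4  # illustrative threshold; the paper's exact epsilon is not stated here

def success(f_x, f_opt, eps=EPS):
    """SS indicator of Eq. (26): absolute error when f(x*) = 0, relative otherwise."""
    if f_opt == 0:
        return 1 if abs(f_x - f_opt) <= eps else 0
    return 1 if abs(f_x - f_opt) / abs(f_opt) <= eps else 0

def success_rate(f_values, f_opt, eps=EPS):
    """SR (%) of Eq. (25) over a set of model executions."""
    return 100.0 * sum(success(f, f_opt, eps) for f in f_values) / len(f_values)

def accuracy(x_found, x_opt, lb0, ub0):
    """AC (%) of Eq. (27), averaged over the d decision variables.

    lb0 and ub0 are the initial (t = 0) lower and upper bounds per variable.
    """
    terms = [(1 - abs(xf - xo) / abs(ub - lb)) * 100
             for xf, xo, lb, ub in zip(x_found, x_opt, lb0, ub0)]
    return sum(terms) / len(terms)
```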

5.1 Unconstrained optimization applications

5.1.1 Michalewicz’s test function

Michalewicz’s test function is a nonlinear multimodal test function which contains \(d\) local optima [50]. The mathematical description of the function together with the lower and upper bounds of the decision variables is given in Eq. (28).

$${\text{min}} \;f\left(x\right)=-\sum_{k=1}^{d}{\text{sin}}\left({x}_{k}\right){\left[{\text{sin}}\left(\frac{k\,{x}_{k}^{2}}{\pi }\right)\right]}^{2\mu }$$
(28)
$${\text{s}}.{\text{t}}.\; 0\le {x}_{k}\le \pi\; ; \;k=\mathrm{1,2},3,\cdots ,d$$
(28a)

where the parameter \(\mu\) controls the steepness of the valleys and is set to 10 for this solution. The function has a global optimum solution at \({x}^{*}=\left[2.202906 \; 1.570796 \; 1.284992 \; 1.923058 \; 1.720470\right]\) with a function value of \(f\left({x}^{*}\right)=-4.688\) for \(d = 5\). Figure 6 shows a three-dimensional representation of this function for \(d = 2\). As can be seen, the function has many local optima around the global optimum, which is why it may be classified as a difficult optimization test problem. As indicated previously, this problem is solved for different \(\lambda\) values in SHALO, each consisting of 100 model executions. The calculated \(SR\) values for these model executions are given in Fig. 7.
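Eq. (28) translates directly into code; the following minimal sketch uses \(\mu =10\) as in the text:

```python
import math

def michalewicz(x, mu=10):
    """Michalewicz's test function, Eq. (28), with steepness parameter mu."""
    return -sum(math.sin(xk) * math.sin(k * xk ** 2 / math.pi) ** (2 * mu)
                for k, xk in enumerate(x, start=1))
```

Evaluating it at the reported \(x^{*}\) gives approximately − 4.688 for \(d=5\).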

Fig. 6
figure 6

Michalewicz’s test function

Fig. 7
figure 7

Variation of \(SR\) (%) for different \(\lambda\) values for Michalewicz’s test function

As can be seen from Fig. 7, the maximum value of \(SR\) is obtained for the solution with \(\lambda =20\). For this solution, the results of the corresponding 100 model executions are compared with the ones obtained by using ALO, saALO, EALO, GA, and PSO approaches, respectively. As indicated previously, all these approaches are also executed 100 times for the same random number seed data set. Table 1 compares the identified results of these model executions.

Table 1 Evaluation of the 100 model executions for Michalewicz’s test function for different optimization approaches

It is seen that each approach determined the global optimum solution (− 4.688) at least once in the conducted 100 model executions. When the obtained results are evaluated, the \(SR\) values of the ALO, saALO, and EALO approaches are 3%, 4%, and 3%, respectively, whereas GA and PSO attain 48% and 32%, respectively. For this example, the proposed SHALO approach yields a \(SR\) value of 69%, which is better than all the other approaches. This outcome can also be seen from the mean and median values, which are very close to the global optimum solution for SHALO. For the calculated \(AC\) values, while the original ALO has a mean accuracy of 90%, all the other approaches have mean accuracies of 97% and over. These results indicate that there are significant local optima around the global optimum. The convergence plots for the ALO, saALO, EALO, GA, PSO, and SHALO approaches are compared in Fig. 8. Note that these plots are composed by taking the arithmetic mean of the 100 model executions for each optimization approach. These plots indicate that the proposed SHALO approach converges to better mean objective function values than the other approaches over the 100 model executions. For the best solutions in Table 1, the final decision variables and objective function values are compared in Table 2. As can be seen, all the identified values are the same since all the approaches converged to the global optimum solution at least once.

Fig. 8
figure 8

Comparison of the convergence histories of different approaches for the Michalewicz’s test function

Table 2 The final decision variables and objective function values for Michalewicz’s test function for the best solutions in Table 1

5.1.2 Rastrigin’s test function

Rastrigin’s test function is also a nonlinear and multimodal test function containing a large number of local optima [51]. The mathematical form of the function together with the considered lower and upper bounds of the decision variables is given in Eq. (29).

$${\text{Min}}\; f\left(x\right)=10d+\sum_{k=1}^{d}\left[{x}_{k}^{2}-10{\text{cos}}\left(2\pi {x}_{k}\right)\right]$$
(29)
$${\text{s}}.{\text{t}}.\; -5.12\le {x}_{k}\le 5.12\;\;\; k=\mathrm{1,2},\cdots ,d$$
(29a)

This function has a global optimum solution at \({x}_{k}^{*}=0\) \(\left(k=\mathrm{1,2},\cdots ,d\right)\) with a function value of \(f\left({x}^{*}\right)=0\) for any given value of \(d\). In this study, the problem is solved by considering \(d=5.\) Figure 9 demonstrates a three-dimensional representation of this function for \(d = 2\). As can be seen, the function has many local optima, and therefore, it may also be classified as a difficult optimization test problem. By applying the same solution scheme, this function is also solved 100 times for each of the different \(\lambda\) values. Figure 10 shows the final \(SR\) values for these model executions.
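Rastrigin’s function can be sketched in a few lines (using the standard form, for which \(f=0\) at \({x}_{k}=0\)):

```python
import math

def rastrigin(x):
    """Rastrigin's test function; global minimum f = 0 at x_k = 0."""
    d = len(x)
    return 10 * d + sum(xk ** 2 - 10 * math.cos(2 * math.pi * xk) for xk in x)
```

Moving a single variable to the nearest local optimum near \(x_k=1\) raises the value by about 1, which illustrates the dense grid of local optima surrounding the global one.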

Fig. 9
figure 9

Rastrigin’s test function

Fig. 10
figure 10

Variation of \(SR\) (%) for different \(\lambda\) values for Rastrigin’s test function

As can be seen from Fig. 10, the maximum \(SR\) value of 95% is obtained for the solution with \(\lambda =100\). For this solution, the results of the corresponding 100 model executions are compared with the ones obtained by using the ALO, saALO, EALO, GA, and PSO approaches, respectively. The identified results of these model executions are compared in Table 3. As can be seen, the original ALO fails to determine the global optimum solution in any of the 100 model executions, and the saALO and EALO approaches obtain it only once. For the same conditions, the GA and PSO approaches determine the global optimum solution 46 and 36 times, respectively. On the other hand, the proposed SHALO approach determines the global optimum solution 95 times out of 100 model executions. This outcome can also be seen from the calculated mean and median values, which are close to the global optimum solution for SHALO but differ notably for the other approaches. When the \(AC\) measure is evaluated, the ALO, saALO, and EALO approaches all yield the same mean \(AC\) value of 93%, whereas the GA, PSO, and SHALO approaches yield mean \(AC\) values of 99%, 98%, and 100%, respectively.

Table 3 Evaluation of the 100 model executions for Rastrigin’s test function for different optimization approaches

The convergence plots of each approach for the mean objective function values of 100 model executions are compared in Fig. 11. Results of these plots demonstrate that the proposed SHALO approach converges to the minimum mean objective function value compared to the other approaches used. For the best solutions in Table 3, the final decision variables and objective function values are compared in Table 4. As can be seen, all the identified variables are very close to 0 which is the global optimum solution for this problem.

Fig. 11
figure 11

Comparison of the convergence histories of different approaches for the Rastrigin’s test function

Table 4 The final decision variables and objective function values for Rastrigin’s test function for the best solutions in Table 3

5.1.3 Trefethen’s test function

Trefethen’s test function [52] is an unconstrained nonlinear multimodal test function. This function includes two decision variables \(\left(d=2\right)\) and combines exponential, trigonometric, and arithmetic expressions in a single function. The mathematical form of the function together with the search domain is given in Eq. (30).

$${\text{Min}}\; f\left(x\right)={e}^{{\text{sin}}\left({50x}_{1}\right)}+{\text{sin}}\left({60e}^{{x}_{2}}\right)+{\text{sin}}\left(70\,{\text{sin}}\left({x}_{1}\right)\right)+{\text{sin}}\left(80\,{\text{sin}}\left({x}_{2}\right)\right)-{\text{sin}}\left(10\left({x}_{1}+{x}_{2}\right)\right)+\frac{1}{4}\left({{x}_{1}}^{2}+{{x}_{2}}^{2}\right)$$
(30)
$${\text{s}}.{\text{t}}.\;-6.5\le {x}_{1}\le 6.5$$
(30a)
$$-4.5\le {x}_{2}\le 4.5$$
(30b)

The global optimum solution of this function is located at \({x}^{*}=\left[-0.0244031\; 0.2106124\right]\) with a function value of \(f\left({x}^{*}\right)=-3.307\). A three-dimensional representation of the Trefethen’s test function is given in Fig. 12. As can be seen, the function has many local oscillations which act as local optimum solutions. This example is also solved 100 times for each of the different \(\lambda\) values. Results of the proposed SHALO approach in terms of the calculated \(SR\) values are given in Fig. 13 for different \(\lambda\) values.

Fig. 12
figure 12

Trefethen’s test function

Fig. 13
figure 13

Variation of \(SR\) (%) for different \(\lambda\) values for Trefethen’s test function

As can be seen from Fig. 13, the solution with \(\lambda =1\) is selected as the best since the calculated \(SR\) value of that solution is maximum. For this solution, the results of the corresponding 100 model executions are compared with the results which are obtained by using ALO, saALO, EALO, GA, and PSO approaches, respectively. Similarly, all the approaches are also executed 100 times for the same random number seed data set. A comparison of the obtained results of all model executions is given in Table 5.

Table 5 Evaluation of the 100 model executions for Trefethen’s test function for different optimization approaches

As can be seen, all the approaches determined the global optimum solution of − 3.307 at least once in the conducted 100 model executions. The calculated \(SR\) values for the ALO, saALO, EALO, GA, and PSO approaches are 47%, 56%, 20%, 36%, and 63%, respectively. For this function, the corresponding \(SR\) value of SHALO is 66%, the best among the compared approaches. The calculated mean \(AC\) values are 98% for EALO and 99% for the other approaches. Note that the Trefethen’s test function has many local optima around the global optimum solution, which can be clearly seen from Table 5: the EALO-based approach yields a \(SR\) value of only 20% although its mean \(AC\) value is 98%. Figure 14 shows the convergence plots of all approaches in terms of the mean objective function values. As can be seen, the proposed SHALO approach again converges to a lower mean objective function value than the other approaches. For the best solutions given in Table 5, the final decision variables and objective function values are compared in Table 6. As can be seen, all the identified values are the same since all the approaches converged to the global optimum solution at least once.

Fig. 14
figure 14

Comparison of the convergence histories of different approaches for the Trefethen’s test function

Table 6 The final decision variables and objective function values for Trefethen’s test function for the best solutions in Table 5

5.1.4 De Villiers–Glasser 1 function

De Villiers–Glasser 1 test function is a continuous nonlinear test function which has a differentiable, non-separable, and non-scalable multimodal mathematical solution space [53]. The mathematical form of the function together with the search space is given in Eq. (31).

$${\text{Min}} f\left(x\right)=\sum_{i=1}^{24}{\left[{x}_{1}{x}_{2}^{{t}_{i}}{\text{sin}}\left[{x}_{3}{t}_{i}+{x}_{4}\right]-{y}_{i}\right]}^{2}$$
(31)
$${t}_{i}=0.1\left(i-1\right)$$
(31a)
$${y}_{i}=60.137\cdot {1.371}^{{t}_{i}}\cdot {\text{sin}}\left(3.112{t}_{i}+1.761\right)$$
(31b)
$${\text{s.t.}}\,1\le {x}_{k}\le 100\;\;\; k=\mathrm{1,2},\cdots ,d$$
(31c)

This function has a global optimum solution at \({x}^{*}=\left[60.137 \;1.371\; 3.112\; 1.761\right]\) with a function value of \(f\left({x}^{*}\right)=0\) for \(d=4\). Similarly, this problem is solved for different \(\lambda\) values in SHALO, each consisting of 100 model executions. Figure 15 shows the final \(SR\) values for these model executions.
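The least-squares structure of Eq. (31) is easy to transcribe; by construction the residuals vanish at the data-generating parameters, which is why \(f\left({x}^{*}\right)=0\):

```python
import math

def devilliers_glasser1(x):
    """De Villiers-Glasser 1 function, Eq. (31): squared residuals of the
    model x1 * x2**t * sin(x3*t + x4) against 24 generated data points."""
    x1, x2, x3, x4 = x
    total = 0.0
    for i in range(1, 25):
        t = 0.1 * (i - 1)                                      # Eq. (31a)
        y = 60.137 * 1.371 ** t * math.sin(3.112 * t + 1.761)  # Eq. (31b)
        total += (x1 * x2 ** t * math.sin(x3 * t + x4) - y) ** 2
    return total
```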

Fig. 15
figure 15

Variation of \(SR\) (%) for different \(\lambda\) values for De Villiers–Glasser 1 test function

As can be seen from Fig. 15, the maximum value of \(SR\) is obtained for the solution with \(\lambda =2000\). After selecting the best \(\lambda\) value, the results of the corresponding 100 model executions are compared with those obtained by using the ALO, saALO, EALO, GA, and PSO approaches, respectively (Table 7). As can be seen from Table 7, the original ALO, saALO, EALO, and GA could not reach the global optimum in any of the 100 model executions. However, the problem is successfully solved 60 and 77 times out of 100 model executions by the PSO and SHALO approaches, respectively. The calculated mean \(AC\) values are above 65% for all the approaches. Note that this function also has many local optima around the global optimum solution. For instance, while the mean \(AC\) value of ALO is 68%, the corresponding \(SR\) value of the same approach is 0%.

Table 7 Evaluation of the 100 model executions for De Villiers–Glasser 1 test function for different optimization approaches

The convergence plots for ALO, saALO, EALO, GA, PSO, and the proposed SHALO approaches are compared in Fig. 16 for the mean objective function values of 100 model executions. Results of these plots demonstrate that the proposed SHALO approach again converges to the minimum mean objective function value compared to the other optimization approaches. For the best solutions in Table 7, the final decision variables and objective function values are given in Table 8. As can be seen, only PSO and SHALO approaches determined the global or near-global optimum solution and the other approaches could not determine it.

Fig. 16
figure 16

Comparison of the convergence histories of different approaches for the De Villiers–Glasser 1 test function

Table 8 The final decision variables and objective function values for De Villiers–Glasser 1 test function for the best solutions in Table 7

5.2 Constrained optimization applications

In this section, the performance of the proposed SHALO approach is evaluated by solving two constrained benchmark problems and two well-known engineering design problems. For these problems, general form of the constrained optimization model can be mathematically described as follows:

$${\text{min}} \; f\left(x\right)$$
(32)
$${\text{s}}.{\text{t}}. \;\;{g}_{i}\left(x\right)\le 0 \,\;\;\;i=1, 2, 3, \cdots , m$$
(32a)
$${h}_{j}(x)=0\, \;\;\;j=1, 2, 3, \cdots , n$$
(32b)

where \(m\) is the number of inequality constraints; \({g}_{i}\left(x\right)\) is the \({i}^{{\text{th}}}\) inequality constraint; \(n\) is the number of equality constraints; and \({h}_{j}(x)\) is the \({j}^{{\text{th}}}\) equality constraint. Since ALO is a meta-heuristic optimization method, it can only solve unconstrained optimization problems in its original form. Therefore, the constrained problems are converted into unconstrained ones by means of the penalty function approach as follows:

$${\text{min}}\;f^{\prime}\left(x\right)= f\left(x\right)+\sum_{i=1}^{m}{\alpha }_{i}\cdot {P}_{i}^{g}+\sum_{j=1}^{n}{\beta }_{j}\cdot {P}_{j}^{h}$$
(33)
$${P}_{i}^{g}=\left\{ \begin{array}{cc}0& \text{if}\, \;\;{g}_{i}\left(x\right)\le 0\\ {\left[{g}_{i}\left(x\right)\right]}^{2}& {\text{otherwise}\;\;\;\;\;}\end{array}\right.$$
(34)
$${P}_{j}^{h}=\left\{ \begin{array}{cc}0& \text{if} \,\;\;{h}_{j}\left(x\right)=0\\ {\left[{h}_{j}\left(x\right)\right]}^{2}& {\text{otherwise}\;\;\;\;\;}\end{array}\right.$$
(35)

where \(f^{\prime}\left(x\right)\) is the transformed objective function; \({P}_{i}^{g}\) and \({P}_{j}^{h}\) are the penalty functions for the inequality and equality constraints, respectively; and \({\alpha }_{i}\) and \({\beta }_{j}\) are the corresponding penalty coefficients. As can be seen from Eqs. (33) to (35), the given constrained problem is transformed into an unconstrained optimization problem by including penalty functions. These functions take a value of zero when all the constraints are satisfied, and otherwise take a penalty value depending on the magnitude of the constraint violation. The penalty functions are integrated into the objective function through the coefficients \({\alpha }_{i}\) and \({\beta }_{j}\), which adjust the magnitude of the penalty terms. Note that the selection of \({\alpha }_{i}\) and \({\beta }_{j}\) values is mostly problem dependent. Therefore, their values are selected through preliminary trials before execution of the optimization model. Based on these trials, the penalty coefficients are taken as 1000 for all the problems.
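The transformation of Eqs. (33)–(35) can be sketched as a simple wrapper (the helper name `penalized` is ours; a single coefficient of 1000 is applied to every constraint, matching the trial-selected value reported above):

```python
def penalized(f, gs=(), hs=(), alpha=1000.0, beta=1000.0):
    """Quadratic penalty transform of Eqs. (33)-(35).

    f: objective function; gs: inequality constraints g_i(x) <= 0;
    hs: equality constraints h_j(x) = 0.
    """
    def f_prime(x):
        val = f(x)
        # Eq. (34): penalize only violated inequality constraints
        val += sum(alpha * max(0.0, g(x)) ** 2 for g in gs)
        # Eq. (35): penalize any deviation from h_j(x) = 0
        val += sum(beta * h(x) ** 2 for h in hs)
        return val
    return f_prime
```

The wrapped function \(f^{\prime}\) can then be handed to any of the unconstrained solvers unchanged, which is exactly how the constrained benchmarks below are treated.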

5.2.1 Floudas and Pardalos’s function

Floudas and Pardalos’s function is a nonlinear constrained optimization test function which includes 13 decision variables and 9 inequality constraints [54]. The mathematical form of the objective and constraint functions together with the lower and upper bounds of the decision variables is given in Eq. (36).

$${\text{Min}} f\left(x\right)=5\sum_{k=1}^{4}{x}_{k}-5\sum_{k=1}^{4}{x}_{k}^{2}-\sum_{k=5}^{13}{x}_{k}$$
(36)
$${\text{s.t.}}\,{g}_{1}\left(x\right)=2{x}_{1}+2{x}_{2}+{x}_{10}+{x}_{11}-10\le 0$$
(36a)
$${g}_{2}\left(x\right)=2{x}_{1}+2{x}_{3}+{x}_{10}+{x}_{12}-10\le 0$$
(36b)
$${g}_{3}\left(x\right)=2{x}_{2}+2{x}_{3}+{x}_{11}+{x}_{12}-10\le 0$$
(36c)
$${g}_{4}\left(x\right)=-8{x}_{1}+{x}_{10}\le 0$$
(36d)
$${g}_{5}\left(x\right)=-8{x}_{2}+{x}_{11}\le 0$$
(36e)
$${g}_{6}\left(x\right)=-8{x}_{3}+{x}_{12}\le 0$$
(36f)
$${g}_{7}\left(x\right)=-2{x}_{4}-{x}_{5}+{x}_{10}\le 0$$
(36g)
$${g}_{8}\left(x\right)=-2{x}_{6}-{x}_{7}+{x}_{11}\le 0$$
(36h)
$${g}_{9}\left(x\right)=-2{x}_{8}-{x}_{9}+{x}_{12}\le 0$$
(36j)
$$0\le {x}_{k}\le 1,\; k=\mathrm{1,2},\cdots ,9$$
(36k)
$$0\le {x}_{k}\le 100, \;k=\mathrm{10,11,12}$$
(36l)
$$0\le {x}_{k}\le 1, \;k=13$$
(36m)

Floudas and Pardalos’s function has a global optimum solution at \({x}^{*}=\left[1\; 1\; 1\; 1\; 1\; 1\; 1\; 1\; 1\; 3\; 3\; 3\; 1\right]\) with a function value of \(f\left({x}^{*}\right)=-15\). As conducted previously, this problem is also solved for different \(\lambda\) values in SHALO for 100 model executions. Figure 17 shows the variation of \(SR\) for different \(\lambda\) values.
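Eq. (36) and its nine inequality constraints can be transcribed as follows; at the reported \({x}^{*}\) the objective evaluates to −15 and every \({g}_{i}\) is non-positive:

```python
def floudas_pardalos(x):
    """Objective of Eq. (36); x holds the 13 decision variables."""
    return 5 * sum(x[:4]) - 5 * sum(xk ** 2 for xk in x[:4]) - sum(x[4:13])

def floudas_pardalos_constraints(x):
    """The nine inequality constraints g_i(x) <= 0 of Eqs. (36a)-(36j)."""
    x1, x2, x3, x4, x5, x6, x7, x8, x9, x10, x11, x12, _x13 = x
    return [
        2 * x1 + 2 * x2 + x10 + x11 - 10,
        2 * x1 + 2 * x3 + x10 + x12 - 10,
        2 * x2 + 2 * x3 + x11 + x12 - 10,
        -8 * x1 + x10,
        -8 * x2 + x11,
        -8 * x3 + x12,
        -2 * x4 - x5 + x10,
        -2 * x6 - x7 + x11,
        -2 * x8 - x9 + x12,
    ]
```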

Fig. 17
figure 17

Variation of \(SR\) (%) for different \(\lambda\) values for Floudas and Pardalos’s test function

As can be seen from Fig. 17, the maximum \(SR\) value of 62% is obtained for the solution with \(\lambda =100\). Therefore, this solution is selected to compare the performance of the proposed SHALO approach with the ALO, saALO, EALO, GA, and PSO approaches. Similarly, all the approaches are executed 100 times for the same random number seed data set as SHALO. Table 9 compares the identified results of the 100 model executions using the different approaches.

Table 9 Evaluation of the 100 model executions for Floudas and Pardalos’s test function for different optimization approaches

As can be seen from Table 9, the original ALO, saALO, and EALO approaches could not find the global optimum solution in any of the 100 model executions. On the other hand, the GA, PSO, and the proposed SHALO approaches found the global optimum solution 8, 10, and 62 times, respectively. These results indicate that the proposed SHALO approach provides the best identification results in terms of the calculated \(SR\) measure. For the \(AC\) measure, the original ALO and EALO approaches yield a mean value of 58%, and the saALO approach 59%, whereas the GA, PSO, and the proposed SHALO approaches provide mean \(AC\) values of 90%, 88%, and 97%, respectively. When the results are evaluated statistically, the mean and median objective function values of SHALO are very close to the global optimum solution compared to the other approaches in Table 9. The convergence plots of each approach are compared in Fig. 18 in terms of the mean objective function values of the 100 model executions. It is seen from Fig. 18 that the proposed SHALO approach converges to the minimum mean objective function value compared to the other approaches.

Fig. 18
figure 18

Comparison of the convergence histories of different approaches for the Floudas and Pardalos’s test function

For the best solutions in Table 9, the final decision variables, objective, and constraint function values for each approach are given in Table 10. As can be seen, GA, PSO, and the proposed SHALO approach determined the optimum decision variable values, while the original ALO, saALO, and EALO approaches could not fully determine them. When the results are evaluated in terms of the constraint violations, all the constraint functions are satisfied without any violation by each approach.

Table 10 The final decision variables, objective, and constraint function values for Floudas and Pardalos’s test function for the best solutions in Table 9

5.2.2 Rastrigin’s constrained function

Rastrigin’s constrained function, proposed by Poole and Allen [49], is a modified version of the unconstrained Rastrigin’s test function. It includes a parameter controlling the number of global optima, the Heaviside function, and a constraint function which transforms the original Rastrigin’s function into a constrained one. The mathematical form of the objective and constraint functions together with the lower and upper bounds of the decision variables is given in Eq. (37).

$${\text{Min}} f\left(x\right)=\sum_{k=1}^{d}\left\{10\left(1+{\text{cos}}\left(2\pi {\mathcal{K}}_{k}{x}_{k}\right)\right)+2{\mathcal{K}}_{k}{\left({x}_{k}-1\right)}^{2}H\left({x}_{k}-1\right)\right\}$$
(37)
$${\text{s.t.}}\,g\left(x\right)=\sum_{k=1}^{d}20{\text{cos}}\left(4\pi {\mathcal{K}}_{k}{x}_{k}\right)\le 0$$
(37a)
$$H\left(y\right)=\left\{\begin{array}{l}1 \,\;\;\;\text{if}\,\;\; y>0\, \\ 0\, \;\;\;\text{otherwise}\end{array}\right.$$
(37b)
$$0\le {x}_{k}\le 2,\;\; k=1, 2, 3, \cdots , d$$
(37c)

where \(H\left(y\right)\) is the Heaviside function; and \({\mathcal{K}}_{k}\in {\varvec{\mathcal{K}}}\) is a parameter which controls the number of global optima in each search direction. Note that this modified problem has \({2}^{d}\prod_{k=1}^{d}{\mathcal{K}}_{k}\) global optimum solutions having the same objective function value of \(f=10d-5d\sqrt{2}\) [49]. For \(d=2\), Fig. 19 shows a two-dimensional representation of the Rastrigin’s constrained function. As can be seen, the function has 24 different locations where the global optimum exists. For this case, the problem is solved for \(d=5\) decision variables and the corresponding \({\mathcal{K}}_{k}\) values are set as \({\varvec{\mathcal{K}}}=\left[1\; 1\; 1\; 1\; 2\right]\).
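A sketch of Eqs. (37)–(37b) for the setting used here (\(d=5\), \({\varvec{\mathcal{K}}}=\left[1\;1\;1\;1\;2\right]\)); the test point used below is our own example of a point attaining the stated optimum value \(10d-5d\sqrt{2}\), where \({\text{cos}}\left(2\pi {\mathcal{K}}_{k}{x}_{k}\right)=-\sqrt{2}/2\) and the constraint is active:

```python
import math

K = [1, 1, 1, 1, 2]  # the K_k values used in this study (d = 5)

def rastrigin_constrained(x, K=K):
    """Objective of Eq. (37) with the Heaviside term of Eq. (37b)."""
    total = 0.0
    for kk, xk in zip(K, x):
        h = 1.0 if xk - 1 > 0 else 0.0  # Heaviside H(x_k - 1)
        total += (10 * (1 + math.cos(2 * math.pi * kk * xk))
                  + 2 * kk * (xk - 1) ** 2 * h)
    return total

def rastrigin_constraint(x, K=K):
    """Constraint g(x) = sum 20 cos(4 pi K_k x_k) <= 0 of Eq. (37a)."""
    return sum(20 * math.cos(4 * math.pi * kk * xk) for kk, xk in zip(K, x))
```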

Fig. 19
figure 19

Two-dimensional representation of the Rastrigin’s constrained function (the black diamonds represent the locations of the global optimum solution)

As conducted previously, the problem is solved 100 times for each \(\lambda\) value. Figure 20 shows the variation of the calculated \(SR\) measure for different \(\lambda\) values. Among the obtained results, the solution with \(\lambda =1\) is selected as the best since it has the highest \(SR\) value. The solution corresponding to the selected \(\lambda\) value is then compared with the original ALO, saALO, EALO, GA, and PSO approaches. Similarly, each approach is executed 100 times with the same random number seed data set for comparing the identified results. The results of this comparison are given in Table 11.

Fig. 20
figure 20

Variation of \(SR\) (%) for different \(\lambda\) values for Rastrigin’s constrained test function

Table 11 Evaluation of the 100 model executions for Rastrigin’s constrained test function for different optimization approaches

As can be seen from Table 11, the calculated \(SR\) values of the original ALO, saALO, and EALO approaches are 30%, 37%, and 30%, respectively. For the same conditions, the GA finds the global optimum solution with a \(SR\) value of 43%, whereas this value is 58% and 59% for the PSO and the proposed SHALO approaches, respectively. As indicated previously, the number of global optimum solutions is \({2}^{d}\prod_{k=1}^{d}{\mathcal{K}}_{k}\). By considering \(d=5\) decision variables and \({\varvec{\mathcal{K}}}=\left[1\; 1\; 1\; 1 \;2\right]\), the number of global optimum solutions having the same objective function value (14.645) is calculated as 64. Therefore, the \(AC\) measure is not calculated for this example since there is no single optimum solution. The convergence plots for each approach in terms of the mean objective function values are compared in Fig. 21.

Fig. 21
figure 21

Comparison of the convergence histories of different approaches for the Rastrigin’s constrained test function

It can be seen from Fig. 21 that the proposed SHALO approach converges to lower mean objective function values than the other approaches. For the best solutions in Table 11, the final decision variables, objective, and constraint function values are compared in Table 12. As can be seen, each approach obtains different decision variables although the corresponding objective function values are the same. This is an expected outcome since the modified problem has 64 different global optimum solutions.

Table 12 The final decision variables, objective, and constraint function values for Rastrigin’s constrained test function for the best solutions in Table 11

5.2.3 Pressure vessel design problem

The pressure vessel design problem is a popular constrained engineering design problem proposed by Kannan and Kramer [55]. The purpose of the problem is to minimize the total cost, including the material, forming, and welding costs. A general schematic view of the considered pressure vessel is given in Fig. 22. There are four design variables in this problem: \({T}_{s}\) (shell thickness), \({T}_{h}\) (head thickness), \(R\) (inner radius), and \(L\) (length of the cylindrical section). These variables are denoted as \({x}_{1}\) to \({x}_{4}\), respectively. Note that among these design variables, \({T}_{s}\) and \({T}_{h}\) take discrete values which are integer multiples of 0.0625 in., whereas \(R\) and \(L\) are continuous variables. The mathematical form of the design problem is given as follows:

Fig. 22
figure 22

The pressure vessel design problem

$${\text{min}} f\left(x\right)=0.6224{x}_{1}{{x}_{3}x}_{4}+1.7781{x}_{2}{x}_{3}^{2}+3.1661{x}_{1}^{2}{x}_{4}+19.84{x}_{1}^{2}{x}_{3}$$
(38)
$${\text{s.t.}}\,{g}_{1}\left(x\right)=-{x}_{1}+0.0193{x}_{3}\le 0$$
(38a)
$${g}_{2}\left(x\right)=-{x}_{2}+0.00954{x}_{3}\le 0$$
(38b)
$${g}_{3}\left(x\right)=-{\pi {x}_{3}^{2}x}_{4}-1.3333\pi {x}_{3}^{3}+1296000\le 0$$
(38c)
$${g}_{4}\left(x\right)={x}_{4}-240\le 0$$
(38d)
$$0.0625\le {x}_{1},{x}_{2}\le 99\times 0.0625$$
(38e)
$$10\le {x}_{3},{x}_{4}\le 200$$
(38f)

Note that Yang et al. [56] showed that this function has a near-global optimum solution at \({x}^{*}=\left[0.8125\; 0.4375\; 42.0984\; 176.6366\right]\) with a function value of \(f\left({x}^{*}\right)=6059.7143\). This engineering design problem is also executed 100 times for each \(\lambda\) value with the proposed SHALO approach. Figure 23 compares the final calculated \(SR\) values for these model executions. As can be seen, the maximum \(SR\) value of 95% is observed for the solution with \(\lambda =1\). This solution is again compared with the original ALO, saALO, EALO, GA, and PSO approaches, which are also executed 100 times with the same random number seed data set. The identified results of each approach are compared in Table 13.
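Eq. (38) and its constraints can be transcribed directly; evaluating at the reported near-optimum reproduces the cost of about 6059.71, with \({g}_{3}\) essentially active there:

```python
import math

def pressure_vessel_cost(x):
    """Total cost of Eq. (38); x = [Ts, Th, R, L]."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pressure_vessel_constraints(x):
    """Inequality constraints g_1..g_4 <= 0 of Eqs. (38a)-(38d)."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3 ** 2 * x4 - 1.3333 * math.pi * x3 ** 3 + 1296000,
        x4 - 240,
    ]
```

In a full run, the discrete nature of \({x}_{1}\) and \({x}_{2}\) would additionally require rounding candidate thicknesses to multiples of 0.0625 before evaluating the cost.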

Fig. 23
figure 23

Variation of \(SR\) (%) for different \(\lambda\) values for pressure vessel design problem

Table 13 Evaluation of the 100 model executions for pressure vessel design problem for different optimization approaches

As can be seen from Table 13, the \(SR\) values of the ALO, saALO, and EALO approaches are 84%, 88%, and 52%, respectively, whereas the same measure is 36%, 84%, and 95% for the GA, PSO, and the proposed SHALO approach, respectively. The calculated mean \(AC\) measures are above 87% for each approach. The convergence plots of each approach for the mean objective function values are compared in Fig. 24.

Fig. 24

Comparison of the convergence histories of different approaches for the pressure vessel design problem

As can be seen from Fig. 24, the proposed SHALO approach converges to a lower mean objective function value than the other approaches. For the best solutions in Table 13, Table 14 compares the final decision variables, objective, and constraint function values for each approach. As can be seen, all the identified design variables are close to each other, and the constraints are satisfied without any violation for all the approaches.

Table 14 The final decision variables, objective, and constraint function values for pressure vessel design problem for the best solutions in Table 13

5.2.4 Trapezoidal channel design problem

Design of trapezoidal composite channels is an important problem in water resources engineering. These channels convey water over long distances for various purposes, including irrigation, flood control, and water supply. Since the channels are long, even a small reduction in their dimensions corresponds to significant savings in the project budget. In this context, this problem is considered an important engineering design problem and has been examined by several researchers in the literature [57,58,59,60]. Figure 25 shows the schematic view of the considered trapezoidal channel cross section. The design variables of this problem are the bed width \((b)\), the flow depth \((y)\), and the side slopes of the left and right faces of the channel (\({z}_{1}\) and \({z}_{2}\)). These variables are denoted as \({x}_{1}\) to \({x}_{4}\), respectively.

Fig. 25

Composite channel design problem: trapezoidal cross section

where \({T}_{t}\) is the top width of the cross section; \({T}_{w}\) is the top width of the flow section; \(f\) is the freeboard; and \({n}_{1}\), \({n}_{2}\), and \({n}_{3}\) are the Manning’s surface roughness coefficients for the left face, the right face, and the bed of the channel, respectively. The objective of the channel design problem is to minimize the cost function, which can be written in mathematical form as follows:

$${\text{min}} f\left(x\right)={c}_{1}{A}_{t}+{c}_{2}{P}_{1}+{c}_{3}{P}_{2}+{c}_{4}{P}_{3}$$
(39)
$${\text{s.t.}}\,g\left(x\right)=\left|\frac{Q{n}_{e}}{\sqrt{{S}_{0}}}-\frac{{A}_{w}^{5/3}}{{P}_{w}^{2/3}}\right|-\varepsilon \le 0$$
(39a)

where \({c}_{1}\) is the cost of excavation per unit cross-sectional area for a unit length of the channel; \({c}_{2}\), \({c}_{3}\), and \({c}_{4}\) are the costs of lining per unit length of the perimeters of the left-side face, the right-side face (both including the freeboard), and the channel bed, respectively; \({A}_{t}\) is the total cross-sectional area of the channel; \({P}_{1}\), \({P}_{2}\), and \({P}_{3}\) are the perimeters of the left and right side faces (including the freeboard) and the channel bed, respectively; \(\varepsilon\) is the allowable threshold value; \(Q\) is the flow rate in the channel; \({S}_{0}\) is the bed slope; \({A}_{w}\) is the wetted flow area; \({P}_{w}\) is the wetted perimeter; and \({n}_{e}\) is the equivalent Manning’s surface roughness coefficient. The mathematical forms of these parameters are given in Eqs. (39b)–(39l) [60].

$${A}_{w}=by+\left({z}_{1}+{z}_{2}\right)\frac{{y}^{2}}{2}$$
(39b)
$${P}_{w}=\left\{\left[{\left({z}_{1}^{2}+1\right)}^{1/2}+{\left({z}_{2}^{2}+1\right)}^{1/2}\right]y+b\right\}$$
(39c)
$${T}_{w}=b+\left({z}_{1}+{z}_{2}\right)y$$
(39d)
$${n}_{e}={\left[\frac{\left(\sqrt{{z}_{1}^{2}+1}\cdot {n}_{1}^{3/2}+\sqrt{{z}_{2}^{2}+1}\cdot {n}_{2}^{3/2}\right)y{+b\cdot n}_{3}^{3/2}}{{P}_{w}}\right]}^{2/3}$$
(39e)
$${A}_{t}=b\left(y+f\right)+\left({z}_{1}+{z}_{2}\right)\frac{{\left(y+f\right)}^{2}}{2}$$
(39f)
$${P}_{t}=\left\{\left[{\left({z}_{1}^{2}+1\right)}^{1/2}+{\left({z}_{2}^{2}+1\right)}^{1/2}\right]\left(y+f\right)+b\right\}$$
(39g)
$${P}_{1}=\sqrt{\left({z}_{1}^{2}+1\right)}\left(y+f\right)$$
(39h)
$${P}_{2}=\sqrt{\left({z}_{2}^{2}+1\right)}\left(y+f\right)$$
(39k)
$${P}_{3}=b$$
(39l)

Similarly, this constrained design problem is also executed 100 times for each \(\lambda\) value by considering the previously generated random number seeds. During these solutions, the constants required to solve the channel design problem are taken as: \(Q=100\), \(f=0.5\), \({S}_{0}=0.0016\), \({n}_{1}=0.018\), \({n}_{2}=0.020\), \({n}_{3}=0.015\), \({c}_{1}=0.60\), \({c}_{2}=0.25\), \({c}_{3}=0.20\), \({c}_{4}=0.30\), and \(\varepsilon =0.001\). Figure 26 compares the determined \(SR\) values for these model executions. As can be seen, the best solution is obtained for \(\lambda =0\), which gives an \(SR\) value of 84%. By considering this solution, the performance of SHALO is also compared with the original ALO, saALO, EALO, GA, and PSO approaches by solving the same problem under the same conditions. Table 15 compares the statistical results of these model executions for each approach.
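Using the constants listed above, the Manning constraint of Eq. (39a) can be evaluated for any candidate design through Eqs. (39b), (39c), and (39e). The sketch below codes these three relations directly; the candidate dimensions in the usage line are arbitrary illustrations, not a solution from the study:

```python
import math

# Constants taken from the study setup described above
Q, S0, eps = 100.0, 0.0016, 0.001
n1, n2, n3 = 0.018, 0.020, 0.015

def hydraulics(b, y, z1, z2):
    """Wetted area, wetted perimeter, and equivalent roughness
    (Eqs. (39b), (39c), and (39e))."""
    s1, s2 = math.sqrt(z1**2 + 1), math.sqrt(z2**2 + 1)
    A_w = b * y + (z1 + z2) * y**2 / 2
    P_w = (s1 + s2) * y + b
    n_e = (((s1 * n1**1.5 + s2 * n2**1.5) * y + b * n3**1.5) / P_w) ** (2 / 3)
    return A_w, P_w, n_e

def g_manning(b, y, z1, z2):
    """Constraint of Eq. (39a); feasible designs satisfy g <= 0."""
    A_w, P_w, n_e = hydraulics(b, y, z1, z2)
    return abs(Q * n_e / math.sqrt(S0) - A_w**(5 / 3) / P_w**(2 / 3)) - eps

# Illustrative (not optimal) candidate: b = 6, y = 3, z1 = z2 = 1.5
print(g_manning(6.0, 3.0, 1.5, 1.5))
```

Since \(n_e\) is a weighted mean of the surface roughness values, it always lies between the smallest and largest of \(n_1\), \(n_2\), and \(n_3\), which provides a quick sanity check on the implementation.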

Fig. 26

Variation of \(SR\) (%) for different \(\lambda\) values for composite channel design problem

Table 15 Evaluation of the 100 model executions for composite channel design problem for different optimization approaches

As can be seen from Table 15, the \(SR\) values of the original ALO and saALO are obtained as 13%, while the same measure is obtained as 3% for EALO. On the other hand, the GA, PSO, and proposed SHALO approaches solve the problem with \(SR\) values of 4%, 46%, and 84%, respectively. This outcome is also reflected in the statistical results in Table 15: the standard deviation obtained with SHALO is lower than those of the other approaches. For the \(AC\) metric, the original ALO, saALO, EALO, and GA approaches result in mean values of 79%, 80%, 75%, and 84%, respectively, while PSO and SHALO attain 93% and 96%. The convergence plots of the mean objective function values of the 100 model executions of the ALO, saALO, EALO, GA, PSO, and SHALO approaches are compared in Fig. 27. It can be seen from Fig. 27 that the proposed SHALO approach converges to a lower mean objective function value than the other approaches. For the best solutions in Table 15, the final decision variables, objective, and constraint function values are given in Table 16. As can be seen, although there are some differences in the identified design variables, they do not produce significant variations in the calculated objective function values. Furthermore, all the solutions are obtained without any constraint violations.

Fig. 27

Comparison of the convergence histories of different approaches for the composite channel design problem

Table 16 The final decision variables, objective, and constraint function values for composite channel design problem for the best solutions in Table 15

6 Conclusions

In this study, the SHuffled Ant Lion Optimization (SHALO) approach is proposed. The main contribution of this approach is to improve the random walk around antlions process of ALO by means of two new modifications. First, a new exponentially weighted approach is proposed for the boundary shrinking procedure to increase the random search capability of the algorithm. Second, the diversity of the random walk vector is increased by means of the proposed shuffling approach. The performance of the proposed SHALO approach is evaluated on four unconstrained and two constrained benchmark problems, and two constrained engineering design problems. Each problem is solved 100 times to evaluate the performance of the proposed approach for different random number seed values. Furthermore, the performance of the proposed boundary shrinking procedure is evaluated for 12 different cases in terms of different magnitudes of random perturbations. All the identified results are also compared with the ones obtained by using the original ALO, two modified versions of ALO (saALO and EALO), GA, and PSO. Although the overall results of the proposed SHALO approach are quite effective, the following critical issues need to be taken into account when solving an optimization problem:

The main critical point of ALO-based approaches is the necessity of completing the given number of iterations to obtain a solution. This is important since the boundary shrinking procedure in the random walk around antlions process requires specifying the dynamic change points (e.g., \(0.10T\), \(0.50T\), \(0.75T\), \(0.90T\), and \(0.95T\) given in Fig. 2) for the iteration domain. Therefore, the optimization process must proceed until the maximum number of iterations is reached. The proposed SHALO approach also uses this solution strategy and requires solving the problem until reaching the maximum iteration number without considering any other termination criterion. This solution strategy may increase the required CPU times, especially for real-world engineering optimization problems.

One of the important contributions of this study is the newly proposed exponentially weighted boundary shrinking approach. Note that in the original ALO, the boundary shrinking procedure is conducted by suddenly changing the value of \(I\) at some stationary points (Fig. 2). These stationary points may decrease the performance of the random search feature of the algorithm. This issue is eliminated by the proposed exponentially weighted boundary shrinking approach in such a way that the value of \(I\) is continuously and exponentially increased between given minimum and maximum points for each region in Fig. 2. Note that these minimum and maximum \(I\) values are kept the same as in the original ALO to make an exact comparison of the approaches. It may be possible to achieve better results for different minimum and maximum \(I\) values for each region in Fig. 2; however, this issue is beyond the scope of this work and can be examined in future studies.

The other issue of the proposed exponentially weighted boundary shrinking procedure is to control the magnitudes of the random perturbations. For this purpose, a weight parameter \(\lambda\) is used in such a way that larger values of this parameter increase the random perturbation in the calculated value of \(I\). Since there is no systematic way of determining the \(\lambda\) value, the performance of the proposed SHALO approach should be evaluated for different values of \(\lambda\), as conducted in this study.
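The role of \(\lambda\) can be illustrated with a small sketch. The exact update equation for \(I\) is given earlier in the paper; the form below is an assumed illustrative one (exponential growth of \(I\) from a region's minimum to its maximum, with \(\lambda\) scaling a random perturbation), not the authors' equation:

```python
import random

def shrink_ratio(t, t_lo, t_hi, I_lo, I_hi, lam, rng=random.Random(0)):
    """Hypothetical exponentially weighted boundary-shrinking ratio:
    I grows exponentially from I_lo to I_hi as iteration t moves
    through the region [t_lo, t_hi] of Fig. 2, and lam scales a
    random perturbation (lam = 0 gives the smooth, unperturbed curve)."""
    frac = (t - t_lo) / (t_hi - t_lo)          # position inside the region
    I = I_lo * (I_hi / I_lo) ** frac           # smooth exponential growth
    return I * (1.0 + lam * rng.random())      # lambda-weighted perturbation

# With lam = 0 the ratio moves smoothly between the region's end points
print(shrink_ratio(550, 500, 750, 1e2, 1e3, 0.0))  # between 100 and 1000
```

Larger \(\lambda\) values inflate \(I\) more strongly, which is consistent with the statement above that bigger weights increase the random perturbation in the calculated value of \(I\).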

Another contribution of the proposed SHALO approach is to shuffle the elements of the random walk vector. This process is conducted by modifying the min–max normalization equation as given between Eqs. (20) and (22). This modification significantly increases the diversity of the random walk vector and improves the local optima problem by altering the magnitudes and trends of random walks as indicated in Fig. 4. Note that these changes are particularly significant in early iterations (while shuffling in early iterations, the proposed SHALO approach may return NaN values for some variable values and this problem is handled by randomly generating those variable values in the feasible search space). In this context, with the proposed approach, it is possible to explore the search space more comprehensively at the beginning of the optimization.
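The shuffling idea can be sketched as follows. In ALO-style algorithms, a random walk is built as a cumulative sum of \(\pm 1\) steps and then min–max normalized into the current search interval; the sketch below then shuffles the normalized elements to diversify the visiting order. This is an illustrative reconstruction, not the exact modification of Eqs. (20)–(22):

```python
import random

def shuffled_random_walk(T, c, d, rng=random.Random(42)):
    """Cumulative +/-1 random walk of length T, min-max normalized
    into the interval [c, d], with its elements then shuffled."""
    steps = [2 * (rng.random() > 0.5) - 1 for _ in range(T)]   # +/-1 steps
    walk, s = [], 0
    for st in steps:                       # cumulative sum of the steps
        s += st
        walk.append(s)
    lo, hi = min(walk), max(walk)
    norm = [c + (w - lo) * (d - c) / (hi - lo) for w in walk]  # min-max scaling
    rng.shuffle(norm)                      # shuffle the normalized positions
    return norm

walk = shuffled_random_walk(1000, -10.0, 10.0)
print(min(walk), max(walk))  # the interval end points are preserved
```

Shuffling preserves the set of visited positions (and hence the interval covered by the walk) while destroying the smooth temporal trend, which matches the described effect of altering the magnitudes and trends of the random walks.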

The performance of the proposed SHALO approach is compared with the original ALO, saALO, and EALO approaches by solving each problem for the same random number seed values. Therefore, all the problems are solved by considering the same initial populations and the same sequences of random numbers. This can be clearly seen from the convergence plots: the original ALO, saALO, and EALO approaches start the search process from the same function value since their initial populations are the same. Although the same initial populations are used, the proposed SHALO approach starts the search process from different function values. This result is associated with the shuffled random walk vector, which influences the function values at the start of the search process.

The performance of the SHALO approach is also evaluated by solving the same problems with the GA and PSO approaches. During these solutions, the built-in GA and PSO optimization modules of the MATLAB platform are used. Since the solutions in the ALO, saALO, EALO, and SHALO approaches are conducted with a population size of 50 and a maximum of 1000 iterations, the same values are also used for both GA and PSO to make an exact comparison in terms of the total number of function evaluations. However, different values of these parameters may provide better results for GA and PSO, which is beyond the scope of this work. Furthermore, although the same random number seeds are used, both GA and PSO start the search process from different initial points since the solution sequences of MATLAB’s built-in modules are different.

The reason for solving all the problems 100 times is to evaluate the performance of the SHALO approach for different random number seeds, which is an important analysis of the robustness of the approach. In this context, the outcome of these 100 model executions is evaluated by using different statistical metrics (minimum, maximum, mean, median, and standard deviation). Furthermore, all the solutions are evaluated in terms of the success rate \((SR)\) and accuracy \((AC)\) metrics. These evaluations indicate that the proposed SHALO approach statistically provides better results than all the other approaches. The mean \(SR\) values over the solved problems are obtained as 22% for ALO, 25% for saALO, 14% for EALO, 28% for GA, and 49% for PSO, whereas the proposed SHALO achieves a mean \(SR\) value of 76%, which is significantly better than the other approaches. Similarly, the mean \(AC\) values are obtained as 83% for ALO and EALO, 84% for saALO, 88% for GA, and 91% for PSO; the proposed SHALO approach likewise results in a mean \(AC\) value of 91%. These results indicate that the use of the SHALO optimization approach can increase the rate of reaching the global optimum solution for the problems solved in this work.