1 Introduction

Optimization is the process of determining the values of a problem’s variables that minimize (or maximize) an objective function. It can be stated formally as follows:

Given a function f to minimize, where \(f:\mathbb {R}^D \rightarrow \mathbb {R}\), with D as the dimension of the problem (the number of variables), the goal is to find a vector \(x_{o} \in \mathbb {R}^D\) such that \(f({x_o}) \leqslant f(x) \,\, \forall \, x \in \mathbb {R}^D\), while \(x_{o}\) satisfies the inequality and equality constraints \({g_p}({x_o}) \leqslant 0\) and \({h_q}({x_o}) = 0\), with \(p = 1,2,\ldots,P\) and \(q = 1,2,\ldots,Q\), where P and Q are the numbers of inequality and equality constraints, respectively. Additionally, the bound constraints \(x_i^l \leqslant {x_i} \leqslant x_i^u\) must hold, where \(i=1,2,\ldots,D\) and the i-th variable varies in the interval \([x_i^l,x_i^u]\).
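
As a concrete, hypothetical illustration of this formulation, the sketch below encodes a two-variable objective with one inequality constraint and box bounds; the functions and values are invented for the example and do not come from the paper.

```python
# Hypothetical instance of the formulation above: minimize f(x) = x1^2 + x2^2
# subject to one inequality constraint g(x) = 1 - x1 - x2 <= 0 and the bound
# constraints below (D = 2, P = 1, Q = 0).

def f(x):
    return x[0] ** 2 + x[1] ** 2

def g(x):  # feasible when g(x) <= 0
    return 1.0 - x[0] - x[1]

bounds = [(-5.0, 5.0), (-5.0, 5.0)]  # [x_i^l, x_i^u] for i = 1..D

def is_feasible(x):
    in_bounds = all(lo <= xi <= hi for xi, (lo, hi) in zip(x, bounds))
    return in_bounds and g(x) <= 0.0

feasible_point = is_feasible([0.5, 0.5])    # lies on the constraint boundary
infeasible_point = is_feasible([0.0, 0.0])  # violates g(x) <= 0
```

Note that the unconstrained optimum (0, 0) is infeasible here, which is exactly the situation constraint-handling techniques must deal with.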

Optimization problem-solving strategies are classified as deterministic or stochastic. Deterministic methods are grouped into gradient-based and gradient-free approaches; both perform well on linear, convex, and otherwise simple optimization tasks. Nevertheless, these methods break down on complex problems involving non-differentiable objective functions, nonlinear search spaces, non-convexity, or NP-hardness. Given that many real-world optimization problems are NP-hard, the scientific community frequently turns to stochastic methodologies such as metaheuristics instead of deterministic methods.

Fig. 1

Metaheuristics classification

Some of the metaheuristics and their classification are depicted in Fig. 1 and described as follows:

  • Evolutionary algorithm: Genetic Algorithm (GA) (Holland 2006), Differential Evolution (DE) (Ahmad et al. 2022), Cultural Algorithm (CA) (Maheri et al. 2021), Memetic Algorithm (MA) (Ahrari and Essam 2022), Evolutionary Strategy (ES) (Beyer et al. 2002), and Gradient Evolution Algorithm (GEA) (Kim and Lee 2023).

  • Swarm-based algorithm: Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995), Dingo Optimization Algorithm (DOA) (Peraza-Vázquez et al. 2021), Black Widow Optimization Algorithm (BWOA) (Peña-Delgado et al. 2020), Jumping Spider Optimization Algorithm (JSOA) (Peraza-Vázquez et al. 2021), Fennec Fox Algorithm (FFA) (Trojovska et al. 2022), Artificial Rabbits Optimization (ARO) (Wang et al. 2022), Tunicate Swarm Algorithm (TSA) (Kaur et al. 2020), and Mountain Gazelle Optimizer (Abdollahzadeh et al. 2022).

  • Ancient-based algorithm: Giza Pyramids Construction (GPC) (Harifi et al. 2021) and Al-Biruni Earth Radius (BER) Metaheuristic Search Optimization Algorithm (El-Kenawy et al. 2022).

  • Physics-based algorithm: Simulated Annealing (SA) (Duan and Hou 2021), Thermal Exchange Optimization (TEO) (Kaveh and Dadras 2017), Gravitational Search Algorithm (GSA) (Mittal et al. 2021), Momentum Search Algorithm (MSA) (Dehghani and Samet 2020), Water Cycle Algorithm (WCA) (Sadollah et al. 2016), Electro-Magnetism Optimization (EMO) (Abedinpourshotorban et al. 2016), Kepler Optimization Algorithm (KOA) (Abdel-Basset et al. 2023), and Cyclical Parthenogenesis Algorithm (CPA) (Kaveh and Bakhshpoori 2019).

  • Chemistry-based algorithm: Chemical Reaction Optimization (CRO) (Lam and Li 2012), Crystal Structure Algorithm (CryStAl) (Talatahari et al. 2021), and Artificial Chemical Process (ACP) (Irizarry 2004).

  • Human-based algorithm: Sewing Training-Based Optimization (STBO) (Dehghani et al. 2022), Society Civilization Algorithm (SCA) (Ray and Liew 2003), Anarchic Society Optimization (ASO) (Ahmadi-Javid 2011), Teamwork Optimization Algorithm (TOA) (Dehghani and Trojovský 2021), Imperialist Competitive Algorithm (ICA) (Atashpaz-Gargari and Lucas 2007), and Coronavirus Mask Protection Algorithm (CMPA) (Yuan et al. 2023).

  • Plant-based algorithm: Invasive Weed Optimization (IWO) (Xing and Gao 2014), Artificial Plant Optimization (APO) (Zhao et al. 2011), Artificial Root Foraging Algorithm (ARFA) (Liu et al. 2017), Rooted Tree Optimization (RTO) (Labbi et al. 2016), Artificial Algae Algorithm (AAA) (Uymaz et al. 2015), Willow Catkin Optimization Algorithm (WCO) (Pan et al. 2023), Strawberry Algorithm (SBA) (Khan 2018), and Waterwheel Plant Algorithm (Abdelhamid et al. 2023).

  • Music-based / Art-based algorithm: Harmony Search Algorithm (HSA) (Kim 2016), Chaotic Harmony Search Algorithm (CJSA) (Alatas 2010), Musical Composition Algorithm (MMC) (Mora-Gutiérrez et al. 2014), Melody Search Algorithm (MSA) (Ashrafi and Dariane 2011), and Stochastic Paint Optimizer (Kaveh et al. 2022).

  • Sport-based algorithm: Volleyball Premier League (VPL) (Moghdani and Salimifard 2018), Puzzle Optimization Algorithm (POA) (Patil et al. 2022), Running City game optimizer (RCGO) (Ma et al. 2023), Football Game-Based Optimization (FGBO) (Fadakar and Ebrahimi 2016), and Alpine Skiing Optimization (ASO) (Yuan et al. 2022, 2023a, b).

  • Mathematical-based algorithm: Stochastic Fractal Search (SFS) (Salimi 2015), Hyper-Spherical Search (HSS) (Karami et al. 2014), Arithmetic Optimization Algorithm (AOA) (Abualigah et al. 2021), and Sine-Cosine Algorithm (SCA) (Mirjalili 2016).

  • Single-solution-based algorithm: Large Neighbourhood Search (LNBS) (Pisinger and Ropke 2010), Tabu Search (TS) (Yu et al. 2023), and Variable Neighbourhood Search (VNBS) (Hansen and Mladenović 2018).

  • Hybrid algorithm: Cuckoo optimization algorithm and SailFish optimizer (COA-SFO) (Ikram et al. 2023), Hybrid Mutualism Mechanism-inspired Butterfly and Flower Pollination Optimization Algorithm (HMMB-FPOA) (Pratha et al. 2023), Butterfly Optimization Algorithm Combined with Black Widow Optimization (BFA-BWOA) (Xu et al. 2022), Equilibrium Whale Optimization Algorithm (EWOA) (Tan and Mohamad-Saleh 2023), Improved Dingo Optimization Algorithm (IDOA) (Almazán-Covarrubias et al. 2022), Cuckoo Search and Stochastic Paint Optimizer (CSSPO) (Ismail et al. 2023), Elite opposition-based learning, Chaotic k-best gravitational search strategy, and Grey wolf optimizer (EOCSGWO) (Yuan et al. 2022), Adaptive resistance and Stamina strategy-based Dragonfly algorithm (ARSSDA) (Yuan et al. 2020), and Coulomb force search strategy-based dragonfly algorithm (CFSSDA) (Yuan et al. 2020).

Since no single algorithm can effectively and efficiently solve every optimization problem, or even all instances of the same problem (Joyce and Herrmann 2017), the scientific community keeps developing new algorithms that outperform or compete with those already described in the literature.

A generic metaheuristic framework consists of four phases described below and depicted in Fig. 2.

Phase 1: An initial population of vectors is randomly generated. This population evolves with each iteration. Each vector represents a search agent, and the population size can affect the algorithm’s performance.

Phase 2: The fitness of each vector is obtained by evaluating the objective function over the entire population. When minimizing, the vector with the lowest fitness is the best.

Phase 3: Mathematical functions recombine the vectors. These functions can model the behavior of living beings or physical/chemical phenomena, as in bio-inspired operators.

Phase 4: The vector with the best fitness is compared against the previously recombined vectors in each iteration. If the stop condition (the maximum number of iterations) has not been met, the algorithm returns to Phase 2; otherwise, the best fitness value reached is reported. The number of iterations, known as generations, is defined before starting the algorithm and influences its performance during this evolutionary process.
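
The four phases can be sketched as a short loop. The following hypothetical Python sketch uses the sphere function and a placeholder Gaussian perturbation as the Phase-3 operator; a real metaheuristic substitutes its own bio-inspired operators there (the paper's implementation is in MATLAB and is not reproduced here).

```python
import random

# Minimal sketch of the generic four-phase metaheuristic framework.
# The recombination operator is a simple Gaussian perturbation used
# only for illustration.

def generic_metaheuristic(f, dim, lower, upper, pop_size=30, max_iter=200, seed=0):
    rng = random.Random(seed)
    # Phase 1: random initial population of search agents
    pop = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(pop_size)]
    # Phase 2: evaluate fitness; when minimizing, lowest fitness is best
    fitness = [f(x) for x in pop]
    for _ in range(max_iter):  # Phase 4: stop condition (number of generations)
        for i, x in enumerate(pop):
            # Phase 3: recombine/perturb the vector (placeholder operator)
            cand = [min(max(xi + rng.gauss(0.0, 0.1), lower), upper) for xi in x]
            fc = f(cand)
            if fc < fitness[i]:  # greedy replacement
                pop[i], fitness[i] = cand, fc
    best_idx = min(range(pop_size), key=lambda i: fitness[i])
    return pop[best_idx], fitness[best_idx]

best, val = generic_metaheuristic(lambda x: sum(xi * xi for xi in x),
                                  dim=2, lower=-5.0, upper=5.0)
```

On the 2-D sphere function the loop steadily drives the best fitness toward the optimum at the origin.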

Fig. 2

The generic framework of a Metaheuristic Algorithm

Finally, every metaheuristic should strike a good balance between the ability to explore (diversify) and to exploit (intensify) the solution search space. In other words, it should combine global and local search strategies to improve its performance.

This paper proposes the Horned Lizard Optimization Algorithm (HLOA), a novel swarm-based algorithm inspired by how the horned lizard conceals and defends itself from predators. The contributions of this work can be summarized as follows:

  • A novel bio-inspired optimization algorithm that models the horned lizard’s behaviors for defending itself from predators. Four defense strategies are included: crypsis, skin darkening or lightening, blood-squirting, and move-to-escape. The alpha-melanophore stimulating hormone rate, which influences the lizard’s skin color change, is also considered. These strategies could inspire other researchers to explore new directions and applications in the bio-inspired algorithms field, leading to a proliferation of related research.

  • HLOA performance is evaluated on the following sets of functions: the IEEE CEC 2017 “Constrained Real-Parameter Optimization” benchmark problems in 10, 30, 50, and 100 dimensions, the IEEE CEC06-2019 “100-Digit Challenge” test functions, and sixty-three testbench functions from the literature. Furthermore, three real-world constrained optimization applications from CEC2020 and two engineering problems, multiple gravity assist optimization and the optimal power flow problem, were also tested.

  • Comparing the HLOA algorithm with other approaches allows the scientific community to understand the relative strengths and weaknesses of different techniques evaluated on the same test instances.

  • The algorithm is validated with Friedman tests, Wilcoxon tests, and convergence analyses. The results are compared to those of ten recently developed bio-inspired metaheuristic algorithms.

  • The scientific community can access the HLOA’s Matlab source code to support this study’s findings.

The remaining sections of this paper are structured as follows: The second section illustrates the HLOA bio-inspiration, a detailed mathematical formulation, the time complexity, and the pseudo-code. Then, the performance of the proposed approach is benchmarked with several testbench functions, and their comparison with ten recent bio-inspired algorithms is presented in the third section. In the fourth section, the algorithm’s results and discussion are presented. The fifth section describes the application of HLOA to real-world optimization problems and the constraint-handling technique employed. Finally, the paper summarizes the conclusions and future work.

2 Horned lizard optimization algorithm (HLOA)

2.1 Biological fundamentals

The horned lizard, scientifically known as Phrynosoma, is a reptile endemic to the region from the south-central United States to northeastern Mexico. It is adapted to arid and semiarid areas with extreme temperatures. Horned lizards feed on various species, including grasshoppers, crickets, beetles, spiders, ticks, butterflies, and moths (Leaché and McGuire 2006). Their primary passive method of defense is crypsis (Stevens and Merilaita 2011; Ruxton et al. 2004), the capacity to blend in with their surroundings through color, pattern, and shape. For example, the color pattern of horned lizards changes geographically to match the terrain, and their spines break up their body outlines, making them hard to spot (Ruxton et al. 2004). Another passive defensive strategy is move-to-escape. Moreover, when threatened, this lizard employs aggressive tactics such as expelling a short stream of blood that travels more than a meter (Cooper and Sherbrooke 2010; Middendorf 2001). It should be noted that all reptiles, horned lizards included, resort to thermoregulation: since they cannot produce their own body heat, they depend on the surrounding temperature to maintain their warmth (Lara-Reséndiz et al. 2015; Grigg and Buckley 2013). In addition, the horned lizard can lighten or darken its skin, depending on whether it needs to decrease or increase its solar thermal gain. Thus, at high temperatures (25\(^{\circ }\) to 40\(^{\circ }\) C) it acquires a lighter color, whereas at low temperatures (16\(^{\circ }\) to 17\(^{\circ }\) C) its skin darkens. Dark skin does not reflect any color; on the contrary, it absorbs all wavelengths of light, turning them into heat. The rapid color change of the horned lizard’s skin is due to the effects of temperature on its alpha-melanophore stimulating hormone (\(\alpha\)-MSH) (Sherbrooke 1997) (Fig. 3).

Fig. 3

Horned Lizard Phrynosoma. Photograph by Brdavids (published under a CC BY 2.0 license)

2.2 Mathematical model and optimization algorithm

As previously described, the lizard can defend itself by changing its colors to match its surroundings. Additionally, it can lighten or darken its skin, depending on whether it needs to increase or decrease its solar thermal gain. The rate of the lizard’s alpha-melanophore stimulating hormone (\(\alpha\)-MSH) is a factor in this rapid color change. Moreover, it can shoot a short stream of blood to defend itself against predators. In this work, each of these defense behaviors is mathematically modeled as part of the optimization algorithm.

2.2.1 Strategy 1: Crypsis behavior

Crypsis is the process through which an organism blends in with its surroundings by imitating characteristics of the environment, such as color and texture, or even by becoming translucent, making it difficult for predators or prey to detect or recognize it; see Fig. 5. It is an adaptive behavior that helps organisms avoid detection, thus increasing their chances of survival in the wild (Ruxton et al. 2004). Since the scope of this work is the horned lizard, note that its crypsis method is mathematically represented here through color theory (Westland et al. 2012; Niall 2017).

The International Commission on Illumination (CIE) (Niall 2017) standardized light sources by the amount of energy emitted at each wavelength throughout the visible spectrum (400 to 700 nm). In addition, the organization defined color evaluation systems, e.g., the L*a*b system for Cartesian coordinates and the L*C*h system for polar coordinates, to locate a color in a color space.

In the L*a*b system, L* indicates the luminosity, and a* and b* are the chromatic coordinates, as shown below.

$$\begin{aligned} \begin{aligned} \begin{array}{l} {a^*} = \left\{ {\begin{array}{*{20}{c}} { + a,}&{}{indicates\,\,Red}\\ { - a,}&{}{indicates\,\,Green} \end{array}} \right. \\ {b^*} = \left\{ {\begin{array}{*{20}{c}} { + b,}&{}{indicates\,\,Yellow}\\ { - b,}&{}{indicates\,\,Blue} \end{array}} \right. \end{array} \end{aligned} \end{aligned}$$
(1)

In the L*C*h system, L* defines lightness, C* specifies color intensity, and h* indicates hue angle (an angular measurement). Hue moves in a circle around the "equator" to describe the color family (red, yellow, green, and blue) and all the colors in between. i.e., The numbers on the hue circle range from 0 to 360, starting with red at 0 degrees, then moving counterclockwise through yellow, green, blue, then back to red. The L axis describes the luminous intensity of the color. Comparing the value makes it possible to classify colors as light or dark. Both color system representations are shown in Figs. 4 and 5.

Fig. 4

Representation of the color space for the CIE L*a*b and L*C*h systems

The transformation of rectangular coordinates to polar coordinates can be seen in Eq. 2.

$$\begin{aligned} \begin{aligned} {c^*}&= \sqrt{{a^{*2}} + {b^{*2}}} \\ h&= \arctan \left( {\frac{{{b^*}}}{{{a^*}}}} \right) \end{aligned} \end{aligned}$$
(2)

c* and h values correspond to chroma (or saturation) and hue, respectively. The value of h is the hue angle and is expressed in degrees ranging from 0\(^\circ\) to 360\(^\circ\). The inverse formulas are as follows:

$$\begin{aligned} \begin{aligned} \begin{array}{l} {a^*} = {c^*}\cos \left( h \right) \\ {b^*} = {c^*}\sin \left( h \right) \end{array} \end{aligned} \end{aligned}$$
(3)
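
These conversions can be sketched as follows: a Python illustration of Eqs. 2 and 3, using the quadrant-safe `atan2` in place of the plain arctangent (an implementation choice, not prescribed by the text).

```python
import math

# Sketch of Eqs. 2 and 3: converting chromatic coordinates between the
# Cartesian (a*, b*) and polar (C*, h) representations.

def to_polar(a, b):
    c = math.hypot(a, b)                        # chroma C* = sqrt(a*^2 + b*^2)
    h = math.degrees(math.atan2(b, a)) % 360.0  # hue angle in [0, 360)
    return c, h

def to_cartesian(c, h):
    rad = math.radians(h)
    return c * math.cos(rad), c * math.sin(rad)  # a* = C*cos(h), b* = C*sin(h)

c, h = to_polar(3.0, 4.0)
a, b = to_cartesian(c, h)  # round-trips back to (3.0, 4.0)
```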

Without loss of generality, let the ordered pair \((a_p^*, b_q^*)\) and \((a_r^*, b_s^*)\) be any two colors, with \(p \ne q \ne r \ne s\). So, any two new colors, e.g., \(colorVar_1\) and \(colorVar_2\), can be obtained with the following arithmetic operations shown in Eq. 4

$$\begin{aligned} \begin{aligned} colorVa{r_1}&= b_p^* - a_q^* - a_r^* + b_s^*\\ colorVa{r_2}&= b_p^* - a_q^* + a_r^* - b_s^* \end{aligned} \end{aligned}$$
(4)

These colors can be represented in a single equation, as shown below.

$$\begin{aligned} \begin{aligned} colorVar = b_p^* - a_q^* \pm \left[ {a_r^* - b_s^*} \right] \end{aligned} \end{aligned}$$
(5)

Eq. 5 can be represented in the inverse form as follows:

$$\begin{aligned} \begin{aligned} colorVar = {c_1}\sin ({h_p}) - {c_1}\cos ({h_q}) \pm \left[ {{c_2}\cos ({h_r}) - {c_2}\sin ({h_s})} \right] \end{aligned} \end{aligned}$$
(6)

where the hue angles satisfy \(h_p \ne h_q \ne h_r \ne h_s\), and the chroma values satisfy \(c_{1} \ne c_{2}\). Finally, \(c_1\) and \(c_2\) are factorized as shown in Eq. 7.

$$\begin{aligned} \begin{aligned} colorVar = {c_1}\left[ {\sin ({h_p}) - \cos ({h_q})} \right] \pm {c_2}\left[ {\cos ({h_r}) - \sin ({h_s})} \right] \end{aligned} \end{aligned}$$
(7)

The position-update equation built from the arithmetic operations on chromatic coordinates of Eq. 7 is given below.

$$\begin{aligned} \begin{aligned} {\mathop x\limits ^ \rightarrow }_{i}(t + 1)&= {\mathop x\limits ^ \rightarrow }_{best}(t) + \left( {\partial - {{\partial \cdot t } \over {Max\_iter}}} \right) \\&\left[ {{c_1}\left( {\sin ({{\mathop x\limits ^ \rightarrow }_{{r_1}}}(t)) - \cos ({{\mathop x\limits ^ \rightarrow }_{{r_2}}}(t))} \right) - {{( - 1)}^\sigma } {c_2}\left( {\cos (\mathop {{x_{{r_3}}}(t)}\limits ^ \rightarrow ) - \sin ({{\mathop x\limits ^ \rightarrow }_{{r_4}}}(t))} \right) } \right] \end{aligned} \end{aligned}$$
(8)

Where \({\mathop x\limits ^ \rightarrow }_{i}(t + 1)\) is the new search-agent (horned lizard) position in the solution search space for generation \(t+1\), and \({\mathop x\limits ^ \rightarrow }_{best}(t)\) is the best search agent of generation t; \(r_1\), \(r_2\), \(r_3\), and \(r_4\) are integer random numbers generated between 1 and the maximum number of search agents, with \(r_1 \ne r_2 \ne r_3 \ne r_4\); \({{\mathop x\limits ^ \rightarrow }_{{r_1}}}(t)\), \({{\mathop x\limits ^ \rightarrow }_{{r_2}}}(t)\), \({{\mathop x\limits ^ \rightarrow }_{{r_3}}}(t)\), and \({{\mathop x\limits ^ \rightarrow }_{{r_4}}}(t)\) are the corresponding selected search agents; \(Max\_iter\) is the maximum number of iterations (generations); \(\sigma\) is a binary value obtained by Algorithm 1; \(\partial\) is set to 2; and \(c_1\) and \(c_2\), with \(c_1 \ne c_2\), are random numbers taken from Table 29, which contains the normalized color palette.
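
Equation 8 can be sketched as follows. This is a hypothetical Python illustration: Algorithm 1 and the Table 29 palette are not reproduced in this excerpt, so \(\sigma\), \(c_1\), and \(c_2\) are replaced by simple random stand-ins.

```python
import math
import random

# Sketch of the crypsis position update (Eq. 8). sigma is drawn as a random
# bit (stand-in for Algorithm 1) and c1, c2 as random values in [0, 1]
# (stand-in for the normalized color palette of Table 29).

def crypsis_update(pop, best, t, max_iter, rng, partial=2.0):
    n, dim = len(pop), len(best)
    r1, r2, r3, r4 = rng.sample(range(n), 4)   # four distinct agents
    sigma = rng.randint(0, 1)                  # stand-in for Algorithm 1
    c1, c2 = rng.random(), rng.random()        # stand-in for Table 29
    shrink = partial - partial * t / max_iter  # linearly decreasing factor
    return [best[d] + shrink * (
                c1 * (math.sin(pop[r1][d]) - math.cos(pop[r2][d]))
                - (-1) ** sigma * c2 * (math.cos(pop[r3][d]) - math.sin(pop[r4][d])))
            for d in range(dim)]

rng = random.Random(1)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
new_pos = crypsis_update(pop, pop[0], t=10, max_iter=200, rng=rng)
```

Note how the \(\left( {\partial - \partial t / Max\_iter} \right)\) factor shrinks to zero over the run, so late-stage updates collapse onto the best agent.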

Algorithm 1

\(\sigma\) procedure

Fig. 5

Horned Lizard Crypsis. Photograph by Paul Asman and Jill Lenoble (published under a CC BY 2.0 license)

2.2.2 Strategy 2: Skin darkening or lightening

The horned lizard can lighten or darken its skin, depending on whether it needs to decrease or increase its solar thermal gain (Sherbrooke and Sherbrooke 1988). Thermal energy obeys the same conservation laws as light energy (Burtt 1981), which is the key to the relationship between color and temperature: lighter colors reflect more light and thus repel more heat, whereas dark colors absorb more light energy and therefore more heat (Burtt 1981). The color changes in the horned lizard’s skin are represented by Eqs. 9 and 10: Eq. 9 models the skin-lightening strategy, whereas Eq. 10 models the skin-darkening strategy.

$$\begin{aligned} {\mathop x\limits ^ \rightarrow }_{worst}(t)&= {\mathop x\limits ^ \rightarrow }_{best}(t) + {1 \over 2}Light_1\sin \left( {{{\mathop x\limits ^ \rightarrow }_{{r_1}}}(t) - {{\mathop x\limits ^ \rightarrow }_{{r_2}}}(t)} \right) \nonumber \\&- {( - 1)^\sigma }{1 \over 2}Light_2\sin \left( {{{\mathop x\limits ^ \rightarrow }_{{r_3}}}(t) - {{\mathop x\limits ^ \rightarrow }_{{r_4}}}(t)}\right) \end{aligned}$$
(9)
$$\begin{aligned} {\mathop x\limits ^ \rightarrow }_{worst}(t)&= {\mathop x\limits ^ \rightarrow }_{best}(t) + {1 \over 2}Dark_1\sin \left( {{{\mathop x\limits ^ \rightarrow }_{{r_1}}}(t) - {{\mathop x\limits ^ \rightarrow }_{{r_2}}}(t)} \right) \nonumber \\&- {( - 1)^\sigma }{1 \over 2}Dark_2\sin \left( {{{\mathop x\limits ^ \rightarrow }_{{r_3}}}(t) - {{\mathop x\limits ^ \rightarrow }_{{r_4}}}(t)} \right) \end{aligned}$$
(10)

Where \(Light_1\) and \(Light_2\) are random numbers generated between \(Lightening_1\) (value 0) and \(Lightening_2\) (value 0.4046661), using the normalized values taken from Table 1. Analogously, \(Dark_1\) and \(Dark_2\) are random numbers generated between \(Darkening_1\) (value 0.5440510) and \(Darkening_2\) (value 1), also using the normalized values from Table 1.

In addition, for both Eqs. 9 and 10, \({\mathop x\limits ^ \rightarrow }_{worst}(t)\) and \({\mathop x\limits ^ \rightarrow }_{best}(t)\) are the worst and the best search agents found, respectively. \(r_1\), \(r_2\), \(r_3\), and \(r_4\) are integer random numbers generated between 1 and the maximum number of search agents, with \(r_1 \ne r_2 \ne r_3 \ne r_4\); \({{\mathop x\limits ^ \rightarrow }_{{r_1}}}(t)\), \({{\mathop x\limits ^ \rightarrow }_{{r_2}}}(t)\), \({{\mathop x\limits ^ \rightarrow }_{{r_3}}}(t)\), and \({{\mathop x\limits ^ \rightarrow }_{{r_4}}}(t)\) are the corresponding selected search agents. Finally, \(\sigma\) is a binary value obtained by Algorithm 1. The skin color change strategy is shown in Algorithm 2.

Table 1 The color palette used for lightening or darkening the skin
Algorithm 2

Darkening or lightening of skin procedure

Notice that the worst search agent of iteration t is replaced by the new agent obtained through the skin-darkening or skin-lightening strategy.
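
Equations 9 and 10 share the same structure and can be sketched together. In this hypothetical Python sketch the Light/Dark weights are drawn from the normalized ranges quoted above for Table 1, and \(\sigma\) is again a random stand-in for Algorithm 1.

```python
import math
import random

# Sketch of the skin lightening/darkening strategy (Eqs. 9-10). The weight
# ranges are the normalized Table 1 values quoted in the text; sigma is a
# stand-in for Algorithm 1.

def skin_color_update(pop, best, darken, rng):
    n, dim = len(pop), len(best)
    r1, r2, r3, r4 = rng.sample(range(n), 4)  # four distinct agents
    sigma = rng.randint(0, 1)                 # stand-in for Algorithm 1
    lo, hi = (0.5440510, 1.0) if darken else (0.0, 0.4046661)
    w1, w2 = rng.uniform(lo, hi), rng.uniform(lo, hi)
    return [best[d]
            + 0.5 * w1 * math.sin(pop[r1][d] - pop[r2][d])
            - (-1) ** sigma * 0.5 * w2 * math.sin(pop[r3][d] - pop[r4][d])
            for d in range(dim)]

rng = random.Random(2)
pop = [[rng.uniform(-5, 5) for _ in range(4)] for _ in range(8)]
best = pop[3]
lighter = skin_color_update(pop, best, darken=False, rng=rng)
darker = skin_color_update(pop, best, darken=True, rng=rng)
```

In the full algorithm, the vector returned here replaces the worst agent of the current generation. Because the sine terms are bounded and the weights are at most 1, the new agent always lies within distance 1 of the best agent in every dimension.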

2.2.3 Strategy 3: blood-squirting

The horned lizard fends off enemies by shooting blood from its eyes (Middendorf 2001). This blood-shooting defense mechanism can be represented as projectile motion, depicted in Fig. 6. To obtain the equations of motion, the projectile motion is separated into its two components: the X-axis (horizontal) and the Y-axis (vertical).

The blood stream follows projectile motion: uniform rectilinear motion along the horizontal axis and uniformly accelerated motion along the vertical axis. Integrating the constant gravitational acceleration gives the velocity vector:

$$\begin{aligned} \begin{aligned} \mathop \upsilon \limits ^ \rightarrow = \mathop {{\upsilon _0}}\limits ^ \rightarrow + \int _0^t {\mathop g\limits ^ \rightarrow } dt = \mathop {{\upsilon _0}}\limits ^ \rightarrow + \mathop g\limits ^ \rightarrow t \end{aligned} \end{aligned}$$
(11)

Integrating the velocity, in turn, gives the position vector:

$$\begin{aligned} \mathop r\limits ^ \rightarrow&= \mathop {{r_0}}\limits ^ \rightarrow + \int _0^t {\left( {\mathop {{\upsilon _o}}\limits ^ \rightarrow + \mathop g\limits ^ \rightarrow t} \right) } dt = \mathop {{r_0}}\limits ^ \rightarrow + \mathop {{\upsilon _o}}\limits ^ \rightarrow t + {1 \over 2}\mathop g\limits ^ \rightarrow {t^2} \end{aligned}$$
(12)
$$\begin{aligned} {\mathop r\limits ^ \rightarrow }_{0}&= \mathop 0\limits ^ \rightarrow \end{aligned}$$
(13)

The position and velocity vector equations are given in Eqs. 14 and 15, respectively.

$$\begin{aligned} \mathop r\limits ^ \rightarrow&= \left( {{\upsilon _0}\cos (\alpha )t} \right) \mathop j\limits ^ \rightarrow + \left( {({\upsilon _0}\sin (\alpha ))t - {1 \over 2}g{t^2}} \right) \mathop k\limits ^ \rightarrow \end{aligned}$$
(14)
$$\begin{aligned} \mathop \upsilon \limits ^ \rightarrow&= {{d\mathop r\limits ^ \rightarrow } \over {dt}} = \left( {{\upsilon _0}\cos (\alpha )} \right) \mathop j\limits ^ \rightarrow + \left( {{\upsilon _0}\sin (\alpha ) - gt} \right) \mathop k\limits ^ \rightarrow \end{aligned}$$
(15)

Finally, the trajectory can be expressed as follows:

$$\begin{aligned} \begin{aligned} {\mathop x \limits ^ \rightarrow }_{i}(t+1)&= \left[ {{v_o}\cos \left( {\alpha {t \over {Max\_iter}}} \right) +\varepsilon } \right] {\mathop x\limits ^ \rightarrow }_{best}(t) \\&+ \left[ {{v_o}\sin \left( {\alpha - {{\alpha t} \over {Max\_iter}}} \right) -g + \varepsilon } \right] {\mathop x\limits ^ \rightarrow }_{i}(t) \end{aligned} \end{aligned}$$
(16)

Where \({\mathop x\limits ^ \rightarrow }_{i}(t + 1)\) is the new search-agent (horned lizard) position in the solution search space for generation \(t+1\), \({\mathop x\limits ^ \rightarrow }_{best}(t)\) is the best search agent found, \({\mathop x\limits ^ \rightarrow }_{i}(t)\) is the current search agent, \(Max\_iter\) is the maximum number of iterations (generations), t is the current iteration, \(v_0\) is set to 1, \(\alpha\) is set to \(\frac{\pi }{2}\), \(\varepsilon\) is set to 1E-6, and g is the Earth’s gravity, 0.009807 \(km/{s^2}\).
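
Equation 16 can be sketched directly with the constants given above; this is a hypothetical Python illustration, not the paper's MATLAB code.

```python
import math

# Sketch of the blood-squirting update (Eq. 16), using the constants stated
# in the text: v0 = 1, alpha = pi/2, eps = 1e-6, and g = 0.009807 (Earth's
# gravity expressed in km/s^2).

V0, ALPHA, EPS, G = 1.0, math.pi / 2.0, 1e-6, 0.009807

def blood_squirting_update(x_i, best, t, max_iter):
    a = V0 * math.cos(ALPHA * t / max_iter) + EPS               # weight on best
    b = V0 * math.sin(ALPHA - ALPHA * t / max_iter) - G + EPS   # weight on x_i
    return [a * bd + b * xd for bd, xd in zip(best, x_i)]

new_x = blood_squirting_update([1.0, 2.0], [0.0, 0.5], t=0, max_iter=200)
```

As t grows, the cosine weight on the best agent shrinks while the sine weight on the current agent vanishes, mirroring the decaying projectile trajectory.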

Fig. 6

Horned lizard shooting blood

2.2.4 Strategy 4: move-to-escape

In this strategy, the horned lizard performs a fast random move around the environment to escape predators (Ruxton et al. 2004). A function that combines a local and a global movement is proposed to model this evasion strategy; it is described in Eq. 17 and depicted in Fig. 7. In this equation, \(walk\left( {{1 \over 2} - \varepsilon } \right) {\mathop x\limits ^ \rightarrow }_{i}(t)\) is a local motion around \({\mathop x\limits ^ \rightarrow }_{i}(t)\), and adding \({\mathop x\limits ^ \rightarrow }_{best}(t)\) generates a displacement through the solution search space (the global movement).

$$\begin{aligned} {\mathop x\limits ^ \rightarrow }_{i}(t + 1) = {\mathop x\limits ^ \rightarrow }_{best}(t) + walk\left( {{1 \over 2} - \varepsilon } \right) {\mathop x\limits ^ \rightarrow }_{i}(t) \end{aligned}$$
(17)

Where \({\mathop x\limits ^ \rightarrow }_{i}(t + 1)\) is the new search-agent (horned lizard) position in the solution search space for generation \(t+1\), \({\mathop x\limits ^ \rightarrow }_{best}(t)\) is the best search agent of generation t, walk is a random number generated between \(-1\) and 1, \(\varepsilon\) is a random number drawn from a standard Cauchy distribution (location 0 and scale \(\sigma = 1\)), and \({\mathop x\limits ^ \rightarrow }_{i}(t)\) is the current i-th search agent in generation t.
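
Equation 17 can be sketched as follows; sampling the Cauchy variate via the inverse CDF is an implementation choice for this hypothetical illustration, not prescribed by the text.

```python
import math
import random

# Sketch of the move-to-escape update (Eq. 17). walk is uniform in [-1, 1];
# eps is drawn from a standard Cauchy distribution (location 0, scale 1),
# sampled here via the inverse CDF.

def move_to_escape(x_i, best, rng):
    walk = rng.uniform(-1.0, 1.0)
    eps = math.tan(math.pi * (rng.random() - 0.5))  # standard Cauchy sample
    # local motion around x_i, displaced globally by the best agent
    return [b + walk * (0.5 - eps) * x for b, x in zip(best, x_i)]

rng = random.Random(0)
new_pos = move_to_escape([0.2, -1.3], [1.0, 2.0], rng)
```

The heavy-tailed Cauchy term occasionally produces very large jumps, which is what gives this strategy its escape (global exploration) character.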

Fig. 7

Horned lizard escaping from predators

2.2.5 Strategy 5: \(\alpha\)-melanophore stimulating hormone (\(\alpha\)-MSH) rate

The horned lizard can lighten or darken its skin, depending on whether it needs to decrease or increase its solar thermal gain. The rapid alteration in coloration observed on the skin of horned lizards can be attributed to the influence of temperature on the \(\alpha\)-melanophore stimulating hormone (\(\alpha\)-MSH). Additional information regarding hormone levels in horned lizards can be found in Sherbrooke (1997). In this research, the horned lizard’s \(\alpha\)-melanophore rate value is defined by the following equation:

$$\begin{aligned} melanophore(i) = {{Fitness_{\max } - Fitness(i)} \over {Fitness_{\max } - Fitness_{\min }}} \end{aligned}$$
(18)

Where \(Fitness_{min}\) and \(Fitness_{max}\) are the best and the worst fitness values in the current generation t, respectively, whereas \(Fitness(i)\) is the fitness value of the i-th search agent.

The melanophore(i) values obtained by computing Eq. 18 are normalized in the interval [0, 1]. Search agents with a low \(\alpha\)-MSH rate (less than 0.3) are replaced using Eq. 19, as described in Algorithm 3.

$$\begin{aligned} \overrightarrow{x}_{i}(t)=\overrightarrow{x}_{ best }(t) + \frac{ 1 }{ 2 } [ \overrightarrow{x}_{ { r }_{ 1 }} (t)-{ (-1) }^{ \sigma }\overrightarrow{x}_{ { r }_{ 2 }}(t)] \end{aligned}$$
(19)

Where \({\mathop x\limits ^ \rightarrow }_{i}(t)\) is the current search agent, \({\mathop x\limits ^ \rightarrow }_{best}(t)\) is the best search agent found, \(r_1\) and \(r_2\) are integer random numbers generated between 1 and the maximum number of search agents, with \(r_1 \ne r_2\); \({{\mathop x\limits ^ \rightarrow }_{{r_1}}}(t)\) and \({{\mathop x\limits ^ \rightarrow }_{{r_2}}}(t)\) are the \(r_1\)- and \(r_2\)-th selected search agents; and \(\sigma\) is a binary value obtained by Algorithm 1.
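
Equations 18 and 19 together can be sketched as follows. Algorithm 3 itself is not reproduced in this excerpt, so the replacement loop below is a hypothetical approximation of it, with \(\sigma\) again a random stand-in for Algorithm 1.

```python
import random

# Sketch of the alpha-MSH strategy (Eqs. 18-19): the melanophore rate of each
# agent is computed from its fitness, and agents whose rate falls below 0.3
# are repositioned around the best agent.

def melanophore_rates(fitness):
    f_min, f_max = min(fitness), max(fitness)
    span = (f_max - f_min) or 1.0  # guard against a population with equal fitness
    return [(f_max - f) / span for f in fitness]

def replace_low_rate(pop, best, rates, rng, threshold=0.3):
    n = len(pop)
    for i, rate in enumerate(rates):
        if rate < threshold:  # low alpha-MSH rate: reposition via Eq. 19
            r1, r2 = rng.sample(range(n), 2)
            sigma = rng.randint(0, 1)  # stand-in for Algorithm 1
            pop[i] = [b + 0.5 * (pop[r1][d] - (-1) ** sigma * pop[r2][d])
                      for d, b in enumerate(best)]
    return pop

rates = melanophore_rates([3.0, 7.0, 5.0])  # best agent gets rate 1, worst 0
```

When minimizing, the agent with the lowest fitness receives a rate of 1 and the worst agent a rate of 0, so the replacement targets the poorest performers.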

Algorithm 3

\(\alpha\)-melanophore procedure

2.2.6 The HLOA’s time complexity

The HLOA time-complexity analysis covers population initialization, fitness evaluation, and the updating of search agents (lizards). Initializing the HLOA population costs O(PopSize \(\times\) D), where PopSize is the number of search agents (lizards) and D is the dimension of the optimization problem (the number of design variables). O(T) denotes the time complexity of computing the fitness value, i.e., the objective function value, so the initial fitness evaluation is bounded by O(PopSize \(\times\) T). Therefore, the computational complexity of the HLOA main loop is O(MaxIteration \(\times\) PopSize \(\times\) (D + T)), as summarized in Algorithm 4 (Fig. 8).

2.2.7 Pseudo code for HLOA

In Algorithm 4, the pseudo-code of the HLOA algorithm is described.

Algorithm 4

Horned Lizard Optimizer Algorithm

Fig. 8

HLOA flowchart

3 Experimental setup

The numerical efficiency and stability of the HLOA algorithm were evaluated by solving 63 classical benchmark optimization functions reported in the literature. Each function is described in Appendix A, Tables 30, 31, and 32, where Dim represents the function’s dimension, Interval is the boundary of the search space, and \(f_{min}\) is the optimum value. The HLOA algorithm was compared with the following ten recent bio-inspired algorithms:

  • Jumping Spider Optimization Algorithm (JSOA): The algorithm mimics the behavior of the Arachnida Salticidae spiders in nature and mathematically models their hunting strategies: search, pursuit, and jumping skills to catch prey (Peraza-Vázquez et al. 2021).

  • Black Widow Optimization Algorithm (BWOA): It is based on modeling different spiders’ movement strategies for courtship-mating and the pheromone rate associated with cannibalistic behavior in female spiders (Peña-Delgado et al. 2020).

  • Coot Bird Algorithm (COOT): The Coot algorithm imitates two different modes of movement of birds on the water surface (Naruei and Keynia 2021).

  • Crystal Structure Algorithm (CSA): The algorithm is based on the principles underpinning the natural occurrence of crystal structures forming from the addition of the basis to the lattice points, which may be observed in the symmetrical arrangement of constituents in crystalline minerals (Talatahari et al. 2021).

  • Dingo Optimization Algorithm (DOA): The algorithm mimics the social behavior of the Australian dingo dog. Its inspiration comes from the hunting strategies of dingoes attacking by persecution, grouping tactics, and scavenging behavior (Peraza-Vázquez et al. 2021).

  • Enhanced Jaya Algorithm (EJAYA): The classic version of the Jaya algorithm has the defect of easily getting trapped in local optima. This updated version uses the population information more efficiently to improve its performance (Zhang et al. 2021).

  • Rat Swarm Optimizer (RSO): The inspiration of this optimizer is the attacking and chasing behaviors of rats in nature (Dhiman et al. 2021).

  • Smell Agent Optimization (SAO): The algorithm is based on the relationships between a smelling agent and an object evaporating a smell molecule (Salawudeen et al. 2021).

  • Tunicate Swarm Algorithm (TSA): The algorithm imitates jet propulsion and swarm behaviors of tunicates during the navigation and foraging process (Kaur et al. 2020).

  • Wild Horse Optimizer (WHO): The algorithm is based on the social behavior of wild horses, such as grazing, chasing, dominating, leading, and mating. The mathematical model includes the representation of mares, foals, and stallions living in groups (Naruei and Keynia 2021).

Each algorithm was executed 30 times per benchmark function, with the population size and number of iterations set to 30 and 200, respectively. Furthermore, the Wilcoxon signed-rank test was used to compare their performance, and the ranking of each algorithm was obtained by the Friedman test. The three best-ranked algorithms from this comparison are then evaluated on the IEEE CEC 2017 and CEC 2019 testbenches, as described below.
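For illustration, the signed-rank statistic underlying this pairwise comparison can be sketched as follows. This is a simplified, hypothetical implementation (it assumes distinct, non-zero paired differences; the full test also handles ties and converts the statistic to a p-value), not the authors' code:

```python
def wilcoxon_signed_rank(a, b):
    """Wilcoxon signed-rank statistic for paired samples.

    Returns min(W+, W-); small values indicate a significant difference.
    Assumes distinct, non-zero absolute differences (no tie handling).
    """
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    # Rank the differences by absolute magnitude (rank 1 = smallest |d|).
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    w_plus = sum(r + 1 for r, i in enumerate(order) if diffs[i] > 0)
    w_minus = sum(r + 1 for r, i in enumerate(order) if diffs[i] < 0)
    return min(w_plus, w_minus)

# Hypothetical per-function mean errors of two algorithms on five functions.
hloa = [0.01, 0.10, 0.03, 0.20, 0.05]
other = [0.02, 0.40, 0.09, 0.15, 0.25]
print(wilcoxon_signed_rank(hloa, other))  # → 2
```

Comparing min(W+, W−) against the critical value for the sample size, or converting it to a p-value, then decides significance at the 5% level.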

Table 2 Initial values for the controlling parameters of algorithms

IEEE CEC 2017 Testbench Functions: Experimental studies are performed on 28 benchmark problems in 10, 30, 50, and 100 dimensions from the IEEE CEC 2017 “Constrained Real-Parameter Optimization” suite, as outlined in Table 33.

IEEE CEC-06 2019 Testbench Functions: The “100-Digit Challenge” testbench functions from IEEE CEC-06 2019 are described in Table 3.

The complete parameter settings for each algorithm are shown in Table 2. All experiments were conducted on a standard desktop computer with the following specifications: Intel Core i7-10750H CPU, 2.60 GHz, 32 GB RAM, Linux Kubuntu 20.04 LTS operating system, and implemented in MATLAB R2021b.

Table 3 IEEE CEC-C06 2019 Benchmarks “The 100-Digit Challenge”

4 Results and discussion

This section presents the computational results of HLOA on benchmark optimization problems. The comparison results are shown in Table 4, displaying the best, mean, and standard deviation values. Moreover, Figs. 9, 10, and 11 summarize the convergence graphs of all functions versus all algorithms chosen for this investigation. To analyze the significant differences between the results of the proposed HLOA and the other algorithms, a non-parametric Wilcoxon signed-rank test with a significance level of \(5\%\) was conducted: the difference between two algorithms is statistically significant if the calculated p-value is less than 0.05. Table 5 summarizes the result of this test. Furthermore, the eleven algorithms were ranked by computing the Friedman test over the 63 benchmark functions. The Friedman test ranks the algorithms according to their average performance and generates a ranking score, where a lower value indicates better performance; the results are shown in Table 6. According to the statistical data presented in Table 4, the Horned Lizard Optimization Algorithm (HLOA) achieves exceptional outcomes. The 63 benchmark functions include unimodal and multimodal functions, which test the algorithm’s exploitation and exploration capabilities in the solution space. Statistical tests provide a more reliable way to compare the performance of algorithms on benchmark functions. The results of the Wilcoxon signed-rank test in Table 5 indicate that HLOA outperformed the following algorithms: Black Widow Optimization Algorithm (BWOA), Dingo Optimization Algorithm (DOA), Coot Bird Algorithm (COOT), Crystal Structure Algorithm (CSA), Enhanced Jaya Algorithm (EJAYA), Rat Swarm Optimizer (RSO), Smell Agent Optimization (SAO), and Tunicate Swarm Algorithm (TSA). Meanwhile, there are no significant differences between HLOA and the Jumping Spider Optimization Algorithm (JSOA) or the Wild Horse Optimizer (WHO).
Moreover, in the Friedman test analysis, the HLOA algorithm is ranked first, whereas JSOA, WHO, and BWOA are ranked second, third, and fourth, respectively. Based on this analysis, only these algorithms are used in the remainder of the paper for the IEEE CEC 2017 “Constrained Real-Parameter Optimization” suite, the IEEE CEC-06 2019 “100-Digit Challenge”, and the real-world applications.
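The Friedman ranking used here can be reproduced in a few lines of NumPy; the numbers below are hypothetical, and only the ranking mechanics follow the procedure described above:

```python
import numpy as np

def friedman_ranks(results):
    """Average Friedman rank of each algorithm (columns) over all problems (rows).

    Lower is better. The double-argsort trick assigns ranks 1..k within each
    row and assumes no ties (tied values would need average ranks).
    """
    ranks = np.argsort(np.argsort(results, axis=1), axis=1) + 1
    return ranks.mean(axis=0)

# Hypothetical mean objective values: 5 benchmark functions x 3 algorithms.
res = np.array([
    [0.1, 0.3, 0.2],
    [0.0, 0.1, 0.4],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.3],
    [0.3, 0.1, 0.2],
])
print(friedman_ranks(res))  # → [1.4 2.2 2.4]: the first algorithm ranks best
```

The average ranks are exactly the scores reported in the Friedman-test tables; the associated chi-square statistic then tests whether the rank differences are significant.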

Table 4 Comparison of optimization results obtained for 63 benchmark functions
Table 5 Statistical results of Wilcoxon signed-rank test for HLOA versus other algorithms for 63 functions, with a significance level of \(5\%\)
Table 6 Friedman test of all compared algorithms for 63 functions

On the other hand, the findings obtained by computational analysis of HLOA on benchmark functions from the IEEE CEC 2017 “Constrained Real-Parameter Optimization” problems for dimensions 10, 30, 50, and 100 are shown in Tables 7, 8, 9, and 10. These tables display the best, mean, and standard deviation values computed. To examine the disparities among the algorithms, a Wilcoxon signed-rank test was used with a significance level of \(5\%\). The ranking among the algorithms HLOA, BWOA, JSOA, and WHO was determined using the Friedman test.

Table 11 summarizes the Wilcoxon signed-rank test for 10-dimensional problems and indicates that HLOA outperformed all the algorithms. Nevertheless, the Friedman test in Table 12 ranks it in second place.

For 30-dimensional problems, the Wilcoxon signed-rank test in Table 13 shows that HLOA outperformed the JSOA and BWOA algorithms, while there are no significant differences with WHO. The Friedman test in Table 14 ranks HLOA in second place.

According to the results presented in Table 15, for 50-dimensional problems the HLOA algorithm demonstrated superior performance compared to the JSOA and BWOA algorithms, as indicated by the Wilcoxon signed-rank test, while there are no substantial disparities with WHO. The Friedman test assigns the highest rank to HLOA (see Table 16).

In the context of 100-dimensional problems, the Wilcoxon signed-rank test in Table 17 shows that HLOA outperformed all algorithms. Meanwhile, the Friedman test determines that HLOA holds the highest rank, as seen in Table 18.

Table 7 IEEE CEC 2017 Benchmarks “Constrained Real-Parameter Optimization” results for Dimension 10, from the best ranked algorithms
Table 8 IEEE CEC 2017 Benchmarks “Constrained Real-Parameter Optimization” results for Dimension 30, from the best ranked algorithms
Table 9 IEEE CEC 2017 Benchmarks “Constrained Real-Parameter Optimization” results for Dimension 50, from the best ranked algorithms
Table 10 IEEE CEC 2017 Benchmarks “Constrained Real-Parameter Optimization” results for Dimension 100, from the best ranked algorithms
Table 11 Statistical results of Wilcoxon signed-rank test for IEEE CEC 2017 benchmark functions at Dimension 10, with a significance level of \(5\%\)
Table 12 Friedman test for IEEE CEC 2017 benchmark functions at Dimension 10
Table 13 Statistical results of Wilcoxon signed-rank test for IEEE CEC 2017 benchmark functions at Dimension 30, with a significance level of \(5\%\)
Table 14 Friedman test for IEEE CEC 2017 benchmark functions at Dimension 30
Table 15 Statistical results of Wilcoxon signed-rank test for IEEE CEC 2017 benchmark functions at Dimension 50, with a significance level of \(5\%\)
Table 16 Friedman test for IEEE CEC 2017 benchmark functions at Dimension 50
Table 17 Statistical results of Wilcoxon signed-rank test for IEEE CEC 2017 benchmark functions at Dimension 100, with a significance level of \(5\%\)
Table 18 Friedman test for IEEE CEC 2017 benchmark functions at Dimension 100

Additionally, the computational results of HLOA on benchmark functions from the CEC-06 2019 “The 100-Digit Challenge” problems are shown in Table 19, displaying the best, mean, and standard deviation values. Furthermore, Fig. 12 summarizes the convergence graphs of all functions versus the best-ranked algorithms. To analyze the significant differences between the results, a Wilcoxon signed-rank test with a significance level of \(5\%\) was carried out. Table 20 summarizes the result of this test and indicates that HLOA outperformed the Black Widow Optimization Algorithm (BWOA). Meanwhile, there are no significant differences between HLOA and the Jumping Spider Optimization Algorithm (JSOA) or the Wild Horse Optimizer (WHO).

Table 19 IEEE CEC-C06 2019 Benchmarks “The 100-Digit Challenge” results
Fig. 9
figure 9

Convergence curves of the best-ranked algorithms by the Friedman test, for functions F1 to F20 shown in Appendix A

Fig. 10
figure 10

Convergence curves of the best-ranked algorithms by the Friedman test, for functions F21 to F42 shown in Appendix A

Fig. 11
figure 11

Convergence curves of the best-ranked algorithms by the Friedman test, for functions F43 to F63 shown in Appendix A

Table 20 Statistical results of Wilcoxon signed-rank test for CEC 2019, with a significance level of \(5\%\)
Fig. 12
figure 12

Convergence Curves of CEC 2019 functions

5 Real-world applications

In this section, the capabilities of HLOA were tested by solving five optimization problems: three Real-World Single Objective Bound Constrained Numerical Optimization problems taken from the CEC 2020 special session (Kumar et al. 2020), the Multiple Gravity Assist (MGA) problem provided by the European Space Agency (ESA) [89], and the Optimal Power Flow problem. For all problems, HLOA was compared against the three best-ranked algorithms from the Friedman test (see Table 6).

5.1 Constraint handling

The Penalization of Constraints method was used for constraint handling. The mathematical formulation of this method is described in Eq. 20 and taken from Peraza-Vázquez et al. (2021).

$$\begin{aligned} F(\mathop x\limits ^ \rightarrow ) = \left\{ {\begin{array}{*{20}{c}} {f(\mathop x\limits ^ \rightarrow ),}&{}{if}&{}{MCV (\mathop x\limits ^ \rightarrow ) \le 0}\\ {{f_{\max }} + MCV (\mathop x\limits ^ \rightarrow ),}&{}{}&{}{otherwise.} \end{array}} \right. \end{aligned}$$
(20)

Where \(f(\mathop x\limits ^ \rightarrow )\) is the fitness function value of a feasible solution (a solution that does not violate constraints), whereas \(f_{max}\) is the fitness function value of the worst solution in the population, and \({MCV (\mathop x\limits ^ \rightarrow )}\) is the Mean Constraint Violation (Peraza-Vázquez et al. 2021) represented in Eq. 21.

$$\begin{aligned} MCV (\mathop x\limits ^ \rightarrow ) = \frac{{\sum \limits _{i = 1}^p {{G_i}(\mathop x\limits ^ \rightarrow } ) + \sum \limits _{j = 1}^m {{H_j}(\mathop x\limits ^ \rightarrow } )}}{{p + m}} \end{aligned}$$
(21)

Here, \(MCV(\mathop x\limits ^ \rightarrow )\) is the mean sum of the inequality (\({G_i}(\mathop x\limits ^ \rightarrow )\)) and equality (\(H_{j}(\mathop x\limits ^ \rightarrow )\)) constraint violations, depicted by Eqs. 22 and 23, respectively. Notice that the inequality \({g_i}(\mathop x\limits ^ \rightarrow )\) and equality \(h_{j}(\mathop x\limits ^ \rightarrow )\) constraints only contribute a value, the penalty, when the constraint is violated.

$$\begin{aligned} {G_i}(\mathop x\limits ^ \rightarrow ) = \left\{ {\begin{array}{*{20}{c}} {0,}&{}{if}&{}{{g_i}(\mathop x\limits ^ \rightarrow ) \le 0}\\ {{g_i}(\mathop x\limits ^ \rightarrow ),}&{}{}&{}{otherwise.} \end{array}} \right. \end{aligned}$$
(22)
$$\begin{aligned} H_{j}(\mathop x\limits ^ \rightarrow ) ={\left\{ \begin{array}{ll} 0, &{} if \ \vert h_{j}(\mathop x\limits ^ \rightarrow ) \vert -\delta \le 0 \\ \vert h_{j}(\mathop x\limits ^ \rightarrow ) \vert , &{} otherwise \end{array}\right. } \end{aligned}$$
(23)
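As a concrete sketch, Eqs. 20–23 translate directly into code. The tolerance \(\delta\) value below is an assumption (the paper does not state it), and the toy problem is purely illustrative:

```python
def mean_constraint_violation(x, ineqs, eqs, delta=1e-4):
    """MCV of Eq. 21; `delta` is the equality tolerance of Eq. 23 (assumed value)."""
    G = [max(0.0, g(x)) for g in ineqs]                          # Eq. 22
    H = [abs(h(x)) if abs(h(x)) > delta else 0.0 for h in eqs]   # Eq. 23
    return (sum(G) + sum(H)) / (len(ineqs) + len(eqs))

def penalized_fitness(x, f, f_max, ineqs, eqs):
    """Eq. 20: feasible solutions keep f(x); infeasible ones pay f_max + MCV."""
    mcv = mean_constraint_violation(x, ineqs, eqs)
    return f(x) if mcv <= 0.0 else f_max + mcv

# Toy problem: minimize x^2 subject to x - 1 <= 0 and x - 0.5 = 0.
f = lambda x: x[0] ** 2
g = lambda x: x[0] - 1.0
h = lambda x: x[0] - 0.5
print(penalized_fitness([0.5], f, 10.0, [g], [h]))  # feasible → 0.25
print(penalized_fitness([2.0], f, 10.0, [g], [h]))  # infeasible → 11.25
```

Because infeasible solutions are offset by the worst fitness in the population, any feasible solution always ranks ahead of every infeasible one.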

5.2 Process flow sheeting problem

This non-convex constrained optimization problem has three decision variables and three inequality constraints (Kumar et al. 2020), as described in Eq. 24. The best-known feasible objective function value is \(f(\mathop x\limits ^ \rightarrow ) = 1.0765430833\).

$$\begin{aligned} \begin{array}{*{20}{c}} {Minimize}&{}{f(\mathop x\limits ^ \rightarrow ) = - 0.7{x_3} + 5{{(0.5 - {x_1})}^2} + 0.8}&{}{}\\ {Subject \; \mathrm{{ to}}}&{}{{g_1}(\mathop x\limits ^ \rightarrow ) = - {e^{({x_1} - 0.2)}} - {x_2} \le 0}&{}{}\\ {}&{}{{g_2}(\mathop x\limits ^ \rightarrow ) = {x_2} + 1.1{x_3} \le - 1.0}&{}{}\\ {}&{}{{g_3}(\mathop x\limits ^ \rightarrow ) = {x_1} - {x_3} \le 0.2}&{}{}\\ {with \; \mathrm{{ bounds:}}}&{}{0.2 \le {x_1} \le 1, - 2.22554 \le {x_2} \le 1,{x_3} \in \{ 0,1\} }&{}{} \end{array} \end{aligned}$$
(24)
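Eq. 24 can be checked numerically. The candidate below is derived by hand, not taken from the paper: fixing \(x_3 = 1\) makes \(g_2\) require \(x_2 \le -2.1\), and \(g_1\) then forces \(x_1 \ge 0.2 + \ln 2.1\); evaluating the objective there reproduces the best-known value:

```python
import math

def f(x):
    """Objective of Eq. 24."""
    return -0.7 * x[2] + 5.0 * (0.5 - x[0]) ** 2 + 0.8

def constraints(x):
    """Inequality constraints of Eq. 24, rewritten in g(x) <= 0 form."""
    g1 = -math.exp(x[0] - 0.2) - x[1]
    g2 = x[1] + 1.1 * x[2] + 1.0   # x2 + 1.1*x3 <= -1.0
    g3 = x[0] - x[2] - 0.2         # x1 - x3 <= 0.2
    return (g1, g2, g3)

# Hand-derived candidate: x1 = 0.2 + ln(2.1), x2 = -2.1, x3 = 1 (g1, g2 active).
x_best = (0.2 + math.log(2.1), -2.1, 1.0)
assert all(g <= 1e-9 for g in constraints(x_best))
print(f(x_best))  # ≈ 1.0765430833, the best-known feasible value
```

This confirms the best-known value quoted above is attained at a feasible boundary point of the relaxed problem.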
Table 21 Comparison Results of the Process Flow Sheeting problem
Fig. 13
figure 13

Convergence graph of the Process Flow Sheeting Problem

In Table 21, the comparison results show that HLOA, JSOA, BWOA, and WHO reported feasible and competitive solutions, as seen in the convergence graph in Fig. 13. For all algorithms, the difference from the best-known feasible objective function value is 1.17347E-05.

5.3 Process synthesis problem

This problem has seven decision variables and nine inequality constraints, with non-linearities in real and binary variables (Kumar et al. 2020). The mathematical representation is shown in Eq. 25. The best-known feasible objective function value is \(f(\mathop x\limits ^ \rightarrow ) = 2.9248305537\).

$$\begin{aligned} \begin{array}{*{20}{c}} {Minimize}&{}\begin{array}{l} f(\mathop x\limits ^ \rightarrow ) = {(1 - {x_1})^2} + {(2 - {x_2})^2} + {(3 - {x_3})^2} + {(1 - {x_4})^2} + \\ {(1 - {x_5})^2} + {(1 - {x_6})^2} - \ln (1 + {x_7}) \end{array}\\ {Subject \; \mathrm{{ to}}}&{}{{g_1}(\mathop x\limits ^ \rightarrow ) = {x_1} + {x_2} + {x_3} + x{}_4 + x{}_5 + {x_6} \le 5}\\ {}&{}{{g_2}(\mathop x\limits ^ \rightarrow ) = {x_1}^2 + {x_2}^2 + {x_3}^2 + {x_6}^2 \le 5.5}\\ {}&{}{{g_3}(\mathop x\limits ^ \rightarrow ) = {x_1} + {x_4} \le 1.2}\\ {}&{}{{g_4}(\mathop x\limits ^ \rightarrow ) = {x_2} + {x_5} \le 1.8}\\ {}&{}{{g_5}(\mathop x\limits ^ \rightarrow ) = {x_3} + {x_6} \le 2.5}\\ {}&{}{{g_6}(\mathop x\limits ^ \rightarrow ) = {x_1} + {x_7} \le 1.2}\\ {}&{}{{g_7}(\mathop x\limits ^ \rightarrow ) = x_2^2 + x_5^2 \le 1.64}\\ {}&{}{{g_8}(\mathop x\limits ^ \rightarrow ) = x_3^2 + x_6^2 \le 4.25}\\ {}&{}{{g_9}(\mathop x\limits ^ \rightarrow ) = x_3^2 + x_5^2 \le 4.64}\\ {with \; \mathrm{{ bounds}}}&{}{0 \le {x_1},{x_2},{x_3} \le 1,}\\ {}&{}{{x_4},{x_5},{x_6},{x_7} \in \{ 0,1\} } \end{array} \end{aligned}$$
(25)
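A quick feasibility check of Eq. 25 can be coded as follows. The trial point is hand-picked purely for illustration (it satisfies all nine constraints, with \(g_3\), \(g_4\), \(g_6\), and \(g_7\) active); it is not claimed to be the optimum:

```python
import math

def f(x):
    """Objective of Eq. 25."""
    return ((1 - x[0]) ** 2 + (2 - x[1]) ** 2 + (3 - x[2]) ** 2
            + (1 - x[3]) ** 2 + (1 - x[4]) ** 2 + (1 - x[5]) ** 2
            - math.log(1 + x[6]))

def feasible(x, tol=1e-9):
    """Check all nine inequality constraints of Eq. 25 (g(x) <= 0 form)."""
    g = [
        x[0] + x[1] + x[2] + x[3] + x[4] + x[5] - 5.0,
        x[0] ** 2 + x[1] ** 2 + x[2] ** 2 + x[5] ** 2 - 5.5,
        x[0] + x[3] - 1.2,
        x[1] + x[4] - 1.8,
        x[2] + x[5] - 2.5,
        x[0] + x[6] - 1.2,
        x[1] ** 2 + x[4] ** 2 - 1.64,
        x[2] ** 2 + x[5] ** 2 - 4.25,
        x[2] ** 2 + x[4] ** 2 - 4.64,
    ]
    return all(gi <= tol for gi in g)

x_trial = (0.2, 0.8, 1.0, 1, 1, 0, 1)  # binaries x4..x7 fixed to {1, 1, 0, 1}
print(feasible(x_trial), f(x_trial))
```

The same evaluator, combined with the penalty scheme of Sect. 5.1, turns Eq. 25 into an unconstrained fitness function that any of the compared metaheuristics can minimize.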
Table 22 Comparison results of the Process Synthesis problem
Fig. 14
figure 14

Convergence graph of the Process Synthesis Problem

In Table 22, the comparison results show that all algorithms reported feasible and competitive solutions, as seen in the convergence graph in Fig. 14. Note that HLOA and BWOA have the most competitive values, whilst the HLOA difference from the best-known feasible objective function value is 1.11E-04.

5.4 Optimal design of an industrial refrigeration system

This problem, stated in Eq. 26, has fourteen decision variables and fifteen inequality constraints and is formulated as a non-linear inequality-constrained optimization problem (Kumar et al. 2020). The best-known feasible objective function value is \(f(\mathop x\limits ^ \rightarrow ) =\) 3.22130008E-02.

$$\begin{aligned} \begin{array}{*{20}{c}} {Minimize}&{}\begin{array}{l} f(\mathop x\limits ^ \rightarrow ) = 63098.88{x_2}{x_4}{x_{12}} + 5441.5x_2^2{x_{12}} + 115055.5x_2^{1.664}{x_6} + 6172.27x_2^2{x_6}\\ + 63098.88{x_1}{x_3}{x_{11}} + 5441.5x_1^2{x_{11}} + 115055.5x_1^{1.664}{x_5} + 6172.27x_1^2{x_5} + \\ 140.53{x_1}{x_{11}}+281.29{x_3}{x_{11}} + 70.26x_1^2 + 281.29x_3^2 + \\ 14437x_8^{1.8812}x_{12}^{0.3424}x{}_{10}x_{14}^{ - 1}x_1^2{x_7}x_9^{ - 1}+\\ 20470.2x_7^{2.893}x_{11}^{0.316}x_1^2 \end{array}\\ {Subject \; \mathrm{{ to}}}&{}{{g_1}(\mathop x\limits ^ \rightarrow ) = 1.524x_7^{ - 1} \le 1}\\ {}&{}{{g_2}(\mathop x\limits ^ \rightarrow ) = 1.524x_8^{ - 1} \le 1}\\ {}&{}{{g_3}(\mathop x\limits ^ \rightarrow ) = 0.07789{x_1} - 2x_7^{ - 1}{x_9} \le 1}\\ {}&{}{{g_4}(\mathop x\limits ^ \rightarrow ) = 7.05305x_9^{ - 1}x_1^2{x_{10}}x_8^{ - 1}x_2^{ - 1}x_{14}^{ - 1} \le 1}\\ {}&{}{{g_5}(\mathop x\limits ^ \rightarrow ) = 0.0833x_{13}^{ - 1}{x_{14}} \le 1}\\ {}&{}{{g_6}(\mathop x\limits ^ \rightarrow ) = 47.136x_2^{0.333}x_{10}^{ - 1}{x_{12}} - 1.333{x_8}x_{13}^{2.1195} + 62.08x_{13}^{2.1195}x_{12}^{ - 1}x_8^{0.2}x_{10}^{ - 1} \le 1}\\ {}&{}{{g_7}(\mathop x\limits ^ \rightarrow ) = 0.04771{x_{10}}x_8^{1.8812}x_{12}^{0.3424} \le 1}\\ {}&{}{{g_8}(\mathop x\limits ^ \rightarrow ) = 0.0488{x_9}x_7^{1.893}x_{11}^{0.316} \le 1}\\ {}&{}{{g_9}(\mathop x\limits ^ \rightarrow ) = 0.0099{x_1}x_3^{ - 1} \le 1}\\ {}&{}{{g_{10}}(\mathop x\limits ^ \rightarrow ) = 0.0193{x_2}x_4^{ - 1} \le 1}\\ {}&{}{{g_{11}}(\mathop x\limits ^ \rightarrow ) = 0.0298{x_1}x_5^{ - 1} \le 1}\\ {}&{}{{g_{12}}(\mathop x\limits ^ \rightarrow ) = 0.056{x_2}x_6^{ - 1} \le 1}\\ {}&{}{{g_{13}}(\mathop x\limits ^ \rightarrow ) = 2x_9^{ - 1} \le 1}\\ {}&{}{{g_{14}}(\mathop x\limits ^ \rightarrow ) = 2x_{10}^{ - 1} \le 1}\\ {}&{}{{g_{15}}(\mathop x\limits ^ \rightarrow ) = {x_{12}}x_{11}^{ - 1} \le 1}\\ {with \; \mathrm{{ bounds}}}&{}{0.001 \le {x_i} \le 5,\mathrm{{ }}i = 1,2,..,14} \end{array} \end{aligned}$$
(26)
Table 23 Comparison results of the Optimal Design of an Industrial Refrigeration System
Fig. 15
figure 15

Convergence graph of the Optimal Design of an Industrial Refrigeration System. The dotted lines represent an infeasible solution shown by an algorithm

In Table 23, the comparison results show that the HLOA and JSOA algorithms reported feasible solutions, whereas the BWOA and WHO results are infeasible, as seen in the convergence graph in Fig. 15. Note that the HLOA algorithm obtained the first-best solution, improving on the best-known feasible objective function value by 9.93E-04.

5.5 Multiple gravity assist (MGA) optimization problem: cassini spacecraft trajectory design

The Multiple Gravity Assist (MGA) problem is a straightforward benchmark for evaluating global optimization techniques in Space Mission Design-related challenges. The mathematical representation is a finite-dimension global optimization problem with nonlinear constraints. For an interplanetary probe powered by a chemical propulsion engine to travel from the Earth to another planet or asteroid, the best potential trajectory must be found. The MGA mathematical approach can be found in Zuo et al. (2016) and Wagner and Wie (2015). The European Space Agency (ESA) poses an MGA instance with the Cassini spacecraft trajectory design problem [89]. The objective of this mission is to reach Saturn and get captured by its gravity into an orbit with pericenter radius \(r_p\) set to 108950 km and eccentricity fixed to 0.98. The lower and upper variable bounds are shown in Table 24, and the constraints on the various fly-by pericenters are shown in Table 25. The planetary fly-by sequence and more details can be found in [89]. The best-known feasible objective function value is \(f(\mathop x\limits ^ \rightarrow ) = 4.9307\).

Table 24 Lower and Upper Bounds of variables
Table 25 Constraints on the various fly-by pericenters
Table 26 Comparison results for the Cassini Spacecraft Trajectory Design
Fig. 16
figure 16

MGA Optimization: Cassini Spacecraft Trajectory Design Problem

The comparison results in Table 26 show that HLOA, JSOA, BWOA, and WHO reported feasible solutions. HLOA and WHO showed competitive results, whereas BWOA stood out, as seen in the convergence graph in Fig. 16. The difference between the HLOA result and the best-known feasible objective function value is 4.26E-01.

5.6 Optimal power flow

The optimal power flow (OPF) is a non-linear optimization problem that combines an objective function with the power flow problem to calculate the operating conditions of a power system network subject to practical and physical constraints (Nucci et al. 2021; Huneault and Galiana 1991). The OPF mathematical formulation can be described as follows:

$$\begin{aligned} \begin{aligned} \text {Minimize}\ J(x,u), \\ \text {Subjected to}\ g(x,u)=0, \\ \text {and}\ h(x,u) \le 0 \end{aligned} \end{aligned}$$
(27)

Where \(J(x,u)\) is the objective function, \(g(x,u)\) is the set of equality constraints, and \(h(x,u)\) are the inequality constraints. The set of control variables u, defined in Eq. 28, comprises \(P_{G}\), the active power generation at the PV buses; \(V_{G}\), the generator voltage magnitudes; \(Q_{C}\), the shunt Volt-Ampere Reactive (VAR) compensator outputs; and \(T\), the transformer tap settings. Subindices NG, NC, and NT are the numbers of generators, shunt VAR compensators, and regulating transformers, respectively.

$$\begin{aligned} \begin{aligned} u^{T}=\left[ P_{G_{2}}\cdots P_{G_{NG}},V_{G_{1}}\cdots V_{G_{NG}},Q_{C_{1}}\cdots Q_{C_{NC}}, T_{1}\cdots T_{NT} \right] \end{aligned} \end{aligned}$$
(28)

The set of state variables \(x^{T}\), stated in Eq. 29, comprises:

  • \(P_{G1}\): Active power generation at slack bus

  • \(V_{L}\): Voltage at PQ buses

  • \(Q_{G}\): Generators reactive power output

  • \(S_{l}\): Line flow, transmission line loadings

$$\begin{aligned} \begin{aligned} x^{T}=\left[ P_{G_{1}},V_{L_{1}}\cdots V_{L_{NL}},Q_{G_{1}}\cdots Q_{G_{NG}}, S_{l_{1}}\cdots S_{l_{nl}} \right] \end{aligned} \end{aligned}$$
(29)

Where subindices NL and nl are the numbers of load buses and transmission lines, respectively.

The real and reactive power equality constraints taken from the power flow equations are defined in Eq. 30 and 31.

$$\begin{aligned}&P_{Gi}-P_{Di}-V_{i}\sum _{j=1}^{NB}V_{j}\left[ G_{ij}\cos (\theta _{ij})+B_{ij}\sin (\theta _{ij}) \right] =0 \end{aligned}$$
(30)
$$\begin{aligned}&Q_{Gi}-Q_{Di}-V_{i}\sum _{j=1}^{NB}V_{j}\left[ G_{ij}\sin (\theta _{ij})-B_{ij}\cos (\theta _{ij}) \right] =0 \end{aligned}$$
(31)
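For concreteness, the mismatch residuals of Eqs. 30 and 31 can be evaluated as below, using the standard sign convention (\(-B_{ij}\cos\theta_{ij}\) in the reactive term, with \(\theta_{ij} = \theta_i - \theta_j\)). The single-bus data are made up for illustration:

```python
import numpy as np

def power_mismatch(V, theta, Y, Pg, Pd, Qg, Qd):
    """Residuals of Eqs. 30-31 at every bus (zero at a solved power flow)."""
    G, B = Y.real, Y.imag
    dP, dQ = np.zeros_like(V), np.zeros_like(V)
    for i in range(len(V)):
        th = theta[i] - theta  # theta_ij for all buses j
        dP[i] = Pg[i] - Pd[i] - V[i] * np.sum(V * (G[i] * np.cos(th) + B[i] * np.sin(th)))
        dQ[i] = Qg[i] - Qd[i] - V[i] * np.sum(V * (G[i] * np.sin(th) - B[i] * np.cos(th)))
    return dP, dQ

# Toy single-bus system with a shunt admittance y = 0.1 + j0.5 (made-up numbers):
Y = np.array([[0.1 + 0.5j]])
dP, dQ = power_mismatch(np.array([1.0]), np.array([0.0]), Y,
                        Pg=np.array([0.1]), Pd=np.array([0.0]),
                        Qg=np.array([-0.5]), Qd=np.array([0.0]))
print(dP, dQ)  # both ≈ [0.]: the equality constraints are satisfied
```

In an OPF solver, these residuals are driven to zero by a power-flow routine for every candidate control vector u, so only the inequality constraints need explicit penalty handling.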

Where \(P_{G}\) and \(Q_{G}\) are the active and reactive power generation, whereas \(P_{D}\) and \(Q_{D}\) are the active and reactive load demand, NB is the number of buses, and \(G_{ij}\) and \(B_{ij}\) are the conductance and susceptance between buses i and j in the admittance matrix \(Y_{ij} = G_{ij}+jB_{ij}\). The inequality constraints of the OPF formulation, summarized in Table 27, are defined in Eqs. 32 to 35: Eq. 32 represents the generator constraints, Eq. 33 the transformer constraints, Eq. 34 the shunt VAR compensator constraints, and Eq. 35 the security constraints.

$$\begin{aligned}&V^{min}_{G_{i}}\le V_{G_{i}}\le V^{max}_{G_{i}}, i=1,\cdots ,NG \nonumber \\&P^{min}_{G_{i}}\le P_{G_{i}}\le P^{max}_{G_{i}}, i=1,\cdots ,NG \nonumber \\&Q^{min}_{G_{i}}\le Q_{G_{i}}\le Q^{max}_{G_{i}}, i=1,\cdots ,NG \end{aligned}$$
(32)
$$\begin{aligned}&T^{min}_{i}\le T_{i}\le T^{max}_{i}, i=1,\cdots ,NT \end{aligned}$$
(33)
$$\begin{aligned}&Q^{min}_{C_{i}}\le Q_{C_{i}}\le Q^{max}_{C_{i}}, i=1,\cdots ,NC \end{aligned}$$
(34)
$$\begin{aligned}&V^{min}_{L_{i}}\le V_{L_{i}}\le V^{max}_{L_{i}}, i=1,\cdots ,NL \nonumber \\&S_{l_{i}}\le S^{max}_{l_{i}}, i=1,\cdots ,nl \end{aligned}$$
(35)
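The box-type limits of Eqs. 32–35 reduce to a single clamp-style check per quantity; a small helper (illustrative only, with assumed per-unit voltage bounds) is:

```python
def limit_violation(value, lo=None, hi=None):
    """Amount by which value leaves [lo, hi]; 0.0 when within limits.

    One-sided limits (e.g. S_l <= S_l^max in Eq. 35) pass lo=None.
    """
    v = 0.0
    if lo is not None and value < lo:
        v += lo - value
    if hi is not None and value > hi:
        v += value - hi
    return v

# Example: generator voltage limit of Eq. 32 with assumed bounds [0.95, 1.1] p.u.
print(limit_violation(1.12, 0.95, 1.1))  # 0.02 above the upper limit
```

Summing these violation amounts over all limited quantities yields the constraint-violation term that feeds the penalty scheme of Eq. 21.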

The Black Widow Optimization (BWOA), Jumping Spider Optimization (JSOA), and Wild Horse Optimizer (WHO) algorithms, previously ranked by the Friedman test (see Table 6), are contrasted against the Horned Lizard Optimization Algorithm (HLOA) to solve the Optimal Power Flow problem for the IEEE 30-bus test system. The test system, depicted in Fig. 17, consists of six generators placed at nodes 1, 2, 5, 8, 11, and 13, highlighted in red; four transformers located in lines 11, 12, 15, and 36, highlighted in blue; and nine shunt VAR compensators at nodes 10, 12, 15, 17, 20, 21, 23, 24, and 29, highlighted in yellow. Line and node numbering is depicted in green and black, respectively. Three case studies are conducted: minimization of the generation fuel cost, and minimization of the active and the reactive power transmission losses. In the first case, the objective function represents the total fuel cost of the six generating units and is defined as follows:

$$\begin{aligned} \begin{aligned} J = \sum _{i=1}^{NG}f_{i}(\$/h) \end{aligned} \end{aligned}$$
(36)

Where \(f_{i}\) is a quadratic function as described in Ela et al. (2010).

The objective function for the minimization of active power transmission losses is defined in Eq. 37, whilst the reactive power minimization function is stated in Eq. 38 (Fig. 17).

$$\begin{aligned} J&= \sum _{i=1}^{NB}P_{i} = \sum _{i=1}^{NB}P_{Gi} - \sum _{i=1}^{NB}P_{Di} \end{aligned}$$
(37)
$$\begin{aligned} J&= \sum _{i=1}^{NB}Q_{i} = \sum _{i=1}^{NB}Q_{Gi} - \sum _{i=1}^{NB}Q_{Di} \end{aligned}$$
(38)
Table 27 Generator, transformer and shunt var compensator inequality constraints
Fig. 17
figure 17

IEEE 30-bus system (Nusair and Alasali 2020)

Fig. 18
figure 18

First study case, Minimization of fuel cost

Fig. 19
figure 19

Second study case, minimization of active power transmission losses

Fig. 20
figure 20

Third study case, minimization of reactive power transmission losses

From the convergence graph analysis, it is confirmed that HLOA outperforms the BWOA, WHO, and JSOA algorithms for the first and third case studies (see Figs. 18 and 20, respectively). For the second case study, minimization of active power transmission losses, the HLOA algorithm achieves very competitive results, as illustrated in Fig. 19. All tests were run with a population size of 30 and 500 iterations. Table 28 summarizes each algorithm’s minimum obtained value for each case study.

Table 28 Obtained results

6 Conclusion

This paper presents a novel metaheuristic optimization algorithm inspired by the horned lizard’s defense behavior, named the Horned Lizard Optimization Algorithm (HLOA). The modeled defense behavior includes four strategies: crypsis (the ability of an organism to conceal itself by having a color, pattern, and shape that allow it to blend into the surrounding environment), skin darkening or lightening, bloodstream shooting, and move-to-escape. Furthermore, the rapid skin color change, influenced by the \(\alpha\)-melanophore-stimulating hormone (\(\alpha\)-MSH) rate, is also modeled. These models progressively refine the search vectors by evolving (recombining) them at each iteration, providing a suitable balance between exploration and exploitation in the solution search space. The algorithm’s performance was evaluated on sixty-three well-known testbench functions; twenty-eight functions in 10, 30, 50, and 100 dimensions from IEEE CEC-2017 “Constrained Real-Parameter Optimization”; and ten functions from IEEE CEC06-2019 “100-Digit Challenge”. Furthermore, five real-world optimization problems were solved: the Multiple Gravity Assist Optimization problem, the Optimal Power Flow problem, and three problems taken from CEC 2020. Moreover, the HLOA performance was compared to ten of the most recent algorithms published in the scientific literature. The statistical results show that the HLOA algorithm outperforms BWOA, DOA, COOT, CSA, EJAYA, RSO, SAO, and TSA, while having competitive results with the JSOA and WHO algorithms.

The following are the findings and conclusions of this study:

  • The algorithm has no parameters to configure in modeling the horned lizard defense tactics. Its only parameters are those common to all bio-inspired algorithms, namely the population size and the number of iterations.

  • The Wilcoxon signed-rank test demonstrates that, in many situations, HLOA is significantly superior to alternative bio-inspired algorithms. HLOA also ranks highest among the compared algorithms in the Friedman test.

  • For 10-dimensional problems, the Wilcoxon signed-rank test shows that HLOA outperformed all the algorithms; nevertheless, it is ranked in second place by the Friedman test. For 30-dimensional problems, the Wilcoxon signed-rank test demonstrates that HLOA outperforms the JSOA and BWOA algorithms, with no significant differences with WHO; HLOA again ranks second in the Friedman test. For 50-dimensional problems, the HLOA algorithm demonstrated superior performance compared to the JSOA and BWOA algorithms, as indicated by the Wilcoxon signed-rank test, with no substantial disparities with WHO, and the Friedman test assigns the highest rank to HLOA. Finally, for 100-dimensional problems, the Wilcoxon signed-rank test shows that HLOA outperformed all algorithms, and the Friedman test determines that HLOA holds the highest rank. This n-dimensional analysis shows that the HLOA algorithm performs better for dimensions 50 and 100.

  • HLOA shows competitive results on the Process Flow Sheeting problem, the Process Synthesis problem, and the Multiple Gravity Assist Optimization problem, while showing the best performance on the Optimal Design of an Industrial Refrigeration System.

  • In the optimal power flow problem, HLOA minimizes the fuel cost and the reactive power transmission losses better than the BWOA, WHO, and JSOA algorithms.

  • HLOA can solve real-world problems with unknown search spaces with outstanding results.

On the other hand, an improved version of the HLOA algorithm for multi-objective and many-objective optimization is currently under development as future work. Moreover, efforts are focused on the hyper-parameter optimization of convolutional neural networks for medical applications.