1 Introduction

Finding optimal solutions attracts substantial attention from vast research communities dealing with real-world optimization problems. Optimization is an iterative improvement process normally concerned with finding the best configuration for an optimization problem whose search space may be multimodal, non-convex, non-differentiable, or constrained (Chong and Zak 2013). Indeed, search spaces vary according to their variable domains, such as binary, discrete, continuous, permutation, and structured, and each type requires specialized operations. Real-world optimization problems are widely studied in different research fields such as engineering (Pereira et al. 2022; Mei and Wang 2021), scheduling (Abdalkareem et al. 2021; Arunarani et al. 2019), computer vision (Nakane et al. 2020), feature selection (Braik et al. 2023c, 2024), image processing (Braik 2022, 2023), modeling of industrial systems (Braik et al. 2023d, b), games (Cai et al. 2016), and many others.

Nowadays, the popularity of metaheuristic (MH) optimization algorithms has increased exponentially due to their tangible impact on tackling optimization problems. An MH algorithm is a general optimization framework initiated with a set of random solutions. At each iteration, the solutions are improved using intelligent operators controlled by carefully selected parameters to explore different search space regions and exploit the accumulated knowledge to find the optimal solution (Fausto et al. 2020; Braik et al. 2023a). MH algorithms share common features: they are derivative-free, have few parameters, are simple and adaptable, are sound and complete, evolve a population of solutions, and can avoid local optima (Blum and Roli 2003). Their evolution feature is mainly inspired by the natural behavior of humans, animals, or any optimization phenomenon (Sörensen 2015; Fausto et al. 2020). Therefore, they are known as nature-inspired MH algorithms and are categorized into swarm-based, evolutionary-based, physics- or chemistry-based, and social or human-based algorithms, as further discussed in Sect. 2.

In general, MH algorithms share a set of common phases and parameters (Alorf 2023; Rajwar et al. 2023). Their success in finding the optimal solution is mainly related to their ability to strike a suitable balance between wide-area exploration and narrow local-area exploitation during the iterative loop. Exploration refers to the ability of an MH algorithm to navigate several search space areas at the same time, while exploitation refers to its ability to navigate each area deeply, using the accumulated knowledge, and find its local optimum (Alorf 2023). Based on these two principles, the distinction between MH algorithms lies in how they manage the balance between exploration and exploitation during the search. However, the performance of each MH algorithm fluctuates, and no algorithm behaves steadily across the search spaces of different optimization problems. This concurs with the no free lunch (NFL) theorem for optimization (Wolpert and Macready 1997): no single MH algorithm can outperform all others on every optimization problem. Therefore, the optimization research communities are still investigating every nature-inspired optimization phenomenon to find suitable MH algorithms for optimization problems. In general, the nature-inspired MH algorithms stemming from swarms of animals such as bats (Yang 2010b), wolves (Mirjalili et al. 2014), sharks (Braik et al. 2022a), rabbits (Wang et al. 2022), crows (Askarzadeh 2016), bees (Awadallah et al. 2020), ants (Dorigo et al. 2006), horses (MiarNaeimi et al. 2021), foxes (Połap and Woźniak 2021), cats (Seyyedabbasi and Kiani 2023), egrets (Chen et al. 2022), tunicates (Kaur et al. 2020), and salps (Mirjalili et al. 2017) have proved viable for tackling a wide range of optimization problems. They mainly emulate the animals' optimization phenomena when they mate, search for food, attack prey or hunt, defend themselves, etc.
Specifically, animals living as a herd are normally structured into leaders and followers, where the leaders drive the followers toward the optimized situation. Although a large number of MH algorithms are inspired by swarms of animals, there are still opportunities to investigate other animal optimization behaviors, such as the breeding cycle of elk herds. In this paper, a new MH algorithm inspired by the breeding cycle of elk herds is proposed. The new swarm-based, nature-inspired MH algorithm is called the Elk Herd Optimizer (EHO). An elk herd is normally divided into a small group of males (or bulls) and a large group of females (cows or harems). Two breeding seasons are defined for elk herds: rutting and calving. In the rutting season, the elk herd is divided into sub-herds (families) of various sizes. This division is based on fighting domination challenges between bulls, where the strongest bull has the chance to gather more harems in its sub-herd. In the calving season, each sub-herd breeds new calves from the bull and harems. Finally, in a selection season, the family members are assembled again, and the rutting season starts over. This inspiration is mapped into the optimization context, where the optimization loop consists of three operators: rutting season, calving season, and selection season. During the selection season, all families (sub-populations), including bulls (i.e., leader solutions of the sub-populations), harems (i.e., follower solutions of the sub-populations), and calves (i.e., new solutions of the sub-populations), are merged, and the fittest elk herd (population) is selected to be used in the following rutting and calving seasons. The performance of EHO is judged using a test suite of 29 benchmark optimization problems utilized in the CEC-2017 special sessions on real-parameter optimization (Awad et al. 2016; Doush 2012).
Furthermore, four traditional real-world engineering design optimization problems are used to further assess the performance of EHO on real-world optimization problems. Initially, the parameters of EHO were studied to show their influence on its convergence behavior. A comparative analysis against ten well-established MH algorithms reveals the significant success of the optimization behavior of EHO. For further evaluation, statistical evidence using Friedman's test followed by Holm's post-hoc test (Pereira et al. 2015; Awadallah et al. 2022) shows the top rank of EHO in comparison to the other methods.

The remainder of this paper is organized as follows: the other MH algorithms proposed in the literature are categorized in Sect. 2. The inspiration and procedural steps of the proposed EHO are thoroughly discussed in Sect. 3. The experimental results and a discussion of the EHO performance are given in Sect. 4. Finally, the paper concludes with a summary and some possible future work in Sect. 5.

2 Related works

Meta-heuristic (MH) optimization algorithms rely on two phases in the optimization process: exploration and exploitation (Zitar et al. 2021; Makhadmeh et al. 2022; Alyasseri et al. 2022). Exploration is the algorithm's ability to scan the whole search space, thus escaping from being stuck in local optima. In contrast, exploitation is the algorithm's ability to dig more deeply into promising search regions to improve solution quality. The performance of MH algorithms is enhanced when they strike a balance between exploration and exploitation. There are four main types of metaheuristic algorithms (Molina et al. 2020; Zhong et al. 2022). In this section, the categories of MH algorithms and their popular and recent versions are introduced.

2.1 Swarm intelligence (SI) algorithms

The first class of algorithms is SI algorithms, which mimic the social behavior of animals in groups (i.e., flocks or herds). Algorithms in this class share collective information from the environment among all individuals to achieve the goal of the swarm (e.g., finding food or hunting prey). Kennedy and Eberhart proposed Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995), one of the most widely used SI algorithms. PSO imitates the natural behavior of swarm particles, which share and update their local positions to reach the global best position. Each particle represents a candidate solution and has a position and a velocity. The particles follow the best solutions along their paths.
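The PSO update just described can be sketched as follows. This is a minimal illustration of the canonical velocity/position rule; the names (inertia `w`, acceleration coefficients `c1`, `c2`) follow common convention and are not taken from the cited work:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update for a swarm of particles.

    x, v, pbest: arrays of shape (num_particles, n); gbest: shape (n,).
    """
    rng = rng or np.random.default_rng(0)
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    # inertia + cognitive pull toward personal best + social pull toward global best
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

Each particle is pulled stochastically toward its own best-known position and the swarm's best-known position, which is exactly the information-sharing mechanism described above.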

A large number of SI algorithms are proposed to solve optimization problems. The following are the most popular and recent ones: Bat Algorithm (Yang and Gandomi 2012), Flower Pollination Algorithm (Yang 2012), Krill Herd (Gandomi and Alavi 2012), Butterfly Optimization Algorithm (Arora and Singh 2019), Harris Hawks Optimization (Heidari et al. 2019), Seagull Optimization Algorithm (Dhiman and Kumar 2019), Sea Lion Optimization Algorithm (Masadeh et al. 2019), Black Widow Optimization Algorithm (Hayyolalam and Kazem 2020), Chimp Optimization Algorithm (Khishe and Mosavi 2020), Marine Predator Algorithm (Faramarzi et al. 2020a), Slime Mould Algorithm (Li et al. 2020), Tunicate Swarm Algorithm (Kaur et al. 2020), Chameleon Swarm Algorithm (Braik 2021), Red Fox Optimization (Połap and Woźniak 2021), and Prairie dog optimization algorithm (Ezugwu et al. 2022).

2.2 Evolutionary algorithms (EA)

The second class of algorithms is EA, which simulate the survival-of-the-fittest concept adapted from biological evolution in nature. The most popular EA is the Genetic Algorithm (GA), developed by Holland (Holland 1992) and inspired by the Darwinian theory of evolution. GA produces better solutions (offspring) by mating the fittest parents using the crossover concept, which in nature helps maintain diversity in ecosystems. Additionally, the mutation concept is used to introduce into the offspring new characteristics not inherited from the parents. Storn and Price proposed another EA that has been widely utilized by researchers, Differential Evolution (DE) (Storn and Price 1997). Other EA algorithms have been proposed and provide good performance when solving optimization problems, such as Barnacles Mating Optimizer (Sulaiman et al. 2020), Genetic Programming (GP) (Koza and Koza 1992), Evolution Strategies (ES) (Beyer and Schwefel 2002), Probability-Based Incremental Learning (PBIL) (Baluja 1994), and Biogeography-Based Optimization (Simon 2008).
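As an illustration of the crossover and mutation concepts, a minimal sketch for binary chromosomes is given below; the function names and parameter choices are illustrative, not tied to any cited GA variant:

```python
import random

def one_point_crossover(p1, p2, rng=random.Random(0)):
    """Mate two parent chromosomes by swapping tails at a random cut point."""
    cut = rng.randrange(1, len(p1))
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

def mutate(chrom, rate=0.1, rng=random.Random(0)):
    """Flip each gene independently with probability `rate`,
    introducing characteristics not present in the parents."""
    return [1 - g if rng.random() < rate else g for g in chrom]
```

Crossover recombines existing parental material, while mutation is the only operator here that can create gene values absent from both parents.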

2.3 Physics or chemistry-based algorithms

The third class of algorithms comprises physics- or chemistry-based algorithms, which imitate a physical or chemical phenomenon. The interaction of the search agents is modeled using the rules of a physical process or a chemical interaction. Van Laarhoven and Aarts (1987) described one of the popular algorithms in this class, which borrows the thermodynamic law whereby a material is heated and then slowly cooled down to enlarge its crystals. The Gravitational Search Algorithm (Rashedi et al. 2009) is another well-known algorithm; it models Newton's gravitational laws by treating the searcher agents as a collection of masses that interact with each other using Newton's law of gravity and the laws of motion to find an optimal point. Various physics- or chemistry-based algorithms have been proposed, such as Charged System Search (Kaveh and Talatahari 2010), Chemical Reaction Optimization (Lam and Li 2012), Ray Optimization (Kaveh and Khayatazad 2012), Henry Gas Solubility Optimization (Hashim et al. 2019), Billiards-Inspired Optimization (Kaveh et al. 2020b), Equilibrium Optimizer (Faramarzi et al. 2020b), Plasma Generation Optimization (Kaveh et al. 2020a), Simulated Annealing (Kirkpatrick et al. 1983), Solar System Algorithm (Zitouni et al. 2020), Vortex Search Algorithm (Doğan and Ölmez 2015), and chaotic Henry gas solubility optimization (Yıldız et al. 2022).
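The heating-and-slow-cooling idea translates into the Metropolis acceptance rule used by simulated annealing; a minimal sketch for minimization (the function name is illustrative):

```python
import math
import random

def sa_accept(f_new, f_cur, temperature, rng=random.Random(0)):
    """Metropolis acceptance: always keep improvements; accept worse
    moves with probability exp(-(f_new - f_cur) / T), so the search
    roams freely at high temperature and settles as T cools."""
    if f_new <= f_cur:
        return True
    return rng.random() < math.exp(-(f_new - f_cur) / temperature)
```

Lowering `temperature` over the run mimics the slow cooling: early on, worse moves are frequently accepted (exploration), and near the end almost only improvements survive (exploitation).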

2.4 Social or human-based algorithms

The final class of optimization algorithms comprises social or human-based algorithms, which simulate social or human behaviors. For example, Brain Storm Optimization (Shi 2011) is inspired by the human brainstorming process. The algorithm uses two operations: a convergent operation to group individuals in the search space and a divergent operation to spread individuals across the search space. Another human-based algorithm is Teaching-Learning-Based Optimization (Rao et al. 2011), which imitates the influence of a teacher on learners. The algorithm has two phases: learning from the teacher and learning by interacting with other learners. Recently, many researchers have proposed social or human-based algorithms such as Harmony Search (Geem et al. 2001), Heap-Based Optimizer (Askari et al. 2020a), Interactive Autodidactic School (Jahangiri et al. 2020), Lévy Flight Distribution (Houssein et al. 2020), Most Valuable Player Algorithm (Bouchekara 2020), Nomadic People Optimizer (Salih and Alsewari 2020), Political Optimizer (Askari et al. 2020b), Arithmetic Optimization Algorithm (Abualigah et al. 2021), Stock Exchange Trading Optimization (Emami 2022), Ali Baba and the Forty Thieves (Braik et al. 2022b), Football Game Inspired Algorithm (Fadakar and Ebrahimi 2016), Ebola Optimization Search Algorithm (Oyelade et al. 2022), Group Teaching Optimization Algorithm (Zhang and Jin 2020), and Coronavirus Herd Immunity Optimizer (MA et al. 2021).

3 Elk Herd Optimizer (EHO)

The breeding process of elk herds can be thought of as an optimization process. The elks are bred generation after generation to have a stronger herd that can face the challenges in the surrounding environment. In this section, the breeding process is mapped to optimization concepts. Firstly, the inspiration for EHO is discussed. Thereafter, the general optimization procedure of EHO and the mathematical model are illustrated.

3.1 EHO Inspiration

The elk, also called wapiti, belongs to the deer family and is the largest deer species after the moose. Elks live in the forests and forest edges of Central East Asia and North America, where they usually prefer warm weather but tolerate cold. Elks are non-predators; they feed on bark, leaves, plants, and grasses. Accordingly, elks sit at a low level of the food chain hierarchy. Despite that, elks are muscular animals that can jump, swim, and run short distances at speeds up to 50 km/h, particularly when they feel threatened. Furthermore, elks have strong senses of hearing and smell.

Because elks are weak compared to animals at the upper levels of the hierarchy, they live in large herd families of 200 or more elk to protect themselves. The herd contains males, females, and young elks. Females, also known as cows, make up most of the herd, whereas males, or bulls, are few due to their hostility toward each other over herd domination and protection. Young elks, or calves, usually follow the older bulls and cow groups.

Within the herd, elks use different sound articulations for communication and for warning others of dangers. The bulls use a distinct sound articulation called bugling that mainly advertises the male's fitness and starts the mating season to attract mates. The bugle sound is also used to announce the bull's position in the large herd. Cows produce a grunting sound to alert other elks in the herd of danger and also to call and find their calves, while calves make a sharp squealing sound when they are attacked (Geist 1993).

The mating or breeding season, which usually runs from September to October, is divided into the rutting and calving seasons. In the rutting season, elks become extremely aggressive toward any animal, even other bulls in the same herd. A bull starts the season by attracting cows and inviting other bulls to a fighting challenge for herd domination, raising its head and making the mating bugle articulation, as shown in Figure 1. Subsequently, other bulls respond to the challenge by bugling together, indicating that the fighting challenge has started. Once the fighting starts, elks shove and push, usually in pairs, using their antlers. Normally, the elks use an antler-locking strategy to exhaust the power of rival elks in the battle and impose their domination, as shown in Figure 2. When the weaker bull feels danger and a death threat from the stronger bull, it stops fighting by trotting away (Geist 1991). These fights usually end with damage to the antlers.

Fig. 1
figure 1

Elk start bugling

Fig. 2
figure 2

Elks rutting season

After the fighting challenge between all elks is finished, the stronger bulls gather more cows into groups, called harems, containing more than 20 cows each. The weaker bulls gather fewer cows, no more than five, in their harem groups. Each harem group is led and protected by only one bull, as shown in Figure 3.

In the calving season, mating between cows and bulls begins so that the cows become pregnant and reproduce new calves, where the cows mate only with their bulls. When pregnant cows are ready to give birth, they leave the herd to find a proper area for delivery. These areas are usually covered with brush and trees for protection and for hiding from predators. Afterwards, the cows breed calves, which could become cows or bulls, ending the breeding season. Three to four months after the end of the calving season, the new calves become young cows and bulls with stronger antlers that normally grow an inch every day. A new breeding season then starts by gathering all elks, including the father bulls, their cow harems, and the young calves. The goal is to find and select the strongest bull in the herd and start the domination challenge again (Shively et al. 2005; Geist 1993).


Fig. 3
figure 3

Elks herd families

3.2 The mathematical model of EHO

In this section, the Elk Herd Optimizer (EHO) is mathematically modeled in the optimization context. Initially, the elk herd population is divided into a set of families based on the number of bulls. In the rutting season, each family is led by its bull, and the number of its cows, or harems, is determined by the bull's strength, which is established through fighting domination challenges. In the calving season, each family then generates as many calves as it has family members. Finally, in the selection season, the members of all families are merged, and the best members are invited to the rutting season again. This process is repeated to ensure that the generated elk herd is capable of dealing with the challenges of the surrounding environment.

In the mathematical model of the EHO, six procedural steps are proposed to bridge the breeding cycle of elk herds into the optimization framework. These steps will be thoroughly discussed. The flowchart of the EHO is given in Figure 4, while the pseudo-code is provided in Algorithm 2.

Fig. 4
figure 4

Flowchart of the elk herd optimizer

  • Step 1: Initialize Parameters of EHO and optimization problem.

    In order to embed problem-specific knowledge into the EHO, two main components shall be provided: the objective function to evaluate each solution and the solution representation clarifying the search space type. In general, simple forms of optimization problems have a continuous search space where each decision variable has a specific value range. The general form of the objective function can be formulated as in Eq. (1).

    $$\begin{aligned} \min _x f({{\varvec{x}}}) \quad {{\varvec{x}}}\in [{{\varvec{lb}}},{{\varvec{ub}}}] \end{aligned}$$
    (1)

    where \(f({{\varvec{x}}})\) is the objective function used to measure the fitness of each elk or solution \({{{\varvec{x}}}}=(x_1,x_2,\ldots ,x_n)\). The variable \(x_i\) in each elk refers to one attribute of such elk indexed by i where \(x_i \in [lb_i,ub_i]\) in which \(lb_i\) is the lower bound, and \(ub_i\) is the upper bound for the attribute \(x_i\). n is the total number of attributes in each elk solution or solution dimensionality.

    The EHO is designed with only one parameter, which is the bull rate \(B_r\), which determines the rate of initial bulls in the elk herd. The other two standard parameters are the elk herd size or the population size (EHS) and the maximum number of iterations (\(M\_Itr\)).
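Step 1 might be set up as follows. The sphere function stands in for a real benchmark objective, and all parameter values are illustrative, not the paper's experimental settings:

```python
import numpy as np

# --- optimization problem (illustrative sphere function, not a CEC benchmark) ---
n = 10                               # solution dimensionality
lb = np.full(n, -100.0)              # lower bound per attribute
ub = np.full(n, 100.0)               # upper bound per attribute
f = lambda x: float(np.sum(x ** 2))  # objective to minimize, Eq. (1)

# --- EHO parameters ---
B_r = 0.2      # bull rate: fraction of the herd treated as bulls
EHS = 100      # elk herd (population) size
M_Itr = 1000   # maximum number of iterations
```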

  • Step 2: Generate the initial elk herd

    The elk herd (\({\textbf {EH}}\)) is initially generated as a population of elk solutions, including bulls and harems. The EH is a matrix of size \(EHS \times n\) as formulated in Eq. (2).

    $$\begin{aligned} {{{\textbf{E}}}}{{{\textbf{H}}}}=&\left[ \begin{matrix} x^{1}_{1} &{} x^{1}_{2} &{} \cdots &{} x^{1}_{n}\\ x^{2}_{1} &{} x^{2}_{2} &{} \cdots &{} x^{2}_{n}\\ \vdots &{} \vdots &{} \cdots &{} \vdots \\ x^{EHS}_{1} &{} x^{EHS}_{2} &{} \cdots &{} x^{EHS}_{n}\\ \end{matrix} \right] . \end{aligned}$$
    (2)

    In the continuous domain, each solution \({{\textbf {x}}}^j\) can be generated as \( x_i^{j} =lb_i + (ub_i - lb_i) \times U(0,1)\),    \(\forall i=1,2, \ldots , n\). The fitness value for each elk solution is calculated using the objective function in Eq. (1). Finally, the elks in \({{{\textbf{E}}}}{{{\textbf{H}}}}\) are sorted in ascending order based on their fitness values, such that \(f({{\varvec{x}}}^1)\le f({{\varvec{x}}}^2)\le \ldots \le f({{\varvec{x}}}^{EHS})\).
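Step 2 can be sketched as below, assuming a NumPy representation where each row of EH is one elk; `init_herd` is an illustrative name:

```python
import numpy as np

def init_herd(f, lb, ub, EHS, rng=np.random.default_rng(0)):
    """Random herd in [lb, ub], sorted ascending by fitness (Eq. 1)."""
    n = len(lb)
    EH = lb + (ub - lb) * rng.random((EHS, n))   # one elk per row
    fit = np.array([f(x) for x in EH])
    order = np.argsort(fit)                      # best (smallest f) first
    return EH[order], fit[order]
```

Keeping the herd sorted means the first B rows are always the bulls, which simplifies the rutting season in the next step.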

  • Step 3: Rutting season

    In the rutting season, the EHO creates the families based on the bull rate (\(B_r\)). Initially, the total number of families is calculated as \(B= |B_r \times EHS|\). Then the bulls are selected from \({{{\textbf{E}}}}{{{\textbf{H}}}}\) based on their fitness values, where the B elks with the best fitness values at the top of \({{{\textbf{E}}}}{{{\textbf{H}}}}\) are considered bulls (see Eq. (3)). This reflects the fighting domination challenges, in which the strongest elks prevail and are assigned more harems.

    $$\begin{aligned} {{\mathcal {B}}} = \arg \min _{j\in (1,2,\ldots ,B)} f({{\varvec{x}}}^j) \end{aligned}$$
    (3)

    The bulls in the \({{\mathcal {B}}}\) set then fight to create families. To assign the harems to the bulls in \({{\mathcal {B}}}\), roulette-wheel selection is used, where each harem is assigned to a bull with probability proportional to the bull's fitness relative to the total fitness. In technical terms, each bull \({{\varvec{x}}}^j\) in \({{\mathcal {B}}}\) is first assigned a selection probability \(p_j\) based on its absolute fitness value \(f({{\varvec{x}}}^j)\) divided by the sum of the absolute fitness values of all bulls, as computed in Eq. (4).

    $$\begin{aligned} p_j=\frac{f({{\varvec{x}}}^j)}{\sum _{k=1}^{B}f({{\varvec{x}}}^k)} \end{aligned}$$
    (4)

    Secondly, the harems will be distributed to the bulls based on their selection probability \(p_j\) as given in the Algorithm 1. In the Algorithm, the vector \({{\textbf {H}}}=(h_1,h_2, \ldots , h_k)\), \(k=EHS-B\) reflects the harems, each of which is assigned by the bull index determined based on roulette-wheel selection.

    For example, if the elk herd size is ten (\(EHS=10\)) and the bull rate is 30%, then \(B=3\), which is the number of families, and \({{\mathcal {B}}}=({{\varvec{x}}}^1, {{\varvec{x}}}^2,{{\varvec{x}}}^3)\). The remaining elks (i.e., \(({{\varvec{x}}}^4, \ldots ,{{\varvec{x}}}^{10})\)) are designated as harems and distributed by roulette-wheel selection; a resulting assignment could be \({{\textbf {H}}}=(1,2,1,3,1,2,3)\), where the first bull has three harems, the second bull has two harems, and the third bull has two harems.
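The harem assignment of Algorithm 1 can be sketched as below. The sketch follows Eq. (4) literally (probabilities from raw absolute fitness values); `assign_harems` and its 0-based bull indices are illustrative choices:

```python
import numpy as np

def assign_harems(fit, B, EHS, rng=np.random.default_rng(0)):
    """Distribute the EHS - B harems among the B bulls by roulette-wheel
    selection with the probabilities of Eq. (4).

    fit is sorted ascending, so fit[:B] are the bulls' fitness values.
    Returns H, where H[k] is the (0-based) bull index of harem k.
    """
    p = np.abs(fit[:B]) / np.sum(np.abs(fit[:B]))  # selection probability per bull
    return rng.choice(B, size=EHS - B, p=p)
```

Running this with \(EHS=10\) and \(B=3\) yields a length-7 vector of bull indices, analogous to the example assignment above (with 0-based rather than 1-based indices).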

  • Step 4: Calving season

    In the calving season, the calves (\(x_i^j(t+1)\)) of each family are reproduced based on attributes mostly extracted from their father bull (\({{\varvec{x}}}^{h_j}\)) and mother harem (\(x_i^j(t)\)).

    In case the calf (\({{\varvec{x}}}_{i}(t+1)\)) has the same index i as its bull father in the family, the calf is reproduced as shown in Eq. (5).

    $$\begin{aligned} x_i^j(t+1) = x_i^j(t) + \alpha \cdot (x_i^{k}(t) - x_{i}^j(t)) \end{aligned}$$
    (5)

    where \(\alpha \) is a random value within the range of [0, 1] that determines the rate of the inherited attributes from the randomly selected elk in the herd \({{{\varvec{x}}}}^{k}(t)\) where \(k \in (1,2,\ldots , EHS)\). Please note that a higher value of \(\alpha \) results in a greater likelihood of random elements participating in the new calf, which, in turn, enhances diversification.

    In case the calf has the same index as its mother, then \({{\varvec{x}}}_{i}(t+1)\) takes the attributes of its mother harem \({{\varvec{x}}}^j\) and father bull \({{\varvec{x}}}^{h_j}\) (see Figure 5), as formulated in Eq. (6).

    $$\begin{aligned} x_i^j(t+1)= x_i^j(t)+ \beta (x_{i}^{h_j}(t) - x_i^j(t)) + \gamma (x_i^r(t) - x_i^j(t)) \end{aligned}$$
    (6)

    where \(x_i^j(t+1)\) is attribute i of calf j at iteration \(t+1\), which will be stored in \({\textbf {EH'}}\). \(h_j\) is the bull of harem j, and r is the index of a random bull in the current bull set such that \(r \in {{\mathcal {B}}}\). In nature, in a few cases, the mother harem can also mate with other bulls if it is not well defended by its own bull. \(\beta \) and \(\gamma \) are random values in the range [0, 2] that randomly determine the portions of the attributes inherited from the father bull and the random bull, respectively.

    It is worth mentioning from Equation 6 that the coefficients \(\beta \) and \(\gamma \) may represent significant parameters in the proposed EHO, given their resemblance to the ‘social’ and ‘cognitive’ models in the PSO (Kennedy 1997). Experiments have demonstrated the importance of both ‘social’ and ‘cognitive’ coefficients for PSO’s success, and numerous other researchers have adopted this configuration in their works as reported in the literature (Braik 2021; Braik et al. 2022c). It should also be realized that, for some optimization problems, ad hoc random values for \(\beta \) and \(\gamma \) in the interval [0, 2] instead of fixed values might result in improved performance. This could be because random values for \(\beta \) and \(\gamma \) in the specified range can be promising in achieving a respectable level of performance for EHO. This indicates that \(\beta \) and \(\gamma \) can balance the global and local search abilities of EHO.
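Eqs. (5) and (6) can be sketched together as one calving step. Two simplifying assumptions are made: the harem-to-bull mapping `H` is stored as a full-length index array (entries for bull rows unused), and one scalar \(\alpha\), \(\beta\), \(\gamma\) is drawn per calf rather than per attribute:

```python
import numpy as np

def calving(EH, bulls, H, rng=np.random.default_rng(0)):
    """Breed one calf per herd member: Eq. (5) for bull rows,
    Eq. (6) for harem rows. `bulls` holds the bull row indices; H[j]
    is the bull index of harem row j (entries for bull rows unused)."""
    EHS, n = EH.shape
    calves = np.empty_like(EH)
    for j in range(EHS):
        if j in bulls:                           # Eq. (5): calf of a bull
            k = rng.integers(EHS)                # random elk in the herd
            alpha = rng.random()                 # alpha in [0, 1]
            calves[j] = EH[j] + alpha * (EH[k] - EH[j])
        else:                                    # Eq. (6): calf of a harem
            hj = H[j]                            # its own bull
            r = bulls[rng.integers(len(bulls))]  # a random bull in B
            beta, gamma = rng.random(2) * 2.0    # beta, gamma in [0, 2]
            calves[j] = (EH[j] + beta * (EH[hj] - EH[j])
                               + gamma * (EH[r] - EH[j]))
    return calves
```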

  • Step 5: Selection season

    The bulls, calves, and harems of all families are merged. In technical terms, the \({\textbf {EH}}\) matrix storing the bull and harem solutions and the \({\textbf {EH}}'\) matrix storing the calf solutions are merged into one matrix \({{\textbf {EH}}_{temp}}\). The elks in \({{\textbf {EH}}_{temp}}\) are sorted in ascending order based on their fitness values. Finally, the top EHS elks in \({{\textbf {EH}}_{temp}}\) are kept for the next generation, where they replace the elks in \({\textbf {EH}}\), such that \({\textbf {EH}}^j={\textbf {EH}}_{temp}^j\), \({j=(1,\ldots , EHS)}\). In evolution strategies, this type of selection is called (\(\mu +\lambda \))-selection, where \(\mu \) is the parent population and \(\lambda \) is the offspring population (Eiben et al. 2003).
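The merge-and-truncate (\(\mu +\lambda \))-style selection might look like the following sketch (`select` is an illustrative name):

```python
import numpy as np

def select(EH, fit, calves, f):
    """Merge parents (EH) and calves, keep the EHS fittest rows and
    their fitness values for the next generation."""
    EHS = EH.shape[0]
    calf_fit = np.array([f(x) for x in calves])
    pool = np.vstack([EH, calves])               # EH_temp
    pool_fit = np.concatenate([fit, calf_fit])
    order = np.argsort(pool_fit)[:EHS]           # top EHS, ascending fitness
    return pool[order], pool_fit[order]
```

Because the parents compete with their own offspring, the best-so-far solution can never be lost, which is the elitism property of (\(\mu +\lambda \))-selection.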

  • Step 6: Termination criteria

    Steps 3, 4, and 5 are repeated until the termination criterion is met. Usually, the termination criterion is the maximum number of iterations; alternatives include a maximum number of idle iterations without improvement, a maximum computational time, or reaching a known optimal solution.
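A termination check covering the criteria listed above might look like the following; `max_idle` and `target_f` are illustrative names for the idle-iteration budget and the known optimum:

```python
def should_stop(t, M_Itr, idle, max_idle, best_f, target_f=None):
    """Stop on iteration budget, stagnation (idle iterations without
    improvement), or reaching a known target objective value."""
    if t >= M_Itr:
        return True
    if idle >= max_idle:
        return True
    return target_f is not None and best_f <= target_f
```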

Algorithm 1
figure a

The pseudo-code of Roulette-wheel selection

Fig. 5
figure 5

Calves reproduction

Algorithm 2
figure b

The pseudo-code of EHO

3.3 Numerical example

In order to provide a better understanding of the behavior of the EHO when navigating the search space of an optimization problem, the Shifted and Rotated Bent Cigar function, taken from CEC 2017 (Awad et al. 2016), is used. The parameter settings of the EHO are \(EHS=10\), \(n=10\), \(B_r=30\%\), \(t_{max}=500\), \(ub=100\), and \(lb=-100\). Table 1 reports the resulting elk herd (\({\textbf {EH}}\)) at iterations 1, 10, 100, and 500. As can be noticed, at iteration 1, the initial elk herd is distributed over three families, and the fitness values are divergent because the solutions are generated randomly. By the second iteration, the resulting calves have improved substantially in comparison with their parents. At the tenth iteration, the size of the improvement is reduced, but in general, the fitness values of the calves are better than those of their parents. At iteration 100, the 10 solutions have become close to each other. At iteration 500, the improvement has become narrower, and the elk distribution over the families tends to be random, because no superior bull can dominate the elks in \({\textbf {EH}}\).

Table 1 Numerical example of running the proposed EHO on Shifted and Rotated Bent Cigar function of 10 variables with 500 iterations and 10 solutions

The convergence behavior of the ten solutions is shown in Figure 6. Clearly, the exploration behavior of EHO is at its highest level in the initial course of runs. The elk solutions almost converge to the same region when iteration 20 is reached. This is to show that the EHO can quickly converge to the optimal region, especially when the problem search space is not large.

Fig. 6
figure 6

Example

4 Experiment results and discussion

This section presents the computational outcomes of the proposed EHO on standard benchmark optimization problems. A set of statistical measures is first utilized to quantify the effectiveness of the proposed EHO in comparison with other MH algorithms. Second, convergence curves demonstrate how well the proposed EHO optimizes a certain collection of benchmark functions. To evaluate the accuracy and suitability of EHO on real-world challenges, a set of four traditional engineering design problems is tackled. By contrasting the findings of EHO with those of other cutting-edge MH algorithms in the literature, the efficacy of EHO is examined, evaluated, and highlighted.

4.1 Description of the benchmark test functions

The performance of the proposed EHO was examined on a test suite of 29 benchmark optimization problems utilized in the CEC-2017 special sessions on real-parameter optimization. This test suite consists of 30 test functions, of which 29 are stable and one is unstable. The suite contains hybrid and composite functions, constructed by rotating, shifting, expanding, and hybridizing uni-modal and multi-modal problems into exceedingly difficult testbeds. These test functions mimic the complexity of a genuine search space with several local optima and a variety of function forms in different regions. They were created to evaluate the reliability of local-optimum avoidance in addition to investigating the exploration ability of optimization methods, since a skilled optimization algorithm is broadly expected to avoid local optima and quickly reach the global optimum. This test set was chosen due to the difficulty of its challenges and the added rigor it brings to the evaluation of EHO's reliability and performance. More information regarding the CEC-2017 benchmark test problems can be found in (Awad et al. 2016). The proposed EHO algorithm was also assessed on a test set of four traditional real-world engineering design optimization problems in order to further challenge its performance on real-world optimization tasks.

4.2 Experimental setup

To corroborate a thorough assessment of the proposed EHO, its outcomes are set side by side with twelve well-established optimization algorithms from the literature when tested on the aforementioned benchmark test groups. The rival MH algorithms are: Salp Swarm Algorithm (SSA) (Mirjalili et al. 2017), Sine Cosine Algorithm (SCA) (Mirjalili 2016), Rat Swarm Optimizer (RSO) (Dhiman et al. 2021), Moth-Flame Optimizer (MFO) (Mirjalili 2015), Horse herd Optimization Algorithm (HOA) (MiarNaeimi et al. 2021), Capuchin Search Algorithm (CapSA) (Braik et al. 2021), Ali Baba and the Forty Thieves (AFT) (Braik et al. 2022b), Crow Search Algorithm (CSA) (Askarzadeh 2016), Bat Algorithm (BA) (Yang and Gandomi 2012), Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995), Ant Colony Optimization (ACO) (Dorigo et al. 1996), and Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen and Ostermeier 1997). Table 2 displays the control parameters and settings for the proposed EHO algorithm and the rival MH algorithms.

Table 2 Parameter setting of the proposed EHO algorithm and other MH competitors

The parameter settings of the competing optimization algorithms are listed in Table 2, except for CSA, which uses the settings recommended in (Askarzadeh 2016). EHO uses the same initialization process as the comparative optimization methods in order to ensure a fair comparison. Following common practice in the literature, each method uses 100 search agents (i.e., EHS = 100) and a maximum budget of \(10000\times n\) function evaluations, where \(n\) is the problem dimension. The bull rate (\(B_r\)) for EHO is determined by the initial population composition, which was experimentally chosen from among the ratios 10:90, 20:80, and 30:70. In our experiments, the 20:80 ratio is adopted, which indicates that 20% of the population consists of bulls, while the remaining 80% forms the harem.
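The 20:80 split described above can be sketched as follows. This is a minimal illustration in which the function name, bounds, and dimension are our own choices, not taken from the EHO paper; in the actual algorithm the bull subset would presumably be chosen by fitness rather than by an arbitrary slice, so only the ratio is demonstrated here.

```python
import random

def init_elk_herd(pop_size=100, dim=10, lb=-100.0, ub=100.0, bull_rate=0.20):
    """Randomly initialize a herd and split it into bulls and harem.

    With bull_rate = 0.20 (the adopted 20:80 ratio), 20% of the
    search agents act as bulls and the remaining 80% form the harem.
    """
    herd = [[lb + random.random() * (ub - lb) for _ in range(dim)]
            for _ in range(pop_size)]
    n_bulls = int(round(bull_rate * pop_size))
    return herd[:n_bulls], herd[n_bulls:]

bulls, harem = init_elk_herd()
```

With the default settings this yields 20 bulls and 80 harem members, matching the adopted population composition.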

Each optimization algorithm in Table 2 was assessed using thirty independent runs for each test optimization problem, with the maximum number of iterations as the stop condition. Since all algorithms are compared with identical floating-point precision, the variations in the results are caused by the efficiency of the competing methods. Over these independent runs, the best, mean, worst, and standard deviation (Stdv) are calculated and utilized as performance assessment indicators for the accuracy and stability of the rival algorithms. These four statistical metrics were calculated for each method on each test function. The mean measure was employed to assess the algorithms' accuracy, while the standard deviation analysis reveals how steady the performance of each algorithm is across the independent runs. The top outcomes for all test functions are shown in bold in all tables to give them more prominence. The performance of EHO in comparison with the other optimization algorithms on the CEC-2017 and engineering design benchmark optimization tasks is presented and discussed in the next subsections.
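The four assessment indicators above can be computed from the final objective values of the independent runs with a few lines of standard-library Python (a sketch; the function name and sample values are our own):

```python
import statistics

def summarize_runs(final_fitness):
    """Best, mean, worst, and standard deviation over independent runs.

    For a minimization problem, 'best' is the smallest final objective
    value and 'worst' the largest; 'stdv' is the sample standard
    deviation, which reflects the stability of the method."""
    return {
        "best": min(final_fitness),
        "mean": statistics.mean(final_fitness),
        "worst": max(final_fitness),
        "stdv": statistics.stdev(final_fitness),
    }

# Example with four (made-up) final objective values.
stats = summarize_runs([3.2, 3.0, 3.5, 3.1])
```

In the experiments each list would hold thirty values, one per independent run of an algorithm on a test function.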

4.3 Performance of EHO on CEC-2017 test functions with problem size of 10 variables

In this section, the performance of the proposed EHO is evaluated and compared to the other comparative methods using the CEC-2017 test functions with a problem size of 10 variables. The results of all competitors are summarized in terms of the best solution, the mean of the results, the worst solution, and the standard deviation in Table 3. It should be noted that lower results reflect better performance, and the lowest mean results are highlighted in bold. The results of this table highlight the superiority of the proposed EHO: the EHO, SSA, and CapSA ranked first, as each obtained the best mean of the results in 10 test functions. The CSA ranked second by achieving the best mean of the results in 7 test functions, while the AFT came third by getting the best results in 6 test functions. In addition, the BA and PSO were placed fourth, each obtaining the best results in 3 test functions, while the remaining competitors were not able to achieve the best results for any of the test functions.

Reading the results demonstrated in Table 3, it can be seen that the EHO performs better than the other comparative algorithms on the simple multimodal functions (C17-F4 to C17-F8, and C17-F10), where it obtained the best mean of results in 5 out of these 6 test functions. In addition, the EHO outperforms the other comparative algorithms in 2 out of the 3 unimodal functions (C17-F1 to C17-F3). The performance of the EHO was also convincing on the hybrid functions (C17-F11 to C17-F20), obtaining the best mean of results in 3 out of these 10 functions. It should be noted that the CapSA algorithm performs better than the EHO and all other comparative algorithms on 5 of the hybrid functions, which leads to the conclusion that the EHO has the second-best performance on the hybrid functions. Finally, the results of the EHO were acceptable and very competitive with the other methods on the 10 composition functions (C17-F21 to C17-F30).

The standard deviation (Stdv) results reflect the stability of a solution method, where lower Stdv values mean better stability. Reading the Stdv results recorded in Table 3, it can be seen that the EHO is more stable than the other comparative methods, especially on C17-F2, C17-F3, C17-F4, C17-F6, C17-F8, C17-F11, and C17-F23. The performance of the EHO is also robust compared with the other comparative algorithms on the remaining test functions.

Table 3 Optimization results of EHO and other comparative algorithms on the CEC-2017 test functions of 10 variables with 100,000 FEs

In addition, Friedman’s statistical test was used to prove the effectiveness of the proposed EHO against the other comparative methods. This is illustrated in Table 4, which presents the average rankings of all competitors according to the mean results summarized in Table 3. It is worth mentioning that lower average rankings reflect better performance, and the significance level \(\alpha \) is set to 0.05. \(H_0\) is the null hypothesis, which assumes that all competitors have the same performance, while \(H_1\) is the alternative hypothesis, which assumes that there is a significant difference between the performance of the competitors. From Table 4, it can be seen that SSA ranked first with the lowest average ranking of 3.38, while the CSA came second. The CapSA was placed third, while the AFT was ranked fourth. The proposed EHO was placed in the fifth position, while the remaining algorithms came in the next ranking positions. The p-value calculated using Friedman’s test is 1.276E−10, which is less than the significance level (\(\alpha \)=0.05). This leads to rejecting \(H_0\) and accepting \(H_1\).
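The average rankings behind Table 4 and the Friedman statistic can be reproduced with the following pure-Python sketch (the function names and toy data are our own; tied values share the mean of the ranks they occupy, and lower objective values receive smaller, i.e., better, ranks):

```python
def average_ranks(results):
    """results[i][j]: mean result of algorithm j on problem i (lower is
    better). Returns each algorithm's Friedman average rank."""
    n_prob, k = len(results), len(results[0])
    totals = [0.0] * k
    for row in results:
        order = sorted(range(k), key=lambda j: row[j])
        pos = 0
        while pos < k:
            tied = [order[pos]]
            while pos + len(tied) < k and row[order[pos + len(tied)]] == row[order[pos]]:
                tied.append(order[pos + len(tied)])
            mean_rank = pos + (len(tied) + 1) / 2  # ranks are 1-based
            for j in tied:
                totals[j] += mean_rank
            pos += len(tied)
    return [t / n_prob for t in totals]

def friedman_statistic(avg_ranks, n_problems):
    """Friedman chi-square statistic computed from the average ranks."""
    k = len(avg_ranks)
    return 12 * n_problems / (k * (k + 1)) * (
        sum(r * r for r in avg_ranks) - k * (k + 1) ** 2 / 4)
```

The p-value then follows from the chi-square distribution with \(k-1\) degrees of freedom (or, in practice, from a statistics library).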

Table 4 Friedman’s statistical test of EHO and other comparative algorithms on CEC 2017 with 10 variables in terms of mean results using Friedman’s test

Additionally, Holm’s test is used as a post-hoc procedure to confirm the differences between the performance of the control algorithm and the other comparative algorithms. It should be noted that the SSA is the control algorithm because it ranked first according to Friedman’s test. From Table 5, it can be seen that there is a significant difference between the SSA and eight of the competitors (i.e., RSO, BA, MFO, HOA, SCA, CMA-ES, ACO, and PSO). On the other hand, there is no significant difference between the SSA and the remaining comparative algorithms (i.e., CSA, AFT, CapSA, and EHO). This proves the effectiveness of the proposed EHO as an alternative algorithm in the optimization domain.
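Holm’s step-down procedure itself is straightforward to sketch: the unadjusted p-values of the pairwise comparisons against the control method are sorted in ascending order and compared with successively relaxed thresholds. The p-values below are made-up numbers for illustration, not those of Table 5.

```python
def holm_test(p_values, alpha=0.05):
    """Holm's step-down procedure.

    p_values maps an algorithm name to the unadjusted p-value of its
    pairwise comparison with the control method. Returns the set of
    algorithms whose difference from the control is significant."""
    ordered = sorted(p_values.items(), key=lambda kv: kv[1])
    m = len(ordered)
    significant = set()
    for i, (name, p) in enumerate(ordered):
        if p <= alpha / (m - i):  # thresholds alpha/m, alpha/(m-1), ...
            significant.add(name)
        else:
            break  # stop at the first retained hypothesis
    return significant
```

For example, with p-values {A: 0.001, B: 0.04, C: 0.3}, only A is declared significantly different: B fails its threshold of 0.05/2, and the procedure stops there.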

Table 5 Holm’s results between the control method (SSA) and other comparative methods based on the mean results of all algorithms on CEC 2017 test functions with 10 variables

4.4 Performance of EHO on CEC-2017 test functions with problem size of 30 variables

The performance of the proposed EHO was evaluated and compared to the other comparative methods using the CEC-2017 test functions with a problem size of 30 variables, in order to assess the proposed algorithm on more complex, higher-dimensional optimization problems. Table 6 shows the results of the EHO and the other comparative algorithms in terms of the best solution, the mean, the worst solution, and the standard deviation, where lower results mean better performance. Interestingly, the EHO ranked first by obtaining the best mean of the results in 12 out of the 29 test functions, while the CapSA ranked second by getting the best results in 10 test functions. The SSA ranked third by achieving the best results in 7 test functions, while the AFT, CSA, and BA came in the next ranking positions by getting the best mean of results in 4, 3, and 2 test functions, respectively. The PSO and HOA ranked seventh, each getting the best results in one test function. However, the SCA, RSO, and MFO were not able to achieve the best results for any of the test functions.

Reading the results presented in Table 6 more in-depth, we find that the performance of the EHO is better than that of the other competitors on the simple multimodal functions (C17-F4 to C17-F8, and C17-F10), where the EHO obtained the best mean of the results on C17-F5 to C17-F8. Furthermore, the EHO performs better than the other comparative algorithms on the composition functions (C17-F21 to C17-F30) by getting the best mean of the results in 5 out of these 10 test functions. In addition, the results of the EHO were very competitive with the others on the hybrid functions (C17-F11 to C17-F20) and the unimodal functions (C17-F1 to C17-F3).

Table 6 Optimization results of EHO and other comparative algorithms on the CEC-2017 test functions of 30 variables with 300,000 FEs

To prove the effectiveness of the proposed EHO, Friedman’s statistical test was used to rank all competitors according to the mean results summarized in Table 6. The average rankings of all competitors are presented in Table 7, where lower rankings mean better performance. It can be seen that CapSA obtained the first rank, while the SSA was placed in the second rank. The proposed EHO was ranked third, while the remaining algorithms came in the next ranking positions. The p-value calculated using Friedman’s test is equal to 1.265E−10, which is smaller than the significance level (\(\alpha \)=0.05). This leads us to reject \(H_0\) and accept \(H_1\).

Thereafter, Holm’s procedure was used to confirm the outcomes of Friedman’s test, with the CMA-ES taken as the control algorithm. From Table 8, it can be seen that there is a significant difference between the CMA-ES and nine of the other comparative algorithms (i.e., RSO, SCA, MFO, HOA, ACO, AFT, PSO, CSA, and BA). On the other hand, there are no significant differences between the control algorithm (CMA-ES) and the remaining algorithms (i.e., CapSA, SSA, and EHO). Clearly, there is no significant difference between the CMA-ES and the proposed EHO, which confirms the efficiency of the proposed EHO as a powerful algorithm in the optimization domain.

Table 7 Friedman’s statistical test of EHO and other comparative algorithms on CEC 2017 with 30 variables in terms of mean results using Friedman’s test
Table 8 Holm’s results between the control method (CMA-ES) and other comparative methods based on the mean results of all algorithms on CEC 2017 test functions with 30 variables

4.5 Performance of EHO on CEC-2017 test functions with problem size of 50 variables

In this section, the effectiveness and robustness of the proposed EHO are compared against the other competitors using the large-scale CEC-2017 test functions with a problem size of 50 variables. Table 9 demonstrates the results of all competitors in terms of the best solution, the mean, the worst solution, and the standard deviation. Lower mean results are better, and the best mean results are highlighted in bold. Reading the results recorded in Table 9, the superiority of the proposed EHO can be seen, with a ranking pattern similar to that obtained on the same functions with a problem size of 10. The EHO, SSA, and CapSA ranked first, with each obtaining the best mean of the results in 10 out of the 29 test functions. The AFT and CSA obtained the second rank, with each getting the best mean of results in three test functions. The PSO was placed third by getting the best results for the C17-F3 and C17-F7 test functions, while the BA obtained the best mean of results on the C17-F2 test function. Finally, the remaining comparative algorithms were not able to achieve the best mean of results for any of the test functions.

Furthermore, Table 9 shows that the proposed EHO achieves better results than the other comparative algorithms on the 10 composition functions (C17-F21 to C17-F30), obtaining the best results for C17-F21, C17-F23, C17-F24, C17-F29, and C17-F30. The performance of the proposed EHO is also better than that of the other comparative methods on the 6 simple multimodal functions (C17-F4 to C17-F8, and C17-F10) by getting the best results for C17-F5, C17-F7, and C17-F8. In addition, the results of the EHO are very competitive with the other competitors on the hybrid functions (C17-F11 to C17-F20) and the unimodal functions (C17-F1 to C17-F3).

Table 9 Optimization results of EHO and other comparative algorithms on the CEC-2017 test functions of 50 variables with 500,000 FEs

Friedman’s statistical test is used to prove the superiority of the proposed EHO by calculating the average ranking of the EHO against the other competitors based on the mean results given in Table 9. The average rankings of all competitors are reported in Table 10, where lower rankings reflect better performance. From Table 10, it can be observed that the CMA-ES ranked first, while the SSA was placed in the second rank. The proposed EHO achieved the third rank, while the PSO got the fourth rank. The remaining nine algorithms came in the next ranking positions. The p-value calculated using Friedman’s test is 9.588E−11, which is less than the significance level (\(\alpha \)=0.05). This leads us to reject the null hypothesis \(H_0\) and accept the alternative hypothesis \(H_1\).

Later on, Holm’s procedure was utilized to confirm the difference between the behavior of the control algorithm and the other comparative algorithms. It should be noted that the CMA-ES is the control algorithm according to the results of Friedman’s test. Table 11 reports the results of Holm’s procedure. Clearly, there is a significant difference between the CMA-ES and nine of the other methods (i.e., RSO, SCA, MFO, ACO, HOA, AFT, CSA, PSO, and BA). On the other hand, there is no significant difference between the behavior of the CMA-ES and the remaining methods (i.e., CapSA, EHO, and SSA). Finally, we can conclude that the performance of the proposed EHO is similar to some of the comparative algorithms and better than others, which proves the efficiency of the proposed EHO as a new alternative technique in the optimization domain.

Table 10 Friedman’s statistical test of EHO and other comparative algorithms on CEC 2017 with 50 variables in terms of mean results using Friedman’s test
Table 11 Holm’s results between the control method (CMA-ES) and other comparative methods based on the mean results of all algorithms on CEC 2017 test functions with 50 variables

4.6 EHO Convergence analysis

This section studies and analyzes the convergence behavior of the proposed EHO compared against some of the other comparative algorithms using the CEC-2017 test functions. The distribution of the results for these competitors during the search process is visualized in Figure 7, and the convergence curves of some competitors towards the optimal solution are plotted in Figure 8. It should be noted that seven of the test functions with three different problem dimensions (i.e., dim=10, dim=30, and dim=50) are considered in these figures to study test functions with different search space complexities. These include C17-F1 as a unimodal function; C17-F5 and C17-F10 as multimodal functions; C17-F15 and C17-F20 as hybrid functions; and C17-F25 and C17-F30 as composition functions.

Figure 7 shows the notched boxplots used to plot the distribution of the results of the proposed EHO against the other competitors on seven test functions with different problem dimensionality. The x-axis represents the algorithm, while the y-axis represents the objective function values. It should be noted that each comparative method was run 30 times on each test function. In the plots, a small gap between the best results, the median, and the worst results reflects the stability of the algorithm. From Figure 7, it can be clearly seen that there is no gap between the results of the proposed EHO on C17-F1, C17-F15, C17-F25, and C17-F30. In other words, the proposed EHO was able to achieve almost the same results in every run of the experiment. However, the gap in the results of the EHO widens as the dimension of the problem increases, as shown in the plot of C17-F20. The behavior of the proposed EHO is stable in the plot of C17-F5, which leads to achieving the best results. The behavior of the proposed EHO is similar to the other competitors on C17-F10, although the results of some other competitors are better than those of the proposed EHO. Finally, it can be observed that the performance of the proposed EHO is stable regardless of the dimension of the problem compared to the other competitors in most of the cases studied, which proves the efficiency of the proposed EHO.

Similarly, the convergence behavior of the proposed EHO compared against the other comparative methods is shown in Figure 8. The x-axis represents the iterations, while the y-axis represents the objective function values. The best solution obtained by running each algorithm on each test function 30 times is plotted in this figure. The preferable optimization algorithm is one that presents rapid convergence at the early stages of the search process, with improvements continuing until the last stages of the search. In other words, such an algorithm is able to strike the right balance between exploration and exploitation during the search process and thus achieve satisfactory results. Reading Figure 8, it can be seen that the convergence curves of all algorithms on all test functions stabilize before 2000 iterations. The convergence curve of the proposed EHO was better than those of the other comparative algorithms on C17-F5 and C17-F20, and similar to some of the other comparative algorithms on the remaining test functions studied in Figure 8. The curve of the RSO algorithm was the worst compared to the other comparative algorithms, due to the fact that the RSO has shortcomings in its exploration ability and thus gets stuck in local optima.

Fig. 7
figure 7

Boxplots of the objective function results achieved by the proposed EHO and some other comparative algorithms

Fig. 8
figure 8

The convergence characteristic curves of the proposed EHO and some other comparative algorithms for C17-F1, C17-F5, C17-F10, C17-F15, C17-F20, C17-F25, and C17-F30

4.7 Performance of EHO on engineering problems

The performance of EHO in tackling real-world problems, particularly constrained optimization problems, is demonstrated by validating it on popular traditional engineering design problems. Here, EHO is utilized to address four well-researched engineering designs: the welded beam design problem, the pressure vessel design problem, the tension/compression spring design problem, and the speed reducer design problem. These problems have a relatively wide range of constraints, which requires employing a constraint-handling strategy to optimize them.

4.7.1 Constraint handling

To deal with the constraints of the aforementioned engineering design problems, EHO was adapted with a simple constraint-handling technique called the static penalty method (Yang 2010a). This is applied in order to have a fair comparison between EHO and the comparative methods used in this work. The penalty function of this method can be presented as shown below:

$$\begin{aligned} \zeta (z) = f(z) \pm \left[ \sum _{i=1}^{m} l_i \cdot max (0, t_i(z))^\gamma + \sum _{j=1}^{n} o_j \left| U_j(z) \right| ^\psi \right] \end{aligned}$$
(7)

where \(o_j\) and \(l_i\) are two positive penalty constants, \(t_i(z)\) and \(U_j(z)\) are the inequality and equality constraint functions, respectively, \(f(z)\) is the original objective function, and \(\zeta (z)\) is the penalized objective function. The values of \(\psi \) and \(\gamma \) were set to 2 and 1, respectively.

This constraint-handling method stands out for its ease of use and minimal computational cost. It is quite useful for tackling design problems with dominating infeasible areas since it does not require additional information about infeasible solutions. The method assigns a static penalty value to each solution, which can help the search agents of optimization algorithms find the right solution faster. It is important to note that the numbers of search agents and iterations used to solve each of the engineering problems below were the same as those used to solve the preceding mathematical test functions.
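A minimal sketch of the static penalty of Eq. (7) for a minimization problem is given below. As a simplifying assumption on our part, the per-constraint penalty constants \(l_i\) and \(o_j\) are collapsed into two single illustrative values; the exponents follow the paper's settings \(\gamma =1\) and \(\psi =2\).

```python
def static_penalty(f, ineqs, eqs, z, l=1e6, o=1e6, gamma=1, psi=2):
    """Penalized objective of Eq. (7) for minimization:
    zeta(z) = f(z) + sum_i l*max(0, t_i(z))**gamma + sum_j o*|U_j(z)|**psi.

    ineqs: inequality constraint functions t_i, satisfied when t_i(z) <= 0.
    eqs:   equality constraint functions U_j, satisfied when U_j(z) = 0."""
    pen = sum(l * max(0.0, t(z)) ** gamma for t in ineqs)
    pen += sum(o * abs(u(z)) ** psi for u in eqs)
    return f(z) + pen

# Toy example: minimize z0**2 subject to z0 >= 1, i.e., 1 - z0 <= 0.
f = lambda z: z[0] ** 2
g = lambda z: 1.0 - z[0]
```

A feasible point keeps its original objective value, while an infeasible point is penalized in proportion to its constraint violation, steering the search back toward the feasible region.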

The parameters that EHO uses are presented above. The literature contains a number of meta-heuristic optimization techniques that have previously been applied to these design optimization problems. As demonstrated below, the outcomes of EHO are contrasted with those of other promising meta-heuristic algorithms.

4.7.2 Welded beam design problem

The design of this problem is a cantilever beam welded at one end and subjected to a spot load at the other end. The goal of this problem is to design a welded beam for the construction shown in Figure 9 (Wang et al. 2014) to arrive at the lowest fabrication cost.

Fig. 9
figure 9

A welded beam structure’s design (Wang et al. 2014)

The welded beam structure comprises a beam (A) and a weld required to join the beam to the member (B). The following restrictions apply to this problem: shear stress (\(\tau \)), bending stress (\(\theta \)), buckling load (\(P_c\)), and end deflection of the beam (\(\delta \)). In order to solve this optimization problem, it is necessary to find the best possible combination of the following structural parameters of the welded beam design: the thickness of the weld (h), the length of the clamped bar (l), the height of the bar (t), and the thickness of the bar (b).

The following vector may be used to represent these parameters: \(\vec {x} = [x_1, x_2, x_3, x_4]\), where \(x_1, x_2, x_3\) and \(x_4\) represent h, l, t and b, respectively. The cost function for this optimization problem has the following mathematical formula:

Consider \(\vec {x}= [x_1 x_2 x_3 x_4] = [hltb]\)

Minimize   \(f(\vec {x}) = 1.10471x^2_1x_2 +0.04811x_3x_4(14.0+x_2)\)

Subject to the following restrictions,

$$\begin{aligned}{} & {} g_1(\vec {x}) = \tau (\vec {x})-\tau _{max}\le 0\\{} & {} g_2(\vec {x}) = \sigma (\vec {x})- \sigma _{max} \le 0\\{} & {} g_3(\vec {x}) = x_1 -x_4 \le 0\\{} & {} g_4(\vec {x}) = 1.10471x^2_1 +0.04811x_3x_4(14.0+x_2) -5.0 \le 0\\{} & {} g_5(\vec {x}) = 0.125- x_1 \le 0\\{} & {} g_6(\vec {x}) = \delta (\vec {x})- \delta _{max} \le 0\\{} & {} g_7(\vec {x}) = P-P_c(\vec {x}) \le 0\\ \end{aligned}$$

Some further elements of this design problem can be identified as follows:

$$\begin{aligned}{} & {} \tau (\vec {x})=\sqrt{((\tau ')^2 + (\tau '')^2)+\frac{2\tau '\tau '' x_2}{2R}}, \tau '=\frac{P}{\sqrt{2}x_1x_2}\\{} & {} \tau ''=\frac{MR}{J}, M=P(L+\frac{x_2}{2}), R=\sqrt{(\frac{x_1+x_3}{2})^2+\frac{x_2^2}{4}}\\{} & {} J=2\left\{ \sqrt{2}x_1x_2\left[ \frac{x_2^2}{12}+(\frac{x_1+x_3}{2})^2\right] \right\} , \sigma (\vec {x})=\frac{6PL}{x_4x_3^2}\\{} & {} \delta (\vec {x})=\frac{4PL^3}{Ex_4x_3^3}, P_c(\vec {x})=\frac{4.013\sqrt{EGx^2_3x_4^6/36}}{L^2}\left( 1-\frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right) \end{aligned}$$

where \(P =6000\) lb, \(\delta _{max} = 0.25\) in, \(L =14\) in, \(G = 12\times 10^6\) psi, \(E =30\times 10^6\) psi, \(\sigma _{max} = 30000\) psi, and \(\tau _{max} = 13600\) psi.

The ranges of the parameters h, l, t, and b were chosen as \(0.1\le x_1\le 2\), \(0.1\le x_2\le 10\), \(0.1\le x_3\le 10\), and \(0.1\le x_4\le 2\), respectively.
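The cost function and constraint set above translate directly into code, as in the following verification sketch (the function names are our own, and the near-optimal design used in the test is the commonly cited solution of this benchmark, not a claim about EHO's output):

```python
import math

# Problem constants as given in the text.
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25

def welded_beam_cost(x):
    """Fabrication cost f(x) with x = [h, l, t, b]."""
    h, l, t, b = x
    return 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)

def welded_beam_constraints(x):
    """Values of g_1..g_7; a design is feasible when all are <= 0."""
    h, l, t, b = x
    tau_p = P / (math.sqrt(2) * h * l)                      # tau'
    M = P * (L + l / 2)
    R = math.sqrt(l**2 / 4 + ((h + t) / 2) ** 2)
    J = 2 * (math.sqrt(2) * h * l * (l**2 / 12 + ((h + t) / 2) ** 2))
    tau_pp = M * R / J                                      # tau''
    tau = math.sqrt(tau_p**2 + tau_pp**2 + 2 * tau_p * tau_pp * l / (2 * R))
    sigma = 6 * P * L / (b * t**2)
    delta = 4 * P * L**3 / (E * b * t**3)
    Pc = (4.013 * math.sqrt(E * G * t**2 * b**6 / 36) / L**2) * (
        1 - (t / (2 * L)) * math.sqrt(E / (4 * G)))
    return [tau - tau_max, sigma - sigma_max, h - b,
            1.10471 * h**2 + 0.04811 * t * b * (14.0 + l) - 5.0,
            0.125 - h, delta - delta_max, P - Pc]
```

Checking feasibility then amounts to `all(g <= 0 for g in welded_beam_constraints(x))`, and the constraint values can feed the static penalty of Eq. (7) directly.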

Table 12 compares the EHO’s best solutions to those produced by other comparative optimization algorithms.

Table 12 Optimization results of the welded beam design problem arrived at by EHO and other optimization methods

The findings presented in Table 12 point out that the proposed EHO achieves the best design for the welded beam structure by locating the optimal cost of around 1.724852, which is the least cost among all the algorithms considered. Table 13 compares the statistical performance of EHO and other optimization methods after 30 separate runs with respect to the best, worst, average, and standard deviation results.

Table 13 Statistical findings of EHO and other optimization techniques for the welded beam design problem

The outcomes of Table 13 point out that EHO outperforms the other algorithms, with the lowest average values in comparison to the rival algorithms. The outcomes of this table also show that EHO once more behaves much better in terms of standard deviation values, as well as attaining lower scores for the worst and best solutions in comparison to the others. This demonstrates EHO’s level of reliability and competence in handling such design problems.

4.7.3 Pressure Vessel Design Problem

This problem is one of the often used benchmark tests for a structural design that uses both continuous and discrete variables (Kannan and Kramer 1994). The objective of this problem is to lower the overall cost of materials, construction, and welding of the cylindrical pressure vessel with hemispherical heads on both ends, as illustrated in Figure 10.

Fig. 10
figure 10

An illustration of the cross-section of the pressure vessel design problem (Kannan and Kramer 1994)

The four optimization design variables for this problem are as follows: the inner radius (R), the length of the cylindrical section of the vessel excluding the head (L), and the thicknesses of the shell (T\(_s\)) and head (T\(_h\)). These variables can be drafted in a vector as follows: \(\vec {x} = [x_1, x_2, x_3, x_4]\), where \(x_1, x_2, x_3\) and \(x_4\) stand for T\(_s\), T\(_h\), R and L, respectively. The variables L and R are continuous, while T\(_h\) and T\(_s\) are discrete values restricted to integer multiples of 0.0625 in. The following is the mathematical formula for this design problem:

Consider \(\vec {x}= [x_1 x_2 x_3 x_4] =[T_sT_hRL]\)

Minimize the function: \(f(\vec {x}) = 0.6224x_1x_3x_4 +1.7781 x_2x^2_3 +3.1661x^2_1x_4+19.84x^2_1x_3\)

This optimization problem is subject to four constraints as described below,

$$\begin{aligned}{} & {} g_1(\vec {x}) = -x_1 +0.0193x_3\le 0\\{} & {} g_2(\vec {x}) = -x_2 +0.00954x_3 \le 0 \\{} & {} g_3(\vec {x}) = -\pi x^2_3x_4 -\frac{4}{3}\pi x^3_3+1296000 \le 0\\{} & {} g_4(\vec {x}) = x_4 -240 \le 0 \end{aligned}$$

where \(0 \le x_1 \le 99\), \(0 \le x_2 \le 99\), \(10 \le x_3 \le 200\), and \(10 \le x_4 \le 200\).
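The formulation above can be coded as follows (a verification sketch with our own function names; the snap helper reflects the restriction of T\(_s\) and T\(_h\) to multiples of 0.0625 in, and the design used in the test is a commonly reported near-optimal solution, not EHO's own result):

```python
import math

def vessel_cost(x):
    """Total cost of materials, construction, and welding, x = [Ts, Th, R, L]."""
    Ts, Th, R, L = x
    return (0.6224 * Ts * R * L + 1.7781 * Th * R**2
            + 3.1661 * Ts**2 * L + 19.84 * Ts**2 * R)

def vessel_constraints(x):
    """Values of g_1..g_4; a design is feasible when all are <= 0."""
    Ts, Th, R, L = x
    return [-Ts + 0.0193 * R,
            -Th + 0.00954 * R,
            -math.pi * R**2 * L - (4.0 / 3.0) * math.pi * R**3 + 1296000.0,
            L - 240.0]

def snap_thickness(t):
    """Round a thickness to the nearest integer multiple of 0.0625 in."""
    return round(t / 0.0625) * 0.0625
```

In a continuous optimizer, the discrete thicknesses are typically handled by snapping the candidate values before evaluation, which is the role of `snap_thickness` here.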

The problem of pressure vessel design is one of the most popular optimization problems that researchers have utilized in various considerations to verify the effectiveness of their evolved optimization algorithms. Table 14 displays a comparison of the optimum outcomes attained by EHO and other optimization algorithms for the pressure vessel design problem.

Table 14 Optimization results of the pressure vessel design problem arrived at by EHO and other optimization methods

As per the optimization cost findings of the pressure vessel design problem in Table 14, EHO was capable of identifying the best design with the lowest possible cost, where it reported the lowest cost of 5885.332774. A comparison of the statistical outcomes between EHO and other rival optimization methods for the pressure vessel design problem over 30 separate runs is presented in Table 15.

Table 15 Statistical findings of EHO and other optimization techniques for the pressure vessel design problem

It may be observed from Table 15 that EHO outperforms other competing algorithms and provides competitive results in terms of Ave and Std values. This demonstrates how effective and reliable the proposed EHO is in solving this design optimization problem.

4.7.4 Tension/compression spring design problem

Another well-known benchmark problem is the design of a tension/compression spring with a schematic diagram given in Figure 11.

Fig. 11
figure 11

An illustration of the schematic structural diagram of a tension/compression spring (Coello 2000)

The reduction of the weight of a tension/compression spring design is the aim of this optimization problem. There are certain constraints on this problem, such as shear stress, surge frequency, and minimum deflection. The diameter of the wire (d), the diameter of the mean coil (D), and the number of active coils (N) are the parameters in this design problem.

The parameters for this problem are implemented as a vector \(\vec {x} = [x_1, x_2, x_3]\), where \(x_1, x_2\) and \(x_3\) stand for the parameters d, D, and N, respectively. As stated before, the purpose of this problem is to reduce the weight f(x), subject to the aforementioned constraints as well as limits on the outside diameter and on the design variables. This optimization problem’s mathematical formula is as follows:

Consider \(\vec {x}= [x_1 x_2 x_3] =[dDN]\)

Minimize the objective function: \(f(\vec {x}) = (x_3 +2)x_2x^2_1\)

This problem is subject to the constraints given next:

$$\begin{aligned}{} & {} g_1(\vec {x}) = 1-\frac{x^3_2x_3}{71785x^4_1}\le 0\\{} & {} g_2(\vec {x}) =\frac{4x^2_2-x_1x_2}{12566(x_2x^3_1-x^4_1)}+\frac{1}{5108x^2_1}-1 \le 0\\{} & {} g_3(\vec {x}) = 1-\frac{140.45x_1}{x^2_2x_3} \le 0\\{} & {} g_4(\vec {x}) = \frac{x_1+x_2}{1.5} -1\le 0 \end{aligned}$$

where \(0.05 \le x_1 \le 2.0\), \(0.25 \le x_2 \le 1.3\) and \(2 \le x_3 \le 15.0\).
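The spring formulation above can likewise be expressed as a pair of plain functions (a verification sketch; the design used in the test is the widely cited near-optimal solution of this benchmark, not a result produced here):

```python
def spring_weight(x):
    """Weight of the tension/compression spring, f(x) = (N + 2) * D * d**2."""
    d, D, N = x
    return (N + 2) * D * d**2

def spring_constraints(x):
    """Values of g_1..g_4; a design is feasible when all are <= 0."""
    d, D, N = x
    return [1 - D**3 * N / (71785 * d**4),
            (4 * D**2 - d * D) / (12566 * (D * d**3 - d**4))
            + 1 / (5108 * d**2) - 1,
            1 - 140.45 * d / (D**2 * N),
            (d + D) / 1.5 - 1]
```

As with the previous problems, these two functions are all a penalty-based optimizer needs in order to evaluate a candidate design.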

Numerous meta-heuristic techniques were extensively used to address the tension/compression spring design problem. Table 16 compares the objective cost and design variable values for the proposed EHO and other competing algorithms for the tension/compression spring design problem.

Table 16 Optimization results of the tension/compression spring design problem arrived at by EHO and other optimization methods

The outcomes shown in Table 16 clearly demonstrate that EHO is able to identify the best solution, 0.012665, when compared to the costs determined by other methods for this design problem. Table 17 presents a summary of the statistical findings of this design problem produced by EHO and other rival methods.

Table 17 Statistical findings of EHO and other optimization techniques for the tension/compression spring design problem

The findings in Table 17 show that EHO outperforms the other optimization techniques by offering better outcomes in terms of best, average, worst, and standard deviation values. This confirms that EHO can be trusted to solve this design problem. In comparison to other algorithms such as SSA, SCA, and RSO, the statistical findings demonstrate that EHO attained extremely competitive outcomes even with fewer iterations. In a nutshell, the general performance of the proposed EHO on the above three engineering problems attests to its reliability and efficiency in solving other complex real-world applications.

4.7.5 Speed reducer design problem

The speed reducer design, whose structure is presented in Fig. 12, is another classical real-world engineering design problem frequently employed as a benchmark for evaluating optimization algorithms. It is a challenging benchmark because seven variables are required to model the problem (Gandomi and Yang 2011).

Fig. 12
figure 12

An illustration of a speed reducer’s structural design (Gandomi and Yang 2011)

The weight to be minimized in this design problem is subject to four types of constraints: the bending stress of the gear teeth, the surface stress, the transverse deflections of the shafts, and the stresses in the shafts (Mezura-Montes and Coello 2005).

These are the seven design variables used in this problem: b, m, z, \(l_1\), \(l_2\), \(d_1\), and \(d_2\). In order, they denote the face width, the module of the teeth, the number of teeth in the pinion, the length of the first shaft between bearings, the length of the second shaft between bearings, the diameter of the first shaft, and the diameter of the second shaft. For this optimization problem, the variables are collected in the vector \(\vec {x}= [x_1 x_2 x_3 x_4 x_5 x_6 x_7]\). This is a mixed-integer programming problem: the third variable, the number of teeth in the pinion (z), takes only integer values, while the remaining variables are continuous. The mathematical formulation of this problem is as follows:

Consider \(\vec {x}\)= [\(x_1\) \(x_2\) \(x_3\) \(x_4\) \(x_5\) \(x_6\) \(x_7\)] = [b m z \(l_1\) \(l_2\) \(d_1\) \(d_2\)]

Minimize   \(f(\vec {x}) = 0.7854x_1x_2^2(3.3333x_3^2+14.9334x_3-43.0934) -1.508x_1(x_6^2+x_7^2)+7.4777(x_6^3+x_7^3)+0.7854(x_4x_6^2+x_5x_7^2)\)

Subject to the following constraints,

$$\begin{aligned}{} & {} g_1(\vec {x}) = \frac{27}{x_1x_2^2x_3}-1 \le 0\\{} & {} g_2(\vec {x}) = \frac{397.5}{x_1x_2^2x_3^2}-1 \le 0\\{} & {} g_3(\vec {x}) = \frac{1.93 x_4^3}{x_2x_6^4x_3}-1 \le 0\\{} & {} g_4(\vec {x}) = \frac{1.93 x_5^3}{x_2x_7^4x_3}-1 \le 0\\{} & {} g_5(\vec {x}) = \frac{[(745(x_4/x_2x_3))^2+16.9\times 10^6]^{1/2}}{110x_6^3}-1 \le 0\\{} & {} g_6(\vec {x}) = \frac{[(745(x_5/x_2x_3))^2+157.5\times 10^6]^{1/2}}{85x_7^3}-1 \le 0\\{} & {} g_7(\vec {x}) = \frac{x_2x_3}{40}-1 \le 0\\{} & {} g_8(\vec {x}) = \frac{5x_2}{x_1}-1 \le 0\\{} & {} g_9(\vec {x}) = \frac{x_1}{12x_2}-1 \le 0\\{} & {} g_{10}(\vec {x}) = \frac{1.5x_6+1.9}{x_4}-1 \le 0\\{} & {} g_{11}(\vec {x}) = \frac{1.1x_7+1.9}{x_5}-1 \le 0 \end{aligned}$$

where the ranges of the seven design variables \(b, m, z, l_1, l_2, d_1\) and \(d_2\) are \(2.6\le x_1\le 3.6\), \(0.7\le x_2\le 0.8\), \(17\le x_3\le 28\), \(7.3\le x_4\le 8.3\), \(7.3\le x_5\le 8.3\), \(2.9\le x_6\le 3.9\) and \(5.0\le x_7\le 5.5\), respectively.
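As a sketch, the objective, the eleven constraints (following the standard formulation of Mezura-Montes and Coello 2005), and a simple rounding repair for the integer variable \(x_3\) can be written as follows. The helper names and the rounding-based mixed-integer handling are illustrative assumptions, not the authors' exact implementation.

```python
import math

def reducer_cost(x):
    """Objective: weight of the speed reducer from the formulation above."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def reducer_constraints(x):
    """The eleven inequality constraints g_i(x) <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2**2 * x3) - 1.0,
        397.5 / (x1 * x2**2 * x3**2) - 1.0,
        1.93 * x4**3 / (x2 * x6**4 * x3) - 1.0,
        1.93 * x5**3 / (x2 * x7**4 * x3) - 1.0,
        math.sqrt((745.0 * x4 / (x2 * x3))**2 + 16.9e6) / (110.0 * x6**3) - 1.0,
        math.sqrt((745.0 * x5 / (x2 * x3))**2 + 157.5e6) / (85.0 * x7**3) - 1.0,
        x2 * x3 / 40.0 - 1.0,
        5.0 * x2 / x1 - 1.0,
        x1 / (12.0 * x2) - 1.0,
        (1.5 * x6 + 1.9) / x4 - 1.0,
        (1.1 * x7 + 1.9) / x5 - 1.0,
    ]

def repair(x):
    """Mixed-integer handling: round the pinion tooth count x3 to an integer."""
    x = list(x)
    x[2] = float(round(x[2]))
    return x
```

The literature-reported design \((3.5, 0.7, 17, 7.3, 7.715320, 3.350541, 5.286654)\) is feasible under these constraints and yields a cost close to the 2994.47 value discussed below.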

A comparison of the best solutions found by EHO and other comparative optimization techniques for the speed reducer design problem is shown in Table 18.

Table 18 Optimization results of the speed reducer design problem arrived at by EHO and other optimization methods

As presented in Table 18, the proposed EHO outperforms the other optimization methods, attaining the minimum cost of approximately 2994.471066 for the speed reducer design problem. A summary of the statistical results of EHO and the other ten optimization algorithms for the speed reducer design problem is displayed in Table 19.

Table 19 Statistical findings of EHO and other optimization techniques for the speed reducer design problem

As per the findings in Table 19, EHO and PSO achieve the best solutions among the competing optimizers, with EHO offering better outcomes than the other comparative algorithms in terms of the best, average, worst, and standard deviation values.

In comparison with other well-known optimization algorithms, the proposed EHO has demonstrated its effectiveness and reliability in tackling four real-world engineering design problems. In terms of both the best cost outcomes and the standard deviation values, it performs better than several well-known optimization techniques such as SSA and SCA. One may therefore conclude that EHO is a suitable optimization technique with considerable potential for solving contemporary real-world problems. In conclusion, the overall effectiveness of the proposed meta-heuristic algorithm in solving the aforementioned four classical engineering problems attests to its credibility and consistency, and it is undoubtedly a good candidate for addressing a variety of complicated real-world situations.

5 Conclusion and future work

This paper introduces the Elk Herd Optimizer (EHO), a novel swarm-based optimization algorithm inspired by the elk herd breeding cycle, aimed at solving a wide range of optimization problems. EHO encompasses a structured optimization loop comprising three key phases: rutting season, calving season, and selection season. These phases emulate the natural behavior of elk herds and facilitate the generation of improved solutions iteratively. During the rutting season, EHO divides the population into groups, each led by a dominant elk and accompanied by followers. The number of followers is determined based on the leader’s fitness, ensuring the emergence of stronger groups. In the calving season, these groups collaborate to find new solutions, simulating the reproduction process among elks. Offspring inherit traits from their parents, with occasional random traits from other elks, fostering diversity. The selection season merges all elk families, including leaders, followers, and offspring, and employs a \(\mu +\lambda \)-survivor selection scheme to choose the fittest individuals.
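The three seasons described above can be sketched as one generation of a simplified EHO-style loop. This is an illustrative reconstruction under stated assumptions (a blend-based calving step and a fixed random-trait probability), not the authors' exact update equations:

```python
import random

def eho_step(population, fitness_fn, n_bulls, bounds, rng):
    """One generation of a simplified EHO-style loop (minimization)."""
    n = len(population)
    dim = len(population[0])

    # Rutting season: the n_bulls fittest elks lead families; the remaining
    # elks (harems) are assigned to families by roulette wheel on bull fitness.
    ranked = sorted(population, key=fitness_fn)
    bulls, harems = ranked[:n_bulls], ranked[n_bulls:]
    costs = [fitness_fn(b) for b in bulls]
    weights = [max(costs) - c + 1e-12 for c in costs]  # smaller cost -> larger weight
    families = [rng.choices(range(n_bulls), weights=weights)[0] for _ in harems]

    # Calving season: each harem member produces an offspring that blends its
    # own traits with its bull's, occasionally borrowing a random elk's trait.
    offspring = []
    for cow, fam in zip(harems, families):
        bull = bulls[fam]
        child = []
        for d in range(dim):
            if rng.random() < 0.1:  # occasional random trait (assumed rate)
                child.append(rng.choice(population)[d])
            else:
                lo, hi = bounds[d]
                v = cow[d] + rng.random() * (bull[d] - cow[d])
                child.append(min(max(v, lo), hi))
        offspring.append(child)

    # Selection season: parents and offspring compete, and the fittest n
    # survive -- the (mu + lambda)-survivor selection scheme, which is elitist.
    return sorted(population + offspring, key=fitness_fn)[:n]
```

Because the selection season is elitist, the best solution found so far can never be lost from one generation to the next.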

EHO’s efficacy is rigorously evaluated on 29 benchmark test functions, with problem sizes of 10, 30, and 50 variables, as well as on four real-world engineering design optimization problems: welded beam design, pressure vessel design, tension/compression spring design, and speed reducer design. Comparative analysis against eleven state-of-the-art optimization algorithms (SSA, SCA, RSO, MFO, HOA, CapSA, AFT, ACO, CSA, CMA-ES, and BA) reveals EHO’s superior performance. Statistical tests, such as the Friedman test and the Holm post-hoc test, validate EHO’s dominance.

The results demonstrate that EHO consistently outperforms competitors across various problem types, including unimodal, simple multimodal, and hybrid benchmark functions. Furthermore, its competitive performance on composite benchmark functions highlights its versatility. When applied to engineering design problems, EHO consistently achieves superior outcomes, showcasing its effectiveness in real-world applications. This substantiates EHO’s ability to strike a balance between exploration and exploitation, making it a potent optimization tool. However, it is important to acknowledge certain limitations. EHO’s performance on constrained optimization problems is promising but requires further investigation, particularly when handling intricate constraints in real-world applications. Additionally, while EHO shows strong potential, its scalability and adaptability to very high-dimensional problems warrant further exploration. Nevertheless, the overall findings underscore EHO’s value as a reliable and efficient optimization algorithm for a wide range of practical scenarios.

As a novel nature-inspired swarm-based optimization algorithm, EHO has promising opportunities for future development. Some of these possible future directions can be summarized as follows:

  • Modified versions of EHO: Optimization problems have different search space types, such as discrete, continuous, binary, and structured. In the initial version of EHO, the operators are designed to tackle optimization problems with a continuous domain; in the future, they should be modified to cope with the requirements of other search spaces.

  • Multi-objective version of EHO: Several optimization problems have multiple objective functions. A new version of EHO is needed to handle the Pareto concepts of multi-objective optimization.

  • Real-world optimization application: The majority of real-world optimization problems are either NP-hard or NP-complete, and are mostly constrained, non-linear, non-convex, and combinatorial. Therefore, new versions of EHO, such as hybrid versions, are required to connect the problem search space tightly with the EHO operators.

  • Parameter-free EHO: The optimization research community nowadays tends to build simple and easy-to-use MH algorithms, so that non-expert users can apply them without deep algorithmic knowledge. It is therefore highly recommended that a future study of EHO devise a proper parameter tuning mechanism to build a parameter-free EHO, where the number of families, B, is set automatically based on the population size.

  • Bull and harem selections of EHO: In EHO, the rutting season selects the bulls and assigns the remaining elks as harems to the bulls’ families based on their fitness values using roulette-wheel selection. The viability of alternative mechanisms can be investigated in the future, such as clustering algorithms, or rank-based/exponential selection in place of roulette-wheel selection to avoid its shortcomings.
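To illustrate this trade-off, the following sketch contrasts roulette-wheel selection with a linear rank-based alternative for picking a bull under minimization. The function names and the pressure parameter `sp` are illustrative assumptions:

```python
import random

def roulette_wheel(costs, rng):
    """Fitness-proportionate selection for minimization, weighting by inverted
    cost. Its pressure depends on the raw cost scale: nearly equal costs give
    almost uniform selection, while one outlier dominates -- the shortcoming
    that rank-based schemes avoid."""
    worst = max(costs)
    weights = [worst - c + 1e-12 for c in costs]
    return rng.choices(range(len(costs)), weights=weights)[0]

def linear_rank(costs, rng, sp=1.5):
    """Linear rank-based selection: pressure depends only on the ordering of
    the costs, with sp in [1, 2] controlling the selection pressure."""
    n = len(costs)
    order = sorted(range(n), key=lambda i: costs[i], reverse=True)  # worst first
    # rank 0 (worst) gets weight 2 - sp; rank n - 1 (best) gets weight sp
    weights = [(2.0 - sp) + 2.0 * (sp - 1.0) * r / (n - 1) for r in range(n)]
    return order[rng.choices(range(n), weights=weights)[0]]
```

Both favor fitter bulls, but only the rank-based variant keeps the selection pressure bounded regardless of how the cost values are scaled.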

  • Survivor selection of EHO: In most previous swarm-based MH methods, the whole parent population is replaced by the offspring population in the next generation. To better mirror the natural phenomenon, EHO adopts the (\(\mu +\lambda \)) survivor selection mechanism. Other survivor selection methods can be further investigated, such as elitism, round-robin tournament, and (\(\mu ,\lambda \))-selection.
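The distinction between the adopted (\(\mu +\lambda \)) scheme and the alternative (\(\mu ,\lambda \)) scheme can be made concrete as follows (a minimal sketch under minimization; the list-of-lists solution encoding and function names are assumptions):

```python
def mu_plus_lambda(parents, offspring, fitness_fn, mu):
    """(mu + lambda): parents and offspring compete together. Elitist -- the
    best solution found so far can never be lost between generations."""
    return sorted(parents + offspring, key=fitness_fn)[:mu]

def mu_comma_lambda(parents, offspring, fitness_fn, mu):
    """(mu, lambda): survivors are drawn only from the offspring (requires
    lambda >= mu). Non-elitist, which can help escape stagnation at the cost
    of possibly discarding the current best."""
    assert len(offspring) >= mu
    return sorted(offspring, key=fitness_fn)[:mu]
```

On the same parent and offspring pools, the two schemes can return different survivors: the plus scheme retains a superior parent, while the comma scheme discards it along with the rest of the parent population.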