Abstract
A population-based optimizer called beluga whale optimization (BWO) models the swimming, foraging, and whale-fall behaviors of beluga whales. Although BWO performs effectively, it retains several deficiencies that need to be addressed, including premature convergence and an imbalance between exploitation and exploration. Furthermore, the standard BWO lacks a transfer parameter for moving from the exploration phase to the exploitation phase, which directly impacts the algorithm’s performance. This work proposes a novel modified BWO (mBWO) optimizer that incorporates an elite evolution strategy, a randomization control factor, and a transition factor between exploration and exploitation. The elite strategy preserves the top candidates for the subsequent generation, helping to generate effective solutions with meaningful differences between them and preventing the search from settling into local optima. The elite random mutation improves the search strategy and offers a stronger exploration ability that prevents stagnation in a local optimum. The mBWO also incorporates a controlling factor to steer the algorithm away from the local optima region during the randomization phase of the BWO, which otherwise traverses the search space using random positions and can stagnate in a locally optimal zone. Gaussian local mutation (GM) acts on the initial position vector to produce a new location; as a result, the majority of mutated individuals are scattered close to the original position, which is comparable to carrying out a local search in a small region. This modification allows the original method to depart the locally optimal zone and increases the optimizer’s precision. Finally, a transition factor (TF) phase makes the agents’ transition from exploration to exploitation gradual with respect to the elapsed iterations.
The mBWO is compared with the original BWO and 10 additional optimizers using 29 CEC2017 functions. Eight engineering problems are addressed by mBWO: the design of welded beams, three-bar trusses, tension/compression springs, speed reducers, the optimal design of industrial refrigeration systems, pressure vessel design, cantilever beam design, and multi-product batch plants. In both constrained and unconstrained settings, mBWO performed better than the other methods.
1 Introduction
Metaheuristic optimization algorithms are commonly used to solve problems in the real world, such as feature selection (Hancer 2020; Hussien et al. 2017, 2020; Hussien and Amin 2022; Mostafa et al. 2022), text clustering (Abualigah et al. 2020, 2021; Assiri et al. 2020) and image segmentation (Elaziz et al. 2021; Hussien et al. 2022). In addition, due to their high effectiveness and minimal computational complexity, they are also employed to solve engineering problems (Onay and Aydemır 2022; Chhabra et al. 2023; Zheng et al. 2023). The optimization method determines the optimal or near-optimal solution for a given situation by minimizing or maximizing the problem’s objective function. Furthermore, metaheuristic algorithms explore the search space by considering random solutions for a global search and exploiting the feasible local solutions in the search space (Abu Khurma et al. 2020; Hussien et al. 2023; Hussien 2022). These two phases are the main steps in metaheuristic algorithms, known as the exploration and exploitation phases (Khurmaa et al. 2021; Abu Khurma and Aljarah 2021; Fathi et al. 2021).
For metaheuristic algorithms (MAs), the balance between exploitation and exploration is essential. If an MA over-emphasizes exploration, it visits many areas of the search space through random operators but converges slowly; if it instead pays more attention to the exploitation phase, it refines candidate solutions quickly but increases the likelihood of becoming trapped in a local optima region. To converge to the global optimum, the balance between these two phases must therefore be appropriate. Many MAs have been put forth in recent years to balance these two phases (Chen et al. 2022; Alzaqebah et al. 2022). However, the no-free-lunch (NFL) theorem (Wolpert and Macready 1997; Singh et al. 2022; Hussien et al. 2022) states that no algorithm can solve every optimization problem. Therefore, studying metaheuristic algorithms for particular problems still has many real-world applications.
MAs can be categorized into four main categories: physics- and mathematics-based algorithms (PMA), human-based algorithms, evolutionary algorithms (EA), and swarm intelligence (SI) (Onay and Aydemır 2022; Chen et al. 2022). In the PMA category, the MA draws inspiration from mathematical models and physical phenomena, such as the sine cosine algorithm (SCA) (Mirjalili 2016), Fick’s law algorithm (FLA) (Hashim et al. 2023), water cycle algorithm (WCA) (Hussien et al. 2022), and multi-verse optimizer (MVO) (Mirjalili et al. 2016). In comparison, the human-based algorithms take cues from human interaction and collaboration, such as the teaching–learning-based optimization algorithm (TLBOA) (Venkata Rao et al. 2011).
In contrast, the biological evolution that occurs in nature through reproduction, mutation, crossover, and selection serves as inspiration for the evolutionary algorithms (EA). In these algorithms, individuals compete and may cooperate throughout evolution to identify the best candidate solution among them; examples include genetic algorithms (GA) (Holland 1992) and evolutionary programming (EP) (Koza 1994). Last, the SI algorithms mimic animal behavior in nature, such as encircling and attacking prey. The essential characteristic of SI is self-organization, with components transforming themselves into useful collective forms; examples include particle swarm optimization (PSO) (Kennedy and Eberhart 1995), crow search algorithm (CSA) (Hussien et al. 2020), gray wolf optimizer (GWO) (Mirjalili et al. 2014), wild horse optimizer (WHO) (Zheng et al. 2022), Remora optimization algorithm (ROA) (Wang et al. 2022), Harris hawks optimization (HHO) (Heidari et al. 2019), snake optimizer (SO) (Hashim and Hussien 2022), moth flame optimization (MFO) (Alazab et al. 2022), Aquila optimizer (AO) (Huangjing et al. 2022), jellyfish search (JS) (Gang et al. 2023), and whale optimization algorithm (WOA) (Mirjalili and Lewis 2016).
Beluga whale optimization (BWO) is a swarm-inspired population-based optimizer that simulates the swimming, hunting, and whale-fall behaviors of beluga whales. Despite the good performance of BWO, it has several drawbacks and still needs improvement, including slow convergence and an imbalance between exploitation and exploration. In this paper, a novel modified BWO (mBWO) optimizer is suggested, which includes an elite evolution strategy, a randomization control factor, and a transition factor between exploitation and exploration.
The contribution of this paper can be summarized as follows:

An improved BWO (mBWO) is suggested which integrates three techniques with BWO. These techniques are elite evolution strategy, randomization control factor, and transition factor between exploitation and exploration.

mBWO is compared with the original BWO and ten other optimizers using twenty-nine CEC2017 functions.

mBWO is employed to solve eight engineering problems, namely the welded beam design problem, three-bar truss design problem, tension/compression spring design problem, speed reducer design problem, optimal design of an industrial refrigeration system, pressure vessel design problem, cantilever beam design, and multi-product batch plant.

mBWO outperformed the other algorithms on both constrained and unconstrained problems.
The rest of this paper is organized as follows: Section 2 introduces enhancements that have been made to many algorithms, whereas Sect. 3 presents the inspiration and mathematical formulation of beluga whale optimization. Sections 4 and 5 propose the novel approach and its results compared with many well-known algorithms in solving constrained and unconstrained problems, whereas Sect. 6 concludes the paper.
2 Related work
Metaheuristic algorithms provide considerable advantages in solving challenging optimization and multi-objective problems, and numerous swarm intelligence techniques have been introduced and used to solve engineering and global optimization issues. However, despite the considerable number of swarm approaches, balancing local and global search without settling on a local optimum remains a limitation of metaheuristic algorithms. These limitations directly impact the convergence, solution accuracy, and optimization efficiency of these algorithms (Wang et al. 2022; Singh et al. 2019).
In order to overcome these limitations, different metaheuristic algorithms have been enhanced and modified using different techniques. Yueting et al. (2019) published a modified moth flame optimization technique (MFO) that incorporates chaotic local search and Gaussian mutation. Using these two techniques, the improved CLSGMFO outperforms the majority of metaheuristics. To overcome its shortcomings, MFO is further enhanced in Yueting et al. (2019) by merging it with Gaussian mutation (GM), Cauchy mutation (CM), and Lévy mutation (LM). Sapre and Mini combined opposition-based learning (OBL) with Cauchy mutation and evolution boundary constraint management to improve the MFO’s functionality. Compared to other algorithms, this approach achieved the best results for 13 of 18 functions tested on a set from CEC2005 (Sapre and Mini 2019).
Similarly, the GWO’s performance was enhanced and applied to global optimization problems by Ibrahim et al. (2018), using chaotic maps, opposition-based learning, and the DE algorithm. Using evolutionary population dynamics (EPD), which removes the weakest individuals, the authors in Saremi et al. (2015) improved the efficiency of GWO; this operator strengthens the exploration of GWO. The EEGWO algorithm, a revised GWO proposed by the authors of Wen et al. (2018), strengthens exploration through a new position-update equation in which a randomly chosen individual from the population guides the search for new candidate solutions.
Recently, and in the same manner, Onay and Aydemır (2022) proposed an improved hunger games search (HGS) optimizer for global optimization and engineering problems. The authors employed chaotic maps in the HGS; ten different chaotic maps were used in three different scenarios to control two random values in the exploitation and exploration phases of the HGS algorithm. The proposed improvement shows superior results on the CEC2017 real-world engineering problems compared with the standard HGS and promising convergence capability compared to other state-of-the-art optimization algorithms. Furthermore, for solving multidimensional engineering global optimization problems, Lin et al. (2022) suggested a whale optimization algorithm with a niching strategy (NHWOA) to speed up the algorithm’s convergence and cover more positions in the search space. The niching technique encourages population variation and prevents early convergence in searching for a globally superior solution. At the same time, a heuristic change is made to the WOA algorithm’s parameters to encourage the search agents’ capacity for exploration throughout evolution. Multiple niches are created from the initialized global population, and each niche is updated separately. The search agents in the niches are redistributed to increase population variety after each constant-interval iteration. The NHWOA was tested on the CEC2014 benchmark and showed promising results compared with other modified versions of the WOA.
A novel slime mould algorithm (SMA) for global optimization in practical engineering issues was introduced by Örnek et al. (2022). However, there is a good chance that the SMA will enter a local optimum rather than converge effectively. To get around this restriction, the authors made use of the sine cosine algorithm’s strength. Additionally, they added an improved sigmoidal function based on the Schwarz lemma to replace the arctanh transformation function. On various engineering problems, including the cantilever beam design, pressure vessel design, three-bar truss, and speed reducer real-world problems, their proposed technique exhibited a good ability to avoid trapping into local optima and faster convergence than the standard sine cosine slime mould algorithm.
Hashim et al. (2021) designed and tested a new metaheuristic algorithm for engineering problems, called the Archimedes optimization algorithm (AOA). The interesting physics concept known as the Archimedes principle inspired the authors as they created this optimization technique: the upward force exerted on an object that is wholly or partially submerged in a fluid is modeled as proportional to the weight of the displaced fluid. It is important to note that the suggested strategy maintains a balance between exploitation and exploration. This characteristic makes it suitable for dealing with complex optimization issues with several local optimal solutions, since it maintains a population of solutions and searches a large area to locate the best overall solution. Furthermore, the proposed algorithm proved to be a powerful optimization tool that balances exploration and exploitation with good convergence speed when evaluated against other optimizers on the CEC2017 benchmark and four engineering design problems.
In line with the works mentioned above, solving real-world engineering problems attracts considerable attention in the research field. Based on the no-free-lunch theorem, it is unrealistic to expect any single metaheuristic method to solve every optimization problem. This challenge motivates us to develop and enhance a metaheuristic algorithm that overcomes the general drawbacks of existing techniques for this type of problem. Table 1 summarizes the previous state-of-the-art techniques used in the literature.
3 Beluga whale optimization (BWO)
The beluga whale (Delphinapterus leucas) is a species of whale that lives in the sea. It is renowned for its adults’ snow-white hue and has earned the nickname “canary of the sea” due to the variety of noises it makes. A beluga whale has a medium-sized, spherical, stocky body that ranges in length from 3.5 to 5.5 m and weighs roughly 1500 kg; belugas can form groups that range in size from 2 to 25, with an average of ten members. The ability to hear and see clearly allows belugas to maneuver and hunt by sound, and due to their blunt teeth, beluga whales typically suck their prey into their mouths. Beluga whales are primarily found in the Arctic and subarctic oceans, including Alaska, northwest Canada, and the waters near Ellesmere Island. Some beluga whales live in aquariums and have beautiful movements and a friendly demeanor (Zhong et al. 2022).
Zhong et al. proposed the beluga whale optimization (BWO) algorithm in 2022 (Zhong et al. 2022), mimicking the natural swimming, hunting, and whale-fall behaviors of beluga whales. BWO has exploration and exploitation phases, just like other metaheuristics. By selecting beluga whales randomly, the exploration phase ensures the design space is searched globally, while the exploitation phase manages local searches. The BWO uses a balance factor (\(B_{\textrm{f}}\)) to transit from the exploration to the exploitation phase: the BWO is in the exploration phase when \(B_{\textrm{f}}>0.5\) and in the exploitation phase when \(B_{\textrm{f}}\le 0.5\), which is calculated as:
$$\begin{aligned} B_{\textrm{f}} = B_0\left( 1-\frac{t}{2T}\right) \end{aligned}$$(1)
where \(B_0\) is a random variable in the range [0, 1] and changes in each iteration. The current iteration and the maximum number of iterations are denoted as t and T, respectively. Figure 1 illustrates the main phases in the BWO algorithm.
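As a concrete illustration (not the authors' code), the balance-factor schedule and the resulting phase choice can be sketched in Python; the \(B_{\textrm{f}} = B_0(1-t/2T)\) schedule follows the base BWO paper (Zhong et al. 2022), and the function names here are ours:

```python
import random

def balance_factor(t, T, rng=random):
    """Balance factor B_f = B_0 * (1 - t / (2*T)); B_0 ~ U(0, 1) is redrawn each iteration."""
    b0 = rng.random()
    return b0 * (1.0 - t / (2.0 * T))

def bwo_phase(bf):
    """BWO explores when B_f > 0.5 and exploits when B_f <= 0.5."""
    return "exploration" if bf > 0.5 else "exploitation"

# At t = T the factor is capped at B_0 / 2 <= 0.5, so late iterations
# are increasingly biased toward exploitation.
```

Because \(B_0\) is redrawn every iteration, the phase choice stays stochastic throughout the run; only its bias shifts over time.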
The exploration phase represents the swimming behavior of beluga whales with various social-sexual postures and actions, for example, when a pair of beluga whales swims closely in a synchronized or mirrored form. The pair swim of beluga whales thus determines the search agents’ positions, which are updated as follows (Zhong et al. 2022):
$$\begin{aligned} X^{t+1}_{i,j} = {\left\{ \begin{array}{ll} X^{t}_{i,p_j} + \left( X^{t}_{r,p_1}-X^{t}_{i,p_j}\right) (1+r_1)\sin (2\pi r_2), &{} \quad j\ \mathrm{even}\\ X^{t}_{i,p_j} + \left( X^{t}_{r,p_1}-X^{t}_{i,p_j}\right) (1+r_1)\cos (2\pi r_2), &{} \quad j\ \mathrm{odd} \end{array}\right. } \end{aligned}$$(2)
where \(X^{t+1}_{i,j}\) is the new position for the \(i^{th}\) search agent in the \(j^{th}\) dimension, and \(p_j\) is a dimension chosen at random from the d dimensions. \(X^{t}_{i,p_j}\) and \(X^{t}_{r,p_1}\) are the current positions of the \(i^{th}\) and \(r^{th}\) search agents, while \(r_1\) and \(r_2\) are random numbers in the range [0,1] used to enhance the randomness of the exploration phase. Furthermore, the fins of the mirrored beluga whales face the surface according to \(\sin {(2\pi r_2)}\) and \(\cos {(2\pi r_2)}\). The updated position depicts the synchronous or mirrored behaviors of beluga whales when swimming or diving, depending on whether the selected dimension is odd or even.
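A simplified sketch of this pair-swim move follows; the helper name is ours, and reducing the per-dimension choice of \(p_j\) to a single random dimension is a deliberate simplification of the paper's formulation:

```python
import math
import random

def bwo_exploration(X, i, rng=random):
    """Simplified pair-swim move: even dimensions use sin(2*pi*r2), odd ones
    use cos(2*pi*r2), mixing agent i with a randomly chosen partner r."""
    n, d = len(X), len(X[0])
    r = rng.randrange(n)        # random partner beluga whale
    pj = rng.randrange(d)       # randomly chosen dimension (simplification of p_j)
    new = list(X[i])
    for j in range(d):
        r1, r2 = rng.random(), rng.random()
        trig = math.sin(2 * math.pi * r2) if j % 2 == 0 else math.cos(2 * math.pi * r2)
        new[j] = X[i][pj] + (X[r][pj] - X[i][pj]) * (1 + r1) * trig
    return new
```

The sine/cosine alternation over dimension parity is what encodes the synchronized versus mirrored swimming described above.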
The exploitation phase is mathematically modeled based on the beluga whales’ hunting habits. Beluga whales share information about the search agents’ positions and the best position of the candidate prey; in addition, beluga whales can forage and move together when other beluga whales are nearby. Moreover, to enhance the convergence of the BWO in the exploitation phase, the Levy flight (LF) is adopted in the BWO as follows (Zhong et al. 2022):
$$\begin{aligned} \mathrm{LF} = 0.05 \times \frac{u \times \sigma }{|v|^{1/\beta }} \end{aligned}$$(3)$$\begin{aligned} \sigma = \left( \frac{\Gamma (1+\beta )\sin (\pi \beta /2)}{\Gamma ((1+\beta )/2)\,\beta \, 2^{(\beta -1)/2}}\right) ^{1/\beta } \end{aligned}$$(4)
where u and v are normally distributed random numbers and \(\beta \) is a constant set to 1.5 by default.
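The Levy-flight step can be sketched as follows; the 0.05 scaling and the \(\sigma \) expression follow the usual Levy-flight formulation adopted in BWO (Zhong et al. 2022), and the function name is ours:

```python
import math
import random

BETA = 1.5  # default constant from the BWO formulation

def levy_step(beta=BETA, rng=random):
    """One Levy-flight step: LF = 0.05 * u * sigma / |v|**(1/beta),
    with u, v drawn from the standard normal distribution."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, 1.0)
    v = rng.gauss(0.0, 1.0)
    return 0.05 * u * sigma / abs(v) ** (1 / beta)
```

Most draws are small local steps, but the heavy tail occasionally produces a long jump, which is exactly what helps the exploitation phase escape shallow basins.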
Finally, the exploitation phase can be addressed as follows:
$$\begin{aligned} X^{t+1}_{i} = r_3 X^{t}_{\textrm{best}} - r_4 X^{t}_{i} + C_1 \cdot \mathrm{LF} \cdot \left( X^{t}_{r}-X^{t}_{i}\right) \end{aligned}$$(5)
where \(X^{t+1}_{i}\) is the new position of the \(i^{th}\) search agent. \(X^{t}_{r}\) and \(X^{t}_{i}\) are the current positions of a random search agent and the \(i^{th}\) search agent. The best search agent’s position is represented as \(X^{t}_{\textrm{best}}\), and \(r_3\) and \(r_4\) are random variables in the range [0,1]. \(C_1\) determines the strength of the jump in the LF (\(C_1 = 2r_4(1-t/T)\)).
The whale fall is a natural phenomenon on the seabed. Rather than always evading the dangers around them, beluga whales may be attacked by other creatures such as killer whales, or by humans, and end up plummeting into the ocean depths; the dead body then becomes food for other sea creatures. To model this behavior, a whale fall probability (\(W_{\textrm{f}}\)) is introduced and expressed as follows (Zhong et al. 2022):
$$\begin{aligned} W_{\textrm{f}} = 0.1 - 0.05\,\frac{t}{T} \end{aligned}$$(6)
The risk of a whale falling is reduced from 0.1 in the first iteration to 0.05 in the last iteration, indicating that the danger posed by beluga whales lessens as they get closer to their food source during the optimization process.
It is crucial to note that the whale fall directly impacts the BWO’s search agent count, which must remain constant. Therefore, the BWO uses the updated position, the beluga whale positions, and the step size of whale falls to keep the number of search agents (population size) fixed:
$$\begin{aligned} X^{t+1}_{i} = r_5 X^{t}_{i} - r_6 X^{t}_{r} + r_7 X_{\textrm{step}} \end{aligned}$$(7)$$\begin{aligned} X_{\textrm{step}} = (u_b - l_b)\exp \left( -\frac{C_2 t}{T}\right) \end{aligned}$$(8)
where \(r_5\), \(r_6\), and \(r_7\) are random numbers bounded by 0 and 1, and \(X_{\textrm{step}}\) is the step size of the whale fall. \(C_2\) is the step factor, which is connected to the likelihood of whale falls and the population size (\(C_2=2W_{\textrm{f}} \times n\)). Lower and upper boundaries are denoted as \(l_b\) and \(u_b\), respectively.
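A sketch of the whale-fall quantities follows, assuming the linear \(W_{\textrm{f}}\) schedule (0.1 down to 0.05) and the exponential step-size formula of the base BWO paper (Zhong et al. 2022); the function names are ours:

```python
import math

def whale_fall_probability(t, T):
    """W_f decreases linearly from 0.1 (first iteration) to 0.05 (last iteration)."""
    return 0.1 - 0.05 * t / T

def whale_fall_step(t, T, n, lb, ub):
    """Step size X_step = (ub - lb) * exp(-C2 * t / T), with C2 = 2 * W_f * n."""
    c2 = 2.0 * whale_fall_probability(t, T) * n
    return (ub - lb) * math.exp(-c2 * t / T)
```

The step size starts at the full width of the search range and shrinks rapidly as iterations accumulate, mirroring the claim that the whale-fall danger lessens near the food source.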
4 Proposed BWO method
The main drawback of the standard BWO algorithm is falling into local optima because of the weakness of its search technique. In this paper, we propose an enhanced version of the BWO, called mBWO, that strengthens the standard BWO’s search capability to avoid trapping into local optima and achieve a good balance between the exploration and exploitation phases. In this study, the BWO algorithm is enhanced by taking advantage of the global and local search strategies of evolution-based and swarm-based algorithms while balancing exploration and exploitation. To achieve this purpose, we adopt several techniques. Using the elite evolution strategy, efficient solutions are generated with meaningful variations between them to avoid falling into local optima. Additionally, we employ a controlling factor for the randomization phase inside the BWO to force the algorithm away from the local optima region. Finally, because the standard BWO is missing a transfer parameter for moving from the exploration phase to the exploitation phase, which directly affects the algorithm’s stability, we add a transition factor to control the transition process.

Elite evolution strategy
Population-based MAs start from randomly generated solutions. During the optimization process, these algorithms update the solutions’ positions by generating new solutions and keep the best solution until a better one is found. The best solution found so far may only be a local best, which means the search has fallen into a local optimum; in the worst case, a single individual dominates the entire search, so the algorithm effectively uses an individual search mechanism and ignores fitness-landscape information. The elite individual is the solution with the highest fitness at the population level. The elite approach, which is used by many MA algorithms, preserves the top candidates for the following generation of searches and conducts memetic investigations around the elite candidates; the elite individual suggests a greater likelihood that the global optimum lies near its location in the search space (Pei 2020). For these reasons, we adopt the elite evolution strategy in the proposed improvements.

1.
Elite random mutation: The elite random mutation enhances the algorithm’s search strategy and provides a stronger exploration ability that avoids stagnation in the local optimum. Here, a new solution is generated that is unexpected yet related to the existing solutions; this provides the advantages of exploration without excessive randomness. The generated solution \(X_{\textrm{new}}\) based on elite random mutation is given by:
$$\begin{aligned} X_{\textrm{new}}&= X_{\textrm{center}}+ R_G \times \left( X_{\textrm{center}} - X_{\textrm{best}}\right) \end{aligned}$$(9)$$\begin{aligned} X_{\textrm{center}}&= \frac{U - L}{2} \end{aligned}$$(10)where \(X_{\textrm{center}}\) is the centered position limited by the upper bound U and the lower bound L, and \(X_{\textrm{best}}\) represents the best position in the population space, while \(R_G\) is a number generated by a Gaussian probability distribution (\(\mu =0, \sigma =1\)).

2.
Gaussian local mutation: An optimization technique called Gaussian mutation (GM) (Luo et al. 2018) acts on the initial position vector to produce a new location by using a random number that follows the normal distribution. As a result, most mutated operators are dispersed close to the original location, which is analogous to conducting a local search in a constrained area. This mutation improves the optimizer’s accuracy and enables the original algorithm to leave a locally optimal zone. A few operators land far from the current location, increasing population variety and enabling a faster and more accurate search for viable locations, which accelerates the optimizer’s convergence (Song et al. 2021). The Gaussian mutation thus provides localized mutations for new solutions. This is given by:
$$\begin{aligned} X_{\textrm{new}}&= X_N + R_G \times \left( X_N - X_t\right) \end{aligned}$$(11)$$\begin{aligned} X_N&= {\left\{ \begin{array}{ll} X_{\textrm{best}2}, &{} \quad \text {if}\ r_1< 0.5 \ \text {and} \ r_2<0.5\\ X_{\textrm{best}3}, &{} \quad \text {if}\ r_1<0.5 \ \text {and} \ r_2>0.5 \\ X_{\textrm{best}}, &{}\quad \text {otherwise} \end{array}\right. } \end{aligned}$$(12)where \(R_G\) is a number generated by a Gaussian probability distribution (\(\mu =0\), \(\sigma =0.333\)), \(X_t\) is the location at iteration t, \(X_{\textrm{best}2}\) and \(X_{\textrm{best}3}\) are the second- and third-best positions, and \(r_1\) and \(r_2\) are random numbers in the range [0,1].
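The two elite mutations (Eqs. 9–12) can be sketched as follows; list-based vectors and a single Gaussian draw per mutation are our implementation assumptions, not prescriptions from the paper:

```python
import random

def elite_random_mutation(x_best, lower, upper, rng=random):
    """Eqs. (9)-(10): perturb the centre of the search range relative to
    the elite solution, with R_G ~ N(0, 1)."""
    x_center = [(u - l) / 2.0 for l, u in zip(lower, upper)]
    rg = rng.gauss(0.0, 1.0)
    return [c + rg * (c - b) for c, b in zip(x_center, x_best)]

def gaussian_local_mutation(x_t, elites, rng=random):
    """Eqs. (11)-(12): local mutation around one of the three best solutions.
    `elites` holds [best, second best, third best]; R_G ~ N(0, 0.333)."""
    r1, r2 = rng.random(), rng.random()
    if r1 < 0.5 and r2 < 0.5:
        x_n = elites[1]          # second-best solution
    elif r1 < 0.5 and r2 > 0.5:
        x_n = elites[2]          # third-best solution
    else:
        x_n = elites[0]          # best solution
    rg = rng.gauss(0.0, 0.333)
    return [xn + rg * (xn - xt) for xn, xt in zip(x_n, x_t)]
```

The small \(\sigma \) in the local mutation keeps most offspring near the chosen elite, matching the local-search interpretation above, while the unit-variance elite random mutation produces the occasional larger exploratory jump.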


Control randomization
There is a chance that the generated population will concentrate in a particular area of the search space, since the randomization process explores the search space using random positions, which could cause a fall into the local optima region. Controlling the randomization therefore plays a vital role in avoiding stagnation in a local optimum. Here, we use a simple way to control randomization:
$$\begin{aligned} \hbox {ran }= 2 \times \hbox {rand} - 1 \end{aligned}$$(13)where rand represents a random number bounded by 0 and 1. As a result, the ran operator takes both negative and positive values in the interval \([-1, 1]\), covering the given search space well.

Transition Factor (TF) phase
The BWO algorithm lacks a transfer parameter governing the move out of the exploration phase, which affects the search strategy’s stability and can waste time. To address this issue, the search method in both phases includes a transfer factor that gradually transitions the agents from exploration to exploitation as the iterations elapse; TF is given by:
$$\begin{aligned} \hbox {TF} = \exp \left( \frac{t}{T}\right) \end{aligned}$$(14)where t and T are the current iteration and the maximum number of iterations, respectively.
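The two scalar controls introduced above (Eqs. 13–14) are straightforward to implement; this sketch uses the TF form exactly as printed in Eq. (14), and the function names are ours:

```python
import math
import random

def control_randomization(rng=random):
    """Eq. (13): ran = 2 * rand - 1, a signed perturbation in [-1, 1]."""
    return 2.0 * rng.random() - 1.0

def transition_factor(t, T):
    """Eq. (14) as printed: TF = exp(t / T)."""
    return math.exp(t / T)
```

Both quantities are recomputed at every position update, so they act as per-move modulators of the random perturbation term rather than fixed constants.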
Finally, the techniques mentioned above are employed in determining the BWO’s new positions in the exploration, exploitation, and whale fall phases by taking the absolute value of the position difference and adding the control randomization (ran) and TF parameters. The new equations for the exploration, exploitation, and whale fall phases are as follows:

Exploration phase
where D is the diversity operator in the interval \([-1,1]\).

Exploitation phase
$$\begin{aligned} X_i^{t+1}{} & {} = R_3 \times X_{\textrm{best}} - r_4 \times X_i^t+D \times TF \times ran \nonumber \\{} & {} \quad \times C_1 \times L_{\textrm{f}} \times \left| X_r^t - X_i^t\right| \end{aligned}$$(16) 
Whale fall phase
In this phase, high randomization without the guidance of the best solution leads to high diversity but an unstable search mechanism, causing falls into local optima and wasted time. Here, we simplify this phase so that it achieves sufficient diversity with controlled randomization, given by:
$$\begin{aligned} X_i^{t+1} =X_r^t +D \times TF \times ran \times \left| X_r^t - X_i^t\right| \end{aligned}$$(17)
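Putting the pieces together, the modified exploitation and whale-fall moves (Eqs. 16–17) can be sketched as follows; drawing D, ran, and the random coefficients once per move (rather than per dimension) is our implementation assumption, and the Levy value `lf` is passed in as a precomputed scalar:

```python
import math
import random

def mbwo_exploitation(x_i, x_r, x_best, t, T, lf, rng=random):
    """Modified exploitation move (Eq. 16), per dimension:
    x' = r3 * x_best - r4 * x_i + D * TF * ran * C1 * lf * |x_r - x_i|."""
    r3, r4 = rng.random(), rng.random()
    d = 2.0 * rng.random() - 1.0       # diversity operator D in [-1, 1]
    ran = 2.0 * rng.random() - 1.0     # control randomization, Eq. (13)
    tf = math.exp(t / T)               # transition factor as printed in Eq. (14)
    c1 = 2.0 * r4 * (1.0 - t / T)      # Levy jump strength from the base BWO
    return [r3 * b - r4 * xi + d * tf * ran * c1 * lf * abs(xr - xi)
            for xi, xr, b in zip(x_i, x_r, x_best)]

def mbwo_whale_fall(x_i, x_r, t, T, rng=random):
    """Simplified whale-fall move (Eq. 17): x' = x_r + D * TF * ran * |x_r - x_i|."""
    d = 2.0 * rng.random() - 1.0
    ran = 2.0 * rng.random() - 1.0
    tf = math.exp(t / T)
    return [xr + d * tf * ran * abs(xr - xi) for xi, xr in zip(x_i, x_r)]
```

Note how both moves perturb with the magnitude \(|X_r^t - X_i^t|\) while the sign comes entirely from D and ran, which is what the absolute-value reformulation above achieves.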
Algorithm 1 presents the pseudocode of the modified beluga whale optimization (mBWO) algorithm.
5 Experimental results and discussion
This study uses the CEC’17 test suite to evaluate the proposed mBWO. CEC’17 contains 30 functions that represent 30 minimization problems and are widely used to evaluate metaheuristic algorithms (Mirjalili et al. 2014; Hashim et al. 2021; Mirjalili and Lewis 2016). Furthermore, eight engineering design problems are used for evaluation purposes. The main objective of using these functions is to evaluate the search capability of the proposed method and its convergence behavior. Because these algorithms are stochastic, the experiments in this study are run 30 times to account for the randomness of the algorithms and the variation of the results across runs. The parameter setup is given in Table 2. Several state-of-the-art and newly developed algorithms are used in the experiments for comparison with the proposed mBWO: BOA (Arora and Singh 2019), HHO (Heidari et al. 2019), WOA (Mirjalili and Lewis 2016), SCA (Mirjalili 2016), AEO (Zhao et al. 2020), BSA (Civicioglu 2013), SCSO (Seyyedabbasi and Kiani 2022), COA (Boveiri and Elhoseny 2020), SAO (Salawudeen et al. 2021), CHIO (Al-Betar et al. 2021), WSO (Braik et al. 2022), and SMA (Li et al. 2020). For fairness, all compared algorithms are run on the same hardware system to solve the CEC’17 test suite, and all experiments use a maximum of 1000 iterations. All algorithms are implemented in MATLAB R2021 on a 64-bit Windows 8.1 operating system. Table 3 includes the name of each function, its type, and its optimal value. The common parameters among these algorithms are a population size of 30, a maximum of 1000 iterations, and 30 independent runs.
5.1 Experimental series 1: CEC2017
The CEC2017 (Wu et al. 2017) consists of a set of functions that represent a set of optimization problems. These functions are characterized by diversity, complexity, and dynamism. They are commonly used for the evaluation of the proposed optimization algorithms. In this study, they are used to evaluate the proposed mBWO. Furthermore, they can describe the exploration and exploitation behavior of the optimization algorithm.
Table 3 shows the 30 functions, which are distributed into four sets as follows: F1 to F3 form the unimodal set, F4 to F10 the multimodal set, F11 to F20 the hybrid set, and F21 to F30 the composition set. F2 is not used in the evaluation process; thus, 29 functions are used to evaluate the proposed mBWO and the other algorithms. Table 3 also shows that the search range for all of the test functions is from \(-100\) to 100 and the dimension equals 30 (Figs. 2, 3, and 4).
As given in Table 4, the results prove the efficiency of the proposed mBWO in optimizing the functions of the CEC2017 test suite. According to the rank results, mBWO outperforms the other compared algorithms across 25 functions: F1, F4, F5, F6, F7, F8, F9, F10, F11, F12, F13, F15, F16, F17, F19, F20, F21, F22, F23, F24, F25, F27, F28, F29, and F30. The HHO algorithm outperformed the mBWO on F3, and the COA algorithm achieved the first rank on F14, F18, and F26. The algorithms with the weakest performance across these functions are BWO, AEO, SAO, WOA, and BOA. Figures 5, 6, and 7 show the boxplots of the mBWO and the other algorithms; these graphically represent the values recorded in Table 4, with each boxplot showing the minimum, maximum, and mean values for an algorithm.
To validate the mBWO, Table 5 shows the statistical results of the nonparametric Wilcoxon rank sum test. It gives an indication of the significance of the difference between the mBWO and other algorithms. According to the results in Table 5, the mBWO is significantly better than BWO, AEO, BOA, BSA, SAO, SCA, SMA, and WOA across all the test functions. On the other hand, mBWO is significantly better than COA, HHO, and SCSO for the majority of the functions.
Figures 2, 3, and 4 show the convergence behavior of the proposed mBWO and the other compared algorithms applied to the CEC2017 functions (F1–F10), (F11–F20), and (F21–F30), respectively. For functions F6, F9, F11, F13, F15, F16, F17, F19, F20, F21, F22, F23, F24, F29, and F30, the mBWO shows superior convergence capability. This is because it is capable of balancing exploration and exploitation, which is reflected not only in the faster convergence achieved by the mBWO but also in the optimal solutions found by the search process. The COA achieves the best convergence behavior for F14, F18, and F26.
For further validation of the mBWO results on the functions of the CEC2017 test suite, an extensive statistical analysis is performed during the search process by recording the ratios of exploration and exploitation. Figures 8, 9, and 10 show the exploration and exploitation ratios obtained by mBWO while optimizing the CEC2017 functions. It can be seen that mBWO achieves a high rate of exploration at the beginning of the search process, while it adaptively shifts to exploitation in the later iterations. However, on F1, F6, F7, F8, F11, F14, F15, F16, F17, F18, F19, F20, F21, F23, and F24, the mBWO performed more exploration than exploitation to reach the best region of the search space that contains the global optimal solutions; it follows the same behavior on F4, F12, F13, F22, F26, and F27. Overall, the proposed mBWO exhibits a dynamic behavior during the search process.
5.2 Experimental series 2: engineering problems
In this study, the proposed optimizer is tested on eight different constrained engineering problems, namely:

1. Welded beam design problem
2. Three-bar truss design problem
3. Tension/compression spring design problem
4. Speed reducer design problem
5. Optimal design of industrial refrigeration system
6. Pressure vessel design problem
7. Cantilever beam design
8. Multiproduct batch plant
5.2.1 Welded beam design problem
This is an engineering design problem that is commonly used to evaluate new or modified optimizers. Its main objective is to minimize the cost of the welded beam design (Coello Coello 2000). The problem relies on four variables: the cut variable \(m(x_1)\), the bending stress in the beam \(n(x_2)\), the buckling load on the bar \(s(x_3)\), and the end deflection of the beam \(t(x_4)\). The first and second variables are denoted \(\lambda \) and \(\theta \), respectively, and the third and fourth variables are denoted \(\phi \) and \(k_d\), respectively.
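As a minimal sketch, the welded beam cost function can be written as it is commonly stated in the literature following Coello Coello (2000); the variable names h, l, t, b below are the conventional ones and are an assumption here, not the paper's own notation.

```python
# Welded beam cost as commonly given in the literature (assumed form).
def welded_beam_cost(x):
    h, l, t, b = x  # weld thickness, weld length, bar height, bar thickness
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

# A frequently reported near-optimal design (cost is approximately 1.72):
x_best = (0.2057, 3.4705, 9.0366, 0.2057)
cost = welded_beam_cost(x_best)
```

The full problem additionally constrains shear stress, bending stress, buckling load, and end deflection, which are omitted here for brevity.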
Table 6 shows that the mBWO achieves the second rank after the AEO, which comes in the first rank. It also has a very small standard deviation value, which indicates the robustness of the algorithm. Figure 11 shows that most of the algorithms have similar convergence behavior. However, the mBWO can reach positions in the search space that contain the minimum values. The figure also shows that the AEO suffers from premature convergence and entrapment in local minima. Table 7 shows that the mBWO achieves competitive results regarding the four variables of this problem in comparison with other algorithms. Table 22 shows that the mBWO has a significant difference compared with all other algorithms except the AEO algorithm.
5.2.2 Three-bar truss design problem
This engineering design problem is based on the cross-sectional areas of bars 1 and 3 and the area of bar 2. Also, some design conditions influence the manufacturing process, such as stress, deflection, and buckling. As an optimization problem, its main objective is to minimize the total weight of the structure. More details of the three-bar truss design problem are provided in Eskandar et al. (2012). Mathematically, it is described as follows:
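A sketch of the weight function and the three stress constraints as they are usually given in the literature following Eskandar et al. (2012), with the standard parameter values P = 2 kN/cm², σ = 2 kN/cm², and l = 100 cm; these values are an assumption here, not reproduced from the paper.

```python
# Three-bar truss model in its commonly cited form (assumed parameters).
import math

P, SIGMA, L = 2.0, 2.0, 100.0  # load, allowable stress, bar length

def truss_weight(x1, x2):
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L

def truss_constraints(x1, x2):
    """Each value must be <= 0 for a feasible design."""
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - SIGMA
    return g1, g2, g3

# A frequently reported near-optimal design (weight close to 263.9):
x1, x2 = 0.7887, 0.4082
weight = truss_weight(x1, x2)
g = truss_constraints(x1, x2)
```

Any of the compared metaheuristics minimizes `truss_weight` subject to the three constraints returning non-positive values.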
Table 8 shows that the proposed mBWO achieves competitive results compared with other algorithms. It ties with the AEO algorithm for the second rank after the BSA, which comes in the first rank. Also, in Table 9, the mBWO achieves promising values regarding the first and second variables of the problem. Table 22 shows that mBWO has meaningful differences compared with all other algorithms except the AEO algorithm. Figure 12 shows that mBWO has a promising convergence behavior, where the curves reach very small values at the latest iterations. As opposed to the mBWO, the SAO suffers from premature convergence, which indicates that it is entrapped in local minima. Also, the WOA and AEO converge to points that are not the optimal solutions of the problem.
5.2.3 Tension/compression spring design problem
Tension/compression spring is a design problem that strives to obtain the minimum weight of the spring. Three independent variables form the basis of this problem: the wire diameter (w) or \(x_1\), the coil diameter (c) or \(x_2\), and the number of active coils (a) or \(x_3\). The mathematical model of this problem is described in Sadollah et al. (2013). The following set of equations describes the mathematical model of the tension/compression spring problem.
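A sketch of the spring weight and its four constraints in the form commonly cited alongside Sadollah et al. (2013); the coefficients below are from that standard formulation and are an assumption here, not transcribed from the paper's equations.

```python
# Tension/compression spring model in its commonly cited form.
def spring_weight(d, D, N):
    """d: wire diameter, D: coil diameter, N: number of active coils."""
    return (N + 2.0) * D * d ** 2

def spring_constraints(d, D, N):
    """Each value must be <= 0 for a feasible design."""
    g1 = 1.0 - (D ** 3 * N) / (71785.0 * d ** 4)                 # deflection
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)                       # shear stress
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)                         # surge frequency
    g4 = (d + D) / 1.5 - 1.0                                     # outer diameter
    return g1, g2, g3, g4

# A frequently reported near-optimal design (weight close to 0.012665):
w = spring_weight(0.051689, 0.356718, 11.288966)
g = spring_constraints(0.051689, 0.356718, 11.288966)
```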
Table 10 shows that the mBWO outperforms the other algorithms and comes in the first rank. It also has a very small standard deviation, which means that it is a robust algorithm. In Table 11, the mBWO has achieved competitive results for its variables. Table 22 shows that mBWO has a meaningful difference compared with other algorithms. Figure 13 shows that the mBWO and other algorithms have similar behavior, reaching very small values at the latest iterations of the optimization process. However, the BOA and WOA exhibit premature convergence, which indicates entrapment in local minima.
5.2.4 Speed reducer design problem
This is an optimization problem that minimizes the weights of different design instruments. It uses constraints in the optimization process such as gear teeth, bending stress, deflection ratios, and surface and shaft stresses (Mezura-Montes and Coello Coello 2005). The speed reducer is based on seven design variables for minimizing the weight: \(x_1\), \(x_2\), \(x_3\), \(x_4\), \(x_5\), \(x_6\), and \(x_7\), which stand for the face width, the teeth module, the number of pinion teeth, the length of the first shaft between the bearings, the length of the second shaft between the bearings, and the diameters of the first and second shafts, respectively.
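The weight function in its commonly cited form following Mezura-Montes and Coello Coello (2005) can be sketched as below; the coefficients are those of the standard formulation and are an assumption here, since the paper's own equations are not reproduced in this section.

```python
# Speed reducer weight in its commonly cited form (constraints omitted).
def speed_reducer_weight(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2 ** 2
            * (3.3333 * x3 ** 2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6 ** 2 + x7 ** 2)
            + 7.4777 * (x6 ** 3 + x7 ** 3)
            + 0.7854 * (x4 * x6 ** 2 + x5 * x7 ** 2))

# A frequently reported near-optimal design (weight close to 2994.5):
w = speed_reducer_weight((3.5, 0.7, 17, 7.3, 7.715, 3.350, 5.287))
```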
Table 12 shows that the mBWO comes in the first rank with a very small standard deviation value. It also obtains competitive values regarding the seven variables, as shown in Table 13. Table 22 shows that mBWO has a significant difference compared with other algorithms. Figure 14 shows a great difference in the convergence behavior among the algorithms. However, it appears that the mBWO achieves the best convergence by reaching very small values at the latest iterations of the search process.
5.2.5 Optimal design of industrial refrigeration system
The cooling system uses a refrigerant to cool a hot stream. This process is performed in three stages. Each stage has a heat exchanger on one side and a boiling cooler on the other.
The amount of current being pumped depends on the surface area of the heat exchanger. Also, at the beginning of each stage, the temperature required for boiling the refrigerant is determined. The stream flow rate and fluid temperature are key factors in designing a cooling system. The optimal cooling design needs to determine the areas of the three surfaces of the liquid cooling heat exchanger. The fluid, with a specific heat of 4186.8 J/kg \(^{\circ }\)C, is pumped at a rate of 10,800 kg per hour between 10 \(^{\circ }\)C and 55 \(^{\circ }\)C. The unit operates for a minimum of 300 days a year. The main parameters of the refrigeration system are the latent heat of the refrigerant (\(\lambda \)), 232,600 J/kg, and the overall heat transfer coefficient of the stages, 1130 J/s m\(^2\) \(^{\circ }\)C. The main objective in the design of the refrigeration system is to minimize the cost of the three stages, as in Eq. (22).
Cost is the variable for the capital cost of the heat exchange surface area, and the term on the right-hand side is the refrigerant operating cost. The target of the optimization process is to achieve competitive values of the design variables, such as the fluid temperatures. The design variables determine the area of the heat exchanger and the liquid refrigerant addition rate in each stage. The temperature of the liquid refrigerant in each stage is:
The temperature of the input fluid to the system (\(Temp_0\)) is 10 \(^{\circ }\)C and the temperature of the output fluid from the system (\(Temp_3\)) is 55 \(^{\circ }\)C. The output temperature at a given stage must be greater than the refrigerant temperature. Therefore, the conditions on the design variables are:
The log-mean temperature at stage \(i\) is:
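The stage equation referenced above is not reproduced in this section; the sketch below uses the standard definition of the log-mean temperature difference, which is an assumption about the intended formula.

```python
# Standard log-mean temperature difference (LMTD) for one stage.
import math

def lmtd(dt_in, dt_out):
    """Log-mean of the temperature differences at the two ends of a stage."""
    if math.isclose(dt_in, dt_out):
        return dt_in  # limit as the two end differences coincide
    return (dt_in - dt_out) / math.log(dt_in / dt_out)

# Example: end differences of 30 and 10 degrees give an LMTD of about 18.2.
print(lmtd(30.0, 10.0))
```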
The energy balance over refrigerant is:
\(\lambda _i\) is the penalty factor.
where \(F_i\) is the ratio of heat flow, J/s.
The energy balance over the fluid is:
where kl is the specific heat of the fluid, J/kg \(^{\circ }\)C, and V is the hot fluid pump rate, kg/h.
Table 14 shows that the proposed mBWO obtains the first rank compared with other algorithms. Also, as shown in Table 15, the mBWO obtains the smallest values for most of the problem variables compared with other algorithms. Table 22 shows that mBWO has meaningful results in comparison with all other algorithms. Figure 15 shows that the algorithms have different convergence behavior toward the best solution. The mBWO shows a promising convergence by reaching the minimum values at the latest iterations of the search process.
5.2.6 Pressure vessel design problem
The pressure vessel design problem (Kannan and Kramer 1994) is another engineering design problem used in this study to evaluate the proposed mBWO against other algorithms. This is an optimization problem that aims to achieve the minimum cost of the pressure vessel design. The cost depends on four design variables, \(x_1\), \(x_2\), \(x_3\), and \(x_4\), which refer to the shell thickness, head thickness, inner radius, and cylinder length, respectively. The mathematical model of this problem is described extensively in Hashim et al. (2021). The following set of equations describes the mathematical model of the pressure vessel design problem.
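A sketch of the cost function and constraints in the form commonly cited alongside Kannan and Kramer (1994); the coefficients are those of the standard formulation and are an assumption here, not transcribed from the paper.

```python
# Pressure vessel model in its commonly cited form.
import math

def vessel_cost(x1, x2, x3, x4):
    """x1: shell thickness, x2: head thickness, x3: inner radius, x4: length."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def vessel_constraints(x1, x2, x3, x4):
    """Each value must be <= 0 for a feasible design."""
    g1 = -x1 + 0.0193 * x3                     # shell thickness vs. radius
    g2 = -x2 + 0.00954 * x3                    # head thickness vs. radius
    g3 = (-math.pi * x3 ** 2 * x4
          - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0)  # volume requirement
    g4 = x4 - 240.0                            # length limit
    return g1, g2, g3, g4

# A frequently reported near-optimal design (cost close to 6059.7):
cost = vessel_cost(0.8125, 0.4375, 42.0984, 176.6366)
```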
Table 16 shows that mBWO has the first rank with a minimum mean value of 179.1527. Figure 16 shows that all algorithms have nearly similar behavior. However, the mBWO reaches the minimum values in the latest iterations. Table 17 shows the values of all the variables of this problem obtained by all the algorithms. It is seen that mBWO achieves competitive values for \(x_1\), \(x_3\), and \(x_4\). Also, it has the second smallest value for \(x_2\). Table 22 shows that the mBWO achieves a significant difference against all algorithms except the SMA algorithm.
5.2.7 Cantilever beam design
The proposed mBWO is applied to solve the cantilever beam design (CBD) problem, which has five parameters that need to be determined during the optimization process. The mathematical representation of this problem can be formulated as:
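A sketch of the CBD weight and its single constraint in the form commonly cited in the literature; the coefficients are those of the standard formulation and are an assumption here, since the paper's equations are not reproduced in this section.

```python
# Cantilever beam design (CBD) in its commonly cited form.
def cbd_weight(x):
    """x1..x5: heights of the five hollow square blocks."""
    return 0.0624 * sum(x)

def cbd_constraint(x):
    """Must be <= 0 for a feasible design."""
    x1, x2, x3, x4, x5 = x
    return (61.0 / x1 ** 3 + 37.0 / x2 ** 3 + 19.0 / x3 ** 3
            + 7.0 / x4 ** 3 + 1.0 / x5 ** 3 - 1.0)

# A frequently reported near-optimal design (weight close to 1.3400):
x_best = (6.016, 5.309, 4.494, 3.502, 2.153)
w = cbd_weight(x_best)
```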
In Table 18, the performance results of the mBWO for the CBD engineering problem are given. As per Table 18, the mBWO obtains the first rank with the smallest STD compared to the other methods. In Table 19, the mBWO achieves competitive results regarding the problem's variables. Figure 17 shows the convergence behavior of the mBWO and the other methods over 1000 iterations for the CBD, where the mBWO reaches the minimum values at the latest iterations.
5.2.8 Multiproduct batch plant problem
In the multiproduct batch plant problem, each customer announces an order before production starts, and each customer's order represents one product. The batch size of each order remains constant during production. Due and release dates are assigned to each order. Each stage has its own processing units that operate only at that stage. The objective function of this problem is to minimize the makespan. Also, other constraints must be taken into consideration, such as disallowed unit assignments, order due and release dates, and storage issues. The mathematical model of this problem is described extensively in Gupta and Karimi (2003). Equations (31)–(42) show the formulation of this problem. Equation (31) shows the constraint related to order assignment, which requires that each order i can only be processed on a single unit j in step s.
Equations (32)–(33) show the order sequence on each unit. Equations (32) and (34) show that only one order can be the first order on each unit j. Equation (35) shows the sequence constraint for different orders i and \(i^\backprime \) on the same unit j.
Equations (35) and (36) are unit assignment constraints. If integer \(X_{ii^\backprime s}\) or \(X_{i^\backprime is}\) is activated, then order \(i^\backprime \) and order i must be processed in the same unit j.
Equations (37)–(39) show the order timing constraints. Equation (37) shows the timing constraints for one order in different steps. Equation (38) is the timing constraint for different orders on the same unit. If the unit release time \(UR_j\) or the order release time \(OR_i\) is considered, Eqs. (39) and (40) are invoked. Equation (41) represents the constraint for cases with a due date \(DD_i\).
The objective function is to minimize the makespan and is formulated as follows:
All the constraints can be easily satisfied by metaheuristic algorithms except Eq. (41), which concerns minimizing the makespan. To handle Eq. (41), a penalty function is used, and Eqs. (43) and (44) are used to compute the objective function in this paper.
Equation (43) defines the penalty function, which is applied to penalize violations in Eq. (44). If the end time of an order exceeds its due date, Eq. (43) is activated and the objective value in Eq. (44) deteriorates accordingly.
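The due-date penalty described above can be sketched as follows. The function name, the helper structure, and the penalty weight are assumptions for illustration, not the authors' exact Eqs. (43)–(44).

```python
# Assumed sketch of a due-date penalty added to the makespan objective.
def penalized_makespan(end_times, due_dates, makespan, weight=1000.0):
    """Add a penalty for every order finishing after its due date."""
    violation = sum(max(0.0, e - d) for e, d in zip(end_times, due_dates))
    return makespan + weight * violation

# No violation: the objective equals the makespan itself.
obj_ok = penalized_makespan([5.0, 8.0], [6.0, 9.0], 8.0)      # 8.0
# One order one time unit late: the objective is heavily penalized.
obj_late = penalized_makespan([5.0, 10.0], [6.0, 9.0], 10.0)  # 1010.0
```

Because the penalty grows with the total lateness, a minimizing metaheuristic is steered back toward schedules that respect all due dates.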
Table 20 shows that the mBWO comes in the second rank after the AEO. In Table 21, the mBWO has obtained competitive results compared with other algorithms regarding most of the problem variables. Table 22 shows that mBWO has a meaningful difference with the other algorithms except the AEO. Figure 18 shows that the mBWO and some other algorithms have similar convergence behavior, reaching the minimum values. However, the SAO and BWO exhibit premature convergence, which indicates entrapment in local minima.
6 Conclusions and future work
This paper presents a modified beluga whale optimization (mBWO) that overcomes the limitations of the classical BWO. These limitations include slow convergence, the imbalance between exploration and exploitation, and falling into local optimal regions. The mBWO optimizer integrates three different strategies into the standard BWO: a transition factor between exploration and exploitation, a novel random control factor, and an elite evolution strategy. To test the optimizer and obtain a fair judgment, mBWO is compared with 11 optimizers using 29 CEC'17 functions and eight constrained engineering problems. The results indicate the significance and power of the suggested optimizer.
However, like all other optimizers, mBWO shares some drawbacks, such as the possibility of getting trapped in local optimal areas. Moreover, as stated by the no-free-lunch (NFL) theorem, mBWO cannot solve all optimization problems.
In the future, binary or multi-objective variants can be proposed to solve discrete and multi-objective problems. Also, mBWO can be applied to solve feature selection, scheduling, and knapsack problems, among others.
Data Availability
Data are available on request.
References
Abu Khurma R, Aljarah I (2021) A review of multi-objective evolutionary algorithms for data clustering problems. Evolutionary data clustering: algorithms and applications, pp 177–199
Abu Khurma R, Aljarah I, Sharieh A, Mirjalili S (2020) EvoloPy-FS: an open-source nature-inspired optimization framework in Python for feature selection. In: Evolutionary machine learning techniques, pp 131–173. Springer
Abualigah L, Gandomi AH, Elaziz MA, Hussien AG, Khasawneh AM, Alshinwan M, Houssein EH (2020) Nature-inspired optimization algorithms for text document clustering: a comprehensive analysis. Algorithms 13(12):345
Abualigah L, Gandomi AH, Elaziz MA, Hamad HA, Omari M, Alshinwan M, Khasawneh AM (2021) Advances in metaheuristic optimization algorithms in big data text clustering. Electronics 10(2):101
Alazab M, Khurma RA, Awajana A, Camacho D (2022) A new intrusion detection system based on moth-flame optimizer algorithm. Expert Syst Appl 210:118439
Al-Betar MA, Alyasseri ZAA, Awadallah MA, Doush IA (2021) Coronavirus herd immunity optimizer (CHIO). Neural Comput Appl 33:5011–5042
Alzaqebah A, Aljarah I, AlKadi O, Damaševičius R (2022) A modified grey wolf optimization algorithm for an intrusion detection system. Mathematics 10(6):999
Arora S, Singh S (2019) Butterfly optimization algorithm: a novel approach for global optimization. Soft Comput 23(3):715–734
Assiri AS, Hussien AG, Amin M (2020) Ant lion optimization: variants, hybrids, and applications. IEEE Access 8:77746–77764
Boveiri HR, Elhoseny M (2020) ACOA: an adaptive cuckoo optimization algorithm for continuous and combinatorial optimization. Neural Comput Appl 32(3):681–705
Braik M, Hammouri A, Atwan J, Al-Betar MA, Awadallah MA (2022) White shark optimizer: a novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl-Based Syst 243:108457
Chen P, Zhou S, Zhang Q, Kasabov N (2022) A meta-inspired termite queen algorithm for global optimization and engineering design problems. Eng Appl Artif Intell 111:104805
Chhabra A, Hussien AG, Hashim FA (2023) Improved bald eagle search algorithm for global optimization and feature selection. Alexandria Eng J 68:141–180
Civicioglu P (2013) Backtracking search optimization algorithm for numerical optimization problems. Appl Math Comput 219(15):8121–8144
Coello Coello CA (2000) Use of a selfadaptive penalty approach for engineering optimization problems. Comput Ind 41(2):113–127
Elaziz MA, Yousri D, Al-qaness MAA, Abdel-Aty AM, Radwan AG, Ewees AA (2021) A Grunwald–Letnikov based manta ray foraging optimizer for global optimization and image segmentation. Eng Appl Artif Intell 98:104105
Eskandar H, Sadollah A, Bahreininejad A, Hamdi M (2012) Water cycle algorithm: a novel metaheuristic optimization method for solving constrained engineering optimization problems. Comput Struct 110:151–166
Fathi H, Al-Salman H, Gumaei A, Manhrawy IIM, Hussien AG, El-Kafrawy P et al (2021) An efficient cancer classification model using microarray and high-dimensional data. Comput Intell Neurosci 2021
Gang H, Wang J, Li M, Hussien AG, Abbas M (2023) EJS: multi-strategy enhanced jellyfish search algorithm for engineering applications. Mathematics 11(4):851
Gupta S, Karimi IA (2003) An improved MILP formulation for scheduling multiproduct, multistage batch plants. Ind Eng Chem Res 42(11):2365–2380
Hancer E (2020) A new multi-objective differential evolution approach for simultaneous clustering and feature selection. Eng Appl Artif Intell 87:103307
Hashim FA, Hussien AG (2022) Snake optimizer: a novel metaheuristic optimization algorithm. Knowl-Based Syst 242:108320
Hashim FA, Hussain K, Houssein EH, Mabrouk MS, AlAtabany W (2021) Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems. Appl Intell 51(3):1531–1551
Hashim FA, Mostafa RR, Hussien AG, Mirjalili S, Sallam KM (2023) Fick's law algorithm: a physical law-based algorithm for numerical optimization. Knowl-Based Syst 260:110146
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algorithm and applications. Future Gener Comput Syst 97:849–872
Holland JH (1992) Genetic algorithms. Sci Am 267(1):66–73
Yu H, Jia H, Zhou J, Hussien AG (2022) Enhanced Aquila optimizer algorithm for global optimization and constrained engineering problems. Math Biosci Eng 19(12):14173–14211
Hussien AG, Amin M (2022) A self-adaptive Harris hawks optimization algorithm with opposition-based learning and chaotic local search strategy for global optimization and feature selection. Int J Mach Learn Cybern, pp 1–28
Hussien AG, Heidari AA, Ye X, Liang G, Chen H, Pan Z (2022) Boosting whale optimization with evolution strategy and Gaussian random walks: an image segmentation method. Eng Comput, pp 1–45
Hussien AG, Houssein EH, Hassanien AE (2017) A binary whale optimization algorithm with hyperbolic tangent fitness function for feature selection. In: 2017 Eighth international conference on intelligent computing and information systems (ICICIS), pp 166–172. IEEE
Hussien AG (2022) An enhanced oppositionbased salp swarm algorithm for global optimization and engineering problems. J Ambient Intell Hum Comput 13(1):129–150
Hussien AG, Oliva D, Houssein EH, Juan AA, Xu Yu (2020) Binary whale optimization algorithm for dimensionality reduction. Mathematics 8(10):1821
Hussien AG, Amin M, Wang M, Liang G, Alsanad A, Gumaei A, Chen H (2020) Crow search algorithm: theory, recent advances, and applications. IEEE Access 8:173548–173565
Hussien AG, Abualigah L, Zitar RA, Hashim FA, Amin M, Saber A, Almotairi KH, Gandomi AH (2022) Recent advances in Harris hawks optimization: a comparative study and applications. Electronics 11(12):1919
Hussien AG, Hashim FA, Qaddoura R, Abualigah L, Pop A (2022) An enhanced evaporation rate water-cycle algorithm for global optimization. Processes 10(11):2254
Hussien A, Liang G, Chen H, Lin H (2023) A double adaptive random spare reinforced sine cosine algorithm. CMES-Comput Model Eng Sci 136(3):2267–2289
Ibrahim RA, Elaziz MA, Songfeng L (2018) Chaotic opposition-based grey-wolf optimization algorithm based on differential evolution and disruption operator for global optimization. Expert Syst Appl 108:1–27
Kannan BK, Kramer SN (1994) An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J Mech Des 116(2):405–411
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN'95 international conference on neural networks, vol 4, pp 1942–1948. IEEE
Khurmaa RA, Aljarah I, Sharieh A (2021) An intelligent feature selection approach based on moth flame optimization for medical diagnosis. Neural Comput Appl 33(12):7165–7204
Koza JR (1994) Genetic programming as a means for programming computers by natural selection. Stat Comput 4(2):87–112
Li S, Chen H, Wang M, Heidari AA, Mirjalili S (2020) Slime mould algorithm: a new method for stochastic optimization. Future Gener Comput Syst 111:300–323
Lin X, Yu X, Li W (2022) A heuristic whale optimization algorithm with niching strategy for global multidimensional engineering optimization. Comput Ind Eng 171:108361
Luo J, Chen H, Yueting X, Huang H, Zhao X et al (2018) An improved grasshopper optimization algorithm with application to financial stress prediction. Appl Math Model 64:654–668
Mezura-Montes E, Coello Coello CA (2005) Useful infeasible solutions in engineering optimization with evolutionary algorithms. In: Mexican international conference on artificial intelligence, pp 652–662. Springer
Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27(2):495–513
Mostafa RR, Hussien AG, Khan MA, Kadry S, Hashim FA (2022) Enhanced coot optimization algorithm for dimensionality reduction. In: 2022 Fifth international conference of women in data science at Prince Sultan University (WiDS PSU), pp 43–48. IEEE
Onay FK, Aydemir SB (2022) Chaotic hunger games search optimization algorithm for global optimization and engineering problems. Math Comput Simul 192:514–536
Örnek BN, Aydemir SB, Düzenli T, Özak B (2022) A novel version of slime mould algorithm for global optimization and real world engineering problems: enhanced slime mould algorithm. Math Comput Simul 198:253–288
Pei Y (2020) Chaotic evolution algorithm with elite strategy in singleobjective and multiobjective optimization. In: 2020 IEEE international conference on systems, man, and cybernetics (SMC), pp 579–584. IEEE
Sadollah A, Bahreininejad A, Eskandar H, Hamdi M (2013) Mine blast algorithm: a new population-based algorithm for solving constrained engineering optimization problems. Appl Soft Comput 13(5):2592–2612
Salawudeen AT, Mu'azu MB, Yusuf A, Adedokun AE (2021) A novel smell agent optimization (SAO): an extensive CEC study and engineering application. Knowl-Based Syst 232:107486
Sapre S, Mini S (2019) Opposition-based moth flame optimization with Cauchy mutation and evolutionary boundary constraint handling for global optimization. Soft Comput 23(15):6023–6041
Saremi S, Mirjalili SZ, Mirjalili SM (2015) Evolutionary population dynamics and grey wolf optimizer. Neural Comput Appl 26(5):1257–1263
Seyyedabbasi A, Kiani F (2022) Sand cat swarm optimization: a nature-inspired algorithm to solve global optimization problems. Eng Comput, pp 1–25
Singh PR, Elaziz MA, Xiong S (2019) Ludo gamebased metaheuristics for global and engineering optimization. Appl Soft Comput 84:105723
Singh S, Singh H, Mittal N, Hussien AG, Sroubek F (2022) A feature level image fusion for nightvision context enhancement using arithmetic optimization algorithm based image segmentation. Expert Syst Appl 209:118272
Song S, Wang P, Heidari AA, Wang M, Zhao X, Chen H, He W, Suling X (2021) Dimension decided Harris hawks optimization with Gaussian mutation: balance analysis and diversity patterns. Knowl-Based Syst 215:106425
Venkata Rao R, Savsani VJ, Vakharia DP (2011) Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315
Wang S, Hussien AG, Jia H, Abualigah L, Zheng R (2022) Enhanced remora optimization algorithm for solving constrained engineering optimization problems. Mathematics 10(10):1696
Wang K, Guo M, Dai C, Li Z (2022) Informationdecision searching algorithm: theory and applications for solving engineering optimization problems. Inf Sci 607:1465–1531
Wen L, Jianjun J, Ximing L, Mingzhu T (2018) An explorationenhanced grey wolf optimizer to solve highdimensional numerical optimization. Eng Appl Artif Intell 68:63–80
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evolut Comput 1(1):67–82
Wu G, Mallipeddi R, Suganthan PN (2017) Problem definitions and evaluation criteria for the CEC 2017 competition on constrained realparameter optimization. National University of Defense Technology, Changsha, Hunan, PR China and Kyungpook National University, Daegu, South Korea and Nanyang Technological University, Singapore, Technical Report
Yueting X, Chen H, Heidari AA, Luo J, Zhang Q, Zhao X, Li C (2019) An efficient chaotic mutative moth-flame-inspired optimizer for global optimization tasks. Expert Syst Appl 129:135–155
Yueting X, Chen H, Luo J, Zhang Q, Jiao S, Zhang X (2019) Enhanced moth-flame optimizer with mutation strategy for global optimization. Inf Sci 492:181–203
Zhao W, Wang L, Zhang Z (2020) Artificial ecosystem-based optimization: a novel nature-inspired meta-heuristic algorithm. Neural Comput Appl 32(13):9383–9425
Zheng R, Hussien AG, Jia HM, Abualigah L, Wang S, Di W (2022) An improved wild horse optimizer for solving optimization problems. Mathematics 10(8):1311
Zheng R, Hussien AG, Qaddoura R, Jia H, Abualigah L, Wang S, Saber A (2023) A multistrategy enhanced African vultures optimization algorithm for global optimization problems. J Comput Des Eng 10(1):329–356
Zhong C, Li G, Meng Z (2022) Beluga whale optimization: a novel nature-inspired metaheuristic algorithm. Knowl-Based Syst 251:109215
Funding
Open access funding provided by Linköping University.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors have no conflict of interest.
Ethical approval
1. This material is the authors’ own original work, which has not been previously published elsewhere. 2. The paper is not currently being considered for publication elsewhere. 3. The paper reflects the authors’ own research and analysis in a truthful and complete manner.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Hussien, A.G., Khurma, R.A., Alzaqebah, A. et al. Novel memetic of beluga whale optimization with self-adaptive exploration–exploitation balance for global optimization and engineering problems. Soft Comput 27, 13951–13989 (2023). https://doi.org/10.1007/s00500-023-08468-3
Keywords
Beluga Whale Optimization · BWO · Elite Evolution Strategy · Self-adaptive exploration–exploitation · Engineering problems