1 Introduction

Recently, several nature-inspired methods have been designed to address complicated and tough non-linear problems, because classical optimization methods either take a long time on real-world challenges or fail to tackle them adequately. Nature-inspired methods, also called swarm intelligence algorithms, are designed by mimicking living organisms. For example, the artificial bee colony (ABC) optimization method was inspired by bee communities [1], ant colony optimization (ACO) mimics the movement of real ants between nests and food sources [2], and the grasshopper optimization algorithm (GOA) simulates the social forces among locust individuals in nature [3]. Likewise, particle swarm optimization (PSO) was proposed in 1995 by Eberhart and Kennedy [4], who were inspired by the social behaviors of fish schools and bird flocks.

Optimization is the process of determining ideal values of variables to minimize or maximize a target function. Optimization problems exist in different fields of study. Optimization algorithms are widely categorized as "metaheuristic algorithms" (MAs), which draw inspiration from natural phenomena to address real-life challenges in engineering, medicine, and other real-world domains. MAs comprise several families of nature-derived algorithms, including bio-inspired algorithms, human-based algorithms, physics/chemistry-based algorithms, and other algorithms, exemplified by the genetic algorithm (GA) [5], human mental search (HMS) [6], atom search optimization (ASO) [7], and grammatical evolution (GE) [8], respectively.

Addressing optimization problems involves four actions. First, the parameters are determined, and it is identified whether the problem is continuous or discrete. Second, the constraints imposed on the parameters must be recognized, as these distinguish constrained from unconstrained optimization problems [9]. Third, the problem's objectives should be examined, because optimization challenges are divided into single-objective and multi-objective problems [10]. Finally, an appropriate optimizer should be selected and applied, depending on the identified types of parameters, constraints, and number of objectives.

Mathematical optimization depends heavily on gradient information from the functions involved in determining the optimum. Although such strategies are employed by several scholars, they have major drawbacks, chief among them local optima traps: a method accepts a local solution as the global one and fails to find the global optimum. They are also ineffective for problems whose derivatives are unknown or computationally costly [11]. Stochastic optimization [12] is another optimization technique that can mitigate these two challenges. Stochastic approaches rely on random variables to avoid local optima, and all begin the optimization procedure by producing one or more randomized solutions to a given task. Unlike mathematical optimization approaches, they do not need the gradient of a solution; instead, they simply evaluate solutions using the fitness function. MAs, which include nature-inspired and population-based algorithms, are the most widely used stochastic optimization techniques [13].

Despite the several effective stochastic optimization algorithms in the literature that solve various optimization problems, there are still some drawbacks, such as insufficient diversification and intensification ability, the insignificant balance between diversification and intensification, ineffectiveness for issues whose derivation is uncertain or computationally costly, and the most common local optima entrapment. This motivated the researchers to make some adjustments to the existing stochastic optimization algorithms to avoid these drawbacks, especially local optima entrapment. Opposition-based learning (OBL) [14], Lévy flights (LFs) [15], greedy selection (GS) [16], and genetic operators (crossover and mutation) [17] are a few of the effective strategies in the literature that enhance the performance of stochastic optimization algorithms by improving the quality of acquired solutions and avoiding local optima entrapment.

The orca predation algorithm (OPA) [18] is a recent bio-inspired optimizer (proposed by Y. Jiang et al. in 2022) that mimics orca hunting behavior and summarizes it into multiple formulas, including prey driving, encircling, and attacking. To reinforce the global search of OPA and avoid the issue of local optimum entrapment, OPA is merged with the LF and GS strategies in this paper. LF is a type of random walk related to Brownian motion but with step lengths drawn from a non-Gaussian, heavy-tailed distribution [19]. LF can describe a variety of natural and artificial phenomena, including fluid dynamics, earthquake analysis, fluorescent molecule diffusion, cooling activity, and noise [20]. Furthermore, LF mimics the food-finding paths of numerous species, such as bumblebees, albatrosses, and deer, and has been incorporated into nature-inspired algorithms to promote algorithm progress [21, 22].

According to the no free lunch (NFL) theorem [23], no optimization model can address all present optimization challenges in the literature. It is therefore regarded as one of the primary motivators for proposing new stochastic optimization algorithms and advancing existing ones to enhance their search ability. In other words, when all optimization problems are considered, algorithms in this field work, on average, similarly well. Consequently, this theorem has partly motivated the fast-growing number of stochastic optimization models proposed within the last decade.

This paper proposes an enhanced variant of OPA with LF and GS strategies, called the Lévy flight orca predation algorithm (LFOPA). In non-destructive foraging situations, LF-based behaviors are the best search techniques for foragers and predators [22, 24]. Moreover, it has been shown that LF-based patterns may be seen in the chasing behaviors of creatures such as monkeys and sharks [25,26,27,28]. As a result, the LF strategy is incorporated into OPA's chasing phase, especially in the prey-forcing formulas, to allow orcas to get closer to the prey and improve their chances of reaching the prey's preferred position. It is also advantageous for enhancing the essential global search mechanisms (diversification abilities) of LFOPA and, consequently, for avoiding local optima entrapment. Furthermore, the GS strategy from the differential evolution (DE) algorithm [29], which supports the principle of survival of the fittest, is used here with a certain probability. Under this strategy, superior new orcas in each iteration continue to be enhanced in future iterations, while the least superior ones are discarded. This strategy reinforces LFOPA's randomness. By incorporating the GS strategy into LFOPA, the search abilities are enhanced, since each pioneer orca can survive and subsequently share its acquired knowledge with other hunter orcas throughout the later stages of the search. It is also useful for stabilizing the critical diversification and intensification trends, enabling LFOPA to converge to higher-quality solutions. Together, these two improvements are made to obtain higher-quality solutions to optimization challenges.

In summary, the main contributions of this paper are as follows:

  • The original contribution of this paper is the modification of a very recent optimization algorithm, the orca predation algorithm (OPA), using efficient strategies, namely Lévy flight (LF) and greedy selection (GS).

  • The modified OPA, called the Lévy Flight Orca Predation Algorithm (LFOPA), is proposed to address real-world engineering issues.

  • The efficacy of the proposed LFOPA is verified using the CEC’20 test platform, four real-world engineering design issues, and the node localization issue in wireless sensor networks (WSNs).

  • Statistical results on the CEC’20 test platform and real-world engineering issues revealed the superiority of the proposed LFOPA compared to OPA and other competitors.

  • Also, simulation results on the node localization issue in WSNs proved the superior capability of the proposed LFOPA in minimizing the squared error and the localization error compared to OPA and other competitors.

The remainder of the paper is structured as follows: Section 2 is a review of the literature. Section 3 presents an overview of the original OPA. Section 4 introduces the suggested LFOPA. Sections 5 and 6 discuss the experimental outcomes on CEC’20 and real-world applications, respectively. Section 7 contains the conclusions as well as some suggestions for future research.

2 Literature Review

Lévy flights (LFs) are commonly employed in optimization algorithms, either as the main source of inspiration for an algorithm or as an aid to advance its performance, given their superior ability to boost the diversification (exploration) capability of a stochastic optimization algorithm and thus avoid the trap of a local optimum solution. LFs can also increase the effectiveness of resource searches in uncertain environments. LFs have been observed in the foraging habits of albatrosses, fruit flies, and spider monkeys. Even humans, such as the Ju/'hoansi hunter-gatherers, trace LF-pattern routes. Furthermore, LFs have a wide range of applications in various scientific processes, such as fluorescent molecule dispersion, cooling activity, and noise [13].

In [30], the LF is used as the main inspiration for an efficient optimization algorithm called the Lévy flight distribution (LFD), which is employed to address three engineering design challenges: the tension/compression spring, the welded beam, and the pressure vessel. The LFD algorithm is also employed to address node deployment in WSNs. The statistical and simulation outcomes demonstrated the LFD algorithm's advantage over the other competitors studied. Also, in [31] the LF idea is used in the Harris hawks optimizer (HHO) to mathematically describe the escape techniques of the prey and the leapfrog motions. The LF simulates the zigzag deceptive maneuvers of the prey (namely rabbits) during the escape phase, as well as the irregular, sudden, and quick dives of hawks surrounding the escaping prey. Hawks conduct a series of fast team dives around the rabbit, attempting to gradually adjust their location and orientation in response to the deceptive actions of the prey. Real-world observations of various competing situations in nature lend credence to this approach.

In [32], the LF strategy is used to enhance the randomness, stochastic behavior, and diversification of the intelligent dragonflies in the dragonfly algorithm (DA), because the dragonflies must fly across the search space using a stochastic process when there are no adjacent solutions. The cuckoo search (CS) method is one of the most common stochastic optimization techniques that employ LFs as the main mechanism to generate new solutions, exploiting the random walks created by LFs together with the resemblance between a cuckoo's egg and the host's egg, which can be difficult to distinguish. In this case, the step length controls how far a random walker may travel in a fixed number of iterations [33].

Uses of LFs to improve existing stochastic optimization algorithms abound in the literature. For example, PSO is a widely known population-based approach utilized in global optimization and a variety of engineering challenges. Notwithstanding its stability and reliability, PSO has setbacks such as getting locked in local minima owing to premature convergence and a lack of global search capability; in [34], PSO is paired with LF to address these challenges. Furthermore, the grey wolf optimizer (GWO) is a recent, highly efficient population-based optimizer. Compared to other well-known optimizers, the GWO method can provide optimum efficiency. However, because of inadequate wolf diversity in some circumstances, GWO might still be susceptible to collapsing into local optima. An enhanced GWO method is provided in [35] to address global and real-world optimization problems; to improve GWO's performance, LF and GS techniques are combined with modified hunting stages. To improve two recently proposed optimizers, a variation based on LF was presented in [15]. The first optimizer is the sine-cosine algorithm (SCA), and the second is the whale optimization algorithm (WOA). In each optimization iteration, both optimizers consist of two phases of random movements, and both suffer from stagnation and premature convergence. In the SCA, LF is used to replace the movement based on the cosine function, and in the WOA, it replaces the spiral movement. The LF-based search ensures that a share of solutions other than the current optimum is created, which counters optimizer stagnation and premature convergence and provides local optima evasion [15].

The cultural algorithm (CA) is a hyper-heuristic iterative method that directly uses information represented in the belief space to steer the evolutionary search. A new, improved cultural algorithm is introduced in [36], which integrates a fuzzy system with a modified LF search. The modified LF search uses knowledge from the belief space as an input to help the evolutionary process produce higher-quality solutions. The approach is compared to various algorithms, including the best performers in the CEC 2015 competition on learning-based real-parameter single-objective optimization, and is evaluated on the benchmark suite from that competition. The findings show that the suggested algorithm exceeds the other state-of-the-art methods both statistically and in solution quality [36]. Likewise, the shuffled frog-leaping algorithm (SFLA), an evolutionary optimization method inspired by frog foraging behavior, has been widely utilized to solve optimization issues in a multitude of domains. However, particularly for continuous optimization issues, it is easily entrapped by the local optimum. Based on an extended framework, the work in [37] developed a variant of SFLA for continuous optimization issues, termed the LF-based shuffled frog-leaping algorithm (LSFLA). For the local search procedure, an LF-based attractor was utilized, which improved the algorithm's local search capability by allowing it to take short and occasionally longer steps.

Finally, the work in [38] presents OB-LF-ALO, a new efficient variant of the recently announced antlion optimizer (ALO). In place of the uniformly distributed random movement in the original ALO, the improved version combines the theory of OBL with LF for the random movement. Any optimization model's effectiveness is contingent on a proper equilibrium of diversification and intensification during the evolution procedure. The original algorithm is prone to local optima entrapment, necessitating richer diversification and a suitable intensification mix. The suggested method facilitates convergence by increasing initial diversity and providing strong intensification in later generations. A diverse set of 21 unconstrained continuous benchmark test beds is utilized to investigate the validity of the developed method. According to the experimental investigation, the OB-LF-ALO variant outperforms ALO. Likewise, the bee algorithm (BA) is a population-based evolutionary model based on honeybee foraging behavior. This strategy has been demonstrated to be effective in the fields of multimodal and constrained optimization. Its behavior is also quite close to that observed in nature, and it is basic and straightforward to implement. However, a method to avoid getting stranded in local optima, as well as a shorter convergence time to the optimal solution, is still required. A unique initialization strategy based on the patch idea and the LF distribution is provided in [39] to initialize the bee population in BA.

3 An Overview of OPA

The OPA is a recent stochastic optimization method that mimics the behavior of orcas, smart and highly social carnivorous dolphins [40]. OPA starts its optimization procedure with a random initial population of N individuals of dimension D. Mathematically, the behavior of orcas in the optimization procedure can be summarized into two main phases, the chasing phase and the attacking phase, presented as follows:

3.1 Chasing Phase

When orcas come across a group of fish, they do not simply start to hunt; instead, they utilize sonar to cooperate with one another. A troop of orcas migrates, forcing the group of fish to the surface and into a controlled cage. The authors of OPA simplify the chasing stage of the orcas' hunting procedure into two types of behavior: forcing the prey and surrounding the prey. The choice between these two behaviors is governed by the selection parameter p1, a constant between 0 and 1; another number between 0 and 1 is generated at random, and if it exceeds p1, the forcing behavior is performed; otherwise, the surrounding behavior is performed.

3.1.1 Forcing Behavior

When orcas detect a group of fish, they chase them to the surface. If the troop of orcas is small, the spatial scale of swimming becomes minimal, or when the hunting area is simple, the orcas can identify the prey quickly and precisely. On the other hand, if the orca troop is large, the spatial scale of swimming is greater and the hunting area is very complex; orca swimming is then likely to scatter, making it harder to approach the target location precisely. Besides allowing orca search agents to move closer to the prey, it is also vital to maintain control over the orca troop's central location, keeping it near the prey and preventing the troop from deviating from the goal. Consequently, the chasing process is summarized into two approaches based on the size of the orca population. The first approach is used when the orca troop is big (rand > Q), while the second is used when the orca troop is small (rand \(\leqslant\) Q).

Mathematically, the orca’s velocity and the associated position after it has moved are presented as follows:

$$\begin{aligned}{} & {} vel_{\textrm{ch}, i, F}^\textrm{it}=A*\left( V*\textrm{pos}_{\textrm{optimal}}^\textrm{it}-U*\left( B*M^\textrm{it}+C*\textrm{pos}_{i}^\textrm{it}\right) \right) \end{aligned}$$
(1)
$$\begin{aligned}{} & {} vel_{\textrm{ch}, i, S}^\textrm{it}=E*\textrm{pos}_{\textrm{optimal}}^\textrm{it}-\textrm{pos}_{i}^\textrm{it} \end{aligned}$$
(2)
$$\begin{aligned}{} & {} M=\frac{\sum _{i=1}^{N} \textrm{pos}_{i}^\textrm{it}}{N} \end{aligned}$$
(3)
$$\begin{aligned}{} & {} C=1-B \end{aligned}$$
(4)
$$\begin{aligned}{} & {} \left\{ \begin{array}{cll} \textrm{pos}_{\textrm{ch}, i, F}^\textrm{it}=\textrm{pos}_{i}^\textrm{it} +vel_{\textrm{ch}, i, F}^\textrm{it} &{} \text {if } &{} \text {rand }>Q \\ \textrm{pos}_{\textrm{ch}, i, S}^\textrm{it}=\textrm{pos}_{i}^\textrm{it} +vel_{\textrm{ch}, i, S}^\textrm{it} &{} \text {if } &{} \text {rand } \leqslant Q, \end{array}\right. \end{aligned}$$
(5)

where it indicates the current iteration index, \(vel^\textrm{it}_{ch,i,F}\) and \(vel^\textrm{it}_{ch,i,S}\) specify the chasing velocity of the \(i^{th}\) orca at iteration it under the first and second chasing approaches, respectively, M indicates the arithmetic mean location of the orca troop, \(\textrm{pos}^\textrm{it}_{ch,i,F}\) and \(\textrm{pos}^\textrm{it}_{ch,i,S}\) are the locations of the \(i^{th}\) orca at iteration it under the first and second chasing approaches, respectively, A, B, and V are random values uniformly distributed in [0, 1], E is a random value in [0, 2], the value of U is 2, and Q is a value in [0, 1] that gives the probability of selecting a particular chasing approach.
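For concreteness, the forcing-behavior update of Eqs. (1)–(5) can be sketched in Python as follows. The function name, the fixed seed, and the choice of scalar (rather than per-dimension) random coefficients per orca are illustrative assumptions, not details stated in the original OPA paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forcing_step(pos, pos_optimal, Q=0.5, U=2.0):
    """One forcing-behavior update for the whole troop (Eqs. (1)-(5)).

    pos: (N, D) array of orca positions; pos_optimal: (D,) best position.
    A, B, V, E follow the ranges stated in the text.
    """
    N, D = pos.shape
    M = pos.mean(axis=0)                       # Eq. (3): troop centre
    new_pos = np.empty_like(pos)
    for i in range(N):
        A, B, V = rng.random(3)                # uniform in [0, 1]
        C = 1.0 - B                            # Eq. (4)
        E = rng.uniform(0.0, 2.0)
        if rng.random() > Q:                   # first approach (large troop)
            vel = A * (V * pos_optimal - U * (B * M + C * pos[i]))  # Eq. (1)
        else:                                  # second approach (small troop)
            vel = E * pos_optimal - pos[i]     # Eq. (2)
        new_pos[i] = pos[i] + vel              # Eq. (5)
    return new_pos
```

Note how both branches pull each orca toward \(\textrm{pos}_{\textrm{optimal}}^\textrm{it}\), while the first branch also anchors the move to the troop centre M.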

3.1.2 Surrounding Behavior

Orcas must surround the group of fish in a controlled ring after forcing them to the surface. During the surrounding behavior, orcas utilize sonar to interact with each other and decide their next location based on the locations of adjacent orcas. The OPA authors presume that each orca locates itself by utilizing the locations of three randomly chosen orcas, and then determines its location after swimming as follows:

$$\begin{aligned}{} & {} \textrm{pos}_{\textrm{ch}, i, TH, z}^\textrm{it}=\textrm{pos}_{j_1, z}^\textrm{it}+r * \left( \textrm{pos}_{j_2, z}^\textrm{it}-\textrm{pos}_{j_3, z}^\textrm{it}\right) \end{aligned}$$
(6)
$$\begin{aligned}{} & {} r=2 *(\text{ rand } -1 / 2) * \frac{\textrm{it}_\textrm{max} -it}{ \textrm{it}_\textrm{max} }, \end{aligned}$$
(7)

where \(\textrm{it}_\textrm{max}\) indicates the maximum number of iterations, \(j_1\), \(j_2\), \(j_3\) represent three orcas randomly selected from the N orcas, with \(j_1\) \(\ne\) \(j_2\) \(\ne\) \(j_3\), and \(\textrm{pos}^\textrm{it}_{ch,i,TH}\) is the location of the \(i^{th}\) orca after the third chasing approach at iteration it.
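A minimal sketch of the surrounding behavior (Eqs. (6)–(7)) follows; we assume r is drawn per dimension (matching the per-component index z in Eq. (6)), and the function name and seed are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrounding_step(pos, it, it_max):
    """Surrounding-behavior update: each orca relocates using three
    distinct, randomly chosen troop members (Eqs. (6)-(7))."""
    N, D = pos.shape
    new_pos = np.empty_like(pos)
    for i in range(N):
        j1, j2, j3 = rng.choice(N, size=3, replace=False)   # j1 != j2 != j3
        # Eq. (7): r lies in [-1, 1] and shrinks linearly with the iterations
        r = 2.0 * (rng.random(D) - 0.5) * (it_max - it) / it_max
        new_pos[i] = pos[j1] + r * (pos[j2] - pos[j3])      # Eq. (6)
    return new_pos
```

At the final iteration r vanishes, so every orca collapses onto one of its chosen neighbours, illustrating how the surrounding ring tightens over time.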

3.2 Attacking Phase

3.2.1 Pouncing Preys

When orcas surround their prey, they take turns entering the cage to attack, lashing their tails against the ring and devouring the stunned fish, before returning to the original cage to be replaced by another orca. The OPA authors suppose there are four orcas occupying the four best attacking locations in the ring (four are chosen because too few orcas would make the individuals travel in a single orientation, while too many would decrease the algorithm's convergence speed). If other orcas want to enter the cage, they can do so by traveling in the same orientation as these four orcas. If orcas want to return to the cage after feeding to swap with other orcas, the traveling orientation is decided by the locations of randomly picked adjacent orcas. The traveling velocity and location of an orca during the attacking phase are computed as follows:

$$\begin{aligned}{} & {} vel_{\textrm{at},i,H}^\textrm{it}=\left( \textrm{pos}_{\textrm{first}}^\textrm{it}+\textrm{pos}_{\textrm{second}}^\textrm{it}+\textrm{pos}_{\textrm{third}}^\textrm{it}+\textrm{pos}_{\textrm{four}}^{\textrm{it}}\right) / 4-\textrm{pos}_{\textrm{ch},i}^\textrm{it} \end{aligned}$$
(8)
$$\begin{aligned}{} & {} vel_{\textrm{at},i,R}^\textrm{it}=\left( \textrm{pos}_{\text {ch},{j_1}}^\textrm{it}+\textrm{pos}_{\textrm{ch},{j_2}}^\textrm{it}+\textrm{pos}_{\text {ch},{j_3}}^\textrm{it}\right) / 3-\textrm{pos}_{i}^\textrm{it} \end{aligned}$$
(9)
$$\begin{aligned}{} & {} \textrm{pos}_{\textrm{at}, i}^\textrm{it}=\textrm{pos}_{\text {ch}, i}^\textrm{it}+G_1 * vel_{\textrm{at}, i, H}^\textrm{it}+G_2 * vel_{\textrm{at}, i, R}^\textrm{it}, \end{aligned}$$
(10)

where \(vel_{\textrm{at},i,H}^\textrm{it}\) indicates the velocity of the \(i^{th}\) orca when hunting prey at iteration it, \(vel_{\textrm{at},i,R}^\textrm{it}\) depicts the velocity of the \(i^{th}\) orca when returning to the cage at iteration it, \(\textrm{pos}_{\textrm{first}}^\textrm{it}\), \(\textrm{pos}_{\textrm{second}}^\textrm{it}\), \(\textrm{pos}_{\textrm{third}}^\textrm{it}\), \(\textrm{pos}_{\textrm{four}}^\textrm{it}\) refer to the four orcas in the best locations, in order, \(j_1\), \(j_2\), \(j_3\) stand for the three orcas randomly selected from the N orcas in the chasing phase, with \(j_1\) \(\ne\) \(j_2\) \(\ne\) \(j_3\), \(\textrm{pos}_{\textrm{at}, i}^\textrm{it}\) indicates the location of the \(i^{th}\) orca at iteration it after the attacking phase, \(G_1\) is a random value in [0, 2], and \(G_2\) is a random value in [\(-\)2.5, 2.5].
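The attacking phase of Eqs. (8)–(10) can be sketched as below. We assume fitness is minimized when ranking the four best chased orcas; the function and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def attacking_step(pos, pos_chased, fitness):
    """Attacking-phase update (Eqs. (8)-(10)): pos_chased holds the
    positions after the chasing phase; fitness ranks them (lower is
    better), so the four fittest orcas define the hunting direction."""
    N, D = pos.shape
    best4 = pos_chased[np.argsort(fitness)[:4]]    # four best locations
    centre = best4.mean(axis=0)
    new_pos = np.empty_like(pos)
    for i in range(N):
        j1, j2, j3 = rng.choice(N, size=3, replace=False)
        vel_H = centre - pos_chased[i]                            # Eq. (8)
        vel_R = pos_chased[[j1, j2, j3]].mean(axis=0) - pos[i]    # Eq. (9)
        G1 = rng.uniform(0.0, 2.0)
        G2 = rng.uniform(-2.5, 2.5)
        new_pos[i] = pos_chased[i] + G1 * vel_H + G2 * vel_R      # Eq. (10)
    return new_pos
```

The hunting term pulls each orca toward the centroid of the four best attackers, while the returning term adds a randomized pull toward three random neighbours, mirroring the alternation described above.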

4 The Proposed LFOPA

The proposed LFOPA performs the same optimization process as the original OPA, including the two main phases (chasing and attacking phases). However, the LFOPA introduces two modifications: the chasing phase is reinforced with the LF strategy, and the GS strategy from the DE algorithm is embedded into the algorithm. These two modifications are presented as follows:

4.1 Reinforcing the Chasing Phase with the LF Strategy

In the chasing phase, the original OPA updates each orca's location toward the prey based solely on the orca's original location and velocity, as shown in Eq. (5). However, in some situations, OPA individuals in the chasing phase are still prone to local optimum stagnation. As a result, OPA's difficulties with premature convergence can still be felt. Also, in some circumstances, the original OPA is unable to make a smooth transition from the diversification to the intensification stage. The LF strategy can be utilized to alleviate these challenges: it helps the original OPA gather information using deeper search patterns, which makes it possible to ensure that LFOPA handles global searches more accurately. The issue of stagnation is also alleviated in this way, and consequently, the quality of the acquired solutions in LFOPA is improved. The LF strategy is integrated into OPA's chasing phase, especially in the orca location-updating formula of the prey-forcing behavior (Eq. (5)), to allow orcas to get closer to the prey and improve their chances of reaching the prey's preferred position. The dedicated formula is modified in LFOPA as follows:

$$\begin{aligned} \left\{ \begin{array}{cll} \textrm{pos}_{\textrm{ch}, i, F}^\textrm{it}=\textrm{pos}_{i}^\textrm{it}+vel_{\textrm{ch}, i, F}^\textrm{it} +\alpha \oplus Levy(\beta ) &{} \text {if } &{} \text {rand }>Q \\ \textrm{pos}_{\textrm{ch}, i, S}^\textrm{it}=\textrm{pos}_{i}^\textrm{it}+vel_{\textrm{ch}, i, S}^\textrm{it} +\alpha \oplus Levy(\beta ) &{} \text {if } &{} \text {rand } \leqslant Q, \end{array}\right. \end{aligned}$$
(11)

where \(\alpha\) is the step length, which must be matched to the scales of the problem of interest. In the proposed LFOPA, \(\alpha\) is a random number over all orca dimensions. Moreover, the sign of \(\alpha\) signifies the movement orientation, positive to the right and negative to the left [19]. As such, to execute LFs or create random walks, two criteria must be specified: the step length of the walk, which follows the chosen Lévy distribution (LD) [41, 42] and is represented in the value of \(\alpha\), and the direction of the movement, which influences the travel toward the target (represented in the sign of \(\alpha\)). The well-known Mantegna algorithm for a symmetric and stable LD is the most straightforward and fastest approach for determining these criteria [30]. Applying it, the update of Eq. (11) is written as:

$$\begin{aligned} { \left\{ \begin{array}{cll} \textrm{pos}_{\textrm{ch}, i, F}^\textrm{it}=\textrm{pos}_{i}^\textrm{it}+vel_{\textrm{ch}, i, F}^\textrm{it} + \textrm{random}(\textrm{size}(D)) \oplus \textrm{Levy}(\beta ) &{} \text {if } &{} \text {rand }>Q \\ \textrm{pos}_{\textrm{ch}, i, S}^\textrm{it}=\textrm{pos}_{i}^\textrm{it}+vel_{\textrm{ch}, i, S}^\textrm{it} + \textrm{random}(\textrm{size}(D)) \oplus \textrm{Levy}(\beta ) &{} \text {if } &{} \text {rand } \leqslant Q \end{array}\right. } \end{aligned}$$
(12)

The product \(\oplus\) means entry-wise multiplication. A non-trivial scheme for generating a step-length sample s is discussed in detail in [41, 42], and it can be summarized as follows:

$$\begin{aligned} s={\text {random}}({\text {size}}(D)) \oplus {\text {Levy}}(\beta ) \sim 0.01 \frac{u}{|v|^{1 / \beta }}\left( \textrm{pos}_{i}^\textrm{it}-\textrm{pos}_\textrm{optimal}^\textrm{it}\right) \end{aligned}$$
(13)

where u and v are drawn from normal distributions; that is,

$$\begin{aligned} u \sim N\left( 0, \sigma _{u}^{2}\right) \quad v \sim N\left( 0, \sigma _{v}^{2}\right) \end{aligned}$$
(14)

with

$$\begin{aligned} \sigma _{u}=\left\{ \frac{\Gamma (1+\beta ) \sin (\pi \beta / 2)}{\Gamma [(1+\beta ) / 2] \beta 2^{(\beta -1) / 2}}\right\} ^{1 / \beta } \quad , \quad \sigma _{v}=1 \end{aligned}$$
(15)

Here \(\Gamma\) is the standard Gamma function.

The D-dimensional step s obtained by Eq. (13) is added to the updated orca location \(\textrm{pos}_{{ch},i}\) to determine the new orca locations in the forcing behavior of the chasing phase. The distribution changes as the \(\beta\) parameter changes: lower values yield longer jumps, and larger values yield shorter jumps [34].
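The Mantegna step of Eqs. (13)–(15) can be sketched as follows; the function name and fixed seed are illustrative assumptions, and minimization toward \(\textrm{pos}_\textrm{optimal}^\textrm{it}\) is assumed as in the text.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def levy_step(pos_i, pos_optimal, beta=0.1):
    """D-dimensional Levy step via Mantegna's algorithm (Eqs. (13)-(15));
    the step is scaled by the distance of the orca from the best orca."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)                       # Eq. (15); sigma_v = 1
    D = pos_i.shape[0]
    u = rng.normal(0.0, sigma_u, size=D)             # Eq. (14)
    v = rng.normal(0.0, 1.0, size=D)
    return 0.01 * u / np.abs(v) ** (1 / beta) * (pos_i - pos_optimal)  # Eq. (13)
```

The step vanishes once an orca coincides with the best orca, which is consistent with the guarantee below that the optimal orca remains unchanged.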

The value of the \(\beta\) parameter is one of the most significant factors to consider when executing an LF movement. According to Yang and Deb [41], the \(\beta\) parameter yielded diverse outcomes at various values in the experiments undertaken in their study on multi-objective CS for design optimization; it can therefore be stated that, for each benchmark function, a different \(\beta\) value produced a more effective outcome. Furthermore, in Chang-Yong Lee's research on evolutionary algorithms with adaptive Lévy mutations [43], several constant values of the \(\beta\) parameter were chosen, operations were carried out by computing the distribution for each of these values, and the best of the offspring created was picked. As these two instances show, the \(\beta\) parameter has a significant impact on movement. In this study, the \(\beta\) parameter is set to a small constant value in the [0, 2] interval, equal to 0.1, to make the orcas perform long jumps. OPA changes its velocity under the influence of both local and global factors. Nevertheless, because orcas grow similar after a certain number of iterations (loss of diversity), velocity changes become insignificant, resulting in the loss of global search capability. The LFOPA aims to preserve diversity while improving global search ability: for small values of the \(\beta\) parameter, it conducts long jumps, which remedies the global search weakness and prevents getting locked in local minima.

Unlike the LF in cuckoo search, where the step length s is computed by subtracting two random cuckoos under the LD, here \(\textrm{pos}_\textrm{optimal}^\textrm{it}\) is subtracted from the current orca \(\textrm{pos}_{i}^\textrm{it}\). As a result, the current orca \(\textrm{pos}_{i}^\textrm{it}\) moves towards the optimal orca \(\textrm{pos}_\textrm{optimal}^\textrm{it}\), and the optimal orca value is guaranteed to remain unchanged. If the \(\beta\) value is chosen as a small value in the acceptable range, it permits the orca to make extremely large jumps in the search space and avoids getting locked in local minima. In this study, since the parameter is given a constant small value equal to 0.1 for each orca, long jumps are ensured. Long jumps improved the search efficiency of CS in various circumstances, notably for multimodal, non-linear problems with multiple local minima but only one global minimum [41]. In this study, the global search was reinforced by employing a random walk with LF to overcome OPA's weakness of becoming locked in local minima, and this was found to produce better results for the CEC'20 benchmark functions, particularly the multimodal ones.
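The claim that smaller \(\beta\) yields longer jumps can be checked empirically. The snippet below compares typical Mantegna step magnitudes for the paper's \(\beta = 0.1\) against a large \(\beta\); it is an illustrative experiment with hypothetical names, not part of the LFOPA definition.

```python
import math
import numpy as np

rng = np.random.default_rng(5)

def step_magnitudes(beta, n):
    """Draw n Mantegna step magnitudes |u| / |v|^(1/beta) (Eqs. (13)-(15))."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=n)
    v = rng.normal(0.0, 1.0, size=n)
    return np.abs(u) / np.abs(v) ** (1 / beta)

# The typical (median) jump is far longer for beta = 0.1 than for a
# large beta, matching the paper's choice of a small beta for
# aggressive global search.
long_jumps = np.median(step_magnitudes(0.1, 100_000))
short_jumps = np.median(step_magnitudes(1.9, 100_000))
```

The median is used rather than the mean because the heavy Lévy tail makes the mean dominated by a few extreme jumps.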

4.2 Embedding the GS Strategy into the Algorithm

The GS strategy from the DE algorithm is embedded into the proposed LFOPA to keep superior solutions in subsequent generations (it avoids losing superior solutions in later optimization iterations). In other words, this strategy supports the principle of survival of the fittest and is applied here with a certain probability, p. According to this strategy, the most superior orcas in each iteration continue to be enhanced in future iterations, while the least superior orcas are discarded. This strategy is mathematically formulated as follows:

$$\begin{aligned} \textrm{pos}_i^{it+1}= {\left\{ \begin{array}{ll}\textrm{pos}_i^\textrm{it} &{}f\left( \textrm{pos}_i^{new}\right) >f(\textrm{pos}_i^\textrm{it}) \; \text{ and } \; h<p \\ \textrm{pos}_i^{new} &{} \text{ Otherwise }, \end{array}\right. } \end{aligned}$$
(16)

where h and p are random values drawn from the uniform distribution on [0, 1], \(f(\textrm{pos}_i^\textrm{it})\) is the fitness value of the last location \(\textrm{pos}_i^\textrm{it}\), and \(\textrm{pos}_i^{new}\) indicates the new location determined by the LFOPA after applying the chasing and attacking phases. The p value in Eq. (16) is drawn at random within the interval [0, 1] in each iteration; this property confirms the random nature of LFOPA. By incorporating GS into LFOPA, the search skills are enhanced, since each pioneer orca has a better chance of surviving and subsequently sharing the information it gathers with the other hunters throughout the subsequent stages of the search. Furthermore, it encourages LFOPA to converge on higher-quality solutions, stabilizing the diversification and intensification trends.
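Eq. (16) amounts to a probabilistic greedy selection. A minimal sketch for a minimization problem follows (the function names are illustrative, not from the paper):

```python
import random


def greedy_selection(pos_it, pos_new, fitness, p=None):
    """Keep the old position when the new one is worse (sketch of Eq. 16)."""
    if p is None:
        p = random.random()  # p drawn uniformly in [0, 1] each iteration
    h = random.random()
    # Minimization: a higher fitness for pos_new means it is worse.
    if fitness(pos_new) > fitness(pos_it) and h < p:
        return pos_it
    return pos_new
```

With p close to 1, the rule behaves like a strict greedy selection; with smaller p, worse candidates occasionally survive, which preserves diversity.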

To clarify more, Fig. 1 depicts the flowchart of the proposed LFOPA. Also, the steps for the implementation of the LFOPA are illustrated as follows:

Step 1: Initialize the algorithm population pos according to N, D, lb, and ub values. Likewise, initialize other parameters, including \(\textrm{it}_\textrm{max}\), p1, p2, Q, \(\beta\), and h.

Step 2: Assess the orca troop individuals utilizing the fitness function, and select the optimal one as \(\textrm{pos}_\textrm{optimal}\).

Step 3: Apply the chasing phase in cooperation with the LF strategy to the orca troop individuals utilizing Eqs. (1)–(4), (11), and (6)–(7). According to this phase, the orcas choose to drive or surround their prey based on the selection parameter p1. Subsequently, they locate the prey and modify their own locations through sonar.

Step 4: Apply the attacking phase to the orca troop individuals utilizing Eqs. (8)–(10). According to this phase, the orcas attack the prey and modify their locations through sonar, during which some orcas may move beyond the edge of the shoal, and their locations are replaced by lb.

Step 5: Apply the GS strategy utilizing Eq. (16).

Step 6: Build the new population. Once the GS strategy is completed, the orcas are rebuilt into a new troop.

Step 7: Check the termination criterion. It is satisfied when the current iteration it reaches \(\textrm{it}_\textrm{max}\); otherwise, the procedure is repeated from Step 2.
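Steps 1–7 can be condensed into a loop skeleton. The chasing and attacking phases (Eqs. (1)–(11)) are not reproduced in this section, so they are replaced below by a placeholder random walk towards the current optimum; only the overall control flow and the greedy selection of Eq. (16) follow the paper:

```python
import random


def lfopa(fitness, n=30, dim=10, lb=-100.0, ub=100.0, it_max=1000):
    """Skeleton of the LFOPA loop (Steps 1-7); the chasing/attacking
    phases are simplified placeholders, not the paper's Eqs. (1)-(11)."""
    # Step 1: initialize the troop uniformly inside [lb, ub].
    pos = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    best = min(pos, key=fitness)  # Step 2: select pos_optimal
    for _ in range(it_max):       # Step 7: loop until it_max
        for i, orca in enumerate(pos):
            # Steps 3-4 (placeholder): move towards the current optimum,
            # clamping out-of-bound coordinates to [lb, ub].
            new = [
                min(ub, max(lb, x + random.random() * (b - x)))
                for x, b in zip(orca, best)
            ]
            # Step 5: greedy selection (Eq. 16) with h, p ~ U[0, 1].
            h, p = random.random(), random.random()
            if not (fitness(new) > fitness(orca) and h < p):
                pos[i] = new
        # Step 6: rebuild the troop and keep the elite solution.
        best = min(pos + [best], key=fitness)
    return best
```

Including the old `best` in the final `min` call mirrors the elitism guaranteed by the GS strategy: the optimal orca is never lost between iterations.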

4.3 Time Complexity

The proposed LFOPA is broken down into four levels for the evaluation of time complexity, and each level’s time complexity analysis and calculation procedure will be discussed in detail.

Level 1: Initialization. Given that the troop size is N and each orca is a D-dimensional vector, and the initialization level involves giving each orca a value in each of the D dimensions, the time complexity of this level is O(N \(\times\) D).

Level 2: Evaluation of the orca troop. In this level, the fitness score of each of the N orcas is computed once per iteration. Consequently, the time complexity is O(N).

Level 3: Chasing phase. This level includes two sub-levels. In the first sub-level, there are two procedures to adjust the locations of the orcas. If the orcas adjust their locations according to Eqs. (1)–(4) and (11), the adjustment is applied to each orca, and the time complexity is O(N); if the orcas apply Eqs. (6)–(7) to adjust their locations, the adjustment is applied to each dimension of each orca, and the time complexity is O(N \(\times\) D). In the second sub-level, it is determined whether the orcas have discovered better locations, which is only associated with the number of orcas, and hence the time complexity is O(N).

Level 4: Attacking phase. As in Level 3, the computation of the time complexity of this level comprises two sub-levels. In the first sub-level, location adjustment is applied to each individual, and the time complexity is O(N). In the second sub-level, it is determined whether the orcas have discovered better locations. Because the number of adjusted orcas is uncertain, the exact time complexity cannot be computed; denoting the number of orcas to be adjusted by \({\hat{N}}\), the time complexity of this sub-level is O(\({\hat{N}}\) \(\times\) D). Because \({\hat{N}}\) is smaller than N, the time complexity of the proposed LFOPA is lower than O(N \(\times\) D), the same bound as the original OPA.

Overall, LFOPA has low computing complexity and, as a result, a fast calculation speed.

Fig. 1
figure 1

Flowchart of the proposed LFOPA

5 Experimental Results and Analysis

Performance assessment and experimentation of the suggested LFOPA on 10 standard benchmark test beds are shown in this section. Below is a detailed description of various benchmark test routines, including parameter settings and statistical assessment metrics. The performance is compared to that of other well-known MAs. The trials were carried out on a PC with a 64-bit Core i7 CPU running at 3.20 GHz and 8 GB of main memory, using Matlab software version R2014a.

5.1 Parameter Settings

The parameter settings utilized in the trials for all of the competing algorithms are shown in Table 1. The number of search agents (N), the maximum number of iterations (\(\textrm{it}_\textrm{max}\)), and the problem scale or dimension (D) have been set to 30, 1000, and 10, respectively. For statistical analysis, each benchmark problem is subjected to 30 independent runs (R) per method for all algorithms under consideration. Moreover, the parameters of each algorithm are set to their defaults as reported in [44] to lessen the possibility of parametrization bias.

Table 1 Parameters setting for LFOPA and other competitor algorithms

5.2 Statistical Assessment Metrics

To assess effectiveness and validate the proposed LFOPA, the following assessment measures are used:

  1.

    Mean: the arithmetic mean of the fitness scores produced by the algorithm over R independent runs. It is provided by

    $$\begin{aligned} Mean = \frac{{\mathop \sum \nolimits _{i = 1}^R \left( {{f_i}} \right) }}{R} \end{aligned}$$
    (17)
  2.

    Standard deviation (SD): the variation in fitness function scores acquired after running the algorithm R times. It is provided by

    $$\begin{aligned} SD = \sqrt{\frac{1}{R-1} \sum _{i=1}^{R}\left( f_{i}-Mean\right) ^{2}} \end{aligned}$$
    (18)
  3.

    Best: the lowest fitness score acquired after running the algorithm R times. It is provided by

    $$\begin{aligned} Best = \mathop {\min }\limits _{1 \le i \le R} {f_i} \end{aligned}$$
    (19)
  4.

    Worst: the highest fitness score acquired after running the algorithm R times. It is provided by

    $$\begin{aligned} Worst = \mathop {\max }\limits _{1 \le i \le R} {f_i} \end{aligned}$$
    (20)

    where \({f_i}\) is the optimal fitness score acquired at run i.
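These four metrics can be computed in a few lines; note that Eq. (18) uses the sample standard deviation with an R−1 denominator (the function name is illustrative):

```python
import math


def run_metrics(f):
    """Mean, SD, Best, Worst of per-run optimal fitness scores (Eqs. 17-20)."""
    R = len(f)
    mean = sum(f) / R
    # Sample standard deviation, matching the R - 1 denominator of Eq. (18).
    sd = math.sqrt(sum((fi - mean) ** 2 for fi in f) / (R - 1))
    return {"Mean": mean, "SD": sd, "Best": min(f), "Worst": max(f)}
```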

5.3 Experimental Series 1: CEC’20 Test Suite

The proposed LFOPA is tested on the ten functions of the CEC'20 benchmarks [45]. For each benchmark function, the method is run for 1000 iterations, over 30 independent runs, with a problem scale of 10. In general, there are four types of benchmark test functions: unimodal, multimodal, hybrid, and composite functions. The LFOPA is compared in this regard to numerous well-known MAs, including the IMODE [46], CMA-ES [47], GSA [48], GWO [11], MFO [49], HHO [31], and OPA [18] techniques.

The benchmark routines used are summarized as follows. First, the unimodal function F1 has just one global optimum and is used to compare the proposed LFOPA's intensification capability with that of the competing MAs. As seen in Table 2, the LFOPA competes quite well with the comparable MAs: in the tested dimension, it obtains the best result for function F1 and shows good intensification. Second, the multimodal functions F2–F4 differ from the unimodal functions in their number of local optima, which grows exponentially with the scale of the problem. Benchmark functions of this type are therefore highly useful for assessing the diversification capability of the contrasted MAs. The LFOPA has near-perfect diversification capabilities, as seen from the findings in Table 2 for functions F2–F4; across a variety of benchmark problems, the LFOPA is consistently the most effective. This achievement is the outcome of the LFOPA's integrated diversification procedures, which pull the search towards the desired global optimum. Finally, for the proposed LFOPA and the competitive algorithms, the hybrid (F5–F7) and composite (F8–F10) functions represent a collection of the most difficult problems, used to test both the diversification/intensification balancing capability and the mechanism for escaping local optima entrapment. The findings presented in Table 2 for functions F5–F10 demonstrate that the LFOPA is successful, since it gives the best outcomes and performance in terms of the Mean and SD statistics for \(D=10\) when compared to the other competitor algorithms.

Table 2 Mean and SD statistics of fitness scores acquired over 30 independent runs by competitive MAs on CEC’20 test beds (F1–F10) with D = 10

5.3.1 Diversification and Intensification Phases Analysis

For a better understanding of the proposed LFOPA's diversification and intensification balancing capabilities, Fig. 2 illustrates the arithmetic mean of the proposed LFOPA's diversification and intensification phases based on Eq. (21). As illustrated by the curves in Fig. 2, the LFOPA produces excellent diversification rates for locating the global optima. In addition, the LFOPA has a high tendency to escape local optima and maintains a fair balance between the diversification and intensification stages.

$$\begin{aligned} \textrm{Var}_{j}=\frac{1}{N} \sum _{i=1}^{N} \left| {\text {Median}}\left( \textrm{pos}^{j}\right) -\textrm{pos}_{i}^{j}\right| ; \quad \textrm{Var}^\textrm{it}=\frac{1}{D} \sum _{j=1}^{D} \textrm{Var}_{j} \end{aligned}$$
(21)

where Median(\(\textrm{pos}^j\)) is the statistical median score of the \(j^{th}\) scale over all N search agents and \(\textrm{pos}_{i}^{j}\) is the \(j^{th}\) scale of the \(i^{th}\) search agent. \(\textrm{Var}_j\) is the arithmetic mean diversity score estimation for scale j. This scale-wise diversity is then averaged over all D scales to give \(\textrm{Var}^\textrm{it}\) for iteration \(\textrm{it}\), with \(\textrm{it} = 1, 2, \dots , \textrm{it}_\textrm{max}\). Once the diversity of the search agents has been estimated for all \(\textrm{it}_\textrm{max}\) iterations, it is possible to measure what proportion of the search mechanism was diversification and what proportion was intensification. The diversification/intensification proportion estimation is provided by

$$\begin{aligned} \begin{array}{l}{\text {diversification } \%=\frac{Var^\textrm{it}}{\textrm{Var}_{\max }} * 100} \\ \\ {\text {intensification } \%=\frac{\left| Var^\textrm{it}-\textrm{Var}_{\max }\right| }{\textrm{Var}_{\max }} * 100}\end{array} \end{aligned}$$
(22)

where \(\textrm{Var}_\textrm{max}\) is the highest diversity acquired in all \(\textrm{it}_\textrm{max}\).
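The diversity measure and the phase percentages can be sketched as follows, assuming \(\textrm{Var}_j\) is the mean absolute deviation from the per-dimension median and that the scale-wise values are averaged over the D dimensions, as the surrounding text describes (function names are illustrative):

```python
import statistics


def diversity(pos):
    """Population diversity Var^it of Eq. (21): per-dimension mean absolute
    deviation from the median, averaged over all D dimensions."""
    n, d = len(pos), len(pos[0])
    total = 0.0
    for j in range(d):
        med = statistics.median(orca[j] for orca in pos)
        total += sum(abs(med - orca[j]) for orca in pos) / n  # Var_j
    return total / d


def phase_percentages(var_it, var_max):
    """Diversification %, intensification % split of Eq. (22)."""
    return (var_it / var_max * 100, abs(var_it - var_max) / var_max * 100)
```

By construction the two percentages sum to 100 whenever \(\textrm{Var}^\textrm{it} \le \textrm{Var}_\textrm{max}\), which matches the complementary curves of Fig. 2.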

In this context, Table 2 shows that the suggested LFOPA provides the optimal solution to the six benchmark challenges (F5–F10) compared with the other competitors. This result suggests that the LFOPA balances the diversification and intensification stages well. The pulling capability of the adaptive delineation is used to update the \(\textrm{pos}\) vector: some iterations focus on diversification (\(|\textrm{pos}|\) \(\ge\) 1), whereas others focus on intensification (\(|\textrm{pos}|\) < 1).

Fig. 2
figure 2

Diversification (exploration) and intensification (exploitation) phases for the suggested LFOPA on patterns of CEC’20

5.3.2 Boxplot Behaviour Analysis

The boxplot analysis shows the distribution features of the data. Because this category of functions is connected to a large number of local minima, a boxplot of the outcomes for each method and function is provided in Fig. 3 to help comprehend the distribution of outcomes. Boxplots are an excellent way to visualize data distributions in quartiles. The whisker edges are the lowest and highest data points reached by the algorithm, and the ends of the rectangle define the lower and upper quartiles. A narrow boxplot implies a high level of data agreement. Figure 3 illustrates the boxplots of the ten CEC'20 functions for D = 10. The proposed LFOPA's boxplots are relatively narrow compared to the other algorithms' distributions for most functions, and they also have the lowest values. On most of the test functions, the suggested LFOPA outperforms the other algorithms; only on F1, F2, and F3 is its performance restricted. This is because of the impact of the LF movement: it makes the algorithm's individuals perform sudden moves with a certain step length and direction in the search space, especially in the chasing phase. This causes various mutations in the optimal fitness scores obtained by the algorithm, which in turn negatively affects the standard deviation of these scores (the data distributions). Indeed, the LF movement aids the algorithm in discovering global solutions and avoiding local optima entrapment, resulting in optimal fitness scores but sometimes in a wide boxplot, as in the case of F1, F2, and F3.

Fig. 3
figure 3

The boxplot graphs of the suggested LFOPA and other competitors acquired on patterns of CEC’20 with D=10

5.3.3 Convergence Behaviour Analysis

For all competing algorithms, the convergence curve visualizes the best score acquired by each algorithm for the considered fitness function (pattern) at iteration \(\textrm{it}\) (\(\textrm{it} = 1, 2, \dots , \textrm{it}_\textrm{max}\)); it therefore shows which algorithm reaches the optimal solution most rapidly (convergence speed). Furthermore, the convergence curve serves as a graphical inspection of the ability of the competing MAs to minimize or maximize the fitness function, aiding the analysis of MA convergence rates. The convergence curves of the suggested LFOPA and the other competitors over 1000 iterations and 30 independent runs are visualized in Fig. 4.

The convergence curves of the suggested LFOPA and its competitors for the CEC'20 patterns are shown in Fig. 4. For all considered patterns (F1–F10), the suggested method reaches a stable point, indicating that it is convergent. Furthermore, for most patterns, the suggested method delivers the lowest solutions and reaches them fastest. Because of its swift convergence to a near-optimal solution, the suggested LFOPA is a promising MA for tackling problems that require quick computing, such as online optimization problems.

Fig. 4
figure 4

The convergence curves of the suggested LFOPA and other competitors acquired on patterns of CEC’20 with D=10

5.3.4 Qualitative Metrics Analysis

Even though the past results corroborate the suggested LFOPA's superior efficiency, further trials and investigation enable firmer conclusions about the algorithm's efficiency for real-world problem solving. Tracking the behaviour of the orcas, i.e., the search agents, gives further insight into the optimization search mechanism and the method's convergence. Figure 5 depicts the qualitative study of the proposed LFOPA; it shows the agents' behaviours, comprising 2D views of the CEC'20 patterns (F1–F10), the search history, the trajectory, and the average fitness history.

The following points are worthwhile from the qualitative metrics analysis:

  • Domain topology—patterns in 2D visualization: The first column of Fig. 5 depicts the pattern in 2D space. The patterns have distinct topologies, which can be used to figure out what type/shape of patterns the algorithm performs best with.

  • Search history: The second column of Fig. 5 depicts the search history of the orcas during optimization. Over the course of 1000 iterations, the search history covers all of the historical locations of the 30 orcas. Analyzing these graphics aids in better understanding how LFOPA seeks out the best solution. The orcas in F1, F3–F5, F8, and F10 show clear linear motion trajectories, whereas the motion in F2, F6, F7, and F9 is more chaotic. This phenomenon demonstrates that the orcas have two motion modes: linear approximation, which converges to the best solution more quickly, particularly for unimodal functions, and multi-directional movement, which extends the algorithm's search interval and aids in preventing the algorithm from falling into local optima for multimodal functions. By combining these two search strategies, LFOPA can better balance the diversification and intensification phases.

  • Trajectory: The trajectory indicates the location of the first scale of the first orca in each iteration. As shown in the third column of Fig. 5, the orca oscillates during the initial iterations but gradually settles down. However, a major mutation may occur during the stabilization procedure, possibly because the orca's locations follow a different distribution as a result of the LF strategy utilized in the location-updating mechanism of the chasing phase.

  • Average fitness: The variation of the arithmetic mean score of all orcas throughout the solution procedure is represented by the mean fitness. As shown in the fourth column of Fig. 5, the LFOPA rapidly converges to the best solution and the history curves all decrease, indicating that the population improves with each iteration; however, stochastic mutations take place during the convergence procedure because the new location distribution of the orcas during the attacking phase has a significant influence on the arithmetic mean of the total fitness pattern.

Fig. 5
figure 5

The qualitative metrics on patterns of CEC’20: 2D views of the patterns, search history, trajectory, and average fitness history

5.4 Experimental Series 2: Optimization Engineering Challenges

Even though the previous findings strongly suggest that the proposed LFOPA can handle real issues more efficiently than OPA and the other competitors, four real structural challenges, namely welded beam design, tension/compression spring design, pressure vessel design, and speed reducer design, are utilized to demonstrate the validity of the proposed LFOPA in tackling engineering issues with unknown search spaces. These challenges are utilized to compare the proposed LFOPA with the competing algorithms. Distinct restrictions exist in engineering design challenges, and a handling approach is needed to address them. There are various constraint management strategies in the literature [9], including penalty functions (e.g., adaptive, annealing, dynamic, static, and death penalty functions). In this study, the death penalty function is used to manage the various limitations in the engineering design challenges: during the optimization procedure, it automatically discards infeasible solutions by assigning them a large fitness value. Its most significant advantages are its minimal computing cost and simplicity. The proposed LFOPA's performance is compared to that of seven well-known MAs: IMODE, CMA-ES, GSA, GWO, MFO, HHO, and OPA.
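A minimal sketch of the death penalty approach follows; the `g(x) <= 0` constraint representation, the wrapper name, and the penalty magnitude are illustrative assumptions, not taken from the paper:

```python
def death_penalty(fitness, constraints, big=1e12):
    """Wrap a fitness function so infeasible solutions get a huge value.

    `constraints` is a list of callables; a solution x is feasible when
    g(x) <= 0 for every constraint g.
    """
    def penalized(x):
        if any(g(x) > 0 for g in constraints):
            return big  # death penalty: discard the infeasible candidate
        return fitness(x)
    return penalized
```

Because the wrapper only touches the objective, any minimizing MA (LFOPA included) can use it unchanged, which is why the technique is cheap and simple.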

5.4.1 Welded Beam Design

The target function of the welded beam design issue, suggested in [50], minimizes the fabrication cost of the welded beam. The cost of a welded beam design like the one depicted in Fig. 6 is minimized while keeping certain limits in mind. The bending stress in the beam \((\theta )\), the shear stress \((\tau )\), the buckling load \((P_c)\), and the end deflection of the beam \((\delta )\) are the optimization constraints of this issue. The thickness of the weld (h), the length of the clamped bar (l), the height of the bar (t), and the thickness of the bar (b) are the four optimization variables.

The acquired optimization outcomes, as shown in Table 3, show that the proposed LFOPA can effectively address the welded beam issue with the best outcomes by lowering its cost. The statistical findings in Table 4 were acquired across 30 independent runs for the proposed LFOPA and the competing algorithms, with \(\textrm{it}_\textrm{max}\) set to 1000. It is apparent that LFOPA can effectively discover the best design at the lowest cost.

Fig. 6
figure 6

A schematic clarification for the welded beam challenge

Table 3 The optimal scores acquired by the LFOPA and other competitors for tackling the welded beam design issue
Table 4 The statistical outcomes acquired for the LFOPA and other competitors on tackling the welded beam design issue

5.4.2 Tension/Compression Spring Design

The major goal of this engineering design issue, stated in [51], is to make the tension/compression spring as light as possible. Figure 7 shows a schematic example of the tension/compression spring issue. Surge frequency, shear stress, and deflection limitations must all be met in the ideal design. The wire diameter (d), the mean coil diameter (D), and the number of active coils (N) are the three optimization variables.

Table 5 shows the acquired optimization outcomes, indicating that the proposed LFOPA has achieved excellent outcomes in addressing this issue by minimizing the tension/compression spring weight. Table 6 presents the statistical data acquired for the competing methods over 30 independent runs, with \(\textrm{it}_\textrm{max}\) = 1000.

Fig. 7
figure 7

A schematic clarification for the tension/compression spring challenge

Table 5 The optimal scores acquired by the LFOPA and other competitors for tackling the tension/compression spring design issue
Table 6 The statistical outcomes acquired for the LFOPA and other competitors on tackling the tension/compression design issue

5.4.3 Pressure Vessel Design

The target function of the pressure vessel design issue reduces the overall cost of the cylindrical pressure vessel (forming, material, and welding) [52]. Figure 8 depicts a schematic model of the pressure vessel issue. The thickness of the shell \((T_s)\), the thickness of the head \((T_h)\), the inner radius (R), and the length of the cylindrical section without taking into consideration the head (L) are the four optimization variables.

The acquired optimization outcomes are reported in Table 7, demonstrating that the suggested LFOPA produced excellent outcomes in addressing this issue by decreasing the overall cost of the cylindrical pressure vessel (forming, material, and welding). Table 8 shows the statistical data acquired for the proposed LFOPA and the other competitors over 30 independent runs, with \(\textrm{it}_\textrm{max}\) = 1000.

Fig. 8
figure 8

A schematic clarification for the pressure vessel challenge

Table 7 The optimal scores acquired by the LFOPA and other competitors for tackling the pressure vessel design issue
Table 8 The statistical outcomes acquired for the LFOPA and other competitors on tackling the pressure vessel design issue

5.4.4 Speed Reducer Design

The speed reducer design issue is deeply explained in [53]. Multiple constraints are addressed in this issue to minimise the weight of the speed reducer, including surface stress, bending stress of the gear teeth, stress in the shafts, and transverse deflections of the shafts. Seven variables are investigated in this issue: face width \(x_1\), module of teeth \(x_2\), number of teeth in the pinion \(x_3\), length of the first shaft between bearings \(x_4\), length of the second shaft between bearings \(x_5\), diameter of the first shaft \(x_6\), and diameter of the second shaft \(x_7\), as depicted in Fig. 9.

Table 9 shows the obtained optimization outcomes, indicating that the proposed LFOPA has achieved excellent outcomes in addressing this issue by decreasing the weight of the speed reducer. Table 10 presents the statistical data acquired for the competing methods over 30 independent runs, with \(\textrm{it}_\textrm{max}\) = 1000. It is also clear that the suggested LFOPA achieves outstanding outcomes in terms of the mean, SD, best, and worst metrics.

Fig. 9
figure 9

A schematic clarification for the speed reducer challenge

Table 9 The optimal scores acquired by the LFOPA and other competitors for tackling the speed reducer design issue
Table 10 The statistical outcomes acquired for the LFOPA and other competitors on tackling the speed reducer design issue

6 Application of LFOPA: Node Localization Challenge in WSN

A wireless sensor network (WSN) is made up of hundreds or thousands of low-cost nodes that communicate with one another [54, 55], and has various applications, such as environmental and structural surveillance, object tracking, and target detection. Additionally, it can be utilized for military purposes [56, 57]. In many of these applications, tiny nodes are placed across the area to be tracked, potentially covering a huge geographical area. Each node is a small device that gathers data from the adjacent environment via one or more sensors, analyses it locally, and sends it across a wireless protocol (ZigBee, IEEE 802.15.4) to the sink node [58]. The nodes' tiny size and low cost impose various physical constraints, for example, the inability to host powerful microprocessors or large memory chips, limiting their computing and storage abilities. Furthermore, they are often powered by small batteries that are difficult to replace or recharge. As a result of these constraints, it is necessary to conserve energy to prolong the network's lifespan [59, 60].

Information about the position of sensor nodes (node localization) is critical in many current WSN applications, including environmental surveillance, precision farming, collision avoidance, and logistic support. Furthermore, by eliminating the need for route detection, position-based routing protocols can save significant energy and enhance cache behavior for applications with position requests. Ultimately, position information can help to improve security [61]. Although a global positioning system (GPS) can in principle provide position awareness, this strategy is not always feasible in practice due to the expense and energy usage of GPS receivers. Furthermore, GPS is not well adapted to indoor and subsurface deployments, and obstacles such as dense vegetation or tall buildings can obstruct satellite transmission.

These restrictions have prompted alternative approaches known as localization strategies, which are discussed in [62,63,64]. In these strategies, only a few nodes in the network (anchor nodes) are given precise positions by GPS or manual positioning, while all nodes can determine their distances to surrounding nodes using a measurement strategy, as depicted on the left side of Fig. 10. These distance-related strategies include received signal strength (RSS) measurements, time difference of arrival (TDoA), time of arrival (ToA), etc. Thus, given that the coordinates of the anchor nodes are known, and exploiting pairwise distance measurements among the nodes, the localization issue is to estimate the locations of all non-anchor (unknown) nodes. This mission has proven to be tough for two reasons: first, identifying the node positions from a collection of pairwise distance measurements is a non-convex optimization issue; second, noise unavoidably corrupts the measurements supplied to the nodes [65].

The issue of node localization in WSNs is considered an NP-hard optimization issue [66, 67]. Traditional deterministic techniques and algorithms cannot handle NP-hard issues in an acceptable period. In this scenario, non-deterministic (stochastic) optimization algorithms are preferable, as they mimic natural groups of organisms such as swarms of fish, colonies of bees and ants, troops of bats and cuckoo birds, and so on. These algorithms are founded on four self-organization concepts: positive feedback, negative feedback, multiple interactions, and fluctuations. They are stochastic, population-based, and iterative search methods [68]. Accordingly, stochastic optimization algorithms that can produce approximate global optimum solutions are required to handle the issue of node localization in WSNs [69, 70]. These algorithms require moderate memory and computational effort, employing their search agents to mimic WSN nodes and then determining positions based on the position-updating behavior of each algorithm, as depicted on the right side of Fig. 10. In this paper, the LFOPA and other competitive algorithms are employed to localize the unknown nodes with the lowest rates of squared error and average localization error. In the LFOPA and the other competitors, the population is restricted to a specific range (lb and ub) to avoid energy waste due to insignificant searches. In addition, the impact of characteristics such as node density, anchor proportion, and various communication ranges on the squared error and the average localization error is investigated using the LFOPA and the other competitors. A detailed overview of the network model, the utilized objective function, and a discussion of the obtained simulation outcomes follows.

Fig. 10
figure 10

Node localization strategies in WSNs

6.1 Network Model

Consider a WSN composed of n nodes scattered in a specific region of interest (\(\textrm{ROI} = H \times W\), where H is the height of the region and W is its width); such nodes share the same communication range (network connectivity) \(R_c\) and the same sensing range \(R_s\). This \(\textrm{ROI}\) includes m anchor nodes and \(n-m\) non-anchor (unknown) nodes, where \(m<n\). The coordinates of the non-anchor nodes must meet the space distance limitation, to keep the calculated positions close to the actual values. The wireless sensors' limited facilities, in terms of bandwidth, energy, and memory, are distributed equally over the nodes; the sink node has its own memory and bandwidth. All the sensing nodes in the \(\textrm{ROI}\) can sense environmental and physical conditions, share them with each other, and send them directly to the sink node. Furthermore, the nodes are randomly distributed in the \(\textrm{ROI}\) by a certain mechanism (a topology construction algorithm [71]), in which no node has any regional knowledge. Supposing that two nodes i and j belonging to the same deployment area \(\textrm{ROI}\) are within each other's connectivity range, the inter-node ranging distance \(d_{ij}\), which can be acquired by the RSS measurement strategy, is indicated by

$$\begin{aligned} d_{i j}=r_{i j}+e_{i j}, \end{aligned}$$
(23)

where \(r_{i j}=\sqrt{\left( x_{i}-x_{j}\right) ^{2}+\left( y_{i}-y_{j}\right) ^{2}}\) is the actual distance between the two nodes and \(e_{i j}\) is the ranging error of RSS strategy which follows a zero mean Gaussian distribution with variance \(\sigma ^{2}\) [63]; i.e. the variance of \(e_{i j}\) is given by \(\sigma ^{2}=\alpha ^{2} r_{i j}^{2}\). A value of \(\alpha =0.1\) is utilized in the simulation experiments.
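Eq. (23) can be simulated directly; the function name and the 2D coordinate tuples below are illustrative:

```python
import math
import random


def measured_distance(node_i, node_j, alpha=0.1):
    """Noisy inter-node distance of Eq. (23): d_ij = r_ij + e_ij,
    with e_ij ~ N(0, (alpha * r_ij)^2)."""
    r_ij = math.dist(node_i, node_j)       # actual Euclidean distance
    e_ij = random.gauss(0.0, alpha * r_ij)  # RSS ranging error
    return r_ij + e_ij
```

Setting `alpha=0.1` reproduces the noise level used in the paper's simulation experiments; larger values make the ranging error proportionally noisier.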

6.2 Objective Function

The aim is to calculate the positions of the unknown nodes \(i=m+1,\ldots, n\) as accurately as possible. To this end, the objective function SE is formulated as follows:

$$\begin{aligned} \hbox {Minimize} \;\;\; \hbox {SE}=\sum _{i=m+1}^{n}\left( \sum _{j \in N_{i}}\left( {\hat{d}}_{i j}-d_{i j}\right) ^{2}\right) \end{aligned}$$
(24)

where \(N_{i}\) is the neighbour set of node i, which is given by

$$\begin{aligned} N_{i}=\left\{ j \in 1, \ldots , n, \quad j \ne i: r_{i j} \le R_c\right\} \end{aligned}$$
(25)

where \(R_c\) is the communication range of node i, and \({\hat{d}}_{i j}\) is the calculated distance between nodes i and j, indicated by

$$\begin{aligned} {\hat{d}}_{i j}= {\left\{ \begin{array}{ll}\sqrt{\left( {\hat{x}}_{i}-x_{j}\right) ^{2}+\left( {\hat{y}}_{i}-y_{j}\right) ^{2}} &{} \text {if node } j \text { is an anchor,} \\ \sqrt{\left( {\hat{x}}_{i}-{\hat{x}}_{j}\right) ^{2}+\left( {\hat{y}}_{i}-{\hat{y}}_{j}\right) ^{2}} &{} \text {otherwise,}\end{array}\right. } \end{aligned}$$
(26)

where \(\left( {\hat{x}}_{i}, {\hat{y}}_{i}\right)\) and \(\left( {\hat{x}}_{j}, {\hat{y}}_{j}\right)\) are estimated coordinates of unknown nodes i and j.

After the algorithm determines the positions of the non-anchor nodes \(i=m+1,\ldots, n\), the objective function SE measures the squared error between the inter-node distances obtained by the algorithm (\({{\hat{d}}}_{ij}\)) and the inter-node distances obtained by the RSS measurement strategy (\(d_{ij}\)) for the calculated positions.
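Eqs. (24)–(26) can be sketched as a small Python evaluator. This is a minimal illustration, not the paper's implementation: the names `squared_error`, `est`, `anchors`, and `measured` are our own, and we assume the `measured` dictionary already contains one RSS distance \(d_{ij}\) per neighbour pair \((i, j)\) with \(j \in N_i\).

```python
import math

def squared_error(est, anchors, measured):
    """Objective SE of Eq. (24): for each non-anchor node i, sum the squared
    difference between the estimated distance d_hat_ij (Eq. (26)) and the
    RSS-measured distance d_ij over i's neighbour set.

    est      -- dict: unknown-node id -> estimated (x, y)
    anchors  -- dict: anchor-node id  -> known (x, y)
    measured -- dict: (i, j) pair     -> measured distance d_ij
    """
    pos = {**anchors, **est}  # anchors keep true coordinates, per Eq. (26)
    se = 0.0
    for (i, j), d_ij in measured.items():
        if i not in est:      # only non-anchor nodes contribute to SE
            continue
        d_hat = math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])
        se += (d_hat - d_ij) ** 2
    return se
```

A stochastic optimizer such as LFOPA would minimize this function over the estimated coordinates in `est`; SE reaches zero only when every estimated inter-node distance matches its measured counterpart.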

6.3 Performance Metrics

In most cases, the node communication range \(R_c\) is linked to the localization error. To measure the localization efficiency of the suggested LFOPA and the other competitors, two types of localization error are established based on \(R_c\). The first is the single localization error (SLE) of a non-anchor node, given by Eq. (27), which measures the accuracy of each calculated node. The second is the average localization error (ALE) of the network, calculated using Eq. (28), which assesses the performance over all the nodes under consideration.

$$\begin{aligned} \text {SLE }= & {} \frac{\sqrt{\left( {\hat{x}}_{i}-x_{i}\right) ^{2}+\left( {\hat{y}}_{i}-y_{i}\right) ^{2}}}{R_c} * 100 \% \text{, } \end{aligned}$$
(27)
$$\begin{aligned} \text {ALE }= & {} \frac{\sum _{i=1}^{n-m} \sqrt{\left( {\hat{x}}_{i}-x_{i}\right) ^{2}+\left( {\hat{y}}_{i}-y_{i}\right) ^{2}}}{(n-m) \times R_c} * 100 \% \end{aligned}$$
(28)
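Both metrics translate directly into code. The following sketch (function names are our own) expresses the errors of Eqs. (27) and (28) as percentages of the communication range \(R_c\):

```python
import math

def single_localization_error(est, true, Rc):
    """SLE of Eq. (27): position error of one non-anchor node,
    normalized by the communication range Rc, as a percentage."""
    return math.hypot(est[0] - true[0], est[1] - true[1]) / Rc * 100.0

def average_localization_error(est_list, true_list, Rc):
    """ALE of Eq. (28): mean position error over all n - m non-anchor
    nodes, normalized by Rc, as a percentage."""
    total = sum(math.hypot(e[0] - t[0], e[1] - t[1])
                for e, t in zip(est_list, true_list))
    return total / (len(est_list) * Rc) * 100.0
```

For example, an estimate 5 m away from the true position under \(R_c = 10\) m yields an SLE of 50%.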

6.3.1 Simulation Experiments and Discussion

A set of n nodes is randomly deployed in a specific \(\textrm{ROI} = 500 \;m \times 500 \;m\), with m anchor nodes, to measure the localization efficiency of the suggested LFOPA and the other competitors in terms of squared error and localization error. To investigate the impact of characteristics such as node density, anchor proportion, and various communication ranges on the average localization error, the network node density n is set to 50, 100, 150, 200, 250, and 300 nodes. Likewise, the proportion of anchor nodes m is set to 5%, 10%, 15%, 20%, 25%, and 30%. Furthermore, the node communication range \(R_c\) is set to 10 m, 20 m, 30 m, 40 m, 50 m, and 60 m. Note that all simulation experiments use \(\textrm{it}_\textrm{max}=1000\) and 30 independent runs to obtain the Mean, SD, Best, and Worst statistics of the objective function.
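A deployment matching these settings can be sketched as follows. This is a simplified illustration assuming uniform random placement; the paper's topology construction algorithm [71] may differ, and the function name `deploy_network` is our own.

```python
import random

def deploy_network(n, anchor_ratio, H=500.0, W=500.0, seed=None):
    """Scatter n nodes uniformly at random in the H x W ROI and mark the
    first m = round(anchor_ratio * n) of them as anchors.
    Returns (anchor_positions, unknown_positions)."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(0.0, W), rng.uniform(0.0, H)) for _ in range(n)]
    m = round(anchor_ratio * n)
    return nodes[:m], nodes[m:]

# Example: n = 100 nodes with a 20% anchor proportion, as in Table 13.
anchors, unknowns = deploy_network(100, 0.20, seed=42)
```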

The following points are worth noting from the discussion of the simulation experiments:

  • The performance of the competitors on the objective function: Under the condition of a 20% anchor proportion and \(R_c\) = 60 m, Table 11 and Fig. 11 show the obtained statistics and the convergence curves of the objective function SE for all node densities, respectively. It is clear from Table 11 that the proposed LFOPA achieves the lowest statistics in comparison with the other competitors, indicating its superior performance in decreasing the objective function and producing the lowest squared error for the calculated positions of non-anchor nodes in all the considered network sizes. Furthermore, Fig. 11 depicts the convergence behaviour of the proposed LFOPA and the other competitors for all the considered network sizes. According to Fig. 11, the suggested LFOPA converges fastest towards the best solution in all the network sizes studied, indicating its high convergence speed towards the optimal solutions.

  • The influence of network node density on average localization error: Table 12 and Fig. 12 display the average localization errors, measured when varying the network node density n while keeping the anchor proportion constant at 20% and \(R_c\) = 60 m.

    (i) All of the average localization errors of the eight approaches decrease as the number of nodes increases, and once the number of nodes exceeds 150, node density has minimal influence on the errors. The errors produced by LFOPA are clearly lower than those produced by the other competitors, owing to the strong search ability of LFOPA, which drives it to converge towards optimal solutions.

    (ii) Furthermore, LFOPA achieves better efficiency than OPA because the calculation precision benefits from the LF and GS strategies in LFOPA; the average localization error is reduced by 3.80%, 4.60%, 4.68%, 4.95%, 5.70%, and 4.50% compared to OPA at node densities of 50, 100, 150, 200, 250, and 300, respectively.

    (iii) Further investigation reveals that, despite network topology changes as the number of nodes increases, LFOPA remains robust, keeping average localization errors below 15% when the number of nodes exceeds 100.

  • The influence of the anchor proportion on the average localization error: Table 13 and Fig. 13 display the average localization errors, measured when varying the anchor proportion while keeping the node density constant at n = 100 and \(R_c\) = 60 m.

    (i) All the average localization errors decrease as the anchor proportion rises, owing to the growing number of anchor nodes around non-anchor nodes, which enhances the localization accuracy.

    (ii) Under the same anchor proportions, LFOPA achieves better localization precision than OPA. Compared to OPA, LFOPA reduces the average localization error by 3.70% and 2.81% at anchor proportions of 15% and 25%, respectively.

    (iii) LFOPA is reliable, showing no efficiency degradation: its average localization errors remain below 17% when the anchor proportion exceeds 10%. These findings demonstrate that changing the anchor proportion has little effect on LFOPA's strong efficiency.

  • The influence of the node communication range \(R_c\) on the average localization error: Table 14 and Fig. 14 display the average localization errors, measured when varying the node communication range \(R_c\), while keeping the node density constant at n = 100 and 20% anchor proportion.

    (i) The outcomes reveal that the localization precision of each technique improves as the node communication range increases. This is because a larger communication range allows non-anchor nodes to interact with more neighbour nodes, thereby improving the searchability of stochastic optimization methods.

    (ii) As the node communication range approaches 60 m, LFOPA and the other competitors continue to reduce the average localization errors. This behaviour is induced by the increase in the number of neighbour nodes that results from the larger communication range.

    (iii) LFOPA is reliable, with no discernible efficiency degradation: its average localization errors remain below 20% when the node communication range exceeds 30 m.

  • Finally, for graphical representation, Fig. 15 visualizes the locations of anchor nodes, original locations of non-anchor nodes, and calculated locations of non-anchor nodes obtained by the LFOPA for all the considered node densities n = 50, 100, 150, 200, 250, and 300 under the condition of 20% anchor proportion and \(R_c\) = 60 m.

The outcomes of the simulation trials discussed in this section illustrate that the suggested LFOPA has significant efficiency in addressing the node localization problem in WSNs, as it provided superior results in terms of squared errors and average localization errors compared to OPA and the other competitors under various conditions, including diverse node densities, anchor node proportions, and node communication ranges.

Table 11 Statistical outcomes of the competitors on the objective function over various node densities
Fig. 11

Convergence of the competitors over various node densities

Table 12 Average localization errors with various node densities
Fig. 12

Average localization errors with various node densities

Table 13 Average localization errors with various anchor proportions
Fig. 13

Average localization errors with various anchor proportions

Table 14 Average localization errors with various node communication ranges \(R_c\) in different algorithms
Fig. 14

Average localization errors with various node communication ranges \(R_c\) in different algorithms

Fig. 15

WSN localization using LFOPA

7 Conclusion and Future Work

The LF and GS strategies are merged into OPA to develop a new upgraded version of OPA called LFOPA. This improvement reinforces the algorithm's global searchability and avoids entrapment in local optima, thereby improving the quality of the acquired solutions. The suggested LFOPA is validated on ten CEC'20 test functions and is used to address four engineering design problems: the welded beam, the tension/compression spring, the pressure vessel, and the speed reducer. It is also employed to address the challenge of node localization in WSNs. The proposed LFOPA is evaluated in comparison with IMODE, CMA-ES, GSA, GWO, MFO, HHO, and the original OPA. The experimental findings on the CEC'20 test suite and the engineering challenges demonstrate the proposed LFOPA's superiority in producing high-quality solutions compared to OPA and the other competitors. The proposed LFOPA also provided the best performance on the node localization problem by producing low squared errors and average localization errors for the non-anchor nodes. In the future, LFOPA can be applied to other challenges such as data mining, image processing, data clustering, and industrial activities. Furthermore, LFOPA can be extended with multi-objective versions to cope with multi-objective optimization problems.