Introduction

Low-emission technologies such as distributed generation (DG) and electric vehicles are increasingly being adopted in distribution networks. These technologies are widely promoted to meet growing power demand and to address economic and environmental factors. Various control schemes have been embraced in active distribution networks to overcome the power quality issues arising from these penetrations and to determine the suitable site and size of penetration. Mathematical optimization is the key technology broadly implemented to validate such control schemes. Optimization has remained in the limelight and is a major focus of researchers across various fields of engineering; the rapid depletion of existing resources and the drive for profit maximization could be the reasons for this [1]. This work highlights the role of one such optimization algorithm, the Hunter–Prey Optimization Algorithm (HPO), in addressing the challenges posed by these low-emission technologies.

Background classification of optimization methods

Optimization approaches developed to address the issues discussed above are commonly studied under four types [2], as depicted in Fig. 1:

  • Multi-objective optimization: This approach simultaneously considers two or more divergent objectives, such as power loss reduction and voltage stability enhancement. Multi-objective optimization can yield a series of optimal solutions that constitute a trade-off between the individual objectives, letting decision-makers opt for the solution that best fits their needs.

  • Hybrid optimization: This technique involves combinations of various meta-heuristic algorithms to overcome individual deficiencies and improve the algorithm's strength, convergence speed, and accuracy.

  • Machine learning optimization: This technique is widely used for large-scale systems to extract data and predict a much more accurate optimal solution at a faster rate.

  • Robust optimization: This technique considers the sensitivity of system parameters. It deals with uncertainties in the optimization problem like renewable energy introduction, sudden load demand, etc.

Fig. 1

Types of optimization methods

On the other hand, optimization methods can be broadly classified into classical optimization techniques, sensitivity index methods, meta-heuristic methods, and mixed techniques [2]. Classical optimization techniques involve traditional mathematical methods, namely Newton–Raphson (NR) [3], Linear Programming (LP) [4], Non-Linear Programming (NLP) [5], Mixed-Integer Programming (MIP) [6], Mixed-Integer Linear Programming (MILP) [7], Mixed-Integer Non-Linear Programming (MINLP) [8, 9], and Dynamic Programming (DP) [7].

Sensitivity index methods involve the calculation of sensitivity index factors for optimization solutions. The Loss Factor Index (LFI), Power Loss Index (PLI), Loss Sensitivity Index (LSI), and voltage indices like the Voltage Stability Index (VSI) and Average Voltage Deviation Index (AVDI) are a few that aid in finding an optimization solution considering the sensitivity parameters.

Mixed methods combine traditional mathematical and meta-heuristic methods with a sensitivity indices approach for a better solution than individual performances. Hybrid works of PSO-GSA [10], GA-GSA [11], fuzzy-GA [12], and fuzzy-DE [13] are a few of them.

The meta-heuristic methods of optimization approach can be classified under the categories of:

  1. Biology-based optimization

  2. Physics-based optimization

  3. Geography-based optimization

  4. Other population-based optimization

Biology-based optimization

These types of methods, which are inspired by biological and natural activities [14], can be further classified into:

  a. Evolutionary algorithms

  b. Swarm-based algorithms

(a) Evolutionary algorithms

Evolutionary algorithms include the Genetic Algorithm (GA), a global adaptive search optimization algorithm based on natural selection [15]; Evolutionary Programming (EP), put forward by D. B. Fogel in 1990 [16, 17]; Evolution Strategy (ES), inspired by the biological principles of evolution [18]; and the Differential Evolution (DE) algorithm, developed by Storn et al. [19], which is widely accepted for swiftly handling nonlinear, multimodal, non-differentiable cost functions [20]. GA employs binary coding to represent the problem parameters, while DE uses floating-point representation, making it more accurate with more adaptive control parameters [21, 22].

(b) Swarm-based algorithms

Swarm-based algorithms are emerging nature-inspired techniques that mimic the interactional behavior of large, homogeneous species among themselves and with their environment. Bird flocks, ant colonies, fish shoals, and honeybees form a few agents of these swarm-based algorithms. Particle Swarm Optimization (PSO), proposed by Kennedy et al. [23] and inspired by flocks of birds, is a broadly accepted swarm technique. The Bacterial Foraging Optimization Algorithm (BFO), developed by Passino [24], emulates the foraging behavior of Escherichia coli bacteria; the Cuckoo Search Algorithm (CS) by Yang et al. [25] mimics the breeding practice of cuckoo birds; Ant Colony Optimization (ACO) rests on the foraging behavior and pheromone-based communication of ants; the Firefly Algorithm (FA), developed by Yang [26], is motivated by the flashing behavior of fireflies; and the home search strategy of pigeons inspired the Pigeon-Inspired Optimization (PIO) algorithm and its improved version, proposed by Duan et al. [27, 28]. Other swarm-based optimizations include the Coral Reef Optimization algorithm (CRO) by Salcedo-Sanz et al. [29], the Artificial Immune System (AIS) based on clone generation and maturation [30], the Whale Optimization Algorithm (WOA) [31], Cat Swarm Optimization (CaSO) [32], the Crow Search Algorithm (CSA) [33], the Moth-Flame Optimization algorithm (MFO) [34], the Grey Wolf Optimizer (GWO) [35], the Flower Pollination Algorithm (FPA) [36], Honey Bee Mating Optimization (HBMO) [37, 38], the Symbiotic Organism Search algorithm, and the Butterfly Optimization Algorithm (BOA) [39, 40]. The Camel Search Algorithm (CA) [41], Bat Algorithm (BA) [42], and Grasshopper Optimization Algorithm (GOA) [43] sum up the group.

Physics-based optimization algorithms

These optimization algorithms are inspired by the laws of physics, the physical behavior of matter, or its physical properties. The Gravitational Search Algorithm (GSA), rooted in the law of gravitation and the laws of motion, by Rashedi et al. [44]; Simulated Annealing (SA), which follows the physical annealing process used for crystallization [45, 46]; the Magnetic Optimization Algorithm (MOA), based on the attraction–repulsion principle of magnets, by Tayarani et al. [47]; and the Intelligent Water Drop (IWD) algorithm, inspired by river flow and introduced by Hosseini [48], can be classified under the physics-based algorithms. Multiverse Optimization (MVO) [49], the Atom Search Optimization (ASO) algorithm [50], Curved Space Optimization (CuSO) [51], the Galaxy-Based Search Algorithm (GBSA) [52], the Water Cycle Algorithm (WCA) [53], the Black Hole (BH) algorithm [54], and the Harmony Search (HS) algorithm [55] add up to the list.

Geography-based algorithms

The Imperialistic Competition Algorithm (ICA), pioneered by Gargari et al. [56], which treats countries’ colonies and imperialists as the population, and Tabu Search (TS), which follows a search-escape pattern and was put forward by Glover [57], come under the geography-based algorithms.

Other population-based algorithms

Sine Cosine Algorithm (SCA) [58], Parallel Seeker Optimization Algorithm (PSOA) [59], Artificial Rabbit Optimization (ARO) algorithm [32], Hunter–Prey Optimization (HPO) [60], Teaching–Learning-Based Optimization Algorithm (TLBO) [61], Bald Eagle Search Algorithm (BESA) [62], Chaotic Optimization Algorithm (COA) [63], political optimizer [64], Paddy Fields Algorithm (PFA) [65], Saplings Growth Algorithm (SGA) [66], Human-Inspired Algorithm (HIA) [67] are the other population-based optimization algorithms.

Figure 2a shows the classification of algorithms, and Fig. 2b gives another classification based on the nature of inspiration. Since nature-inspired optimization algorithms draw on the behavior of groups or flocks of animals, they can conveniently be classified into a) animal inspired, b) bird inspired, c) insect inspired, d) plant inspired, and e) human inspired.

Fig. 2

a Classification of optimization algorithms. b Classification based on mode of inspiration

Literature on Hunter–Prey Optimization

HPO is a nature-inspired, population-based optimization algorithm proposed by Naruei et al. [60] in 2022 to address optimization problems in different engineering fields. Many researchers have employed the algorithm to solve various issues. In [68], the authors applied the HPO algorithm for the optimal positioning of PV-STATCOM, depreciating energy loss and improving the voltage profile. Active power loss minimization, greenhouse gas emission reduction, and improvement of the PV hosting capacity and voltage profile are the objectives in [69], which applies the HPO algorithm. Article [70] studies the algorithm for optimal PV placement with real power loss reduction and voltage profile enhancement. A combined HPO-HDL algorithm is put forward in [71] for fake news detection, where HDL stands for Hybrid Deep Learning, an AI technique. HPO is used to identify the parameters of the solar PV cells of the R.T.C. France and STM-6/120 models [72]. A tabularized representation of these works is shown in Table 1.

Table 1 Works on Hunter–Prey Optimization (HPO)

There are also articles on improvised versions of the standard HPO (IHPO). Reference [73] defined an enhanced HPO algorithm for optimal FACTS and wind DG placement with power loss reduction, cost reduction, and voltage enhancement as objectives. IHPO with an extreme learning machine to predict wind power output with improved accuracy is studied in [74]. A hybrid combination of IHPO and Convolutional Neural Networks (CNN) to assess structural damage in buildings and construction is proposed in [75]. Reference [76] uses the IHPO to plan a robot path-finding algorithm in unknown surroundings. A table representing the discussed works is depicted in Table 2.

Table 2 Works on improved Hunter–Prey Optimization (IHPO)

HPO motivation

HPO's elegance lies in its relative simplicity. It uses a small set of intuitive rules to traverse and exploit the search space effectively, making it computationally efficient and potentially applicable to many optimization problems. Like other algorithms, HPO has exploration and exploitation phases after initialization, and it differs in how effectively it balances these two phases. On the other hand, reactive power can be optimally dispatched by finding the prime locations of the desired devices using the proposed algorithm. Hence, applying this newly emerging algorithm to engineering optimization problems motivates this article.

Research gap and challenges

Several optimization algorithms have been proposed recently; many are successfully in use, while many others are in the developing and testing stages. At this point, developing a new algorithm to showcase its supremacy could be challenging. However, as an answer to the question ‘What is the need for a new algorithm?’, the NFL theorem has been put forward in [77]. According to the No Free Lunch (NFL) theorem, no single algorithm can solve all optimization problems, so researchers may suggest new optimization algorithms or improve existing techniques to solve a subset of problems in various fields. The new HPO algorithm can handle optimization problems in different engineering and non-engineering fields.

Contribution of the article

  • This article reviews the HPO algorithm, demonstrating its working phases: initialization, exploration, and exploitation, along with the parameters deciding the algorithm.

  • The paper also details the newer versions of the algorithm, explaining the improvements made to the standard algorithm.

  • The application of HPO in electrical engineering for optimal DG and capacitor bank placement is showcased. Overall, the paper briefs the new nature-inspired HPO algorithm, its variants, and applications.

The remainder of the paper is organized as follows: Sect. 2 details the standard HPO, Sect. 3 the improvised versions of HPO, Sect. 4 the discussions, and Sect. 5 the conclusions.

Standard Hunter–Prey Optimization Algorithm (HPO)

Inspiration

Hunter–Prey Optimization (HPO) takes a captivating approach to problem-solving, drawing inspiration from the dynamic world of predator–prey interactions; it mimics the strategies predators use to hunt and capture their prey. Consider the scenario of a hunter searching for prey: since prey usually move in groups, the hunter chooses a prey far from the flock (the average herd position). After the hunter finds his prey, he chases and hunts it. At the same time, the prey searches for food, escapes the predator's attack, and tries to reach a safe place, which corresponds to the fitness function. HPO is a class of swarm intelligence algorithm and falls under the broader category of meta-heuristic algorithms used for optimization problems.

Mathematical model of the algorithm

Naruei et al. [60] proposed HPO, a new intelligent optimization algorithm with fast convergence and high optimization potential. The general structure of any such algorithm begins with the initialization of the population, \(\vec{x} = \left\{ {\overrightarrow {{x_{1} }} ,\overrightarrow {{x_{2} }} , \ldots ,\overrightarrow {{x_{n} }} } \right\}\). For every member, the objective function is computed. The positions of the hunter and the prey are updated at every iteration, evaluating the objective function until the algorithm stops. The initial position of a member is given by Eq. (1) from [60],

$$x_{i} = {\text{rand}}\left( {1,d} \right)*\left( {u - l} \right) + l$$
(1)

where xi is the initial position of the hunter or prey, rand(1,d) is a vector of random numbers in [0,1], u and l are the upper and lower boundaries, and d is the dimension of the problem. The objective function is then evaluated, \({\text{OF}} = f\left( {\vec{x}} \right)\). Exploration and exploitation are the stages that follow initialization. These stages involve a search mechanism that pilots the search agents toward the optimal solution. Equation (2) defines the hunter's position as

$$x_{i,j} \left( {t + 1} \right) = x_{i,j} \left( t \right) + 0.5\left[ {\left( {2CZP_{{{\text{pos}}}} - x_{i,j} \left( t \right)} \right) + \left( {2\left( {1 - C} \right)Z\mu_{i} - x_{i,j} \left( t \right)} \right)} \right]$$
(2)

where xi,j(t) defines the current position of the hunter and xi,j(t + 1) its position at the next iteration. Ppos represents the position of the prey, and C and Z are the balance and adaptive parameters, respectively. μ is the mean of the positions and is evaluated using Eq. (3)

$$\mu = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \overrightarrow {{x_{i} }}$$
(3)

where n is the number of search agents, and \(\overrightarrow {{x_{i} }}\) gives the position of the i-th member.
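As a minimal Python/NumPy sketch (an illustration of Eqs. (1) and (3), not the authors' reference code), the population can be initialized and its mean position computed as:

```python
import numpy as np

def initialize_population(n, d, lower, upper, rng=None):
    """Eq. (1): x_i = rand(1, d) * (u - l) + l for each of n search agents."""
    rng = rng or np.random.default_rng()
    return rng.random((n, d)) * (upper - lower) + lower

# e.g. 5 agents in a 3-dimensional search space bounded by [-10, 10]
pop = initialize_population(5, 3, -10.0, 10.0)
mu = pop.mean(axis=0)   # Eq. (3): mean (average herd) position
```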

For a random vector P with 0 and 1 values,

$$P = \overrightarrow {{R_{1} }} < C, IDX = \left( {P = = 0} \right)$$
(4)

With R1 being a random vector, IDX gives the indices of the R1 vector for which P == 0. The adaptive parameter Z can be evaluated from Eq. (5) with R2 and R3 as random vectors in [0,1]; ⊗ denotes element-wise multiplication.

$$Z = \overrightarrow {{R_{2} }} \otimes IDX + \overrightarrow {{R_{3} }} \otimes \left( { \sim IDX} \right)$$
(5)

The balance parameter between the stages of exploration and exploitation, C, can be obtained from Eq. (6)

$$C = 1 - {\text{itr}}\left( {\frac{0.98}{{{\text{itr}}_{{{\text{max}}}} }}} \right)$$
(6)

itr is the current iteration, and itrmax is the maximum iteration count. Over the iterations, the value of C decreases from 1 to 0.02.
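The parameter schedules of Eqs. (4)–(6) can be sketched in Python/NumPy as follows (a hedged illustration of the equations as printed):

```python
import numpy as np

def balance_and_adaptive(itr, itr_max, d, rng=None):
    """Compute the balance parameter C (Eq. 6) and adaptive vector Z (Eqs. 4-5)."""
    rng = rng or np.random.default_rng()
    C = 1 - itr * (0.98 / itr_max)      # Eq. (6): decays linearly from 1 to 0.02
    R1 = rng.random(d)
    IDX = ~(R1 < C)                     # Eq. (4): positions where P == 0
    R2, R3 = rng.random(d), rng.random(d)
    Z = R2 * IDX + R3 * (~IDX)          # Eq. (5): element-wise combination
    return C, Z
```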

The prey’s position Ppos is calculated using the average value μ from Eq. (3) and the Euclidean distance obtained from Eq. (7).

$$D_{Eu\left( i \right)} = \sqrt[2]{{\mathop \sum \limits_{j = 1}^{d} \left( {x_{i,j} - \mu_{j} } \right)^{2} }}$$
(7)

From Eq. (8), the member at the maximum distance is considered prey.

$$P_{{{\text{pos}}}} = \overrightarrow {{x_{i} }} \left| {\;i\;{\text{is}}\;{\text{the}}\;{\text{index}}\;{\text{of}}\;{\text{Max}}\left( {{\text{sort}}\left( {D_{Eu} } \right)} \right)} \right.$$
(8)

The hunter easily takes down the animal that strays from the group, then goes for the next, and this continues. However, convergence would be late if the search agent farthest from the mean were chosen every time. To avoid this situation, Eq. (9) is defined for N search agents.

$$k{\text{best}} = {\text{round}}\left( {C \times N} \right)$$
(9)

Now, the new prey position can be given by

$$P_{{{\text{pos}}}} = \overrightarrow {{x_{i} }} \left| {i\;{\text{ is}}\;{\text{ sorted}}\; D_{Eu} \left( {k{\text{best}}} \right)} \right.$$
(10)
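The prey-selection mechanism of Eqs. (3), (7), (9), and (10) can be sketched as follows (ascending-distance sorting is an assumption of this illustration):

```python
import numpy as np

def select_prey(pop, C):
    """Pick the prey among the sorted distances using kbest (Eqs. 3, 7, 9, 10)."""
    n = pop.shape[0]
    mu = pop.mean(axis=0)                           # Eq. (3): mean of all positions
    d_eu = np.sqrt(((pop - mu) ** 2).sum(axis=1))   # Eq. (7): Euclidean distances
    kbest = max(1, round(C * n))                    # Eq. (9): shrinks as C decays
    order = np.argsort(d_eu)                        # sort distances ascending
    return pop[order[kbest - 1]]                    # Eq. (10): kbest-th sorted member
```

With C near 1, kbest equals n and the farthest member becomes the prey; as C decays, the choice moves toward the mean of the herd.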

The kbest value equals N at the start but gradually decreases over the iterations until it reaches the first member. This is because the hunter chooses the prey at a farther distance each time, and thus the kbest value decreases at each iteration. In the hunting scenario, the prey tries to escape from the attacker and reach its herd; hence, the safe position of the prey can be regarded as the optimal solution. The prey escape phase equation can be given as

$$x_{i,j} \left( {t + 1} \right) = T_{{{\text{pos}}\left( j \right)}} + CZ{\text{cos}}\left( {2\pi R_{4} } \right) \times \left( {T_{{{\text{pos}}\left( j \right)}} - x_{i.j} \left( t \right)} \right)$$
(11)

Here, in Eq. (11), xi,j(t) and xi,j(t + 1) are the current and next positions of the prey. Z and C are calculated from Eqs. (5) and (6), respectively. R4 is a random vector in [− 1,1], and Tpos is the global optimum position. The significance of the cos function is that it places the prey's next location around the global optimum at various radii and angles, governed by the input parameters. To distinguish the hunter and prey mathematically, Eqs. (2) and (11) are combined using another random vector R5 in [0,1] and a regulating parameter β = 0.1,

$$x_{i,j} \left( {t + 1} \right) = \left\{ {\begin{array}{*{20}l} {x_{i,j} \left( t \right) + 0.5\left[ {\left( {2CZP_{{{\text{pos}}}} - x_{i,j} \left( t \right)} \right) + \left( {2\left( {1 - C} \right)Z\mu_{\left( j \right)} - x_{i,j} \left( t \right)} \right)} \right]} \hfill & {\left( {12{\text{a}}} \right)} \hfill \\ {T_{{{\text{pos}}\left( j \right)}} + CZ\cos \left( {2\pi R_{4} } \right) \times \left( {T_{{{\text{pos}}\left( j \right)}} - x_{i,j} \left( t \right)} \right)} \hfill & {\left( {12{\text{b}}} \right)} \hfill \\ \end{array} } \right.$$
(12)

Equation 12a gives the next position of the hunter for R5 < β, and Eq. 12b updates the prey’s position for R5 > β. The flow chart of HPO is shown in Fig. 3.
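Putting the pieces together, one full HPO iteration (Eqs. (2)–(12)) might look like the following Python/NumPy sketch for a minimization problem; it is an illustrative reading of the equations, not the authors' implementation:

```python
import numpy as np

def hpo_step(pop, fitness, t, itr_max, beta=0.1, rng=None):
    """One iteration of the hunter/prey position updates of Eq. (12)."""
    rng = rng or np.random.default_rng()
    n, d = pop.shape
    C = 1 - t * (0.98 / itr_max)                          # Eq. (6)
    scores = np.array([fitness(x) for x in pop])
    T_pos = pop[scores.argmin()]                          # global best so far (T_pos)
    mu = pop.mean(axis=0)                                 # Eq. (3)
    d_eu = np.sqrt(((pop - mu) ** 2).sum(axis=1))         # Eq. (7)
    kbest = max(1, round(C * n))                          # Eq. (9)
    P_pos = pop[np.argsort(d_eu)[kbest - 1]]              # Eq. (10)
    new_pop = np.empty_like(pop)
    for i in range(n):
        IDX = ~(rng.random(d) < C)                        # Eq. (4)
        Z = rng.random(d) * IDX + rng.random(d) * (~IDX)  # Eq. (5)
        if rng.random() < beta:                           # hunter update, Eq. (12a)
            new_pop[i] = pop[i] + 0.5 * ((2 * C * Z * P_pos - pop[i])
                                         + (2 * (1 - C) * Z * mu - pop[i]))
        else:                                             # prey update, Eq. (12b)
            R4 = rng.uniform(-1.0, 1.0, d)
            new_pop[i] = T_pos + C * Z * np.cos(2 * np.pi * R4) * (T_pos - pop[i])
    return new_pop
```

Repeatedly applying `hpo_step` while tracking the best evaluated solution yields the loop of the flow chart.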

Fig. 3

Flow chart for HPO

The following are the suppositions made by the author [60] for the HPO algorithm to provide appropriate solutions:

  • Random selection of the hunter and prey assures search-space exploration. Also, there is a low probability of getting stuck in a local optimum.

  • Search space exploration by selecting the farthest member as prey and the mechanism of the mean distance reduction with every iteration ensure the convergence of the algorithm and its exploitation.

  • The adaptive parameter escalates population diversity, reduces the sensitivity of the hunter and prey positions, and guarantees the algorithm’s convergence.

  • The HPO algorithm has few adjustment parameters and is a non-gradient (derivative-free) algorithm.

Parameters of HPO

The parameters of HPO can be identified as population-related and movement-related parameters. Population size and maximum iterations (itrmax) are population-based parameters, and regulating parameter β and balance parameter C are the movement-related parameters. The fitness function can be regarded as the other parameter of HPO. Table 3 summarizes the parameters of a few referred algorithms:

Table 3 Parameters of various algorithms

Complexity analysis of HPO

The complexity of an optimization algorithm refers to the computational resources it requires to solve a problem. There are two main aspects to consider:

Time complexity

This measures the time it takes for the algorithm to execute, typically expressed in terms of the number of basic operations it performs. Here are some common factors influencing time complexity:

  • Population size (N): Many optimization algorithms work with a population of solutions. The complexity often increases as the population grows, as the algorithm must evaluate and update each solution.

  • Number of iterations (T): Most algorithms iterate through a loop, refining their solutions. The complexity increases with the number of iterations required for convergence.

  • Problem dimensionality (D) refers to the number of optimized variables. For some algorithms, the complexity increases with the number of variables as the search space increases.

  • Function complexity: The complexity of the objective function (what the algorithm is trying to optimize) can also play a role. Evaluating a more complex function in each iteration can increase the overall complexity of the time.

Space Complexity

This refers to the amount of memory required by the algorithm to run. Here's what typically contributes to space complexity:

  • Storing solutions: The algorithm needs to store information about each solution in the population, including its position in the search space. This memory usage scales with the population size (N) and problem dimension (D).

  • Additional data structures: Some algorithms might use additional data structures like sorting mechanisms or temporary variables. These can contribute to the overall complexity of space, but their impact is usually smaller than that of storing solutions.

The space complexity is computed similarly for most algorithms as O(N × D), where N is the population size and D is the dimension.

O(N), the most commonly used complexity notation, denotes linear complexity, meaning the execution time grows linearly with the population size or the number of iterations.

Typically, the complexity of the HPO algorithm depends on four components: initialization, updating the hunter, updating the prey, and fitness evaluation. The computational complexity of the initialization process with N search agents is O(N). The computational complexity of the update process can be given as \(O\left( {T \times N} \right) + O\left( {\left( {1 - \beta } \right) \times T \times N \times D} \right) + O\left( {\beta \times T \times N \times D} \right)\), where T denotes the maximum number of iterations (earlier referred to as itrmax), D denotes the number of problem variables, and β is a regulatory parameter with a value of 0.1 [60]. Hence, the total complexity is \(O\left( {N \times \left( {T + \left( {1 - \beta } \right)TD + \beta TD + 1} \right)} \right)\).

The complexity of PSO is O(D × N × itermax): the computation time grows linearly with the problem dimensionality D, swarm size N, and maximum iterations itermax. The complexity of the standard GWO algorithm is O(N × d × Tmax), where N is the population size, d is the dimension, and Tmax is the maximum number of iterations. ALO's time complexity is generally considered to be O(D × N × T), where D represents the dimensionality of the problem (the number of variables being optimized), N the population size (number of antlions), and T the maximum number of iterations. The total time complexity of WOA is generally considered to be O(M × N × D + f(N)), where M represents the maximum number of iterations, N the population size, D the number of dimensions of the problem, and f(N) the time required to evaluate the fitness function for N individuals. The complexity of HHO is generally considered to be O(N(D + T)), where T is the maximum iteration count, D is the dimension, and N is the population size. O(N × T) gives the complexity of SCA, where T represents the total number of iterations the algorithm runs for and N is the population size. Compared to HHO, SCA avoids the additional complexity of interactions between solutions (like hunting behavior), making it slightly more efficient. In general, Tabu Search is not considered to have a polynomial time complexity, as determining a specific time complexity for Tabu Search is observed to be complicated.

Improved versions of the Hunter–Prey Optimization Algorithm

This section discusses various recent works with improvised versions of the HPO algorithm in varied fields of engineering.

Improvising regulating parameter

Reference [76] proposes an Improved HPO for robot path planning with an upgraded adjusting (regulating) parameter β and a new parameter called the changing parameter (CP). The new β is given as

$$\beta = 2 \times {\text{rand}} - 1$$
(13)

The changing parameter addresses the absence of a transfer parameter in the standard HPO. It increases the exploration speed and thus allows a faster transition to exploitation. Equation (14) gives the CP value, followed by the refined position-defining equations.

$${\text{CP}} = {\text{sin}}\left( {C - \frac{t}{T}} \right)$$
(14)
$$x_{i,j} \left( {t + 1} \right) = \left\{ {\begin{array}{*{20}l} {x_{i,j} \left( t \right) + 0.5\left[ {\left( {2\alpha CZP_{{{\text{pos}}}} - x_{i,j} \left( t \right)} \right) + \left( {2\left( {1 - C} \right)Z\mu_{j} - x_{i,j} \left( t \right)} \right)} \right]} \hfill & {\left( {15{\text{a}}} \right)} \hfill \\ {T_{{{\text{pos}}\left( j \right)}} + \alpha CZ\cos \left( {2\pi R_{4} } \right) \times \left( {T_{{{\text{pos}}\left( j \right)}} - x_{i,j} \left( t \right)} \right)} \hfill & {\left( {15{\text{b}}} \right)} \hfill \\ \end{array} } \right.$$
(15)

where α = β × CP.
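The modified parameters of Eqs. (13)–(14) and the combination α = β × CP can be sketched as (a hedged Python illustration):

```python
import numpy as np

def alpha_factor(C, t, T, rng=None):
    """Scaling factor alpha = beta * CP used in the updates of Eq. (15)."""
    rng = rng or np.random.default_rng()
    beta = 2 * rng.random() - 1     # Eq. (13): uniform in [-1, 1]
    CP = np.sin(C - t / T)          # Eq. (14): changing parameter
    return beta * CP
```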

In this version of IHPO, the author upgrades the regulatory parameter for randomization, prevents early convergence, and thereby helps alter the search direction. The proposed algorithm is juxtaposed with the contemporary Particle Swarm Optimization (PSO), Salp Swarm Algorithm (SSA), Fitness-Dependent Optimizer (FDO), and the conventional COOT and HPO algorithms on 13 widely used benchmark criteria and 30-dimensional functions. The IHPO performs best compared with the other algorithms. The work combines the nature-inspired algorithm with local search and block-detection strategies for robot path planning in an unrecognized environment. The author notes future scope for testing on a real TurtleBot robot. There is also scope to implement the algorithm in electrical distribution networks with renewable source integration.

Improvising initialization phase

Reference [75] presents a developed version of HPO that is upgraded at the initialization stage. Instead of the random initialization of the conventional method, this work uses tent chaotic mapping for initialization and a Cauchy distribution for the random variables to clear the periodic points.

$$y_{i + 1} = \left\{ {\begin{array}{*{20}l} {\eta y_{i} , 0 \le y_{i} \le 0.5} \hfill \\ {\eta \left( {1 - y_{i} } \right), 0.5 \le y_{i} \le 1} \hfill \\ \end{array} } \right.$$
(16)
$$y_{i + 1} = \left\{ {\begin{array}{*{20}l} {\mu y_{i} + {\text{cauchy}}\left( {0,1} \right) \times \frac{1}{N}, 0 \le y_{i} \le 0.5} \hfill \\ {\mu \left( {1 - y_{i} } \right) + {\text{cauchy}}\left( {0,1} \right) \times \frac{1}{N}, 0.5 \le y_{i} \le 1} \hfill \\ \end{array} } \right.$$
(17)

Equations (16) and (17) give the tent mapping and Cauchy distribution with η = 2, called the chaotic parameter. The initialization Eq. (1) is then modified to

$$x_{i} = y_{i} \times \left( {u - l} \right) + l$$
(18)
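A hedged Python sketch of this chaotic initialization (Eqs. (16)–(18)); the seed value y0 and the modulo wrap that keeps the perturbed sequence inside [0, 1) are assumptions of this illustration:

```python
import numpy as np

def tent_chaotic_init(n, d, lower, upper, eta=2.0, y0=0.37, rng=None):
    """Tent-map sequence with a Cauchy perturbation, mapped to the bounds."""
    rng = rng or np.random.default_rng()
    y = np.empty((n, d))
    yi = np.full(d, y0)
    for i in range(n):
        yi = np.where(yi <= 0.5, eta * yi, eta * (1 - yi))   # Eq. (16): tent map
        yi = yi + rng.standard_cauchy(d) / n                 # Eq. (17): perturbation
        yi = np.mod(yi, 1.0)                                 # keep sequence in [0, 1)
        y[i] = yi
    return y * (upper - lower) + lower                       # Eq. (18): map to bounds
```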

The author also suggests a linear combination of prey position, global optimum, and mean position, updating Eq. (12) as

$$x_{i,j} \left( {t + 1} \right) = \left\{ {\begin{array}{ll} {x_{i,j} \left( t \right) + 0.5\left[ {\left( {2CZ\left( {\frac{{P_{{{\text{pos}}\left( j \right)}} - T_{{{\text{pos}}\left( j \right)}} }}{2}} \right) - x_{i,j} \left( t \right)} \right) + 2\left( {1 - C} \right)Z\left( {\frac{{\mu_{j} - T_{{{\text{pos}}\left( j \right)}} }}{2}} \right) - x_{i,j} \left( t \right)} \right] 19a} \\ {T_{{{\text{pos}}\left( j \right)}} + CZ{\text{cos}}\left( {2\pi R_{4} } \right) \times \left( {\left( {\frac{{g{\text{best}}_{\left( j \right)} - T_{{{\text{pos}}\left( j \right)}} }}{2}} \right) - x_{i,j} \left( t \right)} \right) 19b} \\ \end{array} } \right.$$
(19)

The performance of the IHPO is compared to Differential Evolution (DE), Cuckoo Search Algorithm (CSA), Particle Swarm Optimization (PSO), Gray Wolf Optimizer (GWO), and Moth-Flame Optimization (MFO), and the convergence speed of the improved HPO is much faster than the rest. It also surpasses the rest of the algorithms' convergence efficiency, accuracy, and optimization ability. In the referred article, the algorithm and Convolutional Neural Network (CNN) are implemented to identify the structural damages in buildings and constructions.

The standard HPO can also be advanced to IHPO by upgrading the initialization phase [74]. To increase population diversity, a stochastic reverse learning technique has been introduced as

$$X_{{{\text{rand}}}} = l + u - r \cdot X$$
(19)

where l and u are the lower and upper boundaries, r is an arbitrary value in [0,1], X belongs to [l,u], and Xrand gives the random reverse solution. Stochastic reverse learning produces a random inverse solution from the present iteration during the population search, compares the objective function values of the two solutions, and then picks the better solution to proceed to the next iteration.
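The reverse-learning selection step can be sketched as follows (a minimization objective is assumed in this illustration):

```python
import numpy as np

def stochastic_reverse_learning(pop, lower, upper, fitness, rng=None):
    """Eq. (19): X_rand = l + u - r*X; keep whichever of X and X_rand is fitter."""
    rng = rng or np.random.default_rng()
    r = rng.random(pop.shape)
    reverse = lower + upper - r * pop                 # random reverse solutions
    keep = np.array([fitness(a) <= fitness(b) for a, b in zip(pop, reverse)])
    return np.where(keep[:, None], pop, reverse)      # row-wise better solution
```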

Also, the paper introduces a weighting function, ω, to improve the prey’s position equation; Eq. (11) is updated as Eq. (20).

$$x_{i,j} \left( {t + 1} \right) = \omega \cdot T_{{{\text{pos}}\left( j \right)}} + CZ{\text{cos}}\left( {2\pi R_{4} } \right) \times \left( {T_{{{\text{pos}}\left( j \right)}} - x_{i.j} \left( t \right)} \right)$$
(20)
$$\omega = \omega_{{{\text{min}}}} \left( {\frac{{\omega_{{{\text{max}}}} }}{{\omega_{{{\text{min}}}} }}} \right)^{{\frac{1}{{\frac{1 + c \cdot t}{{{\text{Itr}}_{{{\text{max}}}} }}}}}}$$
(21)

Itrmax gives the maximum number of iterations, t is the current iteration, c is the adjustment parameter, and ωmin and ωmax are the weight-regulating parameters. A similar parameter was introduced in the earlier referred work [75] to improve the prey’s position equation. The IHPO is compounded with the Extreme Learning Machine (ELM) to estimate wind power and is found effective, providing scope to improve wind power prediction accuracy. The scope can be further extended to hybrid DG placement.
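The weighted prey update of Eq. (20) can be sketched as follows; ω is passed in as a parameter, since the constants of Eq. (21) (c, ωmin, ωmax) are not fixed in this excerpt:

```python
import numpy as np

def weighted_prey_update(x, T_pos, C, Z, omega, rng=None):
    """Eq. (20): prey update with the weighting factor omega applied to T_pos."""
    rng = rng or np.random.default_rng()
    R4 = rng.uniform(-1.0, 1.0, np.shape(x))
    return omega * T_pos + C * Z * np.cos(2 * np.pi * R4) * (T_pos - x)
```

With ω = 1 and Z = 0 the random term vanishes and the update collapses onto the global optimum position T_pos.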

Updating the step size

The step size is upgraded in two stages, a high-velocity and a low-velocity ratio, inspired by the Marine Predator Algorithm (MPA) [73, 80], to avoid the optimum values being trapped in local optima. The first stage is defined for a higher step size, reflecting Brownian motion mathematically:

$$it < \frac{1}{3}{\text{maxit}}$$
(22)
$$S = \overrightarrow {{R_{B} }} \otimes \left( {E - \overrightarrow {{R_{B} }} \otimes x_{i.j} \left( t \right)} \right)$$
(23)
$$x_{i.j} \left( {t + 1} \right) = x_{i,j} \left( t \right) + P \cdot \overrightarrow {{R_{B} }} \otimes S$$
(24)

For P = 0.5, RB is the vector representing Brownian motion, and E is the fittest-solution matrix; it is the current iteration and maxit the maximum number of iterations. For a lower step size, the equations considered are

$${\text{it}} > \frac{1}{3}{\text{maxit}}$$
(25)
$$S = \overrightarrow {{R_{L} }} \otimes \left( {\overrightarrow {{R_{L} }} \otimes x_{i.j} \left( t \right) - E} \right)$$
(26)
$$x_{i.j} \left( {t + 1} \right) = E + P \cdot CF \otimes S$$
(27)

The optimum solution matrix E is given as

$$E = \left[ {\begin{array}{*{20}c} {Xb_{1,1}^{t} } & \cdots & {Xb_{1,d}^{t} } \\ \vdots & \ddots & \vdots \\ {Xb_{n,1}^{t} } & \cdots & {Xb_{n,d}^{t} } \\ \end{array} } \right]$$
(28)

For n search agents and d dimensions, Xb denotes the best solution found so far; CF is the adaptive step-size control factor inherited from MPA.

The vector RL, following a Lévy distribution, implements the Lévy flight in the second stage. The optimal power flow problem is addressed for wind-energy and FACTS-integrated distribution systems using the Enhanced HPO (EHPO). The work concludes that EHPO is an efficient tool for solving real-world complex power system problems, with scope for analyzing the same for large-scale systems and systems incorporating PV_DG and EV technologies.
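The two-stage step-size update can be sketched as follows; drawing RB from a standard normal distribution, generating RL via Mantegna's algorithm for Lévy-stable steps, and taking CF = (1 − it/maxit)^(2·it/maxit) are assumptions following MPA convention, details the referred work leaves implicit:

```python
import math
import numpy as np

P = 0.5  # step-size scaling constant from the text

def levy(shape, beta=1.5):
    # Mantegna's algorithm for Levy-distributed steps (assumed here;
    # the referred work only states that R_L follows a Levy distribution).
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, shape)
    v = np.random.normal(0, 1, shape)
    return u / np.abs(v) ** (1 / beta)

def step_update(x, elite, it, maxit):
    # Two-stage MPA-inspired update (Eqs. 22-27); x and elite have shape (n, d).
    if it < maxit / 3:                       # high-velocity stage, Eq. (22)
        rb = np.random.normal(size=x.shape)  # Brownian-motion vector R_B
        s = rb * (elite - rb * x)            # Eq. (23)
        return x + P * rb * s                # Eq. (24)
    else:                                    # low-velocity stage, Eq. (25)
        rl = levy(x.shape)                   # Levy-flight vector R_L
        s = rl * (rl * x - elite)            # Eq. (26)
        cf = (1 - it / maxit) ** (2 * it / maxit)  # adaptive factor CF (MPA convention)
        return elite + P * cf * s            # Eq. (27)
```

The Brownian stage takes large exploratory steps early on, while the heavy-tailed Lévy stage mixes small local moves with occasional long jumps, which is what helps the search escape local optima late in the run.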

Improved HPO

[81] proposes an improved version of HPO, the IHPO, to overcome the local-optimum trap and improve the accuracy of the conventional algorithm. Two steps are involved in the IHPO: first, the initial population is generated using tent chaotic mapping as in [75]; this is followed by an Enhanced Sine Cosine Algorithm (ESCA) adaptation and a Cauchy mutation strategy for the exploration and exploitation stages.

Improvised initialization phase

The initialization stage is reformed using chaotic mapping, which increases population diversity. Mathematically, Eq. (16) is replaced as follows:

$$y_{i + 1} = \left\{ {\begin{array}{*{20}c} {\frac{{y_{i} }}{u}, 0 \le y_{i} < u} \\ {\frac{{1 - y_{i} }}{1 - u}, u < y_{i} < 1} \\ \end{array} } \right.$$
(29)

with u = 0.5. The next equation adds a random perturbation to the tent mapping so that the sequence avoids unstable fixed points and short periodic cycles during iterations. For d dimensions and N search agents,

$$y_{i + 1}^{d} = \left\{ {\begin{array}{*{20}c} {2\left( {y_{i}^{d} + {\text{rand}}\left( {0,1} \right) \times \frac{1}{N}} \right), 0 \le y_{i}^{d} < 0.5} \\ {2\left( {1 - y_{i}^{d} + {\text{rand}}\left( {0,1} \right) \times \frac{1}{N}} \right), 0.5 \le y_{i}^{d} < 1} \\ \end{array} } \right.$$
(30)

Hence, the initialization equation updates as follows, with l and u the lower and upper bounds of the search space:

$$x_{i}^{d} = l + \left( {u - l} \right) \times y_{i}^{d}$$
(31)
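This initialization can be sketched as below; wrapping the chaotic sequence back into (0, 1) with a modulo is an assumption, since the perturbed map of Eq. (30) can otherwise exceed 1:

```python
import numpy as np

def tent_init(n, d, lower, upper, seed=None):
    # Tent-map initialization (Eqs. 30-31): a perturbed tent map generates
    # a chaotic sequence per dimension, then values are scaled to the bounds.
    rng = np.random.default_rng(seed)
    y = rng.random((n, d))              # chaotic seeds in (0, 1)
    for i in range(1, n):
        prev = y[i - 1]
        noise = rng.random(d) / n       # rand(0,1) * 1/N perturbation
        y[i] = np.where(prev < 0.5,
                        2 * (prev + noise),
                        2 * (1 - prev + noise))
        y[i] = np.mod(y[i], 1.0)        # assumption: wrap back into [0, 1)
    return lower + (upper - lower) * y  # Eq. (31)
```

Compared with a purely uniform draw, the chaotic sequence tends to spread agents more evenly across the search space, which is the population-diversity benefit claimed for this initialization.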

Adaption of ESCA

The superior global search ability of the SCA is incorporated into the standard HPO considering

$$P_{v} = {\text{exp}}\left( {\mu \times \left( \frac{t}{T} \right)^{3} } \right)$$
(32)

Pv is the probability that selects the position-update rule, with t being the ongoing iteration number and T being the maximum number of iterations; μ is the conversion factor. For rand < Pv, the population position is updated using the ESCA; for rand ≥ Pv, the position is updated using the standard HPO. μ = 0.01 in [82], whereas [81] uses μ = − 10. The standard SCA is enhanced by introducing a hyperbolic-sine regulating factor and a dynamic cosine-wave weight coefficient [82].
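The switching rule can be sketched as follows; Eq. (32) is read here as Pv = exp(μ·(t/T)³), an assumed interpretation under which μ = −10 makes Pv decay from 1 toward e⁻¹⁰, favoring the ESCA update early (exploration) and the standard HPO update late (exploitation):

```python
import math
import random

def p_v(t, T, mu=-10.0):
    # Selection probability of Eq. (32), under the assumed reading above.
    return math.exp(mu * (t / T) ** 3)

def update_rule(t, T, mu=-10.0):
    # Choose which update governs this agent at iteration t.
    return "ESCA" if random.random() < p_v(t, T, mu) else "HPO"
```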

Cauchy’s strategy

A Cauchy mutation strategy is introduced to improve convergence speed and accuracy and to avoid trapping in local optima. The updated population is given as

$$x_{i}^{d} \left( {t + 1} \right) = r_{1} \otimes x_{i}^{d} \left( t \right) + {\text{Cauchy}} \otimes \left( {x_{{{\text{best}}}}^{d} \left( t \right) - x_{i}^{d} \left( t \right)} \right)$$
(33)

xid(t) is the current individual position, xid(t + 1) is the position after Cauchy mutation, and r1 is a random value in [0,1]. The Cauchy mutation is applied to avoid premature convergence. Following [83], the mutation condition is based on \(stdY\left( t \right)\), the standard deviation over three consecutive iterations: for \(stdY\left( t \right) > Cstd\) the algorithm is converging at a good rate, whereas for \(stdY\left( t \right) \le Cstd\) there is a risk of a local-optimum trap and Eq. (33) comes into play. In general, Cstd, the maximum value of the variation coefficient, is 0.1, and the mutation strategy is applied every T/5 iterations (T is the maximum iteration count).
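The mutation and its trigger condition can be sketched as below; drawing the heavy-tailed steps from a standard Cauchy distribution is an assumption, since the referred work does not specify scale parameters:

```python
import numpy as np

def cauchy_mutation(x, x_best, rng=None):
    # Cauchy mutation of Eq. (33): perturb individual x toward the best
    # solution with heavy-tailed steps so the search can escape local optima.
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)               # random vector in [0, 1)
    cauchy = rng.standard_cauchy(x.shape)  # heavy-tailed step sizes
    return r1 * x + cauchy * (x_best - x)

def needs_mutation(fitness_history, c_std=0.1):
    # Trigger condition from [83]: mutate when the standard deviation of
    # the last three iterations' fitness falls to or below Cstd.
    return np.std(fitness_history[-3:]) <= c_std
```

The heavy tails of the Cauchy distribution occasionally produce very large jumps, which is exactly the behavior wanted once the fitness standard deviation signals stagnation.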

The efficacy of the proposed algorithm is tested on eight classic test functions against Wild Horse Optimization (WHO), Gray Wolf Optimization (GWO), the Sine Cosine Algorithm (SCA), and the standard HPO. Practical application of the algorithm is identified as future scope in the referred article. Figure 4 illustrates the flow chart of the improved HPO.

Fig. 4 Flow chart of IHPO

Inferences and discussions

The literature above shows that HPO is a recent nature-inspired meta-heuristic algorithm widely implemented as an optimization technique across engineering fields. Because it handles multimodal and non-convex optimization problems, HPO is applicable to complex systems, construction, and image processing. Figure 5 gives a pictorial representation of applications of the HPO algorithm.

Fig. 5 Applications of HPO

Electrical engineering applications

The proposed algorithm HPO finds application in electrical distribution systems for determining the optimal placement of capacitor banks, custom power devices, and renewable energy devices for loss reduction and voltage enhancement. The following figures accentuate the strength of HPO in determining the optimal locations of the preferred devices, with the attendant active-loss and energy-loss reduction. [84] studies HPO for optimal capacitor bank placement in the IEEE-33 and 69 bus systems; the comparative graphs of power loss reduction for various algorithms, highlighting HPO, are depicted below.

Figures 6 and 7 portray the active loss reduction comparison using different algorithms for the IEEE-33 and 69 bus systems with optimal capacitor bank placement. [69] addresses optimal PV placement for active loss reduction; Fig. 8 shows the same for the prime PV locations in an IEEE-33 bus system.

Fig. 6 Graphical representation of active loss reduction by capacitor bank placement in IEEE-33 bus system using HPO

Fig. 7 Graphical representation of active loss reduction by capacitor bank placement in IEEE-69 bus system using HPO

Fig. 8 Graphical representation of active loss reduction by PV placement in IEEE-33 bus system using HPO

HPO is analyzed on the IEEE-33 and 69 bus systems for PV_STATCOM placement in [68], comparing the energy losses. Figures 9 and 10 graphically depict the superiority of HPO in energy loss reduction compared with the rest.

Fig. 9 Energy loss graphical depiction using various algorithms for an IEEE-33 bus system

Fig. 10 Energy loss graphical depiction using various algorithms for an IEEE-69 bus system

From the graphical analysis in the above figures, HPO outperforms the other algorithms and is considered effective in tackling electrical engineering problems. Several of the referred works highlight HPO in established fields other than electrical engineering.

Journalism

[71] applies HPO in journalism for fake news detection. The authors propose HPO with hybrid deep learning (HDL) to identify false news in the obtained information.

Civil engineering

An improved version of HPO is used to identify damage in multi-storey buildings in [75]. HPO-based damage detection is followed by a convolutional neural network (CNN) technique, and the two-stage approach is found effective in the respective field.

Robotics

[76] puts forward an improved HPO for robot path planning. The work highlights combining the proposed algorithm with a search strategy to help robots find an obstacle-free path in unknown surroundings.

Rotor dynamics

[85] uses HPO to optimize the squeeze film damper's key parameters to reduce the rotor's vibration. The paper highlights HPO's advantage over PSO in convergence speed and accuracy.

Cloud security

HPO finds application in cloud security in [86], where the algorithm enhances cloud data security by optimizing a multi-objective function whose objectives are the information preservation ratio, the hiding ratio, and the modification degree performed at the critical generation stage.

Medicine

A hybrid combination of HPO with ladybug beetle optimization algorithm is used in ophthalmoscopy to identify Diabetic Retinopathy (DR) [87].

HPO, along with other methodologies or in its improved versions, is used across many fields to improve the desired output. Many works continue to explore the algorithm's efficacy; its advantages and disadvantages are discussed below.

Highlights of HPO

  • Diverse Exploration: The predator–prey dynamics encourage the algorithm to cover multiple regions of the solution space simultaneously, aiding in fleeing from local optima.

  • Flexibility: HPOA can quickly adapt to different problem domains by tuning its parameters and mechanisms.

  • Convergence Speed: The algorithm often demonstrates faster convergence than other existing algorithms [60], owing to the dynamic interactions between hunters and prey.

Drawbacks of HPO

  • Parameter Sensitivity: Effective parameter tuning is crucial for optimal performance, and improper settings can lead to premature convergence or stagnation. This problem is addressed in [75], which introduces a Cauchy distribution to mitigate it.

  • Entrapment in local optima: The standard HPO can become trapped in local optima; improved versions addressing this drawback are cited in the literature.

Conclusion

The Hunter–Prey Optimization Algorithm presents an intriguing approach to optimization inspired by nature's predator–prey interactions. While it has demonstrated success in various applications, it is essential to recognize its strengths and weaknesses when considering its implementation. As research in this field continues to evolve, a deeper understanding of the algorithm's behavior and its potential to solve complex optimization problems will emerge.

In summary, this comprehensive analysis highlights the significance of the Hunter–Prey Optimization Algorithm as a promising optimization technique; Fig. 11 gives an overview of the HPO algorithm. Inspired by nature, its unique approach to exploration and exploitation provides a fresh perspective on optimization. The benefits of the review are outlined below.

Fig. 11 HPO algorithm

For Researchers:

  • Consolidated knowledge: The paper summarizes the current state of the art and key concepts of the HPO algorithm.

  • Future research directions: By highlighting areas for future work, as suggested previously, the paper can guide researchers toward promising avenues for further development and application of HPO.

  • Comparative analysis: A well-structured review paper can compare HPO with other optimization algorithms, highlighting its advantages, limitations, and suitable use cases. This can help researchers choose the most appropriate optimization technique for their problem.

For developers and practitioners:

  • Understanding HPO's potential: The paper can showcase the capabilities of HPO for solving various optimization problems. This can encourage developers to explore its application in their projects.

  • Practical implementation guidance: The review paper can provide practical insights for implementing HPO with existing algorithms. This can save developers time and effort during implementation.

For the field of optimization:

  • Advancement of HPO: The review paper can stimulate further research and development efforts on HPO, improving its performance, adaptability, and theoretical understanding.

  • Comparison with existing techniques: By comparing HPO with established optimization algorithms, the paper can contribute to a wider understanding of the strengths and limitations of different approaches. This can guide researchers toward developing more powerful and versatile optimization tools.

  • Promoting broader use: A well-disseminated review paper can raise awareness of HPO within the optimization community, potentially leading to its broader adoption in various scientific and engineering fields.

Future scope of HPO:

Potential areas for future work can be identified as follows:

Enhancing the HPO algorithm:

  • Parameter tuning: While HPO avoids excessive parameters, the impact of fine-tuning existing ones, such as the initial population size, Lévy flight parameters, or the balance factor between exploration and exploitation, can be explored. [88] enhances the standard HPO by parameter tuning.

  • Hybridization with other algorithms: [89, 90] suggest hybridizing with WHO and SFO for improved performance. Hybridizing HPO with other algorithms such as PSO and GA is a suggested direction for further work.

  • Multi-objective optimization: [84] applies HPO for multi-objective functions, paving the way for many more works in the same direction.

The future outlook in theoretical analysis can be:

  • Investigating whether the current HPO algorithm accurately reflects real-world predator–prey dynamics, and incorporating further biological concepts, is a possible direction of future extension.

Additional considerations, such as parallelization of HPO for large-scale applications and its adaptability to dynamic environments, could also be expected.