Introduction

For power system operators, optimal power flow (OPF) is a crucial tool: it helps them balance the supply and demand of electricity more effectively, lower the cost of electricity production, and improve system reliability1. It is also used for the long-term planning and design of energy systems, since it can help determine the optimal mix of generation resources and transmission facilities required to meet future demand growth2.

OPF is a method used in power systems engineering to allocate and dispatch generation resources as efficiently as possible, meeting the demand for electricity while lowering costs and preserving system reliability. It entails solving a mathematical optimization problem subject to a number of constraints, such as transmission line capacities, the electricity demand, and the generating limits3,4.

OPF's objective is to reduce energy production costs while satisfying a variety of operational constraints, including voltage limits, generator capacity limits, and transmission line capacity limits. Mathematical programming approaches such as linear programming, quadratic programming, or nonlinear programming can be used to solve the resulting optimization problem, which often has many variables and constraints5,6.

Numerous optimization approaches have been used to address the OPF issue over the course of the last few decades, and this has been the subject of extensive research. Numerous traditional deterministic optimization methods have had success in the past. According to the literature, most of these traditional methods use one of a number of techniques, such as gradient techniques, Newton's methodology, the Simplex methodology, sequential linear programming (SLP), sequential quadratic programming (SQP), and interior point methods (IPMs)7. Reference7 provides a summary of the typical optimization techniques that are most frequently employed to address the OPF issue. Although some of these deterministic methods have shown excellent convergence behaviour and are frequently employed in industry, they are not without drawbacks. One of their disadvantages is their inability to guarantee global optimality, which means they might converge to local optima. These methods were often designed under particular theoretical assumptions, such as convexity, differentiability, and continuity, which may not be appropriate for real OPF situations7,8. They are also not well-suited to handle binary or integer variables. Additionally, over the past 20 years, significant research in the field of heuristic optimization techniques has been done to address the OPF issue as a result of the fast development of recent computational intelligence tools9. To address the OPF issue, some of these techniques have been employed, including: Particle swarm optimization (PSO)10, Biogeography Based Optimization (BBO)11,12, artificial bee colony (ABC)13,14, Shuffle Frog Leaping Algorithm (SLFA)15, gravitational search algorithm (GSA)16,17, Grey wolf optimizer (GWO)18, Slime mould algorithm (SMA)19, Teaching Learning based Optimization (TLBO)20, modified pigeon-inspired algorithm (MPIO)21, backtracking search algorithm (BSA)22, Harmony Search (HS)23, Black Hole (BH)24, Harris Hawks Optimization (HHO)25, quasi-oppositional modified Jaya (QOMJaya)26,27, League Championship Algorithm (LCA)28, hybrid bat algorithm (HBA)29, and Group Search Optimization (GSO)30. These techniques are renowned for their capacity for quick searching of large solution spaces, their ability to avoid being constrained to local solutions, and their capability to take into account uncertainty in specific power system components. A survey of several optimization methods utilized to tackle the OPF issue is presented in9,31. Due to the diversity of objectives that can be taken into account when describing an OPF issue, no single method can be said to be the best when addressing all OPF problems. A new method that can effectively tackle some of the OPF difficulties is thus always needed.

In this paper, Fast Cuckoo Search (FCS)32, Salp Swarm Algorithm (SSA)33, Dynamic Control Cuckoo Search (DCCS)34, Gradient-Based Optimizer (GBO)35, Northern Goshawk Optimization (NGO)36, and Opposition Flow Direction Algorithm (OFDA)37 are utilized for tackling the OPF issue in the standard IEEE 30 Bus test system. FCS is a metaheuristic algorithm inspired by the brood behaviour of cuckoo birds; it combines heuristics and randomization to locate high-quality solutions32. DCCS extends the standard cuckoo search method by dynamically adjusting its parameters during the optimization, which allows it to adapt more quickly and effectively to the changing search landscape34. The Salp Swarm Algorithm (SSA) is another metaheuristic that mimics the behaviour of salp swarms in the water; it relies on social cooperation among the individuals of the swarm to reach good solutions33. The Gradient-Based Optimizer (GBO) is a metaheuristic inspired by Newton's gradient-based method that uses gradient information to guide the search35. Northern Goshawk Optimization (NGO) is modelled on the hunting behaviour of northern goshawks and combines local and global search strategies36. The Opposition Flow Direction Algorithm (OFDA) augments the Flow Direction Algorithm, which imitates the movement of water towards the lowest point of a drainage basin, with opposition-based learning37. These techniques have been developed to overcome some of the limitations of traditional optimization methods, such as their inability to find global solutions and to handle uncertainties in the power system.

The following is a summary of this paper's significant contributions:

  • Use of FCS, SSA, DCCS, GBO, NGO, and OFDA optimization methods for practical OPF situations.

  • Implementing a comprehensive suite of tests that evaluates the optimization algorithms on various OPF cases, objective functions, and constraints.

  • Addressing the OPF issue while taking into account security restrictions for more difficult circumstances.

  • The use of a novel comparison technique based on ideal and typical values.

  • Incorporating non-parametric statistics for validating the proposed optimization technique.

The rest of the article is structured as follows. The OPF problem is stated in the “Problem formulation” section. The suggested optimization methodologies are described in the “Optimization algorithms” section. The “Results and discussion” section presents application examples and outcomes. Finally, the “Conclusion” section summarizes the findings.

Problem formulation

As previously indicated, OPF is a power flow problem that determines the optimal adjustment of the control variables for a given load by minimizing a predefined objective function, such as the generation cost or the transmission losses. It is a non-linear constrained optimization problem that takes the system's operational restrictions into account and can be expressed as follows38,39:

$$\mathrm{Minimize \;\;F}({\text{x}},{\text{u}})$$
(1)
$$\mathrm{Subject\;\; to\;\; g}({\text{x}},{\text{u}}) = 0$$
(2)
$$\mathrm{and \;\; h}({\text{x}},{\text{u}})\le 0$$
(3)

The objective function is given by Eq. (1), in which \(x\) signifies the vector of state variables and \(u\) signifies the vector of control variables. The equality and inequality constraints are denoted by \(g\) and \(h\), respectively. The control (independent) variables are listed in Eq. (4), and the state (dependent) variables are listed in Eq. (5)40,41.

$$u^{T} =\left[P_{G_{2}},\ldots,P_{G_{NG}},\; V_{G_{1}},\ldots,V_{G_{NG}},\; Q_{C_{1}},\ldots,Q_{C_{NC}},\; T_{1},\ldots,T_{NT}\right]$$
(4)

where \(P_{G}\) represents the real power provided by the generators, \(V_{G}\) the voltage magnitude at the generator buses, \(Q_{C}\) the reactive power provided by the shunt compensators, and \(T\) the tap position of the transformers. \(NG\), \(NC\), and \(NT\) denote the numbers of generators, shunt compensators, and transformers, respectively40,41.

$$x^{T} =\left[P_{G_{1}},\; V_{L_{1}},\ldots,V_{L_{NL}},\; Q_{G_{1}},\ldots,Q_{G_{NG}},\; S_{l_{1}},\ldots,S_{l_{nl}}\right]$$
(5)

In this equation, \(P_{G_1}\) is the active power at the slack bus, \(V_{L}\) denotes the voltage magnitudes at the load buses, \(Q_{G}\) represents the reactive power supplied by the generators, and \(S_{l}\) symbolizes the apparent power flow through the transmission lines. The numbers of load buses, generators, and transmission lines are denoted by \(NL\), \(NG\), and \(nl\), respectively.

System constraints

Equality constraint

These constraints establish requirements that must be met exactly; they reflect relationships that must hold precisely and are represented by equations. In the OPF problem, the equality constraints are the power flow equations, which ensure that the real and reactive power injections at each node of the power system are balanced. A feasible solution must satisfy these equations completely40,41.

$${P}_{Gi} - {P}_{Di}-\left|{V}_{i}\right|\sum \limits_{j=1}^{{\text{NB}}}\left|{{\text{V}}}_{{\text{j}}}\right|\left[{G}_{{ij}}{\text{cos}}{\alpha }_{{ij}}+{B}_{{ij}}{\text{sin}}{\alpha }_{{ij}}\right]=0$$
(6)
$${Q}_{Gi}- {Q}_{Di} - \left|{V}_{i}\right|\sum\limits_{j=1}^{{\text{NB}}}\left|{{\text{V}}}_{{\text{j}}}\right|\left[{G}_{ij}{\text{sin}}{\alpha }_{ij}-{B}_{ij}{\text{cos}}{\alpha }_{ij}\right]=0$$
(7)

where \(P_{Gi}\) and \(Q_{Gi}\) denote the injections of real and reactive power at bus i, and \(P_{Di}\) and \(Q_{Di}\) denote the real and reactive power drawn by the load at bus i. \(G_{ij}\) and \(B_{ij}\) are the conductance and susceptance of the branch between buses i and j, \(\alpha_{ij}\) is the voltage angle difference between the two buses, and \(NB\) represents the total number of buses.

Inequality constraints

Inequality constraints restrict the variables or parameters of the optimization problem to allowable ranges and are expressed as inequalities. In the OPF problem, they impose limits on the thermal capacity of the transmission lines, the voltage magnitudes, and the generator reactive power outputs. These constraints narrow the set of feasible solutions and guarantee that the solution stays within acceptable operational bounds.

Constraints for generator

All generators in the system should operate within the predetermined maximum and minimum limits for real power generation, reactive power generation, and bus voltage magnitude. The upper and lower bounds of these variables can be indicated as follows40,42:

$${{\text{P}}}_{Gi}^{min}\le {P}_{Gi}\le {{\text{P}}}_{Gi}^{max} \quad \mathrm{For} \;\; i = 1,\dots \dots \dots ,NG$$
(8)
$${Q}_{Gi}^{min}\le {Q}_{Gi}\le {Q}_{Gi}^{max} \quad \mathrm{For} \;\;i = 1,\dots \dots \dots , NG$$
(9)
$${V}_{Gi}^{min}\le {V}_{Gi}\le {V}_{Gi}^{max} \quad \mathrm{For} \;\; i = 1, \dots \dots \dots , NG$$
(10)

where \({P}_{Gi}\) signifies the actual power produced from generator i, and \({{\text{P}}}_{Gi}^{min}\) and \({{\text{P}}}_{Gi}^{max}\) denote the actual power output's lower and upper bounds. \({Q}_{Gi}\) denotes the generator i's capacity to produce reactive power, and \({Q}_{Gi}^{min}\) and \({Q}_{Gi}^{max}\) denote the capacity's minimum and maximum levels, respectively. \({V}_{Gi}\) is the voltage magnitude at bus i, and \({V}_{Gi}^{min}\) and \({V}_{Gi}^{max}\) denote the voltage magnitude's lower and upper bounds, respectively.

Constraints for transformers

Transformers employ tap changers to modify the transformer turns ratio and, consequently, the voltage level. The number of tap positions and magnitude of voltage change that a tap changer can alter are both limited. These restrictions can be shown as40,42:

$${T}_{i}^{min}\le {T}_{i}\le {T}_{i}^{max}\quad \mathrm{For} \;\; i = 1, \dots \dots \dots , NT$$
(11)

where \({T}_{i}\) stands for the tap position of transformer i, and \({T}_{i}^{min}\) and \({T}_{i}^{max}\) stand for the tap position's lower and higher bounds, respectively.

Constraints for shunt capacitors

Reactive power compensation capabilities of some devices, such as capacitors or compensators, are constrained. The inequality restriction can be stated as follows, for instance, where \({Q}_{Ci}^{min}\) and \({Q}_{Ci}^{max}\) stand for the lower and upper bounds of reactive power compensation for a compensator i40,42:

$${Q}_{Ci}^{min}\le {Q}_{Ci}\le {Q}_{Ci}^{max} \quad \mathrm{For} \;\; i = 1, \dots \dots \dots ,NC$$
(12)

where \({Q}_{Ci}\) is the reactive power injected from compensator i.

Security constraints

To ensure the safe and reliable operation of the associated loads, load buses may have restrictions on their voltage levels. These restrictions can be expressed as40,42:

$${V}_{Li}^{min}\le {V}_{Li}\le {V}_{Li}^{max} \quad \mathrm{For} \;\; i = 1,\dots \dots \dots ,NL$$
(13)

where \(V_{Li}\) signifies the voltage magnitude at load bus i, and \(V_{Li}^{min}\) and \(V_{Li}^{max}\) denote the lower and upper boundaries, respectively, of the load voltage level. To prevent overheating and potential damage, transmission lines have thermal capacity limits. If \(S_{li}^{max}\) represents the thermal limit of transmission line i, the corresponding inequality constraint can be written as40,42:

$${S}_{li}\le {S}_{li}^{max} \quad \mathrm{For } \;\; i = 1,\dots \dots \dots , nl$$
(14)

where \(S_{li}\) is the apparent power flow on transmission line i.

Objective function

It is important to note that the control variables are self-constrained, since they are bounded directly by their own limits. To account for the inequality constraints on the dependent variables, which include the line loadings, the active power produced at the slack bus, the reactive power outputs of the generators, and the load bus voltage magnitudes, the objective function is augmented with quadratic penalty terms. Each penalty term is proportional to the square of the amount by which a dependent variable violates its limit, so any infeasible solution is effectively rejected. The penalty function is expressed mathematically as follows:

$$\mathrm{Penalty}= {\gamma }_{P}{\left({P}_{{G}_{1}}-{P}_{{G}_{1}}^{lim}\right)}^{2}+{\gamma }_{V}\sum_{i=1}^{NL}{\left({V}_{{L}_{i}}-{V}_{{L}_{i}}^{lim}\right)}^{2}+{\gamma }_{Q}\sum_{i=1}^{NG}{\left({Q}_{{G}_{i}}-{Q}_{{G}_{i}}^{lim}\right)}^{2}+{\gamma }_{S}\sum_{i=1}^{nl}{\left({S}_{{l}_{i}}-{S}_{{l}_{i}}^{lim}\right)}^{2}$$
(15)

where \(\gamma_{P}\), \(\gamma_{V}\), \(\gamma_{Q}\) and \(\gamma_{S}\) denote penalty factors and \(x^{lim}\) signifies the limit value of the dependent variable \(x\): when \(x\) exceeds its upper bound, \(x^{lim}\) equals that upper bound, and when \(x\) falls below its lower bound, \(x^{lim}\) equals the lower bound:

$${x}^{lim}=\left\{\begin{array}{ll}{x}^{max};& \quad x>{x}^{max}\\ {x}^{min};&\quad x<{x}^{min}\end{array}\right.$$
(16)
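As a minimal illustration of Eqs. (15) and (16), the following Python sketch shows how the quadratic penalty of a penalised OPF objective could be assembled; the function and variable names are illustrative, and the per-variable limits are assumed to be known.

```python
def quadratic_penalty(value, vmin, vmax, gamma):
    """Penalty of Eqs. (15)-(16) for a single dependent variable:
    gamma * (value - limit)^2 if a bound is violated, 0 otherwise."""
    if value > vmax:
        return gamma * (value - vmax) ** 2
    if value < vmin:
        return gamma * (value - vmin) ** 2
    return 0.0


def total_penalty(pg_slack, v_load, q_gen, s_line, limits, gammas):
    """Sum the penalties for the slack-bus power, load-bus voltages,
    generator reactive powers, and line flows.  `limits` holds (min, max)
    pairs and `gammas` the weights (gamma_P, gamma_V, gamma_Q, gamma_S)."""
    g_p, g_v, g_q, g_s = gammas
    pen = quadratic_penalty(pg_slack, *limits["PG1"], g_p)
    pen += sum(quadratic_penalty(v, *limits["VL"], g_v) for v in v_load)
    pen += sum(quadratic_penalty(q, *limits["QG"], g_q) for q in q_gen)
    # Line flows only have an upper bound, so the lower bound is set to 0.
    pen += sum(quadratic_penalty(s, 0.0, limits["Sl_max"], g_s) for s in s_line)
    return pen
```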

Case 1: Minimization of generation fuel cost

The goal of the OPF problem in this case is to reduce the generation cost while taking the system's limitations into account. The generation cost usually comprises both fixed and variable costs: the fixed costs are related to the initial capital expenditure on the generation equipment, whereas the variable costs are related to its use and upkeep. The variable costs are the critical consideration when minimizing the generation cost in an OPF setting; they depend on how much electricity each generator produces and are typically described as a quadratic function of the power output. The objective function of the OPF problem is expressed as follows:

$${f}_{i}={(a}_{i}+{b}_{i}{P}_{{G}_{i}}+{c}_{i}{P}_{{G}_{i}}^{2}) (\$/h)$$
(17)

where \({a}_{i}\), \({b}_{i}\) and \({c}_{i}\) denote, respectively, the constant, linear, and quadratic cost coefficients of the ith generator. Consequently, the objective function below expresses the system's overall fuel cost for all generators.

$$F(x,u)=\sum_{i=1}^{NG}{f}_{i}(\$/h)+Penalty$$
(18)
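A minimal Python sketch of the Case 1 objective of Eqs. (17)-(18) is given below; the coefficient values shown in the comment are purely illustrative placeholders, and `total_penalty` refers to the penalty sketch above.

```python
def fuel_cost(p_gen, a, b, c):
    """Total quadratic fuel cost of Eqs. (17)-(18) in $/h; p_gen, a, b, c
    are equal-length sequences over the NG generators."""
    return sum(ai + bi * p + ci * p ** 2
               for ai, bi, ci, p in zip(a, b, c, p_gen))


def case1_objective(p_gen, a, b, c, penalty_terms):
    """Penalised Case 1 objective: fuel cost plus the quadratic penalty."""
    return fuel_cost(p_gen, a, b, c) + penalty_terms

# Illustrative call (coefficients are placeholders, not the IEEE 30-bus data):
# case1_objective([99.2, 80.0], a=[0, 0], b=[2.0, 1.75], c=[0.00375, 0.0175],
#                 penalty_terms=0.0)
```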

Case 2: Voltage profile improvement

The voltage magnitude at the buses is one of the most important indicators of security and service quality. Minimizing the fuel cost of the entire generation process may therefore yield a feasible solution whose voltage profile is nevertheless unsatisfactory. Computing the total voltage deviation of the PQ buses is one way of evaluating the quality of the voltage profile43. The sum of the voltage deviations is expressed as:

$$\mathrm{VD }=\sum_{{\text{i}}=1}^{{\text{NL}}}\left|{{\text{V}}}_{{{\text{L}}}_{{\text{i}}}}-1\right|$$
(19)
$$F(x,u) = \sum_{{\text{i}}=1}^{{\text{NL}}}\left|{{\text{V}}}_{{{\text{L}}}_{{\text{i}}}}-1\right|+{\text{Penalty}}$$
(20)

Case 3: Voltage profile enhancement with minimization of fuel cost

Here, lowering costs while simultaneously enhancing the voltage profile is the goal. As a result, we have a dual objective function in this case, as provided by:

$$F(x,u) =\left(\sum_{i=1}^{NG}{(a}_{i}+{b}_{i}{P}_{{G}_{i}}+{c}_{i}{P}_{{G}_{i}}^{2})\right)+\left({\upkappa }_{{\text{VD}}}\sum_{{\text{i}}=1}^{{\text{NL}}}\left|{{\text{V}}}_{{{\text{L}}}_{{\text{i}}}}-1\right|\right)+{\text{Penalty}}$$
(21)

where \(\kappa_{VD}\) denotes a weighting factor that must be chosen carefully, since it assigns a relative importance to each of the two parts of the objective function. After numerous trial-and-error experiments, the value of \(\kappa_{VD}\) chosen in this study is 500.
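The weighted-sum construction of Eq. (21) can be sketched as follows in Python; the helper `fuel_cost` is the one sketched above, and the weight of 500 is the value quoted in the text.

```python
def case3_objective(p_gen, a, b, c, v_load, penalty_terms, k_vd=500.0):
    """Case 3 objective of Eq. (21): fuel cost plus weighted voltage
    deviation of the load (PQ) buses from 1.0 p.u."""
    cost = fuel_cost(p_gen, a, b, c)
    vd = sum(abs(v - 1.0) for v in v_load)   # voltage deviation, Eq. (19)
    return cost + k_vd * vd + penalty_terms
```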

Case 4: Voltage stability improvement

Because modern power systems operate under heavy stress, voltage stability is a practical necessity. Kessel and Glavitch44 derived a voltage stability index (\(L_{index}\)) for each bus from the feasibility of the power flow equations. It ranges from 0 to 1, where 0 corresponds to the no-load condition and 1 to voltage collapse; the \(L_{index}\) value at a bus therefore indicates how close that bus is to voltage collapse. To improve the voltage stability of the grid, the maximum value of the index, \(L_{max}\), is minimized.

$$F(x,u) ={{\text{L}}}_{{\text{max}}}+{\text{Penalty}}$$
(22)

Case 5: Voltage stability improvement with minimization of fuel cost

The following dual objective function is therefore used to simultaneously improve voltage stability, represented by \(L_{max}\), and minimize the total fuel cost:

$$F(x,u) =\left(\sum_{i=1}^{NG}\left({a}_{i}+{b}_{i}{P}_{{G}_{i}}+{c}_{i}{P}_{{G}_{i}}^{2}\right)\right)+{\upkappa }_{{{\text{L}}}_{{\text{max}}}}\times {{\text{L}}}_{{\text{max}}}+{\text{Penalty}}$$
(23)

where \({\upkappa }_{{{\text{L}}}_{{\text{max}}}}\) is a scaling factor used to balance out the values of the objective function and prevent one objective from taking precedence over another. The value of \({\upkappa }_{{{\text{L}}}_{{\text{max}}}}\) in this study is set at 5000.

Case 6: Minimization of active power losses

This objective aims to reduce the total real power losses in the power system through an optimal dispatch of the control variables. Reducing the real power losses improves system performance and lowers the cost of energy supply. The objective function for minimizing the transmission losses is given as:

$$F(x,u) ={{\text{P}}}_{{\text{loss}}}+{\text{Penalty}}$$
(24)

Case 7: Minimization of Reactive transmission power losses

A secondary goal of an OPF problem, in addition to lowering the generation cost, is to reduce the reactive power losses in the lines of the system. Reactive power is exchanged between the generators and the loads to keep the system's voltage stable, and reactive components such as capacitors and inductors contribute to the reactive power losses in the lines. Optimizing the generators' reactive power outputs and strategically placing reactive compensation devices such as shunt capacitors and reactors can reduce these losses to a minimum. The objective function of an OPF problem that minimizes the reactive power losses is written as follows:

$$F(x,u) ={{\text{Q}}}_{{\text{loss}}}+{\text{Penalty}}$$
(25)

Optimization algorithms

The implementation of the optimization algorithms for solving the OPF problem can be outlined as follows:

  1. Input Data Collection: The input data for the system components, such as lines, branches, generators, loads, and constraints, are gathered.

  2. Optimization Technique Parameters: Parameters such as the number of individuals, the number of iterations, and the population size are determined.

  3. Initialization of Individuals: The individuals of the solution set are randomly distributed within the solution space according to each algorithm's methodology.

  4. Main Optimization Loop: The optimization method is executed starting from the initial objective function value, and the constraints are checked against their predefined limits.

  5. Recording Best Solutions: The best value for each individual in the solution set is recorded and designated as the current best solution after comparison with neighboring solutions.

  6. Position Update: New solutions are generated according to each algorithm's position-update strategy. If a new solution outperforms the previous one, the position is updated; otherwise, the current position is retained.

  7. Final Solution: The process continues until the termination requirements are met, such as reaching a predefined objective function value or the maximum iteration limit, as is typical in OPF problems (a generic sketch of this loop is given below).

Fast Cuckoo Search (FCS)

The Cuckoo Search (CS) algorithm is a contemporary nature-inspired metaheuristic that has gained extensive use for challenging optimization problems. CS draws its inspiration from the brood parasitism behavior observed in cuckoo birds45. It employs a well-balanced combination of a local random walk and a global random walk, governed by a switching coefficient \({p}_{a}\). The local walk is expressed as follows:

$${x}_{i}^{t+1}={x}_{i}^{t}+\alpha s\otimes H\left({p}_{a}-\epsilon \right)\otimes \left({x}_{j}^{t}-{x}_{k}^{t}\right)$$
(26)

\({x}_{j}^{t}\) and \({x}_{k}^{t}\) represent two distinct candidate solutions chosen by random permutation. \(H(u)\) is the Heaviside step function, while \(\epsilon\) is a random number drawn from a uniform distribution. The variable \(s\) represents the step size, and the symbol "\(\otimes\)" denotes the entry-wise product of two vectors. The global walk is executed using a specialized form of random walk known as Lévy flights. The initial population is selected within predefined parameter bounds, which define the range of values within the domain.

Cuckoo breeding behavior

Certain species of cuckoos, such as the Ani and Guira cuckoos, engage in a behavior known as brood parasitism, laying their eggs in communal or host nests. They may even destroy the host birds' eggs to increase the chances of their own eggs hatching. By doing so, they enlist the host birds to raise their offspring, allowing the cuckoos to allocate more time and energy to laying additional eggs instead of parental care. The hosts can be individuals of the same species or of different species. If a cuckoo chooses the nest of another individual of the same species to lay its eggs, the behavior is referred to as “intra-specific brood parasitism”45.

Host birds have developed strategies to detect foreign eggs. If they identify an egg as not their own, they may either destroy it or abandon the nest and build a new one elsewhere. In response, some cuckoo species have evolved the ability to mimic the color and pattern of the eggs of their chosen hosts. They also select the timing and location of egg-laying intelligently, preferring nests in which the host has just laid eggs that will take longer to hatch than their own. The earlier-hatching cuckoo chick then competes for food, frequently causing the host chicks to starve. In many cases, a cuckoo chick also eliminates the remaining eggs or kills the newly hatched host chicks, repeating this behavior until it remains the sole occupant of the nest and thereby securing its own survival.

Cuckoo search via Lévy flights

In the case of a minimization problem, the fitness value can be made inversely proportional to the value of the objective function. Alternatively, fitness functions can be formulated as in genetic algorithms, where the principle that the "fittest chromosome (solution) survives" is applied. In CS, each egg in a nest represents a solution, and a cuckoo egg represents a new candidate solution. The goal is to replace inadequate solutions with fresh ones that may be better. A new solution \({x}_{i}^{t+1}\) for cuckoo i is generated by means of a Lévy flight, as shown in Eq. (27).

$${x}_{i}^{t+1}={x}_{i}^{t}+\alpha \otimes Levy\left(\lambda \right)$$
(27)

Here, \(\alpha\) represents a positive value known as the step size, which varies depending on the specific problem at hand; in many cases, \(\alpha = 1\) is employed. The Lévy flight is a random walk in which the length of each step is drawn from a Lévy distribution, described by \(\text{Lévy} \sim u = t^{-\lambda}\) with \(1 <\lambda \le 3\). This distribution possesses an infinite variance and an infinite mean.

Fast cuckoo search algorithm

The standard CS algorithm relies solely on random walks for search, which does not guarantee fast convergence32. Moreover, the replacement of old nests with lower-quality solutions is performed randomly, further diminishing the algorithm's convergence speed. In the proposed algorithm, a different approach is introduced where the global best solution directs the substitution of outdated nests. This method improves control over the step size while also quickening convergence. The updated equation for replacing old nests is formulated as:

$${nest}_{new}={nest}_{old}+rand\left({best}_{nest}-{nest}_{old}\right)\otimes K, \quad if \;\; K>{p}_{a}$$
(28)

The equation involves the variables \({nest}_{old}\) and \({best}_{nest}\), which represent permutation matrices produced from the old nests and the global best one, while \({nest}_{new}\) denotes the new nest created in the present iteration32. The proposed methodology aims to enhance the convergence characteristics and is therefore called the Fast Cuckoo Search (FCS) algorithm. In FCS, the best nest found up to the current iteration is employed, which maintains selection pressure towards superior solutions and ensures better outcomes. Furthermore, this improvement to the CS algorithm avoids overcrowding the population with highly fit solutions. The flowchart of the FCS algorithm is presented in Fig. 1.
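A brief Python sketch of the guided replacement step of Eq. (28) is shown below. It is a simplified reading of the equation: the permutation of the nest matrices mentioned in the text is omitted, and K is taken as a 0/1 mask drawn against the discovery probability \(p_a\).

```python
import numpy as np

def fcs_replace_nests(nests, best_nest, p_a=0.25, rng=None):
    """Guided nest replacement of Eq. (28): abandoned nests are pulled
    towards the global best instead of being re-seeded at random."""
    rng = rng or np.random.default_rng()
    K = (rng.random(nests.shape) > p_a).astype(float)   # discovery mask
    return nests + rng.random(nests.shape) * (best_nest - nests) * K
```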

Figure 1
figure 1

Flowchart of FCS optimization algorithm.

Dynamic control Cuckoo Search (DCCS)

Inspired by the breeding behaviour of cuckoos, the original CS technique is a global optimization methodology that mimics the natural process of cuckoos searching for nests and laying eggs, incorporating the Lévy flight mechanism observed in birds. Yang and Deb proposed three idealized rules34:

  • In each generation, a cuckoo lays a single egg and deposits it in a randomly chosen nest for incubation.

  • Until a better nest is found, future reproduction takes place in the best nest discovered so far.

  • The number of available nests remains constant across generations, and the host discovers the alien egg with probability \({\text{Pa}}\); if the egg is found, the host either discards it or abandons the whole nest and builds a new one elsewhere.

Here, \({\text{Pa}}\) denotes the probability of the host bird recognizing the egg as not its own offspring; it is commonly set to 0.25 in the literature.

The conventional CS approach simulates the nest-searching mechanism of cuckoos by generating a candidate population, selecting the best candidates, and performing random migrations. Building upon the idealized rules above, the algorithm determines the search route and the positions of the cuckoos according to (29) and then generates the candidate populations.

$${x}_{i}^{t+1}={x}_{i}^{t}+\alpha \otimes L\left(\beta \right)$$
(29)

Among these variables, \({x}_{i}^{t+1}\) and \({x}_{i}^{t}\) denote the position vectors of the ith bird nest in the (t + 1)th and tth generations, respectively. \({x}_{i}^{t}\) contains the components of the nest's position, with \(d\) representing the dimensionality of each nest and the index \(j\) denoting any specific dimension from 1 to d. The variable t denotes the iteration number of the algorithm, starting from \(t = 1\) and ending at \(t_c\) when the algorithm converges; the value of \(t_c\) depends on the convergence of the algorithm. The parameter \(\alpha\) is a constant known as the step-size factor, which controls the range of the random search; its value is positive and can vary depending on the specific situation. The symbol \(\otimes\) represents point-to-point multiplication, while \(L\left(\beta \right)\) represents the random optimization route following the Lévy flight mechanism. The term \(\alpha \otimes L\left(\beta \right)\) denotes the step size of the Lévy flight, i.e., the distance that a cuckoo covers from the tth to the (t + 1)th generation in a randomly distributed way based on the Lévy flight. The relationship between the Lévy flight's random optimization path and the iteration time \(t\) follows a Lévy distribution, as given in (30), where \(\beta\) is the exponential parameter and \(\mu\) is a random number drawn from a normal distribution. The expression illustrates how the CS algorithm's optimization route alternates between frequent small jumps and infrequent long jumps, enabling the algorithm to explore a larger search area and escape from local optima.

$$Levy\left(\beta \right)\sim \mu ={t}^{-\beta },\quad 1<\beta \le 3$$
(30)

To execute the CS algorithm, the flight jump route that determines the step size must be simulated. The step size is calculated as follows:

$$Levy\left(\beta \right)=\frac{\mu }{{\left|v\right|}^{\frac{1}{\beta }}}$$
(31)

Among these variables: \(\mu\) and \(v\) are random numbers drawn from a normal distribution, \(\beta\) is a parameter representing skewness, typically set to \(\beta\) = 1.5.
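A small Python sketch of the step-size generation of Eq. (31), combined with the position update of Eq. (29), is given below; it follows the simplified form printed in the paper (the full Mantegna scheme additionally scales \(\mu\) by a \(\beta\)-dependent factor), so it should be read as an illustrative approximation.

```python
import numpy as np

def levy_step(size, beta=1.5, rng=None):
    """Levy-flight step of Eq. (31): mu / |v|**(1/beta), mu and v ~ N(0, 1)."""
    rng = rng or np.random.default_rng()
    mu = rng.standard_normal(size)
    v = rng.standard_normal(size)
    return mu / np.abs(v) ** (1.0 / beta)

# Position update of Eq. (29): x_new = x + alpha * levy_step(x.shape, beta)
```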

In the context of an actual optimization issue, the position of the nest, \({x}_{i}^{t}\), denotes the feasible solution area for all variables in the problem (with d dimensions). The fitness values associated with each nest correspond to the objective function value for different variable values. During the evolution approach, after modifying the bird nest's position using (29), a random number, denoted as r and ranging from 0 to 1, is compared with the probability \({p}_{a}\). If r is greater than Pa, the position \({x}_{i}^{t+1}\) is randomly changed; otherwise, it remains unchanged. A set of bird nest positions with improved fitness is ultimately retained and represented as \({x}_{i}^{t+1}\).

An analysis of the original CS method and of existing research findings shows that reducing the step-size factor "\(\alpha\)" linearly as the simulation progresses enhances the algorithm's convergence speed and improves its local exploration capabilities. Moreover, decreasing the step size as the fitness values change leads to improved convergence accuracy and solution quality. In the original CS methodology, however, the step-size factor varies continuously and randomly without considering the algorithm's progress, which results in slow convergence in the final stages. Additionally, the skewness parameter "\(\beta\)" in Eq. (31) significantly influences the generated step size.

To address these limitations, this study introduces the concept of the iteration ratio and fitness ratio to dynamically adapt the parameter skewness "\(\beta\)" and, consequently, the step-size. This dynamic adaptation is achieved by incorporating Eq. (32), which includes a dynamic balance factor denoted as "\(\mathrm{\vartheta }\)." The balance factor ensures a proper weighting of the iteration ratio and fitness ratio within the algorithm, balancing the trade-off between convergence speed and accuracy.

$${\beta }_{t}={\beta }_{min}+\vartheta \left(1-\frac{t}{m}\right)\left(1-\vartheta \right)\frac{{f}_{min}}{{f}_{max}}, \beta \in \left[1, 2\right]$$
(32)

In the proposed method, several parameters are introduced to enhance the step-size adaptation of the CS method. The minimum skewness value "\({\beta }_{min}\)" serves as a lower bound for the skewness parameter. A dynamic balance coefficient "\(\vartheta\)", constrained to the range (0, 1), is applied to manage the proportion of the iteration ratio and the fitness ratio in the step-size calculation. As the optimization progresses, the step size decreases continuously. To prevent premature convergence and improve convergence accuracy, the iteration ratio (\(\frac{t}{m}\)) and the fitness ratio (\(\frac{{f}_{min}}{{f}_{max}}\)) play a crucial role: the iteration ratio reduces the step size as the algorithm progresses, thereby accelerating convergence, while the fitness ratio ensures that the step size decreases as the search approaches the optimal solution. By dynamically weighting these two ratios through the balance factor "\(\vartheta\)", the algorithm avoids blind, overly rapid convergence with increasing iterations and mitigates the risk of premature convergence into local optima. Consequently, this approach improves both the convergence characteristics and the solution quality of the CS technique. The flowchart of the DCCS algorithm is presented in Fig. 2.
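The dynamic skewness of Eq. (32) can be sketched in Python as follows; the function implements the formula exactly as printed (some formulations instead combine the two ratios additively), the default value of theta is illustrative, and the clipping to the stated range [1, 2] is an assumption based on the text.

```python
def dccs_beta(t, m, f_min, f_max, beta_min=1.0, theta=0.5):
    """Dynamic skewness parameter of Eq. (32).

    t, m          : current and maximum iteration number
    f_min, f_max  : best and worst fitness values in the current population
    theta         : balance factor in (0, 1) weighting iteration vs. fitness ratio
    """
    beta = beta_min + theta * (1.0 - t / m) * (1.0 - theta) * (f_min / f_max)
    return min(max(beta, 1.0), 2.0)   # keep beta inside the stated range [1, 2]
```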

Figure 2
figure 2

Flowchart of DCCS algorithm.

Salp Swarm Algorithm

The Salp Swarm Algorithm (SSA) is an optimization technique that draws inspiration from the movement of salps, small marine animals that expand and contract their bodies to move (see Fig. 3). First proposed in 2017 by Mirjalili et al., the SSA is a meta-heuristic algorithm that aims to solve complex engineering problems33. The SSA involves a population of simulated salps, where each salp represents a potential solution to the optimization problem. These virtual salps move in the search space by following a set of equations that imitate the movement patterns of real salps. During each iteration of the algorithm, the fitness of each salp is assessed, and the most promising solutions are selected for the next generation. The SSA utilizes several parameters to control the behavior of the salps, including the salp step size and the attraction and repulsion coefficient. In the modeling process, salps are divided into two groups: the leader and the followers. The leader, located at the front of the swarm, guides the group in their quest for food and prey, while the remaining salps are considered followers who trail behind the leader. The salp chain model aims to efficiently explore and exploit the space around both stationary and mobile food sources, including identifying the positions of local and global optimal solutions33.

Figure 3
figure 3

(a) single salp and (b) swarm of salps.

Suppose there is a specific system that requires optimization, where \({\text{N}}\) represents the number of variables to be optimized, \({\text{X}}\) corresponds to the position of a particular salp, and \({\text{M}}\) represents the objective, or source of food, that the salp swarm aims to achieve. The leader's position in the search process is updated using the following equation33:

$${x}_{i}^{1} =\left\{\begin{array}{ll}{y}_{i}+{r}_{1}\left(\left({ub}_{i}-{lb}_{i}\right){r}_{2}+{lb}_{i}\right)& \quad {r}_{3}\ge 0\\ {y}_{i}-{r}_{1}\left(\left({ub}_{i}-{lb}_{i}\right){r}_{2}+{lb}_{i}\right)& \quad {r}_{3}<0\end{array}\right.$$
(33)

In the above equation, \({x}_{i}^{1}\) and \({y}_{i}\) represent the position of the first salp and the position of the food source, respectively, in the ith dimension. The values of \(lb\) and \(ub\) correspond to the lower and upper bounds of the ith dimension. The variables \({r}_{1}\), \({r}_{2}\), and \({r}_{3}\) denote randomly generated numbers.

Of the three random numbers mentioned, \({r}_{1}\) is the most important, as it helps to maintain a balance between exploration and exploitation during the search process. The expression for \({r}_{1}\) is as follows:

$${r}_{1}= 2{e}^{-{\left(\frac{4l}{L}\right)}^{2}}$$
(34)

In Eq. (34), \(L\) denotes the maximum number of iterations and \(l\) the current iteration, while \({r}_{2}\) and \({r}_{3}\) in Eq. (33) are random numbers generated in the range [0, 1]. Newton's law of motion is used to update the positions of the followers, and the corresponding update equation is:

$${x}_{i}^{j} =\frac{1}{2}\lambda {t}^{2}+{\delta }_{0}t$$
(35)

where \(j \ge 2\) and \(x_{i}^{j}\) signifies the location of the jth salp in the ith dimension, t is the time, \({\delta }_{0}\) is an initial speed, and \(\uplambda =\frac{{\delta }_{final}}{{\delta }_{0}}\) , where \(\updelta =\frac{{\text{x}}-{{\text{x}}}_{0}}{{\text{t}}}\).

In optimization models, the time interval t is equivalent to the iteration, and the initial speed \({\delta }_{0}\) is set to 0. With this in mind, the equation for updating the positions of the followers can be expressed as follows:

$${x}_{i}^{j} =\frac{1}{2}\left({x}_{i}^{j}+{x}_{i}^{j-1}\right)$$
(36)

where \({\text{j}}\ge 2\). The aforementioned equation indicates that the followers update their position based on their own position and the position of the salp preceding them. To ensure that the salps remain within the defined search area, a constraint equation is used to bring any salps that go beyond the predefined boundaries back into the search space. Figure 4 illustrates a flowchart of the Salp Swarm Algorithm (SSA).

Figure 4
figure 4

Flowchart of SSA.

$${x}_{i}^{j} =\left\{\begin{array}{ll}{l}^{j}& \quad if \;\; {x}_{i}^{j}\le {l}^{j}\\ {u}^{j}&\quad if \;\;{x}_{i}^{j}\ge {u}^{j}\\ {x}_{i}^{j}&\quad otherwise\end{array}\right.$$
(37)
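One SSA iteration, combining the leader update of Eqs. (33)-(34), the follower update of Eq. (36), and the bound handling of Eq. (37), could be sketched in Python as below. Note that the leader branch is selected here with the threshold r3 ≥ 0.5 commonly used in SSA implementations, since the literal condition r3 ≥ 0 in Eq. (33) is always satisfied for r3 drawn from [0, 1].

```python
import numpy as np

def ssa_step(pop, food, lb, ub, l, L, rng=None):
    """One Salp Swarm Algorithm iteration.

    pop  : (N, d) array of salp positions; row 0 is the leader
    food : (d,) best solution found so far (the food source)
    l, L : current and maximum iteration number
    """
    rng = rng or np.random.default_rng()
    d = pop.shape[1]
    r1 = 2.0 * np.exp(-(4.0 * l / L) ** 2)                   # Eq. (34)
    r2, r3 = rng.random(d), rng.random(d)
    step = r1 * ((ub - lb) * r2 + lb)
    pop[0] = np.where(r3 >= 0.5, food + step, food - step)   # Eq. (33)
    for j in range(1, pop.shape[0]):                         # Eq. (36): followers
        pop[j] = 0.5 * (pop[j] + pop[j - 1])
    return np.clip(pop, lb, ub)                              # Eq. (37): bound handling
```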

Gradient based optimizer (GBO)

The Gradient-Based Optimizer (GBO) is a metaheuristic optimization method, proposed by Ahmadianfar et al. in 202035, that uses gradient-like information to direct the search process. The algorithm begins from a starting point and iteratively updates it using gradient information of the objective function: at each iteration it estimates the gradient at the current solution and moves in the direction of the negative gradient. The step size of each iteration is controlled by a learning-rate parameter, and a random perturbation is added to prevent the search from getting stuck in local optima.

The gradient-based Newton's technique35 is the source of inspiration for the optimization algorithm GBO. The GBO algorithm consists of a set of vectors used to search the solution space and two main operators: the gradient search rule (GSR) and the local escaping operator (LEO). The GSR operator makes use of a gradient-based strategy to boost the algorithm's capacity for search space exploration and quicken the rate of convergence to a better solution. However, the LEO operator is made to assist the algorithm in escaping local optima and extending its search to additional areas of the solution space.

The GSR model's mathematical formulation is as follows:

$$\mathrm{GSR }=\mathrm{ rand}.{\upsigma }_{1}\frac{2 \cdot \Delta {\text{x}}.{{\text{x}}}_{{\text{n}}}}{({{\text{x}}}_{{\text{worst}}}-{{\text{x}}}_{{\text{best}}}+\upvarepsilon )}$$
(38)

The term "\({\text{rand}}\)" refers to a normally distributed random number, while "\(\upvarepsilon\)" represents a small value between 0 and 0.1. "\({{\text{x}}}_{{\text{best}}}\)" and "\({{\text{x}}}_{{\text{worst}}}\)" indicate the most favorable and unfavorable solutions obtained, respectively. "\({\upsigma }_{1}\)" is a coefficient used for balancing, which is mathematically defined as:

$${\upsigma }_{1}=2\cdot {\text{rand}}\cdot \mathrm{\alpha }-\mathrm{\alpha }$$
(39)
$$\mathrm{\alpha }=\left|\upbeta \cdot {\text{sin}}\left(\frac{3\uppi }{2}+{\text{sin}}\left(\upbeta \cdot \frac{3\uppi }{2}\right)\right)\right|$$
(40)
$$\upbeta ={\upbeta }_{{\text{min}}}+\left({\upbeta }_{{\text{max}}}-{\upbeta }_{{\text{min}}}\right)\times {\left(1-{\left(\frac{{\text{m}}}{{\text{M}}}\right)}^{3}\right)}^{2}$$
(41)

The values of \({\upbeta }_{{\text{min}}}\) and \({\upbeta }_{{\text{max}}}\), fixed at 0.2 and 1.2 respectively, together with the current iteration number '\({\text{m}}\)' and the total number of iterations '\({\text{M}}\)', are used to improve the exploitation of the region near '\({{\text{x}}}_{{\text{n}}}\)'. Additionally, a direction of movement (\({\text{DM}}\)) is included to aid this exploitation; it is defined as follows:

$$\mathrm{DM }=\mathrm{ rand}.{\upsigma }_{2}({{\text{x}}}_{{\text{best}}}-{{\text{x}}}_{{\text{n}}})$$
(42)
$${\upsigma }_{2}=2\cdot {\text{rand}}\cdot \mathrm{\alpha }-\mathrm{\alpha }$$
(43)

Using this method, the new position of the agent can be represented as:

$${{\text{x}}}_{{\text{n}}+1}= {{\text{x}}}_{{\text{n}}}-{\text{GSR}}+{\text{DM}}$$
(44)
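The position update of Eqs. (38)-(44) can be sketched in Python as follows. The step term \(\Delta x\) is not defined in this excerpt, so it is passed in as an argument here (it is constructed from randomly selected solutions in the full GBO of ref. 35); all array arguments are assumed to be NumPy vectors of equal length, and the default value of `eps` is one illustrative choice inside the stated (0, 0.1] range.

```python
import numpy as np

def gbo_position_update(x_n, x_best, x_worst, delta_x, m, M,
                        beta_min=0.2, beta_max=1.2, eps=0.05, rng=None):
    """Gradient search rule (GSR) and direction of movement (DM), Eqs. (38)-(44)."""
    rng = rng or np.random.default_rng()
    beta = beta_min + (beta_max - beta_min) * (1 - (m / M) ** 3) ** 2     # Eq. (41)
    alpha = abs(beta * np.sin(1.5 * np.pi + np.sin(1.5 * np.pi * beta)))  # Eq. (40)
    sigma1 = 2 * rng.random(x_n.shape) * alpha - alpha                    # Eq. (39)
    sigma2 = 2 * rng.random(x_n.shape) * alpha - alpha                    # Eq. (43)
    gsr = (rng.random(x_n.shape) * sigma1 * (2 * delta_x * x_n)
           / (x_worst - x_best + eps))                                    # Eq. (38)
    dm = rng.random(x_n.shape) * sigma2 * (x_best - x_n)                  # Eq. (42)
    return x_n - gsr + dm                                                 # Eq. (44)
```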

The local escaping operator (LEO) enables the GBO to escape from local optima. This step utilizes the positions created by the GBO, and the following pseudocode describes how it operates35:

figure a

The variables appearing in this pseudocode are defined as follows. Two solutions generated by the GBO are represented as \({x1}_{n}^{m}\) and \({x2}_{n}^{m}\) for a population of \(m\) members and \(n\) optimization variables, and two randomly selected solutions are denoted by \({x}_{r1}^{m}\) and \({x}_{r2}^{m}\). The probability is represented as "\({\text{pr}}\)". The variables \({f}_{1}\) and \({f}_{2}\) are random numbers with different distributions: the former is a uniform random number between − 1 and 1, while the latter is drawn from a normal distribution with a mean of 0 and a standard deviation of 1.

$${{\text{u}}}_{1}= \left\{\begin{array}{ll}2\cdot {\text{rand}}& \quad {\upmu }_{1}<0.5\\ 1& \quad {\text{otherwise}}\end{array}\right.$$
(45)
$${{\text{u}}}_{2}, {{\text{u}}}_{3}= \left\{\begin{array}{ll}{\text{rand}}& \quad {\upmu }_{1}<0.5\\ 1& \quad {\text{otherwise}}\end{array}\right.$$
(46)

where \({{\text{u}}}_{1}\) is a number in the [0, 1] range and \({\text{rand}}\) is a random number between 0 and 1. Figure 5 depicts the overall workflow of the GBO. GBO has several advantages over other optimization methods. It only needs to compute the gradient of the objective function, which can be done quickly for many function types, making it computationally efficient. It is also simple to implement, as it requires only fundamental operations such as addition and multiplication. Finally, on a set of benchmark functions, GBO has demonstrated higher performance than other optimization algorithms in terms of convergence speed and solution quality.

Figure 5
figure 5

Flowchart of GBO.

Northern Goshawk Optimization (NGO)

NGO is an optimization algorithm inspired by the hunting behavior of the northern goshawk; it was proposed in 2021 by Dehghani et al.36. The northern goshawk is known for its speed, agility, and precision when hunting, and the NGO algorithm emulates this behavior by combining exploration and exploitation techniques to identify the optimal solution for a given problem.

The NGO algorithm starts by randomly initializing a population of candidate solutions, which are referred to as individuals. Each individual is represented as a vector of variables that can be adjusted to explore the solution space. The algorithm then evaluates the fitness of each individual by using an objective function. The search and attack operators are two critical components of the NGO algorithm. The search operator randomly adjusts an individual's variables to explore the solution space, while the attack operator selects the best individual and modifies its variables to exploit promising spots of the solution search area. The NGO algorithm also includes a memory mechanism known as the memory pool. This memory pool stores the best individuals found so far and guides the search and attack operators towards promising locations of the solution space.

The population-based NGO algorithm uses the searching behavior of northern goshawks as its guiding principle. Each member represents a candidate solution to the problem and consists of a set of variable values. Mathematically, these members can be represented as vectors, and together they constitute the population matrix of the method. The population is initialized by randomly placing its members within the search space. The population matrix of the NGO technique is defined as follows36:

$$X=\left[\begin{array}{c}{X}_{1}\\ \vdots \\ \begin{array}{c}{X}_{i}\\ \vdots \\ {X}_{N}\end{array}\end{array}\right]=\left[\begin{array}{ccc}{X}_{\mathrm{1,1}}& \dots & \begin{array}{ccc}{X}_{1,d}& \cdots & {X}_{1,m}\end{array}\\ \vdots & \ddots & \begin{array}{ccc}\vdots & \ddots & \vdots \end{array}\\ \begin{array}{c}{X}_{i,1}\\ \vdots \\ {X}_{N,1}\end{array}& \begin{array}{c}\cdots \\ \ddots \\ \cdots \end{array}& \begin{array}{c}\begin{array}{ccc}{X}_{i,d}& \cdots & {X}_{i,m}\end{array}\\ \begin{array}{ccc}\vdots & \ddots & \vdots \end{array}\\ \begin{array}{ccc}{X}_{N,d}& \cdots & {X}_{N,m}\end{array}\end{array}\end{array}\right]$$
(47)

The population of Northern Goshawks in the NGO algorithm is represented by variable \(X\). Each member of the population is denoted by \({X}_{i}\) and is a candidate solution to the problem. The values of the jth variable determined by the ith candidate solution are represented as \({X}_{i,j}\). \({\text{N}}\) represents the number of population members, while m represents the number of variables in the problem.

As already noted, each individual in the population is a potential solution to the problem, so each member can be used to evaluate the problem's objective function. These objective function values can be represented as a vector, as in (48).

$$F(X)=\left[\begin{array}{c}{F}_{1}=F({X}_{1})\\ \vdots \\ \begin{array}{c}{F}_{i}=F({X}_{i})\\ \vdots \\ {F}_{N}=F({X}_{N})\end{array}\end{array}\right]$$
(48)

where \({F}_{i}\) is the objective function value obtained by the ith suggested solution and \(F\) is the vector of achieved objective function values.

Phase 1: Prey identification (exploration)

During the initial phase of hunting, the northern goshawk selects its prey randomly and then swiftly attacks it. This step enhances the exploration capability of the NGO algorithm by allowing solutions to be selected randomly from the search area, which facilitates a global search aimed at identifying the most promising region. The principles of this phase are mathematically represented as follows:

$${P}_{i}={X}_{k},\quad i =1, 2, \ldots, N,\;\; k =1, 2, \ldots, i-1, i+1, \ldots, N$$
(49)
$${X}_{i,j}^{new, P1}=\left\{\begin{array}{ll}{x}_{i,j}+r\left({p}_{i,j}-I{x}_{i,j}\right)& \quad {F}_{{P}_{i}}<{F}_{i}\\ {x}_{i,j}+r\left({{x}_{i,j} -p}_{i,j}\right)&\quad {F}_{{P}_{i}}\ge {F}_{i}\end{array}\right.$$
(50)
$${X}_{i}=\left\{\begin{array}{ll}{X}_{i}^{new, P1}& \quad {F}_{i}^{new, P1}<{F}_{i}\\ {X}_{i}& \quad {F}_{i}^{new, P1}\ge {F}_{i}\end{array}\right.$$
(51)

where \({P}_{i}\) denotes the position of the prey selected by the ith northern goshawk and \({F}_{{P}_{i}}\) is the objective function value of that prey. The value of \(k\) is a random natural number within the range [1, N] with \(k \ne i\), and \({X}_{i}^{new, P1}\) is the new state of the ith proposed solution, with \({X}_{i,j}^{new, P1}\) denoting its jth dimension. \({F}_{i}^{new, P1}\) represents the objective function value of the proposed solution after the first phase of NGO. Additionally, \(r\) and \(I\) are random numbers, with \(r\) in the range [0, 1] and \(I\) equal to either 1 or 2.

Phase 2: Chase and escape operation (exploitation)

After the northern goshawk attacks its prey, the prey tries to flee, and the goshawk continues to pursue it using a "tail and chase" strategy. Thanks to its remarkable speed, the northern goshawk can pursue its prey in almost any situation until it captures it. Simulating this hunting behavior enhances the algorithm's ability to exploit local regions of the search space. In the proposed NGO algorithm, this behavior is restricted to an attack region of radius R around the current position. Equations (52) to (54) mathematically model the chase-and-escape process of this second phase.

$${X}_{i,j}^{new, P2}={x}_{i,j}+R(2r-1){x}_{i,j}$$
(52)
$$R =0.02\left(1-\frac{t}{T}\right)$$
(53)
$${X}_{i}=\left\{\begin{array}{ll}{X}_{i}^{new, P2}& \quad {F}_{i}^{new, P2}<{F}_{i}\\ {X}_{i}& \quad {F}_{i}^{new, P2}\ge {F}_{i}\end{array}\right.$$
(54)

Here, \({X}_{i}^{new, P2}\) signifies the revised state of the ith proposed solution in the second phase of the NGO algorithm, \({X}_{i,j}^{new, P2}\) specifies the adjusted value of its jth dimension, and \({F}_{i}^{new, P2}\) denotes the corresponding objective function value. The current iteration and the maximum number of iterations are denoted by t and T, respectively.
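Both NGO phases, with their greedy acceptance rules of Eqs. (51) and (54), can be sketched in Python as follows; bound handling is omitted for brevity, and `objective` is assumed to return the penalised OPF objective for a candidate vector.

```python
import numpy as np

def ngo_iteration(pop, fit, objective, t, T, rng=None):
    """One NGO iteration: prey identification (Eqs. 49-51) then chase/escape (Eqs. 52-54)."""
    rng = rng or np.random.default_rng()
    N, d = pop.shape
    R = 0.02 * (1.0 - t / T)                              # Eq. (53)
    for i in range(N):
        # Phase 1: attack a randomly selected prey (another population member).
        k = rng.choice([j for j in range(N) if j != i])   # Eq. (49)
        prey, r, I = pop[k], rng.random(d), rng.integers(1, 3)
        if fit[k] < fit[i]:
            cand = pop[i] + r * (prey - I * pop[i])       # Eq. (50), first branch
        else:
            cand = pop[i] + r * (pop[i] - prey)           # Eq. (50), second branch
        f_cand = objective(cand)
        if f_cand < fit[i]:                               # Eq. (51): greedy acceptance
            pop[i], fit[i] = cand, f_cand
        # Phase 2: local chase within radius R around the current position.
        cand = pop[i] + R * (2 * rng.random(d) - 1) * pop[i]   # Eq. (52)
        f_cand = objective(cand)
        if f_cand < fit[i]:                               # Eq. (54): greedy acceptance
            pop[i], fit[i] = cand, f_cand
    return pop, fit
```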

After updating all population members using the first and second stages of the NGO, and following one iteration of the method, the population members' new values, the objective function, and the best suggested solution are established. Up until the algorithm reaches its last iteration, this process is repeated. Once the full NGO method has been implemented, the best proposed solution that was discovered during the algorithm's iterations is regarded as a quasi-optimal solution for the specific optimization problem. The flowchart in Fig. 6 outlines the different stages of the NGO technique.

Figure 6
figure 6

Flowchart of the NGO algorithm.

Opposition-based flow directional algorithm (OFDA)

The Flow Direction Algorithm (FDA) is a recently proposed metaheuristic optimization technique37. It takes its cues from how water flows through a landscape, always heading towards the lowest point. The FDA starts by generating a set of potential solutions, represented as points in the search region, and then assesses their quality using an objective function. The algorithm subsequently applies a series of steps to gradually improve the quality of the results.

The flow direction operator, which mimics water flowing downhill towards the global minimum, is the FDA's main operation. The algorithm computes the slope of the objective function at each point in the search space and moves the points in the downhill direction; this process is repeated until the points reach a local minimum. To guarantee population diversity, the FDA uses a diversity maintenance mechanism: it randomly selects individuals from the population and applies mutation and crossover operators to create new solutions, ensuring that the population covers different regions of the search space.

The FDA is discussed in detail in reference46. It takes inspiration from the flow of water in a drainage basin, which moves towards the outlet point with the lowest height, with the direction of flow influenced by neighboring flows and their slopes. In the FDA, each of the \(\alpha\) flows, with position \({Flow}_{X}\) and corresponding height (fitness) \(f\left({Flow}_{X}\right)\), acts as a search agent; the flows are initialized within the boundaries \([{\text{lb}}, {\text{ub}}]\) of the drainage basin. The FDA estimates new flow positions in two ways. The first assumes that a flow generates \(\beta\) neighbor flows, \({{\text{Neighbor}}}_{X}\) (refer to Eq. (3) in46), while moving towards the drainage basin outlet, and then updates its location, \({Flow}_{newX}\) (refer to Eq. (8) in46), based on the best neighbor flow. The second updates the flow positions, \({Flow}_{newX}\) (refer to Eq. (9) in46), by assuming that the present flow encounters a random flow and changes its path. Finally, the flow's position, expressed as \({Flow}_{X(i)}\), is updated if the new one is better than the previous one.

$${Flow}_{X(i)}=\left\{\begin{array}{ll}{Flow}_{newX(i)}& \quad f\left({Flow}_{newX(i)}\right)<f\left({Flow}_{X(i)}\right)\\ {Flow}_{X(i)}& \quad otherwise\end{array}, \forall i \in \left[1, \alpha \right]\right.$$
(55)

The height of \({Flow}_{newX(i)}\) is denoted by \(f\left({Flow}_{newX(i)}\right)\). The algorithm updates the flow position iteratively until it converges to the optimal solution or reaches the maximum iteration, \({{\text{Max}}}_{I{\text{ter}}}\). The FDA has shown exceptional performance on benchmark functions and has yielded better results for real-world engineering design problems. Further details about the FDA can be found in reference46.

As previously mentioned, the FDA algorithm updates its solutions in the search space based on random neighbor flows or other random flows. However, this approach may lead a flow into a local optimal solution, which acts as a trap. To avoid this issue, the opposition-based learning (OBL) method can be utilized47; OBL extends the search process in both directions. Let \({Flow}_{X(i)}\) be a flow in the d-dimensional search space with a range of \(\left[LB, UB\right]\), where

$$\begin{aligned}{Flow}_{X(i)} &=\left\{{Flow}_{X(i, 1)}, {Flow}_{X(i, 2)},...., {Flow}_{X(i, d)}\right\} \\ {\text{LB}} &= \left\{{\text{lb}}(1),\mathrm{ lb}(2),......,\mathrm{ lb}({\text{d}})\right\} \\ {\text{UB}} &= \left\{{\text{ub}}(1),\mathrm{ ub}(2),......,\mathrm{ ub}({\text{d}})\right\}\end{aligned}$$
(56)

The opposite flow can then be determined as:

$${OFlow}_{X(i)}=\left\{{OFlow}_{X(i, 1)}, {OFlow}_{X(i, 2)},...., {OFlow}_{X(i, d)}\right\}$$
(57)
$${OFlow}_{X(i,j)} ={\text{ub}}({\text{j}}) +{\text{lb}}({\text{j}}) -{Flow}_{X(i,j)}, \quad \forall i \in \left[1, \alpha \right]\;\mathrm{and}\;\forall j \in \left[1, d\right]$$
(58)

The opposition-based flow directional algorithm (OFDA) employs a selection mechanism, which is described below, to update its solutions in the search space.

$${Flow}_{X(i)}=\left\{\begin{array}{ll}{OFlow}_{X(i)}& \quad f\left({OFlow}_{X(i)}\right)<f\left({Flow}_{X(i)}\right)\\ {Flow}_{X(i)}& \quad otherwise\end{array}, \forall i \in \left[1, \alpha \right]\right.$$
(59)

The above equation updates the flow position in OFDA based on its opposite flow, where \(f\left({OFlow}_{X(i)}\right)\) denotes the height of \({OFlow}_{X(i)}\). The flowchart in Fig. 7 illustrates this update procedure and the overall deployment of OFDA.
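A short Python sketch of the opposition-based update of Eqs. (57)-(59) is given below; `flows` is the (α, d) array of current flow positions, and `objective` is assumed to return the height (fitness) of a position.

```python
import numpy as np

def opposition_update(flows, fitness, objective, lb, ub):
    """Opposition-based learning step of Eqs. (57)-(59): evaluate the
    opposite of each flow and keep whichever of the pair is lower."""
    for i in range(flows.shape[0]):
        opposite = ub + lb - flows[i]          # Eq. (58)
        f_opp = objective(opposite)
        if f_opp < fitness[i]:                 # Eq. (59): greedy selection
            flows[i], fitness[i] = opposite, f_opp
    return flows, fitness
```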

Figure 7. Flowchart of OFDA.

Results and discussion

The OPF problem is solved by implementing the suggested FCS, SSA, DCCS, GBO, NGO, and OFDA algorithms. In this study, the IEEE 30 Bus test system is used to examine seven different case studies. The programs developed for this paper were written in MATLAB and executed on an Intel i5 computer running at 2.20 GHz with 4.00 GB of RAM. For each case, the optimal power flow program is run 30 times with each of the suggested techniques. The maximum number of iterations is set to 100 for all algorithms, and the population size is 40 agents. The single line diagram of the IEEE 30 Bus system is presented in Fig. 8. The primary characteristics of the IEEE 30-bus test system are listed in Table 1; its total power capacity is 435.0 MW, and detailed data for this test system can be found in48.
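
The experimental protocol described above (30 independent runs per algorithm, 100 iterations, 40 agents) can be organised as in the MATLAB sketch below. The wrappers runOptimizer and opfObjective are hypothetical names used only to make the protocol explicit; they stand for the chosen metaheuristic and for the OPF objective evaluation (a power flow of the IEEE 30 Bus system), respectively.

```matlab
% Hypothetical driver for the experimental protocol (names are placeholders)
algorithms = {'FCS','SSA','DCCS','GBO','NGO','OFDA'};
nRuns = 30;  MaxIter = 100;  popSize = 40;
bestCost = zeros(nRuns, numel(algorithms));      % best objective value of every run
for a = 1:numel(algorithms)
    for r = 1:nRuns
        % runOptimizer: placeholder wrapper around one metaheuristic;
        % opfObjective: placeholder OPF objective (e.g. fuel cost via a power flow)
        bestCost(r,a) = runOptimizer(algorithms{a}, @opfObjective, popSize, MaxIter);
    end
end
```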

Figure 8. Single line diagram of the IEEE 30 Bus system.

Table 1 The fundamental features of the IEEE 30-bus test system.

The different algorithms are applied to the OPF problem of the IEEE 30 Bus system using the following vector of search (control) variables:

$$x= [P1, P2, P5, P8, P11, P13, V1, V2, V5, V8, V11, V13, T11, T12, T15, T36, QC10, QC12, QC15, QC17, QC20, QC21, QC23, QC24, QC27]$$

The objective function is one of those presented in the “Problem formulation” section, subject to the system constraints.
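
For reference, the 25-element decision vector x can be unpacked into the physical control variables as in the short MATLAB sketch below; the index ranges simply follow the ordering listed above, and the bus/branch numbers are those of the standard IEEE 30-bus data48.

```matlab
% Unpacking the 25-dimensional decision vector x (ordering as listed above)
Pg  = x(1:6);     % active power of the generators at buses 1, 2, 5, 8, 11 and 13 (MW)
Vg  = x(7:12);    % voltage set-points of the six generator buses (p.u.)
Tap = x(13:16);   % tap settings of transformers T11, T12, T15 and T36
Qc  = x(17:25);   % shunt compensation at buses 10, 12, 15, 17, 20, 21, 23, 24 and 27 (MVar)
```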

Case-1

The proposed FCS, SSA, DCCS, GBO, NGO, and OFDA optimization methodologies have been implemented for 30 individual runs to address the OPF problem with minimization of the fuel generation cost of the six generators of the system. The best value of the objective function obtained in each run is recorded and presented in Fig. 9. A statistical study, including the best and worst values of the fuel cost as well as the mean, standard deviation, and root mean square error of the cost function over the 30 individual runs, has been conducted; these results, together with the elapsed time, are listed in Table 2. Friedman's ANOVA and the Wilcoxon signed rank test have been performed to evaluate the optimization algorithms. Friedman's ANOVA yields a p-value of 0.0015989 for the columns (algorithms), so the null hypothesis is rejected at the standard 5% significance level. The mean ranks are [3.5000, 4.6667, 2.5000, 1, 4.6667, 4.6667] for [FCS, SSA, DCCS, GBO, NGO, OFDA], respectively, which confirms the robustness of the GBO algorithm. Moreover, the Wilcoxon signed rank test results are shown in Table 2; the reported p-values indicate that the null hypothesis is rejected, with h = 1. A boxplot based on the 30 values obtained from each algorithm for this first case study is provided in Fig. 10. The GBO optimization technique provided the most promising results among the six proposed methods, and its stability is demonstrated by the narrow range within which the objective function varies over the 30 runs. The minimum fuel cost obtained by GBO is 799.0938 $/h, compared to 799.5921 $/h for FCS, 799.6411 $/h for SSA, 799.3019 $/h for DCCS, 799.3542 $/h for NGO, and 799.3041 $/h for OFDA. The good convergence behaviour of GBO is evident from the convergence curves of the proposed algorithms shown in Fig. 11. The optimization results, including the values of the control variables, fuel cost, voltage deviation, voltage stability index \({\text{Lmax}}\), active power losses, and reactive power losses for case 1 compared to the base case, are provided in Table 3. The total fuel generation cost obtained with the GBO algorithm is reduced by 11.404% from the base case, compared with 11.348% for FCS, 11.343% for SSA, 11.381% for DCCS, 11.375% for NGO, and 11.3805% for OFDA. The active power flow in the 41 branches of the IEEE 30 Bus system is presented in Fig. 12a and the active power losses in each branch are shown in Fig. 12b. Similarly, the reactive power flow is presented in Fig. 13a and the reactive power losses in each branch are shown in Fig. 13b. The impact of the optimization process on the voltage profile of the PQ buses of the system is presented in Fig. 14. Finally, the active and reactive power balance based on the results of the six proposed algorithms is provided in Table 4.
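
The statistical comparison summarised above can be reproduced with MATLAB's Statistics and Machine Learning Toolbox, as sketched below for a 30×6 matrix bestCost of best objective values (rows are runs, columns are the algorithms in the order FCS, SSA, DCCS, GBO, NGO, OFDA). The exact options used by the authors are not reported, so this is only an indicative outline.

```matlab
% Indicative statistical comparison of the 30 runs (Statistics and ML Toolbox)
[pF, ~, stats] = friedman(bestCost, 1, 'off');   % Friedman's ANOVA across the algorithm columns
meanRanks = stats.meanranks;                     % mean rank of each algorithm
% Pairwise Wilcoxon signed rank tests of GBO (column 4) against the other algorithms
others = [1 2 3 5 6];
pW = zeros(size(others));  hW = zeros(size(others));
for k = 1:numel(others)
    [pW(k), hW(k)] = signrank(bestCost(:,4), bestCost(:,others(k)));
end
```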

Figure 9. Variation of the objective function over the 30 runs for case 1.

Table 2 Statistical study for case 1.

Figure 10. Boxplot for the results of the objective function of case 1.

Figure 11. Variation of the generation fuel cost for case 1.

Table 3 Optimized values of the control variable for case 1.

Figure 12. Active power flow and losses in the branches of the system for case 1.

Figure 13. Reactive power flow and losses in the branches of the system for case 1.

Figure 14. Voltage profile improvement for case 1.

Table 4 Active and reactive power balance for case 1.

Case-2

The proposed algorithms have been implemented for 30 individual runs to address the OPF problem with the objective function of case 2. The best value of the voltage deviation obtained in each run is recorded and presented in Fig. 15. A statistical study has been conducted and the results are listed in Table 5. A boxplot based on the 30 values of the total voltage deviation is shown in Fig. 16. In this case as well, the GBO optimization technique provided the best performance compared with the others. The minimum total voltage deviation obtained by GBO is 0.08682 p.u., compared to 0.11033 p.u. for FCS, 0.11010 p.u. for SSA, 0.09262 p.u. for DCCS, 0.10048 p.u. for NGO, and 0.09474 p.u. for OFDA. The variation of the total voltage deviation over the 100 iterations of the best run of each algorithm is presented in Fig. 17. The results of the optimization process for case 2 compared to the base case are provided in Table 6. The active power flow in the 41 branches is presented in Fig. 18a and the active power losses in each branch are shown in Fig. 18b. Similarly, the reactive power flow is presented in Fig. 19a and the reactive power losses in each branch are shown in Fig. 19b. The impact of the optimization process on the voltage profile of the PQ buses of the system is presented in Fig. 20. Finally, the active and reactive power balance based on the results of the six proposed algorithms is provided in Table 7.
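
Assuming the conventional definition of the total voltage deviation as the sum of the absolute deviations of the load-bus voltage magnitudes from 1.0 p.u. (the exact formulation used here is the one given in the “Problem formulation” section), the objective of this case can be evaluated from a power flow solution as in the short sketch below, where Vpq denotes the vector of PQ-bus voltage magnitudes.

```matlab
% Total voltage deviation of the PQ (load) buses, assuming the usual sum-of-deviations form
% Vpq: vector of load-bus voltage magnitudes (p.u.) obtained from the power flow solution
VD = sum(abs(Vpq - 1.0));   % total voltage deviation (p.u.)
```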

Figure 15. Variation of the objective function over the 30 runs for case 2.

Table 5 Statistical study for case 2.

Figure 16. Boxplot for the results of the objective function of case 2.

Figure 17. Variation of the total voltage deviation of case 2.

Table 6 Optimization results for case 2.

Figure 18. Active power flow and losses in the branches of the system for case 2.

Figure 19. Reactive power flow and losses in the branches of the system for case 2.

Figure 20. Voltage profile improvement for case 2.

Table 7 Active and reactive power balance for case 2.

Case-3

The proposed algorithms have been implemented for 30 individual runs to address the OPF problem with the objective function of case 3. The best value of the voltage deviation obtained in each run is recorded and presented in Fig. 21. A statistical study has been conducted and the results are listed in Table 8. A boxplot based on the 30 values of the total voltage deviation is shown in Fig. 22. In this case, the GBO optimization technique provided the best performance compared with the others. The minimum total voltage deviation, while simultaneously minimizing the total fuel cost, obtained by GBO is 0.10474 p.u., compared to 0.12739 p.u. for FCS, 0.12657 p.u. for SSA, 0.12045 p.u. for DCCS, 0.12751 p.u. for NGO, and 0.12203 p.u. for OFDA. The variation of the best value of the objective function over the 100 iterations of the best run of each algorithm is presented in Fig. 23, while the variation of the voltage deviation is provided in Fig. 24. The results of the optimization process for case 3 compared to the base case are provided in Table 9. The active power flow in the 41 branches is presented in Fig. 25a and the active power losses in each branch are shown in Fig. 25b. Similarly, the reactive power flow is presented in Fig. 26a and the reactive power losses in each branch are shown in Fig. 26b. The impact of the optimization process on the voltage profile of the PQ buses of the system is presented in Fig. 27. Finally, the active and reactive power balance based on the results of the six proposed algorithms is provided in Table 10.

Figure 21. Variation of the objective function over the 30 runs for case 3.

Table 8 Statistical study for case 3.

Figure 22. Boxplot for the results of the objective function of case 3.

Figure 23. Variation of the total fuel cost of case 3.

Figure 24. Variation of the total voltage deviation of case 3.

Table 9 Optimization results for case 3.

Figure 25. Active power flow and losses in the branches of the system for case 3.

Figure 26. Reactive power flow and losses in the branches of the system for case 3.

Figure 27. Voltage profile improvement for case 3.

Table 10 Active and reactive power balance for case 3.

Case-4

The proposed algorithms have been implemented for 30 individual runs to address the OPF problem with the objective function of case 4. The best value of the voltage stability index Lmax obtained in each run is recorded and presented in Fig. 28. A statistical study has been conducted and the results are listed in Table 11. A boxplot based on the 30 values of Lmax is shown in Fig. 29. In this case, the GBO optimization technique provided satisfactory performance compared with the others. The minimum value of the voltage stability index Lmax obtained by GBO is 0.10052, compared to 0.1003 for FCS, 0.10121 for SSA, 0.1003 for DCCS, 0.10036 for NGO, and 0.1003 for OFDA. The variation of the voltage stability index Lmax is provided in Fig. 30. The results of the optimization process for case 4 compared to the base case are provided in Table 12. The active power flow in the 41 branches is presented in Fig. 31a and the active power losses in each branch are shown in Fig. 31b. Similarly, the reactive power flow is presented in Fig. 32a and the reactive power losses in each branch are shown in Fig. 32b. The impact of the optimization process on the voltage profile of the PQ buses of the system is presented in Fig. 33. Finally, the active and reactive power balance based on the results of the six proposed algorithms is provided in Table 13.
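
For completeness, the sketch below shows how the voltage stability index Lmax is typically computed, assuming the widely used Kessel–Glavitsch L-index; if the formulation in the “Problem formulation” section differs, the corresponding expressions should be used instead. Ybus, the generator/load bus index sets and the complex bus voltages are assumed to come from the solved power flow.

```matlab
% Voltage stability index Lmax, assuming the Kessel-Glavitsch L-index definition
% Ybus: complex bus admittance matrix;  g, l: indices of generator (PV/slack) and load (PQ) buses
% V:    complex bus voltage phasors from the power flow solution
YLL = Ybus(l, l);
YLG = Ybus(l, g);
F    = -YLL \ YLG;                    % load-bus participation matrix, F = -inv(YLL)*YLG
Lj   = abs(1 - (F * V(g)) ./ V(l));   % L-index of each load bus
Lmax = max(Lj);                       % system-wide voltage stability indicator
```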

Figure 28. Variation of the Lmax over the 30 runs for case 4.

Table 11 Statistical study for case 4.

Figure 29. Boxplot for the results of the objective function of case 4.

Figure 30. Variation of the Lmax of case 4.

Table 12 Optimization results for case 4.

Figure 31. Active power flow and losses in the branches of the system for case 4.

Figure 32. Reactive power flow and losses in the branches of the system for case 4.

Figure 33. Voltage profile improvement for case 4.

Table 13 Active and reactive power balance for case 4.

Case-5

The proposed algorithms have been implemented for 30 individual runs to address the OPF problem with the objective function of case 5. The best value of the voltage stability index obtained in each run is recorded and presented in Fig. 34. A statistical study has been conducted and the results are listed in Table 14. A boxplot based on the 30 values of the voltage stability index Lmax is shown in Fig. 35. In this case, the GBO optimization technique provided the best performance compared to the others. The minimum value of the voltage stability index obtained by GBO is 0.11369, compared to 0.11524 for FCS, 0.11677 for SSA, 0.11402 for DCCS, 0.11508 for NGO, and 0.11610 for OFDA. The variation of the best value of the objective function over the 100 iterations of the best run of each algorithm is presented in Fig. 36, while the variation of the voltage stability index Lmax is provided in Fig. 37. The results of the optimization process for case 5 compared to the base case are provided in Table 15. The active power flow in the 41 branches is presented in Fig. 38a, and the active power losses in each branch are shown in Fig. 38b. Similarly, Fig. 39a presents the reactive power flow and Fig. 39b shows the reactive power losses in each branch. The impact of the optimization process on the voltage profile of the system's PQ buses is presented in Fig. 40. Finally, the active and reactive power balance based on the results of the six proposed algorithms is provided in Table 16.

Figure 34. Variation of the Lmax over the 30 runs for case 5.

Table 14 Statistical study for case 5.

Figure 35. Boxplot for the results of the objective function of case 5.

Figure 36. Variation of the total fuel generation cost of case 5.

Figure 37. Variation of the Lmax of case 5.

Table 15 Optimization results for case 5.

Figure 38. Active power flow and losses in the branches of the system for case 5.

Figure 39. Reactive power flow and losses in the branches of the system for case 5.

Figure 40. Voltage profile improvement for case 5.

Table 16 Active and reactive power balance for case 5.

Case-6

The proposed algorithms have been implemented for 30 individual runs to address the OPF problem with the objective function of case 6 (minimization of the active transmission power losses). The best value of the active power losses obtained in each run is recorded and presented in Fig. 41. A statistical study has been conducted and the results are listed in Table 17. A boxplot based on the 30 values of the active power losses is shown in Fig. 42. In this case as well, the GBO optimization technique provided the best performance compared with the others. The minimum value of the active power losses obtained by GBO is 2.5819 MW, compared to 3.0994 MW for FCS, 2.9408 MW for SSA, 2.9773 MW for DCCS, 2.8983 MW for NGO, and 2.9273 MW for OFDA. The variation of the active power losses is provided in Fig. 43. The results of the optimization process for case 6 compared to the base case are provided in Table 18. The active power flow in the 41 branches is presented in Fig. 44a and the active power losses in each branch are shown in Fig. 44b. Similarly, the reactive power flow is presented in Fig. 45a and the reactive power losses in each branch are shown in Fig. 45b. The impact of the optimization process on the voltage profile of the PQ buses of the system is presented in Fig. 46. Finally, the active and reactive power balance based on the results of the six proposed algorithms is provided in Table 19.
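
The total active power loss minimised in this case can be evaluated from the power flow solution; the sketch below uses the standard branch-conductance expression (neglecting off-nominal transformer taps for simplicity), where the branch data and the bus voltages are assumed to come from the solved case.

```matlab
% Total active power loss via the standard branch-conductance expression (simplified sketch)
% fromBus, toBus: terminal bus indices of every branch;  gk: series conductance of branch k (p.u.)
% Vm, theta: bus voltage magnitudes (p.u.) and angles (rad) from the power flow solution
Ploss = 0;
for k = 1:numel(fromBus)
    i = fromBus(k);  j = toBus(k);
    Ploss = Ploss + gk(k)*(Vm(i)^2 + Vm(j)^2 - 2*Vm(i)*Vm(j)*cos(theta(i) - theta(j)));
end
Ploss_MW = Ploss * baseMVA;   % convert to MW (baseMVA = 100 for the IEEE 30-bus system)
```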

Figure 41. Variation of the objective function (Ploss) over the 30 runs for case 6.

Table 17 Statistical results for case 6.

Figure 42. Boxplot for the results of the objective function of case 6.

Figure 43. Variation of active power losses of case 6.

Table 18 Optimization results for case 6.

Figure 44. Active power flow and losses in the branches of the system for case 6.

Figure 45. Reactive power flow and losses in the branches of the system for case 6.

Figure 46. Voltage profile improvement for case 6.

Table 19 Active and reactive power balance for case 6.

Case-7

The proposed algorithms have been implemented for 30 individual runs to address the OPF problem with the objective function of case 7 (minimization of the reactive transmission power losses). The best value of the reactive power losses obtained in each run is recorded and presented in Fig. 47. A statistical study has been conducted and the results are listed in Table 20. A boxplot based on the 30 values of the reactive power losses is shown in Fig. 48. In this case as well, the GBO optimization technique provided the best performance compared with the others. The minimum value of the reactive power losses obtained by GBO is − 24.2129 MVar, compared to − 23.1073 MVar for FCS, − 23.7994 MVar for SSA, − 23.8660 MVar for DCCS, − 24.0835 MVar for NGO, and − 24.0475 MVar for OFDA. The variation of the reactive power losses is provided in Fig. 49. The results of the optimization process for case 7 compared to the base case are provided in Table 21. The active power flow in the 41 branches is presented in Fig. 50a, and the active power losses in each branch are shown in Fig. 50b. Similarly, the reactive power flow is presented in Fig. 51a, and the reactive power losses in each branch are shown in Fig. 51b. The impact of the optimization process on the voltage profile of the system's PQ buses is presented in Fig. 52. Finally, the active and reactive power balance based on the results of the six proposed algorithms is provided in Table 22.

Figure 47. Variation of the objective function over the 30 runs for case 7.

Table 20 Statistical results for case 7.

Figure 48. Boxplot for the results of the objective function of case 7.

Figure 49. Variation of reactive power losses of case 7.

Table 21 Optimization results for case 7.

Figure 50. Active power flow and losses in the branches of the system for case 7.

Figure 51. Reactive power flow and losses in the branches of the system for case 7.

Figure 52. Voltage profile improvement for case 7.

Table 22 Active and reactive power balance for case 7.

CASE 8: IEEE 118-bus test system (minimization of the fuel generation cost)

This case study investigates the optimization and minimization of the fuel generation cost in an extensive electrical grid, the IEEE 118-bus test system. This system features 54 generators, 9 transformers with tap-changing capability, 12 capacitors, and 2 reactors for voltage and power flow regulation48. The primary aim is to identify the most efficient control settings that reduce the fuel generation cost. Simulations were carried out to assess the effectiveness of the various optimization techniques (FCS, SSA, DCCS, GBO, NGO, and OFDA), with the results presented in Figs. 53, 54, 55 and 56. Figure 53, in particular, provides a comparative view of how quickly each method converges towards an optimal solution. The GBO method yielded the lowest fuel cost, amounting to 135,803.19 $/h, outperforming the other algorithms, and its convergence curve in Fig. 53 underscores its robust convergence characteristics. The detailed results are given in Table 23, which outlines the optimization findings for the IEEE 118-bus test system, including the control variable values, fuel cost, voltage deviation, voltage stability index (Lmax), active power losses, and reactive power losses, compared to the base case. The active power flow in all branches is presented in Fig. 54a and the active power losses in each branch are shown in Fig. 54b. Similarly, the reactive power flow is presented in Fig. 55a and the reactive power losses in each branch are shown in Fig. 55b. The impact of the optimization process on the voltage profile of the PQ buses of the system is presented in Fig. 56. Finally, the active power balance based on the results of the six proposed algorithms is provided in Table 24.

Figure 53. Variation of the fuel generation cost for case 8; IEEE 118-bus test system.

Figure 54. Active power flow and losses in the branches of the system for case 8; IEEE 118-bus test system.

Figure 55. Reactive power flow and losses in the branches of the system for case 8; IEEE 118-bus test system.

Figure 56. Voltage profile improvement for case 8; IEEE 118-bus test system.

Table 23 Optimization results for case 8, IEEE 118-bus test system.
Table 24 Active power balance for case 8, IEEE 118-bus test system.

Conclusion

In this work, the Fast Cuckoo Search (FCS), Salp Swarm Algorithm (SSA), Dynamic Control Cuckoo Search (DCCS), Gradient-Based Optimizer (GBO), Northern Goshawk Optimization (NGO), and Opposition-based Flow Direction Algorithm (OFDA) are suggested to address the OPF problem. To evaluate the performance of the suggested strategies on the IEEE 30 bus test system, seven OPF formulations with various objectives and constraints are taken into account. The outcomes of the examined scenarios show that (i) the suggested GBO technique is highly efficient and robust when compared to other widely recognized methods, and (ii) the suggested approach can be applied to a wide range of cases with multifaceted objective functions, security constraints, restricted zones, and various test systems. Moreover, a case study exploring the optimization and minimization of the fuel generation cost in a large-scale electrical grid, the IEEE 118-bus test system, has been presented; its results also prove the superiority of the GBO algorithm. Given the promise and excellent qualities of the suggested GBO, it is recommended that a multi-objective algorithm based on GBO be developed and used to address multi-objective OPF problems.