1 Introduction

Real-world optimization problems are difficult, and they are becoming increasingly complex as the number of variables, dimensions, time complexity, space complexity, etc., grows. One of the best choices for dealing with such challenging optimization problems is to use meta-heuristic algorithms (MAs) [32]. This is due to the stochastic nature of MAs, which makes them less likely to stagnate in local optima and gives them high exploratory capability [28]. MAs are less expensive and more efficient than traditional optimization algorithms. MAs use derivative-free mechanisms to explore search spaces and find the optimum solution [30]. Population-based MAs, which maintain multiple solutions, are preferable because they can explore the search space of the optimization problem effectively [20].

Nowadays, MAs are applied to various technical problems such as design and structural optimization. Usually, these algorithms start from a set of randomly generated solutions and repeatedly evaluate the objective function until the global optimum is obtained [14]. Broadly speaking, MAs are divided into two categories, viz. meta-heuristics with a single solution (MSS) and population-based meta-heuristics (PBM) [17]. In MSS methods [such as simulated annealing (SA) [21], tabu search (TS) [13] and variable neighbourhood search (VNS) [31]], a single search agent performs the search operations; in contrast, a group of search agents is involved in PBM algorithms. In PBM optimization algorithms, each solution's position is updated using both individual and social information, and the search space can be examined by many solutions simultaneously. Therefore, PBM algorithms generally outperform MSS methods. In the literature, PBM algorithms, and nature-inspired meta-heuristics in general, are classified into five types: evolutionary algorithms (EA), swarm intelligence (SI), physics- and chemistry-based algorithms, human-based algorithms and math-based algorithms [1, 8, 27]. The categorization of meta-heuristic algorithms is shown in Fig. 1.

Fig. 1
figure 1

Classification of meta-heuristics

New, advanced MAs are introduced to reach the optimal value of complicated optimization problems as quickly as possible and to achieve a robust optimization process [43, 48]. Every optimization problem has a specific optimal solution called the global optimum [33]. At the same time, the solutions obtained by optimization algorithms are not necessarily globally optimal; they are nevertheless acceptable if they come very close to the overall best solution. Such solutions are called quasi-optimal solutions [5, 6]. Using this idea, we can say that one optimization algorithm is better than another on a particular problem if its quasi-optimal solution is closer to the global optimum [7]. This is a new direction of research for developing optimization algorithms. The question is whether there is still a need for new optimization algorithms in light of the numerous MAs already published in the scientific literature. As stated by the No Free Lunch theorem [46], no single optimization algorithm can find globally optimal solutions to all existing optimization problems. This explains why an MA can solve one set of optimization problems but not another. Therefore, no optimization algorithm can solve all existing optimization problems, which is another reason for developing novel optimization algorithms that produce acceptable quasi-optimal solutions. Every year, many new nature-inspired optimization algorithms (NIOAs) enter the state of the art, so improving their efficiency and robustness is necessary to give them a chance to survive.

The prairie dog optimization algorithm (PDO) [11] is a recent nature-inspired population-based meta-heuristic algorithm, first introduced by Ezugwu et al. in 2022. In PDO, four types of prairie dog movement are used to carry out the exploration and exploitation processes of optimization. The foraging and burrow-building actions of prairie dogs are utilized for exploring the decision variables in the search space, while the exploitation process is driven by the communication signals of prairie dog language. PDO has the advantages of the highly efficient Lévy flight random walk for producing new prairie dog positions, a unique updating process with special attributes such as digging strength and predator effect, and an effective division of labour. However, the results of the original PDO experiments and comparisons with alternative algorithms raise some issues. The PDO algorithm frequently suffers from premature convergence. Its run-time is long due to the cumulative sum in the random walk. There is an imbalance between the exploration and exploitation processes, and the algorithm lacks population diversity. It therefore needs modifications to enhance its overall performance, so we propose an improved design of PDO in this paper.

The opposition-based learning (OBL) strategy was first introduced by Tizhoosh [44].

Fig. 2
figure 2

Determining the optimal estimates \(x\) and \(x^{O}\) for a one-dimensional function with initial boundary \([P_1, Q_1]\) using the concept of the OBL strategy, where T is the iteration count

It is a powerful learning strategy that can enhance the searching power of MAs and other optimization algorithms [23]. The concept of the OBL strategy is illustrated in Fig. 2. In the literature, the overall performance of many algorithms has been improved using the concept of OBL [10, 18, 34]. OBL is used to increase the accuracy and convergence rate of MAs. Generally, MAs initialize the starting population randomly or on the basis of search-domain knowledge. When such knowledge is missing, MAs may fail to reach the ideal solution. In this situation, OBL can help by also searching in the opposite direction of the current solution in the search space, which strengthens exploitation. There are many different formulations of OBL in the literature: type-I opposition, type-I quasi-opposition, type-I super-opposition, type-II opposition, generalized OBL, quasi-reflection OBL, centre-based sampling, and OBL using the current optimum [37]. The majority of OBL-based optimization approaches described in the literature aim at a potential opposite point using deterministic methods, which might not be enough to improve the exploitation process. In [47], the authors presented a dynamic opposite learning (DOL)-based version of the TLBO algorithm that outperformed the original algorithm in terms of convergence rate, solution quality and balance between exploration and exploitation. Later, the authors of [3, 9, 38] also utilized the DOL concept to enhance the performance of MAs such as AO, the MFO algorithm and WOA. In our current study, we introduce a version of the DOL strategy that enhances the population diversity of our proposed algorithm (E-PDO) and reduces the chance of falling into a locally optimal solution.

The novelty and contribution of this work is the improvement of the recent optimization method PDO. An improved design of PDO is proposed with a modified random flight path and a dynamic opposite learning strategy. The various steps of the proposed refinement of PDO are described and modelled mathematically. A set of 33 objective functions, including uni-modal and multi-modal functions, was used to test the effectiveness of the proposed enhanced PDO. Furthermore, the performance of E-PDO is compared with well-known optimization algorithms: GWO [30], MFO [25], ALO [26], WOA [29], DA [27], SCA [28] and RSA [1]. The contributions of the present study are summarized as follows:

  • A modified Lévy flight is proposed to perform a random search for the improved version of the standard PDO algorithm.

  • A DOL strategy is incorporated into the existing PDO algorithm. The DOL strategy serves two key functions: effective population initialization and efficient generation jumping. This helps improve the diversification ability of the new PDO version, the E-PDO algorithm.

  • The performance of the proposed enhanced PDO algorithm is evaluated on a set of 33 benchmark functions consisting of uni-modal, multi-modal, uni-modal fixed-dimensional and multi-modal fixed-dimensional functions. The proposed E-PDO algorithm is also employed to solve five engineering design problems.

  • A statistical mean rank test and the novel statistical multi-criteria analysis method MARCOS are used to evaluate how well the suggested method performs.

1.1 Paper structure

The rest of the paper is structured as follows. Section 2 presents the inspiration behind the original PDO together with its complete mathematical implementation. Section 3 describes the Lévy flight concept and the DOL strategy, including their mathematical templates. The proposed E-PDO algorithm is explained in Sect. 4. Section 5 presents the experimental setup, the test work, results and discussion, and a statistical analysis of the experimental outcomes; the use of E-PDO to solve constrained optimization problems is demonstrated in Sect. 5.6. Section 6 provides conclusions on the current work.

2 Prairie dog optimization algorithm

Prairie dogs are ground squirrels (Fig. 3). They are unique in their foraging and communication abilities. The prairie dog optimization (PDO) algorithm is based primarily on the foraging movements, burrowing activities, communication skills and predator-avoidance activities of prairie dogs belonging to the same coterie [11].

Fig. 3
figure 3

Burrow entrance of a colony of prairie dogs

2.1 Algorithm assumptions

The assumptions of the prairie dog optimization (PDO) algorithm can be summarized as follows:

  • The algorithm assumes that all prairie dogs within a colony are identical regarding their capabilities and behaviours. This assumption allows for a standardized approach to modelling the optimization process.

  • Foraging for food and building burrows are the main activities of prairie dogs. These actions are essential for their survival and are mimicked in the optimization algorithm.

  • Prairie dogs build their burrows around food sources. This relationship is incorporated into the algorithm, where the solution space exploration is initiated based on the location of the food source.

  • Different coteries (subgroups) have their boundary limits within a prairie dog colony. Each coterie is responsible for foraging and burrowing activities within its designated boundary. This helps distribute the optimization process and promotes exploration within different regions.

  • The algorithm considers the importance of the latest burrow position in enhancing coterie operations. This implies that the previous actions and experiences of the prairie dogs influence their future exploration and exploitation strategies.

  • Prairie dogs communicate using distinct sounds that convey specific information. These sounds can relate to various scenarios, such as food availability and predator threats. The ability to communicate plays a crucial role in the prairie dogs’ survival and adaptation to predators.

  • Prairie dogs exhibit different responses to various predators. For example, they may hide in response to a message indicating the presence of a predator within the range of hawks while continuing to observe from their burrows in other situations. This adaptability to different methods of predation is incorporated into the algorithm.

  • Specific sounds emitted by prairie dogs trigger movement and response patterns within different coteries. These cycles of movement and response contribute to the exploration and exploitation phases of the PDO algorithm.

By incorporating these assumptions from the behaviour and characteristics of prairie dogs, the PDO algorithm attempts to mimic their foraging and burrowing behaviour to solve optimization problems.

2.2 Implementations of basic PDO

This section covers the mathematical formulation of the basic PDO algorithm: initialization, fitness function evaluation, exploration, and exploitation. The prairie dog population forms the search agents, and prairie dog locations are candidate solutions of the algorithm. Like other PBM algorithms, PDO uses random initialization. All necessary indexes and parameters are specified in Table 1.

Table 1 Nomenclature for proposed algorithm E-PDO

2.2.1 Initialization

Suppose each of the M coteries in the colony contains N prairie dogs. In this stage, the search space is filled with N prairie dogs at random within the specified boundary [LB, UB], where LB signifies the lowest value of the boundary of the associated variable of the test problem and UB signifies the highest value.

The positions of all coteries within the colony are represented by a coterie matrix (C) in Eq. (1):

$$C = \begin{bmatrix} C_{1,1} & C_{1,2} & \cdots & C_{1,d}\\ C_{2,1} & C_{2,2} & \cdots & C_{2,d} \\ \vdots & \vdots & C_{i,j} & \vdots \\ C_{M-1,1} & C_{M-1,2} & \cdots & C_{M-1,d}\\ C_{M,1} & C_{M,2} & \cdots & C_{M,d} \end{bmatrix}$$
(1)

\(C_{i,j}\) signifies the \(j^{th}\) dimension of the \(i^{th}\) coterie within the colony.

The locations of all the prairie dogs in a coterie are given by the prairie dog matrix (P) in Eq. (2):

$$P = \begin{bmatrix} P_{1,1} & P_{1,2} & \cdots & P_{1,d}\\ P_{2,1} & P_{2,2} & \cdots & P_{2,d} \\ \vdots & \vdots & P_{i,j} & \vdots \\ P_{N-1,1} & P_{N-1,2} & \cdots & P_{N-1,d} \\ P_{N,1} & P_{N,2} & \cdots & P_{N,d} \end{bmatrix}$$
(2)

\(P_{i,j}\) signifies the \(j^{th}\) dimension of the \(i^{th}\) prairie dog's location in a coterie.

Equations (3) and (4) are used to assign each coterie and prairie dog location.

$$C_{i,j} = rand(0,1) \times (UB_{j}-LB_{j}) + LB_{j}, \quad i = 1,2,\cdots ,M;~ j = 1,2,\cdots ,d$$
(3)
$$P_{i,j} = rand(0,1) \times (ub_{j}-lb_{j}) + lb_{j}, \quad i = 1,2,\cdots ,N;~ j = 1,2,\cdots ,d$$
(4)

where \(N\le M\), \(ub_{j} = \frac{UB_{j}}{M}\) and \(lb_{j} = \frac{LB_{j}}{M}\), and rand(0,1) is a uniformly distributed random number between 0 and 1.
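
As a concrete illustration, the following Python sketch initializes the coterie and prairie dog matrices according to Eqs. (3) and (4). It is a minimal sketch assuming NumPy; the helper name `initialize_pdo` and the matrix shapes (M × d for C, N × d for P, following Eqs. (1) and (2)) are our own choices, not code from the paper.

```python
import numpy as np

def initialize_pdo(N, M, d, LB, UB, seed=None):
    """Random initialization of the coterie matrix C (M x d) and the
    prairie dog matrix P (N x d), following Eqs. (3)-(4)."""
    rng = np.random.default_rng(seed)
    LB, UB = np.asarray(LB, float), np.asarray(UB, float)
    C = rng.random((M, d)) * (UB - LB) + LB   # Eq. (3)
    lb, ub = LB / M, UB / M                   # scaled per-coterie bounds
    P = rng.random((N, d)) * (ub - lb) + lb   # Eq. (4)
    return C, P

# Example: 30 prairie dogs, 6 coteries, 10-dimensional problem in [-100, 100]
C, P = initialize_pdo(N=30, M=6, d=10, LB=[-100] * 10, UB=[100] * 10, seed=1)
```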

2.2.2 Evaluation function assessment

The evaluation function value of each prairie dog location in a coterie is computed by passing the location to the fitness function (fit(P)) in Eq. (5). At each iteration, the fitness values of all prairie dog locations are calculated and stored as an (N \(\times \) 1) matrix:

$$fit(P) = \begin{bmatrix} fit_{1}(\begin{bmatrix} P_{1,1} & P_{1,2} & \cdots & P_{1,d} \end{bmatrix})\\ fit_{2}(\begin{bmatrix} P_{2,1} & P_{2,2} & \cdots & P_{2,d} \end{bmatrix})\\ \vdots \\ fit_{N}(\begin{bmatrix} P_{N,1} & P_{N,2} & \cdots & P_{N,d} \end{bmatrix}) \end{bmatrix}$$
(5)

Each evaluation function’s fitness value based on the optimal location of the prairie dog reflects the food source quality, its ability to make new burrows, and its successful response to predators.

2.2.3 Exploration

In PDO, the location of a prairie dog is a candidate solution, and the best solution at each stage is regarded as both the best forager and the best responder to a predator alert. Exploration and exploitation in the PDO algorithm are accomplished through four strategies, applied in four iteration phases: the two exploration strategies are applied for \(0<t<\frac{T}{4}\) and \(\frac{T}{4}\le t<\frac{T}{2}\), and the two exploitation strategies for \(\frac{T}{2}\le t<\frac{3T}{4}\) and \(\frac{3T}{4}\le t<T\).

Burrow building is important for prairie dogs to protect themselves from environmental threats and predators. When their food source runs out, they start looking for new food sources and build new burrows around them. The initial step of the exploration phase is the movement of coterie members away from the ward in search of new food sources. A random walk can mimic this behaviour of the prairie dog, and its long jumps make the search for food sources effective. When a prairie dog finds a food source, it makes a unique sound to alert the other members; they then assess the food source quality, select the best forager, and build new burrows based on the food source quality. During this exploration phase of the algorithm, the position update is given by Eq. (6).

$$P_{(i+1,j+1)}^{new} = P_{(1,j)}^{Best} - EC_{(i,j)}^{Best} \times \epsilon A - CP_{(i,j)} \times LF \quad \forall ~ t < \frac{T}{4}$$
(6)

The random cumulative effect of all the colony's prairie dogs, CP, is given by Eq. (7).

$$CP_{(i,j)} = rand \times \frac{P_{(i,j)}^{Best}-P_{(randi([1~N]),j)}}{P_{(i,j)}^{Best} + \epsilon }$$
(7)

\(EC^{Best}\) measures the effectiveness of the currently obtained optimal solution, as shown in Eq. (8).

$$EC_{(i,j)}^{Best} = P_{(i,j)}^{Best} \times \left( \tau + \frac{P_{(i,j)} - mean(P_{(N,M)})}{P_{(i,j)}^{Best} \times (UB_{j} - LB_{j}) + \epsilon } \right)$$
(8)

The second strategy evaluates the quality of previously found food sources together with the digging strength. The digging strength decreases as the iterations progress, and new burrows are built according to it; this helps limit the number of burrows that can form. Eq. (9) provides the position update for burrow building.

$$P_{(i+1,j+1)}^{new}=P_{(1,j)}^{Best} \times RP \times Ds \times LF \quad \forall ~ \frac{T}{4}\le t < \frac{T}{2}$$
(9)

The coterie's digging strength is denoted by Ds; it depends on the quality of the food source and takes random values as defined by Eq. (10).

$$Digging~strength,~ Ds = 1.5 \times randn \times \left( 1 - \frac{t}{T} \right)^{\frac{2t}{T}} \times Bool$$
(10)

2.2.4 Exploitation

Prairie dogs are capable of producing special sounds for different situations and can differentiate between them, from signals about food source quality to warnings about predators. Each prairie dog has an equal ability to respond to these scenarios. Their communication signals help them handle predator-induced fear and also help them fulfil their nutritional requirements. Once a signal reports that a food source is of good quality and safe, the prairie dogs converge on the signal source to meet their nutritional requirements. When a communication signal identifies a predator, the prairie dogs in the predator's path hide while the other dogs watch from their burrows.

Two specific behaviours, the food alarm and the anti-predation alarm, cause prairie dogs to converge on specific places (promising regions, in PDO terms), where further searches (exploitation) are conducted to find a better or near-optimal solution. The purpose of the exploitation mechanism used in PDO is to focus the search on promising areas identified during the exploration phase. This phase is implemented according to the following equations:

$$P_{(i+1,j+1)}^{new} = P_{(1,j)}^{Best} - EC_{(i,j)}^{Best} \times \epsilon - CP_{(i,j)} \times rand \quad \forall ~ \frac{T}{2} \le t < \frac{3T}{4}$$
(11)
$$P_{(i+1,j+1)}^{new} = P_{(1,j)}^{Best} \times Pe \times rand \quad \forall ~ \frac{3T}{4} \le t < T$$
(12)

where the predator effect (Pe) is defined as

$$Pe = 1.5 \times \left(1-\frac{t}{T}\right)^{\frac{2t}{T}} \times Bool$$
(13)

The PDO algorithm benefits from the highly effective Lévy flight random walk for generating new prairie dog positions, a particular updating process with distinctive characteristics such as digging strength and predator effect, and an efficient division of labour. Nevertheless, the original PDO experiments and comparisons with other algorithms raise some concerns. The PDO algorithm frequently experiences premature convergence. The cumulative sum of the random walk causes its lengthy run-time. The exploration and exploitation processes are not balanced, and the lack of population diversity presents problems. Those challenges have therefore been the focus of our present work.
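
To make the four-phase schedule concrete, the Python sketch below (our own illustration, not code from the paper) shows how the iteration counter selects a strategy and how the decaying envelopes of Eqs. (10) and (13) are computed. Treating Bool as a random 0/1 indicator is an assumption on our part, since its exact definition lives in Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def digging_strength(t, T):
    # Eq. (10); Bool is taken here as a random 0/1 indicator (our assumption).
    return 1.5 * rng.standard_normal() * (1 - t / T) ** (2 * t / T) * rng.integers(0, 2)

def predator_effect(t, T):
    # Eq. (13): the same decaying envelope, without the Gaussian factor.
    return 1.5 * (1 - t / T) ** (2 * t / T) * rng.integers(0, 2)

def strategy(t, T):
    # The four iteration quarters of Sects. 2.2.3 and 2.2.4.
    if t < T / 4:
        return "foraging exploration, Eq. (6)"
    if t < T / 2:
        return "burrow-building exploration, Eq. (9)"
    if t < 3 * T / 4:
        return "food-alarm exploitation, Eq. (11)"
    return "predator-alarm exploitation, Eq. (12)"
```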

3 Foundational framework for advancing prairie dog optimization (PDO) algorithm

3.1 Concepts of Lévy flight

A path consisting of a succession of randomly chosen steps in a mathematical space like integers is referred to as a random walk. This mathematical concept is also known as a stochastic process or random process. This is the common basis for the perturbations of the solution [51]. It is the movement of a particle, which can jump to a neighbouring node in the search space, starting at 0 and taking \(+1\) or \(-1\) steps with a given probability each time it moves. Using a normal distribution with a mean of zero and a variance of one, we can calculate the step size \(S_{i}\) of the random walk.

$$S_{i} \sim \frac{1}{\sqrt{2\pi }} e^{-\frac{x^2}{2}}$$
(14)

The current solution is represented here by x and \(\sim \) is used to denote the drawing of random numbers according to the distribution. Equation 15 gives the total of successive random walking steps (\(RW_{n}\)).

$$RW_{n} = \sum_{i = 1}^{n} S_{i} = S_{1} + S_{2} + \cdots + S_{n}$$
(15)

This is the same as given in the following Eq. (16).

$$RW_{n} = RW_{n-1} + S_{n}$$
(16)

The new state depends only on the existing state (\(RW_{n-1}\)) and the transition step (\(S_{n}\)). This random walk is also called Brownian motion or a diffusion process. The search performance of a nature-inspired optimization algorithm (NIOA) depends on the probability distribution of the forager's flight lengths, which governs how target sites in the search space are found [50].
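
A minimal NumPy sketch of Eqs. (14) to (16): each step is drawn from N(0, 1) and the walk is their cumulative sum, which is precisely the cumulative-sum operation that the authors identify as a source of PDO's long run-time.

```python
import numpy as np

rng = np.random.default_rng(42)
steps = rng.standard_normal(1000)  # S_i ~ N(0, 1), Eq. (14)
walk = np.cumsum(steps)            # RW_n = RW_{n-1} + S_n, Eqs. (15)-(16)
```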

Random walks provide more randomness in both the exploration and exploitation processes of NIOAs. For many years, the standard model for characterizing non-directed animal motion was based on Brownian motion [15, 19, 45]. In this setting, the flight path of an animal in space is assumed to consist of a series of random directional motion steps drawn from a Gaussian distribution [36]. The Lévy flight model is also used to analyse animal movements: Shlesinger et al. [40] first suggested that the movement styles of some organisms could exhibit Lévy flight characteristics. Strictly speaking, these are motion styles of the Lévy walk, because the movement is continuous (usually of constant speed) rather than consisting of individual jumps. Lévy flight involves linear motion in random directions.

Fig. 4
figure 4

A trajectory of Lévy flight in a two-dimensional plane

Lévy flights enable efficient random resource searches in uncertain environments [50]. Their step lengths follow the simple power law \(L(x) \sim |x|^{-1-\beta }\), where \(0<\beta \le 2\) is an index. The Lévy probability distribution can be expressed as

$$L(x, \mu , \gamma ) = \frac{\sqrt{\gamma /(2\pi )}}{(x-\mu )^{3/2}} e^{-\frac{\gamma }{2(x-\mu )}}, \quad x > \mu$$
(17)

where \(\mu > 0\) is the location parameter of the search path and \(\gamma \) is the scale factor.

The following is a definition of the Lévy probability function’s special case:

$$L(x, \mu , \gamma ) \approx \frac{\sqrt{\gamma /(2\pi )}}{x^{3/2}}, \quad x \rightarrow \infty$$
(18)

For the more general case of the exponent \(\beta \), we can define the Lévy distribution using the integral

$$L(x) = \frac{1}{\pi } \int_{0}^{\infty } \cos (kx) \, e^{-\alpha |k|^{\beta }} \, dk, \quad 0 < \beta \le 2$$
(19)

where \(\alpha > 0\) is a scale parameter. If \(\beta = 1\), Eq. (19) reduces to a Cauchy distribution, and when \(\beta = 2\) it obeys a normal distribution. The inverse integral can only be evaluated analytically for large x, for which

$$L(x) \rightarrow \frac{\alpha \beta \Gamma (\beta ) \sin (\frac{\pi \beta }{2}) }{\pi | x |^{1 + \beta }}, \quad x \rightarrow \infty$$
(20)

where \(\Gamma (\beta )\) is the Gamma function.

From an implementation standpoint, generating random numbers for Lévy flights involves two steps [49]: choosing a random direction and generating a step length from the chosen Lévy distribution. The direction should be drawn from a uniform distribution, but generating the step is more challenging. Several methods are available, but one of the most effective and straightforward techniques for the symmetric stable Lévy distribution is the Mantegna algorithm [24]. Symmetric here means that both positive and negative steps occur.

The following equation can be used to calculate the Mantegna algorithm’s step length (S).

$$S = \frac{u}{|v|^{1/\beta }}$$
(21)

where u and v are drawn from normal distributions,

$$u \sim N(0,\sigma _{u}^{2}); \quad v \sim N(0,\sigma _{v}^{2})$$
(22)
$$\text {where}~ \sigma _{u} = \left[ \frac{\Gamma \left( 1 + \beta \right) \times \sin (\frac{\pi \beta }{2})}{\Gamma \left( \frac{1 + \beta }{2} \right) \times \beta \times 2^{\frac{\beta - 1}{2}}} \right]^{\frac{1}{\beta }} ~\text {and}~ \sigma _{v} = 1.$$
(23)

Figure 4 shows an illustration of a Lévy flight in a two-dimensional plane. It highlights the divergence of the motion, which is the most crucial aspect of Lévy flight; in some circumstances, this can considerably improve a search algorithm's efficiency.
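
The Mantegna step of Eqs. (21) to (23) condenses into a few lines of Python. The following is a minimal sketch (the helper `levy_step` is hypothetical, not code from the paper):

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=None):
    """Symmetric Levy-stable step via Mantegna's algorithm, Eqs. (21)-(23)."""
    rng = rng or np.random.default_rng()
    sigma_u = ((gamma(1 + beta) * sin(pi * beta / 2))
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)  # u ~ N(0, sigma_u^2)
    v = rng.normal(0.0, 1.0, size)      # v ~ N(0, 1), i.e. sigma_v = 1
    return u / np.abs(v) ** (1 / beta)  # S = u / |v|^(1/beta), Eq. (21)
```

With beta = 1.5, most generated steps are small but occasional large jumps occur, reproducing the kind of trajectory shown in Fig. 4.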

3.2 Concepts of dynamic opposite learning strategy (DOL)

3.2.1 Opposition-based learning (OBL)

To improve the search power of algorithms, the OBL (opposition-based learning) strategy is one of the best options for efficient learning. The optimization process of meta-heuristics is improved by this feature, which speeds up convergence. The conceptual idea of two strategies, namely the opposite number and opposite point, is identified in the following way.


Opposite number. Let X be in the interval [a, b] (\( X \in [a,b]\)). Then, the opposite number \(X^{O}\) can be defined as in Eq. (24):

$$X^{O} = a + b - X$$
(24)

where a and b are the lower and upper bounds of the search region, respectively.


Opposite point. In practical applications, X can be a point of a multi-dimensional problem. Let X be a point in D-dimensional space with \(X_{j} \in [a_{j},b_{j}]\) for \(j = 1,\ldots ,D\). The multidimensional opposite point is characterized as

$$X_{j}^{O} = a_{j} + b_{j} - X_{j}$$
(25)
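
Eqs. (24) and (25) amount to a one-line reflection. The following sketch (our own illustrative helper) computes the opposite point for a vector of bounds:

```python
import numpy as np

def opposite_point(X, a, b):
    """Opposite point of Eqs. (24)-(25): X^O_j = a_j + b_j - X_j."""
    return np.asarray(a) + np.asarray(b) - np.asarray(X)

# e.g. X = [2, -1] with a = [0, -5], b = [10, 5] gives X^O = [8, 1]
```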

3.2.2 Dynamic opposite learning strategy (DOL)

By considering opposite numbers, the OBL strategy can provide a solution close to the global optimum. Quasi-opposition-based learning (QOBL) expands the search to the region between the centre of the interval and the opposite point, from which quasi-opposite numbers are drawn. Quasi-reflection-based learning (QRBL), in turn, selects quasi-reflection numbers from the region between the current solution and the centre of the interval. OBL, QOBL and QRBL all share the same weakness: if a local optimum lies between the current solution and its opposite solution, the search will converge to that local optimum. The dynamic opposite learning (DOL) strategy helps circumvent this problem and increases the chance of converging to the global solution. The DOL approach was initially put forth by Xu et al. [47] for the TLBO algorithm.

Fig. 5
figure 5

The symmetric space of DOL strategy

Inspired by QOBL and QRBL, the search space of OBL can be expanded dynamically to get closer to the optimum solution. This is displayed in Fig. 5, where the search space lies between a and b, X is the current position, \(X^{O}\) is its opposite number, and \(X^{S} = X + rand \times (X^{O}- X), ~ rand \in [0,1]\) is the same point mapped to a symmetric new location. Although this technique increases the likelihood of obtaining the global solution, the search can still be drawn towards a local optimum lying between the current position and its opposite number. In this regard, we consider the DOL strategy to avoid local optimum solutions. For this, we need a random opposite number corresponding to \(X^{O}\), defined as

$$X^{RO} = rand \times X^{O}, \quad rand \in [0,1]$$
(26)

\(X^{O}\) is replaced by \(X^{RO}\), which broadens the search area and converts the symmetric search area into an asymmetric search area. This is shown in Figs. 6 and 7. This helps create a dynamic search space where the algorithm can avoid reaching a local optimum.

Fig. 6
figure 6

The asymmetric space of DOL strategy

Fig. 7
figure 7

The asymmetric space of DOL strategy

3.3 Mathematical templates for DOL

The DOL strategy can be implemented using the following mathematical model.


Dynamic opposite number. The dynamic opposite number \(X^{DO}\) is defined by Eq. (27).

$$X^{DO} = X + \omega \times rand \times (X^{RO} - X)$$
(27)

where X is a real number, \(X \in [a,b]\), with a and b denoting the lower and upper boundaries of X; rand is a random number in (0, 1); and \(\omega \) is a weighting factor.


Dynamic opposite point. Considering a point \((X_{1}, X_{2},\cdots , X_{D})\) in D-dimensional space with \(X_{j} \in [a_{j},b_{j}]\), the dynamic opposite point is defined by Eq. (28), where \(a_{j}\) and \(b_{j}\) are the lower and upper limits of the current search space.

$$X_{j}^{DO} = X_{j} + \omega \times rand \times (X_{j}^{RO} - X_{j})$$
(28)

DOL-based optimization. In D-dimensional space, consider a point \((X_{1},X_{2},\cdots ,X_{D})\) with \(X_{j} \in [a_{j},b_{j}]\), where \(a_{j}\) and \(b_{j}\) are the lower and upper boundaries of the variable \(X_{j}\). The dynamic opposite point \(X^{DO} = (X_{1}^{DO},X_{2}^{DO},\cdots ,X_{D}^{DO})\) is determined by Eq. (28), and X is updated accordingly. This step validates the \(X^{DO}\) value: \(X^{DO}\) is accepted if it has a better fitness value than X; otherwise, X remains unchanged.
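
A minimal sketch of Eqs. (26) to (28) together with the acceptance rule follows. Minimization, clipping to the bounds, and the helper name `dol_update` are our assumptions, not specifics from the paper.

```python
import numpy as np

def dol_update(X, a, b, fitness, omega=1.0, rng=None):
    """Dynamic opposite point (Eqs. 26-28) with greedy acceptance."""
    rng = rng or np.random.default_rng()
    X = np.asarray(X, float)
    a, b = np.asarray(a, float), np.asarray(b, float)
    X_o = a + b - X                                      # opposite point, Eq. (25)
    X_ro = rng.random(X.shape) * X_o                     # random opposite, Eq. (26)
    X_do = X + omega * rng.random(X.shape) * (X_ro - X)  # Eq. (28)
    X_do = np.clip(X_do, a, b)                           # keep inside the bounds
    return X_do if fitness(X_do) < fitness(X) else X     # accept only if better
```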

4 Proposed E-PDO

Exploration and exploitation are the two main phenomena of any optimization algorithm. Exploration is the broader search-space perspective that gives the algorithm control over the entire search space during a search operation; exploitation, conversely, means refining solutions in the local search space. Another important aspect is the balance between these two operations, and any optimization algorithm should respect all three aspects. PDO is a novel population-based meta-heuristic algorithm, but it is easy to see that it suffers from premature convergence, i.e. it has a high tendency to settle in a local optimum. It also suffers from poor population diversity and low accuracy in generating optimal solutions. These issues can result in a poor balance between exploration and exploitation.

To overcome the above difficulties, we propose an improved PDO algorithm (called E-PDO, for enhanced PDO) that integrates the DOL strategy. We also introduce a modified random walk that randomizes the locations of the prairie dogs. All details of these modifications are described in the subsections below: the modified Lévy flight and the DOL-based strategies for PDO are discussed in Sects. 4.1 and 4.2, respectively.

4.1 Improved random walk

Lévy flight is a highly efficient random walk for nature-inspired optimization algorithms. For the algorithm proposed here, we consider a new, improved version of Lévy flight, called m-Lévy (modified Lévy flight), to generate new prairie dog locations in the problem search space. We propose a modified step length S based on the Lévy flight concept of Sect. 3.1. It is given by

$$S = \frac{ w \times u \times \sigma _{u}}{|v|^{1/\beta }}$$
(29)

where w is a weight factor and u and v are drawn from normal distributions:

$$u \sim N(0,1); \quad v \sim N(0,1)$$
(30)
$$\text {where}~ \sigma _{u} = \frac{\Gamma \left( 1+\beta \right) \times \sin (\frac{\pi \beta }{2})}{\Gamma \left( \frac{1+\beta }{2}\right) \times \beta \times 2^{\frac{\beta -1}{2}}}$$
(31)

Here, we use the value w = 0.001 for our proposed E-PDO.

Fig. 8
figure 8

A trajectory of m-Lévy flight in a two-dimensional plane

Figure 8 shows an illustration of the m-Lévy flight in a two-dimensional plane. Because of the Lévy distribution, the trajectory consists of numerous small steps and a few long ones. In some circumstances, these large steps may greatly improve E-PDO's search efficiency compared with other MAs.
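
The m-Lévy step of Eqs. (29) to (31) differs from the Mantegna step of Sect. 3.1 in that u is drawn from N(0, 1), \(\sigma_{u}\) multiplies the numerator without the \(1/\beta\) exponent, and the result is damped by the weight w. A sketch (the default beta = 1.5 is an illustrative choice on our part):

```python
import numpy as np
from math import gamma, sin, pi

def m_levy_step(beta=1.5, w=0.001, size=1, rng=None):
    """m-Levy step, Eqs. (29)-(31); w = 0.001 in E-PDO, w = 1 in E-PDOUW."""
    rng = rng or np.random.default_rng()
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)) / (
        gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
    u = rng.normal(0.0, 1.0, size)
    v = rng.normal(0.0, 1.0, size)
    return w * u * sigma_u / np.abs(v) ** (1 / beta)   # Eq. (29)
```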

4.2 DOL-based strategies for PDO

To prevent premature convergence when solving complex real-world optimization problems, the proposed DOL-based idea is built into PDO to speed up convergence. DOL increases population diversity and thereby avoids falling into a local optimum. For the proposed algorithm, the DOL-based strategy is composed of two stages: population initialization using DOL and generation jumping with DOL.

4.2.1 Improved population initialization using DOL strategy

Let P be the initial population, generated randomly, and \(P^{DOL}\) the population generated by the DOL initial population generation strategy. For each prairie dog \(i=1,2,\cdots , N\) and each dimension \(j=1,2,\cdots , D\), the DOL population initialization is defined as:

$$P_{i,j}^{DOL} = P_{i,j} + Rand1_{i} \times \left( Rand2_{i} \times (UB_{j} + LB_{j} - P_{i,j}) - P_{i,j} \right)$$
(32)

where \(UB_{j}\) and \(LB_{j}\) are the upper and lower limits of the current search space, and \(Rand1_{i}\) and \(Rand2_{i}\) are random numbers in [0, 1]. The weight \(\omega \) is set to 1 and therefore omitted.

$$P_{i,j}^{DOL} = rand(LB_{j}, UB_{j}) \quad \text {if}~ P_{i,j}^{DOL} < LB_{j} ~\text {or}~ P_{i,j}^{DOL}>UB_{j}$$
(33)

\(P_{i,j}^{DOL}\) must lie in the range \([LB_{j}, UB_{j}]\). In the initialization phase, the fittest N individuals are selected from \((P \bigcup P^{DOL})\).
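
The two initialization equations combine into the following sketch (Python with NumPy; minimization and the helper name `dol_initialize` are our assumptions):

```python
import numpy as np

def dol_initialize(fitness, N, D, LB, UB, rng=None):
    """DOL population initialization, Eqs. (32)-(33): build P and P^DOL,
    then keep the N fittest members of their union."""
    rng = rng or np.random.default_rng()
    LB, UB = np.asarray(LB, float), np.asarray(UB, float)
    P = rng.random((N, D)) * (UB - LB) + LB
    r1, r2 = rng.random((N, 1)), rng.random((N, 1))
    P_dol = P + r1 * (r2 * (UB + LB - P) - P)            # Eq. (32), omega = 1
    out = (P_dol < LB) | (P_dol > UB)                    # out-of-range entries
    P_dol = np.where(out, rng.random((N, D)) * (UB - LB) + LB, P_dol)  # Eq. (33)
    union = np.vstack([P, P_dol])
    keep = np.argsort([fitness(x) for x in union])[:N]   # best of P U P^DOL
    return union[keep]
```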

4.2.2 Improved generation jumping using DOL strategy

The population can be updated using the DOL strategy, taking into account the jump rate (Jr). If, at any iteration t, a random selection probability is less than Jr, the DOL generation jumping procedure is carried out as

$$(P_{new})_{i,j}^{DOL} = (P_{new})_{i,j} + \omega \times Rand3_{i} \times (Rand4_{i} \times (UB_{j} + LB_{j} - (P_{new})_{i,j}) - (P_{new})_{i,j})$$
(34)

where \(Rand3_{i}\) and \(Rand4_{i}\) are random numbers. To make DOL efficient, the range must be checked in the same way as in Eq. (33), using the dynamic bounds

$$LB_{j} = \min_{i}(P_{i,j}), \quad UB_{j} = \max_{i}(P_{i,j})$$

In the DOL generation jumping phase, the fittest N individuals are again chosen from \((P \bigcup P^{DOL})\) to discover the optimum solution.
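
A corresponding sketch of the generation jumping step (Eq. 34) follows; the jump rate Jr = 0.3 is an illustrative value of our own, not one reported in the text.

```python
import numpy as np

def dol_generation_jump(P, fitness, Jr=0.3, omega=1.0, rng=None):
    """DOL generation jumping, Eq. (34), with dynamic bounds."""
    rng = rng or np.random.default_rng()
    if rng.random() >= Jr:                     # jump only with probability Jr
        return P
    LB, UB = P.min(axis=0), P.max(axis=0)      # dynamic per-dimension bounds
    N, D = P.shape
    r3, r4 = rng.random((N, 1)), rng.random((N, 1))
    P_dol = P + omega * r3 * (r4 * (UB + LB - P) - P)    # Eq. (34)
    out = (P_dol < LB) | (P_dol > UB)
    P_dol = np.where(out, rng.random((N, D)) * (UB - LB) + LB, P_dol)
    union = np.vstack([P, P_dol])
    keep = np.argsort([fitness(x) for x in union])[:N]   # best of P U P^DOL
    return union[keep]
```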

4.3 E-PDO algorithm steps

A new PDO variant called enhanced PDO (E-PDO) has been formulated by adding the modified random walk, the improved population initialization and the generation jumping using the DOL strategy. The steps of the E-PDO algorithm are provided in Algorithm E-PDO in Table 2, and the complete process is shown in the flow diagram in Fig. 9.

Table 2 Algorithm E-PDO
Fig. 9
figure 9

Flow diagram of the E-PDO algorithm

4.4 E-PDOUW

Here, we also consider E-PDO with the m-Lévy flight step length but with a weight value of \(w=1\). We call this variant E-PDOUW.

4.5 PDOL

The combination of the PDO algorithm, the DOL strategy and the original Lévy flight (based on Eqs. 21 to 23) is examined as PDOL.

4.6 The computational complexity of the E-PDO algorithm

Computational complexity is an important factor to consider when assessing an algorithm's performance. The complexity of E-PDO is determined by the number of prairie dogs (N), the number of dimensions (D) and the maximum number of iterations (T). The space complexity of the E-PDO algorithm hinges on two key parameters, N and D; it reflects the memory requirements, particularly during initialization, and can be succinctly captured as \(O(N \times D)\).

The time complexity is influenced by the population size (N), the problem dimension (D), the number of iterations (T) and the cost of function evaluations. It can be expressed as the sum of three main components: O(initialization), O(cost function evaluation) and O(updating strategy). As a whole, O(E-PDO) = O(initialization) + \(T \times \) (O(fitness evaluation of all prairie dogs) + O(generation updating of all prairie dogs with the new strategies)). Hence, the overall computational complexity of E-PDO is

$$O(\text{E-PDO}) = O((N \times D) + (T \times N \times D \times 4) + (T \times N \times D))$$
(35)

According to the convergence analysis, an algorithm combined with the DOL method achieves a faster convergence rate than conventional algorithms, and the DOL technique improves the E-PDO algorithm's ability to avoid local optima. The runtime study shows that E-PDO, which combines the m-Lévy random walk and the DOL approach, can greatly improve computational efficiency.


5 Experimental problems, results, and discussions

Various tests are conducted in this section to show how well E-PDO works and to verify its accuracy in solving global optimization problems. We compare the simulation results of the proposed E-PDO and the original PDO with seven other meta-heuristics, namely GWO, MFO, ALO, WOA, DA, SCA and RSA, on benchmark functions including uni-modal, multi-modal and fixed-dimensional functions.

Table 3 Uni-modal benchmark functions
Table 4 Multi-modal benchmark functions

5.1 Benchmark function

Thirty-three benchmark functions are chosen and split into three categories: uni-modal, multi-modal and fixed-dimensional benchmark functions [16]. These benchmark functions are used to test the proposed E-PDO algorithm. Except for the CEC-2017 functions, whose dimension is set to 30, all algorithms are compared using identical parameter values. Uni-modal test functions (F1 to F7) are defined in Table 3 and multi-modal test functions (F8 to F13) in Table 4. Tables 5 and 6 define the fixed-dimension multi-modal (F14 to F29) and uni-modal (F30 to F33) functions, respectively.

Table 5 Fixed-dimension multi-modal test functions
Table 6 Fixed-dimension uni-modal test functions

The uni-modal functions (F1 to F7) contain only one optimum, so they can be used to assess the exploitation potential of optimization methods: the meta-heuristic with the highest exploitation potential optimizes these functions best. The selected multi-modal functions (F8 to F13) and fixed-dimensional multi-modal functions (F14 to F29) have many local minima. Because solutions can become trapped in these local optima and fail to escape, such functions can be more challenging to solve than uni-modal ones. The complexity of multi-modal functions also increases with the number of dimensions, the size of the search domain and the number of local optima. Because solving them requires discovering new regions, these functions test an MA's exploration ability. The experimental outcomes are contrasted with those of the original PDO and seven other well-known algorithms, and the results are evaluated using the statistical tests described in the next section.

5.2 Experimental setup

To ensure consistency across all validation tests, we chose a population size of N = 30, a dimension of D = 10 and T = 1000 iterations. Each algorithm is run independently 30 times for each function, and results are rounded to two decimal places, to reduce statistical error and produce statistically significant output. Two measures are used to evaluate algorithm performance: the mean and the standard deviation. The best, worst, mean and standard deviation (SD) over the 30 runs are reported as the final experimental results. All tests are conducted using MATLAB R2020a on a computer running Windows 10 and equipped with an Intel(R) Core(TM) i7-4790 CPU clocked at 3.60 GHz and 8 GB of RAM. All required parameter values for each algorithm are listed in Table 7.

Table 7 Algorithm control parameters

5.3 Experimental results

Tables 8 and 9 show the mean and standard deviation obtained by E-PDO and the other seven algorithms on the uni-modal functions. The tables clearly show that, in comparison with the other algorithms, E-PDO attains the lowest values: it achieves the best results on F1, F2, F3, F4 and F7, and the fifth- and fourth-best outcomes on F5 and F6, respectively. Boldface indicates the best average results. It therefore stands to reason that our suggested method outperforms the others.

For the multi-modal functions F8-F13 (Tables 10, 11), E-PDO achieves the best results on F8-F11 but not on F12 and F13. For the fixed-dimension multi-modal test functions F14-F29 (Tables 12, 13, 14 and 15), E-PDO demonstrates its superiority over the other algorithms except on F18-F19 and F21-F23. For the fixed-dimension uni-modal test functions F30-F33, E-PDO gives the best values for F31.

In addition, comparing E-PDO and the other PDO versions on the CEC test problems (Tables 18 and 19), we conclude that E-PDO achieves better results than the original PDO algorithm. Tables 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 and 18 further show that, in terms of the average and standard deviation indices, our suggested E-PDO algorithm outperforms the other seven meta-heuristic algorithms. The outcomes indicate that, by combining the DOL technique and the modified random walk of prairie dogs, E-PDO can produce better solutions.

Table 8 Result of uni-modal test functions (F1–F7)(Dimension = 10)
Table 9 Result of uni-modal test functions (F1–F7)
Table 10 Result of multi-modal test functions (F8–F13) (Dimension = 10)
Table 11 Result of multi-modal test functions (F8–F13)
Table 12 Result of fixed-dimension multi-modal test functions (F14–F21)
Table 13 Result of fixed-dimension multi-modal test functions (F14–F21)
Table 14 Result of fixed-dimension multi-modal test functions (F22–F29)
Table 15 Result of fixed-dimension multi-modal test functions (F22–F29)
Table 16 Result of fixed-dimension uni-modal test functions (F30–F33)
Table 17 Result of fixed-dimension uni-modal test functions (F30–F33)
Table 18 Ranking and average ranking

5.4 Statistical analysis of experimental results

5.4.1 Performance indicators

Various statistical tools are used in this paper, namely the average value of the objective functions (\(Average_{F}\)) and the standard deviation (\(SD_{F}\)). Their mathematical formulations are given as follows.

The average measure, which can be expressed by Eq. 36, determines the average of the optimal values resulting from the algorithm and is assessed over a number of predefined runs.

$$Average_{F} = \frac{1}{R_{n}} \sum_{i=1}^{R_{n}} F_{i}$$
(36)

where \(R_{n}\) is the number of runs and \(F_{i}\) is the best result of run i.

The standard deviation (SD), represented by Eq. (37), is used to test whether the algorithm under evaluation can achieve the same best value across all runs and to examine the consistency of its output:

$$SD_{F} = \sqrt{\frac{1}{R_{n}-1} \sum_{i=1}^{R_{n}} (F_{i}-Average_{F})^{2}}$$
(37)

We report the average value and standard deviation (SD) of E-PDO along with those of seven other existing MAs, namely GWO, MFO, ALO, WOA, DA, SCA and RSA, for comparison. We also include the two further variants of E-PDO described above.

5.4.2 Measurement of alternatives and ranking according to compromise solution (MARCOS) method

Choosing the best method should involve more than scanning and comparing lists of benchmark results. This section describes the MARCOS approach, developed by Stevic et al. [42], a new method of multi-criteria analysis [41]. The MARCOS technique is built on specifying how alternatives relate to reference values (the ideal and anti-ideal alternatives). Based on these relationships, the utility functions of the alternatives are established and compromise rankings with respect to the ideal and anti-ideal solutions are generated. Utility functions define the preference among decisions: they show a choice's position relative to the ideal and anti-ideal possibilities. The optimal alternative is the one closest to the ideal reference point and farthest from the anti-ideal one. The MARCOS methodology comprises the following steps:


Step 1. Constructing an initial decision-making matrix. A multi-criteria model defines n criteria and m alternatives. For group decision-making, a team of r experts evaluates the alternatives against the criteria, and the initial group decision-making matrix is created by merging the expert assessment matrices.


Step 2. The formation of an extended initial matrix. At this stage, the original matrix is extended by appending the ideal (AI) and anti-ideal (AAI) solutions.

The anti-ideal solution (AAI) is the worst alternative, whereas the ideal solution (AI) is the best one. Depending on the nature of the criteria, AAI and AI are defined as:

$$AAI= \left\{ \begin{array}{ll} \underset{i}{\min }\, x_{ij} & \text {if}~~j\in B ~ \text {(for maximizing problem)} \\ \underset{i}{\max }\, x_{ij} & \text {if} ~~j\in C ~ \text {(for minimizing problem)} \end{array}\right.$$
$$AI= \left\{ \begin{array}{ll} \underset{i}{\max }\, x_{ij} & \text {if} ~~j\in B ~ \text {(for maximizing problem)} \\ \underset{i}{\min }\, x_{ij} & \text {if} ~~j\in C ~ \text {(for minimizing problem)} \end{array}\right.$$

where B represents a set of criteria for benefits and C represents a set of criteria for costs.


Step 3. Normalization of the extended initial matrix (X). The elements of the normalized matrix \(N= [n_{ij}]_{m\times n}\) are calculated as:

$$n_{ij}=\left\{ \begin{array}{ll} \frac{x_{ij}}{x_{ai}} & \text {if} ~~j\in B ~ \text {(for maximizing problem)} \\ \frac{x_{ai}}{x_{ij}} & \text {if} ~~j\in C ~\text {(for minimizing problem)} \end{array}\right.$$

where \(x_{ij}\) and \(x_{ai}\) are the elements of the matrix X and of the ideal solution, respectively.


Step 4. Determination of the weighted matrix \(V=[v_{ij}]_{m\times n}\). The weighted matrix V is formed by multiplying the normalized matrix N by the weight coefficients \(w_j\) of the criteria:

\(v_{ij}=n_{ij} \times w_j\)


Step 5. Calculation of the utility degree of alternative \(K_i\). The utility degrees of an alternative with respect to the anti-ideal and ideal solutions, \(K_i^-\) and \(K_i^+\), are determined as:

$$K_{i}^{-} = \frac{S_{i}}{S_{aai}}, \qquad K_{i}^{+} = \frac{S_{i}}{S_{ai}}$$

where \(S_i ~ (i= 1, 2,\ldots, m)\) denotes the sum of the elements of the i-th row of the weighted matrix V:

$$ S_i=\sum _{j=1}^n{v_{ij}} $$

Step 6. Calculating the utility function of alternatives \(f(K_i)\). The utility function is the trade-off between the observed alternative and the ideal and anti-ideal solutions, expressed as:

$$f(K_i)=\frac{K_i^++K_i^-}{1+\frac{1-f(K_i^+)}{f(K_i^+)}+\frac{1-f(K_i^-)}{f(K_i^-)}}$$

where \(f(K_i^-)\) and \(f(K_i^+)\) represent the utility functions with respect to the anti-ideal and ideal solutions, respectively. They are calculated as:

$$f(K_{i}^{-}) = \frac{K_{i}^{+}}{K_{i}^{+} + K_{i}^{-}}, \qquad f(K_{i}^{+}) = \frac{K_{i}^{-}}{K_{i}^{+} + K_{i}^{-}}$$

Step 7. Ranking the alternatives. The final values of the utility function are used to rank the alternatives; the alternative with the highest utility function value is preferred.
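
The seven steps above condense into a short routine. The following Python sketch is our own compact implementation for a cost-only decision matrix, as used in Sect. 5.4.3; it assumes strictly positive criterion values and equal weights in the example.

```python
import numpy as np

def marcos_cost(X, weights):
    """MARCOS utilities for an m x n matrix X of cost criteria (lower is
    better); returns one utility per alternative, higher is better."""
    X = np.asarray(X, float)
    ai, aai = X.min(axis=0), X.max(axis=0)    # ideal / anti-ideal (Step 2)
    ext = np.vstack([aai, X, ai])             # extended matrix
    n = ai / ext                              # cost normalization (Step 3)
    v = n * np.asarray(weights)               # weighting (Step 4)
    s = v.sum(axis=1)
    k_minus, k_plus = s / s[0], s / s[-1]     # utility degrees (Step 5)
    f_minus = k_plus / (k_plus + k_minus)     # f(K-) (Step 6)
    f_plus = k_minus / (k_plus + k_minus)     # f(K+)
    f = (k_plus + k_minus) / (1 + (1 - f_plus) / f_plus + (1 - f_minus) / f_minus)
    return f[1:-1]                            # drop the AAI/AI rows (Step 7)

# Example: three algorithms scored on four minimization benchmarks
utilities = marcos_cost([[1e-3, 2.0, 0.5, 9.1],
                         [1e-2, 1.5, 0.7, 8.0],
                         [1e-1, 3.0, 0.9, 12.0]], [0.25] * 4)
```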

5.4.3 MARCOS calculation

Applying the MARCOS method here, the CEC test functions are the criteria and the algorithms are the alternatives; since all problems (CEC test functions) are minimization problems, all criteria are cost criteria. Using this procedure, E-PDO is the best algorithm for solving the CEC test functions; Table 19 clearly shows its computational supremacy.

Table 19 Ranking using MARCOS method
Fig. 10
figure 10

Convergence curves of EPDO and competitor algorithms for different CEC-2017 test functions

Fig. 11
figure 11

Convergence curves of EPDO and competitor algorithms for different CEC-2017 test functions

5.5 Convergence report

To compare the convergence speed of E-PDO with other algorithms (GWO, MFO, ALO, WOA, SCA and RSA) on a selection of benchmark functions, Figs. 10 and 11 plot the number of function evaluations on the horizontal axis and the objective function value on the vertical axis; the curves are plotted for dimension 10. In most cases, E-PDO converges to the optimum faster than the other methods. This performance is particularly noteworthy on F1-F4, F7, F9-F11, F14-F20, F24-F29 and F31-F33. It is due to the DOL initialization and generation jumping strategies as well as the m-Lévy flight, which help increase the exploration capability, while the dynamic-change property of DOL assists in excellent exploitation. As a result, E-PDO is competitive and beats the other algorithms in search accuracy, dependability, convergence speed and escaping local optima.

5.6 Applicability of E-PDO for solving engineering design problems

The effectiveness of the E-PDO algorithm in solving engineering problems (constrained optimization problems) is assessed in this subsection. Five engineering problems are used: the pressure vessel problem, the rolling element bearing design problem, the cantilever beam design problem, the tension/compression spring design problem and the gear train design problem. Since these problems involve many inequality constraints, MAs handle them with a penalty function: whenever a constraint is violated, a large penalty value is added to the objective. The results obtained by E-PDO are compared with those of other MAs.

Fig. 12
figure 12

Pressure vessel design

5.6.1 Pressure vessel problem

A pressure vessel is a closed vessel used to hold gases or liquids at a pressure substantially greater than the atmospheric pressure (Fig. 12). The vessel considered here is cylindrical and capped at both ends by hemispherical heads. This optimization problem was put forth by Sandgren [39], and pressure vessels are frequently utilised in engineering.

The compressed air tank has a working pressure of 3000 psi and a minimum capacity of 750 cubic feet, and its construction must comply with the boiler and pressure vessel code of the American Society of Mechanical Engineers (ASME). The goal is to minimize the overall cost, which comprises the welding, material and forming costs. The variables are the shell thickness (\(T_{s}\)), head thickness (\(T_{h}\)), inner radius (R) and length of the cylindrical section of the vessel, excluding the head (L). The thicknesses \(T_{s}\) and \(T_{h}\) may only take integer multiples of 0.0625 inches. The problem is mathematically formulated as follows [22]:

$$\begin{aligned}&Consider ~ \vec {z} = [T_{s}, T_{h}, R, L] \\&Minimize ~ f(\vec {z} ) = 0.6224 T_{s} R L + 1.7781 T_{h} R^2 + 3.1661 T_{s}^2 L + 19.84 T_{h}^2 L \\&\text {subject to} \\&g_{1}(\vec {z}) = - T_{s} +0.0193 R \le 0, \\&g_{2}(\vec {z}) = - T_{h} +0.00954 R \le 0, \\&g_{3}(\vec {z}) = - \pi R^2 L - \frac{4}{3} \pi R^3 + 750 \times 1728 \le 0,\\&g_{4}(\vec {z}) = L - 240 \le 0,\\&0 \le T_{s}, T_{h} \le 99, ~ 0 \le R, L \le 200 \end{aligned}$$
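
As an illustration of the penalty approach mentioned above, the sketch below evaluates a penalized pressure vessel objective. The static penalty weight 1e6 is our own illustrative choice, and the constant 750 × 1728 converts the 750 ft³ capacity to cubic inches.

```python
import numpy as np

def pressure_vessel_cost(z, penalty=1e6):
    """Penalized objective for the pressure vessel problem; z = [Ts, Th, R, L]."""
    Ts, Th, R, L = z
    f = (0.6224 * Ts * R * L + 1.7781 * Th * R**2
         + 3.1661 * Ts**2 * L + 19.84 * Th**2 * L)
    g = [-Ts + 0.0193 * R,                                             # g1
         -Th + 0.00954 * R,                                            # g2
         -np.pi * R**2 * L - (4.0 / 3.0) * np.pi * R**3 + 750 * 1728,  # g3
         L - 240.0]                                                    # g4
    violation = sum(max(0.0, gi) for gi in g)  # total constraint violation
    return f + penalty * violation
```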

The optimal values of the variables and the objective function are reported in Table 20; E-PDO achieves a better result than the original PDO.

Table 20 Results estimated for pressure vessel design

5.6.2 Rolling element bearing design problem

This problem (Fig. 13) has 10 design variables and 10 constraints. Its primary objective is to maximize the load-carrying capacity of the bearing [4, 35].

Fig. 13
figure 13

Rolling element bearing design

The mathematical formulation of the rolling element bearing design problem is as follows:

$$\begin{aligned}&Consider ~ [\vec {z}] = [z_{1}, z_{2}, z_{3}, z_{4}, z_{5}, z_{6}, z_{7}, z_{8},z_{9}, z_{10}] = [D_{m}, D_{b}, Z, f_{i}, f_{o}, K_{D_{min}}, K_{D_{max}}, \varepsilon , e, \zeta ] \\&Maximize ~ f(\vec {z}) = {\left\{ \begin{array}{ll} f_{c} Z^{2/3} D_{b}^{1.8} & if ~ D_{b} \le 25.4~\text{mm} \\ 3.647 f_{c} Z^{2/3} D_{b}^{1.4} & if ~ D_{b} > 25.4~\text{mm} \end{array}\right. } \\&\text {subject to} \\&g_{1} (\vec {z}) = \frac{\phi _{o}}{2\sin ^{-1}{(D_{b}/D_{m})}}-Z+1 \ge 0, \\&g_{2} (\vec {z}) = 2D_{b}-K_{D_{min}}(D-d) \ge 0, \\&g_{3} (\vec {z}) = K_{D_{max}}(D-d) - 2D_{b}\ge 0, \\&g_{4} (\vec {z}) = D_{m}-(0.5-e)(D+d) \ge 0,\\&g_{5} (\vec {z}) = (0.5+e)(D+d) - D_{m}\ge 0,\\&g_{6} (\vec {z}) = D_{m}-0.5(D+d) \ge 0,\\&g_{7} (\vec {z}) = 0.5 (D-D_{b}-D_{m})-\varepsilon D_{b} \ge 0, \\&g_{8} (\vec {z}) = \zeta B_{w}-D_{b} \le 0,\\&g_{9} (\vec {z}) = f_{i} \ge 0.515,\\&g_{10} (\vec {z}) = f_{o} \ge 0.515,\\&f_{c} = 37.91 \times \Bigl [ 1 + \Bigl \{1.04 \times \Bigl ( \frac{1-\gamma }{1+\gamma }\Bigr )^{1.72} \times \Bigl (\frac{f_{i}(2f_{o}-1)}{f_{o}(2f_{i}-1)}\Bigr )^{0.41}\Bigr \}^{10/3}\Bigr ]^{-0.3} \times \Bigl [ \frac{\gamma ^{0.3} (1 - \gamma )^{1.39}}{(\gamma + 1)^{1/3} } \Bigr ] \times \Bigl [ \frac{2f_{i}}{ 2f_{i} -1} \Bigr ]^{0.41}, \\&\gamma = D_{b}/D_{m},~f_{o} = r_{o}/D_{b}, ~f_{i} = r_{i}/D_{b}, \\&\phi _{o} = 2\pi - 2 \cos ^{-1}{\frac{\{(D - d) /2 - 3 ( T/4)\}^{2} + \{D/2 - (T /4) - D_{b}\}^{2} - \{d/2 + (T /4)\}^{2}}{2 \{(D - d) /2 - 3 (T/4)\}\{D/2 - (T /4) - D_{b}\}}}, \\&T = D - d - 2 D_{b}, ~D = 160, ~d = 90, ~B_{w} = 30, ~r_{i} = r_{o} = 11.033, \\&0.5(D+d) \le D_{m} \le 0.6(D+d), ~ 0.15(D-d)\le D_{b} \le 0.45(D-d), \\&4 \le Z \le 50, ~ 0.515 \le f_{i} \le 0.6, ~ 0.515 \le f_{o} \le 0.6, \\&0.4 \le K_{D_{min}} \le 0.5, ~ 0.6 \le K_{D_{max}} \le 0.7, ~ 0.3 \le \varepsilon \le 0.4, \\&0.02 \le e \le 0.1, ~ 0.6 \le \zeta \le 0.85 \end{aligned}$$

The optimal values of the variables and the objective function are presented in Table 21. The optimal function value of \(6.96\times 10^{4}\) obtained by EPDO shows that it achieves greater solution accuracy than the competing algorithms.
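Once \(f_{c}\) is computed, the piecewise objective above can be evaluated directly. The sketch below is an illustrative reading of the formulation (not the authors’ code); the candidate values are assumptions, and the ten constraints must still be checked separately.

```python
# An illustrative reading of the formulation above (not the authors' code):
# evaluating the load-carrying capacity of a candidate bearing design.
# Constraint checking is omitted here and must be done separately.

def bearing_capacity(Dm, Db, Z, fi, fo):
    gamma = Db / Dm  # gamma = Db/Dm, as defined above
    fc = (37.91
          * (1 + (1.04
                  * ((1 - gamma) / (1 + gamma))**1.72
                  * (fi * (2*fo - 1) / (fo * (2*fi - 1)))**0.41)**(10/3))**(-0.3)
          * (gamma**0.3 * (1 - gamma)**1.39 / (1 + gamma)**(1/3))
          * (2*fi / (2*fi - 1))**0.41)
    # Piecewise objective: the exponent on Db changes at 25.4 mm.
    if Db <= 25.4:
        return fc * Z**(2/3) * Db**1.8
    return 3.647 * fc * Z**(2/3) * Db**1.4

# Example candidate inside the stated bounds (values are assumptions).
print(bearing_capacity(Dm=125.0, Db=21.4, Z=11, fi=0.515, fo=0.515))
```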

Table 21 Comparative studies estimated by various algorithms for rolling element bearing design

5.6.3 Cantilever beam design problem

In this problem, a vertical load is applied at the free end of a cantilever beam whose other end is rigidly supported. Figure 14 illustrates the structure of the cantilever beam design problem. The goal is to minimize the weight of the beam subject to a constraint on the vertical displacement, which the optimal design must not violate. The cantilever beam design problem is mathematically described as follows:

$$\begin{aligned}{} & {} Minimize\, f(z) = 0.0624 (z_{1}+z_{2}+z_{3}+z_{4}+z_{5}) \\{} & {} \text {subject to:} \\{} & {} g(z) = \frac{61}{z_{1}^{3}} + \frac{37}{z_{2}^{3}} + \frac{19}{z_{3}^{3}} + \frac{7}{z_{4}^{3}} + \frac{1}{z_{5}^{3}} -1 \le 0 \\{} & {} 0.01 \le z_{i} \le 100;~ i = 1, 2, \cdots , 5 \end{aligned}$$
Fig. 14
figure 14

Structure of cantilever beam

Table 22 Comparative results for the cantilever beam design estimated using different algorithms

The optimal function value of 1.332 obtained by EPDO indicates that it achieves better solution accuracy than the competing algorithms (Table 22).
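As a quick numerical check of this formulation, the sketch below (not the authors’ code) evaluates the penalised objective; the test design and penalty weight are illustrative assumptions.

```python
# A minimal sketch (not the paper's code): penalised objective for the
# cantilever beam; z holds the five section variables z1..z5.

def cantilever(z, penalty=1e6):  # penalty weight is an assumption
    z1, z2, z3, z4, z5 = z
    weight = 0.0624 * (z1 + z2 + z3 + z4 + z5)
    # Displacement constraint g(z) <= 0 from the formulation above.
    g = 61/z1**3 + 37/z2**3 + 19/z3**3 + 7/z4**3 + 1/z5**3 - 1
    return weight + penalty * max(0.0, g)

# A feasible design near the reported optimum (values are illustrative).
print(cantilever([6.02, 5.31, 4.50, 3.51, 2.16]))  # ~1.3416, no penalty
```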

5.6.4 Tension/compression spring design problem

In engineering sciences, the design of tension/compression springs [22] is an optimization problem with four constraints whose aim is to minimize the weight of the spring; its schematic construction is depicted in Fig. 15. The design is subject to buckling, stress, and deflection constraints. The problem centres on identifying the optimal values of three variables: the wire diameter (d), the mean coil diameter (D), and the number of active coils (P).

Fig. 15
figure 15

Tension/compression spring design

The design of tension/compression springs is mathematically expressed as follows:

$$\begin{aligned}{} & {} Consider ~ [z] = [z_{1}, z_{2}, z_{3} ] = [d, D, P] \\{} & {} Minimize ~ f(z) = (z_{3}+2)z_{2}z_{1}^{2}\\{} & {} \text {subject to:} \\{} & {} g_{1} (z) = 1 - \frac{z_{2}^{3}z_{3}}{71785z_{1}^{4}} \le 0 \\{} & {} g_{2} (z) = \frac{4z_{2}^{2}-z_{1}z_{2}}{12566(z_{2}z_{1}^{3}-z_{1}^{4})} + \frac{1}{5108z_{1}^{2}} - 1 \le 0 \\{} & {} g_{3} (z) = 1 - \frac{140.45 z_{1}}{z_{2}^{2}z_{3}} \le 0 \\{} & {} g_{4} (z) = \frac{z_{1} + z_{2}}{1.5} - 1 \le 0 \\{} & {} 0.05 \le z_{1} \le 2.00; ~ 0.25 \le z_{2} \le 1.30; ~ 2.00 \le z_{3} \le 15.00 \end{aligned}$$

The results of implementing EPDO and the competing algorithms on the tension/compression spring design problem are provided in Table 23. Based on these findings, EPDO offers the best solution, with an objective function value of 0.0126 at the design variables (0.0547, 0.4211, 9.1). Table 23 also displays the statistical outcomes of optimizing this design problem using EPDO and the competing techniques.
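For illustration, the sketch below (not the authors’ code) evaluates the penalised spring objective with \(z = [d, D, P]\) as in the formulation above; the test design and penalty weight are assumptions chosen to be clearly feasible rather than optimal.

```python
# A minimal sketch (not the paper's code): penalised objective for the
# tension/compression spring with z = [d, D, P] as in the formulation.

def spring(z, penalty=1e6):  # penalty weight is an assumption
    d, D, P = z
    weight = (P + 2) * D * d**2
    g = [
        1 - D**3 * P / (71785 * d**4),                                   # g1 <= 0
        (4*D**2 - d*D) / (12566 * (D*d**3 - d**4)) + 1/(5108*d**2) - 1,  # g2 <= 0
        1 - 140.45 * d / (D**2 * P),                                     # g3 <= 0
        (d + D) / 1.5 - 1,                                               # g4 <= 0
    ]
    return weight + penalty * sum(max(0.0, gi) for gi in g)

# A feasible (not optimal) design, for illustration only.
print(spring([0.06, 0.5, 10.0]))  # ~0.0216, no penalty active
```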

Table 23 Comparative results for the design of tension/compression springs

5.6.5 Gear train design problem

The main objective of this problem is to minimize the gear ratio cost of a gear train, i.e., the squared deviation of the achieved gear ratio from the required value of 1/6.931. The problem includes four design variables, namely the numbers of teeth of the gears, with the gear ratio defined as

$$\begin{aligned} \text {Gear ratio} = \frac{n_{A} \times n_{B}}{n_{C} \times n_{D}} \end{aligned}$$

Figure 16 illustrates this design problem. The four gears A, B, C, and D have \(n_{A}, n_{B}, n_{C},\) and \(n_{D}\) teeth, respectively, which serve as the design variables. All of them are integers in the range [12, 60].

The gear train design challenge [39] is mathematically expressed as follows:

$$\begin{aligned}{} & {} Consider ~ [ n_{A}, n_{B}, n_{C}, n_{D}] \\{} & {} Minimize ~ f(n_{A}, n_{B}, n_{C}, n_{D}) = \Bigl ( \frac{1}{6.931}-\frac{n_{A} \times n_{B}}{n_{C} \times n_{D}}\Bigr )^{2}\\{} & {} \text {subject to:} \\{} & {} 12 \le n_{A}, n_{B}, n_{C}, n_{D} \le 60 \end{aligned}$$
Fig. 16
figure 16

Gear train design

Table 24 Comparative results for the gear train design problem

Table 24 presents statistical data from numerous independent runs alongside the outcomes of the other algorithms. It demonstrates that the suggested algorithm performs better than its competitors.
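Because all four variables are integers in [12, 60], this problem can even be solved exactly by enumeration, which makes it a useful sanity check for any meta-heuristic. The brute-force sketch below (ours, not the paper’s method) recovers the widely reported best error of about \(2.7\times 10^{-12}\).

```python
# A brute-force sketch (ours, not the paper's method): since all four
# teeth counts are integers in [12, 60], the 49**4 = 5,764,801 combinations
# can be enumerated exactly (this takes a little while in pure Python).
from itertools import product

def gear_error(nA, nB, nC, nD):
    return (1/6.931 - (nA * nB) / (nC * nD))**2

best = min(product(range(12, 61), repeat=4), key=lambda z: gear_error(*z))
print(best, gear_error(*best))  # best-known error is about 2.7e-12
```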

5.7 Discussions

Incorporating DOL initialization and the generation-jumping strategy significantly improves EPDO’s exploration capabilities. Furthermore, utilizing the m-Lévy distribution enhances the algorithm’s adaptability and exploration prowess, while the dynamic change feature of DOL further aids exploitation. Consequently, EPDO stands out as a competitive algorithm, excelling in search accuracy, dependability, convergence speed, and its capacity to escape local optima. EPDO consistently demonstrates superior performance in the comparative results across diverse engineering design problems, including the pressure vessel, rolling element bearing, cantilever beam, tension/compression spring, and gear train problems, exhibiting lower objective function values and improved statistical measures compared to the original PDO and other MAs. EPDO’s innovations are well suited to real-world engineering optimization, delivering accurate and efficient solutions, particularly in industries with diverse design constraints. A notable improvement in EPDO is its proficiency in handling engineering design problems with numerous inequality constraints: introducing a penalty function enables graceful adaptation to constraint violations, significantly expanding its applicability to intricate real-world scenarios with stringent constraints. The practical implications of EPDO’s enhancements are evident in industries relying on efficient engineering design. By consistently surpassing its predecessor and other algorithms, EPDO emerges as a reliable and robust optimization tool for real-world scenarios emphasizing precision and reliability.
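For intuition about the Lévy-flight component discussed above, the sketch below shows the standard Mantegna-style sampler for heavy-tailed steps. This is the classical construction, not the paper’s m-Lévy modification, whose exact form may differ; the stability index value is an illustrative assumption.

```python
# For intuition only: a standard Mantegna-style sampler for Levy-flight
# steps with stability index beta. This is the classic construction,
# not the paper's m-Levy variant.
import math
import random

def levy_step(beta=1.5):  # beta = 1.5 is a common illustrative choice
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)
    sigma = (num / den)**(1 / beta)  # scale of the numerator Gaussian
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v)**(1 / beta)

# Heavy-tailed steps: mostly small moves with occasional long jumps,
# which helps a search escape local optima.
print([round(levy_step(), 3) for _ in range(5)])
```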

The EPDO algorithm, while showcasing notable advantages, has its limitations. Sensitivity to algorithm tuning poses a challenge, requiring meticulous parameter configuration for optimal performance across diverse problem sets. Additionally, the algorithm’s effectiveness is contingent on the quality of initial solutions, making it susceptible to suboptimal convergence if initialization is not carefully managed. The variability in EPDO’s performance across different optimization problems suggests a degree of problem-specific sensitivity: although it excels on various benchmarks, its consistency and effectiveness might vary for specific complex, nonlinear, or multimodal functions. These limitations highlight the importance of addressing tuning challenges, improving robustness to diverse initializations, and enhancing performance across a broader range of optimization scenarios to fortify EPDO’s reliability and applicability.

6 Conclusions and plans for the future

In this research article, we have presented the EPDO algorithm as an enhanced variant of the PDO algorithm, addressing its limitations and improving its performance in the exploration and exploitation of global optimization problems. By incorporating the DOL method and a modified Lévy flight technique, EPDO achieves better intensification and diversification capabilities, improving overall optimization performance. To evaluate the effectiveness of EPDO, we conducted extensive experiments using 23 CEC-2017 benchmark test problems and additional benchmark problems. The results demonstrate that EPDO outperforms other state-of-the-art algorithms in terms of global optimization, showing its potential for solving engineering optimization problems and real-world applications. We further validated the performance of EPDO through average rank tests, MARCOS rank tests, and convergence analysis, all of which consistently confirmed its competitive advantage in handling shifted and rotated problems in global optimization.

While the EPDO algorithm demonstrates notable advantages, including improved convergence speed, solution quality, and versatility, several limitations should be acknowledged. EPDO’s sensitivity to parameter tuning poses challenges in achieving optimal configurations across diverse problem domains, and its dependency on the quality of initial solutions may lead to suboptimal performance with careless initialization. Additionally, the algorithm’s performance may exhibit variability across different optimization problems, particularly complex, nonlinear, or multimodal ones. Addressing these limitations is crucial for further enhancing EPDO’s robustness and applicability. Future work should focus on mitigating parameter sensitivity, exploring advanced initialization strategies, adapting the algorithm for specific problem types, investigating hybridization with other techniques, and conducting extensive real-world applications to validate its effectiveness. The algorithm’s application in various domains, including manufacturing, logistics, energy systems, financial modelling, healthcare planning, telecommunications network design, and agricultural planning, showcases its potential for optimizing diverse real-world scenarios. Moreover, we recommend exploring its potential for solving specific optimization problems, such as the travelling salesman problem, emission dispatch problem, nonlinear inventory optimization problem [2, 12], portfolio optimization, healthcare resource allocation problem and image segmentation problem. Furthermore, investigating many-objective optimization and conducting theoretical investigations, including Markov process modelling and stability analysis, can provide a stronger theoretical foundation for EPDO, further enhancing its understanding and capabilities.