MOIMPA: multi-objective improved marine predators algorithm for solving multi-objective optimization problems

This paper introduces a multi-objective variant of the marine predators algorithm (MPA) called the multi-objective improved marine predators algorithm (MOIMPA), which incorporates concepts from quantum theory. By leveraging quantum theory, MOIMPA aims to enhance the MPA's ability to balance exploration and exploitation and to find optimal solutions. The algorithm utilizes a concept inspired by the Schrödinger wave function to determine the position of particles in the search space. This modification improves both exploration and exploitation, resulting in enhanced performance. Additionally, the proposed MOIMPA incorporates the Pareto dominance mechanism: it stores non-dominated Pareto optimal solutions in a repository and employs a roulette wheel strategy to select solutions from the repository, considering their coverage. To evaluate the effectiveness and efficiency of MOIMPA, tests are conducted on various benchmark functions, including ZDT and DTLZ, as well as the Congress on Evolutionary Computation 2009 (CEC'09) test suite. The algorithm is also evaluated on engineering design problems. A comparison is made between the proposed multi-objective approach and other well-known evolutionary optimization methods, such as MOMPA, the multi-objective ant lion optimizer, and multi-objective multi-verse optimization. The statistical results demonstrate the robustness of the MOIMPA approach, as measured by metrics such as inverted generational distance, generational distance, spacing, and delta. Furthermore, qualitative experimental results confirm that MOIMPA provides highly accurate approximations of the true Pareto fronts.


Introduction
Over the past decade, there has been a considerable surge in the popularity and usage of evolutionary algorithms (EAs). These algorithms have proven effective in solving a wide range of real-world optimization problems in engineering and scientific research. Among the notable metaheuristic approaches, the genetic algorithm (GA) stands out as the pioneering stochastic algorithm, introduced by John Holland in the 1960s (Holland 1975). Another important algorithm is simulated annealing (SA), proposed by Kirkpatrick et al. (1983). In 1995, Kennedy and Eberhart introduced the particle swarm optimization (PSO) algorithm (Kennedy and Eberhart 1995). Numerous other approaches have been developed subsequently, such as the artificial bee colony (ABC) (Karaboga and Basturk 2007), Chernobyl disaster optimizer (CDO) (Shehadeh 2023), biology migration algorithm (BMA) (Zhang et al. 2019), RIME algorithm (Su et al. 2023), spider wasp optimization (SWO) (Abdel-Basset et al. 2023), Siberian tiger optimization (STO) (Trojovský et al. 2022), predator-prey optimization (PPO) (Mohammad Hasani Zade and Mansouri 2022), Kepler optimization algorithm (KOA) (Abdel-Basset et al. 2023), leader-advocate-believer-based optimization (LAB) (Reddy et al. 2023), past present future (PPF) (Naik and Satapathy 2021), artificial neural networks (ANNs) (D'Angelo et al. 2022), and genetic programming (GP) (D'Angelo et al. 2023). Although these intelligent algorithms offer benefits, they require enhancements to accommodate the diverse characteristics of complex real-world applications: no single approach is capable of adequately solving the full range of optimization problems. Real-world problems often exhibit challenging features such as uncertainty, dynamicity, combinatorial complexity, multiple objectives, and constraints. In line with this, the no-free-lunch (NFL) theorem (Wolpert and Macready 1997) confirms that no optimization approach can effectively address all types of problems. Alongside the development of novel algorithms, some researchers have explored general strategies to enhance the performance of meta-heuristic algorithms, for example Lévy flight (Viswanathan et al. 1996), quantum behavior (Feynman 1986), chaotic behavior (Kaplan 1979), opposition-based learning (Tizhoosh 2005), and Gaussian mutation (Liu 2012). Driven by these observations and the principles of the no-free-lunch theorem, this paper adopts quantum behavior to enhance the performance of the marine predators algorithm. This novel modification strengthens the marine predators algorithm's ability to explore and exploit the search space and to balance between the two. Moreover, many commonly encountered optimization approaches are single-objective in nature and do not adequately address the diverse features of complex real-world applications, specifically the ability to optimize many objectives simultaneously (Andersson 2000; Coello et al. 2002). The challenge of optimizing multiple objectives simultaneously is the second principal focus of this paper. In fact, multi-objective evolutionary algorithms (MOEAs) have gained significant attention from researchers in engineering fields and have been continually developed and improved. However, solving multi-objective optimization problems (MOPs) remains a difficult and challenging task. Among the influential multi-objective algorithms are multi-objective particle swarm optimization (MOPSO) (Coello et al. 2004), the multi-objective evolutionary algorithm based on decomposition (MOEA/D) (Zhang and Li 2007), the non-dominated sorting genetic algorithm (NSGA) (Srinivas and Deb 1994), and the strength Pareto evolutionary algorithm (SPEA) (Zitzler and Thiele 1999).
The classification of MOEAs can be divided into three classes: a priori (Kim and de Weck 2005), a posteriori (Branke et al. 2001), and interactive (Shin and Ravindran 1991). The a priori technique aggregates the objectives into a single one using a set of weights and requires a decision-maker to furnish preferences according to how important the objectives are. The a posteriori technique determines the Pareto optimal set in a single run, unlike the a priori method, which must be run multiple times. This method also benefits from maintaining the multi-objective formulation and finding all kinds of Pareto fronts, and the decision-maker is consulted only after the optimization process. In the third class, the interactive technique maintains the multi-objective formulation and involves the decision-maker during the optimization procedure; consequently, it requires higher execution time and computational cost to obtain accurate Pareto optimal solutions. Along these lines, this work focuses on a posteriori algorithms, the technique most existing multi-objective evolutionary algorithms follow. Many researchers have recently reported various multi-objective approaches, for instance: the multi-objective slime mould algorithm (MOSMA) (Premkumar et al. 2021), non-sorted Harris hawks optimizer (NSHHO) (Jangir et al. 2021), multi-objective bonobo optimizer based on decomposition (MOBO) (Das et al. 2020), multi-objective forensic-based investigation (MOFBI) (Chou and Truong 2022), multi-objective thermal exchange optimization (MOTEO) (Kumar et al. 2022), multi-objective ant lion optimizer (MOALO) (Mirjalili et al. 2017), non-dominated sorting manta ray foraging optimization (NSMRFO) (Daqaq et al. 2022), multi-objective multi-verse optimization (MOMVO) (Mirjalili et al. 2017), multi-objective adaptive guided differential evolution (MOAGDE) (Duman et al. 2021), and multi-objective backtracking search algorithm (MOBSA) (Daqaq et al. 2021). The main contributions of this paper can be summarized as follows:
-Introduction of MOIMPA: a multi-objective variant of the marine predators algorithm (MPA) that incorporates concepts from quantum theory.
-Enhanced exploration and exploitation: MOIMPA aims to improve the MPA's ability to balance exploration and exploitation by using a modified Schrödinger wave function to determine particle positions in the search space.
The study is structured as follows: Sect. 2 reviews previous research on the marine predators approach. Section 3 presents the mathematical formulations and definitions related to multi-objective optimization, along with the basic MPA approach and the proposed multi-objective version, MOIMPA. Section 4 presents the findings, analysis, and discussion. Lastly, Sect. 5 concludes the paper and outlines potential avenues for future research.

Related works
As aforementioned, a new multi-objective version of the marine predators approach (MPA) is suggested in this work to address multi-objective problems. MPA was originally introduced by Faramarzi et al. (2020), and the existing literature demonstrates that it has successfully tackled various problems and outperformed several established approaches. Following the No Free Lunch (NFL) theorem, researchers have improved MPA according to the complexity and nature of their specific problems, and several studies have investigated the effectiveness of their improved approaches. Among these studies: Aydemir and Kutlu Onay (2023) improved MPA by integrating the elite natural evolution, elite random mutation, and Gaussian mutation strategies. Ferahtia et al. (2022) utilized the MPA approach to minimize the operating costs of an energy management strategy for a microgrid, and MPA was applied in another work (Dinh 2022). However, few scholars have investigated its performance on multi-objective problems. On this basis, a multi-objective MPA (MOMPA) based on crowding distance and elitist non-dominated sorting mechanisms was presented by Jangir et al. (2023); this approach was run on constrained and unconstrained benchmark problems and engineering design applications. Wang et al. (2023) introduced the concept of quantity into a deep learning model to examine a wind-speed integrated probability predictive approach, utilizing a multi-objective marine predators algorithm. Additionally, Hassan et al. (2022) investigated a modified version of MPA including a comprehensive learning approach to handle the multi-objective combined economic emission dispatch (CEED) optimization problem. Finally, in a recent study, Yousri et al. (2022) suggested another enhanced MPA, named MOEMPA, to manage energy sharing in an interconnected micro-grid with a utility network in India by optimizing emissions and operating cost. Along these lines, the powerful features of MPA and the related evolutionary algorithms mentioned above motivated us to develop a novel multi-objective variant of the marine predators algorithm in this study, named the multi-objective improved marine predators algorithm (MOIMPA). The outstanding search mechanism of MPA is retained in the MOIMPA approach.

Background
This section presents a comprehensive introduction to multi-objective optimization (MO) and its essential mathematical definitions. Optimization is a foundational problem-solving technique used to discover the best solutions from a range of alternatives; in the multi-objective context, the primary aim is to concurrently optimize multiple objectives that often conflict with one another, a common occurrence in real-world decision-making. To gain a deeper understanding of the concepts and methods employed in this field, it is also crucial to explore the marine predators approach and its multi-objective counterpart. Inspired by the efficient foraging strategies of marine predators such as sharks and dolphins, the marine predators approach is a population-based search algorithm that emulates the movement and behavior of these creatures to solve optimization problems; by modeling the search process on this natural behavior, it seeks to enhance the efficiency and effectiveness of optimization. Multi-objective optimization expands the traditional optimization framework by incorporating multiple conflicting objectives: rather than finding a single optimal solution, the goal is to identify a set of solutions that represent trade-offs between the objectives, empowering decision-makers to understand those trade-offs and make well-informed choices. We first outline the key mathematical definitions of multi-objective optimization before reviewing the marine predators approach and its multi-objective variant.

Multi-objective optimization
As its name signifies, multi-objective optimization addresses various conflicting objectives concurrently. Thus, the arithmetic relational operators are not effective for comparing different solutions, and the Pareto dominance concept is instead used to determine which solution is better than another. The mathematical formulation of an MOP as a minimization problem is given as follows: where O(x) is the vector of objective functions to be minimized, h_j(x) and g_j(x) are the equality and inequality constraints, N_obj, n, m, and p are the numbers of objective functions, variables, inequality constraints, and equality constraints, and X_j^min and X_j^max are the boundaries of the j-th variable.
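The formulation itself is not reproduced in this excerpt; a standard reconstruction consistent with the symbols above (adopting the common ≤ 0 convention for inequality constraints) is:

```latex
\begin{aligned}
\min_{\vec{x}}\quad & O(\vec{x}) = \bigl[O_1(\vec{x}),\, O_2(\vec{x}),\, \ldots,\, O_{N_{obj}}(\vec{x})\bigr] \\
\text{s.t.}\quad & g_j(\vec{x}) \le 0, \quad j = 1, \ldots, m \\
& h_j(\vec{x}) = 0, \quad j = 1, \ldots, p \\
& X_j^{min} \le x_j \le X_j^{max}, \quad j = 1, \ldots, n
\end{aligned}
```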
The essential relationships that take all objectives into account concurrently are defined as follows (Pareto 1964; Coello Coello 2009). Let x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) be two vectors.
Definition 1 (Pareto dominance) x is said to dominate y if and only if x is partially less than y (i.e., x ≤ y in every component, with strict inequality in at least one).
Definition 2 (Pareto optimality) x ∈ X is called a Pareto optimal solution iff no other solution in X dominates it.
Definition 3 (Pareto optimal set) The Pareto optimal set comprises all Pareto optimal solutions (mutually non-dominated: neither x dominates y nor y dominates x).
Definition 4 (Pareto optimal front) The Pareto optimal front is the image of the Pareto optimal set in the objective space.
The set of optimal non-dominated outputs, which represents the solution set of any multi-objective optimization problem, plays a significant role in evaluating and comparing different solution options. Figure 1 provides a visual representation of this concept, illustrating the relationship between the search space and the corresponding Pareto optimal front. As depicted in Fig. 1, the red shapes represent solutions in the search space, while their projection onto the objective space forms the Pareto optimal front. The Pareto optimal front consists of solutions that are not dominated by any other solution in terms of all the objectives under consideration; these solutions represent the best possible trade-offs between conflicting objectives, where improving one objective would degrade at least one other. By examining the solutions depicted in both spaces, it becomes evident that the star shape dominates all the other shapes: domination means being superior in at least one objective without being worse in any other, and the star shape outperforms all other shapes across all objectives, making it the most desirable solution within the given problem. The visualization in Fig. 1 highlights the importance of identifying the Pareto optimal front and understanding the dominance relationships between solutions. This knowledge enables decision-makers to weigh the trade-offs associated with different solutions and select the most appropriate one based on their preferences and constraints.
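Definition 1 can be sketched in a few lines of code (a minimal illustration with our own function name, not the authors' implementation):

```python
import numpy as np

def dominates(x, y):
    """True if objective vector x Pareto-dominates y (minimization):
    x is no worse than y in every objective and strictly better in at least one."""
    x, y = np.asarray(x), np.asarray(y)
    return bool(np.all(x <= y) and np.any(x < y))

# (1, 2) dominates (2, 3); (1, 3) and (2, 1) are mutually non-dominated.
```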

Marine predators algorithm
The principal phases of the MPA approach are introduced in the following subsections.

Description of MPA
MPA is a population-based approach introduced by Faramarzi et al. (2020), inspired by the widespread foraging strategies of ocean predators and the optimal encounter rate in the biological interaction between predator and prey. As in other meta-heuristics, the initial solution X_0 is uniformly distributed throughout the search space as: where X_min is the lower bound and X_max the upper bound of the variables, and rand is a uniform random vector in the range [0, 1].
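The uniform initialization reads, in code (a sketch with our own names; the text gives only the formula):

```python
import numpy as np

def init_population(n, d, x_min, x_max, rng=None):
    """Initial prey positions: X0 = X_min + rand * (X_max - X_min), rand ~ U[0, 1],
    for n search agents in d dimensions."""
    rng = rng if rng is not None else np.random.default_rng()
    return x_min + rng.random((n, d)) * (x_max - x_min)
```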
According to the survival-of-the-fittest theory, top predators in nature are better at hunting. As a result, the fittest solution is designated as the top predator and used to form a matrix known as the Elite. The arrays of this matrix search for and locate prey based on information about the prey's positions.

Elite
where X_t denotes the top predator vector, which is replicated n times to create the Elite matrix; n denotes the number of search agents and d the number of dimensions. Prey is another matrix with the same dimensions as Elite, and predators use it to update their positions.
The MPA optimization method is separated into three primary phases, each of which takes into account a particular velocity ratio while simulating the whole life of a predator and prey as follows:

High velocity ratio
This situation occurs when the prey moves faster than the predator, i.e., during the early stages of optimization, when exploration is important. In a high velocity ratio (v ≥ 10) situation, the optimal predator strategy is to stay still. The mathematical model of this rule is as follows: where R_B is a vector of random numbers drawn from the normal distribution, representing Brownian motion, and the notation ⊗ denotes entry-wise multiplication. Prey movement is simulated by multiplying R_B by Prey. P = 0.5 is a constant, and R is a vector of uniform random values in the range [0, 1]. This phase occurs during the first third of the iterations, when the step size (velocity of movement) is large, indicating high exploratory ability. iter is the current iteration and iter_max the maximum number of iterations.
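The high-velocity update can be sketched as follows, transcribed from the update rule of the original MPA paper (Faramarzi et al. 2020); this is our illustration, not the authors' code:

```python
import numpy as np

def phase1_update(prey, elite, P=0.5, rng=None):
    """High-velocity phase (first third of iterations):
        step = R_B (x) (Elite - R_B (x) Prey)
        Prey = Prey + P * R (x) step
    R_B ~ N(0, 1) models Brownian motion, R ~ U[0, 1], (x) is entry-wise."""
    rng = rng if rng is not None else np.random.default_rng()
    R_B = rng.standard_normal(prey.shape)   # Brownian steps
    R = rng.random(prey.shape)              # uniform weights
    step = R_B * (elite - R_B * prey)
    return prey + P * R * step
```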

Unit velocity ratio
When the predator and the prey move at about the same speed, both are searching for their food. This phase occurs during the intermediate stage of optimization, when exploration is gradually transformed into exploitation. Both exploration and exploitation matter at this stage, so half of the population is earmarked for exploration and the other half for exploitation. During this stage, the prey is in charge of exploitation while the predator is in charge of exploration: if the prey moves in Lévy motion, the best predator strategy is Brownian motion.
For the first half of the population and for the second half of the population, separate update rules apply, where CF is an adaptive parameter that controls the step size.

Low velocity ratio
During this phase, a low velocity ratio occurs: the predator's speed surpasses that of the prey. This situation arises near the end of the optimization process and is typically associated with high exploitation capability. Lévy motion is the best predator strategy for low velocity ratios (v = 0.1). This stage is modeled as follows: multiplying R_L by Elite mimics the predator's Lévy movement, whereas adding the step size to the Elite position simulates the predator's move, which aids in updating the prey's location.
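The low-velocity update can be sketched as below. The Lévy generator is a common choice (Mantegna's algorithm) since the excerpt does not specify one, and the CF formula is transcribed from the original MPA paper (Faramarzi et al. 2020); treat both as our assumptions:

```python
import numpy as np
from math import gamma, pi, sin

def levy(shape, beta=1.5, rng=None):
    """Levy-stable steps via Mantegna's algorithm (a common choice;
    the paper does not specify its generator)."""
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.standard_normal(shape) * sigma
    v = rng.standard_normal(shape)
    return u / np.abs(v) ** (1 / beta)

def phase3_update(prey, elite, it, max_it, P=0.5, rng=None):
    """Low-velocity phase (last third of iterations):
        step = R_L (x) (R_L (x) Elite - Prey)
        Prey = Elite + P * CF (x) step
    with CF = (1 - it/max_it)^(2 * it/max_it) the adaptive step-size factor."""
    rng = rng if rng is not None else np.random.default_rng()
    R_L = levy(prey.shape, rng=rng)
    CF = (1 - it / max_it) ** (2 * it / max_it)
    return elite + P * CF * R_L * (R_L * elite - prey)
```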
An additional feature of MPA is its simulation of environmental effects on predator behavior, which increases the chances of escaping local optima. This stems from the fact that factors such as eddy formation and fish aggregating devices (FADs) influence predators: they jump to other locations with abundant prey 20% of the time, while searching for prey in the local area the rest of the time. The FADs effect is modeled as follows: where FADs = 0.2 denotes the probability that FADs influence the optimization process. U is a binary vector of zeros and ones, constructed by generating a random vector in [0, 1] and setting each entry to zero if it is less than 0.2 and to one otherwise. r is a uniform random number in [0, 1], and the subscripts r1 and r2 denote random indexes of the prey matrix.
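The FADs rule can be sketched as follows, based on the description above and the original MPA paper (our transcription; names and the CF reuse are our assumptions):

```python
import numpy as np

def fads_effect(prey, x_min, x_max, it, max_it, FADs=0.2, rng=None):
    """FADs jump: with probability FADs the prey takes a long jump inside the
    bounds, masked by the binary vector U; otherwise it moves along the line
    between two randomly indexed population members."""
    rng = rng if rng is not None else np.random.default_rng()
    n, d = prey.shape
    CF = (1 - it / max_it) ** (2 * it / max_it)
    if rng.random() < FADs:
        U = (rng.random((n, d)) > FADs).astype(float)  # binary mask: 0 if < 0.2
        return prey + CF * (x_min + rng.random((n, d)) * (x_max - x_min)) * U
    r = rng.random()
    r1, r2 = rng.permutation(n), rng.permutation(n)    # random prey indexes
    return prey + (FADs * (1 - r) + r) * (prey[r1] - prey[r2])
```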
The main steps of MPA are demonstrated in Fig. 2. The flow chart illustrates the step-by-step process of the algorithm, presenting a clear and structured view of its operations and their sequence. It serves as a valuable tool for understanding the algorithm's inner workings and for analyzing its efficiency and effectiveness on complex optimization problems.

The proposed algorithm
In this subsection, quantum mechanics is used to enhance the original MPA technique; the resulting quantum model is named the IMPA algorithm. The quantum algorithm (QA) was first proposed by Benioff (1980), who showed that it could tackle many difficult problems using the concepts and principles of quantum theory, including the superposition of quantum states, entanglement, and interference. The quantum-inspired evolutionary algorithm (QEA), one of the developing algorithms inspired by quantum computing (Han and Kim 2002), has been successfully applied to several combinatorial optimization problems.
where x_best denotes the best solution, α is a design parameter, u and w are drawn from a uniform probability distribution in the range [0, 1], and h is a random value ranging from 0 to 1. Mbest is the mean best of the population, defined as the mean of the x_best positions (Eq. 24). The flow chart of the proposed IMPA technique is shown in Fig. 3, which depicts the revised steps and procedures implemented to enhance the optimization process. This modified approach aims to overcome the limitations of the original algorithm, offering improved performance, convergence, and solution quality.
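The quantum-behaved update equation itself is not shown in this excerpt; a QPSO-style sketch consistent with the symbols in the text (α, u, w, h, Mbest) is given below. The delta-potential-well form is an illustrative reconstruction drawn from the quantum-behaved PSO literature, not the paper's exact rule:

```python
import numpy as np

def quantum_update(x, x_best, mbest, alpha=0.75, rng=None):
    """Schrodinger-wave-inspired position update in the QPSO style
    (hypothetical reconstruction): an attractor p is drawn between the
    particle's best and its current position, and the new position is
    sampled from a delta-potential well centred on p."""
    rng = rng if rng is not None else np.random.default_rng()
    u = 1.0 - rng.random(x.shape)    # u in (0, 1], keeps log(1/u) finite
    w = rng.random(x.shape)          # w ~ U[0, 1]
    h = rng.random(x.shape)          # h ~ U[0, 1], chooses the sign
    p = w * x_best + (1 - w) * x     # stochastic attractor
    sign = np.where(h < 0.5, -1.0, 1.0)
    return p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)

# Mbest (Eq. 24) is simply the mean of the best positions:
# mbest = x_best_population.mean(axis=0)
```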

Description of the proposed algorithm MOIMPA
Based on the mathematical emulation explained above, the swarm behavior of the prey can be simulated clearly. When dealing with multi-objective problems, two issues need to be addressed in the IMPA algorithm. First, MOIMPA needs to store multiple solutions as the optimal solution set of a multi-objective problem. Second, in each iteration IMPA updates the prey positions using the best solution, but in a multi-objective problem a single best solution does not exist.

In MOIMPA, the first issue is settled by equipping the IMPA technique with a repository of prey positions, which stores a limited number of non-dominated solutions. During optimization, each prey is compared with all residents of the repository using the Pareto dominance operator. If a prey dominates one solution in the repository, they are swapped. If a prey dominates a set of solutions in the repository, those solutions are all removed and the prey is added. If at least one repository resident dominates a prey in the new population, that prey is discarded immediately. If the prey is non-dominated with respect to all repository residents, it is added to the archive. If the repository becomes full, one of the similar non-dominated solutions in the repository is removed. For the second issue, an appropriate strategy is to select the guide from the set of non-dominated solutions with the least crowded neighborhood, using the same ranking process and roulette wheel selection employed in the repository maintenance operator. The pseudo-code of the MOIMPA algorithm is shown in Algorithm 1, which provides a step-by-step framework for the method.
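The repository rules above can be sketched compactly (our illustration; the crowding-based truncation and roulette wheel selection are omitted and replaced by a naive size cap, as noted in the comments):

```python
import numpy as np

def dominates(x, y):
    """Pareto dominance for minimization."""
    x, y = np.asarray(x), np.asarray(y)
    return bool(np.all(x <= y) and np.any(x < y))

def update_archive(archive, candidate, max_size=100):
    """Repository maintenance as described in the text: discard the candidate
    if any resident dominates it; otherwise drop every resident the candidate
    dominates and add the candidate. Truncation of a full archive (removing a
    solution from a crowded region) is simplified here to a plain size cap."""
    if any(dominates(a, candidate) for a in archive):
        return archive                      # dominated candidate: discard
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    return archive[:max_size]
```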
Step 1: The prey positions are updated based on Eq. (11), which defines the update rule during the first phase of the algorithm (the initial one-third of the maximum iterations).
Step 2 and Step 3: During the second phase (between one-third and two-thirds of the maximum iterations), the prey positions are updated differently for the two halves of the population. Step 2 updates the positions of the first half using Eq. (14); Step 3 updates the positions of the other half using Eq. (16).
Step 4: In the final phase (once the iteration count exceeds two-thirds of the maximum iterations), the prey positions are updated using Eq. (20).
Step 5: The effect of eddy formation and fish aggregating devices (FADs) is applied: the prey positions are adjusted according to Eq. (21), which incorporates the influence of FADs.
Step 6: The prey positions are further updated using Eq. (23), enhancing their exploration and exploitation capabilities.
Step 7: The fitness of the new solution, i.e., the objective values associated with it, is calculated.
Step 8: Finally, the non-dominated solutions obtained are used to update the archive, preserving the best solutions throughout the iterations.
These step-by-step explanations provide a clear understanding of the work performed and the variables involved at each stage of the MOIMPA algorithm presented in Algorithm 1.

Performance metrics
To affirm the effectiveness of a multi-objective algorithm, three main properties must be demonstrated: convergence, distribution, and coverage. On this basis, to compare the reliability and efficiency of MOIMPA with competitive algorithms, four performance indicators were considered: generational distance (GD) (Van Veldhuizen and Lamont 1998), inverted generational distance (IGD) (Sierra and Coello 2005), spacing (SP) (Schott 1995), and the delta metric. The IGD and GD indicators evaluate the approximation to the Pareto front in terms of diversity and convergence, while the spacing indicator evaluates the distribution of the non-dominated solutions (coverage). Note that the smaller the metric value, the higher the quality of the obtained solutions. The mathematical formulations of these metrics are described as follows:

where d_i is the Euclidean distance between the i-th true Pareto optimal solution and the nearest obtained Pareto optimal solution in the reference set, and n_tpf indicates the number of true Pareto optimal solutions.

SP
where d̄ is the average of all d_i.
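The three distance-based metrics can be sketched as follows (our illustration; normalization conventions vary across papers, and the variant used here divides by the set size via a plain mean):

```python
import numpy as np

def gd(obtained, true_pf):
    """Generational distance: average distance from each obtained point to its
    nearest true-Pareto-front point (one common normalization; some variants
    use a root-mean form instead)."""
    d = np.min(np.linalg.norm(obtained[:, None, :] - true_pf[None, :, :], axis=2), axis=1)
    return float(d.mean())

def igd(obtained, true_pf):
    """Inverted generational distance: GD measured from the true front toward
    the obtained set, capturing both convergence and spread."""
    return gd(true_pf, obtained)

def spacing(obtained):
    """Spacing: standard deviation of nearest-neighbour distances within the
    obtained set, measuring uniformity of coverage."""
    D = np.linalg.norm(obtained[:, None, :] - obtained[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)        # ignore self-distances
    d = D.min(axis=1)
    return float(np.sqrt(np.mean((d - d.mean()) ** 2)))
```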

Experimental results and analysis
To validate the efficiency of the suggested MOIMPA algorithm, a set of test functions with diverse characteristics (various forms of Pareto front) was investigated: five bi-objective ZDT functions (Zitzler et al. 2000), six three-objective DTLZ functions (Deb et al. 2002), and the CEC'09 multi-objective suite (Zhang et al. 2008), which contains ten UF functions with bi- and three-objective formulations, in addition to the welded beam problem; all are listed in Table 1. Four popular indicator metrics were considered, namely generational distance (GD), inverted generational distance (IGD), spacing (SP), and the delta metric, as presented in the "Performance metrics" section. For comparison, three significant multi-objective optimization algorithms were re-implemented to affirm the effectiveness of the MOIMPA approach: the multi-objective marine predator algorithm (MOMPA) (Jangir et al. 2023), multi-objective ant lion optimization (MOALO) (Mirjalili et al. 2017), and multi-objective multi-verse optimization (MOMVO) (Mirjalili et al. 2017); their control parameters are listed in Table 2. These parameter values performed best in most cases in the original papers and are kept as suggested. To ensure a fair comparison, all multi-objective approaches were executed 20 times with a maximum of 300 iterations. The detailed settings of the system utilized in this article are presented in Table 3. The outcomes are illustrated both qualitatively and quantitatively.
The experimental findings, based on the average and standard deviation over the 20 runs, are listed in Tables 4, 5, 6 and 7. Note that the better approach is the one with the lower metric value, and the best solutions are highlighted in boldface. Further, the best Pareto optimal fronts attained are depicted in Figs. 4, 5, 6 and 7. On the twenty benchmark test suites under consideration, MOIMPA reaches statistically significant values in most case studies. The proposed approach leads on the five ZDT test problems, ranking first on four of the five functions for IGD, three for GD, three for SP, and four for delta. MOIMPA also ranks first on the DTLZ problems on four functions out of five for each of the IGD, GD, SP, and delta metrics. The MOMVO algorithm outperforms MOIMPA on the UF benchmark test suite in all indicator metrics except spacing, but it does not match MOIMPA's accuracy in terms of standard deviation. MOALO and MOMPA obtained the lowest performance. Additionally, Figs. 4, 5, 6 and 7 show that MOIMPA yields better coverage, convergence, and spread along the true Pareto front in all case studies. The MOMPA approach also provides a good Pareto front on ZDT, except for the ZDT6 function; by contrast, the competing MOMVO and MOALO approaches show the worst Pareto fronts. To sum up, the suggested MOIMPA optimizer significantly outperformed the competitors, including the other multi-objective MPA variant, MOMPA.

Conclusion
This study has presented a modified version of the marine predators approach (MPA) that incorporates quantum optimization concepts, named MOIMPA, for solving multi-objective problems. The proposed approach has undergone quantitative evaluation using four performance indicators: inverted generational distance (IGD), spacing (SP), generational distance (GD), and the delta metric, evaluated over 20 runs. Comparative analysis was carried out against other algorithms, namely MOMPA, MOALO, and MOMVO. Additionally, various engineering design problems were examined to assess the efficiency of MOIMPA. The results demonstrate that the developed optimizer is more efficient than the competitors. In future work, we plan to extend the application of the proposed algorithm to further multi-objective engineering problems, including optimal power flow and economic emission dispatch with renewable energy resources. By testing the algorithm on these real-world engineering problems, we aim to validate its effectiveness and practicality in solving complex optimization challenges.

Fig. 1
Fig. 1 Search and objective spaces

Fig. 6
Fig. 6 Comparison of obtained Pareto fronts on DTLZ functions

Table 1
Descriptions of the considered benchmark functions

Table 2
Parameter

Table 3
The detailed settings of the utilized system