An approach for optimizing multi-objective problems using hybrid genetic algorithms

Optimization problems can be found in many aspects of our lives. An optimization problem can be approached as searching problem where an algorithm is proposed to search for the value of one or more variables that minimizes or maximizes an optimization function depending on an optimization goal. Multi-objective optimization problems are also abundant in many aspects of our lives with various applications in different fields in applied science. To solve such problems, evolutionary algorithms have been utilized including genetic algorithms that can achieve decent search space exploration. Things became even harder for multi-objective optimization problems when the algorithm attempts to optimize more than one objective function. In this paper, we propose a hybrid genetic algorithm (HGA) that utilizes a genetic algorithm (GA) to perform a global search supported by the particle swarm optimization algorithm (PSO) to perform a local search. The proposed HGA achieved the concept of rehabilitation of rejected individuals. The proposed HGA was supported by a modified selection mechanism based on the K-means clustering algorithm that succeeded to restrict the selection process to promising solutions only and assured a balanced distribution of both the selected to survive and selected for rehabilitation individuals. The proposed algorithm was tested against 4 benchmark multi-objective optimization functions where it succeeded to achieve maximum balance between search space exploration and search space exploitation. The algorithm also succeeded in improving the HGA’s overall performance by limiting the average number of iterations until convergence.


Evolutionary-based algorithms
In computer science, an evolutionary-based algorithm (EA) is an artificial intelligence technique that targets global optimization by mimicking the biological process of evolution. EAs operate using operators derived from biological evolution such as breeding, crossover, mutation, and selection (Li et al. 2020; Nopiah et al. 2010). EAs are population based: each individual in an EA's population represents a possible solution to the optimization problem. The quality of a possible solution is determined by a fitness function that measures how good a candidate is as a solution to the optimization problem. The evolution process in an EA proceeds by repeatedly applying the evolution operators mentioned above (Luo et al. 2020).

Single- versus multi-objective optimization problems
In computer science, an optimization problem is a problem where the target is to find the best possible solution among all available solutions. In these problems, an algorithm traverses a search space to find the best possible solution. A single-objective optimization problem contains one and only one optimization function. In such problems, an algorithm needs to focus only on this function and attempts to find the global minimum/maximum according to the target of optimization and the nature of the problem. On the other hand, a multi-objective optimization problem contains more than one optimization function. In such problems, the algorithm needs to focus on more than one optimization function and traverses the search space to find a solution or a set of solutions that achieve the optimization goal considering all the given optimization functions (Li et al. 2019; Luo et al. 2020).
There exist various single- and multi-objective mathematics-based test functions; however, a real-life example of a single-objective optimization problem would be an attempt to find the best car design that achieves a very high speed: the algorithm focuses only on finding the design that, when manufactured, produces a fast car regardless of any other feature. On the other hand, a real-life example of a multi-objective optimization problem would be an attempt to find the best car design that, when manufactured, produces a car that is fast, cheap, robust, lightweight, and made of high-quality materials. Obviously, it is notoriously hard to design a car that is fast and robust with high-quality materials and at the same time cheap, which illustrates the challenge of multi-objective optimization, especially when the objectives conflict.
The rest of the paper is organized as follows: a background section, a section discussing the challenges facing this research, a section describing the proposed model architecture, the results, a discussion of the results, and finally the conclusion.

Genetic algorithms
A genetic algorithm (GA) is a search heuristic that mimics the process of natural evolution. In GAs, the fittest individuals are selected to produce the offspring of the new generation (Durairaj and Dhanavel 2018).
A genetic algorithm tackles optimization problems by imitating natural evolution. A possible solution is referred to as a chromosome (individual) that consists of genes (features). Each gene describes one feature of the possible solution. The chromosome structure is defined during the problem encoding process, where the algorithm implementer encodes the targeted problem in a way that enables the GA to attempt to solve it. A fitness function measures the fitness of a chromosome, in other words, how good a chromosome is as a possible solution to the optimization problem at hand. Once the problem encoding is done and the chromosome is structured, an initial population of randomly generated individuals is created (Kaur and Aggarwal 2013). An iterative process then evaluates the fitness of each chromosome and searches for the best solution (chromosome/individual) that achieves the fitness function's goal, usually a global minimum or a global maximum. Based on the fitness results, one or more individuals are selected to survive and move to the next generation, while the unselected individuals are dismissed. The surviving group of individuals then starts the mating process via crossover and mutation. Crossover takes place between two individuals, which share genes to form new individuals according to a previously defined probability (Pc). Mutation then follows for one or more genes with a predefined probability of mutation (Pm). Once the fitness evaluation, crossover and mutation steps are completed, a new generation (offspring) replaces the old population and becomes the main population, and the iterative process starts all over again to create a new generation, and so on until a termination criterion is reached (Chen et al. 2018). The basic flow of a genetic algorithm is shown in Fig. 1.
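The iterative flow described above can be sketched as a minimal GA. This is an illustrative sketch on a toy minimization problem; the population size, Pc, Pm, and the halving-based survival rule are assumptions for the example, not the settings used in this paper.

```python
import random

random.seed(0)  # seeded so the sketch is reproducible

def genetic_algorithm(fitness, bounds, pop_size=20, pc=0.9, pm=0.1, generations=100):
    """Minimal GA loop: evaluate, select survivors, crossover, mutate, repeat."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # evaluation: lower fitness is better
        survivors = pop[:pop_size // 2]          # selection: fittest half survives
        offspring = []
        while len(offspring) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            child = a[:]
            if random.random() < pc:             # crossover with probability Pc
                child = [a[0], b[1]]
            if random.random() < pm:             # mutation with probability Pm
                i = random.randrange(len(child))
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.5)))
            offspring.append(child)
        pop = survivors + offspring              # the new generation replaces the old
    return min(pop, key=fitness)

# Toy single-objective run: minimize the sphere function x^2 + y^2
best = genetic_algorithm(lambda g: g[0] ** 2 + g[1] ** 2, bounds=(-5, 5))
```

Because the fittest half always survives, the best fitness in the population never worsens between generations.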

Particle swarm optimization
In nature, members of bird flocks synchronously and precisely perform intelligent behavior without colliding with each other. Such interesting behavior has been studied in several works (Heppener and Grenander 1990; Reynolds 1987). In computer science, the particle swarm optimization algorithm (PSO) was developed from the general belief that information sharing among members of a bird flock creates intelligent behavior. PSO belongs to the wide category of swarm intelligence techniques (Prado et al. 2010). It was proposed in 1995 (Rakitianskaia and Engelbrecht 2014) as an optimization method to simulate the social behavior of swarms, and has since been successfully applied to a variety of optimization problems such as function optimization and training of neural networks (Rakitianskaia and Engelbrecht 2014). One of PSO's greatest advantages is being computationally inexpensive, as its system requirements are low (Prado et al. 2010). PSO utilizes a population-based search technique to optimize a targeted objective function. The main component of PSO is the population in which the algorithm searches for the optimal solution. The population consists of particles, each of which is considered a possible solution. Particles in the population are a metaphor for birds in bird flocks or fish in fish pools. In PSO, particles are initialized with random values and can traverse the search space. During operation, each member of the swarm updates its own velocity and position depending on the best result reached so far by this member in addition to the best result reached by the entire swarm. This continuous updating drives all particles in the swarm toward the area of the search space that holds the optimal result, that is, the global maximum/minimum according to the objective function. Initially, a population of swarm members is generated and randomly initialized from a permissible range of values. Secondly,
the velocity updating process takes place, where the velocities of all swarm members are updated according to Eq. 1:

$$\vec{v}_i \leftarrow w\,\vec{v}_i + c_1 R_1\,(\vec{p}_{i,best} - \vec{p}_i) + c_2 R_2\,(\vec{g}_{best} - \vec{p}_i) \qquad (1)$$

where $\vec{p}_i$ and $\vec{v}_i$ represent the position and velocity of particle i; $\vec{p}_{i,best}$ represents the personal best of particle i and $\vec{g}_{best}$ represents the position with the best objective function value found so far by the entire population; w is a parameter that dominates the movement dynamics of a particle; $R_1$ and $R_2$ are random variables with the permissible domain [0, 1]; and $c_1$ and $c_2$ are factors that dominate the weighting of the corresponding terms. The random variables grant PSO the ability to perform random searching, while $c_1$ and $c_2$ act as weighting factors that set the trade-off between search space exploration and search space exploitation. During the updating process, $\vec{v}_i$ is checked and kept within a predefined domain to prevent stray random walking.
Then PSO updates the position of its member particles according to Eq. 2:

$$\vec{p}_i \leftarrow \vec{p}_i + \vec{v}_i \qquad (2)$$

Once a particle's position is updated, $\vec{p}_i$ should be checked and constrained to the permissible domain of values. The algorithm then updates the saved personal best $\vec{p}_{i,best}$ and global best $\vec{g}_{best}$ according to Eqs. 3 and 4 (assuming minimization):

$$\vec{p}_{i,best} \leftarrow \vec{p}_i \quad \text{if } f(\vec{p}_i) < f(\vec{p}_{i,best}) \qquad (3)$$

$$\vec{g}_{best} \leftarrow \vec{p}_{i,best} \quad \text{if } f(\vec{p}_{i,best}) < f(\vec{g}_{best}) \qquad (4)$$

where f(x) represents the objective function targeted for optimization. Finally, the algorithm loops from the second to the fourth step until a predefined termination condition is reached, for example a predefined iteration limit, or no new results reached by the algorithm for a predefined number of generations. If a termination condition is met, the algorithm presents the values of $\vec{g}_{best}$ and $f(\vec{g}_{best})$ as its final solution. Figure 2 presents the basic flow of the particle swarm optimization algorithm.
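The velocity and position update rules can be sketched in a few lines for a one-dimensional problem. The parameter values (w, c1, c2, the velocity clamp, and the swarm size) are illustrative assumptions, not values taken from this paper.

```python
import random

random.seed(0)  # seeded so the sketch is reproducible

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal 1-D PSO sketch following the velocity/position/best update rules."""
    lo, hi = bounds
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                               # personal best positions
    gbest = min(pos, key=f)                      # global best position
    vmax = (hi - lo) * 0.2                       # velocity clamp against stray walking
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + cognitive term + social term
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
            vel[i] = max(-vmax, min(vmax, vel[i]))
            # position update, constrained to the permissible domain
            pos[i] = max(lo, min(hi, pos[i] + vel[i]))
            # refresh personal and global bests (minimization)
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
            if f(pbest[i]) < f(gbest):
                gbest = pbest[i]
    return gbest, f(gbest)

# Minimize a quadratic with its minimum at x = 3
x, fx = pso(lambda x: (x - 3) ** 2, bounds=(-10, 10))
```

Since gbest only ever changes to a strictly better position, the returned objective value is non-increasing over iterations.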

Genetic algorithms versus hybrid genetic algorithms
A genetic algorithm (GA) is a population-based metaheuristic search and optimization algorithm. It mimics the process of natural evolution in that it utilizes the concepts of natural selection and genetic dynamics to solve search and optimization problems. The concept of genetic algorithms was first laid down by Holland (1975) and is discussed further with examples in De Jong (1975) and Goldberg (1989). In theory, a GA's performance depends on its ability to optimally balance search space exploration and search space exploitation (Li et al. 2019).
Realistically, problems arise because Holland assumed that the population size is infinite, that the fitness function accurately reflects the suitability of a solution, and that the interactions between genes are very small (Beasley et al. 1993). In practice, the population size is finite, which affects the sampling ability of the GA and its performance. Utilizing a local search method with a GA (hybridization) can help neutralize most of the obstacles that arise from the finite population size; it also counteracts the genetic drift problem (Asoh and Mühlenbein 1994) by introducing new genes, and it can accelerate the search toward the global optimum (Hart 1994). The approaches in Goldberg (1999) have shown that hybridization is one effective way to build competent genetic algorithms.

K-means clustering
The K-means clustering algorithm belongs to the partitioning-based and non-hierarchical clustering techniques (Abhishekkumar and Sadhana 2017), and it is one of the most used clustering techniques, having been applied in many scientific and technological fields (Xu and Wunschii 2005; Everitt et al. 2011). It is commonly used because of its applicability to different data types. The algorithm starts with a set of targeted numeric objects X and an integer number k. It then attempts to partition all members of X into k clusters while minimizing the sum of squared errors (Hamerly and Drake 2014). Initially, the algorithm randomly initializes the k cluster centers; then it assigns each member of X to its closest center according to the square of the Euclidean distance from the cluster center (Shrivastava et al. 2016). Consequently, the value of each center is updated by computing the mean value of each cluster; this updating process is a result of the change of membership of the cluster members (Lei 2008). The algorithm then iterates between updating cluster centers and reassigning membership until no more changes in cluster membership occur. To calculate how near a data vector $\vec{x}$ is to a cluster's center $\vec{c}_k$, the Euclidean distance is used:

$$d(\vec{x}, \vec{c}_k) = \sqrt{\sum_{i}\,(x_i - c_{k,i})^2}$$

Multi-objective optimization test functions

Benchmark problems are usually utilized to evaluate the performance of optimization algorithms (Beasley et al. 1993). Using benchmark functions for this purpose facilitates performance comparison between different multi-objective optimization algorithms. In this research, several multi-objective optimization benchmark functions are used to evaluate the proposed algorithm.
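Returning to the K-means procedure described above, a minimal sketch: random initial centers, assignment by squared Euclidean distance, center updates as cluster means, and a stop when membership stabilizes. The data and k are a made-up example.

```python
import random

random.seed(0)  # seeded so the sketch is reproducible

def kmeans(points, k, iters=50):
    """Plain K-means sketch operating on tuples of coordinates."""
    centers = random.sample(points, k)           # random initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                         # assignment step: nearest center
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        new_centers = [                          # update step: mean of each cluster
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centers == centers:               # stop when membership is stable
            break
        centers = new_centers
    return centers, clusters

# Two well-separated groups of three points each
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, k=2)
```

On this toy data the algorithm separates the two groups regardless of which two points seed the centers.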

Genetic algorithms challenges
In theory, a genetic algorithm is supposed to achieve the perfect balance between search space exploration and search space exploitation. Search space exploration means the GA traverses the search space looking for new solutions, while search space exploitation means the GA exploits promising opportunities to get the most out of the searching process. A GA can achieve this perfect balance under the assumptions stated in Beasley et al. (1993), namely that "the population size is infinite, the fitness function accurately reflects the suitability of a solution, and the gene interactions are minimal." In practice, the population size is finite, which affects both the performance and the sampling ability of the genetic algorithm. Moreover, the GA's behavior is highly influenced by the fitness function, which selects fit chromosomes to survive to the next generation while rejecting chromosomes that do not pass the fitness function even if they have good genes. This is simply because the GA searches for good chromosomes, not good genes. This behavior may punish individuals that do not pass the fitness function yet possess good genes that could take the search cursor to promising places in the search space.

The challenge of multi-objective optimization problems
Optimization algorithms have been used in a variety of fields including image processing (Chen et al. 2019; Zitzler and Kunzli 2004), industry (Li and Mcmahon 2007; Lin et al. 2016; Zhu and Zhou 2006), and manufacturing (Gui and Zhang 2016; Zhang et al. 2016). Optimization also poses a significant challenge in applied science (Kim et al. 2017; Ni et al. 2016; Tao and Zhang 2013), especially when the algorithm is dealing with a multi-objective optimization problem (MOP) (Bandaru et al. 2014; Coello 2006) that contains two or more optimization objectives. The significant challenge in such a case is that the algorithm has to consider all the objectives synchronously in the optimization process. A MOP can formally be described as:

$$\min_{x \in X} F(x) = (f_1(x), f_2(x), \ldots, f_m(x)) \in R^m$$

where x represents the decision variable vector, X represents the search space, and $R^m$ represents the objective vector space; F(x) is the objective vector with m real-valued objective functions. In a multi-objective optimization problem, the objective functions are related in a way that makes it difficult for a single point in the search space to minimize/maximize all objective functions at the same time. The approach to solving multi-objective optimization problems is therefore to search for several promising points in the search space whose objective function evaluations achieve a balanced minimum/maximum optimization value. Multi-objective evolutionary algorithms (MOEAs) utilize the evolution process to search for solutions to a multi-objective optimization problem. During this process, the algorithm performs many calculations, as all individuals are evaluated in all generations. In some fields, the computational cost of an algorithm is critical. Therefore, it would be optimal to reduce the number of fitness evaluations while maximizing the quality of solutions.
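The idea of "several promising points" rather than one optimum is captured by Pareto dominance. The following sketch (with a made-up conflicting-objectives example on the interval [0, 2]) shows how a set of non-dominated points can be filtered from candidate solutions:

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def pareto_front(points, objectives):
    """Keep the points whose objective vectors no other point dominates."""
    evaluated = [(p, tuple(f(p) for f in objectives)) for p in points]
    return [p for p, fp in evaluated
            if not any(dominates(fq, fp) for _, fq in evaluated)]

# Two conflicting objectives: f1 = x^2 favors x near 0, f2 = (x - 2)^2 favors x near 2
front = pareto_front([0.0, 0.5, 1.0, 1.5, 2.0, 3.0],
                     [lambda x: x ** 2, lambda x: (x - 2) ** 2])
```

Every candidate in [0, 2] survives (improving one objective worsens the other), while x = 3 is dominated by x = 2 and is dropped.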
Proposed model architecture

Proposed algorithm
The proposed hybrid GA utilizes a GA to search for a set of optimal solutions that minimizes the objective functions of a given benchmark multi-objective optimization problem. The algorithm also utilizes the K-means clustering algorithm to support the selection process by ensuring a fair feature distribution among both the selected-to-survive (fit) and selected-for-rehabilitation (non-fit) chromosomes. We assume that the non-fit chromosomes may contain good genes that can take the cursor of the searching process to places in the search space where promising results could be found. Accordingly, the non-fit chromosomes are passed to the particle swarm optimization algorithm for rehabilitation, where all selected-for-rehabilitation individuals (non-fit chromosomes) form the population of the PSO algorithm and communicate with each other, updating their velocity and position to reach the best possible outcome from these non-fit individuals. The proposed model is shown in Fig. 3. In Fig. 3, condition one evaluates an individual and checks the predefined stopping criterion, that is, the maximum number of generations; if the maximum number of generations is reached, the algorithm stops (F); otherwise, it continues (T). Condition two can be considered a dual selection mechanism: on the one hand, it selects the fittest individuals to survive to the next generation (A); on the other hand, with the support of the K-means algorithm, it splits the rejected individuals into k clusters, where k is the number of optimization functions in a multi-objective optimization problem, and fairly transfers a group of rejected individuals to the PSO for rehabilitation (R).
Condition 3 checks the PSO's stopping criterion, a maximum number of iterations: the flow either continues looping through the PSO (F) or returns the rehabilitated individuals to the GA's population (T). Condition 4 either injects the incoming individuals into the GA's new population (T) or, if the GA has already stopped, forces the GA to resume for up to an additional 5000 iterations (F). This is the first and main component of the proposed hybrid algorithm. GetRandomPopulation() generates initial chromosomes with random values of X and Y. BenchMarkFunction() is a delegate consuming the targeted benchmark function. GetRandom() returns a random value to be compared against Pm and Pc. CrossOver(Individual) applies a fixed-point crossover on a targeted individual. Mutate(Gene) applies mutation on a targeted gene. Separate(GAPopulation, out Selected, out Rejected) scans the current population and outputs the selected and the rejected individuals. KMeansSelection(Rejected) applies clustering with K = the chromosome length to assure fair distribution of the rejected individuals that are selected for rehabilitation. PSORehabilitation(Rehabilitate) applies the rehabilitation process, according to the next algorithm, either online or offline. GetParetoSet(OffSpring) gets the solution Pareto set from the evaluated offspring. The algorithm above describes the rehabilitation process using the PSO, whose output is the global best achieved by all particles.
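The overall flow can be sketched as follows. The helper names mirror the paper's (GetRandomPopulation, Separate, PSORehabilitation), but the bodies here are simplified stand-ins, not the authors' implementation: selection ranks by the sum of the two objectives rather than a Pareto-based rule, and the rehabilitation step merely nudges rejected individuals toward their own best member.

```python
import random

random.seed(0)  # seeded so the sketch is reproducible

def f1(x, y): return 4 * x ** 2 + 4 * y ** 2       # Binh and Korn objective 1
def f2(x, y): return (x - 5) ** 2 + (y - 5) ** 2   # Binh and Korn objective 2

def get_random_population(n):
    return [(random.uniform(0, 5), random.uniform(0, 3)) for _ in range(n)]

def separate(pop):
    """Stand-in for Separate(): fitter half (by summed objectives) vs rejected half."""
    ranked = sorted(pop, key=lambda g: f1(*g) + f2(*g))
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

def pso_rehabilitation(rejected, iters=30):
    """Stand-in rehabilitation: drift each rejected individual toward the group's
    best member with small random jitter, then return the best rehabilitated one."""
    best = min(rejected, key=lambda g: f1(*g) + f2(*g))
    rehab = []
    for x, y in rejected:
        for _ in range(iters):
            x += 0.1 * (best[0] - x) + random.gauss(0, 0.05)
            y += 0.1 * (best[1] - y) + random.gauss(0, 0.05)
        rehab.append((min(5, max(0, x)), min(3, max(0, y))))
    return min(rehab, key=lambda g: f1(*g) + f2(*g))

def hga(pop_size=20, generations=50):
    pop = get_random_population(pop_size)
    for _ in range(generations):
        selected, rejected = separate(pop)
        rehabilitated = pso_rehabilitation(rejected)  # rejected individuals get a second chance
        offspring = selected + [rehabilitated]
        while len(offspring) < pop_size:              # refill via crossover of survivors
            a, b = random.sample(selected, 2)
            offspring.append((a[0], b[1]))            # fixed-point crossover on two genes
        pop = offspring
    return min(pop, key=lambda g: f1(*g) + f2(*g))

best = hga()
```

Because the fitter half always survives, the best summed objective value never worsens across generations in this sketch.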

Problem encoding and solution decoding
The proposed algorithm operates on a set of benchmark multi-objective optimization problems with objective functions that require two inputs. Thus, the problem is encoded in a chromosome structure consisting of two genes (one gene for each input), as shown in Fig. 4.
Encoding

All four benchmark functions targeted in this research share the same characteristic of having two inputs, X and Y, and two objective functions, F1 and F2. X and Y represent the coordinates of a point in the constrained search space, where the proposed algorithm is supposed to find the point with the minimum value of F1 and F2 in the search space of each targeted benchmark function. The proposed algorithm searches for X and Y, and for each proposed X and Y we calculate the value of both objective functions F1 and F2. As a result, we encode a possible optimal solution of each targeted benchmark function in the form of a chromosome of X and Y, as shown in Fig. 4.
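As a concrete illustration of this encoding, the Binh and Korn benchmark (one of the four functions used here) takes a two-gene chromosome (X, Y) and yields the objective pair (F1, F2); the standard constraint functions of the benchmark are omitted from this sketch.

```python
# Binh and Korn: F1 = 4x^2 + 4y^2, F2 = (x-5)^2 + (y-5)^2, with 0<=x<=5, 0<=y<=3
def evaluate(chromosome):
    x, y = chromosome                      # decoding: the genes are the coordinates
    f1 = 4 * x ** 2 + 4 * y ** 2
    f2 = (x - 5) ** 2 + (y - 5) ** 2
    return f1, f2

print(evaluate((1.0, 2.0)))  # → (20.0, 25.0)
```

Decoding is trivial by design: the chromosome's genes are already the X and Y coordinates the algorithm searches for.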
Decoding

As we target multi-objective optimization benchmark functions, there may be no obvious business value in translating the output of these benchmark functions when decoding the X and Y values. In the end, the encoded input of our targeted benchmark functions represents X and Y coordinates, where the proposed algorithm is supposed to find the X and Y that lead to the minimum values of F1 and F2, considering that these multi-objective benchmark functions were designed so that F1 and F2 conflict: generally, minimizing F1 will maximize F2 and vice versa. This way, the targeted benchmark functions can test the multi-objective optimization ability of a proposed optimization algorithm. Decoding the chromosome of X and Y yields nothing but the X value and Y value that the proposed algorithm is searching for in each search space of the targeted benchmark functions.

Population specifications
A population of individuals is randomly generated to initialize the algorithm with a pre-determined population size n.In the execution phase, the algorithm is tested on different values of n to examine the effect of the population size on the algorithm's performance.

Genetic operators
The genetic operators are an essential part of the proposed algorithm as the algorithm utilizes them to mimic the process of natural evolution.Genetic operators include crossover, mutation, evaluation and selection.As soon as the problem is encoded properly, the algorithm can apply these operators on the individuals in the search for the best possible solution.

Crossover
The chromosome structure is common across all test functions used in this research; hence, a single-point crossover is applied in all test cases so that chromosomes may share genes to facilitate the search for the optimal result.
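With a two-gene chromosome, the single crossover point sits between the genes, so crossover amounts to exchanging the second gene of the two parents. A small sketch (the Pc value is illustrative):

```python
import random

def crossover(parent_a, parent_b, pc=0.9):
    """Single-point crossover on a two-gene chromosome: with probability pc,
    cut between gene 1 and gene 2 and exchange the tails."""
    if random.random() < pc:
        return (parent_a[0], parent_b[1]), (parent_b[0], parent_a[1])
    return parent_a, parent_b           # no crossover: parents pass through unchanged

# pc=1.0 forces the exchange, making the outcome deterministic
c1, c2 = crossover((1.0, 2.0), (3.0, 4.0), pc=1.0)
```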

Mutation
The proposed algorithm can mutate the value of a gene in a chromosome according to the probability of mutation Pm. However, some of the benchmark functions used in this research have a constrained search domain, so in a test case subject to such a function, the algorithm is only permitted to mutate genes within the range of permissible values.
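A bounded mutation of this kind can be sketched as below; the Gaussian step and its width are assumptions for the example, and the bounds shown are those of the Binh and Korn domain.

```python
import random

def mutate(chromosome, bounds, pm=0.1, sigma=0.3):
    """Mutate each gene with probability pm by a Gaussian step, clamped to the
    benchmark's permissible domain."""
    genes = []
    for g, (lo, hi) in zip(chromosome, bounds):
        if random.random() < pm:
            g = min(hi, max(lo, g + random.gauss(0, sigma)))  # stay within the domain
        genes.append(g)
    return tuple(genes)

# pm=1.0 mutates both genes; the clamp keeps them inside 0<=x<=5, 0<=y<=3
child = mutate((4.9, 0.1), bounds=[(0, 5), (0, 3)], pm=1.0)
```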

Evaluation
In each generation, the algorithm evaluates all individuals to assign a fitness score to each one. In this research, the focus is on multi-objective problems. Each of the used benchmark functions has two objective functions, so the fitness of each chromosome consists of two values, each representing one of the fitness functions.

Selection
The algorithm performs elitism selection, where a group of best-performing chromosomes is selected to survive to the next generation. Only fitter chromosomes can replace these elite individuals in order; otherwise, this group continues through all generations unchanged. In addition, rejected individuals are clustered by the K-means algorithm according to their fitness values. This step assures a balanced distribution of the individuals passed to the PSO for rehabilitation.

K-means clustering
The K-means clustering algorithm is utilized to support the individual filtering process in the selection phase of the proposed algorithm. It can be viewed as a secondary selection technique that gathers all individuals that did not pass the fitness function, clusters them according to the values of their fitness functions (leading to K = 2 clusters), and finally passes a balanced group of rejected individuals to the PSO, assuring the presence of all unique individuals without losing any individual that had no like in its offspring, as shown in Fig. 5.
Figure 5 represents hypothetical individuals with hypothetical values to show the filtration mechanism of the K-means-based selection method. After individuals in a generation are rejected, their evaluation values are clustered by the K-means algorithm to produce K clusters, where K = the chromosome length. Then, based on the clustering result, the algorithm selects a group of individuals for rehabilitation while making sure to select at least one individual from each cluster, hence asserting the presence of all unique chromosomes. The PSO then operates on them and returns a better individual.
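The balanced-pick step described above can be sketched as follows. The clusters here are assumed to come from a prior K-means pass over (F1, F2) fitness values, and "cluster-best by summed objectives" is a simplifying assumption for choosing each representative.

```python
def balanced_selection(clusters, per_cluster=1):
    """Given fitness-value clusters from K-means, pick at least one
    representative per non-empty cluster for rehabilitation."""
    picked = []
    for cl in clusters:
        if cl:                                   # skip empty clusters
            picked.extend(sorted(cl, key=sum)[:per_cluster])
    return picked

# Hypothetical rejected individuals' (F1, F2) values, already clustered (K = 2)
picked = balanced_selection([[(1.0, 9.0), (1.2, 8.5)], [(9.0, 1.0)]])
```

Even though the second cluster holds a single individual, it still contributes a representative, which is the point of the balanced distribution.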

Stopping criterion
The proposed algorithm's stopping criterion is when the algorithm reaches the maximum number of generations (in this research this number is fixed to 10,000).

PSO integration
Individuals that did not pass the fitness function are clustered and passed to the PSO algorithm for rehabilitation. These individuals act as particles in a swarm, all communicating with each other and continuously updating their velocities and positions. When all particles agree on the best solution, this individual is passed back to the offspring, replacing the least-fit offspring if the algorithm has not yet terminated. If the algorithm has already terminated, the PSO forces it to continue; if no better result is reached for a predefined number of iterations (5000), the algorithm finally terminates, presenting a group of solutions called a Pareto set.
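The replace-the-least-fit step (condition 4 in the flow) can be sketched as follows; the fitness here is a made-up summed-objectives score for illustration, with lower meaning better.

```python
def inject(offspring, rehabilitated, fitness):
    """Replace the least-fit member of the offspring with the individual
    returned by the PSO rehabilitation step (lower fitness is better)."""
    worst = max(range(len(offspring)), key=lambda i: fitness(offspring[i]))
    offspring = offspring[:]                     # copy so the caller's list is untouched
    offspring[worst] = rehabilitated
    return offspring

# (4, 4) has the worst summed fitness, so the rehabilitated (0, 0) takes its slot
new_pop = inject([(1, 1), (4, 4), (2, 2)], (0, 0), fitness=lambda g: g[0] + g[1])
```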
Results

The algorithm was executed under two configurable parameters (Algorithm and PS) with two options each, which, combined with the four benchmark functions, produced 16 different test cases. The 16 test cases were executed on a 2.6 GHz Intel Core i7 vPro machine with 16 GB of RAM and a magnetic HDD on a 64-bit OS.
The test cases conducted in this research are described in Table 1, where the used algorithm is either the genetic algorithm (GA) or the proposed hybrid genetic algorithm (HGA), and PS specifies the used population size.
Figure 6 shows the average value of function 1 for Binh and Korn for the different cases.
Figure 7 shows the average value of function 2 for Binh and Korn for the different cases. Figure 8 shows the average value of function 1 for Chakong and Haimes for the different cases. Figure 9 shows the average value of function 2 for Chakong and Haimes for the different cases.
Figure 10 shows the average value of function 1 for Constr-Ex Problem for the different cases.
Figure 11 shows the average value of function 2 for Constr-Ex Problem for the different cases.
Figure 12 shows the average value of function 1 for Poloni's Two Objective Function for the different cases.
Figure 13 shows the average value of function 2 for Poloni's Two Objective Function for the different cases.
Figures 14, 15, 16 and 17 show the average iterations until convergence for each benchmark function on each test case.

Discussion of results
The proposed HGA was tested against four benchmark functions and compared with a standard genetic algorithm (GA), with both algorithms run at two population sizes (10/100), producing a total of 16 test cases. All four multi-objective optimization functions used in this research have two objective functions, F1 and F2, both of which were targeted with a genetic algorithm at population size 10, a hybrid genetic algorithm at population size 10, a genetic algorithm at population size 100, and finally a hybrid genetic algorithm at population size 100.

Results analysis
Results of test cases 1, 2, 3 and 4, which targeted the Binh and Korn optimization function F1, showed that the GA with population size 10 achieved a good minimization value of F1; however, when switched to the HGA with population size 10, a better minimization value was achieved. As the population size increased to 100, the GA seemed not to take advantage of the larger population; on the other hand, the HGA with population size 100 succeeded in achieving the lowest minimization value.
The same test cases also targeted Binh and Korn's F2, where the GA with population size 10 achieved a good minimization value that decreased when switched to the HGA. For F2, the GA did benefit from the increase in population size, which decreased the minimization value, and the best result was finally achieved using the HGA with a population size of 100. Results of test cases 5, 6, 7 and 8, which targeted the Chakong and Haimes optimization function F1, showed that the GA with population size 10 achieved a fair minimization value of F1, while switching to the HGA with a population of 10 individuals achieved a much better minimization value. Increasing the population size to 100 seemed to disrupt the GA; however, switching to the HGA with population size 100 succeeded in achieving the lowest minimization value. The same test cases also targeted Chakong and Haimes' F2, where the GA with population size 10 achieved a good minimization value that decreased when switched to the HGA. For F2, the GA also benefited from the increase in population size, which decreased the minimization value, and the best minimization result was finally achieved using the HGA with a population size of 100.
Results of test cases 9, 10, 11 and 12 targeted the Constr-Ex Problem optimization function F1, where a steady decrease in the minimization value was witnessed when switching from GA to HGA and from population size 10 to population size 100. The same phenomenon was also witnessed in F2, with a steady decrease in the minimization value when switching from GA to HGA and from population size 10 to 100.
Results of test cases 13, 14, 15 and 16 targeted Poloni's Two Objective optimization function F1 and showed a smooth decrease in the minimization value when switching from GA to HGA and from population size 10 to population size 100. The same test cases also targeted Poloni's F2, where almost the same phenomenon was witnessed, except for a slight increase in the minimization value when using a GA with population size 100.
All test cases on all multi-objective optimization functions shared the same phenomenon in terms of average iterations until convergence: the GA with population size 10 consumed the most average iterations until convergence, and the average slightly decreased on switching to population size 100. On the other hand, a great decrease in the average iterations until convergence was witnessed when the HGA was used, with a further decrease on switching the population size from 10 to 100.

Complexity
The proposed hybrid algorithm utilizes two evolutionary algorithms as well as the K-means clustering algorithm, with five loop structures overall, where n denotes the GA's maximum number of generations. The combined complexity can be written as O(nx) + O(ny²) + O(zm), where O(nx) arises from the genetic operators process applied over the GA's generations, O(ny²) from the K-means process applied over the GA's generations, and O(zm) from the PSO's velocity and position updates applied over its population. Hence, we can conclude that better optimization was achieved with a trade-off in performance and resource consumption.

Contributions
The proposed technique utilizes a genetic algorithm that targets search space exploration, supported by the K-means algorithm to enhance the selection mechanism as described in Sect. 4.1.4. The particle swarm optimization algorithm targets the rejected individuals of each generation, fulfilling the concept of rehabilitation of rejected individuals and maximizing the utilization of every individual in each generation. To test the effect of each component of the proposed hybrid algorithm, it was tested against four benchmark functions under several configurations, yielding 16 different test cases whose results are analyzed in detail in Sect. 6.1.
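The pipeline described above can be sketched as a minimal single-objective toy: a GA population explores, a tiny K-means pass restricts selection to the most promising cluster, and a PSO-style update pulls rejected individuals toward the global best instead of discarding them. The objective f(v) = v², the operators, and all parameters are assumptions for demonstration only, not the authors' implementation.

```python
import random

def f(v):
    return v * v  # toy objective to minimize (assumed for illustration)

def kmeans_1d(values, k=2, iters=10):
    """Tiny 1-D k-means; returns (labels, centers)."""
    centers = random.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c, v=v: abs(v - centers[c]))
                  for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels, centers

def hga(pop_size=10, generations=30, seed=1):
    random.seed(seed)
    pop = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        labels, centers = kmeans_1d(pop)
        # Selection restricted to the cluster whose center scores best.
        best = min(range(len(centers)), key=lambda c: f(centers[c]))
        survivors = [v for v, l in zip(pop, labels) if l == best]
        rejected = [v for v, l in zip(pop, labels) if l != best]
        gbest = min(pop, key=f)
        # PSO-style rehabilitation: pull rejected individuals toward
        # the current global best rather than discarding them.
        rehabilitated = [v + random.uniform(0, 1) * (gbest - v)
                         for v in rejected]
        # Simple blend crossover plus Gaussian mutation on survivors.
        pool = survivors + [gbest]
        children = []
        while len(children) < pop_size - len(rehabilitated):
            a, b = random.sample(pool, 2) if len(pool) > 1 else (gbest, gbest)
            children.append((a + b) / 2 + random.gauss(0, 0.1))
        pop = children + rehabilitated
    return min(pop, key=f)
```

Run as `hga()`; the rehabilitation step keeps the full population productive every generation, which is the effect the contribution above attributes to the rejected-individuals concept.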
From this analysis, it was observed that the proposed K-means-based selection mechanism enhanced the optimization ability of the genetic algorithm, although it added computational overhead. From Sect. 6.2 it was also observed that the overall optimization process was enhanced significantly; however, as a trade-off to this enhancement, more computational cost was added compared to a basic GA.

Conclusion
In this research, a hybrid genetic algorithm (HGA) was proposed to solve multi-objective optimization problems. The HGA utilized particle swarm optimization (PSO) as well as the K-means algorithm. Four benchmark multi-objective optimization problems were used to test the proposed HGA. The three main components of the hybrid algorithm (GA, PSO and K-means) were combined to achieve better optimization results as well as better performance. In concept, the genetic algorithm was used to achieve search space exploration, supported by the K-means algorithm to enhance the GA's selection operation, while the PSO was used to achieve search space exploitation.
In the experiments phase of this research, these concepts were put to the test on four benchmark multi-objective optimization functions with different settings in terms of population size (10 or 100) and algorithm mode (GA or HGA), producing 16 different test cases (four benchmark functions × four different settings) as shown in Table 1. All 16 test cases were executed, and the results were recorded in Tables 2 through 28. The results were discussed in detail in the previous section, from which we can conclude that the proposed HGA achieved better optimization results in terms of minimizing the objective functions in all test cases, as well as better performance in terms of average iterations until convergence. For each benchmark function, better minimization results were achieved when the algorithm was switched from GA to HGA. It was noticed that increasing the population size enhanced the minimization ability of both the GA and the HGA relative to themselves, but gave no superiority to the GA over the HGA even with a small population size. Better performance was also achieved, as the proposed HGA significantly decreased the average iterations needed until convergence when compared to the GA.

Compliance with ethical standards
Conflict of interest Author Ahmed Maghawry declares that he has no conflict of interest. Author Rania Hodhod declares that she has no conflict of interest. Author Yasser Omar declares that he has no conflict of interest. Author Mohamed Kholief declares that he has no conflict of interest.
Human and animal rights This article does not contain any studies with human participants or animals performed by any of the authors.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Table 1
Test cases

Table 4
Pareto set of test case 3

Table 11
Pareto set of test case 7 (input)

Table 22
Pareto set of test case 14 (output)

Table 28
Pareto set of test case 17 (output)

Fig. 6
Binh and Korn F1 average