1 Introduction

In recent years, multiobjective evolutionary algorithms (MOEAs) have gained significant attention for their ability to solve multiobjective optimization problems (MOPs). Their main goal is to find a set of Pareto-optimal solutions, in which no objective can be improved without sacrificing another. Among these algorithms, the Grasshopper Optimization Algorithm (GOA) is a metaheuristic inspired by the swarming behavior of grasshoppers in their search for food, and it has been shown to be an effective and efficient technique for solving optimization problems [1,2,3]. However, GOA has some limitations, including the need for proper parameter tuning, a tendency to get stuck in local optima, and slow convergence. Several hybrid algorithms have been proposed in the literature to overcome these limitations. In this study, the authors propose two efficient hybrid algorithms based on the idea of simulated annealing. The Simulated Annealing Multiobjective Grasshopper Optimization Algorithm (SAMOGOA) is a hybrid algorithm that integrates Simulated Annealing (SA) and chaos theory with MOGOA to improve its local and global search capabilities [4,5,6,7,8,9,10,11,12,13]. SA is a stochastic global optimization metaheuristic based on the concept of annealing in metallurgy; it is able to avoid getting trapped in local optima and to guide the search process toward global optima. Chaos theory describes the complex dynamical behavior of nonlinear systems, which is random, regular, and ergodic in nature, and it has been combined with various metaheuristic algorithms to improve their exploration and exploitation abilities [14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]. The SAMOGOA algorithm consists of multiple steps.
Initially, a random population is generated, followed by the calculation of fitness values for each individual. The selection operator is then employed to choose parents for reproduction, and new offspring are generated through the crossover and mutation operators. The Pareto dominance operator identifies non-dominated solutions, while the archive operator stores these Pareto-optimal solutions. To maintain diversity within the population, the crowding distance operator is utilized, and roulette wheel selection is employed to select a target from the archive. The control parameters, which govern the movement of the grasshoppers toward the global optima during the optimization process, are updated using techniques such as simulated annealing (SA) and chaos theory, and the update formula is applied to generate new solutions. The algorithm continues until the stopping criteria are satisfied, signifying termination [29,30,31,32,33,34,35]. In essence, SAMOGOA is a hybrid approach that leverages the advantages of MOGOA, SA, and chaos theory to enhance both local and global search capabilities. The subsequent sections provide a comprehensive description of the individual steps of SAMOGOA.
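As an illustration of the Pareto dominance and archiving steps described above, a minimal Python sketch (ours, not the authors' MATLAB implementation) of non-dominated filtering for minimization problems is:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]
```

In an archive-based MOEA, only the output of `nondominated` is retained between iterations; the names here are illustrative.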

The remainder of this paper is organized as follows. Section 2 provides a literature review and an overview of the principle behind the grasshopper algorithm, which is the basis for the proposed improvement. Section 3 introduces the new algorithm, the Simulated Annealing Multiobjective Grasshopper Optimization Algorithm, and details its specific steps. Section 4 describes the experimental setup, including the equipment, environment, reference functions, and parameters; the experimental results and a statistical comparison among the algorithms are also presented in this section. Finally, in the last section, the findings are summarized and future research directions are suggested.

1.1 Literature review

Classical methods for multiobjective optimization typically involve transforming the multiobjective problem into a single objective problem, often by aggregating the multiple objectives into a single function. This can be done using various techniques, such as weighted sum, epsilon constraint, and goal programming. However, these methods may not always provide an accurate representation of the true Pareto optimal set. On the other hand, evolutionary methods for multiobjective optimization are inspired by natural selection and survival of the fittest. These methods typically use a population of candidate solutions and iteratively evolve the population towards the Pareto optimal set through selection, reproduction, and mutation operators. Evolutionary methods can effectively handle multiple objectives without requiring the use of any aggregation techniques, and can often provide a more accurate representation of the Pareto optimal set [27].

2 Evolutionary methods

Evolutionary methods, on the other hand, are population-based metaheuristic algorithms that can directly search for multiple solutions in the Pareto front without the need for problem scaling. Some popular evolutionary algorithms for multiobjective optimization include NSGA-II [36], SPEA2 [37], MOEA/D [37], and PAES [37]. These algorithms use a variety of techniques such as elitism, fitness assignment, and diversity preservation to generate and maintain a diverse set of non-dominated solutions. Evolutionary algorithms are generally preferred for solving multiobjective optimization problems because they can handle complex, nonlinear, and high-dimensional problems more efficiently than classical methods.

2.1 GA-based approaches

Genetic Programming (GP) is another type of evolutionary algorithm used for solving multiobjective optimization problems. GP is a heuristic search algorithm that uses the principles of natural selection and genetic recombination to evolve computer programs. In multiobjective GP, each individual is a program with multiple objectives to optimize, and the goal is to evolve a population of programs that represent a set of Pareto-optimal solutions to the problem at hand [38]. Particle Swarm Optimization (PSO) is a population-based optimization algorithm inspired by the collective behavior of swarms of birds or fish. Each particle in the swarm represents a potential solution to the problem and moves through the search space based on its own position and velocity, as well as the best position found by the swarm as a whole. PSO has been adapted for multiobjective optimization in several ways, such as the Multiobjective Particle Swarm Optimization (MOPSO) algorithm [39] and the Multiobjective Particle Swarm Optimization with Crowding (MOPSO-C) algorithm [40]. Other multiobjective optimization algorithms include the Pareto Archived Evolution Strategy (PAES) [41], the Strength Pareto Ant Colony Optimization (SPACO) algorithm [47], and the Multiobjective Differential Evolution (MODE) algorithm [42]. Each of these algorithms has its own strengths and weaknesses, and the choice of algorithm depends on the specific problem being solved and the available computational resources.

2.2 Other evolutionary algorithms

Multiobjective Particle Swarm Optimization (MOPSO): this algorithm is an extension of PSO for multi-objective optimization, incorporating the concept of Pareto dominance and an external archive to store non-dominated solutions. The non-dominated solutions guide the particles by serving as leaders and setting the direction of flight, and the search space is divided into hypercubes that help locate the positions of particles [43]. A particle swarm optimizer for multi-objective optimization (SMOPSO) is a later version of MOPSO that adds elitism and a mutation operator used to alter one dimension of a particle. "Another Multi-Objective Particle Swarm Optimization" (AMOPSO) [44] also extends PSO to several objectives; its main novelty is a clustering technique that divides the population of particles into several swarms in order to achieve a better distribution of solutions in decision variable space. AMOPSO does not use an external population, as elitism is implemented by migration of leader particles. Dynamic population size in PSO-based multiobjective optimization uses a dynamic population and an adaptive local archive to enhance exploration capability compared with older versions of MOPSO, most of which use a fixed-size population [45].

The flower pollination algorithm (FPA) is inspired by the natural phenomenon of flower pollination. Biotic cross-pollination is comparable to global optimization, as it takes place between flowers that are distant from each other, with pollinators obeying Lévy flight movement; local optimization is represented by abiotic and self-pollination. FPA can therefore be employed to solve optimization problems [46]. FPA for multi-objective optimization (MOFPA) is an extension of FPA for solving multi-objective problems [47]. The Non-Dominated Sorting Flower Pollination Algorithm (NSFPA) for Dynamic Economic Emission Dispatch (DEED) [48] combines non-dominated sorting, fuzzy logic, the predator-prey model, and FPA for DEED. Many other variations of FPA exist, such as a discrete flower pollination algorithm for the graph coloring problem [49], an adaptive flower pollination algorithm for software test suite minimization [50], and more listed in the literature.

Multiobjective versions of the Bat algorithm [51], the Cuckoo Search algorithm (CS) [52], the multi-objective ant lion optimizer (MOALO) [53], Ant Colony Optimization (ACO) [54], Artificial Bee Colony Optimization (ABCO) [55], Differential Evolution (DE) [56], the grasshopper optimization algorithm for multi-objective optimization problems (MOGOA) [29], the Firefly Algorithm [57], the learner performance based behavior algorithm (LPB) [58], the Dragonfly Algorithm (DA) [59], cat swarm optimization (CSO) [60], the backtracking search optimization algorithm (BSA) [61], the Donkey and Smuggler Optimization Algorithm (DSO) [62], the Fitness Dependent Optimizer (FDO) [63], and IFDO [64] are among the nature-inspired multi-objective evolutionary algorithms (MOEAs) documented in the literature. Related works include multiple filter-based rankers to guide a hybrid grasshopper optimization algorithm and simulated annealing for feature selection with high-dimensional multi-class imbalanced datasets [65], particle ranking, an efficient method for multi-objective particle swarm optimization feature selection [66], particle distance rank feature selection by particle swarm optimization [67], a comprehensive survey of the Slime Mould Algorithm and its variants and applications, a multi-agent system for solving high-dimensional optimization problems with a case study on email spam detection, an improved particle swarm optimization with the backtracking search optimization algorithm for solving continuous optimization problems, an improved cuckoo search optimization algorithm with a genetic algorithm for community detection in complex networks, a comprehensive survey and classification of quantum-inspired metaheuristic algorithms, and feature selection with the binary symbiotic organisms search algorithm for email spam detection [68,69,70,71,72], along with surveys that offer extensive insights into this field.

2.3 Multiobjective grasshopper optimization algorithms

In this section, GOA, MOGOA, SA, and chaotic maps are first summarized.

a. Grasshopper optimization algorithm (GOA)

GOA is one of the newer algorithms and mimics the swarming behavior of grasshoppers. Each grasshopper in the swarm has its own position, which corresponds to a possible solution of a given optimization problem [35]. \(X_i\) represents the position of the ith grasshopper, as given in Eq. 1.

$$X_{i} = S_{i} + G_{i} + A_{i}$$
(1)

It has three subcomponents: \(S_i\), the social interaction; \(G_i\), the force of gravity on the ith grasshopper; and \(A_i\), the wind advection. Social interaction, which comes entirely from the grasshoppers themselves, is the dominant part and is defined in Eq. 2.

$$S_{i} = \mathop \sum \limits_{{\begin{array}{*{20}c} {j = 1} \\ {j \ne i} \\ \end{array} }}^{N} s\left( {d_{ij} } \right)\widehat{{d_{ij} }}$$
(2)

where \(d_{ij} = \left| {X_j - X_i } \right|\) is the distance between the ith and jth grasshoppers, s is a function defining the strength of the social forces, as shown in Eq. 3, and \(\widehat{{d_{ij} }} = \left( {X_j - X_i } \right)/d_{ij}\) is a unit vector from the ith grasshopper to the jth grasshopper.

$$s\left( r \right) = fe^{ - r/l} - e^{ - r}$$
(3)

where f indicates the intensity of attraction, r is the distance between two grasshoppers, and l is the attractive length scale, whose value lies in the interval [0, 4] and controls the attraction or repulsion between grasshoppers. The distance at which there is neither attraction nor repulsion is called the comfort area, which occurs at a distance of exactly 2.079. Distances need to be normalized to the interval [1, 4], as the s function cannot handle strong forces at large distances.
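For illustration, the s function of Eq. 3 can be sketched as follows, using the parameter values f = 0.5 and l = 1.5 from the original GOA paper (an assumption here); the root of s near r ≈ 2.079 is the comfort distance mentioned above:

```python
import math

def s(r, f=0.5, l=1.5):
    """Strength of social forces (Eq. 3): attraction term f*exp(-r/l)
    minus repulsion term exp(-r). f and l as in the original GOA paper."""
    return f * math.exp(-r / l) - math.exp(-r)
```

With these parameters, s is negative (repulsion) at short distances, positive (attraction) at longer ones, and approximately zero at the comfort distance 2.079.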

The G component has two parts: g, the gravitational constant, and \(\widehat{{e_{g} }}\), a unit vector toward the center of the earth. The mathematical definition is given in Eq. 4.

$$G_{i} = - g\widehat{{e_{g} }}$$
(4)

The wind advection A is calculated as follows:

$$A_{i} = u\widehat{{e_{w} }}$$
(5)

where u is a constant drift and \(\widehat{{e_{w} }}\) is a unit vector in the direction of the wind. Using these components, Eq. 1 can be written as:

$$X_{i} = \mathop \sum \limits_{{\begin{array}{*{20}c} {j = 1} \\ {j \ne i} \\ \end{array} }}^{N} s\left( {\left| {x_{j} - x_{i} } \right|} \right)\frac{{x_{j} - x_{i} }}{{d_{ij} }} - g\widehat{{e_{g} }} + u\widehat{{e_{w} }}$$
(6)

The balance between exploration and exploitation in a stochastic algorithm helps find the global optimum, so special parameters were included to drive exploration and exploitation in different stages of the optimization. The mathematical model in Eq. 6 then becomes:

$$X_{i}^{d} = c\left( {\mathop \sum \limits_{{\begin{array}{*{20}c} {j = 1} \\ {j \ne i} \\ \end{array} }}^{N} c\frac{{ub_{d} - lb_{d} }}{2}s\left( {\left| {x_{j}^{d} - x_{i}^{d} } \right|} \right)\frac{{x_{j} - x_{i} }}{{d_{ij} }}} \right) + T_{d}$$
(7)

where \(ub_{d}\) is the upper bound and \(lb_{d}\) the lower bound in the dth dimension, and \(T_{d}\) is the value of the dth dimension of the target (the best solution found so far). The G component is ignored, assuming no gravitational force, and the wind direction is always toward the target. The decreasing coefficient c appears twice in Eq. 7 to control the forces between grasshoppers and is updated with Eq. 8: the outer c maintains the balance between exploration and exploitation, while the inner c reduces the repulsion/attraction forces between grasshoppers in proportion to the number of iterations.

$$c = cmax - l\frac{cmax - cmin}{L}$$
(8)

where cmax=1 is the maximum value, cmin=0.00001 is the minimum value, l indicates the current iteration, and L is the maximum number of iterations.
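The linearly decreasing coefficient of Eq. 8 translates directly into code; a sketch using the cmax and cmin values stated above:

```python
def c_schedule(l, L, cmax=1.0, cmin=0.00001):
    """Linearly decreasing coefficient c of Eq. 8: starts at cmax at
    iteration 0 and reaches cmin at the final iteration L."""
    return cmax - l * (cmax - cmin) / L
```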

b. Multi-objective grasshopper optimization algorithm (MOGOA)

MOGOA is an extension of GOA for solving multi-objective optimization problems. A multi-objective algorithm should obtain accurate approximations of the true Pareto optimal solutions that are well distributed across all objectives. To achieve these goals, Pareto dominance is used, since two solutions cannot be compared with the regular relational operators, and the best Pareto optimal solutions are stored in an archive.

The main challenge in designing MOGOA was choosing the target. For target selection, an archive of Pareto optimal solutions is maintained, and the target is chosen from it. Target selection is based on crowding distance, similar to that in MOPSO [36], using Eq. 9.

$${\text{P}}_{i} \, = \,\frac{1}{{{\text{N}}_{i} }}$$
(9)

\(P_i\) is the probability of choosing the ith solution of the archive as the target, and \(N_i\) is the number of solutions in the neighborhood of the ith solution. This probability is then used to find the target via roulette wheel selection.

The size of the archive is kept constant to control the computational cost of MOGOA, which may result in a full archive. To address this issue, solutions in the more populated areas of the archive are removed, again using a roulette wheel, with the inverse of \(P_i\). The archive is updated regularly in this way.
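The roulette-wheel mechanics used for both target selection (Eq. 9) and archive pruning can be sketched as follows; function names are illustrative, not from the original implementation:

```python
import random

def roulette(weights, rng=random):
    """Roulette-wheel selection: return an index with probability
    proportional to its weight."""
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if acc >= r:
            return i
    return len(weights) - 1

def select_target_index(neighbour_counts, rng=random):
    """Target selection (Eq. 9): sparse regions favoured, weight 1/N_i."""
    return roulette([1.0 / n for n in neighbour_counts], rng)

def removal_index(neighbour_counts, rng=random):
    """Archive pruning: crowded regions favoured, weight N_i (inverse of P_i)."""
    return roulette([float(n) for n in neighbour_counts], rng)
```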

2.4 Simulated annealing

Simulated Annealing (SA) is a stochastic search method derived from Monte Carlo simulation and was introduced in 1983. It is able to solve difficult combinatorial optimization problems. The annealing process simulates the thermal motion of atoms in a heat bath as the temperature changes from high to low [37]. SA is capable of avoiding local optima by adjusting the temperature and changing the solution based on a probability function, given in Eq. 10.

$$P\left( {\Delta E} \right) = e^{{ - \frac{\Delta E}{{TK_{B} }}}}$$
(10)

where \(K_B\) is Boltzmann's constant, T is the current temperature, and \(\Delta E\) is the change in the energy of the atoms. A new solution is accepted or rejected based on the value of the probability function [38].
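A minimal sketch of this acceptance rule (the Metropolis criterion), assuming an improvement corresponds to a non-positive energy change:

```python
import math, random

def accept(delta_e, T, kB=1.0, rng=random):
    """Metropolis acceptance (Eq. 10): always accept improvements;
    accept a worse solution with probability exp(-dE / (kB * T))."""
    if delta_e <= 0:
        return True
    return rng.random() < math.exp(-delta_e / (kB * T))
```

At high temperatures almost any move is accepted (exploration); as T decreases, worsening moves become increasingly unlikely (exploitation).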

2.5 Chaotic theory

Nonlinear systems exhibit a kind of complex dynamical behavior known as chaos. Chaotic variables are ordered, random, regular, and ergodic in nature [39]. Chaotic variables with these characteristics can be employed in the optimization process to avoid local optima and facilitate global search [40]. To use chaos in optimization, a linear mapping between the optimization variables and the chaotic variables is required. Chaotic maps are used to accomplish this task, and logistic maps are the most widely applied. The mathematical function defining the logistic map is given in Eq. 11:

$$z_{{\text{k + 1}}} \, = \,\mu \times z_{k} \times \,\left( {1 - z_{k} } \right)$$
(11)

where \(z_k\) is the value of z in iteration k, with its initial value set to rand() in the interval [0, 1], \(z_{k+1}\) is the new value of z, and 3.57 < μ ≤ 4; the best results are obtained when μ = 4.
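The logistic map of Eq. 11 is a one-line update; a short sketch with μ = 4:

```python
def logistic_map(z, mu=4.0):
    """One iteration of the logistic map (Eq. 11): z_{k+1} = mu*z_k*(1 - z_k).
    For mu = 4 and a generic starting point, iterates stay in [0, 1]
    and behave chaotically."""
    return mu * z * (1.0 - z)
```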

3 Proposed methodology

3.1 Simulated annealing multiobjective grasshopper optimization algorithm (SAMOGOA)

The parameter 'c' plays a crucial role in MOGOA, as it maintains a balance between exploration and exploitation. It also governs the attraction and repulsion forces and ensures the presence of a comfort area. However, it should be noted that the linear decrease of 'c' may not be suitable for all problem types when aiming to discover the true Pareto Front.

In this study, a new hybrid model of MOGOA is introduced, incorporating Simulated Annealing (SA) with symmetric perturbation. This approach involves mapping the position of the new grasshopper to the current optimal position within a symmetrical interval, which is determined by the product of the current temperature and a random number mapped to the dimensional space. By applying SA, the algorithm can randomly alter the current value of the control parameter 'c'. This adaptation helps enhance the search process, leading to the discovery of high-quality and diverse solutions in the Pareto front using Eq. 12.

$$c_{new} = c_{old} \times \left( {1 + ns} \right) \times e^{{ - ns \times \frac{1}{N}}}$$
(12)

where \({c}_{new}\) is the new, perturbed value of c and \({c}_{old}\) is its value from the previous iteration (a constant in the first iteration, updated in every iteration thereafter); ns is the number of steps in SA, and N is the number of grasshoppers in the swarm. During annealing, the temperature is adjusted by Eq. 13.

$$T = T \times \alpha$$
(13)

In Eq. 13, \(\alpha \in (0,1)\) is the cooling coefficient, which decreases the temperature in each iteration. The value of α is set to 0.95, as in [42], where SA was used to change the value of the inertia weight. If the fitness of the population increases, the new value of c is accepted; otherwise, a probability is calculated by applying the Gaussian probability function shown in Eq. 14.

$$G\left( t \right) = \min \left( {1.0, e^{{ - \left( {\frac{{fitness_{new} - fitness_{old} }}{{K_{B} T}}} \right)}} } \right)$$
(14)

where \({fitness}_{new}\) is the fitness after obtaining the new value of c using Eq. 12, \({fitness}_{old}\) is the fitness in the previous iteration, T is the annealing temperature, and \({K}_{B}\) is Boltzmann's constant.

Eq. 15 then updates c using G(t), and the next iteration starts.

$$c_{new} = c_{old} *G\left( t \right)$$
(15)

The updated values of 'c' obtained through the SA process are utilized in MOGOA to adjust the positions of grasshoppers, facilitating quicker convergence of the algorithm. Through the optimization process, the SA search component aids SAMOGOA in escaping local optima and reaching global solutions. The flowchart depicted in Figure 1 illustrates the steps of SAMOGOA, with the modified steps highlighted within circles.
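A minimal Python sketch of this c-update step (Eqs. 12, 14 and 15); note that the exponent of Eq. 12 (taken here as exp(-ns/N)) and the sign convention of the fitness difference are ambiguous in print, so this reconstruction is an assumption:

```python
import math

def perturb_c(c_old, ns, N):
    """Candidate control parameter (Eq. 12, as reconstructed here:
    c_new = c_old * (1 + ns) * exp(-ns / N))."""
    return c_old * (1.0 + ns) * math.exp(-ns / N)

def sa_update_c(c_old, c_cand, fitness_old, fitness_new, T, kB=1.0):
    """Keep the perturbed c when the population fitness increases;
    otherwise damp c by the probability G(t) of Eq. 14 (Eq. 15)."""
    if fitness_new > fitness_old:
        return c_cand
    g = min(1.0, math.exp(-(fitness_new - fitness_old) / (kB * T)))
    return c_old * g
```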

Fig. 1
figure 1

a Flowchart of SAMOGOA b Flowchart of CSAMOGOA

3.2 Chaotic simulated annealing multiobjective grasshopper optimization algorithm (CSAMOGOA)

This algorithm is a variation of SAMOGOA that uses chaos. The cooling coefficient α is tuned using the logistic map given in Eq. 11 instead of a constant value, so Eq. 13 changes to Eqs. 16 and 17.

$$\alpha_{k + 1} = \mu \times \alpha_{k} \times \left( {1 - \alpha_{k} } \right)$$
(16)
$$T = T \times \alpha_{k + 1}$$
(17)

The initial value of \(\alpha_k\) in the modified logistic map is set to 0.95, unlike the rand() initialization of the original logistic map. The Simulated Annealing (SA) process then uses the new value of T generated by Eq. 17. Early in the run, this allows the algorithm to explore different regions of the search space; as the algorithm progresses, the chaotic parameter exploits the neighborhood to converge toward optimal solutions. The steps of both newly proposed algorithms are shown in the flowcharts in Fig. 1.
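The chaotic cooling step of Eqs. 16 and 17 can be sketched as follows (with the stated α₀ = 0.95, the first step yields α₁ = 4 × 0.95 × 0.05 = 0.19):

```python
def chaotic_cooling(T, alpha, mu=4.0):
    """One CSAMOGOA cooling step: update alpha with the logistic map
    (Eq. 16), then cool the temperature with the new alpha (Eq. 17).
    Returns the new (T, alpha) pair."""
    alpha = mu * alpha * (1.0 - alpha)
    return T * alpha, alpha
```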

The computational complexity of SAMOGOA and CSAMOGOA is \(O(MN^2)\), where M is the number of objectives and N is the number of solutions. This complexity is equal to that of MOGOA.

4 Results and discussion

In this study, thirteen benchmark functions are used to evaluate the performance of SAMOGOA. Three of them are from the ZDT test suite; the remaining ten belong to the CEC2009 test suite. The Pareto optimal fronts of these functions have diverse geometries (convex, concave, linear, and disconnected) with multiple local fronts, making it difficult to obtain the true Pareto optimal front.

All the algorithms are implemented in MATLAB (R2015a) on Windows 10 Enterprise with an Intel® Core™ i7-4700QM CPU @ 2.40 GHz and 16 GB RAM. The parameter settings are: population size n = 200, maximum iterations = 100, dimensions dim = 30, and archive size = 100. The results of the newly proposed algorithms SAMOGOA and CSAMOGOA are compared with plain MOGOA. Three different metrics are employed for quantitative analysis of convergence and coverage: IGD measures convergence, while MS and SP measure coverage. These metrics are defined in the following subsections.

4.1 Inverted generational distance (IGD)

IGD computes the Euclidean distance (Eq. 18) between each solution in the true Pareto front, denoted by P, and the nearest point in the obtained front, where N is the number of reference points. Small values of IGD are desired.

$$IGD\, = \,\frac{{\sqrt {\sum\nolimits_{i = 1}^{N} {\left( {dis_{i}^{t} } \right)^{2} } } }}{N}$$
(18)
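A sketch of the metric, assuming \(dis_i\) is the Euclidean distance from the ith reference point of the true front to its nearest obtained point:

```python
import math

def igd(true_front, obtained_front):
    """Inverted generational distance (Eq. 18): for each reference point of
    the true Pareto front, take the distance to the nearest obtained point,
    then combine as sqrt(sum of squares) / N."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    d = [min(dist(p, q) for q in obtained_front) for p in true_front]
    return math.sqrt(sum(di ** 2 for di in d)) / len(true_front)
```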

4.2 Spacing (SP)

This metric calculates the relative distance between consecutive points of the obtained PF using Eq. 19.

$$SP = \sqrt {\frac{1}{n - 1}\mathop \sum \limits_{i = 1}^{n} \left( {\overline{d} - d_{i} } \right)^{2} }$$
(19)

where \(\overline{d }\) is the average of all \({d}_{i}\) and n is the number of obtained Pareto optimal solutions.

$$d_{i } = \mathop {\min }\limits_{j} \left( {\left| {f_{1}^{i} \left( {\vec{x}} \right) - f_{1}^{j} \left( {\vec{x}} \right)} \right| + \left| {f_{2}^{i} \left( {\vec{x}} \right) - f_{2}^{j} \left( {\vec{x}} \right)} \right|} \right)$$

for all \(i,j=1,2,3\dots n\), where \(\overrightarrow{x}\) is a solution in objective space. A small value of SP shows that the points in the Pareto front are evenly distributed.
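A sketch of the metric, taking \(d_i\) as the Manhattan distance from point i to its nearest neighbour on the obtained front (Schott's definition, assumed here):

```python
import math

def spacing(front):
    """Spacing metric (Eq. 19): standard deviation of nearest-neighbour
    distances d_i on the obtained front; 0 means evenly spaced points."""
    n = len(front)
    d = [min(sum(abs(a - b) for a, b in zip(front[i], front[j]))
             for j in range(n) if j != i)
         for i in range(n)]
    dbar = sum(d) / n
    return math.sqrt(sum((dbar - di) ** 2 for di in d) / (n - 1))
```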

4.3 Maximum Spread (MS)

This metric gives the strength of an algorithm in terms of coverage; a higher value of MS shows better coverage. MS is calculated by finding the Euclidean distance between the minimum value \({b}_{i}\) and the maximum value \({a}_{i}\) in each objective using Eq. 20.

$$MS = \sqrt {\mathop \sum \limits_{i = 1}^{n} \max \left( {d\left( {a_{i} ,b_{i} } \right)} \right)}$$
(20)
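A sketch of MS as the Euclidean norm of the per-objective extents of the obtained front (our reading of Eq. 20):

```python
import math

def maximum_spread(front):
    """Maximum spread (Eq. 20): square root of the summed squared ranges
    (max minus min) of the obtained front in each objective."""
    m = len(front[0])
    return math.sqrt(sum(
        (max(p[i] for p in front) - min(p[i] for p in front)) ** 2
        for i in range(m)))
```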

4.4 Results on ZDT Test Suite

Table 1 provides detailed information on three functions from the ZDT suite, all of which are bi-objective and not multimodal. The specifics include the function names and their characteristics.

Table 1 ZDT Test Suite

Tables 2, 3, and 4 present the best, average, worst, standard deviation, and median values of the Inverted Generational Distance (IGD), Spacing (SP), and Maximum Spread (MS) metrics, respectively. These results are obtained from ten independent runs of the newly proposed algorithms, while the MOGOA results for the IGD metric are sourced from its original study. For the SP and MS metrics, the results are calculated by implementing MOGOA (referred to as MOGOA*), since only a graphical representation was provided in the original study.

Table 2 IGD results for ZDT Suite
Table 3 SP values for ZDT Suite
Table 4 MS values for ZDT Suite

Upon analyzing Table 2, it is evident that SAMOGOA outperforms MOGOA across all three test functions. However, CSAMOGOA also demonstrates comparable results. The low values of the IGD metric indicate the accuracy of the algorithms in approximating the Pareto optimal front or set. Consequently, these results signify the good convergence of SAMOGOA towards Pareto optimal solutions.

By examining the results in Table 3, it becomes clear that CSAMOGOA outperforms the other two algorithms in terms of the Spread (SP) performance measure for all test functions. CSAMOGOA demonstrates a good distribution of solutions along the obtained Pareto front. However, SAMOGOA still produces better results than its parent algorithm, MOGOA. This observation is further supported by the graphs presented in Fig. 2, which depict the performance on three functions from the ZDT suite. SAMOGOA generates Pareto fronts with a greater number of solutions and a good distribution of solutions for ZDT1 and ZDT3. On the other hand, CSAMOGOA performs better than SAMOGOA on the ZDT2 function, although it is unable to surpass the performance of MOGOA. In summary, CSAMOGOA shows superior results in terms of SP performance measure and a good distribution of solutions along the Pareto front. However, SAMOGOA still demonstrates improvements over MOGOA, as evidenced by the graphs in Fig. 2.

Fig. 2
figure 2

Pareto fronts of ZDT Test Suite obtained by different algorithms

Table 4 shows that SAMOGOA achieves better results for the MS metric on the ZDT1 function, but for the other two functions its performance is weaker. This degradation may be due to the nature of the functions. The results on the ZDT test suite demonstrate that the Pareto optimal solution sets obtained by SAMOGOA have better diversity, convergence, and uniformity of distribution along the Pareto front, making it the best among the three algorithms; its chaotic variant, on the other hand, was unable to perform as well as expected. The graphical results of the Pareto fronts obtained by the different algorithms are shown in Fig. 3, illustrating the superiority of the newly proposed algorithm.

Fig. 3
figure 3

Pareto Front of UF1-UF7

4.5 Results on CEC2009 test suite

The ZDT problems are all bi-objective and uni-modal, and the first function is a simple linear function, so the CEC2009 test functions are used to further benchmark the newly proposed algorithms. Although the test functions in this suite are all unconstrained, they are among the most complex, difficult, and diverse in the MOP literature used to verify the superiority of algorithms. Seven functions (UF1-UF7) are bi-objective and three (UF8-UF10) are tri-objective, with mathematical definitions in Table 5. The results for these test functions are shown and compared in Tables 6, 7 and 8. Table 6 shows that SAMOGOA and CSAMOGOA produce better results on all of the test functions in the UF suite compared with the original MOGOA. The small IGD values produced by these algorithms provide evidence of fast and accurate convergence toward the Pareto optimal fronts. A closer look at these results reveals that the chaotic version of the newly proposed algorithm performed better on most of the bi-objective functions of the CEC2009 test problems: UF1, UF2, UF3, UF4, and UF7. On the other hand, SAMOGOA is the winner on the functions with three objectives.

Table 5 CEC2009 Test Suite
Table 6 IGD values for CEC2009 test Suite

Both of the newly proposed algorithms have shown superiority in terms of convergence; to ensure that coverage is good as well, Tables 7 and 8 must be inspected for the other two performance measures. The results obtained by the new algorithms show that the SP and MS values are improved over, or almost the same as, those of MOGOA. These results indicate that the proposed algorithms also have the advantage of better coverage.

Table 7 SP values for CEC2009 Test Suite
Table 8 MS values for CEC2009 Test Suite

Upon closer examination of the results, it becomes evident that SAMOGOA is a highly effective and valuable algorithm compared to the other algorithms. Its effectiveness can be attributed to its fast convergence and high coverage. The fast convergence is achieved by incorporating target selection from MOGOA and using SA to escape local optima, thus supporting the global optimization process. The superior coverage of SAMOGOA is a result of proper selection pressure, built through SA-assisted target selection and archive maintenance, which improves the diversity and spread of solutions across all objectives.

While SAMOGOA has demonstrated promising results, it shares a limitation with its parent algorithm, MOGOA, in that it is more suitable for problems with a small number of objectives. The algorithm's selection process relies on the Pareto dominance relation among different solutions, and as the number of non-dominant points increases, the archive fills up quickly, leading to slower optimization. Increasing the size of the archive to accommodate more objectives may hinder the algorithm's ability to converge to the true Pareto front. Therefore, SAMOGOA is most applicable to problems with a maximum of four objectives and continuous variables.

The movement of grasshoppers in SAMOGOA is influenced by the best non-dominated solutions in the archive and guided by changes in temperature during SA, enhancing both convergence and coverage. The results of the study provide evidence that SAMOGOA is highly competent for multi-objective optimization problems, as it combines the capabilities of both local and global search to strike a balance between exploration and exploitation. This enables the swarm to explore the entire solution space by controlling attraction/repulsion forces and maintaining a comfort area between grasshoppers.

5 Conclusion and future works

This research introduces two novel techniques to enhance the performance of the Multi-Objective Grasshopper Optimization Algorithm (MOGOA). The first technique involves using Simulated Annealing (SA) to perturb the parameter 'c' multiple times, evaluating the fitness values of grasshoppers at the end of each SA iteration, and selecting the best value of 'c' based on the comparison with the previous iteration. This selected value of 'c' is then utilized to control the optimization process of SAMOGOA.

The second technique combines chaos theory and SA to modify 'c', taking advantage of their favorable properties. To assess the effectiveness of the proposed algorithms, several benchmark test functions are employed, and their performance is compared against MOGOA. The results demonstrate that these new algorithms outperform MOGOA in terms of convergence and coverage.

Initially, the algorithms are tested on unconstrained continuous optimization problems, but they can be adapted to handle constraints and discrete variables in future work. The benchmark functions used in the evaluation have either two or a maximum of three objectives. Furthermore, the proposed algorithms can be extended to address many-objective optimization problems and tackle large-scale practical problems.

Currently, the same value of 'c' is utilized to control both exploration and exploitation, as well as attraction/repulsion forces and the comfort area between grasshoppers. However, it is also possible to explore the use of two different values of 'c' to fine-tune the algorithms. In summary, this study introduces innovative techniques that combine SA, symmetric perturbation, and chaos theory to improve the optimization process in multi-objective problems, showcasing superior performance compared to MOGOA. The algorithms have the potential to be applied to a wide range of practical applications, including engineering and various real-world scenarios.