Abstract
The multi-objective grasshopper optimization algorithm (MOGOA) is a relatively new algorithm inspired by the collective behavior of grasshoppers, which aims to solve multi-objective optimization problems in IoT applications. To enhance its performance and improve global convergence speed, the algorithm integrates simulated annealing (SA), a metaheuristic commonly used to improve the search capability of optimization algorithms. In MOGOA, SA is integrated by employing symmetric perturbation to control the movement of grasshoppers, which effectively balances exploration and exploitation, leading to better convergence and improved performance.
The paper proposes two hybrid algorithms based on MOGOA, which utilize simulated annealing for solving multi-objective optimization problems. One of these hybrid algorithms combines chaotic maps with simulated annealing and MOGOA. The purpose of incorporating simulated annealing and chaotic maps is to address the issue of slow convergence and enhance exploitation by searching high-quality regions identified by MOGOA.
Experimental evaluations were conducted on thirteen different benchmark functions to assess the performance of the proposed algorithms. The results demonstrated that the introduction of simulated annealing significantly improved the convergence of MOGOA. Specifically, the IGD (Inverted Generational Distance) values for benchmark functions ZDT1, ZDT2, and ZDT3 were smaller than those obtained by MOGOA alone, indicating better convergence. Overall, the proposed algorithms show promise in solving multi-objective optimization problems.
1 Introduction
In recent years, Multiobjective Evolutionary Algorithms (MOEAs) have gained significant attention due to their ability to solve Multiobjective Optimization Problems (MOPs). The main goal of an MOEA is to find a set of Pareto-optimal solutions in which no objective can be improved without sacrificing another. Among these algorithms, the Grasshopper Optimization Algorithm (GOA) is a novel metaheuristic inspired by the swarming behavior of grasshoppers in their search for food, and it has been shown to be an effective and efficient technique for solving optimization problems [1,2,3]. However, GOA has some limitations, including the need for proper parameter tuning, the tendency to get stuck in local optima, and slow convergence. To overcome these limitations, several hybrid algorithms have been proposed in the literature. In this study, the authors propose two efficient hybrid algorithms based on the idea of simulated annealing. The Simulated Annealing Multiobjective Grasshopper Optimization Algorithm (SAMOGOA) is a hybrid algorithm that integrates Simulated Annealing (SA) and chaos theory with MOGOA to improve its local and global search capabilities [4,5,6,7,8,9,10,11,12,13]. SA is a stochastic global optimization algorithm based on the concept of annealing in metallurgy; as a metaheuristic, it can avoid getting trapped in local optima and guide the search process towards global optima. Chaos theory, which describes the complex dynamical behavior of nonlinear systems, is used with different metaheuristic algorithms to improve their exploration and exploitation abilities; chaotic variables are random, regular, and ergodic in nature [14,15,16,17,18,19,20,21,22,23,24,25,26,27,28]. The SAMOGOA algorithm consists of multiple steps.
Initially, a random population is generated, followed by the calculation of fitness values for each individual. The selection operator is then employed to choose parents for reproduction, and new offspring are generated through the crossover and mutation operators. The Pareto dominance operator is used to identify non-dominated solutions, while the archive operator stores these Pareto-optimal solutions. To maintain diversity within the population, the crowding distance operator is utilized, and the roulette wheel selection operator is employed to select a target from the archive. Control parameters are updated using techniques such as simulated annealing (SA) and chaos theory; the control parameter governs the movement of grasshoppers during the optimization process towards the global optimum, and the update formula is applied to generate new solutions. The algorithm continues until the stopping criteria are satisfied [29,30,31,32,33,34,35]. In essence, SAMOGOA is a hybrid approach that leverages the advantages of MOGOA, SA, and chaos theory to enhance both local and global search capabilities. The subsequent section provides a comprehensive description of the individual steps of the SAMOGOA algorithm.
The remainder of this paper is organized as follows. Section 2 provides a literature review and an overview of the principle behind the grasshopper algorithm, which forms the basis for the proposed improvement. Section 3 introduces the new algorithms, including the Simulated Annealing Multiobjective Grasshopper Optimization Algorithm, and details their specific steps. Section 4 describes the experimental setup, including equipment, environment, benchmark functions, and parameters; the experimental results and a statistical comparison among the algorithms are also presented in this section. Finally, the last section summarizes the findings and suggests future research directions.
1.1 Literature review
Classical methods for multiobjective optimization typically involve transforming the multiobjective problem into a single objective problem, often by aggregating the multiple objectives into a single function. This can be done using various techniques, such as weighted sum, epsilon constraint, and goal programming. However, these methods may not always provide an accurate representation of the true Pareto optimal set. On the other hand, evolutionary methods for multiobjective optimization are inspired by natural selection and survival of the fittest. These methods typically use a population of candidate solutions and iteratively evolve the population towards the Pareto optimal set through selection, reproduction, and mutation operators. Evolutionary methods can effectively handle multiple objectives without requiring the use of any aggregation techniques, and can often provide a more accurate representation of the Pareto optimal set [27].
2 Classical methods
Evolutionary methods, on the other hand, are population-based metaheuristic algorithms that can directly search for multiple solutions in the Pareto front without the need for problem scaling. Some popular evolutionary algorithms for multiobjective optimization include NSGA-II [36], SPEA2 [37], MOEA/D [37], and PAES [37]. These algorithms use a variety of techniques such as elitism, fitness assignment, and diversity preservation to generate and maintain a diverse set of non-dominated solutions. Evolutionary algorithms are generally preferred for solving multiobjective optimization problems because they can handle complex, nonlinear, and high-dimensional problems more efficiently than classical methods.
2.1 GA-based approaches
Genetic Programming (GP) is another type of evolutionary algorithm used for solving multiobjective optimization problems. GP is a heuristic search algorithm that uses the principles of natural selection and genetic recombination to evolve computer programs. In multiobjective GP, each individual is a program that has multiple objectives to optimize. The goal is to evolve a population of programs that represent a set of Pareto-optimal solutions to the problem at hand [38]. Particle Swarm Optimization (PSO) is a population-based optimization algorithm inspired by the collective behavior of swarms of birds or fish. Each particle in the swarm represents a potential solution to the problem and moves through the search space based on its own position and velocity, as well as the best position found by the swarm as a whole. PSO has been adapted for multiobjective optimization in several ways, such as the Multiobjective Particle Swarm Optimization (MOPSO) algorithm [39] and the Multiobjective Particle Swarm Optimization with Crowding (MOPSO-C) algorithm [40]. Other types of multiobjective optimization algorithms include the Pareto Archived Evolution Strategy (PAES) algorithm [41], the Strength Pareto Ant Colony Optimization (SPACO) algorithm [47], and the Multiobjective Differential Evolution (MODE) algorithm [42]. Each of these algorithms has its own strengths and weaknesses, and the choice of algorithm will depend on the specific problem being solved and the available computational resources.
2.2 Other evolutionary algorithms
Multiobjective Particle Swarm Optimization (MOPSO): this algorithm is an extension of PSO for multi-objective optimization, incorporating the concept of Pareto dominance and the use of an external archive to store non-dominated solutions. The non-dominated solutions guide the particles in finding their leader and direction of flight, and the search space is divided into hypercubes, which help locate the positions of particles [43]. A particle swarm optimizer for multi-objective optimization (SMOPSO) is a later version of MOPSO that incorporates elitism and a mutation operator used to alter one dimension of a particle. "Another Multi Objective Particle Swarm Optimization" (AMOPSO) [44] also extends PSO to handle several objectives; its main novelty is the use of a clustering technique to divide the population of particles into several swarms, achieving a better distribution of solutions in decision variable space. AMOPSO does not use an external population, as elitism is implemented by the migration of leader particles. Dynamic population size in PSO-based multiobjective optimization uses a dynamic population and an adaptive local archive to enhance exploration capability compared with older versions of MOPSO, most of which use a fixed-size population [45].
The flower pollination algorithm (FPA) is inspired by the natural phenomenon of flower pollination. Biotic cross-pollination is comparable to global optimization, as it takes place between flowers that are distant from each other, with pollinators obeying Lévy flight movement; local optimization corresponds to abiotic and self-pollination. FPA can therefore be employed to solve optimization problems [46]. FPA for multi-objective optimization (MOFPA) is an extension of FPA for solving MOPs [47]. The Non-Dominated Sorting Flower Pollination Algorithm (NSFPA) for Dynamic Economic Emission Dispatch (DEED) [48] combines non-dominated sorting, fuzzy logic, a predator-prey model, and FPA for DEED. There are many other variations of FPA, such as a discrete flower pollination algorithm for the graph coloring problem [49], an adaptive flower pollination algorithm for software test suite minimization [50], and many more listed in the literature.
Multiobjective versions of the BAT algorithm [51], the Cuckoo Search algorithm (CS) [52], the multi-objective ant lion optimizer (MOALO) [53], Ant Colony Optimization (ACO) [54], Artificial Bee Colony Optimization (ABCO) [55], Differential Evolution (DE) [56], the grasshopper optimization algorithm for multi-objective optimization problems (MOGOA) [29], the Firefly Algorithm [57], the learner performance based behavior algorithm (LPB) [58], the Dragonfly Algorithm (DA) [59], cat swarm optimization (CSO) [60], the backtracking search optimization algorithm (BSA) [61], the Donkey and Smuggler Optimization algorithm (DSO) [62], the Fitness Dependent Optimizer (FDO) [63], IFDO [64], multiple filter-based rankers guiding a hybrid grasshopper optimization algorithm and simulated annealing for feature selection with high-dimensional multi-class imbalanced datasets [65], particle ranking for multi-objective particle swarm optimization feature selection [66], particle distance rank feature selection by particle swarm optimization [67], the Slime Mould Algorithm and its variants and applications, a multi-agent system for solving high-dimensional optimization problems with a case study on email spam detection, an improved particle swarm optimization with a backtracking search optimization algorithm for continuous optimization problems, an improved cuckoo search optimization algorithm with a genetic algorithm for community detection in complex networks, quantum-inspired metaheuristic algorithms, and feature selection with a binary symbiotic organisms search algorithm for email spam detection [68,69,70,71,72] are among the nature-inspired Multi-Objective Evolutionary Algorithms (MOEAs) documented in the literature, along with surveys that offer extensive insights into this field.
2.3 Multiobjective grasshopper optimization algorithms
In this section, GOA, MOGOA, SA, and chaotic maps are first summarized.
a. Grasshopper optimization algorithm (GOA)
GOA is a recent algorithm that mimics the swarming behavior of grasshoppers. Each grasshopper in the swarm has its own position, which corresponds to a candidate solution of a given optimization problem [35]. \(X_i\) represents the position of the i-th grasshopper, given in Eq. 1:

\(X_i = S_i + G_i + A_i\)  (1)

It has three subcomponents: \(S_i\), the social interaction; \(G_i\), the force of gravity on the i-th grasshopper; and \(A_i\), the wind advection. Social interaction, defined in Eq. 2, is the dominant part and comes from the grasshoppers themselves:

\(S_i = \sum_{j=1, j \neq i}^{N} s(d_{ij})\,\hat{d}_{ij}\)  (2)
where \(d_{ij} = |X_j - X_i|\) is the distance between the i-th and j-th grasshoppers, \(s\) is a function defining the strength of social forces, shown in Eq. 3, and \(\hat{d}_{ij} = (X_j - X_i)/d_{ij}\) is a unit vector from the i-th grasshopper to the j-th grasshopper:

\(s(r) = f e^{-r/l} - e^{-r}\)  (3)
where \(f\) indicates the intensity of attraction and \(l\) is the attractive length scale; over distances in the interval [0, 4], the function \(s\) controls the attraction or repulsion between grasshoppers, with the second term in Eq. 3 modeling repulsion. The region where there is neither attraction nor repulsion is called the comfort zone, which occurs at a distance of exactly 2.079. Distances need to be normalized to the interval [1, 4], as the \(s\) function cannot represent strong forces at large distances.
The \(G_i\) component has two parts: \(g\), the gravitational constant, and \(\hat{e}_g\), a unit vector towards the center of the earth. The mathematical definition is given in Eq. 4:

\(G_i = -g\,\hat{e}_g\)  (4)
The wind advection \(A_i\) is calculated as follows:

\(A_i = u\,\hat{e}_w\)  (5)

where \(u\) is a constant drift and \(\hat{e}_w\) is a unit vector in the direction of the wind. Substituting the components, Eq. 1 can be written as:

\(X_i = \sum_{j=1, j \neq i}^{N} s\left(\left|X_j - X_i\right|\right)\dfrac{X_j - X_i}{d_{ij}} - g\,\hat{e}_g + u\,\hat{e}_w\)  (6)
The balance between exploration and exploitation in a stochastic algorithm helps to find the global optimum, so special parameters are included to drive exploration and exploitation at different stages of the optimization. The mathematical model in Eq. 6 changes to:

\(X_i^d = c\left(\sum_{j=1, j \neq i}^{N} c\,\dfrac{ub_d - lb_d}{2}\, s\left(\left|x_j^d - x_i^d\right|\right)\dfrac{x_j - x_i}{d_{ij}}\right) + \hat{T}_d\)  (7)

where \(ub_d\) is the upper bound and \(lb_d\) the lower bound of the d-th dimension, and \(\hat{T}_d\) is the value of the d-th dimension in the target (the best solution found so far). The \(G_i\) component is ignored (no gravitational force is assumed) and the wind direction is always towards the target. The decreasing coefficient \(c\) appears twice in Eq. 7 to control the forces between grasshoppers and is updated with Eq. 8: the outer \(c\) maintains the balance between exploration and exploitation, while the inner \(c\) reduces the repulsion/attraction forces between grasshoppers in proportion to the number of iterations.
\(c = c_{max} - l\,\dfrac{c_{max} - c_{min}}{L}\)  (8)

where \(c_{max} = 1\) is the maximum value, \(c_{min} = 0.00001\) is the minimum value, \(l\) indicates the current iteration, and \(L\) is the maximum number of iterations.
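The mechanics above can be sketched in Python. This is an illustrative implementation, not the authors' MATLAB code: the parameter defaults f = 0.5 and l = 1.5 follow the original GOA formulation, while the exact way distances are mapped into [1, 4] is a hypothetical choice, since the text only states the target interval.

```python
import math

def s(r, f=0.5, l=1.5):
    # Strength of social forces (Eq. 3): attraction term minus repulsion term.
    return f * math.exp(-r / l) - math.exp(-r)

def update_coefficient(curr_iter, max_iter, c_max=1.0, c_min=1e-5):
    # Linearly decreasing coefficient c (Eq. 8).
    return c_max - curr_iter * (c_max - c_min) / max_iter

def goa_step(positions, target, c, lb, ub):
    # One position update (Eq. 7) for a swarm given as a list of lists.
    dim = len(target)
    new_positions = []
    for i, xi in enumerate(positions):
        social = [0.0] * dim
        for j, xj in enumerate(positions):
            if i == j:
                continue
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(xi, xj)))
            # Map distances into [1, 4] so s() stays informative
            # (an assumed mapping; the paper only names the interval).
            d_norm = 1.0 + 3.0 * (dist % 1.0)
            for d in range(dim):
                unit = (xj[d] - xi[d]) / (dist + 1e-12)
                social[d] += c * (ub[d] - lb[d]) / 2.0 * s(d_norm) * unit
        # Outer c scales the social component; the target replaces G and A.
        new_positions.append([min(max(c * social[d] + target[d], lb[d]), ub[d])
                              for d in range(dim)])
    return new_positions
```

Note that s(2.079) is approximately zero, which is the comfort distance mentioned in the text.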
b. Multi-objective grasshopper optimization algorithm (MOGOA)
MOGOA is an extension of GOA to solve multi-objective optimization problems. A multi-objective algorithm should be able to obtain very accurate approximations of the true Pareto optimal solutions, well distributed across all objectives. To achieve these goals, Pareto optimal dominance is used, as two solutions cannot be compared with the regular relational operators, and the best Pareto optimal solutions are stored in an archive.
The main challenge in designing MOGOA was choosing the target. For target selection, an archive of Pareto optimal solutions is maintained and the target is chosen from it. Target selection is based on a crowding measure similar to the one in MOPSO [36], using Eq. 9:

\(P_i = \dfrac{1}{N_i}\)  (9)

where \(P_i\) is the probability of choosing the i-th archive member as the target and \(N_i\) is the number of solutions in the neighborhood of the i-th solution. This probability is then used to find the target via roulette wheel selection.
The size of the archive is kept constant to control the computational cost of MOGOA, which may result in a full archive. To address this issue, solutions in the more populated areas of the archive are removed, again using a roulette wheel with the inverse of \(P_i\). The archive is updated regularly in this way.
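The archive-based target selection and pruning described above can be sketched as follows. The neighborhood radius and the per-objective (Chebyshev) neighborhood test are illustrative assumptions, since the text does not specify how neighborhoods are measured.

```python
import random

def neighborhood_counts(archive_objs, radius):
    # N_i: number of archive members within `radius` of solution i
    # in objective space (assumed Chebyshev neighborhood).
    counts = []
    for i, a in enumerate(archive_objs):
        n = sum(1 for j, b in enumerate(archive_objs)
                if i != j and max(abs(x - y) for x, y in zip(a, b)) <= radius)
        counts.append(max(n, 1))  # avoid division by zero for isolated points
    return counts

def roulette_select(weights):
    # Standard roulette-wheel selection over positive weights.
    total = sum(weights)
    pick = random.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if pick <= acc:
            return i
    return len(weights) - 1

def select_target(archive_objs, radius=0.1):
    # Eq. 9: P_i = 1/N_i, so sparsely populated regions are favoured.
    counts = neighborhood_counts(archive_objs, radius)
    return roulette_select([1.0 / n for n in counts])

def prune_index(archive_objs, radius=0.1):
    # Full archive: remove from crowded regions, weighting by the
    # inverse of P_i, i.e. proportionally to N_i.
    counts = neighborhood_counts(archive_objs, radius)
    return roulette_select([float(n) for n in counts])
```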
2.4 Simulated annealing
Simulated Annealing (SA) is a stochastic search method derived from Monte Carlo simulation and was introduced in 1983. It is able to solve difficult combinatorial optimization problems. The annealing process simulates the thermal motion of atoms in a heat bath as the temperature is lowered from a high value [37]. SA is capable of avoiding local optima by adjusting the temperature and changing the solution based on a probability function, given in Eq. 10:

\(P = e^{-\Delta E / (K_B T)}\)  (10)

where \(K_B\) is Boltzmann's constant, \(T\) is the current temperature, and \(\Delta E\) is the difference in energy between the new and current states. The new solution is accepted or rejected based on the value of this probability function [38].
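The acceptance rule of Eq. 10 can be illustrated with a minimal SA loop. The one-dimensional objective, step size, and geometric cooling schedule here are hypothetical choices for demonstration only.

```python
import math
import random

def accept(delta_e, temperature, k_b=1.0):
    # Metropolis criterion (Eq. 10): always accept improvements, accept
    # worsening moves with probability exp(-dE / (k_B * T)).
    if delta_e <= 0:
        return True
    return random.random() < math.exp(-delta_e / (k_b * temperature))

def anneal(f, x0, t0=1.0, alpha=0.95, steps=200, step_size=0.5):
    # Minimize f starting from x0 with symmetric random moves and cooling.
    x, fx, t = x0, f(x0), t0
    best, best_f = x, fx
    for _ in range(steps):
        cand = x + random.uniform(-step_size, step_size)
        fc = f(cand)
        if accept(fc - fx, t):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = x, fx
        t *= alpha  # geometric cooling
    return best, best_f
```

At high temperature almost any move is accepted (exploration); as T falls, only improvements pass (exploitation).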
2.5 Chaotic theory
Nonlinear systems exhibit a kind of complex dynamical behavior known as chaos. Chaotic variables are random, regular, and ergodic in nature [39]. Variables with these characteristics can be employed in an optimization process to avoid local optima and facilitate global search [40]. To use chaos in optimization, a linear mapping between the optimization variables and the chaotic variables is required. Chaotic maps accomplish this task, and logistic maps are the most widely applied. The mathematical function defining the logistic map is given in Eq. 11:

\(z_{k+1} = \mu z_k (1 - z_k)\)  (11)

where \(z_k\) is the value of \(z\) in iteration \(k\), its initial value is set by rand() in the interval [0, 1], \(z_{k+1}\) is the new value of \(z\), and \(3.57 < \mu \le 4\); the best results are obtained when \(\mu = 4\).
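A minimal sketch of the logistic map of Eq. 11; with μ = 4 the sequence stays in [0, 1] and is highly sensitive to the initial value.

```python
def logistic_map(z, mu=4.0):
    # Logistic map (Eq. 11): z_{k+1} = mu * z_k * (1 - z_k).
    return mu * z * (1.0 - z)

def chaotic_sequence(z0, n, mu=4.0):
    # Generate n chaotic values starting from z0 in (0, 1).
    seq = []
    z = z0
    for _ in range(n):
        z = logistic_map(z, mu)
        seq.append(z)
    return seq
```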
3 Proposed methodology
3.1 Simulated annealing multiobjective grasshopper optimization algorithm (SAMOGOA)
The parameter 'c' plays a crucial role in MOGOA, as it maintains a balance between exploration and exploitation. It also governs the attraction and repulsion forces and ensures the presence of a comfort area. However, it should be noted that the linear decrease of 'c' may not be suitable for all problem types when aiming to discover the true Pareto Front.
In this study, a new hybrid model of MOGOA is introduced, incorporating Simulated Annealing (SA) with symmetric perturbation. This approach involves mapping the position of the new grasshopper to the current optimal position within a symmetrical interval, which is determined by the product of the current temperature and a random number mapped to the dimensional space. By applying SA, the algorithm can randomly alter the current value of the control parameter 'c'. This adaptation helps enhance the search process, leading to the discovery of high-quality and diverse solutions in the Pareto front using Eq. 12.
where \({c}_{new}\) is the newly perturbed 'c', \({c}_{old}\) is its value in the previous iteration (a constant in the first iteration, updated in every subsequent one), ns is the number of steps in SA, and N is the number of grasshoppers in the swarm. During annealing, the temperature is adjusted by Eq. 13:

\(T_{k+1} = \alpha T_k\)  (13)
In Eq. 13, \(\alpha \in (\mathrm{0,1})\) is the cooling coefficient, which decreases the temperature in each iteration. The value of α is set to 0.95 as in [42], where SA was used to change the value of the inertia weight. If the fitness of the population increases, the new value of 'c' is accepted; otherwise a probability is calculated by applying a Gaussian probability function, shown in Eq. 14.
where \({fitness}_{new}\) is the fitness after obtaining the new value of c using Eq. 12, \({fitness}_{old}\) is the fitness in the previous iteration, T is the annealing temperature, and \({K}_{B}\) is Boltzmann's constant. Eq. 15 then changes 'c' using G(t), and the next iteration starts.
The updated values of 'c' obtained through the SA process are utilized in MOGOA to adjust the positions of grasshoppers, facilitating quicker convergence of the algorithm. Through the optimization process, the SA search component aids SAMOGOA in escaping local optima and reaching global solutions. The flowchart depicted in Figure 1 illustrates the steps of SAMOGOA, with the modified steps highlighted within circles.
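The SA-based tuning of 'c' can be sketched as follows. The perturbation formula below is a hypothetical stand-in for the paper's Eq. 12 (only its symmetric, temperature-scaled character is taken from the text), `fitness_of` is a placeholder for evaluating the swarm under a candidate 'c', and the Boltzmann acceptance rule stands in for the Gaussian function of Eq. 14.

```python
import math
import random

def perturb_c(c_old, temperature, ns=10):
    # Hypothetical symmetric perturbation: the interval shrinks with the
    # temperature and with the number of SA steps ns (stand-in for Eq. 12).
    return c_old + temperature * random.uniform(-1.0, 1.0) / ns

def sa_tune_c(c_old, fitness_of, t, k_b=1.0, ns=10):
    # One SA pass over 'c': accept improvements outright, otherwise accept
    # with a Boltzmann-style probability, as in the SA acceptance rule.
    f_old = fitness_of(c_old)
    c = c_old
    for _ in range(ns):
        cand = perturb_c(c, t, ns)
        cand = min(max(cand, 1e-5), 1.0)  # keep c within [c_min, c_max]
        f_new = fitness_of(cand)
        if f_new < f_old or random.random() < math.exp(-(f_new - f_old) / (k_b * t)):
            c, f_old = cand, f_new
    return c
```

The returned 'c' would then be fed back into the MOGOA position update of Eq. 7.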
3.2 Chaotic simulated annealing multiobjective grasshopper optimization algorithm (CSAMOGOA)
This algorithm is a variation of SAMOGOA that uses chaos. The cooling coefficient α is tuned using the logistic map given in Eq. 11 instead of a constant value, so Eq. 13 changes to Eq. 17.
The initial value of \(\alpha_k\) in the modified logistic map is set to 0.95, in contrast to the use of rand() in the original logistic map. The Simulated Annealing (SA) process then uses the new value of T generated by Eq. 17. This allows the algorithm to explore different regions of the search space initially; as the algorithm progresses, the chaotic parameter exploits the neighborhood to converge towards optimal solutions. The steps of both newly proposed algorithms are shown in the flowcharts in Fig. 1.
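A sketch of the chaotic cooling idea, assuming Eq. 17 replaces the constant α of Eq. 13 with a logistic-map update before each cooling step (the exact form of Eq. 17 is not reproduced in the text, so this is an interpretation):

```python
def chaotic_cooling(t, alpha):
    # Assumed Eq. 17: alpha evolves by the logistic map (mu = 4),
    # starting from alpha_0 = 0.95, before cooling the temperature.
    alpha_new = 4.0 * alpha * (1.0 - alpha)
    return t * alpha_new, alpha_new
```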
The computational complexity of SAMOGOA and CSAMOGOA is \(O(MN^2)\), where M is the number of objectives and N is the number of solutions, the same as for MOGOA.
4 Results and discussion
In this study, thirteen different benchmark functions are used to evaluate the performance of SAMOGOA. Three of them are from the ZDT test suite; the remaining ten belong to the CEC2009 test suite. The Pareto optimal fronts of these functions have diverse geometries (convex, concave, linear, and disconnected) with multiple local fronts, making it difficult to obtain the true Pareto optimal front.
All the algorithms are implemented in MATLAB (R2015a) on Windows 10 Enterprise with an Intel® Core™ i7-4700QM CPU @ 2.40 GHz and 16 GB RAM. The parameter settings are: population size n = 200, maximum iterations = 100, dimensions dim = 30, and archive size = 100. The results of the newly proposed algorithms SAMOGOA and CSAMOGOA are compared with plain MOGOA. Three different metrics are employed for quantitative analysis of convergence and coverage: IGD measures convergence, while MS and SP measure coverage. The metrics are defined in the following sub-sections.
4.1 Inverted generational distance (IGD)
The Euclidean distance (Eq. 18) is calculated between each of the N reference points on the true PF, denoted by P, and the nearest point on the obtained PF. Small values of IGD are desired.
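A common formulation of IGD, averaging the distance from each true-front reference point to its nearest obtained solution, can be sketched as follows (the paper's exact Eq. 18 may normalize differently):

```python
import math

def igd(true_front, obtained_front):
    # IGD: average Euclidean distance from each reference point on the
    # true Pareto front to its nearest solution on the obtained front.
    total = 0.0
    for p in true_front:
        total += min(math.dist(p, q) for q in obtained_front)
    return total / len(true_front)
```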
4.2 Spacing (SP)
This metric calculates relative distance between consecutive points of the obtained PF using following Eq. 19.
where \(\overline{d }\) is the average of all \({d}_{i}\), and \({d}_{i}\) is the distance between neighboring solutions for all \(i,j=\mathrm{1,2},3\dots n\), with \(\overrightarrow{x}\) a solution in objective space and n the number of solutions in the obtained front. A small value of SP shows that the points in the Pareto front are evenly distributed.
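A sketch of the spacing metric, under the usual convention that \(d_i\) is the Manhattan distance from solution i to its nearest neighbour on the front:

```python
import math

def spacing(front):
    # SP (Eq. 19): d_i is the Manhattan distance from solution i to its
    # nearest neighbour; SP is the sample standard deviation of the d_i,
    # so an evenly spaced front yields SP = 0.
    d = []
    for i, a in enumerate(front):
        d.append(min(sum(abs(x - y) for x, y in zip(a, b))
                     for j, b in enumerate(front) if i != j))
    d_bar = sum(d) / len(d)
    return math.sqrt(sum((d_bar - di) ** 2 for di in d) / (len(d) - 1))
```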
4.3 Maximum Spread (MS)
This metric measures the strength of an algorithm in terms of coverage; a higher value of MS shows better coverage. MS is calculated by finding the Euclidean distance between the minimum value \({b}_{i}\) and the maximum value \({a}_{i}\) in each objective, using Eq. 20.
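Following the description above, MS can be computed as the Euclidean distance between the per-objective extremes of the obtained front:

```python
import math

def maximum_spread(front):
    # MS (Eq. 20): Euclidean distance between the per-objective extremes
    # (a_i = max, b_i = min) of the obtained front.
    m = len(front[0])
    return math.sqrt(sum((max(p[i] for p in front) - min(p[i] for p in front)) ** 2
                         for i in range(m)))
```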
4.4 Results on ZDT Test Suite
Table 1 provides detailed information on three functions from the ZDT suite, all of which are bi-objective and not multimodal. The specifics include the function names and their characteristics.
Tables 2, 3, and 4 present the best, average, worst, standard deviation, and median values of the Inverted Generational Distance (IGD), Spacing (SP), and Maximum Spread (MS) metrics, respectively. These results are obtained from ten independent runs of the newly proposed algorithms, while the MOGOA results for the IGD metric are sourced from its original study. For the SP and MS metrics, the results are calculated by implementing MOGOA (referred to as MOGOA*), since only graphical representations were provided in the original study.
Upon analyzing Table 2, it is evident that SAMOGOA outperforms MOGOA across all three test functions. However, CSAMOGOA also demonstrates comparable results. The low values of the IGD metric indicate the accuracy of the algorithms in approximating the Pareto optimal front or set. Consequently, these results signify the good convergence of SAMOGOA towards Pareto optimal solutions.
Examining the results in Table 3 makes clear that CSAMOGOA outperforms the other two algorithms in terms of the Spacing (SP) measure on all test functions, demonstrating a good distribution of solutions along the obtained Pareto front, while SAMOGOA still produces better results than its parent algorithm, MOGOA. This observation is further supported by the graphs in Fig. 2, which depict the performance on the three ZDT functions: SAMOGOA generates Pareto fronts with more solutions and a good distribution for ZDT1 and ZDT3, whereas CSAMOGOA performs better than SAMOGOA on ZDT2, although it is unable to surpass MOGOA there.
Table 4 shows that SAMOGOA achieves better results for the MS metric on the ZDT1 function, but its performance on the other two functions is weaker; this degradation may be due to the nature of those functions. The results on the ZDT test suite demonstrate that the Pareto optimal solution sets obtained by SAMOGOA have better diversity, convergence, and a more uniform distribution of solutions along the Pareto front, making it the best among the three algorithms; its chaotic variant, on the other hand, was unable to perform as well as expected. The graphical results of the Pareto fronts obtained by the different algorithms are shown in Fig. 3, illustrating the superiority of the newly proposed algorithm.
4.5 Results on CEC2009 test suite
The ZDT problems are all bi-objective and uni-modal, and the first function is a simple linear function, so the CEC2009 test functions are used to further benchmark the performance of the newly proposed algorithms. Although the test functions in this suite are all unconstrained, they are among the more complex, difficult, and diverse problems in the MOP literature used to verify the superiority of algorithms. Seven functions (UF1-UF7) are bi-objective and three (UF8-UF10) are tri-objective, with mathematical definitions in Table 5. The results for these test functions are shown in Tables 6, 7, and 8 and compared. Examining Table 6 shows that SAMOGOA and CSAMOGOA produce better results on all of the UF test functions compared to the original MOGOA. The small IGD values produced by these algorithms provide evidence of fast and accurate convergence towards the Pareto optimal fronts. A detailed look at these results reveals that the chaotic version of the newly proposed algorithm performed better on most of the bi-objective CEC2009 problems, namely UF1, UF2, UF3, UF4, and UF7. On the other hand, SAMOGOA is the winner on the functions with three objectives.
Both of the newly proposed algorithms have shown superiority in terms of convergence. To ensure that coverage is also good, we need to inspect Tables 7 and 8 for the other two performance measures. The results obtained by the new algorithms demonstrate that the SP and MS values are improved, or almost the same as those of MOGOA, indicating that the proposed algorithms have the advantage of better coverage as well.
Upon closer examination of the results, it becomes evident that SAMOGOA is a highly effective and valuable algorithm compared to other algorithms. Its effectiveness can be attributed to its fast convergence and high coverage. The fast convergence is achieved by incorporating target selection from MOGOA and using SA to escape local optima, thus supporting the global optimization process. The superior coverage of SAMOGOA is a result of proper selection pressure built through SA-assisted target selection archive maintenance, which improves diversity and spread of solutions across all objectives.
While SAMOGOA has demonstrated promising results, it shares a limitation with its parent algorithm, MOGOA, in that it is more suitable for problems with a small number of objectives. The algorithm's selection process relies on the Pareto dominance relation among different solutions, and as the number of non-dominant points increases, the archive fills up quickly, leading to slower optimization. Increasing the size of the archive to accommodate more objectives may hinder the algorithm's ability to converge to the true Pareto front. Therefore, SAMOGOA is most applicable to problems with a maximum of four objectives and continuous variables.
The movement of grasshoppers in SAMOGOA is influenced by the best non-dominated solutions in the archive and guided by changes in temperature during SA, enhancing both convergence and coverage. The results of the study provide evidence that SAMOGOA is highly competent for multi-objective optimization problems, as it combines the capabilities of both local and global search to strike a balance between exploration and exploitation. This enables the swarm to explore the entire solution space by controlling attraction/repulsion forces and maintaining a comfort area between grasshoppers.
5 Conclusion and future works
This research introduces two novel techniques to enhance the performance of the Multi-Objective Grasshopper Optimization Algorithm (MOGOA). The first technique involves using Simulated Annealing (SA) to perturb the parameter 'c' multiple times, evaluating the fitness values of the grasshoppers at the end of each SA iteration, and selecting the best value of 'c' based on a comparison with the previous iteration. This selected value of 'c' is then utilized to control the optimization process of SAMOGOA.
The second technique combines chaos theory and SA to modify 'c', taking advantage of their favorable properties. To assess the effectiveness of the proposed algorithms, several benchmark test functions are employed, and their performance is compared against MOGOA. The results demonstrate that these new algorithms outperform MOGOA in terms of convergence and coverage.
Initially, the algorithms are tested on unconstrained continuous optimization problems, but they can be adapted to handle constraints and discrete variables in future work. The benchmark functions used in the evaluation have either two or a maximum of three objectives. Furthermore, the proposed algorithms can be extended to address many-objective optimization problems and tackle large-scale practical problems.
Currently, the same value of 'c' is utilized to control both exploration and exploitation, as well as attraction/repulsion forces and the comfort area between grasshoppers. However, it is also possible to explore the use of two different values of 'c' to fine-tune the algorithms. In summary, this study introduces innovative techniques that combine SA, symmetric perturbation, and chaos theory to improve the optimization process in multi-objective problems, showcasing superior performance compared to MOGOA. The algorithms have the potential to be applied to a wide range of practical applications, including engineering and various real-world scenarios.
Data availability
Data sharing is not applicable to this article, as the paper generates and analyzes simulations carried out in Matlab.
References
Gharehchopogh FS, Abdollahzadeh B. An efficient harris hawk optimization algorithm for solving the travelling salesman problem. Clust Comput. 2022;25(3):1981–2005.
Osaba E, Villar-Rodriguez E, Del Ser J, Nebro AJ, Molina D, LaTorre A, Herrera F. A tutorial on the design, experimentation and application of metaheuristic algorithms to real-world optimization problems. Swarm Evolut Comput. 2021;64:100888.
Huang W, Zhang Y, Li L. Survey on multi-objective evolutionary algorithms. J phys Conf Series. 2019;1288(1):012057.
Gu ZM, Wang GG. Improving NSGA-III algorithms with information feedback models for large-scale many-objective optimization. Futur Gener Comput Syst. 2020;107:49–69.
He Z, Yen GG, Yi Z. Robust multiobjective optimization via evolutionary algorithms. IEEE Trans Evol Comput. 2018;23(2):316–30.
Demir K, Nguyen BH, Xue B, Zhang M. A decomposition-based multi-objective evolutionary algorithm with ReliefF-based local search and solution repair mechanism for feature selection. In: 2020 IEEE congress on evolutionary computation (CEC). IEEE. 2020;1–8.
Morales-Castañeda B, Zaldivar D, Cuevas E, Fausto F, Rodríguez A. A better balance in metaheuristic algorithms: does it exist? Swarm Evol Comput. 2020;54:100671.
Salih SQ, Alsewari AA. A new algorithm for normal and large-scale optimization problems: nomadic people optimizer. Neural Comput Appl. 2020;32(14):10359–86.
Hussain K, Salleh MNM, Cheng S, Shi Y. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput Appl. 2019;31(11):7665–83.
Salgotra R, Singh U, Saha S. New cuckoo search algorithms with enhanced exploration and exploitation properties. Expert Syst Appl. 2018;95:384–420.
Primeau N, Falcon R, Abielmona R, Petriu EM. A review of computational intelligence techniques in wireless sensor and actuator networks. IEEE Commun Surveys Tutorials. 2018;20(4):2822–54.
Houssein EH, Mahdy MA, Shebl D, Mohamed WM. A survey of metaheuristic algorithms for solving optimization problems. In: Metaheuristics in machine learning: theory and applications. Cham: Springer; 2021.
Rauf HT, Bangyal WHK, Lali MI. An adaptive hybrid differential evolution algorithm for continuous optimization and classification problems. Neural Comput Appl. 2021;33(17):10841–67.
Oliveira PM, Solteiro Pires EJ, Boaventura-Cunha J, Pinho TM. Review of nature and biologically inspired metaheuristics for greenhouse environment control. Trans Inst Meas Control. 2020;42(12):2338–58.
Godzik M, Dajda J, Kisiel-Dorohinicki M, Byrski A, Rutkowski L, Orzechowski P, Moore JH. Applying autonomous hybrid agent-based computing to difficult optimization problems. J Comput Sci. 2022. https://doi.org/10.1016/j.jocs.2022.101858.
Talbi EG. Machine learning into metaheuristics: a survey and taxonomy. ACM Comput Surveys (CSUR). 2021;54(6):1–32.
Moayedi H, Le Van B. The applicability of biogeography-based optimization and earthworm optimization algorithm hybridized with ANFIS as reliable solutions in estimation of cooling load in buildings. Energies. 2022;15(19):7323.
Abbasi A, Firouzi B, Sendur P, Heidari AA, Chen H, Tiwari R. Multi-strategy Gaussian Harris hawks optimization for fatigue life of tapered roller bearings. Eng Comput. 2022;38:4387–413.
Aldosari F, Abualigah L, Almotairi KH. A Normal distributed dwarf mongoose optimization algorithm for global optimization and data clustering applications. Symmetry. 2022;14(5):1021.
Gharehchopogh FS, Gholizadeh H. A comprehensive survey: whale optimization algorithm and its applications. Swarm Evol Comput. 2019;48:1–24.
Xiong H, Qiu B, Liu J. An improved multi-swarm particle swarm optimizer for optimizing the electric field distribution of multichannel transcranial magnetic stimulation. Artif Intell Med. 2020;104:101790.
Xin J, Li S, Sheng J, Zhang Y, Cui Y. Application of improved particle swarm optimization for navigation of unmanned surface vehicles. Sensors. 2019;19(14):3096.
Dereli S, Köker R. Strengthening the PSO algorithm with a new technique inspired by the golf game and solving the complex engineering problem. Complex Intell Syst. 2021;7(3):1515–26.
Marzoughi A, Savkin AV. Autonomous navigation of a team of unmanned surface vehicles for intercepting intruders on a region boundary. Sensors. 2021;21(1):297.
Karim AA, Isa NAM, Lim WH. Modified particle swarm optimization with effective guides. IEEE Access. 2020;8:188699–725.
Ji Y, Liew AWC, Yang L. A novel improved particle swarm optimization with long-short term memory hybrid model for stock indices forecast. IEEE Access. 2021;9:23660–71.
Zhang J, Sheng J, Lu J, Shen L. UCPSO: a uniform initialized particle swarm optimization algorithm with cosine inertia weight. Comput Intell Neurosci. 2021. https://doi.org/10.1155/2021/8819333.
Lee JH, Delbruck T, Pfeiffer M. Training deep spiking neural networks using backpropagation. Front Neurosci. 2016;10:508.
Mirjalili SZ, Mirjalili S, Saremi S, Faris H, Aljarah I. Grasshopper optimization algorithm for multi-objective optimization problems. Appl Intell. 2018;48(4):805–20.
Del Ser J, Osaba E, Molina D, Yang XS, Salcedo-Sanz S, Camacho D, Herrera F. Bio-inspired computation: Where we stand and what’s next. Swarm Evolut Comput. 2019;48:220–50.
Okuyama T, Hayashi M, Yamaoka M. An Ising computer based on simulated quantum annealing by path integral Monte Carlo method. In: 2017 IEEE international conference on rebooting computing (ICRC). IEEE. 2017;1–6.
Vökler S, Baier D. Investigating machine learning techniques for solving product-line optimization problems. Archives Data Sci. 2020;6:1–11.
Pilatowsky-Cameo S, Villaseñor D, Bastarrachea-Magnani MA, Lerma-Hernández S, Santos LF, Hirsch JG. Ubiquitous quantum scarring does not prevent ergodicity. Nat Commun. 2021;12(1):1–8.
Asghari K, Masdari M, Gharehchopogh FS, Saneifard R. A chaotic and hybrid gray wolf-whale algorithm for solving continuous optimization problems. Progress Artificial Intell. 2021;10(3):349–74.
Sayed GI, Khoriba G, Haggag MH. A novel chaotic salp swarm algorithm for global optimization and feature selection. Appl Intell. 2018;48(10):3462–81.
Schaffer JD. Multiple objective optimization with vector evaluated genetic algorithms. In: Proceedings of the 1st international conference on genetic algorithms, New Jersey, USA; 1985.
Srinivas N, Deb K. Multiobjective optimization using Nondominated sorting in genetic algorithms. Evol Comput. 1994;2(3):221–48.
Cao YJ, Wu QH. Teaching genetic algorithm using Matlab. Int J Electr Eng Educ. 1999;36:139–53.
Fonseca CM, Fleming PJ. Multiobjective genetic algorithms. In IEEE colloquium on genetic algorithms for control systems engineering. 1993;6–1.
Fonseca CM, Fleming PJ. Genetic algorithms for multiobjective optimization: formulation, discussion and generalization. In: Proceedings of the 5th international conference on genetic algorithms. 1993;416–23.
Yang XS, He X. Bat algorithm: literature review and applications. Int J Bio-Inspired Comput. 2013;5(3):141–9.
Deb K, Agrawal S, Pratap A, Meyarivan T. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: Parallel problem solving from nature PPSN VI: 6th international conference, Paris, France, September 18–20, 2000, proceedings. 2000;849–58.
Coello CC, Lechuga MS. MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of the congress on evolutionary computation. 2002;2:1051–6.
Pulido GT, Coello CA. Using Clustering Techniques to Improve the Performance of a Multi-objective Particle Swarm Optimizer. In: Deb K, editor. Genetic and Evolutionary Computation–GECCO 2004: Genetic and Evolutionary Computation Conference. Seattle: Springer; 2004.
Leong WF, Yen GG. Dynamic population size in PSO-based multiobjective optimization. In: IEEE congress on evolutionary computation, Vancouver. 2006;1718–25.
Sakib N, Kabir MWU, Subbir M, Alam S. A comparative study of flower pollination algorithm and bat algorithm on continuous optimization problems. Int J Soft Comput Eng. 2014;4:13–9.
Yang XS, Karamanoglu M, He X. Flower pollination algorithm: a novel approach for multiobjective optimization. Eng Optim. 2014;46(9):1222–37.
Paramasivan P, Santhi RK. Non-dominated sorting flower pollination algorithm for dynamic economic emission dispatch. Int J Comput Appl. 2015;130(9):19–26.
Bensouyad M, Saidouni DE. A discrete flower pollination algorithm for graph coloring problem. In IEEE 2nd international conference on cybernetics (CYBCONF). 2015.
Kabir MN, Ali J, Alsewari AA, Zamli KZ. An adaptive flower pollination algorithm for software test suite minimization. In 2017 3rd international conference on electrical information and communication technology (EICT). 2017;1–5.
Kabir MWU, Sakib N, Chowdhury SMR, Alam MS. A novel adaptive bat algorithm to control explorations and exploitations for continuous optimization problems. Int J Comput Appl. 2014. https://doi.org/10.5120/16402-6079.
Wang G-G, Gandomi AH, Yang X-S, Alavi AH. A new hybrid method based on krill herd and cuckoo search for global optimisation tasks. Int J Bio-Inspired Comput. 2016;8(5):286–99.
Mirjalili S, Jangir P, Saremi S. Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Appl Intell. 2017;46(1):79–95.
Dorigo M, Birattari M, Stutzle T. Ant colony optimization. IEEE Comput Intell Mag. 2006;1(4):28–39.
Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim. 2007;39(3):459–71.
Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim. 1997;11(4):341–59.
Nicoară E. Performance measures for multi-objective optimization algorithms. Buletinul Universităţii Petrol–Gaze din Ploieşti, Seria Matematică-Informatică-Fizică. 2007;59(1):19–28.
Rahman CM, Rashid TA. A new evolutionary algorithm: learner performance based behavior algorithm. Egypt Inf J. 2021;22(2):213–23.
Rahman CM, Rashid TA. Dragonfly algorithm and its applications in applied science survey. Comput Intell Neurosci. 2019;2019:9293617.
Ahmed AM, Rashid TA, Saeed SM. Cat swarm optimization algorithm: a survey and performance evaluation. Comput Intell Neurosci. 2020;2020:20.
Hassan BA, Rashid TA. Operational framework for recent advances in backtracking search optimisation algorithm: a systematic review and performance evaluation. Appl Math Comput. 2019;370:124919.
Shamsaldin AS, Rashid TA, Al-Rashid RA, Al-Salihi NK, Mohammadi M. Donkey and smuggler optimization algorithm: a collaborative working approach to path finding. J Comput Design Eng. 2019;6:562–83.
Abdullah JM, Rashid T. Fitness dependent optimizer: inspired by the bee swarming reproductive process. IEEE Access. 2019;7:43473–86.
Muhammed DA, Saeed SAM, Rashid TA. Improved fitness-dependent optimizer algorithm. IEEE Access. 2020;8:19074–88.
Sharifai AG, Zainol ZB. Multiple filter-based rankers to guide hybrid grasshopper optimization algorithm and simulated annealing for feature selection with high dimensional multi-class imbalanced datasets. IEEE Access. 2021;9:74127–42. https://doi.org/10.1109/ACCESS.2021.3081366.
Rashno A, Shafipour M, Fadaei S. Particle ranking: an efficient method for multi-objective particle swarm optimization feature selection. Knowl-Based Syst. 2022;245:108640.
Shafipour M, Rashno A, Fadaei S. Particle distance rank feature selection by particle swarm optimization. Expert Syst Appl. 2021;185:115620.
Gharehchopogh FS, Ucan A, Ibrikci T, Arasteh B, Isik G. Slime mould algorithm: a comprehensive survey of its variants and applications. Archives Comput Methods Eng. 2023. https://doi.org/10.1007/s11831-023-09883-3.
Zaman HRR, Gharehchopogh FS. An improved particle swarm optimization with backtracking search optimization algorithm for solving continuous optimization problems. Eng Comput. 2022;38(Suppl 4):2797–831.
Shishavan ST, Gharehchopogh FS. An improved cuckoo search optimization algorithm with genetic algorithm for community detection in complex networks. Multimedia Tools Appl. 2022;81(18):25205–31.
Gharehchopogh FS. Quantum-inspired metaheuristic algorithms: comprehensive survey and classification. Artif Intell Rev. 2023;56(6):5479–543.
Mohammadzadeh H, Gharehchopogh FS. Feature selection with binary symbiotic organisms search algorithm for email spam detection. Int J Inf Technol Decis Mak. 2021;20(01):469–515.
Acknowledgements
This work is partially funded by the Brazilian National Council for Scientific and Technological Development (CNPq), via Grant No. 313036/2020-9.
Author information
Authors and Affiliations
Contributions
F.S. Conceptualization; M.R. Supervision; A.Z. review; K.Z. Methodology; A.K.D funding and Editing; B.F. Investigation; A.A. Software, Writing—review & editing; S.R Review & editing; J.P.C.R Review.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sajjad, F., Rashid, M., Zafar, A. et al. An efficient hybrid approach for optimization using simulated annealing and grasshopper algorithm for IoT applications. Discov Internet Things 3, 7 (2023). https://doi.org/10.1007/s43926-023-00036-3