Abstract
The Mountain Gazelle Optimizer (MGO) has become one of the most prominent swarm-inspired meta-heuristic algorithms owing to its rapid convergence and high accuracy. However, MGO still suffers from premature convergence, making it difficult to escape local optima when early best solutions neglect relevant regions of the search domain. Therefore, this study proposes a Chaotic-based Mountain Gazelle Optimizer (CMGO) that employs numerous chaotic maps to overcome these flaws. Ten distinct chaotic maps were incorporated into MGO to tune its coefficient values and enhance the exploitation of the most promising solutions. The performance of CMGO has been evaluated on the CEC2005 and CEC2019 benchmark functions, along with four engineering problems. Statistical tests such as the t-test and the Wilcoxon rank-sum test provide further evidence that the proposed CMGO outperforms existing eminent algorithms. Hence, the experimental outcomes demonstrate that CMGO produces successful and promising results.
1 Introduction
Recently, several challenging non-linear optimization problems have been addressed by employing non-conventional meta-heuristic algorithms inspired by natural phenomena, as conventional algorithms often fail to yield the desired results in certain situations [1]. Meta-heuristic Algorithms (MAs) mimic the social behavior of animals or natural systems to find the optimal solution for a specific problem, generating a random population through stochastic processes and refining it over iterations [2]. The majority of MAs may be categorized into two main groups: firstly, those influenced by biological processes inherent in nature, and secondly, those entirely dependent on natural phenomena. Several meta-heuristics proposed over the past 20 years have attracted many scholars. Some of the MAs influenced by biological processes are: Evolutionary Programming (EP) [3], Evolutionary Strategy (ES) [4], Genetic Algorithm (GA) [5], Genetic Programming (GP) [6], Differential Evolution (DE) [7], Particle Swarm Optimization (PSO) [8], Ant Colony Optimization (ACO) [9], Cuckoo Search (CS) [10], Artificial Bee Colony (ABC) [11], Grey Wolf Optimizer (GWO) [12], Whale Optimization Algorithm (WOA) [13], Salp Swarm Algorithm (SSA) [14], Moth-Flame Optimization (MFO) [15], Ant Lion Optimizer (ALO) [16], Aquila Optimizer (AO) [17], White Shark Optimizer (WSO) [18], American Zebra Optimization Algorithm (AZOA) [19], and Modified Whale Optimisation Algorithm (MWOA) [20].
Numerous MAs that are dependent on natural phenomena are: Simulated Annealing (SA) [21], Big Bang-Big Crunch (BBBC) [22], Gravitational Search Algorithm (GSA) [23], Black Hole Algorithm (BHA) [24], Sine Cosine Algorithm (SCA) [25], Multi-verse Optimizer (MVO) [26], Equilibrium Optimizer (EO) [27], Tabu Search (TS) [28], Harmony Search (HS) [29], Teaching Learning-Based Optimization (TLBO) [30], Success-History based Adaptive DE with linear population size reduction (LSHADE) [31], Covariance Matrix Adaptation Evolution Strategy (CMAES) [32], Biogeography-Based Optimization (BBO) [33], Farmland Fertility Algorithm (FFA) [34], Cosine Swarm Algorithm (CSA) [35], and Young Double-Slit Experiment optimizer (YDSE) [36].
Despite the fact that most of the aforementioned MAs have been extensively deployed to address optimization problems, poor convergence and getting stuck in local optima are still prevalent problems. Thus, the growth of thought-provoking optimization problems has led to the development of new optimization algorithms that consistently generate better outcomes and amend existing techniques. The "No Free Lunch" (NFL) theorem [37] applies in these circumstances: it demonstrates that no optimization technique can effectively address every optimization challenge. Motivated by the NFL theorem, several scholars have proposed many nature-inspired meta-heuristic algorithms. This study considers the Mountain Gazelle Optimizer (MGO) [38], proposed by Abdollahzadeh et al. in 2022, a compelling MA and an alternative for global optimization. MGO draws its inspiration from the hierarchy and social organization of wild mountain gazelles and is mathematically simulated on this basis. The algorithm is modeled using the fundamental factors of territorial solitary males, maternity herds, bachelor male herds, and migration in search of food. The exploitation (intensification) and exploration (diversification) stages of MGO are performed concurrently using four processes. Despite MGO's superior performance, simulations show that early iterations may become stuck in less desirable domains of the search space when handling high-dimensional problems, indicating that solutions may lack diversity even though interesting domains are discovered after a few cycles [39]. Examining the advantages and disadvantages of the MGO algorithm and improving its effectiveness by implementing new or modified mechanisms has become a fascinating research challenge [40, 41]. Chaos theory is one such fascinating concept; it is widely used in non-linear dynamic systems because of its ergodicity, unpredictability, and regularity properties.
Chaos theory has been effectively applied in technical optimization procedures, as the diversity of its initial populations helps avoid trapping in local optima and early convergence, making it an effective tool in various applications [42]. However, the literature has not yet offered a strong theoretical foundation for why chaotic maps improve the effectiveness of meta-heuristic algorithms; nevertheless, numerous chaotic maps have been applied to enhance variants of meta-heuristic algorithms [43]. Inspired by the benefits of chaotic maps and the shortcomings of MGO explained above, this paper proposes a unique chaos-based algorithm termed the Chaotic Mountain Gazelle Optimizer (CMGO). The chaotic maps are incorporated into MGO to enhance its diversification and intensification while preventing premature convergence and local optimum traps. A detailed experimental study is carried out later in the article to demonstrate the robustness of the newly designed CMGO approach. The main contributions of the paper are as follows:
- In this work, the MGO algorithm is combined with ten distinct chaotic maps to introduce the CMGO technique for the first time.
- Chaos theory enhances the exploitation ability of the classical MGO and helps it avoid falling into local optimal solutions.
- The effectiveness of the CMGO approach has been assessed on 23 standard benchmarks from CEC2005 and 10 complex functions from CEC2019.
- To validate that the suggested CMGO is statistically superior, the t-test and the Wilcoxon rank-sum test have been performed.
- To evaluate the proposed CMGO algorithm's problem-solving capability, four real engineering design challenges are addressed.
The remainder of the paper is organized as follows: Sect. 2 describes related works on modified MAs and the hybridization of chaotic maps. Section 3 gives details of the Mountain Gazelle Optimizer and chaotic maps. Section 4 embodies the description of the projected CMGO algorithm. The numerical experiments and result analysis are established in Sect. 5. Section 6 describes the implementation of the CMGO algorithm to solve practical engineering challenges, and Sect. 7 presents the final conclusion and suggests a few ideas for further research.
2 Related Work
An efficient meta-heuristic strikes a good balance between exploration and exploitation to preserve population diversity and increase the algorithm's reliability and rate of convergence. Hence, several attempts have been made in the past to increase the efficiency of existing meta-heuristics. The African Vultures Optimization Algorithm (AVOA) has been enhanced with a Quantum Rotation Gate mechanism that increases population diversity and improves escape from local traps [44]. A ResNet50 Convolutional Neural Network (CNN) model hybridized with Particle Swarm Optimization (PSO) has been proposed for distinguishing different subtypes [45]. To overcome the shortcomings of WOA, an alternative based on multi-population evolution (MEWOA) has been proposed recently [46]. Similarly, chaotic functions are mostly used to balance the exploration and exploitation phases in meta-heuristic optimization algorithms; chaotic maps can increase their convergence speed and diversity. Chaos theory concerns a characteristic of nonlinear systems [47], defined as the randomness generated by deterministic systems. Many researchers have added chaos theory to different meta-heuristic optimization algorithms to increase their ability to obtain the optimum solution. In this section, some hybridized, modified, and chaos-based algorithms are discussed. Gupta et al. [48] proposed the novel OCS-GWO algorithm, which enhances the performance of the original GWO by introducing opposition-based learning to approximate candidate solutions closer to the global optimum and a chaotic local search to exploit the search regions efficiently. In OCS-GWO, the chaotic local search balances the exploration and exploitation operators that underlie any stochastic search algorithm. Kumar et al.
[49] proposed a chaotic teaching-learning-based algorithm to address premature convergence and the lack of trade-off between local and global search. Li et al. [50] suggested a chaotic arithmetic optimization algorithm (AOA) to improve the exploration and exploitation capabilities of AOA. First, ten chaotic maps were separately embedded into the two parameters, the Math Optimizer Accelerated (MOA) and the Math Optimizer Probability (MOP); then a combination test was carried out by embedding the ten chaotic maps into MOA and MOP at the same time. Kaur et al. [51] introduced chaos theory into WOA to enhance its global convergence speed. The fast random opposition-based learning Golden Jackal Optimization algorithm (FROBL-GJO) was introduced by Mohapatra et al. in 2023 [52], a new technique that enhances the precision and convergence speed of the GJO algorithm. Botnet detection in the Internet of Things is addressed by an innovative binary multi-objective dynamic Harris Hawks Optimization (HHO) improved with a mutation operator (MODHHO), proposed by Gharehchopogh et al. [53]. A multi-strategy improved Harris Hawks optimization method has been presented by Gharehchopogh for social network community discovery [54]. Chandran and Vanisree incorporated Enhanced Opposition-Based Learning into the Grey Wolf Optimizer to form EOBGWO, a novel technique that boosts the efficiency of the traditional GWO method [55]. An improved version of the Golden Jackal Optimization (GJO) algorithm incorporates the opposition-based learning (OBL) approach with a probability rate, enabling the algorithm to escape from local optima [56]. Gharehchopogh et al. [57] introduced a chaotic IAS (Interactive Autodidactic School) model that utilizes ten chaotic maps and an intra-cluster summation fitness function to enhance the results of the IAS algorithm.
An improved chaotic PSO algorithm based on adaptive inertia weight (AIWCPSO) enhances population diversity and particle periodicity by appropriately generating the initial population [58]. The Chaotic Marine Predators Algorithm (CMPA) is a novel metaheuristic introduced by Sumit et al. that optimizes engineering problems by combining the exploration capabilities of the Marine Predators Algorithm with the exploitation capabilities of chaotic maps [59]. Ten new chaotic maps have been incorporated into SHO to increase its performance by producing chaotic values instead of random ones, aiming to improve convergence speed and avoid local optima [60].
3 Overview of MGO and Chaotic Maps
3.1 Mountain Gazelle Optimizer (MGO)
Mountain Gazelle Optimizer is a novel nature-inspired algorithm proposed in 2022 [38], inspired by the social structure and hierarchy of wild mountain gazelles. A mathematical representation of the MGO algorithm has been developed from the fundamental ideas underlying the social and group behavior of mountain gazelles. The four primary aspects of the mountain gazelle's life—territorial solitary males, maternity herds, bachelor male herds, and migration in search of food—are employed by the MGO algorithm to execute optimization procedures. Figure 1 illustrates a herd of mountain gazelles, known for their continuous long-distance journeys in search of food. Indigenous to the Arabian Peninsula and its surroundings, these gazelles have a widespread distribution across the region, despite their low population density. Their habitat shares a close connection with the habitat of the Robinia tree species.
3.1.1 Territory Solitary Males
When mountain gazelle males are powerful enough and have reached adulthood, they establish solitary territories and fiercely protect them, with considerable distances separating the territories. Adult male gazelles engage in conflict over the ownership of territory or possession of females. The young males attempt to claim either the females or the territory, while mature males work to preserve their habitat. Below is an illustration of the territory of an adult male:
Here, the position vector of the best adult male is denoted malegazelle. The coefficient vector of the young male herd is identified as BH. \(ri_1\) and \(ri_2\) are random integers with values of 1 or 2. \(X_{ra}\) signifies a young male in the range ra, while \(M_{pr}\) denotes the average of \(\big \lceil \frac{N}{3}\big \rceil\) arbitrarily selected population members. \(Cof_i\) symbolizes a randomly generated coefficient vector that is updated after every iteration to increase the effectiveness of the search. The total number of gazelles is N, while \(r_1\), \(r_2\), \(r_3\) and \(r_4\) denote random values between 0 and 1. cos denotes the cosine function, whereas exp denotes the exponential function. \(N_1\) symbolizes a random number drawn from the standard normal distribution. \(N_2\), \(N_3\) and \(N_4\) represent random numbers drawn from the normal distribution over the dimensions of the problem. Finally, Maxit and it express the maximum number of iterations and the current iteration, respectively.
3.1.2 Maternity Herds
Mountain gazelles are a species that depend on maternity herds to produce strong male offspring, which is a crucial part of their life cycle. Adult males may also play a role during the birth of young, while young males attempt to seize the females. This behavior is articulated as:
Here, the random integers 1 or 2 are represented by \(ri_3\) and \(ri_4\). A gazelle is arbitrarily chosen from the whole population, and \(X_{rand}\) denotes its position vector.
3.1.3 Bachelor Male Herd
Male gazelles tend to establish territories and take control of female gazelles as they mature. Considerable violence may be involved when the young male gazelles engage in this conflict with the adult males for dominance over the female gazelles' territory. The mathematical model of this behavior is expressed below:
Here, the random integers 1 or 2 are denoted \(ri_5\) and \(ri_6\). The random number in the range 0 to 1 is denoted \(r_6\). The position vector of a gazelle in the current generation is indicated by the symbol X(t).
3.1.4 Migration in Search of Food
Mountain gazelles are perpetually on the lookout for sources of food and will travel large distances to seek food and migrate. In addition, mountain gazelles have a rapid running gait and strong leaping abilities. This behavior of gazelles has been described numerically using the equation below:
Here, \(r_7\) indicates a random number between 0 and 1. lb and ub denote the lower and upper bounds, respectively. An overview of the optimization method based on the four MGO components is displayed in Fig. 2. The MGO algorithm operates the exploitation and exploration phases simultaneously by employing the four mechanisms in parallel, according to its inherent nature.
3.2 Chaotic Maps
Chaos is a phenomenon that can exhibit non-linear behavior changes in response to minor changes in its initial conditions. It may be defined mathematically as the apparent randomness of a fundamentally deterministic dynamic system, and can be regarded as a source of randomness. It possesses the characteristics of non-repetition and ergodicity; as a result, it can search at a faster rate than random searches, which rely largely on probability. Chaos nevertheless exhibits regularity, as it is produced by specified functions called chaotic functions or chaotic maps. These chaotic maps are extensively utilized in numerical analysis, image encryption, and cryptology, as well as in modeling complex problems in the domains of medicine, ecology, economics, and engineering [61]. In this study, chaotic sequences are produced using 10 discrete non-invertible one-dimensional maps. Table 1 summarizes the mathematical models of these well-known chaotic maps. Figure 3 illustrates how the deterministic chaotic map equations produce stochastic-looking variations.
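As a concrete illustration, several of these one-dimensional maps can be written down directly. The following is a minimal Python sketch of four of the ten maps; the parameter choices (the logistic map's r = 4, the tent map's breakpoint 0.7, the Chebyshev order 4) are common defaults and may differ from the exact settings listed in Table 1:

```python
import math

def logistic(x, r=4.0):
    # Logistic map: fully chaotic on (0, 1) at r = 4
    return r * x * (1.0 - x)

def tent(x, mu=0.7):
    # Tent map with breakpoint mu
    return x / mu if x < mu else (1.0 - x) / (1.0 - mu)

def sine(x):
    # Sine map scaled to stay within (0, 1]
    return math.sin(math.pi * x)

def chebyshev(x, k=4):
    # Chebyshev map; input and output lie in [-1, 1]
    return math.cos(k * math.acos(max(-1.0, min(1.0, x))))

def chaotic_sequence(update, x0=0.7, n=5):
    """Iterate a map from initial point x0 and collect n values."""
    seq, x = [], x0
    for _ in range(n):
        x = update(x)
        seq.append(x)
    return seq
```

Iterating any of these maps from a non-degenerate initial point yields a bounded, non-repeating sequence that can be substituted for uniform random draws.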
4 Proposed Chaotic MGO (CMGO)
Chaotic maps have a significant impact on the quality of the solutions generated by optimization algorithms. Despite its good convergence rate, MGO can still fail to outperform other methods in locating the global optimum. Therefore, the CMGO algorithm was established by incorporating chaos into the MGO algorithm to improve the efficiency of the search for the global optimum and prevent entrapment in local optima. In the MGO algorithm, four strategies are applied to carry out the exploration (diversification) and exploitation (intensification) stages simultaneously. According to observations in MGO, the coefficient vector (\(Cof_i\)) is crucial for maintaining an appropriate balance between the intensification and diversification phases, and MGO's exploitation and convergence speed are significantly improved by incorporating chaotic maps. The mathematical expression of the three strategies is given by Eqs. (1), (6), and (7). To enhance the exploration and exploitation phases of the MGO algorithm, the proposed CMGO approach generates the values of these coefficient variables using 10 distinct chaotic maps, as displayed in Fig. 4. Chaos variables generated by the chaotic maps are used instead of the coefficient factors. The values acquired from a chaotic map follow a deterministic sequence, whereas the original coefficient factors are purely random; this is why chaotic maps can improve the efficacy of optimization techniques [49]. Mathematically, Eqs. (1), (6) and (7) of MGO have been transformed into Eqs. (10), (11) and (12), resulting in the proposed CMGO.
Here, the chaotic variable Chas(t) is utilized instead of the coefficient vector. Ten different chaotic maps are used in the formation of CMGO. Algorithm 1 presents the CMGO pseudo-code, and Fig. 5 displays the flowchart of the proposed CMGO algorithm.
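Since Eqs. (10)–(12) retain the full structure of the MGO update rules, a compressed sketch helps show where the chaotic variable enters. The population update below is a deliberately simplified stand-in (a single attraction-toward-best move with small Gaussian perturbation, not the four MGO mechanisms), and all names are illustrative; only the substitution of Chas(t) for the random coefficient reflects the actual CMGO design:

```python
import random

def chaotic_step(x, r=4.0):
    # Logistic map as the chaos generator (one of the ten candidate maps)
    return r * x * (1.0 - x)

def cmgo_sketch(fitness, dim, n_gazelles=30, max_it=500,
                lb=-10.0, ub=10.0, seed=1):
    """Simplified population loop: a chaotic value stands in for Cof_i.

    The real CMGO applies four mechanisms in parallel (Eqs. 10-12); here a
    single attraction-toward-best move illustrates only the substitution
    of the chaotic variable Chas(t) for the random coefficient.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_gazelles)]
    best = min(pop, key=fitness)
    chas = 0.7  # initial point of the chaotic sequence
    for _ in range(max_it):
        chas = chaotic_step(chas)  # Chas(t) replaces the random coefficient
        for i, x in enumerate(pop):
            cand = [xj + chas * (bj - xj) + 0.01 * rng.gauss(0.0, 1.0)
                    for xj, bj in zip(x, best)]
            cand = [min(ub, max(lb, c)) for c in cand]  # clamp to bounds
            if fitness(cand) < fitness(x):              # greedy replacement
                pop[i] = cand
        best = min(pop + [best], key=fitness)
    return best

sphere = lambda v: sum(c * c for c in v)
```

Running `cmgo_sketch(sphere, dim=3)` drives the population toward the origin; the deterministic-but-erratic chas values play the role that uniform random draws play in the baseline loop.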
5 Numerical Experiments and Result Analysis
This section presents the outcomes of the simulations and a comprehensive analysis of the newly introduced CMGO algorithm for addressing global optimization problems in less time and with greater accuracy. The proposed CMGO was assessed over 33 distinct benchmark functions (23 from CEC2005 [62] and 10 from CEC2019 [63]) and applied to four real-world engineering design problems to measure its performance. The chaotic maps were implemented in MGO individually to generate CMGO1 to CMGO10, which correspond to the Chebyshev, circle, Gauss/mouse, iterative, logistic, piecewise, sine, singer, sinusoidal, and tent maps, respectively, and each was compared with MGO. Furthermore, the validation of the CMGO algorithm involved both the t-test [64] and the Wilcoxon rank-sum test [65]; these statistical assessments were performed to validate the robustness and effectiveness of the proposed CMGO algorithm. The results obtained by CMGO across the 33 comparison functions were contrasted with those of the original MGO, alongside widely recognized meta-heuristic optimization algorithms from the literature such as PSO [8], GWO [12], DE [7], LSHADE [31], CMAES [32], FFA [34], and WSO [18]. For each function, 30 independent runs were conducted, employing 30 search agents and 500 iterations. The CMGO algorithm was tested and executed in MATLAB R2021b on an 11th Gen Intel Core CPU at 2.42 GHz with 16 GB of RAM running Windows 11.
5.1 Test Functions and Parameter Settings
A test function is often described as an artificial problem that may be employed to estimate how well an algorithm works under various challenging conditions. For the purpose of assessing CMGO performance, a set of 23 benchmark functions from CEC2005 [62] is separated into three groups. The first group, the unimodal functions (F1–F7), which have one global optimum, is utilized to determine the exploitation (intensification) ability of the algorithm. The second group, the multi-modal functions (F8–F13), which have multiple local extrema, is employed to evaluate an algorithm's capacity to avoid stagnation in sub-optimal regions. Finally, the third group, the fixed-dimension multi-modal functions (F14–F23), has several optimal points and fewer local minima, and is used to assess the algorithm's exploration (diversification) ability. The definitions of the benchmark functions and their global optimum values are presented in Tables 2, 3 and 4.
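Representative members of the first two groups have simple closed forms. Assuming the usual CEC2005 numbering (so that F1 is the Sphere function and F9 is Rastrigin — an assumption to be checked against Tables 2 and 3), they can be written as:

```python
import math

def sphere(x):
    # F1 (unimodal): single global minimum of 0 at the origin
    return sum(v * v for v in x)

def rastrigin(x):
    # F9 (multi-modal): many local minima, global minimum of 0 at the origin
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)
```

The cosine term gives Rastrigin a regular grid of local minima, which is exactly what makes it a test of an algorithm's ability to escape sub-optimal regions.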
The 10 functions of CEC2019 [63] are significantly more complex than the standard functions. The evaluation methodology also emphasizes the need for reporting accurate outcomes: the mean and standard deviation (Std) are commonly used metrics for these benchmark functions, and the standard deviation analysis verifies that the performance of the algorithms remains consistent across the thirty different runs. Setting the algorithm parameters to their default values is a prudent and appropriate approach, as illustrated in Table 5.
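The per-function entries reported in the result tables are exactly these two quantities computed over the 30 per-run best values; a minimal helper (names here are illustrative) is:

```python
import statistics

def summarize_runs(run_best_values):
    """Mean and sample standard deviation of best fitness over repeated runs."""
    mean = statistics.fmean(run_best_values)
    std = statistics.stdev(run_best_values) if len(run_best_values) > 1 else 0.0
    return mean, std
```

A small standard deviation alongside a good mean indicates the algorithm's performance is stable across independent runs, which is the consistency property discussed above.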
5.2 Performance of Different Chaotic Maps with MGO
This subsection describes the approach of employing the 10 alternative chaotic maps separately to enhance and refine the performance of the MGO algorithm. The results of the different chaotic variants and the MGO algorithm on the 23 functions of CEC2005 and the 10 functions of CEC2019, with the average (Mean) and standard deviation (Std) for each test function, are presented in Tables 6 and 7, respectively, with the best mean values highlighted in boldface. As observed in Table 6, the optimization efficiency of CMGO on the unimodal functions F1–F7 is considerably increased, indicating that CMGO has excellent exploitation capability and can discover the best global solution (F1–F5 and F7). The best outcomes were achieved with CMGO2–CMGO10 for F1, CMGO3–CMGO10 for F2, CMGO1 for F3, CMGO8–CMGO9 for F4, CMGO6 and CMGO9 for F5, and CMGO10 for F7. In the group of multi-modal functions, F8, F9 and F11 show similar performance for MGO and all of the chaotic variants, while CMGO2–CMGO10 for F10 and CMGO6 for F13 outperform MGO. This demonstrates that the newly proposed CMGO has excellent exploration ability and flexibility when dealing with multi-modal functions. In the group of fixed-dimension multi-modal functions, F14 and F16 yield the same performance for MGO and all chaotic variants; except for F15 and F18, CMGO1 achieves better results than MGO. These enhancements increase the algorithm's optimization performance and convergence speed. Moreover, the optimization accuracy of the CMGO method on the CEC2019 functions is improved to varying degrees compared with the original MGO algorithm. On the basis of the results in Table 7, CMGO1 for cec02, cec03 and cec06, along with CMGO2 for cec10, achieved the best outcomes when compared to MGO.
Therefore, from the aforementioned experimental research, the projected chaotic MGO (CMGO) performs better on most functions and is appropriate for evaluating exploration (diversification) and exploitation (intensification) capability.
5.3 Performance of CMGO with Other State-of-the-Art Algorithms
This study compares the performance and overall efficiency of the proposed CMGO technique against several top-performing algorithms, including PSO, GWO, DE, LSHADE, CMAES, FFA, WSO, and MGO. The outcomes presented in Table 8 demonstrate that the newly designed CMGO algorithm outperformed the comparative algorithms on the majority of the first group of unimodal functions (F1–F5 and F7); the CMGO approach is therefore well suited to assessing exploitation potential. On the second group of multi-modal functions, the suggested CMGO technique produces competitive results for F9, F11, and F13, whereas WSO and MGO offer superior results for F8 and F12, respectively. This reveals that the projected CMGO has outstanding exploration ability and adaptability for addressing multi-modal functions. On the group of fixed-dimension multi-modal functions, the designed CMGO algorithm achieved superior results for F14, F21, F22, and F23, and comparable results for F16 and F17. The experimental investigation thus illustrates how the CMGO algorithm appropriately blends exploitation and exploration. However, Table 9 reveals that the proposed CMGO algorithm performs better only on a few functions, such as cec02, cec03, and cec10, whereas LSHADE performs better than CMGO on the remainder. In general, the suggested CMGO delivers complementary benefits that improve global search capacity, enabling it to identify an accurate solution.
5.4 Sensitivity Analysis
The sensitivity analysis of the CMGO algorithm is examined to assess the quality of the proposed CMGO. Here, the impact of varying the parameters of CMGO on its operation is investigated. The initial point of the chaotic map is the only major parameter in CMGO; hence, several scenarios are constructed in which this value is set to 0.1, 0.3, or 0.7. The test functions selected for the experiment are F1, F4, and F7. The graphical and statistical results for each of the conditions are presented in Fig. 6 and Table 10, respectively. The outcomes demonstrate that the CMGO approach yields excellent results when the initial point (IP) is set to 0.7.
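The influence of the initial point stems from the sensitive dependence of chaotic sequences on initial conditions: trajectories started from nearby values separate rapidly, so the IP shapes the entire coefficient sequence. A short sketch using the logistic map as a representative generator (the choice of map and the perturbation size are illustrative, not taken from the paper) makes this visible:

```python
def logistic_traj(x0, n, r=4.0):
    """Iterate the logistic map n times from initial point x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two nearby initial points separate rapidly (sensitive dependence)
a = logistic_traj(0.7, 20)
b = logistic_traj(0.7001, 20)
separation = max(abs(x - y) for x, y in zip(a, b))
```

An initial gap of 1e-4 grows to an order-of-magnitude larger separation within a couple of dozen iterations, which is why the three tested IP values (0.1, 0.3, 0.7) can lead to measurably different optimization behavior.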
5.5 Statistical Analysis
The purpose of the statistical tests is to identify significant differences between the suggested CMGO approach and other well-established methods. In this work, the benchmark results of all algorithms are statistically analyzed employing the commonly used t-test [64] and the non-parametric Wilcoxon rank-sum test [65]. The t-values for each function are determined by considering both approaches simultaneously and are presented in Tables 8 and 9; the last row of each table gives the win, tie, and loss totals for CMGO, which show it producing more efficient outcomes than the other meta-heuristics. Additionally, the outcomes of the two-tailed Wilcoxon rank-sum test are presented in Tables 11 and 12. The Wilcoxon rank-sum test is a non-parametric statistical test that assesses whether two sets of results differ significantly; it is performed at a 5% significance level. Throughout the experiment, when the p-value \(< 0.05\), the W-value is recorded as ‘\(+\)’, which indicates a statistically significant difference between the two groups of data; otherwise the W-value is recorded as ‘−’, which indicates no significant difference. NA stands for “not applicable” and corresponds to ‘\(=\)’, indicating that the proposed CMGO algorithm performs equivalently to the other meta-heuristic. From Table 11, it is observed that the proposed CMGO algorithm shows a significant difference for the functions F1–F8, F17, F19 and F22 compared with all other algorithms. However, compared with MGO, CMGO shows no significant difference for the functions F10 and F20. Compared with LSHADE, FFA, WSO and MGO, CMGO yields equivalent outcomes for the functions F14 and F9. Therefore, when compared to prominent algorithms, the suggested CMGO demonstrates outstanding results for the majority of the functions.
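For reference, the two-sided Wilcoxon rank-sum test can be sketched with the standard normal approximation. This is a simplified version (average ranks for ties, no tie-variance or continuity correction) and not necessarily the exact procedure used to produce Tables 11 and 12:

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    Returns (z statistic, two-sided p-value). Ties receive average ranks;
    the tie correction to the variance is omitted for simplicity.
    """
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + 1 + j) / 2.0          # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, lab) in zip(ranks, pooled) if lab == 0)
    n1, n2 = len(a), len(b)
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (w - mean) / math.sqrt(var)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p
```

Feeding in the 30 per-run best values of two algorithms and checking `p < 0.05` reproduces the ‘+’/‘−’ decision rule described above.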
As observed in Table 12, the proposed CMGO shows significant differences for the functions cec01, cec05 and cec06 when compared with the other meta-heuristics. Compared with GWO, CMAES, and FFA, CMGO has significant results on most functions except cec07 and cec08. In general, CMGO proved to be the best optimizer in the aforementioned statistical analysis, outperforming the other recent and top-performing algorithms on the majority of the problems.
5.6 Convergence Analysis
This subsection presents a graphical investigation for a more comprehensive evaluation of the performance of all the algorithms. Line graphs exhibiting the convergence of several benchmark functions for the CMGO method and the other algorithms are displayed in Figs. 7 and 8, which make it simple to analyze the rate of convergence of each algorithm over the course of the iterations. These figures exhibit the convergence curves of CMGO for several unimodal, multi-modal, and fixed-dimension multi-modal functions, as well as the CEC2019 benchmark functions. As can be observed from the convergence curves in Fig. 7, CMGO tends to accelerate its convergence over the course of the iterations; in other words, the search individuals identify desirable areas in the early iterations and then converge quickly. Functions F1–F5 and F7 each exhibit this behavior. While solving the multi-modal functions, CMGO likewise reveals an outstanding convergence rate: it sustains a consistent rate of convergence into the final stage of the iterations because the explorers cover the entire search space, and functions F9, F11 and F13 exhibit this behavior clearly. However, CMGO attains convergence rates comparable to the compared algorithms on most of the remaining functions, such as F17 and F21–F23. On the basis of the convergence curves in Fig. 8, it is obvious that the CMGO algorithm has better convergence accuracy than the competing algorithms on certain functions, such as cec02, cec03, cec06, and cec10. These evaluations demonstrate that the search region is first explored by the CMGO gazelles to evaluate multiple potential destinations; the search area is then exploited by the CMGO gazelles to seek out the global optimal solution, compelling them to move locally rather than globally.
As a result, it is shown that the enhancements recommended in this study yield a better balance between the exploration and exploitation abilities of the classical MGO algorithm. Because of these enhancements, the convergence and search rate of CMGO outperform those of the other compared algorithms.
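The convergence curves in Figs. 7 and 8 are best-so-far traces. A generic harness for recording such a trace (shown here with pure random search on the sphere function as a stand-in optimizer; all names are illustrative) might look like:

```python
import random

def track_convergence(step, max_it=500):
    """Record the best-so-far fitness after each iteration (minimization).

    `step` performs one iteration and returns the best fitness it found;
    the resulting curve is monotonically non-increasing by construction.
    """
    curve = []
    best = float("inf")
    for _ in range(max_it):
        best = min(best, step())
        curve.append(best)
    return curve

# Illustration: pure random search on the 3-dimensional sphere function
rng = random.Random(0)
sphere = lambda x: sum(v * v for v in x)
step = lambda: sphere([rng.uniform(-5.0, 5.0) for _ in range(3)])
curve = track_convergence(step, max_it=200)
```

Plotting `curve` against the iteration index yields exactly the kind of monotone line shown in the figures; a steep early drop signals fast exploration, and a long flat tail signals stagnation.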
6 CMGO for Real-Life Engineering Problems
Engineering design is the technique of addressing the demands necessary to assemble a product. This decision-making process involves an intricate objective function and an extensive range of decision factors, including weight, strength, and wear. Real design challenges might have a substantial number of design variables and a complex, nonlinear impact on the objective function that has to be optimized. Therefore, four traditional engineering design challenges are addressed in this section with the projected CMGO algorithm, and its outcomes are evaluated against other competing algorithms.
6.1 Tension/Compression Design Problem
The primary objective of this engineering design problem [66] is to minimize the weight of the spring through the combination of three decision variables: wire diameter (d), mean coil diameter (D), and number of active coils (N), as displayed in Fig. 9. The problem may be described mathematically as follows: suppose \(\vec {y}=[y_1, y_2,y_3]=[d,D,N]\)
Minimize \(f_1(\vec {y})=(y_3+2)y_2y_{1}^2\) Subject to
The results of the analysis are summarized in Table 13, demonstrating that the CMGO can address this problem effectively by providing a better design than the enumerated algorithms such as PSO, GWO, WOA, MFO, ABC, and MGO.
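A runnable sketch of this formulation is given below; the constraint set is the one common in the literature and may differ in detail from the paper's exact equations, and the candidate design values are well-known near-optimal ones from prior studies, not results of this paper:

```python
def spring_weight(d, D, N):
    """Spring weight, proportional to (N + 2) * D * d^2."""
    return (N + 2) * D * d ** 2

def spring_constraints(d, D, N):
    """Inequality constraints g_i(d, D, N) <= 0 (standard literature form)."""
    return [
        1 - (D ** 3 * N) / (71785 * d ** 4),                # minimum deflection
        (4 * D ** 2 - d * D) / (12566 * (D * d ** 3 - d ** 4))
        + 1 / (5108 * d ** 2) - 1,                          # shear stress
        1 - 140.45 * d / (D ** 2 * N),                      # surge frequency
        (d + D) / 1.5 - 1,                                  # outer diameter limit
    ]

# Some constraints are active at the optimum, so a small tolerance is used:
d, D, N = 0.051689, 0.356718, 11.288966
print(spring_weight(d, D, N))  # ≈ 0.012665
assert all(g <= 1e-3 for g in spring_constraints(d, D, N))
```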
6.2 Gear Train Design Problem
An excellent illustration of a mixed-variable challenge is the design of a gear train, which requires determining a number of design variables that are continuous, integer, and discrete [30]. The challenge is essentially as follows: given a fixed input drive and a number of fixed output drive spindles, how can the output spindles be driven from the input using the least amount of connecting gearing in the train? The purpose of this challenge is to minimize the cost of the gear ratio in the aforementioned mechanical engineering problem, depicted in Fig. 10. Four integer variables, \(T_A\), \(T_B\), \(T_C\), and \(T_D\), representing the number of teeth on four different gearwheels, are contained in this problem. The formulation of the challenge is given below.
Consider
Minimize
Variable range 12 \(\le y_1, y_2, y_3, y_4 \le 60\)
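This formulation can be sketched as follows, using the common literature form with target ratio 1/6.931; the assignment of teeth counts to the numerator and denominator is the conventional one and should be checked against the paper's own equation, and the integer solution evaluated below is the widely reported best from prior literature:

```python
def gear_cost(Ta, Tb, Tc, Td):
    """Squared deviation of the achieved gear ratio Tb*Tc / (Ta*Td)
    from the target ratio 1/6.931 (common literature form)."""
    return (1 / 6.931 - (Tb * Tc) / (Ta * Td)) ** 2

# The widely reported best integer design (all teeth in 12..60) reaches a
# cost near 2.7e-12; exhaustive search over the 49^4 integer grid would
# also recover it, since the search space is small:
assert gear_cost(49, 16, 19, 43) < 1e-11
```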
In Table 14, the performance of CMGO is compared with that of PSO, GSA, GWO, MVO, WOA, and MGO. These results illustrate that CMGO outperforms the competing algorithms and generates outcomes comparable to MGO, showing that CMGO is adept at addressing problems of this kind.
6.3 Speed Reducer Design Problem
The objective of the speed reducer design challenge, schematically portrayed in Fig. 11, is to produce a speed reducer of minimum weight [67]. With seven design variables, this problem is more challenging. The variables are: face width (\(x_1\)), teeth module (\(x_2\)), number of pinion teeth (\(x_3\)), length of shaft 1 between bearings (\(x_4\)), length of shaft 2 between bearings (\(x_5\)), diameter of shaft 1 (\(x_6\)), and diameter of shaft 2 (\(x_7\)). The mathematical formulation of this design problem is outlined as follows.
Consider
Minimize
Subject to
where the ranges of the design variables b, m, p, \(l_1\), \(l_2\), \(d_1\), and \(d_2\) are given as
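The weight objective of this problem, in its standard literature form, can be sketched as follows; the eleven inequality constraints are omitted here for brevity, and the evaluated design is the best-known one reported in prior studies, not a result of this paper:

```python
def reducer_weight(x):
    """Speed-reducer weight (standard formulation).
    x = [b, m, p, l1, l2, d1, d2]: face width, teeth module, pinion
    teeth, shaft lengths, shaft diameters."""
    b, m, p, l1, l2, d1, d2 = x
    return (0.7854 * b * m ** 2 * (3.3333 * p ** 2 + 14.9334 * p - 43.0934)
            - 1.508 * b * (d1 ** 2 + d2 ** 2)
            + 7.4777 * (d1 ** 3 + d2 ** 3)
            + 0.7854 * (l1 * d1 ** 2 + l2 * d2 ** 2))

# Best-known design from the literature, weight ≈ 2994.5:
x_best = [3.5, 0.7, 17, 7.3, 7.715320, 3.350267, 5.286654]
print(reducer_weight(x_best))
```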
Table 15 compares the best solutions obtained through CMGO with other optimizers like PSO, GWO, WOA, MVO, SCA, and MGO. The statistical results summarized in Table 15 divulge that CMGO generated the best optimum solutions among the competing algorithms.
6.4 Three-Bar Truss Design Problem
Three-bar truss design is a standard optimization problem in civil engineering [68]. Its primary objective is to determine the optimal values of two parameters (the cross-sectional areas A1 and A2; a third value, A3 = A1, follows by symmetry) that minimize the weight of a truss similar to the one in Fig. 12. The following equation expresses the truss design problem quantitatively:
Consider
Minimize
Subject to
Variable range \(0\le y_1\), \(y_2\le 1\)
Here, \(l=100\,cm\), \(\sigma ={2\,kN}/{cm}^2\), \(P={2\,kN}/{cm}^2\)
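With the constants above, the formulation can be sketched as follows; the stress constraints are the standard literature form, and the design evaluated is a widely reported near-optimal one from prior studies:

```python
import math

L, P, SIGMA = 100.0, 2.0, 2.0  # l = 100 cm, P and sigma = 2 kN/cm^2

def truss_weight(y1, y2):
    """Truss weight (2*sqrt(2)*A1 + A2) * l, with A3 = A1 by symmetry."""
    return (2 * math.sqrt(2) * y1 + y2) * L

def truss_constraints(y1, y2):
    """Stress constraints g_i <= 0 (standard literature form)."""
    denom = math.sqrt(2) * y1 ** 2 + 2 * y1 * y2
    return [
        (math.sqrt(2) * y1 + y2) / denom * P - SIGMA,
        y2 / denom * P - SIGMA,
        P / (y1 + math.sqrt(2) * y2) - SIGMA,
    ]

# The first constraint is active at the optimum, so a small tolerance is used:
y1, y2 = 0.78867, 0.40825
print(truss_weight(y1, y2))  # ≈ 263.9
assert all(g <= 1e-3 for g in truss_constraints(y1, y2))
```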
To assess how effectively the proposed CMGO algorithm solves this problem, its outcomes are compared with those of other optimizers, namely PSO, GSA, GWO, MVO, WOA, and MGO. The approximate optimal costs achieved by the proposed approach and the other algorithms are summarized in Table 16. As this table demonstrates, the outcomes of CMGO are comparable to those of all the other algorithms, and the optimal cost obtained is an indication of the strong performance of CMGO in addressing this challenge.
7 Conclusions and Future Works
This study improves the recently proposed swarm-based Mountain Gazelle Optimizer (MGO) by incorporating ten distinct chaotic maps, forming the newly developed Chaotic Mountain Gazelle Optimizer (CMGO), in order to accelerate convergence, avoid local optima, and enhance MGO's ability to balance the exploration and exploitation stages. First, each of the ten chaotic maps was embedded in MGO individually, and the resulting variants were compared with one another. Afterwards, the ten chaotic maps were incorporated into MGO simultaneously, and the result was compared with top-performing algorithms. The outcomes of the investigation indicate that, for the majority of the tested functions, both the accuracy of the solutions generated by CMGO and its convergence rate were superior to those of the other optimization techniques. The engineering design evaluations reveal that the newly established chaotic-based CMGO outperforms its competitors. Additionally, the Wilcoxon rank-sum test yielded p-values below 0.05 for the CEC2005 and CEC2019 benchmark functions, indicating the statistically significant superiority of CMGO over the competitor algorithms. Despite this strong performance, the main limitation of the proposed CMGO is that it yields less competitive results than some peers on complex multi-modal functions and certain CEC2019 functions, which can also increase computational time; moreover, it lacks diverse search strategies during the exploration and exploitation stages when optimizing problems with varying degrees of uncertainty. In future work, the CMGO method may be applied to image processing, data mining, and design problems across a broad range of engineering disciplines. Furthermore, CMGO can be modified to optimize network parameters to improve network performance.
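The rank-sum comparison underlying the p < 0.05 claim can be reproduced with a standard-library-only sketch of the normal-approximation form of the test; this uses average ranks for ties but omits the tie correction in the variance, which is adequate for continuous fitness values:

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test p-value via the normal
    approximation, assigning average ranks to tied values."""
    n1, n2 = len(a), len(b)
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j + 1 < n1 + n2 and combined[j + 1][0] == combined[i][0]:
            j += 1
        for k in range(i, j + 1):        # average rank over the tie group
            ranks[k] = (i + j) / 2 + 1
        i = j + 1
    w = sum(r for r, (_, src) in zip(ranks, combined) if src == 0)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return math.erfc(abs(w - mu) / sigma / math.sqrt(2))

# Two clearly separated result samples are flagged as significant:
p = rank_sum_p([1, 2, 3, 4, 5, 6, 7, 8], [11, 12, 13, 14, 15, 16, 17, 18])
assert p < 0.05
```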
Data availability
All data generated or analyzed during this study are included in this article.
References
Gendreau, M., Potvin, J.-Y., et al.: Handbook of Metaheuristics, vol. 2. Springer (2010)
Yang, X.-S.: Nature-Inspired Metaheuristic Algorithms. Luniver Press (2010)
Cao, Y., Wu, Q.: Evolutionary programming, In: Proceedings of 1997 IEEE International Conference on Evolutionary Computation (ICEC’97), IEEE, pp. 443–446 (1997)
Rechenberg, I.: Evolutionsstrategien, In: Simulationsmethoden in der Medizin und Biologie: Workshop, Hannover, 29. Sept.–1. Okt. 1977, Springer, pp. 83–114 (1978)
Holland, J.H.: Genetic algorithms. Sci. Am. 267(1), 66–73 (1992)
Koza, J.R.: Genetic programming as a means for programming computers by natural selection. Stat. Comput. 4, 87–112 (1994)
Storn, R., Price, K.: Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341 (1997)
Kennedy, J., Eberhart, R.: Particle swarm optimization, In: Proceedings of ICNN’95-international conference on neural networks, Vol. 4, IEEE, pp. 1942–1948 (1995)
Dorigo, M., Di Caro, G.: Ant colony optimization: a new meta-heuristic, In: Proceedings of the 1999 congress on evolutionary computation-CEC99 (Cat. No. 99TH8406), Vol. 2, IEEE, pp. 1470–1477 (1999)
Yang, X.-S., Deb, S.: Cuckoo search via Lévy flights, In: 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), IEEE, pp. 210–214 (2009)
Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical function optimization: artificial bee colony (abc) algorithm. J. Glob. Optim. 39, 459–471 (2007)
Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69, 46–61 (2014)
Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67 (2016)
Mirjalili, S., Gandomi, A.H., Mirjalili, S.Z., Saremi, S., Faris, H., Mirjalili, S.M.: Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv. Eng. Softw. 114, 163–191 (2017)
Mirjalili, S.: Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl.-Based Syst. 89, 228–249 (2015)
Mirjalili, S.: The ant lion optimizer. Adv. Eng. Softw. 83, 80–98 (2015)
Abualigah, L., Yousri, D., Abd Elaziz, M., Ewees, A.A., Al-Qaness, M.A., Gandomi, A.H.: Aquila optimizer: a novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250 (2021)
Braik, M., Hammouri, A., Atwan, J., Al-Betar, M.A., Awadallah, M.A.: White shark optimizer: a novel bio-inspired meta-heuristic algorithm for global optimization problems. Knowl.-Based Syst. 243, 108457 (2022)
Mohapatra, S., Mohapatra, P.: American zebra optimization algorithm for global optimization problems. Sci. Rep. 13(1), 5211 (2023)
Gopi, S., Mohapatra, P.: A modified whale optimisation algorithm to solve global optimisation problems, In: Proceedings of 7th International Conference on Harmony Search, Soft Computing and Applications: ICHSA 2022, Springer, pp. 465–477 (2022)
Kirkpatrick, S., Gelatt, C.D., Jr., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983)
Erol, O.K., Eksin, I.: A new optimization method: big bang-big crunch. Adv. Eng. Softw. 37(2), 106–111 (2006)
Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
Hatamlou, A.: Black hole: a new heuristic optimization approach for data clustering. Inf. Sci. 222, 175–184 (2013)
Mirjalili, S.: SCA: a sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 96, 120–133 (2016)
Mirjalili, S., Mirjalili, S.M., Hatamlou, A.: Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput. Appl. 27, 495–513 (2016)
Faramarzi, A., Heidarinejad, M., Stephens, B., Mirjalili, S.: Equilibrium optimizer: a novel optimization algorithm. Knowl.-Based Syst. 191, 105190 (2020)
Glover, F.: Future paths for integer programming and links to artificial intelligence. Comput. Oper. Res. 13(5), 533–549 (1986)
Geem, Z.W., Kim, J.H., Loganathan, G.V.: A new heuristic optimization algorithm: harmony search. SIMULATION 76(2), 60–68 (2001)
Rao, R.V., Savsani, V.J., Vakharia, D.: Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43(3), 303–315 (2011)
Tanabe, R., Fukunaga, A.S.: Improving the search performance of shade using linear population size reduction, In: 2014 IEEE Congress on Evolutionary Computation (CEC), IEEE, pp. 1658–1665 (2014)
Hansen, N., Müller, S.D., Koumoutsakos, P.: Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (cma-es). Evol. Comput. 11(1), 1–18 (2003)
Simon, D.: Biogeography-based optimization. IEEE Trans. Evol. Comput. 12(6), 702–713 (2008)
Shayanfar, H., Gharehchopogh, F.S.: Farmland fertility: a new metaheuristic algorithm for solving continuous optimization problems. Appl. Soft Comput. 71, 728–746 (2018)
Sarangi, P., Mohapatra, P.: A novel cosine swarm algorithm for solving optimization problems, In: Proceedings of 7th International Conference on Harmony Search, Soft Computing and Applications: ICHSA 2022, Springer, pp. 427–434 (2022)
Abdel-Basset, M., El-Shahat, D., Jameel, M., Abouhawwash, M.: Young’s double-slit experiment optimizer: a novel metaheuristic optimization algorithm for global and constraint optimization problems. Comput. Methods Appl. Mech. Eng. 403, 115652 (2023)
Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Trans. Evol. Comput. 1(1), 67–82 (1997)
Abdollahzadeh, B., Gharehchopogh, F.S., Khodadadi, N., Mirjalili, S.: Mountain gazelle optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems. Adv. Eng. Softw. 174, 103282 (2022)
Chandrasekaran, K., Thaveedhu, A.S.R., Manoharan, P., Periyasamy, V.: Optimal estimation of parameters of the three-diode commercial solar photovoltaic model using an improved Berndt–Hall–Hall–Hausman method hybridized with an augmented mountain gazelle optimizer. Environ. Sci. Pollut. Res. 30(20), 57683–57706 (2023)
Gharehchopogh, F.S., Ucan, A., Ibrikci, T., Arasteh, B., Isik, G.: Slime Mould algorithm: a comprehensive survey of its variants and applications. Arch. Comput. Methods Eng. 30(4), 2683–2723 (2023)
Piri, J., Mohapatra, P., Acharya, B., Gharehchopogh, F.S., Gerogiannis, V.C., Kanavos, A., Manika, S.: Feature selection using artificial gorilla troop optimization for biomedical data: a case analysis with covid-19 data. Mathematics 10(15), 2742 (2022)
Pecora, L.M., Carroll, T.L.: Synchronization in chaotic systems. Phys. Rev. Lett. 64(8), 821 (1990)
Zhang, Y.-T., Zhou, W., Yi, J.: A novel adaptive chaotic bacterial foraging optimization algorithm, In: 2016 International Conference on Computational Modeling, Simulation and Applied Mathematics (2016)
Gharehchopogh, F.S., Ibrikci, T.: An improved African vultures optimization algorithm using different fitness functions for multi-level thresholding image segmentation. Multimed. Tools Appl. 83(6), 16929–16975 (2023)
Özbay, E., Özbay, F.A., Gharehchopogh, F.S.: Peripheral blood smear images classification for acute lymphoblastic leukemia diagnosis with an improved convolutional neural network. J. Bionic Eng. (2023). https://doi.org/10.1007/s42235-023-00441-y
Shen, Y., Zhang, C., Gharehchopogh, F.S., Mirjalili, S.: An improved whale optimization algorithm based on multi-population evolution for global optimization and engineering design problems. Expert Syst. Appl. 215, 119269 (2023)
Saha, S., Mukherjee, V.: A novel chaos-integrated symbiotic organisms search algorithm for global optimization. Soft. Comput. 22, 3797–3816 (2018)
Gupta, S., Deep, K.: An opposition-based chaotic grey wolf optimizer for global optimisation tasks. J. Exp. Theoret. Artif. Intell. 31(5), 751–779 (2019)
Kumar, Y., Singh, P.K.: A chaotic teaching learning based optimization algorithm for clustering problems. Appl. Intell. 49(3), 1036–1062 (2019)
Li, X.-D., Wang, J.-S., Hao, W.-K., Zhang, M., Wang, M.: Chaotic arithmetic optimization algorithm. Appl. Intell. 52(14), 16718–16757 (2022)
Kaur, G., Arora, S.: Chaotic whale optimization algorithm. J. Comput. Des. Eng. 5(3), 275–284 (2018)
Mohapatra, S., Mohapatra, P.: Fast random opposition-based learning golden jackal optimization algorithm. Knowl.-Based Syst. 275, 110679 (2023)
Gharehchopogh, F.S., Abdollahzadeh, B., Barshandeh, S., Arasteh, B.: A multi-objective mutation-based dynamic harris hawks optimization for botnet detection in iot. Internet of Things 24, 100952 (2023)
Gharehchopogh, F.S.: An improved Harris hawks optimization algorithm with multi-strategy for community detection in social network. J. Bionic Eng. 20(3), 1175–1197 (2023)
Chandran, V., Mohapatra, P.: Enhanced opposition-based grey wolf optimizer for global optimization and engineering design problems. Alex. Eng. J. 76, 429–467 (2023)
Mohapatra, S., Mohapatra, P.: An improved golden jackal optimization algorithm using opposition-based learning for global optimization and engineering problems. Int. J. Comput. Intell. Syst. 16(1), 147 (2023)
Gharehchopogh, F.S., Khargoush, A.A.: A chaotic-based interactive autodidactic school algorithm for data clustering problems and its application on covid-19 disease detection. Symmetry 15(4), 894 (2023)
Li, J.-W., Cheng, Y.-M., Chen, K.-Z.: Chaotic particle swarm optimization algorithm based on adaptive inertia weight, In: The 26th Chinese Control and Decision Conference (2014 CCDC), IEEE, pp. 1310–1315 (2014)
Kumar, S., Yildiz, B.S., Mehta, P., Panagant, N., Sait, S.M., Mirjalili, S., Yildiz, A.R.: Chaotic marine predators algorithm for global optimization of real-world engineering problems. Knowl.-Based Syst. 261, 110192 (2023)
Özbay, F.A.: A modified seahorse optimization algorithm based on chaotic maps for solving global optimization and engineering problems. Eng. Sci. Technol. Int. J. 41, 101408 (2023)
dos Santos Coelho, L., Mariani, V.C.: Use of chaotic sequences in a biologically inspired algorithm for engineering design optimization. Expert Syst. Appl. 34(3), 1905–1913 (2008)
Suganthan, P.N., Hansen, N., Liang, J.J., Deb, K., Chen, Y.-P., Auger, A., Tiwari, S.: Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. KanGAL Report 2005005 (2005)
Liang, J.-J., Qu, B., Gong, D., Yue, C.: Problem definitions and evaluation criteria for the cec 2019 special session on multimodal multiobjective optimization, Computational Intelligence Laboratory, Zhengzhou University (2019)
Mohapatra, P., Das, K.N., Roy, S.: A modified competitive swarm optimizer for large scale optimization problems. Appl. Soft Comput. 59, 340–362 (2017)
García, S., Fernández, A., Luengo, J., Herrera, F.: Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power. Inf. Sci. 180(10), 2044–2064 (2010)
Arora, J.: Introduction to Optimum Design. Elsevier (2004)
Mezura-Montes, E., Coello, C.A.C.: Useful infeasible solutions in engineering optimization with evolutionary algorithms, In: MICAI 2005: Advances in Artificial Intelligence: 4th Mexican International Conference on Artificial Intelligence, Monterrey, Mexico, November 14-18, 2005. Proceedings 4, Springer, pp. 652–662 (2005)
Ray, T., Liew, K.-M.: Society and civilization: an optimization algorithm based on the simulation of social behavior. IEEE Trans. Evol. Comput. 7(4), 386–396 (2003)
Acknowledgements
The authors would like to thank VIT University for supporting this research work.
Funding
No funding was received for conducting this study.
Author information
Contributions
Priteesha Sarangi: Conceptualization, Methodology, writing original draft. Prabhujit Mohapatra: Conceptualization, Methodology, Supervision, writing-review and editing.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest relating to this work.
Informed consent
Informed consent was obtained from all individual participants included in the study.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Sarangi, P., Mohapatra, P. Chaotic-Based Mountain Gazelle Optimizer for Solving Optimization Problems. Int J Comput Intell Syst 17, 110 (2024). https://doi.org/10.1007/s44196-024-00444-5