1 Introduction

In many domains of practical application, a multitude of constrained and unconstrained optimization problems must be solved. As these problems grow increasingly complex, traditional gradient-based optimization methods can no longer meet actual needs. Meta-heuristic algorithms, in contrast, offer simple coding, strong applicability, few assumptions, and no reliance on derivative information. They are therefore regarded as a new approach to solving optimization problems and are widely employed in numerous fields, for instance, engineering, computer science, mathematics, energy, medicine, and neuroscience [1]. Meta-heuristic algorithms can be divided into four categories according to their source of inspiration: evolutionary algorithms (inspired by the law of survival of the fittest), physics-based algorithms (inspired by chemical or physical laws), human-based algorithms (inspired by various human behaviors), and swarm intelligence algorithms (inspired by the swarm behavior of creatures). The most representative evolutionary, human-based, and physics-based algorithms are the genetic algorithm (GA) [2,3,4], teaching–learning-based optimization (TLBO) [5,6,7], and the gravitational search algorithm (GSA) [8,9,10], respectively. Classic swarm intelligence (SI) algorithms include particle swarm optimization (PSO) [11,12,13], ant colony optimization (ACO) [14,15,16], and the artificial bee colony algorithm (ABC) [17,18,19]. The grasshopper optimization algorithm (GOA) studied in this paper is a novel SI algorithm proposed by Saremi [20] in 2017, inspired by the migration and foraging behaviors of grasshopper swarms. The advantages of the GOA over other SI algorithms include its straightforward structure, small number of parameters, and easy implementation. Experiments on benchmark functions demonstrate that the GOA outperforms previously proposed SI algorithms such as PSO in terms of convergence speed and accuracy. It has already been used successfully in a variety of fields. For example, Xu et al. [21] introduced the bare-bones Gaussian strategy and elite opposition-based learning into the GOA to improve algorithm performance and applied its binary version to feature selection with good effects. Another study [22] proposed an enhanced GOA that added the Levy flight strategy and a velocity perturbation mechanism to the original GOA, where logistic mapping was utilized for population initialization to enhance population diversity; the velocity perturbation helps individuals break away from local optima, and the method was applied to three engineering design problems. Jalali et al. [23] put forward an enhanced grasshopper optimization algorithm (EGOA) employing tent mapping and the Levy flight strategy; combined with the mutual information (MI) feature selection algorithm, it was used to optimize a long short-term memory (LSTM) neural network architecture for wind speed prediction. Wu et al. [24] proposed an improved grasshopper optimization algorithm (IGOA), which adopted logistic mapping for initialization and added differential evolution and linear optimization strategies in the position update stage; it was then employed to identify the parameters of polycrystalline silicon solar cells. Liu et al. [25] proposed a comprehensive strategy combining the original GOA with the linear weighted sum to address energy management issues. Alhejji et al. [26] added the spiral path strategy and Levy flight mechanism to the original GOA, proposed an adaptive grasshopper optimization algorithm, and applied it to solve the optimal power flow problem. Bhukya et al. [27] utilized the GOA to optimize the membership functions of a fuzzy logic controller (FLC) to deal with uncertainties caused by varying temperature and irradiance, thereby enhancing the performance of maximum power point tracking.

However, in the same vein as other SI algorithms, the GOA is prone to local optima and exhibits relatively slow convergence when dealing with multimodal or high-dimensional optimization problems. In response to these issues, Dong et al. [28] suggested a modified grasshopper algorithm (CC–GOA), which employs logistic mapping for initialization to enhance population diversity and adds the Cauchy mutation strategy to strengthen the algorithm's capability to escape from local optima. Zhao et al. [29] proposed a grasshopper algorithm that incorporates a nonlinear decreasing coefficient, the Levy flight, and a random jumping strategy; the nonlinear decreasing coefficient accelerates convergence, while the Levy flight and random jumping strategies enhance population diversity and assist the algorithm in escaping from local optima. Bekana et al. [30] proposed a modified grasshopper algorithm (Crazy–GOA) that adds a crazy factor to the position update expression of the GOA, which helps explore the entire search space and enhances population diversity. Yıldız et al. [31] proposed a hybrid grasshopper algorithm, which combined the original GOA with the simplex method to enhance local exploitation ability. Zhou et al. [32] proposed a modified grasshopper algorithm, which employs the orthogonal learning mechanism to enhance convergence speed and introduces the genetic mutation and Cauchy mutation strategies to enhance solution accuracy. Huang et al. [33] suggested a grasshopper algorithm that first divides the population into two subpopulations, then introduces a social interaction mechanism to balance exploitation and exploration, and finally incorporates a learning strategy and a nonlinear coefficient to enhance the global exploration capability.

Although the aforementioned improvement methods have somewhat enhanced the performance of the original GOA, most studies have not comprehensively considered its shortcomings, and there is still room for improvement. Meanwhile, the no free lunch (NFL) theorem points out that no single algorithm can effectively and efficiently solve every optimization problem. To this end, this paper presents a multi-strategy improved grasshopper optimization algorithm (MSIGOA), which aims to address the shortcomings of the original GOA, including its slow convergence, susceptibility to local optima, and low accuracy. Firstly, circle mapping is used to initialize the population, making the population distribution more uniform and more diverse. Secondly, a nonlinear decreasing coefficient replaces the original linear decreasing coefficient to meet the needs of the algorithm at different stages and improve both local exploitation and global exploration capabilities. Thirdly, a modified golden sine mechanism is added during the position update stage to diversify the single position update mode of the GOA and enhance the local exploitation capability. Fourthly, a greedy strategy is added to select between the new and old positions of each individual, retaining the better position and increasing the convergence speed. Finally, the quasi-reflection-based learning mechanism is utilized to construct new populations to improve population diversity and the capability to escape from local optima. In the experimental simulation, the performance of the proposed MSIGOA was evaluated and compared with other advanced algorithms on 12 classical test functions and the CEC2017 test functions. The experimental results indicate that the MSIGOA outperforms the original GOA and the other compared algorithms, with faster convergence speed, better stability, and stronger search ability. In addition, six engineering design problems are solved using the MSIGOA. The results reveal that the proposed MSIGOA is more competitive than the other algorithms.

The remainder of this paper is structured as follows: Sect. 2 describes the principles of the original GOA and the golden sine algorithm (Gold-SA); Sect. 3 details the proposed MSIGOA; Sect. 4 conducts comparative experiments to validate the performance of the MSIGOA and applies it to six engineering design problems; Sect. 5 concludes the paper and outlines future work.

2 Background

2.1 Grasshopper Optimization Algorithm (GOA)

Grasshoppers are insects with swarming behavior whose life cycle is mainly divided into two phases, larva and adulthood, with completely different movement characteristics in each. The larval phase is characterized by small steps and slow movements, while adulthood is characterized by long-range and abrupt movements [34]. When modeling the behavior of grasshopper swarms, their motions are often assumed to be influenced by gravity, social interaction, and wind advection. Therefore, the following equation can be used to mimic the behavior of grasshopper swarms in nature [20, 35]:

$$X_{i} = r_{1} S_{i} + r_{2} G_{i} + r_{3} A_{i} ,$$
(1)

where \(X_{i} = (x_{i,1} ,x_{i,2} ,x_{i,3} , \ldots ,x_{i,d} )\) defines the position of the ith grasshopper, \(r_{1}\), \(r_{2}\), and \(r_{3}\) are random numbers in [0, 1], \(G_{i}\) represents the force of gravity acting on the ith grasshopper, \(A_{i}\) denotes wind advection, and \(S_{i}\) represents social interaction. The specific calculation equation for \(S_{i}\) is as follows:

$$S_{i} = \mathop \sum \limits_{{\begin{array}{*{20}c} {j = 1} \\ {j \ne i} \\ \end{array} }}^{N} s(d_{ij} )\widehat{{d_{ij} }},$$
(2)

where N denotes the quantity of grasshoppers in the swarm, and \(d_{ij}\) and \(\widehat{{d_{ij} }}\) represent the distance and unit vector between the ith and jth grasshoppers, which are computed using \(d_{ij} = |x_{j} - x_{i} |\) and \(\widehat{{d_{ij} }} = \frac{{x_{j} - x_{i} }}{{d_{ij} }}\), respectively. The social forces between the grasshoppers are shown in Fig. 1, and s is the function that defines the social forces.

Fig. 1
figure 1

The attraction and repulsion between grasshoppers and the comfort zone

There exist social forces between grasshoppers within a certain distance, which manifest as attraction and repulsion, respectively, depending on the distance. Specifically, when the distance between grasshoppers is relatively small, the social force appears as repulsion, and when the distance is relatively large, the social force appears as attraction. The distance between grasshoppers at which neither attraction nor repulsion occurs is commonly referred to as the comfort distance or comfort zone. As the distance between grasshoppers exceeds their comfort zone, attraction increases until a certain threshold is reached. Beyond this point, attraction weakens gradually with increasing distance until it disappears altogether. The social forces between grasshoppers are defined as follows [20]:

$$s(r) = fe^{{\frac{ - r}{l}}} - e^{ - r} ,$$
(3)

where \(l\) is the scale of attracting length and f is the strength of attraction. The values assigned to these variables have a direct impact on the extent of the repulsion region, attraction region, and comfort zone. Usually, f is a constant of 0.5, and l is a constant of 1.5. The variable r represents the distance between grasshoppers, which is usually mapped to the interval [1, 4] to avoid situations where the attraction is zero when the distance between grasshoppers is large. When \(s(r)\) is greater than 0, the social force appears as attraction; otherwise, it appears as repulsion. Furthermore, the specific expressions of \(G_{i}\) and \(A_{i}\) in Eq. (1) are as follows:

$$G_{i} = - g\widehat{{e_{g} }},$$
(4)
$$A_{i} = u\widehat{{e_{w} }},$$
(5)

where \(\widehat{{e_{w} }}\) and \(\widehat{{e_{g} }}\) stand for the unit vectors in the direction of the wind and the center of the earth, and u and g stand for the drift constant and gravitational constant.
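
For concreteness, the social force of Eq. (3) can be evaluated numerically. The following minimal Python sketch (the function name social_force is ours, not part of any GOA reference code) uses the usual settings f = 0.5 and l = 1.5 and shows the sign change around the comfort distance:

```python
import numpy as np

def social_force(r, f=0.5, l=1.5):
    """Social force s(r) of Eq. (3): positive values mean attraction,
    negative values mean repulsion."""
    return f * np.exp(-r / l) - np.exp(-r)

# Repulsion at short range, zero at the comfort distance (about 2.079 for
# f = 0.5, l = 1.5), and weak attraction beyond it that fades with distance.
for r in (1.0, 2.079, 3.0, 4.0):
    print(r, round(float(social_force(r)), 4))
```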

To sum up, Eq. (1) can be expanded as follows by substituting \(G_{i}\), \(A_{i}\), and \(S_{i}\):

$$X_{i} = \mathop \sum \limits_{{\begin{array}{*{20}c} {j = 1} \\ {j \ne i} \\ \end{array} }}^{N} s\left( {\left| {x_{j} - x_{i} } \right|} \right)\frac{{x_{j} - x_{i} }}{{d_{ij} }} - g\widehat{{e_{g} }} + u\widehat{{e_{w} }}.$$
(6)

However, Eq. (6) cannot be directly used to solve the optimization problems because the grasshoppers quickly reach the comfort zone and the population does not converge to a specified position. Assuming that the wind direction is consistent with the movement direction of the target individual, ignoring the influence of gravity, a modified version of this equation is as follows [20]:

$$x_{i,d} \left( {t + 1} \right) = c\left( {\mathop \sum \limits_{{\begin{array}{*{20}c} {j = 1} \\ {j \ne i} \\ \end{array} }}^{N} c\frac{{ub_{d} - lb_{d} }}{2}s\left( {\left| {x_{j,d} \left( t \right) - x_{i,d} \left( t \right)} \right|} \right)\frac{{x_{j} \left( t \right) - x_{i} \left( t \right)}}{{d_{ij} \left( t \right)}}} \right) + \widehat{{D_{d} }}\left( t \right),$$
(7)

where \(x_{i,d} (t)\) denotes the position of the ith grasshopper in the dth dimension of the search space at the tth iteration. The search space's lower and upper bounds in the dth dimension are represented by \(lb_{d}\) and \(ub_{d}\), respectively, and \(\widehat{{D_{d} }}(t)\) denotes the position of the target grasshopper (the current best grasshopper) in the dth dimension at the tth iteration. The variable c is a coefficient that declines linearly with the number of iterations, defined as follows:

$$c = c_{\max } - t\frac{{c_{\max } - c_{\min } }}{T},$$
(8)

where the values of \(c_{\min }\) and \(c_{\max }\) are 0.00001 and 1, respectively, and stand for the lowest and highest values of c, and T and t stand for the largest and current iterations, respectively.

2.2 Golden Sine Algorithm (Gold-SA)

As a basic and important tool, the sine function has a wide range of applications in many fields, especially in time series analysis methods and change detection methods [36]. Inspired by the sine function, Tanyildizi et al. [37] proposed a meta-heuristic algorithm called the golden sine algorithm (Gold-SA) in 2017, which has the characteristics of fast convergence speed and good robustness. Gold-SA combines the golden section coefficient and sine function in the iterative optimization process. Among them, the sine function endows the algorithm with good global exploration ability, while the golden section coefficient continuously refines the search space, endowing the algorithm with strong local exploitation ability.

The core of Gold-SA is the individual position update process, and each individual position corresponds to a potential solution to the problem. Assume the number of individuals and the dimension of the problem (or search space) are N and Dim, respectively. Gold-SA first randomly generates N individuals in the Dim-dimensional search space, then updates the individual positions according to the corresponding formulas, and finally iterates until the stop condition is met. Assuming that at the tth iteration the position of individual i \((i = 1,2,3, \ldots ,N)\) in the search space is \(V_{i} (t)\), the position at the t + 1 iteration is updated according to the following formulas:

$$V_{i} (t + 1) = V_{i} (t)\left| {\sin (r_{1} )} \right| + r_{2} \sin (r_{1} ) \times \left| {k_{1} P_{i} (t) - k_{2} V_{i} (t)} \right|,$$
(9)
$$k_{1} = - \pi + (1 - \tau ) \times 2\pi ,$$
(10)
$$k_{2} = - \pi + \tau \times 2\pi ,$$
(11)
$$\tau = \frac{\sqrt 5 - 1}{2},$$
(12)

where Pi(t) represents the optimal position of individual i at the tth iteration, r1 is a random number in [0, 2π] that determines how far the individual moves at each iteration, r2 is a random number in [0, π] that determines the movement direction of the individual at each iteration, and k1 and k2 are coefficients obtained through the golden number \(\tau\). These coefficients effectively narrow the search space and direct other individuals to move closer to the current best individual.

3 Proposed MSIGOA

For the sake of enhancing the optimization performance of the original GOA, this paper combines multiple strategies to improve it and names the improved algorithm MSIGOA. The following sections will specifically introduce the improvement strategies for MSIGOA.

3.1 Circle Mapping

Since the original grasshopper optimization algorithm lacks the ability to incorporate prior information during population initialization, it is limited to generating the initial population through random initialization. However, this random initialization may result in an uneven distribution of individuals within the search space, thereby impacting the algorithm’s solution precision and convergence speed.

The chaotic sequence has the characteristics of non-periodicity, ergodicity, and regularity [38, 39]. Compared with random initialization, the initial grasshopper population produced by chaotic mapping displays higher diversity and a more uniform distribution. The fundamental idea of chaotic mapping is to generate a chaotic sequence according to the mapping relationship on the interval [0, 1] and then transform the chaotic sequence into the search space. There are various types of chaotic mappings, including commonly used ones such as logistic mapping, tent mapping, and circle mapping. Among these, circle mapping has been found to exhibit superior performance [40]. Therefore, this paper utilizes circle mapping to produce the grasshopper population with the aim of enhancing its diversity. Circle mapping is formulated as follows:

$$y_{i,j + 1} = \bmod \left( {y_{i,j} + 0.2 - \left( {\frac{0.5}{{2\pi }}} \right)\sin (2\pi y_{i,j} ),1} \right),$$
(13)

where \(y_{i,j}\) denotes the \(j\)th variable of chaotic sequence i, and \(\bmod (b,a)\) represents the remainder of b divided by a.

After the chaotic sequences are obtained by Eq. (13), they are mapped into the search space according to Eq. (14) to obtain the initial positions of the individuals.

$$x_{i,j} = lb_{j} + y_{i,j} \times (ub_{j} - lb_{j} ).$$
(14)
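
A minimal Python sketch of this chaotic initialization is given below. It assumes the first chaotic value of each sequence is drawn uniformly at random, which the paper does not specify; the function name circle_map_init is illustrative:

```python
import numpy as np

def circle_map_init(N, dim, lb, ub, seed=None):
    """Chaotic population initialization via circle mapping, Eqs. (13) and (14)."""
    rng = np.random.default_rng(seed)
    y = np.empty((N, dim))
    y[:, 0] = rng.random(N)  # chaotic seeds in (0, 1); an assumed choice
    for j in range(dim - 1):  # iterate the circle map along dimensions, Eq. (13)
        y[:, j + 1] = np.mod(
            y[:, j] + 0.2 - (0.5 / (2 * np.pi)) * np.sin(2 * np.pi * y[:, j]), 1.0
        )
    return lb + y * (ub - lb)  # map chaos values into the search space, Eq. (14)
```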

3.2 Nonlinear Decreasing Coefficient

As per the original GOA principle, the decreasing coefficient c has a vital role in the local exploitation and global exploration of the algorithm [41, 42]. From Eq. (8), the coefficient c decreases linearly as the number of iterations increases. However, this linear variation cannot meet the actual needs of the algorithm at different stages, leading to low convergence accuracy. As a result, this paper proposes a nonlinear decreasing coefficient to replace the linear decreasing coefficient, and the equation is as follows:

$$c = 3^{{ - \frac{{16t^{2} }}{{T^{2} }}}} ,$$
(15)

where T and t stand for the largest and current iteration numbers, respectively.

The rapid decrease of the new coefficient c in the early stage of iteration is beneficial for the algorithm to quickly converge to the vicinity of the optimal value, while its slow decrease in the later stage of iteration allows the algorithm for more detailed exploitation. Therefore, this coefficient can effectively improve the exploration and exploitation capabilities of the algorithm.
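
The contrast between the two coefficients can be checked numerically. The short Python sketch below evaluates the linear coefficient of Eq. (8) (with c_max = 1 and c_min = 0.00001) and the nonlinear coefficient of Eq. (15) over T = 500 iterations:

```python
import numpy as np

T = 500
t = np.arange(1, T + 1)
c_linear = 1.0 - t * (1.0 - 1e-5) / T        # Eq. (8) with c_max = 1, c_min = 0.00001
c_nonlinear = 3.0 ** (-16.0 * t**2 / T**2)   # proposed coefficient, Eq. (15)

# Early iterations: the nonlinear coefficient falls quickly (fast convergence
# toward the optimum); late iterations: it is nearly flat and close to zero,
# which favors fine-grained local exploitation.
for i in (0, 124, 249, 499):
    print(t[i], round(c_linear[i], 5), format(c_nonlinear[i], ".3e"))
```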

3.3 Modified Golden Sine Mechanism

The original GOA is prone to local optima in the middle and later iterations due to a lack of excellent local exploitation capability, which leads to poor solving accuracy. Compared with GOA, Gold-SA has a splendid convergence rate and exploitation ability. Therefore, the idea of Gold-SA was incorporated into GOA to change the single position update mode of GOA and enhance the local exploitation capability. Specifically, the golden sine mechanism is added during the position update stage to make the ordinary grasshopper individuals move toward the target individuals in a golden sine manner, reducing the blindness of the individual optimization. In addition, this mechanism will also promote information exchange between ordinary individuals and the target individual so that ordinary individuals can sufficiently absorb the position information from the target individual, thereby improving the local exploitation capability of the algorithm.

In MSIGOA, the choice between the two position update methods is determined by the switching probability \(P_{v}\), where \(P_{v} = 0.5\). When \(P_{v}\) is smaller than a random number \(r\) in [0, 1], the golden sine mechanism is used to update the grasshopper positions; otherwise, the position is updated in the original GOA way. Besides, to further enhance the local exploitation capability, the coefficient of Eq. (15) is applied to the current individual position in Eq. (9) as an adaptive weight coefficient. When the position is updated, the adaptive weight coefficient w adjusts the influence of the current individual position on the new position according to the number of iterations, thus fully utilizing the current individual position information [43,44,45]. Therefore, the update formula of the modified golden sine mechanism is as follows:

$$X_{i} (t + 1) = wX_{i} (t)\left| {\sin (r_{1} )} \right| + r_{2} \sin (r_{1} )\left| {k_{1} \hat{D}(t) - wk_{2} X_{i} (t)} \right|,$$
(16)

where \(X_{i} (t)\) and \(\hat{D}(t)\) are the positions of the ith grasshopper and the current best grasshopper at the tth iteration, respectively, w is the dynamic weight coefficient, k1 and k2 are the golden section coefficients that reduce the search area, and r1 and r2 are random numbers that control the moving distance and direction of grasshopper individuals (specific definitions in Sect. 2.2). Equation (16) employs these coefficients to control the impact of the current individual position on the new position and gradually guide the current individual toward the best individual.
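
A minimal sketch of the update in Eq. (16) is shown below, with the golden section coefficients of Eqs. (10)-(12) precomputed; the helper name golden_sine_update and the use of NumPy's random generator are our illustrative choices:

```python
import numpy as np

TAU = (np.sqrt(5) - 1) / 2               # golden number, Eq. (12)
K1 = -np.pi + (1 - TAU) * 2 * np.pi      # golden section coefficient, Eq. (10)
K2 = -np.pi + TAU * 2 * np.pi            # golden section coefficient, Eq. (11)

def golden_sine_update(x_i, d_best, w, rng):
    """Modified golden sine update of Eq. (16); w is the nonlinear
    coefficient of Eq. (15), used here as the adaptive weight."""
    r1 = rng.uniform(0.0, 2.0 * np.pi)   # controls the moving distance
    r2 = rng.uniform(0.0, np.pi)         # controls the moving direction
    return w * x_i * abs(np.sin(r1)) + r2 * np.sin(r1) * np.abs(K1 * d_best - w * K2 * x_i)
```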

3.4 Greedy Strategy

As the effect of a position update is uncertain, the quality of the newly generated position might not be as good as the individual's original position. Therefore, greedy selection between the positions before and after the update is carried out to maintain population quality and improve convergence speed. The main idea of the greedy strategy is to compare the fitness value of each grasshopper's new position with that of its original position. If the new position has a better fitness value (i.e., a smaller objective value for minimization), the grasshopper's position is updated; otherwise, it remains unchanged. The greedy strategy is described as follows [46]:

$$X_{i} (t + 1) = \left\{ {\begin{array}{*{20}c} {X_{i}^{{{\text{new}}}} , f(X_{i}^{{{\text{new}}}} ) < f(X_{i}^{{{\text{old}}}} )} \\ {X_{i}^{{{\text{old}}}} , f(X_{i}^{{{\text{new}}}} ) \ge f(X_{i}^{{{\text{old}}}} )} \\ \end{array} } \right..$$
(17)
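
In code, the greedy selection of Eq. (17) reduces to a single comparison (assuming minimization); the helper below is a sketch with an illustrative name:

```python
def greedy_select(x_old, f_old, x_new, fobj):
    """Greedy selection of Eq. (17): keep the new position only if its
    fitness improves (a smaller objective value under minimization)."""
    f_new = fobj(x_new)
    return (x_new, f_new) if f_new < f_old else (x_old, f_old)
```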

3.5 Quasi-reflection-Based Learning

In 2005, the opposition-based learning (OBL) mechanism was initially suggested by Tizhoosh et al. [47]. Studies show that there is a greater chance that opposite solutions will approach the global optimal solution than random ones, and this mechanism can also significantly increase population diversity and population quality [48,49,50]. At present, OBL has been widely utilized in the modification of SI algorithms to enhance their solving accuracy and convergence speed. Rahnamayan et al. suggested a variation of OBL in 2007, namely the quasi-opposition-based learning (QOBL) mechanism [51]. Research has confirmed that finding the global optimal solution by employing quasi-opposite solutions is more efficient than employing opposite solutions [52,53,54]. Later, based on the principles of OBL and QOBL, a new variant called the quasi-reflection-based learning (QRBL) mechanism was proposed [55]. The fundamental principle is to construct a quasi-reflective population by computing the quasi-reflective solution of each individual in the current population. The quasi-reflective population is then merged with the current population, the combined individuals are sorted by fitness, and the N individuals with the best fitness values are selected to construct the new population. Fan et al. [56] combined OBL, QRBL, and QOBL with HHO to conduct comparison experiments; the outcomes reveal that the HHO variant combining the QRBL mechanism performs better in terms of solution accuracy and convergence speed. In order to improve the diversity and quality of the population and the capability of the algorithm to escape from local optima, this paper adds the QRBL mechanism after the position update phase.

Assuming \(X_{i} = (x_{i,1} ,x_{i,2} ,x_{i,3} , \ldots ,x_{i,d} )\) is an individual in the d-dimensional search space, the definition of its quasi-reflective solution \(X_{i}^{qr} = (x_{i,1}^{qr} ,x_{i,2}^{qr} ,x_{i,3}^{qr} , \ldots ,x_{i,d}^{qr} )\) is as follows:

$$x_{i,j}^{qr} = \left\{ {\begin{array}{*{20}l} {x_{i,j} + \left( {\frac{{u_{j} \left( t \right) + l_{j} \left( t \right)}}{2} - x_{i,j} } \right) \cdot {\text{rand}},} \hfill & {x_{i,j} < \frac{{u_{j} \left( t \right) + l_{j} \left( t \right)}}{2}} \hfill \\ {\frac{{u_{j} \left( t \right) + l_{j} \left( t \right)}}{2} + \left( {x_{i,j} - \frac{{u_{j} \left( t \right) + l_{j} \left( t \right)}}{2}} \right) \cdot {\text{rand}},} \hfill & {x_{i,j} \ge \frac{{u_{j} \left( t \right) + l_{j} \left( t \right)}}{2}} \hfill \\ \end{array} } \right.,$$
(18)

where xi,j denotes the position of individual i in the jth dimension of the search space, \(u_{j} (t)\) and \(l_{j} (t)\) represent the upper and lower bounds of the population dynamic boundary at the t iteration, and \({\text{rand}}\) is a random number in [0, 1].
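
A sketch of the QRBL step is given below. It interprets the dynamic boundary \(u_{j}(t)\) and \(l_{j}(t)\) as the per-dimension maximum and minimum of the current population, which is one common reading of the dynamic boundary; the function name quasi_reflect is ours:

```python
import numpy as np

def quasi_reflect(X, rng):
    """Quasi-reflected population per Eq. (18). The dynamic boundary
    u_j(t), l_j(t) is taken as the per-dimension max/min of the swarm."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    center = (lo + hi) / 2.0
    near = np.minimum(X, center)  # each quasi-reflected coordinate lies
    far = np.maximum(X, center)   # randomly between x_{i,j} and the center
    return near + rng.random(X.shape) * (far - near)
```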

3.6 Specific Steps of MSIGOA

To sum up, the specific steps of MSIGOA are as follows:

Step 1 Set key parameters such as the population size N, problem dimension Dim, and maximum iteration number T.

Step 2 Perform the chaotic initialization for the grasshopper population according to the circle mapping of Eqs. (13) and (14).

Step 3 Calculate the fitness values of all grasshopper individuals and update the position of target individuals.

Step 4 Update the nonlinear decreasing coefficient c.

Step 5 Generate a random number r in the interval [0, 1].

Step 6 If r is smaller than Pv, update the position according to Eqs. (7) and (17).

Step 7 If r is greater than or equal to Pv, update the position according to Eqs. (16) and (17).

Step 8 Perform the quasi-reflection learning mechanism according to Eq. (18) to construct a new population.

Step 9 Update the position of the target individuals after calculating the fitness values of all grasshopper individuals.

Step 10 Check whether the stop conditions are met. If the conditions are met, the search is stopped, and the global optimal solution and fitness value are displayed. Otherwise, go to Step 4.
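
Putting the pieces together, the following Python sketch outlines Steps 1-10 under the stated assumptions. It reuses the circle_map_init, golden_sine_update, and quasi_reflect sketches from the preceding subsections and adds an illustrative goa_update for the original move of Eq. (7); details such as the mapping of distances into [1, 4] follow Sect. 2.1 rather than any reference implementation:

```python
import numpy as np

def s_func(r, f=0.5, l=1.5):
    """Social force of Eq. (3)."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa_update(X, i, c, best, lb, ub):
    """Illustrative version of the original GOA move, Eq. (7), for individual i."""
    total = np.zeros_like(X[i])
    for j in range(X.shape[0]):
        if j == i:
            continue
        dist = np.linalg.norm(X[j] - X[i])
        unit = (X[j] - X[i]) / (dist + 1e-12)
        mapped = 1.0 + np.mod(dist, 3.0)  # keep the distance in [1, 4], per Sect. 2.1
        total += c * (ub - lb) / 2.0 * s_func(mapped) * unit
    return c * total + best

def msigoa(fobj, lb, ub, N=30, dim=30, T=500, Pv=0.5, seed=0):
    rng = np.random.default_rng(seed)
    X = circle_map_init(N, dim, lb, ub, seed)         # Step 2: chaotic initialization
    fit = np.array([fobj(x) for x in X])              # Step 3: evaluate fitness
    best = X[np.argmin(fit)].copy()
    for t in range(1, T + 1):
        c = 3.0 ** (-16.0 * t**2 / T**2)              # Step 4: Eq. (15)
        for i in range(N):
            if rng.random() < Pv:                     # Steps 5-6: original GOA move
                x_new = goa_update(X, i, c, best, lb, ub)
            else:                                     # Step 7: golden sine move
                x_new = golden_sine_update(X[i], best, c, rng)
            x_new = np.clip(x_new, lb, ub)
            f_new = fobj(x_new)
            if f_new < fit[i]:                        # greedy selection, Eq. (17)
                X[i], fit[i] = x_new, f_new
        Xqr = np.clip(quasi_reflect(X, rng), lb, ub)  # Step 8: QRBL population
        fqr = np.array([fobj(x) for x in Xqr])
        pool = np.vstack([X, Xqr])
        fpool = np.concatenate([fit, fqr])
        keep = np.argsort(fpool)[:N]                  # retain the N fittest individuals
        X, fit = pool[keep].copy(), fpool[keep]
        best = X[np.argmin(fit)].copy()               # Step 9: update target individual
    return best, fit.min()
```

A boundary clip is applied after every move in this sketch, a detail the step list leaves implicit.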

The specific flowchart of MSIGOA is shown in Fig. 2.

Fig. 2
figure 2

Flowchart of MSIGOA

4 Experimental Simulation and Analysis

There are two parts to the experiments described in this paper: (1) comparing MSIGOA with several novel SI algorithms; (2) comparing MSIGOA with other modified GOAs. The experiments use 12 classical benchmark functions [57], of which F1–F6 are unimodal and F7–F12 are multimodal. Different types of test functions allow the optimization capacity of the algorithms to be examined more effectively. Table 1 displays the names, expressions, optimal values, domains, and dimensions of these test functions. Additionally, a comparison on the CEC2017 test functions was added to the first part of the experiments to make it more challenging.

Table 1 Benchmark functions

4.1 Experimental Environment

All algorithms were run on the same hardware and software platform to ensure the neutrality and impartiality of the testing experiments. The operating system is Windows 10 (64-bit), the hardware is an Intel(R) Core(TM) i5-8250U CPU (1.60–1.80 GHz), and the software is MATLAB R2018b.

4.2 Compare MSIGOA with Other SI Algorithms

Harris hawks optimization (HHO) [58], the dung beetle optimizer (DBO) [59], the butterfly optimization algorithm (BOA) [60], the whale optimization algorithm (WOA) [61], and the grasshopper optimization algorithm (GOA) are the five novel SI algorithms chosen in the first part of the experiment. The population size N and maximum number of iterations T for each algorithm are set at 30 and 500, respectively.

4.2.1 Compare MSIGOA with Other SI Algorithms on Classical Benchmark Functions

In this section, DBO, BOA, HHO, WOA, and GOA are chosen to perform comparative experiments with MSIGOA on the 12 benchmark test functions listed in Table 1 to verify the viability and efficacy of MSIGOA. Under the same conditions, the experimental results were evaluated through four performance indicators: maximum value, mean value, minimum value, and standard deviation. The maximum and minimum values represent the worst and best accuracy to which the algorithm converges, respectively. The mean value denotes the algorithm’s mean convergence accuracy, while the standard deviation signifies its stability and robustness. The statistical results of each algorithm after 30 independent runs on 12 test functions are displayed in Table 2. The convergence curves for each algorithm on 12 test functions are illustrated in Fig. 3.

Table 2 Comparisons of MSIGOA and other SI algorithms on classical test functions
Fig. 3
figure 3

Convergence curves of MSIGOA and other SI algorithms on classical test functions

It is evident from Table 2’s comparison findings between the MSIGOA and the other five algorithms that the MSIGOA is capable of reaching the theoretical best value on the unimodal test functions F1–F4, multimodal test functions F7–F9, and F11, and the standard deviation and mean accuracy are zero. Moreover, on the test function F10, the result of MSIGOA is quite near the theoretical best value, and the standard deviation is also zero. On most test functions, MSIGOA can reach or approach the theoretical best value, indicating that it has powerful local exploitation and global exploration capabilities and can promptly jump out and find the global optimal solution (theoretical best value) when falling into the local optima. At the same time, the standard deviation of MSIGOA on most test functions is zero, which indicates that the algorithm has strong stability and that the optimization results are not accidental.

Compared with these five algorithms, MSIGOA has better mean accuracy, standard deviation, worst accuracy, and best accuracy on the seven functions F1–F5 and F7–F8, and the optimization effect is obviously better than theirs. On the function F10, the mean accuracy, standard deviation, worst accuracy, and best accuracy obtained by MSIGOA are exactly the same as that of HHO and DBO and basically the same as WOA, but still far superior to BOA and GOA. On the functions F9 and F11, MSIGOA, DBO, HHO, and WOA can all reach the theoretical optimal value. Among them, the indicators of HHO and MSIGOA are identical. Although both WOA and DBO can achieve the theoretical best value, WOA sometimes sinks into local optima on F11, while DBO sometimes sinks into the local optima on F9. Therefore, their mean accuracy and other indicators are different from MSIGOA. For the function F6, HHO is superior to MSIGOA in mean accuracy, worst accuracy, and best accuracy, and the standard deviation is basically the same. MSIGOA is basically the same as WOA, DBO, and BOA in mean accuracy, standard deviation, worst accuracy, and best accuracy, but still better than GOA. For the function F12, HHO and DBO are superior to MSIGOA in the mean accuracy, worst accuracy, standard deviation, and optimal accuracy. MSIGOA is superior to WOA, BOA, and GOA.

The convergence curves of MSIGOA and the other five SI algorithms on the 12 benchmark functions are illustrated in Fig. 3 to demonstrate the convergence accuracy and speed of each algorithm more intuitively. The horizontal axis (iteration) and vertical axis (fitness) of the convergence graphs stand for the number of iterations and the fitness value, respectively. Figure 3 demonstrates that MSIGOA has the highest convergence accuracy and the fastest convergence speed on the test functions F1–F5 and F7–F8. In particular, on the functions F1 and F3, MSIGOA reaches the theoretical optimal value within 300 iterations. For the test functions F9–F11, if we do not consider the runs in which DBO and WOA sink into local optima, then DBO, WOA, and HHO have almost the same convergence accuracy as MSIGOA. But compared to these three algorithms, the convergence curve of MSIGOA drops faster, and the fitness value reaches or approaches the theoretical best value sooner. In addition, even though the optimization effect of MSIGOA on the test functions F6 and F12 is slightly worse than that of DBO and HHO, it is still better than the original GOA in terms of convergence accuracy and speed. In summary, MSIGOA reaches the theoretical optimal value on most test functions. This demonstrates that the algorithm has strong global exploration and local exploitation capabilities and can effectively escape from local optima. In addition, the convergence curves and standard deviations show that the algorithm has excellent convergence speed and stability. All of this fully demonstrates the effectiveness and feasibility of MSIGOA.

4.2.2 Comparison of MSIGOA with Other SI Algorithms on the CEC2017 Test Functions

This section compares MSIGOA with the five algorithms above, such as WOA and HHO, on the CEC2017 test functions to further validate the effectiveness and feasibility of MSIGOA in resolving intricate problems. CEC2017 contains 30 test functions (as shown in Table 3), divided into four categories: unimodal F1–F3, multimodal F4–F10, hybrid F11–F20, and composition F21–F30. However, F2 has been officially excluded from the CEC2017 suite owing to its unstable behavior, leaving 29 functions. The structure of the CEC2017 test functions is more intricate than that of the classical test functions, and it is difficult to discover the optimal solution [62]. For this reason, the maximum number of iterations of all algorithms is set to 1000, and the dimension of all test functions is set to 10 in consideration of computer performance and time costs. The experimental outcomes for MSIGOA and the five compared algorithms are displayed in Table 4, and the convergence curves are illustrated in Fig. 4.

Fig. 4
figure 4figure 4figure 4

Convergence curves of MSIGOA and other SI algorithms on CEC2017 test functions

Table 3 CEC2017 test functions

Table 4 displays the standard deviation and mean accuracy of MSIGOA and the five compared algorithms after 30 independent runs on the 29 test functions. The data in Table 4 show that MSIGOA has suboptimal accuracy on the 8 functions F5, F9–F10, F16, F19–F20, F23, and F25 and optimal accuracy on the 16 functions F1, F3–F4, F7–F8, F11–F15, F17–F18, F22, and F28–F30. Additionally, MSIGOA has better standard deviations than the five compared algorithms on the majority of the test functions. From this, it can be seen that MSIGOA obtains outstanding optimization outcomes on about 80% of the CEC2017 test functions and has good stability. The convergence curves of the six algorithms on all CEC2017 test functions are displayed in Fig. 4. It is evident from Fig. 4 that MSIGOA outperforms the other algorithms in convergence speed on over half of the test functions. For example, on the test functions F3, F4, F11, F14, and F22, although the accuracy of the five compared algorithms is close to or approximately equal to that of MSIGOA, the convergence speed of MSIGOA is faster. In summary, MSIGOA achieved better optimization results than the other five algorithms on more than half of the CEC2017 test functions, which sufficiently demonstrates the validity and feasibility of MSIGOA in resolving intricate problems.

Table 4 Comparisons of MSIGOA and other SI algorithms on CEC2017 test functions

4.3 Comparison with Other Modified Grasshopper Optimization Algorithms

To validate that MSIGOA is more competitive than other improved GOAs, the following GOAs were chosen for comparison experiments: EGOA [23], IGOA [24], CC–GOA [28], Crazy–GOA [30], and the original GOA. For the sake of experimental fairness, the maximum number of iterations Tmax, population size N, and problem dimension Dim are set at 500, 30, and 30, respectively. The above six algorithms were independently run 30 times on the 12 benchmark test functions in Table 1, and the corresponding worst and best convergence accuracy, standard deviation, and mean convergence accuracy were recorded. Table 5 shows the experimental data for each algorithm, and Fig. 5 displays their convergence curves.

Table 5 Comparisons of MSIGOA and other improved GOAs on classical test functions
Fig. 5
figure 5

Convergence curve of MSIGOA and other improved GOAs on classical test functions

In Table 5, it can be seen that CC–GOA, IGOA, and Crazy–GOA are superior to GOA in terms of mean accuracy, standard deviation, worst accuracy, and best accuracy for unimodal test functions F1–F6 and multimodal test functions F7–F12. As for EGOA, it performed slightly worse than GOA on the test functions F8 and F9 and slightly better than GOA on the other functions. From the perspective of the improvement effect, although CC–GOA, IGOA, and Crazy–GOA have enhanced the mean accuracy and other performance indicators, there is a big gap compared with the improvement effect of MSIGOA. Taking CC–GOA, which has the best optimization effect among the three algorithms, as an example, the best convergence accuracy of CC–GOA on the test functions F1–F4 and F7–F8 is far from reaching the theoretical best value. However, the best accuracy of MSIGOA on these six test functions can reach the theoretical best value, and the standard deviation and mean accuracy are zero. On the test functions F5 and F6, the mean accuracy, standard deviation, worst accuracy, and best accuracy obtained by MSIGOA are basically the same as those of CC–GOA, IGOA, and Crazy–GOA, but better than that of EGOA and GOA.

The convergence curves of each algorithm on different test functions are displayed in Fig. 5 to facilitate a more visual analysis. Figure 5 shows that the optimization performance of MSIGOA on all test functions is better than that of other modified grasshopper algorithms. On the functions F1–F4 and F7–F12, it is clear that MSIGOA has the highest convergence accuracy and the fastest convergence speed. Especially for the functions F9 and F11, the MSIGOA can reach the theoretical optimal value within 50 iterations. Finally, although the convergence accuracy of MSIGOA on the test functions F5 and F6 is basically the same as that of CC–GOA, IGOA, and Crazy–GOA, its convergence speed is better than these three algorithms. In summary, the convergence accuracy and speed of MSIGOA, CC–GOA, IGOA, and Crazy–GOA on 12 test functions are better than those of GOA. However, the performance of MSIGOA is much better than that of other improved GOAs. This fully demonstrates that MSIGOA is more competitive than other improved GOAs.

4.4 The Engineering Application of MSIGOA

To validate the effectiveness of MSIGOA when resolving actual problems, WOA, HHO, BOA, DBO, and Aquila optimizer (AO) [63] are selected to be compared with MSIGOA in six engineering problems, such as compression spring design, gear train design, and three-bar truss design. The population size and iterations of each algorithm are set to 30 and 500, respectively, and the optimal solutions obtained by each algorithm are compared after 30 independent runs.

4.4.1 Compression Spring Design

A common problem in mechanical engineering is the design of compression springs [64], where the objective is to minimize the spring's weight while meeting the requirements of minimal deflection, shear stress, flutter frequency, etc. Figure 6 depicts the general construction of the spring. Three design variables are involved in this problem: the diameter w of the spring wire, the number N of effective coils in the spring, and the average coil diameter W of the spring. The objective function and constraint conditions are as follows:

$$\min f(x) = x_{2} x_{1}^{2} (2 + x_{3} ), w = x_{1} , W = x_{2} , N = x_{3} ,$$
(19)

subject to

$$g_{1} (x) = 1 - \frac{{x_{3} x_{2}^{3} }}{{71,785x_{1}^{4} }} \le 0,$$
$$g_{2} (x) = \frac{1}{{5108x_{1}^{2} }} + \frac{{(4x_{2} - x_{1} )x_{2} }}{{12,566(x_{2} - x_{1} )x_{1}^{3} }} - 1 \le 0,$$
$$g_{3} (x) = 1 - \frac{{140.45x_{1} }}{{x_{3} x_{2}^{2} }} \le 0,$$
$$g_{4} \left( x \right) = \frac{{2(x_{1} + x_{2} )}}{3} - 1 \le 0,$$

where

$$0.05 \le x_{1} \le 2,$$
$$0.25 \le x_{2} \le 1.3,\quad 2 \le x_{3} \le 15.$$
Fig. 6
figure 6

Diagram of the compression spring
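
To make the formulation concrete, the sketch below evaluates the objective of Eq. (19) with a static penalty for constraint violations, which is one common way to let an unconstrained SI algorithm handle such problems; the penalty weight of 1e6 is an illustrative choice, not a setting from the paper:

```python
def spring_cost(x):
    """Objective of Eq. (19) plus a static penalty for constraints g1-g4."""
    x1, x2, x3 = x  # wire diameter w, coil diameter W, number of coils N
    f = x2 * x1**2 * (2 + x3)
    g = [
        1 - x2**3 * x3 / (71785 * x1**4),
        1 / (5108 * x1**2) + (4 * x2 - x1) * x2 / (12566 * (x2 - x1) * x1**3) - 1,
        1 - 140.45 * x1 / (x3 * x2**2),
        2 * (x1 + x2) / 3 - 1,
    ]
    # Only violated constraints (g > 0) contribute to the penalty term.
    return f + 1e6 * sum(max(0.0, gi) ** 2 for gi in g)
```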

Table 6 presents the compression spring design optimization outcomes for MSIGOA and the five compared algorithms. Comparing the optimization outcomes of each algorithm, it becomes evident that the design scheme obtained by MSIGOA is the best because it minimizes the mass of the spring. This means MSIGOA outperforms the other algorithms on the compression spring design problem.

Table 6 Comparison of the results between MSIGOA and other SI algorithms on the CSD problem

4.4.2 Gear Train Design

The gear train design problem is an unconstrained discrete design problem in mechanical engineering; the objective is to determine the number of teeth on each gear so that the transmission ratio is as close as possible to the required value of 1/6.931 [65]. The transmission ratio is the ratio between the angular velocities of the output and input shafts. Figure 7 depicts the general construction of the gear train. Each of the gears in the gear train has a different number of teeth, corresponding to four design variables \((T_{a} ,T_{b} ,T_{c} ,T_{d} )\). The problem can be described as:

$$\min f\left( x \right) = \left( {\frac{1}{6.931} - \frac{{x_{1} x_{2} }}{{x_{3} x_{4} }}} \right)^{2} , T_{a} = x_{1} , T_{b} = x_{2} , T_{c} = x_{3} , T_{d} = x_{4} ,$$
(20)

where

$$12 \le x_{1} , x_{2} , x_{3} , x_{4} \le 60.$$
Fig. 7
figure 7

Diagram of the gear train
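
Since the teeth counts are integers, candidate solutions must be discretized before evaluation; a minimal sketch of the objective in Eq. (20) with simple rounding is given below:

```python
def gear_ratio_error(x):
    """Objective of Eq. (20); teeth counts are integers, so candidate
    solutions are rounded before evaluation in this sketch."""
    x1, x2, x3, x4 = (round(v) for v in x)
    return (1 / 6.931 - (x1 * x2) / (x3 * x4)) ** 2

# A well-known good combination from the literature on this problem:
print(gear_ratio_error([19, 16, 43, 49]))  # ~2.7e-12
```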

Table 7 presents the gear train design optimization outcomes for MSIGOA and the five compared algorithms. The statistical findings in Table 7 indicate that MSIGOA, WOA, HHO, and AO outperform the other comparative algorithms: the gear trains designed from the teeth combinations obtained by these four algorithms come closest to the required transmission ratio. It follows that the majority of advanced algorithms can effectively handle this unconstrained engineering design problem.

Table 7 Comparison of the results between MSIGOA and other SI algorithms on the GTD problem

4.4.3 Three-Bar Truss Design

One characteristic nonlinear structural design problem in civil engineering is the design of a three-bar truss structure [66]. The objective is to minimize the volume of the three-bar truss by determining the ideal cross-sectional area combination of the truss members while satisfying constraints like buckling, stress, and deflection. Figure 8 depicts the general construction of the three-bar truss. Since the construction of the three-bar truss is symmetrical, there are only two design variables in this problem, namely the cross-sectional areas A1 of bar 1 and A2 of bar 2. The mathematical description of this problem is shown below:

$$\min f(x) = (x_{2} + 2\sqrt 2 x_{1} )L, A_{1} = x_{1} , A_{2} = x_{2} ,$$
(21)

subject to

$$g_{1} (x) = \frac{{P(x_{2} + \sqrt 2 x_{1} )}}{{2x_{1} x_{2} + \sqrt 2 x_{1}^{2} }} - \sigma \le 0,$$
$$g_{2} (x) = \frac{{Px_{2} }}{{2x_{1} x_{2} + \sqrt 2 x_{1}^{2} }} - \sigma \le 0,$$
$$g_{3} (x) = \frac{P}{{x_{1} + \sqrt 2 x_{2} }} - \sigma \le 0,$$

where

$$L = 100\;{\text{cm}},\;P = \sigma = 2\;{\text{kN/cm}}^{2} ,$$
$$0 \le x_{1} , x_{2} \le 1.$$
Fig. 8
figure 8

Diagram of the three-bar truss

Table 8 presents the three-bar truss design optimization outcomes for MSIGOA and the five compared algorithms. Comparing the optimization outcomes of each algorithm, it becomes evident that the design scheme obtained by MSIGOA is the best because it minimizes the volume of the three-bar truss. This means MSIGOA outperforms the comparative algorithms on the three-bar truss design problem.

Table 8 Comparison of the results between MSIGOA and other SI algorithms on the TBTD problem

4.4.4 Welded Beam Design

The intention of the welded beam design problem is to minimize the fabrication cost, subject to constraints on buckling load, shear stress, bending stress in the beam, and end deflection of the beam [67]. Figure 9 depicts the general construction of the welded beam. The weld thickness h, clamped beam length \(l\), beam height t, and beam thickness b are the design variables of this problem. The objective function and constraint conditions are as follows:

$$\min f(x) = 0.04811x_{4} x_{3} (x_{2} + 14) + 1.10471x_{2} x_{1}^{2} , \; h = x_{1} , l = x_{2} , \; t = x_{3} , \;b = x_{4} ,$$
(22)

subject to

$$g_{1} (x) = - x_{4} + x_{1} \le 0,$$
$$g_{2} (x) = - x_{1} + 0.125 \le 0,$$
$$g_{3} (x) = P - P_{b} (x) \le 0,$$
$$g_{4} (x) = \sigma_{s} (x) - \sigma_{sm} \le 0,$$
$$g_{5} (x) = \beta (x) - \beta_{m} \le 0,$$
$$g_{6} \left( x \right) = \sigma_{b} (x) - \sigma_{bm} \le 0,$$
$$g_{7} (x) = 0.04811x_{4} x_{3} (x_{2} + 14) + 1.10471x_{1}^{2} - 5 \le 0,$$

where

$$P_{b} (x) = \left( {1 - \sqrt{\frac{E}{4G}} \frac{{x_{3} }}{2L}} \right)\frac{{\sqrt {\frac{{x_{4}^{6} x_{3}^{2} }}{36}} 4.013E}}{{L^{2} }}, \;G = 1.2 \times 10^{7} \;{\text{psi}},$$
$$\beta (x) = \frac{{6L^{3} P}}{{Ex_{4} x_{3}^{2} }},\; \sigma_{b} (x) = \frac{6LP}{{x_{3}^{2} x_{4} }}, \;E = 3 \times 10^{7} \;{\text{psi}}, \; P = 6000\;{\text{lb}},$$
$$\sigma_{s} (x) = \sqrt {(\sigma_{s}^{{\prime\prime}} )^{2} + (\sigma_{s}^{\prime} )^{2} + 2\sigma_{s}^{{\prime\prime}} \sigma_{s}^{\prime} \frac{{x_{2} }}{2R}} , \sigma_{s}^{{\prime\prime}} = \frac{MR}{J}, \sigma_{s}^{\prime} = \frac{P}{{\sqrt 2 x_{2} x_{1} }},$$
$$J = 2\left\{ {\sqrt 2 x_{2} x_{1} \left[ {\left( {\frac{{x_{3} + x_{1} }}{2}} \right)^{2} + \frac{{x_{2}^{2} }}{4}} \right]} \right\}, \; R = \sqrt {\left( {\frac{{x_{3} + x_{1} }}{2}} \right)^{2} + \frac{{x_{2}^{2} }}{4}} , \; M = P\left( {\frac{{x_{2} }}{2} + L} \right),$$
$$\sigma_{sm} = 13{,}600\,{\text{psi}}, \; \beta_{m} = 0.25\;{\text{in}}, \sigma_{bm} = 30{,}000\;{\text{psi}},\;L = 14\;{\text{in}},$$
$$0.1 \le x_{1} , \; x_{4 } \le 2, \; 0.1 \le x_{2} ,\; x_{3} \le 10.$$
Fig. 9
figure 9

Diagram of the welded beam

Table 9 presents the welded beam design optimization outcomes for MSIGOA and the five compared algorithms. Upon comparing the optimization results of all algorithms, it is apparent that the design scheme acquired by MSIGOA is the best one, as its cost is the lowest. Consequently, MSIGOA is capable of handling the welded beam design problem efficiently.

Table 9 Comparison of the results between MSIGOA and other SI algorithms on the WBD problem

4.4.5 Corrugated Bulkhead Design

The objective of this problem is to minimize the weight of the corrugated bulkhead of the tanker while the corresponding constraints are met [68]. The plate thickness t, depth d, width w, and length l are the design variables for this problem. The problem can be described as:

$$\min f\left( x \right) = \frac{{5.885(x_{3} + x_{1} )x_{4} }}{{x_{1} + \sqrt {\left| {x_{3}^{2} - x_{2}^{2} } \right|} }}, x_{1} = w, x_{2} = d, x_{3} = l, x_{4} = t,$$
(23)

subject to

$$g_{1} (x) = x_{2} - x_{3} \le 0,$$
$$g_{2} (x) = 1.05 - x_{4} \le 0,$$
$$g_{3} (x) = 0.15 + 0.0156x_{3} - x_{4} \le 0,$$
$$g_{4} (x) = 0.15 + 0.0156x_{1} - x_{4} \le 0,$$
$$g_{5} (x) = 8.94\left( {\sqrt {\left| {x_{3}^{2} - x_{2}^{2} } \right|} + x_{1} } \right) - x_{2} x_{4} \left( {\frac{{x_{3} }}{6} + 0.4x_{1} } \right) \le 0,$$
$$g_{6} (x) = 2.2\left( {8.94\left( {\sqrt {\left| {x_{3}^{2} - x_{2}^{2} } \right|} + x_{1} } \right)} \right)^{4/3} - x_{2}^{2} x_{4} \left( {\frac{{x_{3} }}{12} + 0.2x_{1} } \right) \le 0,$$

where

$$0 \le x_{1} , x_{2} , x_{3} \le 100,$$
$$0 \le x_{4} \le 5.$$

Table 10 presents the corrugated bulkhead design optimization outcomes for MSIGOA and the five compared algorithms. It is clear from comparing the optimization results of each algorithm that the design scheme attained by MSIGOA is the best one, since it minimizes the weight of the corrugated bulkhead. This means MSIGOA outperforms the comparative algorithms on the corrugated bulkhead design problem.

Table 10 Comparison of the results between MSIGOA and other SI algorithms on the CBD problem

4.4.6 Tubular Column Design

This engineering problem aims to produce tubular columns with homogeneous sections from particular materials [69]. The tubular column must sustain a given compressive load while keeping the production cost as low as possible. Figure 10 depicts the general construction of the tubular column. The average diameter d of the column and the thickness t of the tube are the design variables for this problem. The specific mathematical expressions are as follows:

$$\min f(x) = 2x_{1} + 9.8x_{2} x_{1} , x_{1} = d, x_{2} = t,$$
(24)

subject to

$$g_{1} (x) = \frac{0.2}{{x_{2} }} - 1 \le 0,$$
$$g_{2} (x) = \frac{2}{{x_{1} }} - 1 \le 0,$$
$$g_{3} (x) = \frac{{x_{1} }}{14} - 1 \le 0,$$
$$g_{4} (x) = \frac{{x_{2} }}{0.8} - 1 \le 0,$$
$$g_{5} (x) = \frac{{8L^{2} P}}{{E\pi^{3} x_{2} x_{1} (x_{2}^{2} + x_{1}^{2} )}} - 1 \le 0,$$
$$g_{6} (x) = \frac{P}{{\pi \sigma_{y} x_{2} x_{1} }} - 1 \le 0,$$

where

$$P = 2{,}500\;{\text{kgf}},\;\sigma_{y} = 500\;{\text{kgf/cm}}^{2} ,$$
$$E = 8.5 \times 10^{5} \;{\text{kgf/cm}}^{2} ,\;L = 250\;{\text{cm}},$$
$$2 \le x_{1} \le 14, 0.2 \le x_{2} \le 0.8.$$
Fig. 10
figure 10

Diagram of the tubular column

Table 11 presents the tubular column design optimization outcomes for MSIGOA and the five compared algorithms. It is clear from comparing the optimized outcomes of each algorithm that the design scheme produced by MSIGOA is the best one, since it minimizes the production cost of the tubular columns. This indicates that MSIGOA performs better than the other algorithms on the tubular column design problem.

Table 11 Comparison of results between MSIGOA and other SI algorithms on the TCD problem

5 Conclusions

This paper presents a multi-strategy improved grasshopper optimization algorithm named MSIGOA to overcome the drawbacks of the original grasshopper optimization algorithm, such as slow convergence, susceptibility to local optima, and low accuracy. Firstly, circle mapping is used to initialize the population, making the population distribution more uniform and more diverse. Secondly, a nonlinear decreasing coefficient replaces the original linear decreasing coefficient to meet the needs of the algorithm at different stages and improve both local exploitation and global exploration capabilities. Thirdly, a modified golden sine mechanism is added during the position update stage to diversify the single position update mode of the GOA and enhance the local exploitation capability. Fourthly, a greedy strategy is added to select between the new and old positions of each individual, retaining the better position and increasing the convergence speed. Finally, the quasi-reflection-based learning mechanism is utilized to construct new populations to improve population diversity and the capability to escape from local optima.

To examine the performance of the proposed MSIGOA, comprehensive comparative experiments were conducted with the original GOA and other advanced algorithms on 12 classical test functions and the CEC2017 test functions. The results reveal that MSIGOA has stronger comprehensive optimization capabilities and is superior to the compared algorithms in terms of search ability, convergence speed, and stability. In addition, the MSIGOA solves six engineering optimization problems: compression spring design, gear train design, three-bar truss design, welded beam design, tubular column design, and corrugated bulkhead design. The experimental outcomes show that the MSIGOA achieves the best results on all design problems and can provide better design solutions than the compared algorithms, making it more competitive. It is worth noting that the MSIGOA does not achieve the best results on all test functions; on some functions, its results are equal to or slightly worse than those of the compared algorithms. Therefore, MSIGOA still has room for further improvement.

In future work, the proposed MSIGOA will be applied to practical problems such as wind power forecasting and UAV path planning. For example, one of our ongoing projects is to apply MSIGOA to ultra-short-term wind power forecasting, and some progress has been made. In addition, another research direction worth exploring is applying MSIGOA to the maximum power point tracking of photovoltaic power generation or further enhancing the performance of MSIGOA by introducing new improvement strategies.