1 Introduction

Many real-life problems can be formulated as optimization problems subject to complex constraints. These problems often involve multiple constraints and massive computation, so traditional optimization approaches cannot cope with them [1]. To overcome the limitations of traditional methods, such as converging only to local optima and relying heavily on gradient information of the objective function, researchers have proposed a new class of optimization methods: metaheuristic algorithms. Metaheuristic algorithms are derivative-free, efficient, and computationally cheap. They provide an effective way to tackle NP-hard problems by obtaining an optimal or near-optimal solution within an acceptable running time.

According to the no free lunch (NFL) theorem [2], no single optimization algorithm can solve all constrained optimization problems, so it is essential to constantly explore new algorithms and improve existing ones. In recent decades, a variety of flexible and versatile metaheuristic algorithms have been put forward to solve the increasingly complicated optimization problems arising in various areas. For example, standard algorithms proposed by early researchers include: the genetic algorithm (GA) derived from the law of natural selection in the biosphere [3], the simulated annealing algorithm (SA) based on the solid annealing principle [4], the ant colony optimization algorithm (ACO) inspired by the foraging paths of ants [5], the particle swarm optimization algorithm (PSO) proposed by simulating the foraging behavior of bird flocks [6], the gravitational search algorithm (GSA) inspired by the law of gravity and Newton's second law [7], and the cuckoo search algorithm (CS) proposed by simulating the breeding behavior of cuckoos [8]. In recent years, many new standard optimization algorithms have been developed on the basis of previous studies. For instance, the grey wolf optimization algorithm (GWO) is based on the hierarchy and predation behavior of grey wolf packs [9], the ant lion optimization algorithm (ALO) simulates the mechanism of ant lions preying on ants [10], the whale optimization algorithm (WOA) is based on the mechanism by which whales encircle their prey [11], the sparrow search algorithm (SSA) simulates the foraging and anti-predation behaviors of sparrows [12], and the butterfly optimization algorithm (BOA) is based on the nectar-foraging and mating behaviors of butterflies in nature [13].

Besides, researchers have put forward different strategies to enhance the optimization performance of the standard algorithms. For instance, by introducing the Nelder–Mead local search mechanism into the WOA, an algorithm named HWOANM was proposed to speed up the convergence of the WOA and to solve engineering optimization problems [14]. A hybrid algorithm (HPSOGWO) was proposed by effectively combining PSO and GWO [15]. The negative correlation search algorithm was introduced into GSA to diversify its search behaviors, and the test results show that the optimization accuracy of GSA is enhanced [16]. A memory-based grey wolf optimization algorithm (mGWO) was proposed to enhance the ability to balance global and local search [17]. To overcome the disadvantages of the sine cosine algorithm (SCA), Ref. [18] proposed the m-SCA, which introduces an opposition-based learning strategy and a self-adaptation strategy.

The seagull optimization algorithm (SOA), proposed by Dhiman and Kumar, simulates the migration and attacking behaviors of seagulls [19]. The SOA has the merits of few parameters, easy and flexible adjustment, and simple implementation. It has been studied and improved by many scholars and applied to different fields. Like other optimization algorithms, however, it suffers from low population diversity, a tendency to plunge into local optima, and slow convergence [20].

In the literature, the SOA has been investigated mainly from two aspects: one is to study the optimization process of the SOA and improve the algorithm by changing the population initialization, parameters, convergence factor or position updating method; the other is to apply the SOA to the parameter optimization of other algorithms or to practical fields. For example, the SOA was improved by replacing the convergence factor with a nonlinear dynamic factor and using a Lévy flight mechanism to increase randomness [21]. The parameters and attack angle \(\uptheta \) of the SOA were adjusted and a dynamic opposition-based learning strategy was utilized, with an application to a PID controller model [22]. A Lévy flight mechanism was added to the search method of the SOA to improve the convergence rate, and the result was applied to an optimization model of the PEMFC system [23]. Ref. [24] combined the shrinking encircling mechanism of the WOA with the spiral search mode of the SOA to avoid premature convergence and improve convergence accuracy. Three strategies were introduced to improve the convergence capability of the SOA, with an application to blind source separation [25]. The SOA was utilized to calculate the threshold points for Otsu image threshold segmentation to achieve a better segmentation effect [26]. The heat transfer formula of the thermal exchange optimization algorithm (TEO) was used to improve the attack formula of the SOA, providing solutions to feature selection problems [27]. A natural selection mechanism was introduced into the SOA to avoid trapping in local optima, and the resulting algorithm was used to solve dynamic optimization problems together with the unequal division method [28].

Although these studies enhanced the performance of the SOA to some extent, most of them did not address the drawbacks of the SOA comprehensively. Therefore, this paper proposes hybrid strategies to enhance the convergence of the SOA, targeting its three main drawbacks: slow convergence, a poor ability to balance global and local search, and a single search mode. Firstly, the helical coefficient of the SOA is improved so that the helical radius of the seagulls' attack changes with the number of iterations, enhancing the local search ability; secondly, an adaptive weight factor is added to the position updating method to balance global and local search during the optimization process; finally, a chaotic local search strategy is used to update the seagull positions a second time to prevent falling into local optima. In the experimental simulation, 12 test functions are utilized. Comparisons of solution results and convergence curves with other recent related optimization algorithms show that the ISOA has stronger searching ability and faster convergence. In addition, four constrained engineering optimization problems are solved by the ISOA. The results indicate that the proposed algorithm (ISOA) is strongly competitive with other algorithms.

The rest of this paper is structured as follows: Sect. 2 describes the principle and optimization mechanism of the standard SOA, and analyzes the problems existing in the SOA; Sect. 3 puts forward the improvement strategies and describes the optimization procedure of the ISOA; In Sect. 4, experimental comparative analysis is carried out to validate the performance of the ISOA; Sect. 5 applies the ISOA to four engineering design problems; Sect. 6 concludes the paper, and proposes further research.

2 Seagull Optimization Algorithm and Shortcomings

The main inspiration for the SOA is the migration and attacking behavior of seagulls in nature. Migration is the seasonal movement of seagulls in search of the richest food sources to provide sufficient energy. During migration, seagulls must avoid colliding with each other, and each seagull updates its position with respect to the best individual in the population. Seagulls then attack prey by moving through the air in a spiral trajectory. Migration corresponds to the global exploration phase of the SOA, while attacking denotes the local exploitation phase. By imitating these two behaviors, the SOA constantly adjusts the positions of the seagulls to seek an optimal solution.

2.1 Migration Behavior

During the migration phase, the algorithm simulates how a flock of seagulls moves from one location to another. Every search agent needs to meet three conditions: avoid collisions with other agents, move in the direction of the best search agent, and update its position relative to the best search agent.

1. Avoid collision: To avoid mutual collisions, the variable A is introduced into the algorithm to compute the updated position of each search agent.

$${\overrightarrow{C}}_{s}=A\times {\overrightarrow{P}}_{s}(t)$$
(1)

where \({\overrightarrow{C}}_{s}\) represents the position at which the seagull does not collide with other individuals; \({\overrightarrow{P}}_{s}(t)\) is the current position of the search agent; \({\text{t}}\) indicates the current iteration; \({\text{A}}\) represents the movement behavior of the seagull in the feasible region.

$${\text{A}}={f}_{c}-\left(t\times \frac{{f}_{c}}{{T}_{maxitera}}\right)$$
(2)

where \({T}_{maxitera}\) is the maximum number of iterations, \(t=1,2,\ldots,{T}_{maxitera}\); the hyperparameter \({f}_{c}\), set to 2, controls the magnitude of the variable A, which decreases linearly from \({f}_{c}\) to 0.

2. Move in the direction of the best search agent: After avoiding collisions, the seagulls move toward the best search agent.

$${\overrightarrow{M}}_{s}=B\times ({\overrightarrow{P}}_{best}(t)-{\overrightarrow{P}}_{s}(t))$$
(3)
$${\text{B}}=2\times {{\text{A}}}^{2}\times {\text{rd}}$$
(4)

where \({\overrightarrow{M}}_{s}\) represents the movement of the seagull toward the best search agent; \({\overrightarrow{P}}_{best}(t)\) is the current position of the best search agent, i.e., the one with the smallest fitness value; \({\text{B}}\) is a random number used to balance global exploration and local exploitation; \({\text{rd}}\) is a random number in [0, 1].

3. Update the position with the best search agent: After the convergence direction is determined, the seagull constantly approaches the best search agent.

$${\overrightarrow{D}}_{s}=|{\overrightarrow{C}}_{s}+{\overrightarrow{M}}_{s}|$$
(5)

where \({\overrightarrow{D}}_{s}\) denotes the distance between the search agent and the best one.
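To make the migration phase concrete, the following is a minimal Python sketch of Eqs. (1)–(5) for a single search agent. The function name, array-based representation and random-number generator are our own illustrative choices, not part of the original algorithm description.

```python
import numpy as np

def migration_step(P_s, P_best, t, T_max, fc=2.0, rng=np.random.default_rng()):
    """Migration phase of the standard SOA, Eqs. (1)-(5).

    P_s: current position of one search agent (1-D array);
    P_best: position of the best search agent; t: current iteration.
    """
    A = fc - t * (fc / T_max)       # Eq. (2): decreases linearly from fc to 0
    C_s = A * P_s                   # Eq. (1): collision-avoided position
    B = 2 * A**2 * rng.random()     # Eq. (4): random balance coefficient
    M_s = B * (P_best - P_s)        # Eq. (3): move toward the best agent
    return np.abs(C_s + M_s)        # Eq. (5): distance D_s to the best agent
```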

2.2 Attack Behavior

During the attack, seagulls constantly change their angle and speed, using their wings and weight to maintain their hovering altitude. When attacking prey, they move through the air in a spiral motion. This behavior is described in terms of the x, y and z coordinates as follows:

$${\text{x}}={\text{r}}\times {\text{cos}}(\uptheta )$$
(6)
$${\text{y}}={\text{r}}\times {\text{sin}}(\uptheta )$$
(7)
$${\text{z}}={\text{r}}\times\uptheta $$
(8)
$${\text{r}}={\text{u}}\times {{\text{e}}}^{\mathrm{\theta v}}$$
(9)

where \({\text{r}}\) is the radius of each turn of the spiral; \(\uptheta \) is a random angle in \([0, 2\pi ]\); \({\text{u}}\) and \({\text{v}}\) are constants; e is the base of the natural logarithm. In the standard SOA, u and v are both set to 1. Equations (6)–(9) are used to compute the updated position of a seagull as shown below:

$${\overrightarrow{P}}_{s}(t+1)=({\overrightarrow{D}}_{s}\times x\times y\times z)+{\overrightarrow{P}}_{best}(t)$$
(10)

where \({\overrightarrow{P}}_{s}\left(t+1\right)\) is the updated position of the search agents.
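Under the same illustrative assumptions as in the migration sketch, the spiral attack of Eqs. (6)–(10) can be written as follows; in the standard SOA, u = v = 1.

```python
def attack_step(D_s, P_best, u=1.0, v=1.0, rng=np.random.default_rng()):
    """Attack phase of the standard SOA, Eqs. (6)-(10)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)   # random angle in [0, 2*pi]
    r = u * np.exp(theta * v)               # Eq. (9): spiral radius
    x = r * np.cos(theta)                   # Eq. (6)
    y = r * np.sin(theta)                   # Eq. (7)
    z = r * theta                           # Eq. (8)
    return D_s * x * y * z + P_best         # Eq. (10): updated position
```

One full SOA iteration applies `migration_step` followed by `attack_step` to every search agent and then updates the best agent.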

2.3 Shortcomings

The SOA has the following shortcomings:

(a) Slow convergence. According to the position updating method of the SOA, the spiral radius \({\text{r}}\) determines the size of the seagulls' search range. However, r is determined by the coefficients \({\text{u}}\) and \({\text{v}}\), which are constants. As a result, the search radius remains too large in the later stage, causing oscillation near the optimal solution and preventing fast convergence.

(b) Poor ability to balance global and local search. From Eq. (10), the position of the current best individual strongly influences the positions of the seagulls, and this influence should vary across the stages of the algorithm. However, in the standard SOA, the weight given to the best individual is always 1 in both the early and late stages, which leads to a poor balance between global and local search.

(c) A single search mode, which causes premature convergence to local optima. The standard SOA has only one position updating method, so search agents can only search in one way, reducing the diversity of the population. This makes the algorithm susceptible to local optima, particularly on multi-peak test functions.

3 Improved Seagull Optimization Algorithm

To overcome these drawbacks of the SOA, this paper proposes three optimization strategies: the spiral coefficient v is improved by using the hyperbolic tangent function (Tanh) to speed up convergence; an adaptive weight factor is introduced to strengthen the ability to balance global and local search; and a chaotic local search strategy is introduced to diversify the search and improve convergence accuracy. Finally, the solving procedure of the ISOA is provided.

3.1 Improvement of Helical Coefficient \(\mathbf{v}\)

In the iterative process of the SOA, when a search agent launches the attack behavior, the spiral radius r determines the size of the attack range and thus has a strong impact on the optimization accuracy of the SOA. According to Eq. (9), the spiral radius r is determined by the spiral coefficients u and v. In the standard SOA, u and v are both set to 1, so the size of the spiral shape is fixed and cannot be adjusted over the iterations. In particular, in the later stage of the algorithm the spiral radius cannot decrease, which may even prevent convergence to the optimum. Therefore, the Tanh function is introduced to improve the helical coefficient v.

Figure 1 shows the curve of the Tanh function expressed by Eq. (11), indicating that Tanh is continuous and increasing. Equation (12) gives the improved spiral coefficient v, obtained by scaling and translating the Tanh function. From Fig. 2, it can be seen that the value of v decreases gradually as the iterations proceed. In this way, in the early stage of the algorithm the search agents can search with a large radius, enhancing the global exploration ability; in the later stage, the spiral radius decreases gradually, which allows the algorithm to converge rapidly to the optimal solution and enhances the local search ability.

Fig. 1

The Tanh function

Fig. 2

Iterative curve of the helical coefficient v

$${\text{Tanh}}(x)=\frac{2}{1+{{\text{e}}}^{-2x}}-1$$
(11)
$${\text{v}}=\frac{2}{1+{{\text{e}}}^{(2{\text{t}}/{T}_{maxitera})}}$$
(12)

where \({T}_{maxitera}\) is the maximum number of iterations and \(t=1,2,\ldots,{T}_{maxitera}\).
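For reference, Eq. (12) can be evaluated as in the short sketch below (the function name is ours). With \({T}_{maxitera}=500\), v decreases from about 0.998 at t = 1 to about 0.238 at t = 500.

```python
def spiral_coeff_v(t, T_max):
    """Improved helical coefficient v, Eq. (12): monotonically decreasing in t."""
    return 2.0 / (1.0 + np.exp(2.0 * t / T_max))
```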

3.2 Adaptive Weight Factor Strategy

The adaptive weight factor is one of the essential tools used in optimization algorithms to balance global exploration and local exploitation [29,30,31]. As illustrated in Sect. 2.3, the current best individual should have a greater influence in the early stage of the SOA search than in the late stage. Therefore, a larger weight should be given to the current best search agent in the early stage to speed up convergence to the neighborhood of the global optimum; in the late stage, a smaller weight should be chosen to avoid falling into a local optimum caused by excessively fast convergence, thus enhancing the local search capability. Accordingly, the adaptive weight factor expressed by Eq. (13) is used to improve the position updating method, as shown in Eq. (14).

$$\upomega ({\text{t}})=2\times \left(1-{\text{cos}}\left(\frac{\uppi }{2}\times \left(1-\frac{{\text{t}}}{{T}_{maxitera}}\right)\right)\right)$$
(13)
$${\overrightarrow{P}}_{s}(t+1)=({\overrightarrow{D}}_{s}\times x\times y\times z)+\upomega \times {\overrightarrow{P}}_{best}(t)$$
(14)

where, \({\overrightarrow{D}}_{s}\), \(x\), \(y\), \(z\) and \({\overrightarrow{P}}_{best}(t)\) have the same meaning as in Eq. (10).
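A minimal sketch of Eqs. (13)–(14), reusing the quantities computed in the migration and attack sketches above (the function names are again ours):

```python
def adaptive_weight(t, T_max):
    """Adaptive weight factor, Eq. (13): decreases from 2 (t = 0) to 0 (t = T_max)."""
    return 2.0 * (1.0 - np.cos(np.pi / 2.0 * (1.0 - t / T_max)))

def weighted_update(D_s, x, y, z, P_best, t, T_max):
    """Improved position update, Eq. (14): the best individual is weighted by w(t)."""
    return D_s * x * y * z + adaptive_weight(t, T_max) * P_best
```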

The iterative curve of the adaptive weight ω(t) is shown in Fig. 3a, which indicates that the value of \(\upomega \) decreases from 2 to 0. The weight given to the best individual in the early stage is greater than 1, while in the late stage it decreases rapidly and gradually approaches 0. To further verify the validity of the proposed adaptive weight factor \(\upomega \), a comparison of \(\upomega \), \({\upomega }_{1}\) and \({\upomega }_{2}\) is made in what follows, where \({\upomega }_{1}\) and \({\upomega }_{2}\) are the weight factors proposed in [30] and [31], expressed by Eqs. (15) and (16) respectively and shown in Fig. 3a.

Fig. 3

Comparison of different weights \(\upomega \),\({\upomega }_{1}\),\({\upomega }_{2}\)

$${\upomega }_{1}({\text{t}})={\text{sin}}\left(\uppi +\frac{{\text{t}}}{2{T}_{maxitera}}\times\uppi \right)+\frac{({\upomega }_{{\text{max}}}+{\upomega }_{{\text{min}}})}{2}$$
(15)
$${\upomega }_{2}({\text{t}})=\frac{{{\text{e}}}^{2(1-{\text{t}}/{T}_{maxitera})}-{{\text{e}}}^{-2(1-{\text{t}}/{T}_{maxitera})}}{{{\text{e}}}^{2(1-{\text{t}}/{T}_{maxitera})}+{{\text{e}}}^{-2(1-{\text{t}}/{T}_{maxitera})}}$$
(16)

where, the values of \({\upomega }_{{\text{max}}}\) and \({\upomega }_{{\text{min}}}\) are the same as in Ref. [30].

When Eq. (10) in the SOA is replaced by Eq. (14), we obtain a new algorithm named SOA-\(\upomega \). Similarly, SOA-\({\omega }_{1}\) and SOA-\({\omega }_{2}\) are the algorithms obtained by replacing \(\upomega \) in Eq. (14) with \({\upomega }_{1}\) and \({\upomega }_{2}\) respectively. Three test functions, F1, F4 and F7, are randomly selected for a convergence comparison of SOA-\(\upomega \), SOA-\({\omega }_{1}\) and SOA-\({\omega }_{2}\). As can be seen from Fig. 3b and c, SOA-\(\upomega \) gets nearer to the optimal solution than the other two, i.e., it has higher convergence accuracy. According to Fig. 3d, although SOA-\(\upomega \) converges to the same optimal solution as SOA-\({\omega }_{1}\) and SOA-\({\omega }_{2}\), it has the fastest convergence rate. In summary, the proposed adaptive weight factor is effective for balancing the global and local search of the SOA.

3.3 Chaotic Local Search Strategy

The chaotic local search strategy uses a chaotic system to generate chaotic variables. Owing to the randomness and uniform distribution of chaotic variables, an algorithm can perform a secondary search near the best individual, which reduces the possibility of plunging into a local optimum. The chaotic local search strategy has been introduced into other algorithms, and experiments have shown that it can effectively improve their performance [32, 33]. The standard SOA has only the spiral position search mode, which both reduces the diversity of the search agents and limits the search scope; it therefore easily falls into local optima, especially on multi-peak functions. To overcome this limitation of the single search mode, an improved chaotic local search strategy is introduced into the search process. Equations (17)–(18) give the mathematical formulation of the improved chaotic local search.

$${X}^{\prime}(t)={\overrightarrow{X}}_{best}(t)+(z(t)-0.5)\times r(t)\times (ub-lb)$$
(17)
$$r(t+1)=r(t)\times 0.998$$
(18)

where \({X}^{\prime}(t)\) is the position of an individual; \({\text{z}}({\text{t}})\) represents the chaotic variable generated by the chaotic system; \({\overrightarrow{X}}_{best}(t)\) is the current position of the best individual; \({\text{ub}}\) and \({\text{lb}}\) are the boundaries of the search space; and \(r(t)\) is the radius of the chaotic search, updated by Eq. (18) with an initial value of 0.01. In this paper, the strategy is adapted and applied to the position updating method of the SOA to form a secondary updating mechanism, as shown in Eqs. (19)–(21), so as to increase the diversity of the population.

$${P}_{s}^{\prime}(t)={\overrightarrow{P}}_{s}(t)+(z(t)-0.5)\times (r(t)\times (ub-lb))$$
(19)
$${\text{z}}({\text{t}})=\upmu \times z(t-1)\times (1-z(t-1))$$
(20)

where \({P}_{s}^{\prime}(t)\) is the position of a seagull obtained by the improved chaotic local search, and \({\overrightarrow{P}}_{s}(t)\) is the position obtained by Eq. (14). The logistic chaotic map is adopted in this paper, as shown in Eq. (20), where \({\text{z}}({\text{t}})\) is the chaotic variable with initial value 0.152 and \(\upmu \) is set to 4. Finally, the position \({P}_{s}^{\prime}(t)\) is compared with \({\overrightarrow{P}}_{s}(t)\) in terms of fitness: if the fitness of \({P}_{s}^{\prime}(t)\) is smaller than that of \({\overrightarrow{P}}_{s}(t)\), the position \({P}_{s}^{\prime}(t)\) is kept; otherwise it is abandoned, as expressed by Eq. (21).

$$P({\text{t}})=\left\{\begin{array}{l}{P}_{s}^{\prime}(t), \quad if\;fit({P}_{s}^{\prime}(t))<fit({\overrightarrow{P}}_{s}(t))\\ {\overrightarrow{P}}_{s}(t),\quad otherwise \end{array}\right.$$
(21)
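The secondary update of Eqs. (18)–(21) can be sketched as follows, with the initial values z = 0.152 and r = 0.01 stated above. The greedy acceptance test and the fitness-function argument `fit` are written out explicitly here as our own illustrative interface.

```python
def chaotic_local_search(P_s, fit, z, r, lb, ub):
    """Secondary update via logistic chaotic local search, Eqs. (18)-(21).

    P_s: position from Eq. (14); fit: fitness function; z: logistic-map state
    (initially 0.152); r: chaotic search radius (initially 0.01).
    Returns the accepted position and the updated (z, r) state.
    """
    z = 4.0 * z * (1.0 - z)                  # Eq. (20): logistic map with mu = 4
    P_new = P_s + (z - 0.5) * r * (ub - lb)  # Eq. (19): chaotic perturbation
    r *= 0.998                               # Eq. (18): shrink the search radius
    if fit(P_new) < fit(P_s):                # Eq. (21): keep the better position
        return P_new, z, r
    return P_s, z, r
```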

3.4 Pseudo-Code and Flowchart of ISOA

In this section, the pseudocode of the ISOA is provided based on the improvement strategies of the previous three subsections, and the flowchart of the ISOA is shown in Fig. 4.

Fig. 4

The flowchart of the ISOA

Pseudocode of the ISOA
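Since the pseudocode appears only as a figure, the following compact Python sketch assembles the three strategies into one loop. It is a minimal reading of the ISOA under our assumptions (u = 1, scalar or per-dimension bounds, clipping to the search space), not a reference implementation.

```python
import numpy as np

def isoa(fit, dim, lb, ub, N=30, T_max=500, fc=2.0, seed=0):
    """Sketch of the ISOA main loop combining Sects. 3.1-3.3."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lb, ub, (N, dim))                  # random initial population
    best = min(P, key=fit).copy()                      # current best search agent
    z, r = 0.152, 0.01                                 # chaotic state (Sect. 3.3)
    for t in range(1, T_max + 1):
        A = fc - t * fc / T_max                        # Eq. (2)
        v = 2.0 / (1.0 + np.exp(2.0 * t / T_max))      # improved v, Eq. (12)
        w = 2.0 * (1.0 - np.cos(np.pi / 2.0 * (1.0 - t / T_max)))  # Eq. (13)
        for i in range(N):
            B = 2.0 * A**2 * rng.random()              # Eq. (4)
            D = np.abs(A * P[i] + B * (best - P[i]))   # Eqs. (1)-(5)
            theta = rng.uniform(0.0, 2.0 * np.pi)
            rad = np.exp(theta * v)                    # Eq. (9) with u = 1
            xyz = (rad * np.cos(theta)) * (rad * np.sin(theta)) * (rad * theta)
            P[i] = np.clip(D * xyz + w * best, lb, ub)               # Eq. (14)
            z = 4.0 * z * (1.0 - z)                                  # Eq. (20)
            cand = np.clip(P[i] + (z - 0.5) * r * (ub - lb), lb, ub) # Eq. (19)
            if fit(cand) < fit(P[i]):                  # Eq. (21): greedy acceptance
                P[i] = cand
            if fit(P[i]) < fit(best):
                best = P[i].copy()
        r *= 0.998                                     # Eq. (18)
    return best, fit(best)
```

For example, `isoa(lambda s: float(np.sum(s**2)), dim=30, lb=-100.0, ub=100.0)` minimizes the sphere function.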

4 Experimental Simulation and Result Analysis

Section 4 consists of five parts. Section 4.1 introduces the basic information of the test functions and the experimental environment. A comparison of the ISOA with the standard SOA and other standard optimization algorithms is given in Sect. 4.2. In Sect. 4.3, the ISOA is compared with other improved seagull optimization algorithms, including ISOA-1 (introducing strategy 1), ISOA-2 (introducing strategies 1 and 2), BSOA [23] and WSOA [24]. Section 4.4 compares the convergence curves of all the algorithms. In Sect. 4.5, a comparison of the MAE of the algorithms is presented to further verify the optimization ability and stability of the ISOA.

4.1 Experimental Environment and Benchmark Functions

Twelve benchmark functions are utilized in this paper. F1–F4 are single-peak test functions, mainly used to test the search ability and convergence speed of an algorithm. F5–F12 are multi-peak test functions, among which F9–F12 are fixed-dimensional. The multi-peak test functions have many local minima in the search space and are used to verify the ability of an algorithm to jump out of local minima. Table 1 lists the basic information about the test functions. The population size (N) for all algorithms is 30 and the maximum number of iterations (\({T}_{maxitera}\)) is 500. Each algorithm is run independently 30 times to obtain four indexes: the minimum (Best), the maximum (Worst), the average (Ave) and the standard deviation (Std). The experiments were performed in MATLAB R2020b on a computer with an Intel(R) Core(TM) i5-7200U CPU, 8 GB of RAM and a 64-bit Windows 10 operating system.

Table 1 Basic information about benchmark functions

4.2 Comparison with Standard Optimization Algorithms

We compare the ISOA with the ant lion optimization algorithm (ALO), butterfly optimization algorithm (BOA), grey wolf optimization algorithm (GWO), whale optimization algorithm (WOA) and standard SOA. The parameter settings are the same as in the corresponding references, listed in Table 2. Table 3 shows the comparison results on 12 test functions. The value in bold indicates the best result in a row.

Table 2 Main parameter settings of each algorithm
Table 3 Comparison between the ISOA and the standard optimization algorithms

According to Table 3, among all the algorithms only the ISOA converges to the theoretical optimum values in the F1–F4 tests; moreover, it has the smallest standard deviations. This demonstrates that the ISOA has strong global optimization capability and stability in solving single-peak functions. Among the multi-peak functions, the ISOA performs best in the F5, F7, F8, F10 and F12 tests in terms of the four indexes. Both the ISOA and the WOA converge to the theoretical optimum in the F6 test. The result of the ISOA is slightly inferior to those of the WOA and GWO in the F9 test. In the F11 test, although the ISOA has a slightly worse Std value than GWO, it performs better in terms of the other indexes, especially Ave. In general, the ISOA has better optimization ability and stronger stability than the other algorithms.

4.3 Comparison with Improved Seagull Optimization Algorithms

To further validate the effectiveness of the algorithm, the ISOA is compared experimentally with other improved seagull optimization algorithms, including ISOA-1, ISOA-2, BSOA and WSOA. The ISOA-1 is the algorithm improved only by the strategy proposed in Sect. 3.1. The ISOA-2 is the algorithm improved by the strategies in Sects. 3.1 and 3.2. BSOA and WSOA are recent related algorithms proposed in Refs. [23] and [24] respectively. Table 4 illustrates the test results of the algorithms. The value in bold indicates the best result in a row.

Table 4 Comparison between the ISOA and other improved SOAs

According to Table 4, the ISOA performs better than the SOA, ISOA-1 and ISOA-2 on all the functions except F8, on which it ranks second, while the SOA performs worst among the four algorithms. This implies that each of the strategies proposed in this paper is effective in improving the optimization performance of the SOA. In particular, the ISOA-1 reaches the theoretical optimum in the F6 test; the ISOA-2 converges to the optimal values in the F1–F4 and F6 tests; and the ISOA reaches the theoretical optimum values in the F1–F6, F10 and F12 tests.

The ISOA performs better than the BSOA and WSOA in the F1–F4 and F7–F12 tests, and the three algorithms perform nearly the same in the F6 test. In the F5 test, the ISOA converges to the theoretical optimal solution in terms of the Best index, while ranking second in Ave and Std. On the whole, the ISOA is strongly competitive in search ability and stability.

4.4 Comparison of Convergence Curves

The convergence curves of all algorithms in the F1–F12 tests are shown in Fig. 5. As Fig. 5 shows, the ISOA has the fastest convergence rate and the highest convergence accuracy in the F1–F4 tests; in particular, it converges to the optimum within about 380 iterations in the F1 test. In the F5–F12 tests, the ISOA still converges markedly faster than the comparison algorithms except on function F8, where it performs slightly worse in the early part of the run but better in the later part. In the F10–F12 tests especially, the ISOA converges to the optimal value within fewer than 200 iterations. It can also be seen in Fig. 5 that comparison algorithms such as the BSOA and WSOA easily fall into local optima, and their capability to jump out of them is worse than the ISOA's. All of the above shows that the ISOA has a faster convergence speed and stronger global exploration and local exploitation abilities.

Fig. 5

Convergence curves of all algorithms in the F1–F12 tests

4.5 Sorting by Mean Absolute Error (MAE)

Now the ISOA is compared, in terms of MAE, with the published algorithms from Sects. 4.2 and 4.3. The MAE is an indicator of the gap between the actual optimum value obtained by an algorithm and the theoretical optimum value, expressed as follows.

$${\text{MAE}}=\frac{\sum_{{\text{i}}=1}^{{\text{N}}}|{{\text{F}}}_{{\text{i}},{\text{best}}}-{\overline{{\text{F}}} }_{{\text{i}}}|}{{\text{N}}}$$
(22)

where \({\text{N}}\) is the number of test functions selected; \({{\text{F}}}_{{\text{i}},{\text{best}}}\) is the average of the results acquired by running an algorithm 30 times on the i-th test function; and \({\overline{{\text{F}}} }_{{\text{i}}}\) is the theoretical optimal value of the i-th test function. Table 5 shows the sorted MAE values. The ISOA has the smallest MAE value, i.e., its optimization results are the closest to the theoretical optimal values. This further indicates that the ISOA has better convergence accuracy.
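For completeness, Eq. (22) amounts to the one-liner below, where `avg_results` holds the Ave values over 30 runs and `optima` the theoretical optimum of each test function (both names are ours):

```python
def mae(avg_results, optima):
    """Mean absolute error over the N test functions, Eq. (22)."""
    return float(np.mean(np.abs(np.asarray(avg_results) - np.asarray(optima))))
```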

Table 5 MAE value ranking

5 Engineering Applications

This section verifies the advantages of the ISOA in optimizing four engineering design problems of different complexities. Two types of algorithms are selected for the comparison tests: standard algorithms and improved algorithms. All algorithms are run independently 30 times, with the population size and the number of iterations set to 30 and 1000 respectively, and all the comparisons are based on the best results.

5.1 Three-bar Truss Design Problem

The three-bar truss design problem [34] is a typical problem in engineering applications, whose optimization objective is to minimize the volume of the truss structure under certain constraints. Figure 6 shows the model structure diagram of the three-bar truss. The model uses two variables, \({A}_{1}\) and \({A}_{2}\), for the cross-sectional areas of the bars: the two side bars have cross-sectional area \({A}_{1}\), and the middle bar has cross-sectional area \({A}_{2}\). The objective function and constraint conditions are as follows:

Fig. 6

Model structure diagram of the three-bar truss

Consider \(x=[{x}_{1} {x}_{2}]=[{A}_{1} {A}_{2}]\),

$$\mathrm{Minimize\;f}\left(x\right)=l\left(2\sqrt{2}{x}_{1}{+x}_{2}\right),$$
$${\text{s}}.{\text{t}}.\left\{\begin{array}{c}{g}_{1}(x)=\frac{{\sqrt{2}x}_{1}+{x}_{2}}{\sqrt{2}{x}_{1}^{2}+2{x}_{1}{x}_{2}}p-\delta \le 0\\ {g}_{2}(x)=\frac{1}{{x}_{1}+\sqrt{2}{x}_{2}}p-\delta \le 0 \\ {g}_{3}(x)=\frac{{x}_{2}}{\sqrt{2}{x}_{1}^{2}+2{x}_{1}{x}_{2}}p-\delta \le 0\end{array}\right.$$
(23)

where \(0<{x}_{1},{x}_{2}<1\); the other parameters are \(l=100\) cm, \(p=2\,\mathrm{kN/cm^{2}}\), \(\delta =2\,\mathrm{kN/cm^{2}}\).
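To illustrate how such a constrained problem can be fed to the ISOA, the sketch below folds the constraints of Eq. (23) into a static penalty function. The penalty formulation and its weight are our own assumptions; the paper does not state which constraint-handling technique was used.

```python
def truss_volume(x, l=100.0, p=2.0, delta=2.0, penalty=1e6):
    """Three-bar truss design, Eq. (23), with a static penalty for violations."""
    x1, x2 = x
    f = l * (2.0 * np.sqrt(2.0) * x1 + x2)   # volume to be minimized
    g1 = (np.sqrt(2.0) * x1 + x2) / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) * p - delta
    g2 = 1.0 / (x1 + np.sqrt(2.0) * x2) * p - delta
    g3 = x2 / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) * p - delta
    return f + penalty * sum(max(0.0, g) ** 2 for g in (g1, g2, g3))
```

With the loop sketched in Sect. 3.4, this can be run as, e.g., `isoa(truss_volume, dim=2, lb=1e-6, ub=1.0)`; the small positive lower bound avoids division by zero at the boundary.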

The ISOA is compared with other algorithms proposed in recent years, including ALO [10], WOA [11], AOA [35], SCA [18], HHO [36], MFO [37], m-SCA [18], MMPA [38], GSA-GA [39], AGWO [40], BWOA [41], NCCO [42], PSO-DE [43], DEDS [44] and ESOA [20]. As shown in Table 6, the convergence result of the ISOA is the best, indicating higher optimization accuracy. The optimal solution is 263.8956, with the corresponding optimum variables \(x=[0.78812, 0.4098]\). Figure 7 gives the convergence curve of the ISOA, which reaches the optimum value in only about 400 iterations, confirming its fast convergence speed.

Table 6 Experimental results of three-bar truss design problem
Fig. 7

Convergence curve of the ISOA in solving three-bar truss design problem

5.2 Pressure Vessel Design Problem

The optimization objective of the pressure vessel design problem [45] is to minimize the total manufacturing cost of a cylindrical vessel, consisting of the material, forming and welding costs, under four constraints. Figure 8 shows the structure diagram of the pressure vessel model. This design problem involves four optimization variables: the thickness of the shell (\({T}_{s}\)), the thickness of the head (\({T}_{h}\)), the cylinder radius (\({\text{R}}\)) and the length of the cylindrical shell (\({\text{L}}\)), where R and L are continuous variables. The specific mathematical expressions are as follows:

Fig. 8

Structure diagram of the pressure vessel model

Consider \(x=[{x}_{1} {x}_{2} {x}_{3} {x}_{4}]=[{T}_{s} {T}_{h} R L]\),

$$\mathrm{Minimize\;f}\left(x\right)=0.6224{x}_{1}{x}_{3}{x}_{4}+1.7781{x}_{2}{x}_{3}^{2}+3.1661{x}_{4}{x}_{1}^{2}+19.84{x}_{3}{x}_{1}^{2},$$
$${\text{s}}.{\text{t}}.\left\{\begin{array}{l}{g}_{1}(x)=-{x}_{1}+0.0193{x}_{3}\le 0 \\ {g}_{2}(x)=-{x}_{2}+0.00954{x}_{3}\le 0 \\ {g}_{3}(x)=-\pi {x}_{3}^{2}{x}_{4}-\frac{4}{3}\pi {x}_{3}^{3}+1296000\le 0 \\ {g}_{4}(x)={x}_{4}-240\le 0 \\ 0.0625\le {x}_{1},{x}_{2}\le 99\times 0.0625 \\ 10\le {x}_{3},{x}_{4}\le 200 \end{array}\right.$$
(24)
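The same penalty treatment as in Sect. 5.1 applies here. Below is a sketch of Eq. (24) under the same assumptions, with the per-dimension bounds passed as arrays; the coefficient 3.1661 follows the standard formulation of this benchmark.

```python
def vessel_cost(x, penalty=1e6):
    """Pressure vessel design, Eq. (24), with a static penalty for violations."""
    ts, th, R, L = x
    f = (0.6224 * ts * R * L + 1.7781 * th * R**2
         + 3.1661 * L * ts**2 + 19.84 * R * ts**2)
    g1 = -ts + 0.0193 * R
    g2 = -th + 0.00954 * R
    g3 = -np.pi * R**2 * L - 4.0 / 3.0 * np.pi * R**3 + 1296000.0
    g4 = L - 240.0
    return f + penalty * sum(max(0.0, g) ** 2 for g in (g1, g2, g3, g4))
```

A possible call is `isoa(vessel_cost, dim=4, lb=np.array([0.0625, 0.0625, 10.0, 10.0]), ub=np.array([6.1875, 6.1875, 200.0, 200.0]))`, where 6.1875 = 99 × 0.0625.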

In tackling this problem, the ISOA is compared with standard optimization algorithms, including PSO [6], SOA [18], GWO [9], WOA [11], AOA [35] and SOS [46], and with other recent improved optimization algorithms, including ESOA [20], WSOA [24], RCSA [47], IDARSOA [48], TLMPA [49], EEGWO [50], hHHO-SCA [51], MMPA [38] and ASOINU [52]. According to Table 7, the lowest manufacturing cost found by the ISOA is 5805.7158, with the corresponding optimum variables \(x=[0.7735698, 0.3679545, 41.59672, 182.9594]\). The ISOA achieves the best result among all the algorithms, showing strong competitiveness in searching for the optimal solution. As shown by the convergence curve in Fig. 9, the ISOA converges rapidly in the initial stage, jumps out of the local optimum within about 400 iterations, and reaches the optimal solution within about 480 iterations.

Table 7 Experimental results of pressure vessel design problem
Fig. 9

Convergence curve of the ISOA in solving pressure vessel design problem

5.3 Welded Beam Design Problem

The optimization objective of the welded beam design problem [53] is to minimize the manufacturing cost subject to seven constraint conditions. This design problem involves four variables: the weld thickness (h), the weld length (l), the beam height (t) and the beam thickness (b), as illustrated in Fig. 10. It also involves four functions \(\uptau \), \(\upsigma \), \(\updelta \) and \({P}_{c}\), which denote the shear stress in the weld, the bending stress in the beam, the deflection at the end of the beam, and the buckling load of the bar, respectively. The following is the mathematical model of the problem:

Fig. 10

Structure diagram of the welded beam model

Consider \(x=[{x}_{1} {x}_{2} {x}_{3} {x}_{4}]=[h l t b]\),

$$\mathrm{Minimize\;f}\left(x\right)=1.10471{x}_{1}^{2}{x}_{2}+0.04811{x}_{3}{x}_{4}\left(14+{x}_{2}\right),$$
$${\text{s}}.{\text{t}}.\left\{\begin{array}{l}{g}_{1}(x)=\tau (x)-13600\le 0 \\ {g}_{2}(x)=\sigma (x)-30000\le 0 \\ {g}_{3}(x)=\delta (x)-0.25\le 0 \\ {g}_{4}(x)={x}_{1}-{x}_{4}\le 0 \\ {g}_{5}(x)=6000-{P}_{c}(x)\le 0 \\ {g}_{6}(x)=0.125-{x}_{1}\le 0 \\ {g}_{7}(x)=1.10471{x}_{1}^{2}{x}_{2}+0.04811{x}_{3}{x}_{4}(14+{x}_{2})-5\le 0 \end{array}\right.$$
(25)

where, \(0.1\le {x}_{1}\le 2, 0.1\le {x}_{2}\le 10, 0.1\le {x}_{3}\le 10, 0.1\le {x}_{4}\le 2\),

$$ \tau (x) = \sqrt {(\tau^{\prime})^2 + (\tau^{\prime \prime})^2 + \frac{l\,\tau^{\prime}\tau^{\prime \prime}}{\sqrt {0.25\,(l^2 + (h + t)^2 )} } } , $$
$${\tau }^{\prime}=\frac{6000}{\sqrt{2}hl},\sigma \left(x\right)=\frac{\mathrm{504,000}}{{t}^{2}b},\updelta \left({\text{x}}\right)=\frac{\mathrm{65,856,000}}{\left(30\times {10}^{6}\right){{\text{bt}}}^{3}},$$
$${\tau }^{{\prime}{\prime}}=\frac{6000(14+0.5l)\sqrt{0.25({l}^{2}+{(h+t)}^{2})}}{2[0.707hl({l}^{2}/12+0.25{(h+t)}^{2})]},$$
$${P}_{c}\left(x\right)=\mathrm{64,746.022}\left(1-0.0282346t\right)t{b}^{3}.$$

The ISOA is compared with recently proposed algorithms, including HWOANM [14], WSOA [24], IDARSOA [48], BFOA [54], hHHO-SCA [51], GSA [7], SCA [18], SBO [55], HHO [36], T-cell [56], HEAA [57], Random [58] and Coello [59]; the experimental results are shown in Table 8. The optimization result of the ISOA is superior to all the others except HWOANM, with an average improvement of 14.67%. Although the ISOA ranks second in the optimization result, it converges to the optimal solution within only about 500 iterations, while the HWOANM takes 2300 iterations [14], as shown by the convergence curve in Fig. 11. The ISOA thus also has an advantage in addressing this complex engineering problem.

Table 8 Experimental results of welding beam design problem
Fig. 11

Convergence curve of the ISOA in solving welded beam design problem

5.4 Speed Reducer Design Problem

The speed reducer design problem [60] is a classical engineering optimization problem whose objective is to minimize the weight of a reducer under inequality constraints. The constraints concern the bending stress of the gear teeth, the surface stress, the transverse deflections of the shafts and the stresses in the shafts. The problem involves seven optimization variables: the face width (\({\text{b}}\)), the tooth module (\({\text{m}}\)), the number of teeth in the pinion (\({\text{p}}\)), the length of the first shaft between the bearings (\({l}_{1}\)), the length of the second shaft between the bearings (\({l}_{2}\)), and the diameters of the first (\({d}_{1}\)) and second (\({d}_{2}\)) shafts, where the number of teeth (\({\text{p}}\)) is an integer. The model structure diagram is in Fig. 12. The mathematical description of the problem is given below:

Fig. 12

Structure diagram of the speed reducer model

Consider \(x=[{x}_{1} {x}_{2} {x}_{3} {x}_{4} {x}_{5} {x}_{6} {x}_{7}]=[{b m p {l}_{1} {l}_{2} d}_{1} {d}_{2}]\),

$$\mathrm{Minimize\;f}\left(x\right)=0.7854{x}_{1}{x}_{2}^{2}\left(3.3333{x}_{3}^{2}+14.9334{x}_{3}-43.0934\right)-1.508{x}_{1}\left({x}_{6}^{2}+{x}_{7}^{2}\right)+7.4777\left({x}_{6}^{3}+{x}_{7}^{3}\right)+0.7854\left({x}_{4}{x}_{6}^{2}+{x}_{5}{x}_{7}^{2}\right),$$
$${\text{s}}.{\text{t}}.\left\{\begin{array}{l}{g}_{1}(x)=\frac{27}{{x}_{1}{x}_{2}^{2}{x}_{3}}-1\le 0 \\ {g}_{2}(x)=\frac{397.5}{{x}_{1}{x}_{2}^{2}{x}_{3}^{2}}-1\le 0 \\ {g}_{3}(x)=\frac{1.93{x}_{4}^{3}}{{x}_{2}{x}_{6}^{4}{x}_{3}^{ }}-1\le 0 \\ {g}_{4}(x)=\frac{1.93{x}_{4}^{3}}{{x}_{2}{x}_{7}^{4}{x}_{3}^{ }}-1\le 0 \\ {g}_{5}(x)=\frac{{[{(745({x}_{4}/{x}_{2}{x}_{3}))}^{2}+16.9\times {10}^{6}]}^{1/2}}{110{x}_{6}^{3}}-1\le 0 \\ {g}_{6}(x)=\frac{{[{(745({x}_{5}/{x}_{2}{x}_{3}))}^{2}+157.5\times {10}^{6}]}^{1/2}}{85{x}_{7}^{3}}-1\le 0\\ {g}_{7}(x)=\frac{{x}_{2}{x}_{3}}{40}-1\le 0 \\ {g}_{8}(x)=\frac{{5x}_{2}}{{x}_{1}}-1\le 0 \\ {g}_{9}(x)=\frac{{x}_{1}}{12{x}_{2}}-1\le 0 \\ {g}_{10}(x)=\frac{1.5{x}_{6}+1.9}{{x}_{4}}-1\le 0 \\ {g}_{11}(x)=\frac{1.1{x}_{7}+1.9}{{x}_{5}}-1\le 0 \end{array}\right.$$
(26)

where,\(2.6\le {x}_{1}\le 3.6,\) \(0.7\le {x}_{2}\le 0.8,\) \(17\le {x}_{3}\le 28,\) \(7.3\le {x}_{4}\le 8.3,\) \(7.3\le {x}_{5}\le 8.3,\) \(2.9\le {x}_{6}\le 3.9,\) \(5.0\le {x}_{7}\le 5.5\).

The ISOA is compared with both standard algorithms and recent algorithms, including PSO [6], SOA [18], GWO [9], AOA [35], SHO [61], CA [62], ESOA [20], hHHO-SCA [51], IAFOA [63], IPSO [64], IDARSOA [48], QOCSOS [52], ASOINU [52] and ISCA [65]. The solution obtained by the ISOA for this design problem is 2973.91750, with the corresponding optimum variables \(x=[3.40385, 0.7, 17, 7.74585, 7.76495, 3.32186, 5.25780]\). From Table 9, the ISOA provides a lighter speed reducer than the other algorithms. Moreover, the ISOA has a fast convergence speed, converging to the optimal solution within only 170 iterations, as shown by the convergence curve in Fig. 13. The ISOA is thus strongly competitive in solving complex engineering design problems.

Table 9 Experimental comparison results of reducer design problem
Fig. 13

Convergence curve of the ISOA in solving reducer design problem

6 Conclusions and Future Works

This paper presented an improved seagull optimization algorithm, named the ISOA, which combines several improvement strategies to overcome the drawbacks of slow convergence, poor ability to balance global and local search, and a single search mode. Firstly, improving the spiral coefficient in the attack stage enables search agents to adjust the search radius as the iteration number increases, so that the ISOA not only converges quickly in the early stage but also avoids missing the optimal solution in the late stage because of an excessively large radius. Secondly, the dynamic adaptive weight factor adjusts the weight of the best individual to place balanced emphasis on global exploration and local exploitation. Finally, the chaotic local search strategy gives the algorithm a secondary update, expanding the search scope and improving the capability to jump out of local optima.

Comprehensive comparative experiments on 12 benchmark test functions, including single-peak and multi-peak functions, show that the ISOA greatly improves the optimization accuracy and convergence rate of the original SOA and is strongly competitive among recent related optimization algorithms. Moreover, the ISOA was applied to four engineering optimization problems: the three-bar truss design, pressure vessel design, welded beam design and speed reducer design. The experimental results show that the ISOA generally offers better solutions with a fast convergence speed.

In future work, the proposed ISOA and its strategies will be applied to optimization problems in practical fields such as UAV path planning, inventory control and resource allocation. For instance, one of our ongoing works applies the ISOA to multi-scenario, multi-obstacle UAV path planning, where we have made some progress. Another research direction worth exploring further is the application of the ISOA to data science topics such as differential privacy.