1 Introduction

In recent years, with the rapid development of science and technology and the advent of the big data era, numerous problems in fields such as financial economics [1], parameter estimation [2], artificial intelligence [3] and image processing [4] can be formulated as nonlinear optimization problems characterized by large scale, many constraints, complicated structure and massive computation. For such problems, traditional deterministic optimization methods [5,6,7,8,9] based on gradient information struggle to obtain satisfactory results, whereas meta-heuristic algorithms based on stochastic techniques can discover higher-quality solutions. A meta-heuristic algorithm utilizes random operators to seek the global optimal solution in the search space and features flexibility, simplicity and low computational overhead; it has therefore attracted extensive attention from engineers and practitioners [10]. To date, numerous novel meta-heuristic algorithms have been presented, such as particle swarm optimization (PSO) [11], crisscross optimization algorithm (CSO) [12], honey badger algorithm (HBA) [13], ant colony optimization (ACO) [14], artificial bee colony (ABC) [15], fish swarm optimization (FSO) [16], grey wolf optimizer (GWO) [17], tunicate swarm algorithm (TSA) [18] and so on. Although these meta-heuristic algorithms outperform traditional optimization methods, e.g., the steepest descent method and the least squares method, on complicated optimization problems, they are still limited by low convergence rates and limited solution precision. Meanwhile, as optimization problems grow more complicated, the study of meta-heuristic algorithms has become a significant research topic.

The whale optimization algorithm (WOA) is a population-based optimization algorithm presented by Mirjalili [19], which solves optimization problems by transforming the predatory behavior of humpback whales into mathematical models. Its optimization process includes exploration and exploitation phases, which are adjusted by a linearly decreasing parameter. Existing studies have shown that WOA can effectively handle optimization problems in engineering and industry [20], but it is still plagued by the imbalance between exploration and exploitation, premature convergence and entrapment in local optima. In view of this, since the emergence of WOA, many scholars have incorporated a variety of strategies into the standard WOA to improve its optimization ability. Kaur and Arora [21] constructed a chaotic whale optimization algorithm, which uses multiple chaotic functions to replace the stochastic parameters of WOA and tune the local and global search processes. Sayed et al. [22] adopted a similar idea in the field of feature selection. El-Aziz and Mirjalili [23] presented a hyper-heuristic DEWOA to improve the exploration and exploitation capabilities of WOA, in which an adaptively selected chaotic strategy and opposition-based learning are adopted to obtain an initial population with better diversity, and a part of the population is then chosen to execute the DE algorithm. Ding et al. [24] introduced a chaotic inertia weight and a nonlinear convergence factor to balance exploration and exploitation. Sun et al. [25] designed a cosine function as the convergence factor to balance the exploration and exploitation abilities and additionally employed a Lévy flight method to help the current optimal solution escape local minima. Abdel-Basset et al. [26] presented a variant of the whale optimization algorithm with Lévy flight and logistic chaos mapping. Jin et al. [27] constructed an improved whale optimization algorithm based on two modifications: first, combining different evaluation and random operators to enhance exploration; second, designing a step factor based on the Lévy flight strategy to strengthen local-optimum avoidance, together with an adaptive weight parameter to tune the progress of exploration and exploitation. Saafan et al. [28] proposed a hybrid improved whale optimization salp swarm algorithm, which first substitutes an exponential function for the linear function to construct an improved WOA and then executes either the improved WOA or the salp swarm algorithm based on a given condition.

On the whole, in recent years various improvement strategies have been introduced into the basic WOA to obtain a higher-quality initial population, revise the transition from exploration to exploitation, and enhance the global and local search abilities, so as to improve its performance on optimization problems. Although these improvements of WOA have achieved good results, shortcomings such as easily falling into local optima and premature convergence still remain.

In this paper, a new modified WOA based on multiple strategies, called MSWOA, is presented to improve the optimization ability of the basic WOA. The following strategies play different roles in the overall optimization process and constitute the main contributions of this paper: (1) construct a chaotic initial population by utilizing the tent map function to discover high-quality solutions, thereby enhancing the search capability; (2) design a nonlinear decreasing strategy to replace the linear convergence factor, which is more in line with the practical optimization process and can effectively avoid local optimal solutions and speed up convergence; (3) propose an inertia weight strategy that varies periodically with the number of iterations to adjust the effect of the current best solution on the evolving population and strengthen the global and local search abilities; (4) introduce an optimal feedback strategy into the search for prey phase, which utilizes the information of the current optimal solution to guide the exploration phase and enhance the optimization ability; (5) analyze the effect of each strategy on the optimization ability of the basic WOA.

The rest of this paper is organized as follows: Sect. 2 describes the standard whale optimization algorithm, and Sect. 3 details the improvements and contributions of the presented MSWOA. Section 4 discusses the impact of the different improvement strategies on the performance of the standard WOA. In Sect. 5, comparison experiments are conducted to further verify the optimization ability of the presented MSWOA against five other meta-heuristic algorithms. Finally, Sect. 6 summarizes the conclusions of this paper.

2 Standard Whale Optimization Algorithm

The whale optimization algorithm is a mathematical model for solving optimization problems by simulating the cooperative predation strategies of humpback whales. The updating mechanism of the algorithm consists of the following three strategies: encircling prey, bubble-net attacking and search for prey, which are jointly selected by the random probability factor \(p\) and the coefficient vector \(\vec{A}\), as shown in Fig. 1.

Fig. 1 Update mechanism framework of WOA

2.1 Encircling Prey

Let the size of the whale population be \(N\). \(\vec{X}_{i} (i \in \{ 1,2, \ldots ,N\} )\) represents the position vector of the \(i\)th whale and corresponds to a feasible solution of the given optimization problem. Since the optimum is unknown beforehand, the WOA algorithm treats the current best candidate solution as the prey, i.e., an approximation of the optimum. Once the prey (the best whale) is identified, the other whales in the population update their positions to approach it over the course of iterations. The corresponding position update formulas are defined as follows:

$$\vec{D} = \, \left| {\vec{C} \cdot \vec{X}^{*} (t) - \vec{X}(t)} \right|$$
(1)
$$\vec{X}(t + 1) = \vec{X}^{*} (t) - \vec{A} \cdot \vec{D},$$
(2)

where \(t\) is the current iteration, \(\vec{X}(t)\) represents the current whale position, \(\vec{X}^{*} (t)\) is the best position (the prey) obtained so far, which is updated whenever a better position is discovered, \(\cdot\) stands for element-wise multiplication, and \(\vec{A} \cdot \vec{D}\) is the update step length. The coefficient vectors \(\vec{A}\) and \(\vec{C}\) are calculated by the following formulas:

$$\vec{A} = 2a\vec{r}_{1} - a$$
(3)
$$\vec{C} = 2\vec{r}_{2} ,$$
(4)

where \(\vec{r}_{1}\) and \(\vec{r}_{2}\) are uniform random vectors in the interval [0,1], and \(a = 2 - 2t/t_{\rm max}\) is the convergence factor, where \(t_{\rm max}\) denotes the maximum number of iterations.

2.2 Bubble-Net Attacking Method

At the bubble-net attacking stage, WOA approaches the optimal solution by mimicking the bubble-net feeding behavior of humpback whales, which is essentially a spiral search process in the feasible region. The mathematical model is as follows:

$$\vec{D^{\prime}} = \, \left| {\vec{X}^{*} (t) - \vec{X}(t)} \right|$$
(5)
$$\vec{X}(t + 1) = \vec{D^{\prime}} \cdot {\rm e}^{bl} \cdot \cos(2\pi l) + \vec{X}^{*} (t),$$
(6)

where \(\vec{D^{\prime}}\) represents the distance between the current whale and the prey, \(l\) is a random number in \([ - 1,1]\), and \(b\) is a constant that defines the shape of the spiral, normally set to 1.

During optimization, when \(\left| A \right| < 1\), a whale selects either the bubble-net attacking method or encircling prey, each with probability 50%, to approach the prey (the best solution); this constitutes the exploitation phase of the WOA algorithm. Accordingly, the position update mechanism can be described as follows:

$$\vec{X}(t + 1) = \left\{ {\begin{array}{*{20}l} {\vec{X}^{*} (t) - \vec{A} \cdot \vec{D}} & , & {p < 0.5} \\ {\vec{X}^{*} (t) + \vec{D^{\prime}} \cdot {\rm e}^{bl} \cdot \cos(2\pi l)} & , & {p \ge 0.5} \\ \end{array} } \right..$$
(7)

2.3 Search for Prey

Search for prey is an important position update mechanism by which the WOA algorithm achieves its exploration (global search) ability. When \(\left| A \right| \ge 1\), the current whale chooses a random individual as the prey and updates its position relative to it, so as to enhance the global search ability of the algorithm. The mathematical model is identical to Eqs. (1) and (2), except that the optimal individual is replaced by a randomly chosen one:

$$\vec{D} = \, \left| {\vec{C} \cdot \vec{X}_{\rm rand} (t) - \vec{X}(t)} \right|$$
(8)
$$\vec{X}(t + 1) = \vec{X}_{\rm rand} (t) - \vec{A} \cdot \vec{D},$$
(9)

where \(\vec{X}_{\rm rand}\) denotes a random whale selected from the current population, and the other symbols have the same meaning as in Sect. 2.1. It is worth noting that during the optimization process, the scheduling between exploitation (local search) and exploration (global search) is determined by the value of \(\left| A \right|\): if \(\left| A \right| \ge 1\), the exploration phase is performed; otherwise the exploitation phase is executed.
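The three update rules above can be summarized in a short sketch. The following is a minimal NumPy implementation of one iteration of the standard WOA, assuming a population matrix `X` of shape `(N, D)` and treating the coefficients `A` and `C` as per-whale scalars (a common simplification; the formulation above uses vectors):

```python
import numpy as np

def woa_step(X, X_best, t, t_max, b=1.0):
    """One iteration of the standard WOA update (Eqs. (1)-(9))."""
    N = X.shape[0]
    a = 2 - 2 * t / t_max                       # linearly decreasing convergence factor
    X_new = np.empty_like(X)
    for i in range(N):
        A = 2 * a * np.random.rand() - a        # Eq. (3), scalar form
        C = 2 * np.random.rand()                # Eq. (4), scalar form
        if np.random.rand() < 0.5:
            if abs(A) < 1:                      # exploitation: encircling prey, Eqs. (1)-(2)
                D_enc = np.abs(C * X_best - X[i])
                X_new[i] = X_best - A * D_enc
            else:                               # exploration: search for prey, Eqs. (8)-(9)
                X_rand = X[np.random.randint(N)]
                D_exp = np.abs(C * X_rand - X[i])
                X_new[i] = X_rand - A * D_exp
        else:                                   # bubble-net spiral attack, Eqs. (5)-(6)
            l = np.random.uniform(-1, 1)
            D_spi = np.abs(X_best - X[i])
            X_new[i] = D_spi * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best
    return X_new
```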

3 Improvements in Whale Optimization Algorithm

Since the WOA algorithm balances exploration and exploitation through the convergence factor \(a\), it can carry out the optimization process reasonably well. Nevertheless, like other meta-heuristic optimization methods, it is known to suffer from defects such as slow convergence, entrapment in local solutions and low precision. For this reason, several improvements of WOA are presented in this paper to enhance its optimization ability.

3.1 Tent Mapping-Based Population Initialization

The performance of meta-heuristic algorithms is significantly influenced by the quality of the initial solutions; therefore, the initial population should be well distributed over the entire search space. The original WOA algorithm starts with a set of random solutions within the feasible domain of the optimization problem. However, this randomness does not guarantee the diversity of the initial solutions, which can degrade the convergence rate and the quality of the final solutions. A prevalent remedy is to adopt chaotic maps instead of random selection: owing to its stochastic property and ergodicity, a chaotic strategy can yield an initial population with better diversity in the search space [29]. Many population-based algorithms [21, 24, 30] have been enhanced by employing diverse chaotic maps.

In this paper, the tent map is selected to produce a chaotic initial population for improving the performance of the standard WOA; it can be expressed as follows:

$$s_{j + 1} = \left\{ {\begin{array}{*{20}c} {\frac{{s_{j} }}{0.7}} & , & {s_{j} < 0.7} \\ {\frac{{1 - s_{j} }}{0.3}} & , & {s_{j} \ge 0.7} \\ \end{array} } \right..$$
(10)

Given an initial value, the chaotic vector \(\vec{S} = \{ s_{1} ,s_{2} , \ldots ,s_{D} \}\) can be generated by iterating Eq. (10) \(D - 1\) times, where \(D\) represents the dimension of the search space. Using this chaotic sequence, the \(i\)th initial individual \(\vec{X}_{i}\) is produced by the following formula:

$$\vec{X}_{i} = \vec{X}_{lb} + (\vec{X}_{ub} - \vec{X}_{lb} ) \cdot \vec{S},$$
(11)

where \(\vec{X}_{lb} = (x_{1}^{lb} ,x_{2}^{lb} , \ldots ,x_{D}^{lb} )\) and \(\vec{X}_{ub} = (x_{1}^{ub} ,x_{2}^{ub} , \ldots ,x_{D}^{ub} )\) are the lower and upper boundaries of the given optimization problem, respectively. During population initialization, each whale is generated by Eqs. (10) and (11) to yield an initial population with better diversity, as shown in Algorithm 1.

Algorithm 1 Tent map-based population initialization
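Since Algorithm 1 is rendered as an image in the published version, a minimal Python sketch of the procedure is given below. The text does not specify how the initial chaotic value is seeded, so drawing it uniformly at random for each individual is an assumption here:

```python
import numpy as np

def tent_map_init(N, D, lb, ub):
    """Chaotic population initialization via the tent map, Eqs. (10)-(11).

    lb, ub : (D,) NumPy arrays with the lower/upper bounds of the problem.
    """
    X = np.empty((N, D))
    for i in range(N):
        S = np.empty(D)
        S[0] = np.random.rand()                # seed of the chaotic sequence (assumed)
        for j in range(D - 1):                 # D - 1 iterations of Eq. (10)
            S[j + 1] = S[j] / 0.7 if S[j] < 0.7 else (1 - S[j]) / 0.3
        X[i] = lb + (ub - lb) * S              # map the chaotic vector into the feasible region, Eq. (11)
    return X
```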

3.2 Nonlinear Convergence Factor

In the iterative optimization process of the standard WOA, an imbalance between exploration and exploitation arises from the linear decrease of the convergence factor \(a\) from 2 to 0. As the parameter \(a\) decreases, the global search capability of WOA gradually declines while the local search capability gradually increases, as reflected in formula (3). However, the linear decrement of \(a\) cannot accurately reflect the complicated and unpredictable optimization process. Consequently, a nonlinear iterative control strategy is introduced in this paper to update the parameter \(a\) and improve the optimization process of WOA. The specific expression of \(a\) is designed as follows:

$$a = 2 - 2\frac{{{\rm e}^{{(\frac{t}{{t_{\rm max} }})^{k} }} - 1}}{{{\rm e} - 1}},$$
(12)

where \(k\) controls the curve smoothness of the parameter \(a\). Figure 2 illustrates the curves of \(a\) for different values of \(k\). Extensive numerical results on test functions show that the nonlinear strategy is more beneficial to the optimization ability of the algorithm than the linear decreasing strategy. Furthermore, for different optimization problems there always exists an appropriate value range of \(k\).
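As a quick check of Eq. (12), the following sketch evaluates the factor at the endpoints, where it reproduces the extreme values \(a(0) = 2\) and \(a(t_{\rm max}) = 0\) of the linear schedule:

```python
import numpy as np

def nonlinear_a(t, t_max, k):
    """Nonlinear convergence factor of Eq. (12)."""
    return 2 - 2 * (np.exp((t / t_max) ** k) - 1) / (np.e - 1)

print(nonlinear_a(0, 500, 0.4))     # 2.0 at the first iteration
print(nonlinear_a(500, 500, 0.4))   # 0.0 at the last iteration
```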

Fig. 2 Nonlinear convergence parameter with different \(k\)

3.3 Iteration-Based Adaptive Inertia Weight Strategy

A dynamic inertia weight coefficient can be utilized to further tune and balance the exploration and exploitation abilities of a meta-heuristic algorithm. A larger inertia weight corresponds to a larger search step and a stronger ability to avoid local optima, which is conducive to global exploration, whereas a smaller inertia weight enables a more fine-grained local search that improves the convergence precision and speed. Nevertheless, in the standard WOA the inertia weight is fixed at 1 throughout the evolutionary iterations. Although this ensures the exploratory capability, it hinders a fine search near the global optimum. For this reason, a dynamic inertia weight is introduced in this paper to modify the influence of the best position according to the iteration number; its mathematical formula is as follows:

$$w = \left| {\cos \left( {\frac{n\pi t}{{t_{\rm max} }}} \right)} \right|.$$
(13)

Evidently, the weight function is a periodic function, and its cycle size is determined by the value of \(n\). Based on the numerical experiments, a suitable value of \(n\) can be discovered to significantly improve the performance of the basic algorithm.

The new update mechanism of the modified WOA algorithm can be written as follows:

$$\vec{X}(t + 1) = \left\{ {\begin{array}{*{20}l} {w\vec{X}^{*} (t) - \vec{A} \cdot \vec{D}} & , & {p < 0.5} \\ {w\vec{X}^{*} (t) + \vec{D^{\prime}} \cdot {\rm e}^{bl} \cdot \cos(2\pi l)} & , & {p \ge 0.5} \\ \end{array} } \right.$$
(14)
$$\vec{X}(t + 1) = w\vec{X}_{\rm rand} (t) - \vec{A} \cdot \vec{D}.$$
(15)

The adaptive inertia weight strategy enables the position update mechanism to dynamically adjust the weight coefficient according to the number of iterations (as shown in Fig. 3). This allows the optimal whale position to provide different guidance to the other whales at different stages of the search, further tuning the exploration and exploitation capabilities of the standard WOA algorithm.
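A minimal sketch of Eqs. (13)-(15): the weight function below scales the guiding position in each update branch, and the commented lines show how the encircling-prey branch of Eq. (14) changes relative to Eq. (2). Variable names follow the WOA sketch in Sect. 2 and are illustrative:

```python
import numpy as np

def inertia_weight(t, t_max, n):
    """Periodic inertia weight of Eq. (13)."""
    return abs(np.cos(n * np.pi * t / t_max))

# Inside the update loop, the guiding position is scaled by w, e.g. for
# the encircling-prey branch of Eq. (14):
#     w        = inertia_weight(t, t_max, n)
#     D_enc    = np.abs(C * X_best - X[i])
#     X_new[i] = w * X_best - A * D_enc      # Eq. (2) uses X_best without w
# and analogously w * X_rand in Eq. (15) and w * X_best in the spiral branch.
```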

Fig. 3 Iteration-based adaptive weight with diverse values of \(n\)

3.4 Optimal-Based Feedback Mechanism of Random Strategy

The search for prey phase of the WOA algorithm is a random foraging strategy: during exploration, a random whale from the current population replaces the best position found so far in the position update. Although this completely random behavior maintains population diversity, it also increases the uncertainty of the algorithm, leading to poor stability and a slow convergence rate. In practice, during the exploration phase each whale could search globally more effectively by exploiting the information already obtained. Inspired by [12], an optimal-based feedback mechanism is therefore introduced into the search for prey phase of WOA, which leverages the information of the current optimum to guide the stochastic search. The specific mathematical expression of the strategy is as follows:

$$\vec{X}_{new\_rand} = \left\{ {\begin{array}{*{20}l} {\lambda \vec{X}_{\rm rand} + (1 - \lambda )\vec{X}^{*} + c(\vec{X}_{\rm rand} - \vec{X}^{*} )} & , & {p1 > CA} \\ {\vec{X}_{\rm rand} } & , & {p1 \le CA} \\ \end{array} } \right.,$$
(16)

where \(\lambda\) and \(p1\) represent uniformly distributed random values between 0 and 1, \(c\) is a uniformly distributed random value between \(-1\) and 1, and \(CA \in [0,1]\) is a predetermined probability threshold.

Based on the above formula, for \(p1 > CA\), \(\vec{X}_{\rm rand}\) and \(\vec{X}^{*}\) are crossed to yield a new random position \(\vec{X}_{{\rm new}\_{\rm rand}}\), which lies within the interval \([\vec{X}_{\rm rand} ,\vec{X}^{*} ]\) with high probability or outside this range with low probability. For \(p1 \le CA\), the search for prey phase remains completely random. Intuitively, for unimodal functions there is only one optimal solution, so the algorithm can reach the global solution faster with the guidance of the best solution found so far in the search for prey stage; conversely, for multimodal functions with multiple local solutions, the random strategy is better at escaping local optima. Therefore, different values of \(CA\) can be set for functions with distinct properties.
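A minimal sketch of Eq. (16), to be applied to the randomly selected whale before the update of Eq. (15); the function name is illustrative:

```python
import numpy as np

def feedback_rand(X_rand, X_best, CA):
    """Optimal-based feedback of Eq. (16)."""
    if np.random.rand() > CA:                  # p1 > CA: crossover with the current best
        lam = np.random.rand()                 # lambda, uniform in [0, 1]
        c = np.random.uniform(-1, 1)           # expansion coefficient, uniform in [-1, 1]
        return lam * X_rand + (1 - lam) * X_best + c * (X_rand - X_best)
    return X_rand                              # p1 <= CA: keep the purely random choice
```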

3.5 The Pseudocode of MSWOA

The preceding subsections have detailed the multi-strategy modifications of the basic WOA designed to improve its performance. The corresponding pseudocode of the modified algorithm (MSWOA) is described in Algorithm 2.

Algorithm 2 Pseudocode of MSWOA
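Since Algorithm 2 is likewise rendered as an image, the sketch below condenses the MSWOA main loop in Python, reusing the helper functions sketched above (`tent_map_init`, `nonlinear_a`, `inertia_weight`, `feedback_rand`). Boundary handling by clipping is an assumption, as the paper does not specify it:

```python
import numpy as np

def mswoa(f, lb, ub, N=30, t_max=500, k=0.5, n=1.0, CA=0.5, b=1.0):
    """Condensed MSWOA main loop combining the four strategies of Sect. 3."""
    X = tent_map_init(N, lb.size, lb, ub)              # strategy 1: chaotic initialization
    fit = np.array([f(x) for x in X])
    X_best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(t_max):
        a = nonlinear_a(t, t_max, k)                   # strategy 2: nonlinear convergence factor
        w = inertia_weight(t, t_max, n)                # strategy 3: periodic inertia weight
        for i in range(N):
            A = 2 * a * np.random.rand() - a
            C = 2 * np.random.rand()
            if np.random.rand() < 0.5:
                if abs(A) < 1:                         # exploitation branch of Eq. (14)
                    X[i] = w * X_best - A * np.abs(C * X_best - X[i])
                else:                                  # exploration, Eqs. (15)-(16)
                    X_r = feedback_rand(X[np.random.randint(N)], X_best, CA)  # strategy 4
                    X[i] = w * X_r - A * np.abs(C * X_r - X[i])
            else:                                      # spiral branch of Eq. (14)
                l = np.random.uniform(-1, 1)
                X[i] = w * X_best + np.abs(X_best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l)
            X[i] = np.clip(X[i], lb, ub)               # keep positions feasible (assumed)
            fi = f(X[i])
            if fi < best_fit:                          # greedy update of the global best
                X_best, best_fit = X[i].copy(), fi
    return X_best, best_fit
```

For instance, `mswoa(lambda x: float(np.sum(x**2)), -100*np.ones(30), 100*np.ones(30))` would minimize the 30-dimensional sphere function under these assumptions.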

4 Experimental Results and Analysis

The effectiveness and efficiency of MSWOA are evaluated by solving the 24 global optimization problems listed in Table 1. The numerical experiments consist of two parts. First, in order to verify whether each improvement contributes to the algorithm's optimization ability, we test the effect of each improvement individually. Second, MSWOA is compared with five other population-based optimization algorithms in terms of convergence rate, solution accuracy and stability.

Table 1 Description of benchmark test functions

The algorithms are programmed in MATLAB R2021b, and the simulations are executed on an Intel Core i5-9500 CPU at 3.00 GHz with 8 GB of main memory.

4.1 Benchmark Functions and Experimental Settings

The test functions employed in the numerical experiments can be divided into two types according to their characteristics. \(f_{1} \sim f_{12}\) are unimodal functions with only one global minimum, which are adopted to validate the exploitation capability of the investigated meta-heuristic algorithms. \(f_{13} \sim f_{24}\) are multimodal functions with more than one local extremum, so they are commonly used to test an algorithm's global search ability, i.e., whether it can avoid local optima and obtain the global optimal solution. All test functions have a dimension of 30.

To facilitate the comparison of the simulation results, common parameters are used in all compared algorithms: the population size is 30 and the maximum number of iterations \(t_{\rm max}\) is set to 500. The other parameters of all algorithms are shown in Table 2. Because the algorithms include random factors, each algorithm is run 30 times independently on each test function to reduce the statistical error of the experimental results.

Table 2 Parameter settings

4.2 Effect Analysis of Each Improvement

To enhance the optimization ability of the basic WOA, the new MSWOA adopts diverse strategies. This section analyzes the impact of each improvement strategy on the optimization performance of the basic WOA. Many meta-heuristic algorithms have already been enhanced by utilizing chaotic maps to initialize the population [21, 24, 30], so this case is not discussed again. For a fixed number of iterations, the impact of the other three improvements on WOA is analyzed by solving the test functions. The specific experimental results are illustrated in Fig. 4. It can be concluded that the optimization ability of WOA can be significantly improved by adopting appropriate parameters for the nonlinear convergence factor, the variable weight and the optimal feedback.

Fig. 4 Test results on the effect of each improvement individually

A large number of computational results show that for the nonlinear convergence factor \(a\), the appropriate value of the smoothness parameter \(k\) lies in \([0.3,0.5]\) for most unimodal functions and in \([0.6,0.9]\) for most multimodal functions. From Fig. 2, one can observe that for unimodal functions, \(a\) is large and drops rapidly in the early iterations and decreases slowly in the later iterations. This nonlinear update mechanism matches intuition: an initially large and quickly decreasing \(a\) helps to complete the global search quickly and accelerate convergence, while the slow decrease in the later stage benefits local search and improves solution accuracy. However, this pattern does not suit multimodal functions, which runs counter to this intuition.

Based on Figs. 3 and 4, for unimodal functions the period control factor \(n\) of the weight function has an optimal value in the interval \((0.5,1]\). This implies that the evolutionary process for unimodal functions is divided into two stages by the nonlinear inertia weight. At the beginning of the iteration, the population searches for better solutions with a large step length in the feasible region, which helps avoid local solutions and prevents premature convergence. In the latter phase, the weight value increases with the evolutionary progress, which gradually strengthens the influence of the best whale position, prompting the other whales to approach it faster, and simultaneously guarantees that the algorithm converges to the global optimum at an accelerated rate. For most multimodal functions, the optimal value satisfies \(n \ge 1\), which means that the weight function passes through multiple cycles during the iterative process. Evidently, for optimization problems with multiple extreme values, the influence of the current optimal solution on the population update needs to be adjusted many times (i.e., through multiple processes similar to addressing unimodal functions) to prevent the population from getting stuck in a single local solution.

As can be seen from Fig. 4, the optimal feedback strategy has little effect on unimodal functions and is relatively stable there, whereas it has a great influence on multimodal functions but lacks stability. This may be related to the random selection in the search for prey phase and the stochastic operation of the optimal feedback mechanism.

In general, for the test functions, the adaptive inertia weight strategy based on iteration has the best improvement effect, followed by the nonlinear convergence factor, and the optimal feedback strategy has the weakest improvement effect.

5 Performance Comparison and Analysis

To better illustrate the performance of MSWOA, we compare it with ESPSO [11], HBA [13], CSO [12], GWO [31] and WOA [19] in terms of the average (mean), standard deviation (std), worst value (worst) and best value (best). The computational results of 30 repeated simulations are reported in Table 3, where "best" and "worst" are the best and worst results obtained by each algorithm over the 30 runs and represent the convergence accuracy; "mean" denotes the average of all fitness values and is the main indicator of algorithm performance; "std" indicates the standard deviation of all fitness values and reflects the stability of the algorithm to a certain extent; boldface indicates the better results. The parameter settings of all comparison algorithms are the same as those in Sect. 4.

Table 3 Computational results of ESPSO, HBA, CSO, GWO, WOA and MSWOA on 24 benchmark functions

As shown in Table 3, for most unimodal functions, MSWOA exhibits better optimization ability than the comparison algorithms. MSWOA is the most efficient method in solving functions \(f_{2} - f_{5} ,f_{7} ,f_{9} ,f_{10} ,f_{12}\). For \(f_{1} ,f_{8} ,f_{11}\), both MSWOA and CSO reach the optimal values in terms of the evaluation criteria, and HBA is the third most effective algorithm, performing better than the remaining three.

According to the results in Table 3, MSWOA has clear competitive advantages over the other methods on most of the multimodal functions (\(f_{13} \sim f_{24}\)), which are difficult to solve. For \(f_{15} ,f_{19}\), MSWOA is the best algorithm with respect to the evaluation criteria, with HBA second best. For \(f_{21}\), MSWOA exhibits the best performance, followed by ESPSO and WOA. When dealing with \(f_{13}\) and \(f_{14}\), both MSWOA and HBA show the best performance, and WOA outperforms ESPSO, CSO and GWO. For \(f_{16}\), MSWOA, HBA and CSO are equally competitive and ahead of ESPSO, GWO and WOA. For \(f_{14}\), MSWOA, HBA, CSO, GWO and WOA show no significant difference in the evaluation criteria and are superior to ESPSO. For \(f_{17}\), both MSWOA and CSO obtain the same results, outperforming the other algorithms, and WOA is the third best. For \(f_{18}\), MSWOA presents the best optimization ability, followed by CSO. For \(f_{20} ,f_{23}\), MSWOA is inferior to CSO but still superior to the others. For \(f_{22}\), CSO is the best algorithm, followed by HBA, with MSWOA third. Overall, the presented MSWOA exhibits better performance in terms of solution accuracy and stability on almost all of the multimodal test functions.

To further illustrate the differences in the convergence behavior of the algorithms, the convergence curves of MSWOA, ESPSO, HBA, CSO, GWO and WOA on all benchmark functions are shown in Figs. 5 and 6. "Best score obtained so far" is the average of the current best solutions achieved by each algorithm over the 30 runs on each benchmark function.

Fig. 5 Convergence curve comparison of the algorithms on unimodal functions

Fig. 6 Convergence curve comparison of the algorithms on multimodal functions

Figures 5 and 6 provide the convergence graphs of MSWOA, ESPSO, HBA, CSO, GWO and WOA on each benchmark function, which clearly illustrate the differences in convergence rate among the comparison algorithms. MSWOA significantly outperforms ESPSO, HBA, CSO, GWO and WOA on the majority of the test problems in terms of convergence speed and accuracy. In particular, for the unimodal functions \(f_{1} \sim f_{12}\), MSWOA converges to a better solution faster, and for the multimodal functions \(f_{13} \sim f_{24}\), MSWOA shows an apparent superiority in convergence speed over the other algorithms. However, the final solution quality of MSWOA on \(f_{23}\) and \(f_{24}\) is not as good as that of CSO, though still better than ESPSO, HBA, GWO and WOA. On balance, MSWOA presents a distinct advantage over the comparison algorithms in convergence rate and accuracy for most test functions.

Like all meta-heuristic algorithms, MSWOA includes random elements, so Wilcoxon's rank sum test [32] is adopted to statistically assess the superiority of the proposed algorithm. The \(p \, {\rm value}\) obtained from the test indicates whether there is a significant difference between MSWOA and a comparison algorithm at a significance level of 0.05. If \(p \, {\rm value} \le 0.05\), MSWOA has a statistical advantage in solving the test problem compared to the comparison algorithm, and \(h\) returns 1; if \(0.05 < p \, {\rm value} < 1\), MSWOA has no statistical advantage and \(h\) returns 0; if \(p \, {\rm value} = 1\), there is no difference between the two compared algorithms, i.e., their standard deviations are both 0. The Wilcoxon rank sum test results on the global optima are presented in Table 4, where the global optima are obtained by running the algorithms on the test functions 30 times independently.
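For reference, the per-function test can be reproduced with SciPy's two-sided rank sum test; the data below are illustrative stand-ins for the 30 recorded best fitness values per algorithm:

```python
import numpy as np
from scipy.stats import ranksums

# Illustrative data: 30 best fitness values per algorithm on one function.
rng = np.random.default_rng(0)
mswoa_runs = rng.normal(0.0, 1e-3, 30)   # stand-in for MSWOA results
rival_runs = rng.normal(0.5, 1e-1, 30)   # stand-in for a comparison algorithm

stat, p_value = ranksums(mswoa_runs, rival_runs)
h = 1 if p_value <= 0.05 else 0          # h = 1: significant difference at the 0.05 level
print(p_value, h)
```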

Table 4 \(p \, {\rm value}\) and \(h\) obtained from Wilcoxon's rank sum test

The results clearly show that the \(p \, {\rm value}\)s of most test functions are less than 0.05, illustrating that MSWOA solves the test problems more robustly in most cases. It should be noted that there are only six cases with \(p \, {\rm value} > 0.05\): MSWOA versus WOA on \(f_{13}\); MSWOA versus CSO on \(f_{6} ,f_{20} ,f_{22}\) and \(f_{23}\); and MSWOA versus HBA on \(f_{23}\).

In conclusion, the experimental results and discussions show that, compared with the other algorithms, MSWOA is superior in escaping local minima, convergence rate and stability during the iterative optimization process. In other words, the new update mechanism of MSWOA significantly improves the optimization ability of the standard WOA.

6 Conclusions

In this study, an improved WOA based on joint strategies is designed according to the features of the basic WOA to enhance its performance in solving optimization problems. MSWOA first introduces chaos theory to generate an initial population with a better distribution in the search space, maximizing the search capability. Meanwhile, nonlinear strategies for the convergence factor and inertia weight are designed based on the number of iterations to tune the exploration and exploitation process, which improves accuracy and increases convergence speed. Finally, an optimal-based feedback strategy is given to enhance the stability and global search ability of the basic WOA by utilizing the information of the current best solution to execute the exploration phase without blindness. Extensive numerical experiments are conducted on 24 benchmark functions to analyze the performance of MSWOA. The effect of each improvement strategy on the optimization ability of WOA is discussed; the results show that the change of inertia weight has the greatest impact, followed by the convergence factor, while the current best solution has a significant influence on solving optimization problems with multiple local optima. Moreover, the comparison results with other state-of-the-art meta-heuristic algorithms indicate that MSWOA achieves higher-quality solutions within fewer iterations and exhibits a faster convergence rate as well as stronger stability.