Introduction

Swarm intelligence algorithms, an emerging class of optimization methods, have become indispensable in artificial intelligence; they drive a population gradually toward the optimal solution by simulating the collective behaviors and survival rules of a swarm1. Traditional swarm intelligence algorithms include, but are not limited to, the ant colony optimization algorithm (ACO) and the particle swarm optimization algorithm (PSO)2,3. As research on swarm intelligence has progressed, novel algorithms have emerged, such as the brain storm optimization (BSO) algorithm, the cuckoo search algorithm (CS), the fruit fly optimization algorithm (FOA), and the firefly algorithm (FA)4.

Inspired by human brainstorming conferences, the BSO algorithm has received extensive academic attention owing to its high optimization accuracy and strong performance on high-dimensional problems5. The algorithm consists of four steps: clustering, substitution, selection, and mutation. Moreover, the clustering operation, which divides the population into multiple groups, greatly improves population diversity and makes it easier to jump out of local optima than in other swarm intelligence algorithms. The BSO algorithm has been applied successfully to path planning6, image processing7, wireless sensor networks, and other fields8. To demonstrate the research value of the brain storm optimization algorithm, Fig. 1 illustrates the number of papers published every three years since its inception in 2011. As shown in the figure, only fourteen papers were published between 2011 and 2013, followed by a significant increase between 2014 and 2016. The number of papers has kept growing in recent years, indicating that more and more scholars are committed to improving and applying the brain storm optimization algorithm.

Figure 1
figure 1

The number of papers on brain storm optimization since 2011.

Although the clustering process enhances population diversity, traditional brain storm optimization still has some disadvantages. Compared with other improved intelligence algorithms, the traditional BSO algorithm converges more slowly and often fails to find the true optimum. A novel DE-based algorithm, the quantum-based avian navigation optimizer algorithm (QANA), was proposed9. QANA is built on long-term and short-term memories, a V-echelon communication topology, and quantum-based navigation comprising two mutation strategies and a qubit-crossover operator. Experimental results show that QANA is highly competitive with multiple intelligent algorithms. Mohammad proposed an enhanced moth-flame optimization (MFO-SFR) algorithm10, which introduces an effective stagnation finding and replacing (SFR) strategy to maintain population diversity and improve performance. Ali proposed the improved binary quantum-based avian navigation optimizer algorithm (IBQANA)11, which further improves on QANA. A novel bio-inspired algorithm named the starling murmuration optimizer (SMO) was also proposed12. SMO introduces a dynamic multi-flock construction and three new search strategies: separating, diving, and whirling. SMO has been applied to several mechanical engineering problems, and the results demonstrate that it can provide more accurate solutions.

Compared with these highly competitive algorithms, the performance of the traditional BSO algorithm needs to be further improved. Scholars have therefore improved the traditional BSO algorithm's basic parameters, clustering methods, and mutation strategies. Yu presented a BSO algorithm based on an adaptive search radius to increase convergence speed, which designs three search strategies that use success and failure memories to adjust the step range13. This algorithm fuses diverse fundamental parameters to enhance performance on different problems. However, its optimization accuracy improves only marginally over the original BSO, and it cannot solve the BSO algorithm's tendency to fall into local optima. The BSO algorithm based on elite individual guidance and parameter adaptation introduces elite and global-best individuals to guide population mutation14. This algorithm sets adaptive parameters to increase the number of global mutations in early iterations and local mutations in later iterations, significantly improving convergence speed and optimization accuracy. Still, it is prone to local convergence when solving multi-modal optimization problems. El-Abd proposed a BSO algorithm based on global-best guidance and fitness-based grouping for clustering, reducing time complexity and improving performance15. However, this algorithm converges more slowly than some of the latest improved BSO algorithms, and its low convergence speed makes its optimization accuracy unsatisfactory when iterations are limited. A BSO algorithm based on difference mutation and the global-best individual was proposed, in which a difference step replaces the original BSO mutation step and significantly improves convergence speed16. This algorithm follows the global-best mutation strategy and achieves dramatically improved optimization. However, the problem of trapping in local optima remains, so it performs poorly on complex multi-modal problems. Tuba creatively introduced chaos theory and proposed a BSO algorithm based on chaotic maps17. Compared with the original BSO algorithm, its performance improves only slightly and yields no apparent advantage; nevertheless, chaotic maps offer an innovative way to address the tendency of such algorithms toward premature local convergence. A BSO algorithm based on multi-branch chaotic maps introduced eight chaotic maps and improved optimization accuracy, but with low convergence speed and higher time complexity18. A BSO algorithm based on an adaptive self-scaling chaotic search mechanism was also proposed19. Its local search adjusts the search space through the adaptive self-scaling mechanism, and the chaotic local search prevents the algorithm from falling into local optima. Its convergence speed improves somewhat, but its optimization accuracy remains low. A BSO algorithm based on an advanced discussion mechanism was proposed20, which introduces a difference-step strategy and simplifies the BSO selection process. Strengthening the global search in the early stage and the local search in the later stage improves the algorithm's optimization accuracy, and the difference-step strategy enhances its convergence speed.
However, its optimization accuracy on high-dimensional multi-modal problems falls short of expectations, and it easily falls into the local-optimum trap. A global-best brain storm optimization algorithm based on the discussion mechanism and the difference step was also proposed21. This algorithm combines several improvement strategies with different properties and achieves the best convergence speed and optimization accuracy among the earlier algorithms. However, it still tends to fall into local optima when facing complex optimization problems, so further improvement is needed.

To sum up, the improved BSO algorithms in the existing literature suffer from low convergence speed, poor optimization accuracy, and a high probability of falling into local optima. Low convergence speed reduces the algorithm's efficiency, i.e., for a given accuracy requirement, the lower the convergence speed, the more time it takes, which limits practical applicability. Optimization accuracy is the most essential metric of an optimization algorithm, and low accuracy means poor performance. An algorithm that falls into a local optimum may waste a vast amount of time during the iteration cycle, which in turn degrades the final search accuracy. Therefore, the improvement of the brain storm optimization algorithm in this paper aims to further increase the convergence speed and optimization accuracy on the basis of the existing improved algorithms and to strengthen the algorithm's ability to jump out of local optima when facing complex problems.

Overall, this work introduces opposition-based learning and chaos theory, fusing chaotic maps with difference steps to construct a chaotic difference step strategy. The major innovation of the paper is the algorithm's remarkable ability to jump out of local convergence when facing complex, high-dimensional multi-modal optimization problems, together with its higher convergence speed and optimization accuracy. On this basis, a global-best BSO algorithm based on the chaotic difference step and opposition-based learning is proposed. This work: (1) proposes the chaotic difference step strategy, introduces opposition-based learning to generate the opposition-based population, and designs the trigger and end conditions of the strategy to reduce the algorithm's time complexity; (2) combines the existing global-best mutation strategy with the discussion mechanism to improve convergence speed and optimization accuracy; (3) carries out extensive experiments and data analysis on the CEC2013 and CEC2018 benchmark suites22,23.

BSO

The brain storm optimization algorithm is inspired by human brainstorming conferences. The BSO algorithm exploits the characteristics of human collective intelligence to solve problems and performs well in convergence speed and optimization accuracy across various optimization problems. Moreover, it has more advantages than traditional optimization algorithms on high-dimensional problems. The algorithm includes four main steps: clustering, substitution, selection, and mutation.

First, the k-means clustering method is applied: the current population of n solutions entering the iteration is divided into m clusters, which simulates human group-discussion behavior and improves the search efficiency of the algorithm.
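A minimal sketch of this clustering step is given below (using scikit-learn's KMeans; taking the best-fitness member of each cluster as the cluster center is an assumption based on common BSO practice, and minimization is assumed):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_population(population, fitness, m):
    """Divide an (n, D) population into m clusters, as in the BSO clustering step.

    Returns the cluster label of each individual and, per cluster, the index of
    the best-fitness member, which is commonly used as the cluster center.
    """
    labels = KMeans(n_clusters=m, n_init=10).fit(population).labels_
    centers = []
    for c in range(m):
        members = np.where(labels == c)[0]
        centers.append(members[np.argmin(fitness[members])])  # minimization assumed
    return labels, np.array(centers)
```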

Second, a parameter \(p_{replace}\) is set and a random number \(r_1\) between 0 and 1 is generated. When \(r_1\) is less than \(p_{replace}\), a new individual is generated to replace the selected cluster center. If \(p_{replace}\) is too large, the algorithm's convergence efficiency suffers; if it is too small, the population's diversity is reduced and the algorithm may converge prematurely.

Third, three probability parameters \(p_{one}\), \(p_{one\_center}\), and \(p_{two\_center}\) are set, and random numbers \(r_2\), \(r_3\), and \(r_4\) are generated. When \(r_2\) is less than \(p_{one}\), an individual from a single cluster is selected for mutation; otherwise, one individual from each of two clusters is selected and fused before mutation. In the single-cluster case, when \(r_3\) is less than \(p_{one\_center}\), the cluster center is mutated; otherwise, a random individual in that cluster is mutated. In the two-cluster case, when \(r_4\) is less than \(p_{two\_center}\), the two cluster centers are fused and then mutated; otherwise, a random individual from each cluster (the two cannot both be cluster centers) is selected and fused before mutation.
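The selection logic above can be sketched as follows (an illustrative, non-authoritative sketch; the constraint that the two random individuals are not both cluster centers is omitted for brevity, and the function name and return format are assumptions):

```python
import numpy as np

def select_for_mutation(labels, centers, p_one, p_one_center, p_two_center, rng):
    """Pick the individual index(es) to mutate, following the BSO selection step."""
    m = len(centers)
    r2, r3, r4 = rng.random(3)
    if r2 < p_one:                                   # single-cluster case
        c = rng.integers(m)
        members = np.where(labels == c)[0]
        if r3 < p_one_center:
            return (centers[c],)                     # mutate the cluster center
        return (rng.choice(members),)                # mutate a random member
    c1, c2 = rng.choice(m, size=2, replace=False)    # two-cluster case
    if r4 < p_two_center:
        return (centers[c1], centers[c2])            # fuse the two centers, then mutate
    m1 = rng.choice(np.where(labels == c1)[0])
    m2 = rng.choice(np.where(labels == c2)[0])
    return (m1, m2)                                  # fuse two random members, then mutate
```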

Fourth, the fusion or mutation operation is performed on the selected individuals, which are then compared with the original individuals according to their fitness, and the better individuals are retained. The fusion step is as follows:

$$\begin{aligned} {X_f} = v \times {X_1} + (1 - v) \times {X_2}, \end{aligned}$$
(1)

where \({X_f}\) is the new individual after fusion, v is a random number between 0 and 1, and \({X_1}\) and \({X_2}\) are the two random individuals to be merged. The mutation step is as follows:

$$\begin{aligned} {X_m} = {X_s} + \xi \times n(\mu ,\sigma ), \end{aligned}$$
(2)

where \({X_m}\) is the new individual after mutation, \({X_s}\) is the selected individual to be mutated, \(n(\mu ,\sigma )\) is a Gaussian random number with mean \(\mu\) and variance \(\sigma\), and \(\xi\) is the mutation coefficient defined in (3).

$$\begin{aligned} \xi = \mathrm{logsig}\left( {\frac{{0.5 \times {g_{max}} - g}}{k}} \right) \times rand(), \end{aligned}$$
(3)

where \({g_{max}}\) is the maximum number of iterations, g is the current iteration number, and k is the adjustment factor. The pseudocode of the BSO algorithm is shown in Algorithm 1.
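A compact sketch of Eqs. (1)–(3) is given below; logsig is taken to be the logistic sigmoid, and all function names are illustrative:

```python
import numpy as np

def fuse(x1, x2, rng):
    """Eq. (1): convex combination of two individuals with a random weight v."""
    v = rng.random()
    return v * x1 + (1.0 - v) * x2

def mutation_coefficient(g, g_max, k, rng):
    """Eq. (3): xi = logsig((0.5 * g_max - g) / k) * rand()."""
    logsig = 1.0 / (1.0 + np.exp(-(0.5 * g_max - g) / k))
    return logsig * rng.random()

def gaussian_mutate(x_s, g, g_max, k, rng, mu=0.0, sigma=1.0):
    """Eq. (2): add xi-scaled Gaussian noise to the selected individual."""
    xi = mutation_coefficient(g, g_max, k, rng)
    return x_s + xi * rng.normal(mu, sigma, size=x_s.shape)
```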

Algorithm 1
figure a

The BSO algorithm.

COGBSO

In this section, two improvements to the BSO algorithm are introduced to enhance its performance. The steps of the COGBSO algorithm are shown in Algorithm 2.

Discussion mechanism based on global-best strategy

In the early iterations of an intelligent algorithm's search, the global search should be strengthened to improve population diversity and convergence speed, while the local search should be strengthened in later iterations to improve optimization accuracy. Because the traditional BSO algorithm's selection process is entirely random and can hardly meet these requirements, a discussion mechanism is introduced20, which divides the selection process into two situations: inter-group discussion and intra-group discussion. An adaptive probability parameter adjusts the frequency of inter-group and intra-group discussion, strengthening the adaptability of the BSO algorithm and improving its performance.

The discussion mechanism is divided into two parts: intra-group discussion and inter-group discussion. In intra-group discussion, the individuals to be mutated are generated within one cluster, with three mutation types: mutation of the cluster center, mutation of a random individual, and mutation after fusing two random individuals in the group. In inter-group discussion, the individuals to be mutated come from two different clusters; the cases include fusing two random cluster centers before mutation, fusing two random individuals before mutation, and randomly generating a new individual. The randomly generated new individual ensures population diversity and reduces the probability of the algorithm falling into a local optimum. In addition, the adaptive probability parameter is set as follows:

$$\begin{aligned} {P_{intra}} = {P_l} + {P_r} \times (\frac{g}{{{g_{max}}}}), \end{aligned}$$
(4)
$$\begin{aligned} {P_{inter}} = 1 - {P_{intra}}, \end{aligned}$$
(5)

where \(P_{intra}\) is the probability of intra-group discussion, \(P_{inter}\) is the probability of inter-group discussion, \(P_{l}\) is the lowest probability of intra-group discussion being adopted, and \(P_{r}\) is the linear range parameter.
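Eqs. (4) and (5) translate directly into code; a brief sketch, using the \(P_l\) and \(P_r\) values given later in the parameter settings:

```python
def discussion_probabilities(g, g_max, p_l=0.2, p_r=0.7):
    """Eqs. (4)-(5): intra-group discussion becomes more likely as iterations progress."""
    p_intra = p_l + p_r * (g / g_max)
    return p_intra, 1.0 - p_intra
```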

The mutation step adopts the difference step. Compared with the Gaussian mutation step, the difference step adapts better, strengthening the global search ability in early iterations and the local search ability in later iterations. The mutation form of the difference step is as follows:

$$\begin{aligned} {X_m} = {X_s} + ({X_1} - {X_2}) \times rand(), \end{aligned}$$
(6)

where \({X_1}\) and \({X_2}\) are two random individuals in the population. Since the gaps between individuals are large early on, the difference step is large, which widens the search range; in later iterations the gaps are small, so the smaller difference step confines the search to a small region and improves optimization accuracy.
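A minimal sketch of the difference-step mutation of Eq. (6) (treating rand() as a single scalar is an assumption):

```python
import numpy as np

def difference_step_mutate(x_s, population, rng):
    """Eq. (6): the step size adapts to the current spread of the population."""
    i, j = rng.choice(len(population), size=2, replace=False)
    return x_s + (population[i] - population[j]) * rng.random()
```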

Many intelligent algorithms have applied the global-best strategy with desirable results24,25. The global-best strategy was introduced into the BSO algorithm for the first time in15. The global-best individual is the individual with the best fitness in each generation, and the global-best strategy moves newly generated individuals as close to the global-best individual as possible, which improves the optimization accuracy of the algorithm. The core idea is as follows:

$$\begin{aligned} {X_n} = {X_m} + C \times R \odot ({X_b} - {X_m}), \end{aligned}$$
(7)

where \({X_n}\) is the new individual following the global-best individual, R is a D-dimensional vector whose components are random numbers between 0 and 1, \(\odot\) denotes element-wise multiplication, \({X_b}\) is the global-best individual, and C is the global-best coefficient that controls how strongly the new individual follows the global-best individual. It is calculated as follows:

$$\begin{aligned} C = {C_{min}} + \frac{g}{{{g_{max}}}} \times ({C_{max}} - {C_{min}}), \end{aligned}$$
(8)

where \(C_{min}\) is the lower bound and \(C_{max}\) is the upper bound of the global-best coefficient.
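Eqs. (7) and (8) can be sketched as follows (R is drawn per dimension and applied element-wise; the default bounds match the parameter settings reported later):

```python
import numpy as np

def global_best_guidance(x_m, x_b, g, g_max, c_min=0.2, c_max=0.8, rng=None):
    """Eqs. (7)-(8): pull the mutated individual toward the global-best individual."""
    if rng is None:
        rng = np.random.default_rng()
    c = c_min + (g / g_max) * (c_max - c_min)   # Eq. (8): coefficient grows with iterations
    r = rng.random(x_m.shape)                   # D-dimensional vector of U(0, 1) numbers
    return x_m + c * r * (x_b - x_m)            # Eq. (7), element-wise product
```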

Chaotic difference step and opposition-based population strategy

Combining the global-best strategy with the discussion mechanism significantly improves the traditional BSO algorithm's optimization accuracy and convergence speed. However, when the algorithm deals with high-dimensional multi-modal problems, it easily falls into local optima and rarely obtains an ideal solution. To address this, this paper designs a local-optimum escape mechanism by combining chaos theory and opposition-based learning.

In recent years, many chaotic maps have been discovered and applied to various fields of human activity26. In intelligent algorithms, chaotic maps are widely used in population initialization, individual selection, mutation, and other processes27,28. Because chaotic sequences are ergodic, random-like, and unpredictable in the long term, they can achieve better results than plain random numbers and improve the algorithm's performance.

In most of the existing literature, chaotic maps are used to replace Gaussian mutation, and the properties of chaotic solutions increase population diversity. However, the unpredictability of the chaotic step makes it difficult to adjust the step size according to the current population distribution, so the algorithm's full potential cannot be exploited. Since the difference step adapts well to the population, this paper combines the difference step with the chaotic step, retaining the advantages of both: the step adjusts in time with the distribution of the population while population diversity also increases.

The chaotic difference step scales the difference step by introducing chaotic maps, which expands the search space and shortens the time spent trapped in a local optimum. A schematic diagram of its effect is shown in Fig. 2.

Figure 2
figure 2

Schematic diagram of the chaotic difference step.

The chaotic difference step can be expressed as:

$$\begin{aligned} {X_m} = {X_s} + ({X_1} - {X_2}) \times rand() + ({X_1} - {X_2}) \times (y(t) - 0.5), \end{aligned}$$
(9)

where y(t) represents a chaotic map. Taken together, the formula and the schematic diagram illustrate the advantage of the chaotic difference step. In Fig. 2, \(R_1\) represents the average radius of the search space formed by the traditional difference step, while \(R_2\) and \(R_3\) represent the average radii of the search spaces formed by the difference step after perturbation by the logistic chaotic map. Two kinds of radii arise because most solutions of the logistic map are distributed near 0 and 1. Since the formula contains the term \((y(t) - 0.5)\), the radius can both shrink and grow, generating two search spaces. Taking a population size of 10 as an example, Fig. 2 shows that shrinking the search space makes the population more likely to find better solutions near the local optimum within that range, while expanding the search space gives the population better diversity, placing new individuals far from the local optimum and increasing the chance of escaping it.
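Eq. (9) in a minimal sketch; the current chaotic map value y_t is supplied by the caller (see the map sketches further below):

```python
import numpy as np

def chaotic_difference_mutate(x_s, population, y_t, rng):
    """Eq. (9): ordinary difference step plus a chaotic perturbation term (y_t - 0.5).

    y_t is the current value of a chaotic map in [0, 1]; values near 0 shrink the
    effective search radius, values near 1 enlarge it, as illustrated in Fig. 2.
    """
    i, j = rng.choice(len(population), size=2, replace=False)
    diff = population[i] - population[j]
    return x_s + diff * rng.random() + diff * (y_t - 0.5)
```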

Additionally, the solutions of different chaotic maps have different distribution characteristics. To fully expand the search space, chaotic maps with different, complementary distributions are selected to form chaotic difference steps over different intervals. The fitness of the individuals produced by each chaotic difference mutation is then compared, and the best is retained to update the population so that the algorithm can jump out of the local optimum. The four chaotic maps selected in this paper are defined in Table 1:

Table 1 Definition of chaotic maps.

The distinctions among the chaotic maps are best illustrated by visualizing them29. Visualizations of the four chaotic maps are presented in Fig. 3, which shows their distributions. The four selected maps are clearly distinguishable, and their distributions complement one another, together covering the whole interval from 0 to 1. The cubic map is evenly distributed and searches thoroughly across the entire interval. The distribution of the sine map is similar to that of the cubic map, but its properties keep it away from the endpoint 0, thereby extending the search space further. The logistic map covers the full range from 0 to 1 but is concentrated near the endpoints, so it can cause a significant change of the step. The circle map shows a fairly even distribution within the range 0.2 to 0.5, as evidenced by its distinct distribution in the figure compared with the other three maps. These different distributions facilitate the adjustment of the chaotic difference step and increase the algorithm's ability to escape local optima.

Many chaotic maps are available in chaos theory, yet this work selects these four for two reasons. First, using more chaotic maps increases time complexity and hinders the algorithm's overall performance. Second, the four selected maps are representative: maps such as the iterative map present a distribution similar to the logistic map, so introducing them would not significantly enhance performance. Therefore, these four chaotic maps are selected in this paper.
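For reference, commonly used textbook forms of these four maps are sketched below; the coefficients (e.g., the logistic parameter 4, the cubic parameter 2.595, and the circle-map constants) are assumptions and may differ from the exact definitions in Table 1:

```python
import numpy as np

def logistic_map(y):
    """Logistic map (parameter 4 assumed): iterates concentrate near 0 and 1."""
    return 4.0 * y * (1.0 - y)

def sine_map(y):
    """Sine map: coverage similar to the cubic map, but stays away from the endpoint 0."""
    return np.sin(np.pi * y)

def cubic_map(y, rho=2.595):
    """Cubic map (rho = 2.595 assumed): fairly even coverage of (0, 1)."""
    return rho * y * (1.0 - y * y)

def circle_map(y, a=0.5, b=0.2):
    """Circle map (a = 0.5, b = 0.2 assumed): iterates cluster roughly in [0.2, 0.5]."""
    return np.mod(y + b - (a / (2.0 * np.pi)) * np.sin(2.0 * np.pi * y), 1.0)
```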

Figure 3
figure 3

The distribution of the four chaotic maps.

The chaotic difference step strategy expands the search space of the population. If the search space is kept unchanged, increasing the search density instead improves the probability of jumping out of a local optimum, and opposition-based learning provides a principled way to increase search density. Tizhoosh first proposed the idea of opposition-based learning and applied it to machine intelligence30, after which numerous scholars applied the idea to intelligent algorithms to improve their performance31,32. In the many simulation experiments reported in that literature, generating opposition-based populations from opposition-based solutions improves algorithm performance. To the best of our knowledge, opposition-based learning has not previously been introduced into brain storm optimization; therefore, to further improve the algorithm's ability to jump out of local optima, this paper introduces the opposition-based learning concept into the BSO algorithm.

According to probability theory, a solution and its opposition-based solution each have a one-half probability of being the better of the two. Therefore, an opposition-based solution is generated after an individual mutates, and the better individual is retained to improve the probability of jumping out of the local optimum. The opposition-based solution is expressed as follows:

$$\begin{aligned} X_o^d = Max{P^d} + Min{P^d} - X_m^d, \end{aligned}$$
(10)

where \(X_o\) is the opposition-based solution, \(X_o^d\) is the d-dimensional component of \(X_o\), \(Max{P^d}\) and \(Min{P^d}\) are the maximum and minimum values of the d-dimensional components over all individuals in the current population, and \(X_m^d\) is the d-dimensional component of the outstanding individual retained after difference mutation and the chaotic difference step.
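Eq. (10) in a minimal sketch, where the per-dimension bounds are taken over the current population:

```python
import numpy as np

def opposition_solution(x_m, population):
    """Eq. (10): reflect x_m within the per-dimension bounds of the current population."""
    max_p = population.max(axis=0)   # MaxP^d for each dimension d
    min_p = population.min(axis=0)   # MinP^d for each dimension d
    return max_p + min_p - x_m
```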

The chaotic difference step and the opposition-based population strategy do not conflict and jointly help the algorithm jump out of local optima. However, they significantly increase the time and space complexity of the algorithm. Therefore, a trigger condition and an end condition are required; the schematic diagram is shown in Fig. 4.

Figure 4
figure 4

The trigger condition and end condition.

In Fig. 4, t is the count variable and flag is the trigger variable. When flag equals 1, the two strategies are executed; when it equals 0, the original process remains unchanged. This design mitigates the increase in time complexity caused by introducing the chaotic difference step and opposition-based population strategy into the COGBSO algorithm. Since these two strategies are designed to help the algorithm escape local optima, it is necessary to define when the algorithm is considered trapped. Based on extensive experimental tests, this paper adopts a simple working definition: the algorithm is regarded as trapped in a local optimum when the best fitness value remains unchanged for more than 20 iterations. In that case, the search space is expanded through the chaotic difference step, and the opposition-based population is generated to help the algorithm escape. Because the time complexity increases considerably at this point, an end condition is needed so that the algorithm stops using both strategies in time. The end condition covers two cases. On the one hand, if the fitness quickly improves after the two strategies are introduced, they are considered to have helped the algorithm jump out of the local optimum and can be deactivated. On the other hand, if the fitness does not change over a long period after the strategies are introduced, they are considered to have lost their effect, or the algorithm has reached its limit and cannot improve further; the algorithm then stops using the two strategies promptly, reducing its time complexity. The end conditions are also defined in Fig. 4: if the best fitness value changes, or if it remains unchanged for 50 further iterations, the algorithm returns to the original mutation process. The pseudocode is shown in Algorithm 2.
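The trigger and end conditions of Fig. 4 can be summarized in a small stateful helper (a sketch under the thresholds stated above, i.e., more than 20 stagnant iterations to trigger and 50 to stop; the class name and structure are illustrative, and minimization is assumed):

```python
class EscapeController:
    """Tracks stagnation to decide when the chaotic difference step and
    opposition-based population strategy are switched on (flag = 1) or off (flag = 0)."""

    def __init__(self, trigger_iters=20, stop_iters=50):
        self.trigger_iters = trigger_iters   # stagnation length that triggers the strategies
        self.stop_iters = stop_iters         # stagnation length that ends them
        self.flag = 0                        # 1 -> both strategies active
        self.t = 0                           # consecutive iterations without improvement
        self.best = float("inf")

    def update(self, best_fitness):
        """Call once per iteration with the current best fitness; returns the flag."""
        improved = best_fitness < self.best
        self.best = min(self.best, best_fitness)
        self.t = 0 if improved else self.t + 1
        if self.flag == 0 and self.t > self.trigger_iters:
            self.flag, self.t = 1, 0         # trigger: treated as trapped in a local optimum
        elif self.flag == 1 and (improved or self.t >= self.stop_iters):
            self.flag, self.t = 0, 0         # end: strategies helped, or are no longer effective
        return self.flag
```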

Algorithm 2
figure b

The COGBSO algorithm.

Experimental results

From the CEC2013 benchmark test suite, 15 benchmark functions, including uni-modal and multi-modal functions, are selected and defined in Table 2. Experiments are performed in 10, 20, and 30 dimensions. Simulation comparison experiments are carried out for eight algorithms: BSO5, GBSO15, GDBSO16, ADMBSO20, SSA33, MSCA34, MSWOA35, and COGBSO. In addition, to validate the algorithms more fully, this paper also compares a variety of competitive intelligent algorithms on the CEC2018 benchmark test suite, with the dimension set to 309,36. The performance of each algorithm is then analyzed; each group of simulations is run 30 times independently. The simulation platform is Matlab 2018a.

Table 2 Benchmark functions.

Parameters settings

The basic parameters of the BSO algorithm and the improved BSO algorithms compared in this paper are taken from5. The parameter settings are as follows: population size \(n = 100\), number of clusters \(m = 5\), adjustment factor \(k = 20\), number of evaluations \({F_{max}} = D \times {10^4}\), and probability parameters \({P_{replace}} = 0.1\), \({P_{one}} = 0.5\), \({P_{one\_center}} = 0.3\), and \({P_{two\_center}} = 0.2\). The parameters related to the global-best strategy15 are set as \({C_{min}} = 0.2\) and \({C_{max}} = 0.8\). The discussion mechanism probability parameters20 are set to \({P_l} = 0.2\) and \({P_r} = 0.7\). In addition, five other swarm intelligence algorithms are compared, with parameter settings taken from9,33,34,35,36.
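For convenience, these settings can be collected in one place (a plain configuration sketch; the dictionary layout is ours, and the evaluation budget depends on the dimension D):

```python
def cogbso_params(D):
    """Parameter settings used in the experiments (see refs. 5, 15, and 20)."""
    return {
        "n": 100,                    # population size
        "m": 5,                      # number of clusters
        "k": 20,                     # adjustment factor in Eq. (3)
        "F_max": D * 10**4,          # number of fitness evaluations
        "p_replace": 0.1,
        "p_one": 0.5,
        "p_one_center": 0.3,
        "p_two_center": 0.2,
        "C_min": 0.2, "C_max": 0.8,  # global-best coefficient bounds, Eq. (8)
        "P_l": 0.2, "P_r": 0.7,      # discussion mechanism probabilities, Eq. (4)
    }
```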

Simulation results and analysis

The optimization results of each algorithm for dimensions D = 10, 20, and 30 are shown in Tables 3, 4, 5, 6, 7, and 8. Each benchmark function is run 30 times, and four statistics are reported: mean value, standard deviation, best value, and worst value. The comparison of the algorithms on the complete CEC2018 benchmark test suite is shown in Tables 9 and 10. The best mean value of each benchmark function is marked in bold.

Table 3 Comparison between BSO variants on 10-D problems.
Table 4 Comparison between BSO variants on 20-D problems.
Table 5 Comparison between BSO variants on 30-D problems.
Table 6 Comparison between different swarm intelligence algorithms on 10-D problems.
Table 7 Comparison between different swarm intelligence algorithms on 20-D problems.
Table 8 Comparison between different swarm intelligence algorithms on 30-D problems.
Table 9 Comparison of COGBSO with BSO variants for CEC 2018 test functions with D = 30.
Table 10 Comparison of COGBSO with latest competitive algorithms for CEC 2018 test functions with D = 30.

Several conclusions can be drawn by analyzing the optimization accuracy on the 15 test functions selected from CEC2013. First, for the 10-dimensional problems, compared with the other improved brain storm optimization algorithms, the COGBSO algorithm performs much better than the BSO algorithm and also clearly outperforms the improved GBSO, GDBSO, and ADMBSO algorithms, achieving the best performance on all benchmark functions. The main reason is that the COGBSO algorithm combines the advantages of the GDBSO and ADMBSO algorithms to improve convergence speed. Moreover, the possibility of jumping out of local optima is significantly increased with the assistance of the chaotic difference step and the opposition-based population strategy, which improves the optimization accuracy. The COGBSO algorithm also holds advantages over the other swarm intelligence algorithms. Compared with the SSA algorithm, COGBSO performs slightly worse on the F9 function and clearly leads on the remaining functions. Compared with the MSCA and MSWOA algorithms, COGBSO leads in optimization accuracy on most benchmark functions but performs worse on several test functions.

Second, compared with the other improved brain storm optimization algorithms, the performance of the COGBSO algorithm degrades on a few benchmark functions for 20-dimensional problems. On the F12 and F13 benchmark functions, COGBSO is only slightly worse than the GBSO algorithm, better than the ADMBSO and GDBSO algorithms, and much better than the BSO algorithm. On the F15 benchmark function, although COGBSO outperforms the BSO algorithm, it is worse than GBSO, GDBSO, and ADMBSO; the global optimum of 0 is still found in some of the 30 runs, but only a limited number of times. On the other benchmark functions, COGBSO performs best. Compared with the other swarm intelligence algorithms, the performance of COGBSO is similar to the 10-dimensional case: it still dominates the SSA algorithm across the board, and compared with the MSCA and MSWOA algorithms it leads on eight test functions.

Third, for 30-dimensional problems, compared with the other improved brain storm optimization algorithms, the COGBSO algorithm again performs worse on the three benchmark functions F12, F13, and F15. In general, however, it still has the best optimization performance and handles the multi-modal functions better. Compared with the other swarm intelligence algorithms, COGBSO still leads on eight test functions. The results also confirm that no single swarm intelligence algorithm can solve all optimization problems perfectly; a suitable algorithm must be chosen according to the problem.

In the experiments on the CEC2018 benchmark test suite, the more demanding 30-dimensional setting is used to further test the optimization accuracy, and two new competitive algorithms, AOA and QANA, are added. The following conclusions can be drawn from the simulation results. The COGBSO algorithm has a significant advantage over the other BSO variants on CEC2018, leading in optimization accuracy on 22 functions and showing only a tiny gap to the other algorithms on the functions where it is inferior. Compared with the other competitive intelligent algorithms, COGBSO still has clear advantages: as can be seen from Table 10, COGBSO is weaker only than the QANA algorithm in overall performance, yet it still achieves better optimization results on one-third of the functions and is ahead of the other four algorithms across the board. These experimental results demonstrate the superiority of the COGBSO algorithm.

To better demonstrate the advantages of the COGBSO algorithm in convergence speed, the convergence curves of the different improved BSO algorithms on four benchmark functions from CEC2013 are shown in Figs. 5, 6, and 7, and the boxplots are shown in Fig. 8. Based on the optimization accuracy results of each algorithm, this paper also provides further evidence of the effectiveness of the COGBSO algorithm through the Friedman test; the results of the non-parametric tests are presented in Tables 11 and 12. For brevity, the convergence curves and Friedman test are only used to compare the COGBSO algorithm with the other improved BSO algorithms; the optimization-accuracy tables are sufficient to demonstrate its advantages over the other types of swarm intelligence algorithms.

Figure 5
figure 5

The convergence curves of four typical benchmark functions on 10-D problems.

Figure 6
figure 6

The convergence curves of four typical benchmark functions on 20-D problems.

Figure 7
figure 7

The convergence curves of four typical benchmark functions on 30-D problems.

Figure 8
figure 8

Boxplots between algorithms on F1, F3, F10, F11.

Table 11 Friedman’s test results for each algorithm.
Table 12 Wilcoxon statistical test results of COGBSO.

The figures show that COGBSO has better convergence speed and optimization accuracy than the other improved BSO algorithms. On the convergence curve of the F10 function, it can be observed that, with the assistance of the chaotic difference step and the opposition-based population strategy, the time the algorithm spends trapped in a local optimum is very short, and the probability of jumping out of the local optimum is significantly higher than for the other algorithms. This is because the COGBSO algorithm has a broader search space and a higher probability of generating an optimal solution, which makes it difficult for the algorithm to remain trapped in a local optimum for long. Due to space limitations, boxplots are given only for the 10-dimensional case. On the boxplots of the four functions, the COGBSO algorithm has the best minimum, maximum, and median. The Friedman test was used to obtain the average ranking of each algorithm on all test functions; in this nonparametric statistical test, the lower the ranking, the better the performance of the algorithm. Table 11 shows the test results: COGBSO ranks first (1.9333), while GDBSO ranks second (2.6). Moreover, the Wilcoxon statistical test was used to verify significance, and the results are summarized in Table 12. In comparison with COGBSO, the p-value of each algorithm is lower than 0.05, indicating that COGBSO has significant advantages over the other algorithms. In addition, the rank sums R+ and R- reflect the excellent performance of COGBSO. The results once again prove the validity of the COGBSO algorithm.

Discussion and experimental summaries

To fully understand the effect of the chaotic difference step and the opposition-based population strategy on escaping local optima, we adopt the idea of ablation experiments and simulate the algorithm in three additional cases: the COGBSO algorithm without either the chaotic difference step or the opposition-based population strategy (COGBSO-noCO), without the chaotic difference step only (COGBSO-noC), and without the opposition-based population strategy only (COGBSO-noO). The typical multi-modal benchmark function F10 (Rastrigin) from CEC2013 is selected for the comparison, on the one hand to keep the article concise and, on the other hand, because the local-optimum problem is more likely to appear on multi-modal functions, so this benchmark is sufficient to study the contribution of the two strategies. The COGBSO-noCO, COGBSO-noC, COGBSO-noO, and COGBSO algorithms are simulated in 10, 20, and 30 dimensions, and the convergence curves are shown in Fig. 9.

Figure 9
figure 9

The convergence curves of F10 function based on the idea of ablation experiments.

The convergence curves show the superiority of the chaotic difference step and the opposition-based population strategy in escaping local optima. First, the black line in the figure represents the COGBSO algorithm, which has the fastest convergence speed in 10, 20, and 30 dimensions; this proves the effectiveness of the chaotic difference step combined with the opposition-based population strategy. Second, the curves show that the COGBSO algorithm, equipped with both strategies, rarely stays trapped in a local optimum for long before reaching its final solution. In contrast, the COGBSO-noCO algorithm frequently stalls in a local optimum for long periods in all three dimensional settings. The COGBSO-noC algorithm also stalls in local optima at times, although the situation is slightly better than for COGBSO-noCO, and the COGBSO-noO algorithm improves significantly but is still not as effective as COGBSO. This proves that both the chaotic difference step strategy and the opposition-based population strategy improve the algorithm's ability to jump out of local optima, with the chaotic difference step strategy being the more effective of the two. Third, the COGBSO algorithm also achieves the best optimization accuracy, outperforming the other variants.

Furthermore, the COGBSO-noO algorithm attains higher optimization accuracy than the COGBSO-noC algorithm, and COGBSO-noC in turn outperforms COGBSO-noCO. This implies that both strategies bolster the algorithm's search accuracy, which is expected, as a stronger ability to escape local optima leads to better performance during iteration. The results show that the chaotic difference step and the opposition-based population strategy enhance the algorithm's ability to deal with multi-modal problems, significantly improving optimization accuracy and the probability of jumping out of local optima. Moreover, since the two strategies do not conflict, they can be applied to the improvement of the BSO algorithm at the same time.

Based on all the experimental results in Sect. "Simulation results and analysis" and this section, the COGBSO algorithm has the following advantages. First, its optimization accuracy has a significant advantage over the other BSO variants, as corroborated by Tables 3, 4, 5, and 9: the COGBSO algorithm achieves the best search results on most functions from the CEC2013 and CEC2018 benchmark test suites. The nonparametric test results in Tables 11 and 12 and the boxplots in Fig. 8 support this point. Second, its optimization accuracy also has a significant advantage over other recent competitive intelligent algorithms, as corroborated by Tables 6, 7, 8, and 10: the COGBSO algorithm's overall performance is weaker than that of the QANA algorithm, but it still gains an advantage on many functions and is significantly better than the other intelligent algorithms. Third, the COGBSO algorithm converges significantly faster than the other BSO variants, as corroborated by Figs. 5, 6, and 7. Fourth, the chaotic difference step strategy and the opposition-based population strategy proposed in this paper significantly improve the probability of the BSO algorithm jumping out of local optima, as corroborated by Fig. 9.

Conclusion

This paper proposes a global-best brain storm optimization algorithm based on the chaotic difference step and opposition-based learning (COGBSO). First, the discussion mechanism and the global-best strategy are incorporated into the BSO algorithm, improving its convergence speed and optimization accuracy. Second, chaos theory is introduced to design a chaotic difference step that expands the search space of the population, and the opposition-based population is introduced to increase the search density. Both strategies are designed to make it easier for the algorithm to escape local optima, especially on multi-modal problems. Third, COGBSO is compared and analyzed against the BSO, GBSO, ADMBSO, GDBSO, SSA, MSCA, MSWOA, AOA, and QANA algorithms.

The results show that for most benchmark functions in 10, 20, and 30 dimensions, the COGBSO algorithm performs best. The experiments demonstrate its superior performance compared with previous improved BSO algorithms and show that it achieves the goal of helping the algorithm jump out of local optima quickly. Building on one of the first algorithms inspired by human behavior, COGBSO demonstrates great potential in dealing with complex optimization problems. Meanwhile, the chaotic difference step strategy and the application of opposition-based learning theory in the COGBSO algorithm can motivate further novel strategies for the increasingly complex problems that keep arriving. In the future, COGBSO will be applied to high-dimensional, large-scale applications.

Future research directions include proposing an adaptive mechanism to adjust parameters, helping the algorithm intelligently choose when to use the chaotic difference step and opposition-based population strategy, reducing the space-time complexity of the algorithm, and applying the COGBSO algorithm to practical engineering optimization problems.