1 Introduction

Many human problem-solving processes are heuristic in nature [1,2,3]. However, it was only from the 1940s onward that heuristics were used as a scientific method in various applications [4]. The initial cornerstone in the domain of heuristics was the emergence of evolutionary algorithms, which greatly advanced both the theory and the practice of heuristics [5, 6]. As a form of population-based computational intelligence [7], evolutionary algorithms (EAs) [8,9,10,11] are applicable to optimization problems and exhibit immense potential in the field of optimization. A metaheuristic algorithm extends a heuristic algorithm by fusing stochastic search with local search [12, 13]. By definition, a metaheuristic algorithm is proposed relative to problem-specific optimization algorithms [14,15,16].

Recently, nature-inspired metaheuristic (NMH) algorithms [17, 18] have yielded promising results in combinatorial optimization [19,20,21]. Inspired by the foraging behavior of bee swarms, Pham et al. developed a novel bees algorithm [22,23,24]. The gravitational search algorithm (GSA) [25, 26] derives from the law of gravity and mass interactions. Particle swarm optimization (PSO) [27, 28] is an evolutionary computation technique that originated from the study of bird foraging behavior. The basic idea of PSO is to find the optimal solution through collaboration and information sharing among individuals in a population. Because the trajectories of the particles vary depending on the best locations found by individuals, the algorithm can effectively search for better solutions. Subsequently, Tian et al. integrated PSO with differential evolution (DE) [29,30,31] to solve real-world problems, achieving success in arranging fire trucks. Brain storm optimization (BSO) [32, 33], inspired by the human creative problem-solving process, uses the idea of clustering to search for local optima. An increasing number of nature-inspired metaheuristic algorithms are emerging and have been successfully applied to practical engineering problems [34, 35].

A metaheuristic embodies a process for finding, selecting, or generating a satisfactory solution. The general formulation is to choose a set of parameters (variables) that leads to the optimum of an objective while satisfying a set of relevant constraints [36]. The optimization problem can be expressed in the following general mathematical form:

$$\min \;f(X)\;{\text{or}}\;\max \;f(X),X = (x_{1} , \ldots ,x_{n} ),$$
(1)

subject to

$$\begin{aligned} {h_i(X)=0,\quad (i=1,2,\ldots ,p)}; \\ {g_j(X)\le 0,\quad (j=1,2,\ldots ,q)}. \end{aligned}$$
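The formulation above can be sketched as a penalized objective, where constraint violations are folded into f(X) so that a single function can be minimized. This is a minimal illustrative sketch; the penalty weight and all names are our own, not taken from any algorithm in this paper:

```python
def evaluate(f, x, eq_constraints=(), ineq_constraints=(), penalty=1e6):
    """Penalized objective: f(x) plus a large penalty for constraint violations.

    eq_constraints  : callables h_i, feasible when h_i(x) == 0
    ineq_constraints: callables g_j, feasible when g_j(x) <= 0
    """
    value = f(x)
    violation = sum(abs(h(x)) for h in eq_constraints)
    violation += sum(max(0.0, g(x)) for g in ineq_constraints)
    return value + penalty * violation

# Example: minimize x0^2 + x1^2 subject to x0 + x1 - 1 = 0
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0
print(evaluate(f, [0.5, 0.5], eq_constraints=[h]))  # feasible point: 0.5
```

A metaheuristic can then optimize `evaluate` directly without any special constraint handling.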

Because they are iterative stochastic methods, metaheuristic strategies do not guarantee that the global optimal solution will be found [37,38,39]. However, compared to the computational effort required by exhaustive or exact iterative algorithms, metaheuristic algorithms have the advantage of producing good solutions with less computational effort [40, 41]. The focus of a metaheuristic is on intensification: the search is reinforced around the most promising solutions to the problem [42, 43]. Metaheuristic algorithms are general-purpose because they make few assumptions about the problem to be solved [44,45,46].

With the continuous progress of computer science, the performance of new algorithms keeps improving, allowing ever more complex problems and demands to be addressed [47, 48]. However, no algorithm is perfect. Researchers therefore integrate different algorithms to explore more powerful algorithmic mechanisms [49, 50]. The leitmotif of hybrid algorithms is to study why algorithms behave differently in various respects, and then to combine the mechanisms of several algorithms so that their advantages complement one another [51]. Common approaches are to exchange information between two algorithms by running them in parallel or with crossover, or to split the iteration budget between them so that they run in a complementary fashion [52]. These hybrid algorithms share commonalities in aspects such as population, search mechanism, and parallel connection [53, 54].

There are numerous mainstream approaches to enhancing the performance of hybrid algorithms, such as incorporating enhancement strategies into the original algorithm, modifying parameters through self-adaptive mechanisms, or improving the update formula of the algorithm to better approximate the objective function [55]. For example, the PSO-BP algorithm [56] combines PSO with the back-propagation algorithm (BP) to train feed-forward neural networks. That hybrid assigns PSO to global search and BP to local search, and it has made considerable progress in terms of convergence and generalization ability. The AC-ABC algorithm [57] uses the artificial bee colony algorithm to determine the best subset of features and the ant colony algorithm to avoid stagnation. Through hybridization, it manages to avoid stagnating while searching for the optimum, and it performs well in improving classification accuracy and finding the best selection of features. However, existing hybrid algorithms may still fail to find the global optimum for complex optimization problems because of their incomplete mechanisms. An algorithm may even behave unstably, with performance that depends heavily on the specific problem [58]. Present-day metaheuristic algorithms tend to be more complex in structure and more cumbersome in mechanism; although the aspects taken into account are becoming more comprehensive, the algorithms still have different core flaws. It can be seen from the above examples, among many others, that there is much room for obtaining better algorithms by hybridizing several different methods and combining their inherent characteristics [59,60,61,62].

In terms of intelligent algorithms, cooperative coevolution (CC) has proven to be an effective search strategy. It was developed to further improve solution quality for large-scale optimization problems, and it is a common and valid way to enhance the effectiveness of an algorithm [63]. For an optimization problem, CC adopts a decomposition strategy that divides the problem into several groups of variables and optimizes them group by group. All groups then cooperate to complete the optimization of the whole problem [64]. Many examples have proved the feasibility of CC in promoting algorithm performance.

In this paper, a new algorithm called CCWFSSE is proposed, which combines two recently developed algorithms—wingsuit flying search (WFS) [65] and spherical evolution (SE) [66]—in a coevolutionary way. Both algorithms have their own characteristics, and we utilize these features to combine their structures and explore more efficient modes of operation. The search mechanism of SE can cover a comprehensive solution search space, helping to avoid entrapment in local optima. WFS can quickly identify the current best individual and use the Halton sequence to exploit several potential individuals. As SE focuses on powerful exploitation while WFS possesses comprehensive exploration capabilities, hybridizing the search dynamics of these algorithms can reach an equilibrium between exploration and exploitation, thereby resulting in better search efficiency [67]. Meanwhile, we add CC to the original algorithm with a judgment mechanism in order to improve the population diversity of CCWFSSE. When solving optimization problems, the fundamental aim is to find the global optimum within an acceptable time complexity, which is the basic principle guiding the design of intelligent algorithms. Therefore, we aim to achieve a balance between global search and local search by combining SE and WFS, while enhancing the convergence speed and optimization results of the algorithm [68]. A series of experimental analyses verifies that CCWFSSE has better search efficiency and robustness than comparable algorithms [69].

The main contributions of this work can be outlined as follows: (1) Inspired by the complementary characteristics of different algorithms, we propose a hybrid algorithm that incorporates SE, which focuses on local exploitation, into WFS, which possesses excellent global exploration capabilities. (2) We add the spherical search to the process of finding neighborhood points, while using cooperative coevolution in parallel to maintain population diversity, which results in good capability and robustness. (3) We conduct a variety of experiments to test the proposed algorithm, demonstrating that CCWFSSE performs well. This work provides further insight into how two or more different algorithms can be combined to build a better algorithm [70].

The rest of this paper is structured as follows: Sect. 2 and Sect. 3 introduce WFS and SE, respectively. Section 4 clarifies the specific ideas of the hybrid algorithm CCWFSSE. Section 5 gives the experimental results, which are analyzed and discussed in detail. Several aspects of the algorithm are discussed in Sect. 6. Finally, Sect. 7 presents the conclusion and an outlook on future work.

2 Brief Description of WFS

The WFS algorithm is derived from wingsuit flying, an extreme sport. The core idea of WFS is to obtain an approximate image of the entire solution space. When the algorithm (the flier) starts running (flying), it approaches the global minimum (the landing site) in the search space (the Earth's surface). The entire search space is probed by updating the population of points at each iteration. This process gradually shifts the focus to lower regions, just as a flier acquires a gradually clearer image of the ground.

In essence, the algorithm is a population-based search method that constantly updates the solution points. Below, we describe the operating mechanism of the algorithm in terms of four aspects.

Fig. 1
figure 1

Sketch of the \(N_0^2\) initial point arrangement

2.1 Population Initialization

In the initial search space, we define N as the population size and \(N^*=N-2\) as the number of initial points. As shown in Fig. 1, the initial points \(\varvec{x} =[x_1,x_2,...x_n]^T\) are placed in a box-shaped search space, distributed uniformly on an n-dimensional grid. \(N_{0}\) is the number of nodes per dimension. The discrete spacing between adjacent points, \(\delta \varvec{x}\), is defined as:

$$\delta {\varvec{x}} = \frac{{{\varvec{x}}_{{{\text{max}}}} - {\varvec{x}}_{{{\text{min}}}} }}{{N_{0} }}$$
(2)

The remaining points are determined from \(x_1\) to constitute a full grid. Then the point at which the cost function evaluates lowest is chosen. This step is intended to achieve a uniform distribution of the candidate population over the whole solution space.
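The grid construction above can be sketched as follows; the spacing follows Eq. (2), while the function names and the small 2-D example are illustrative assumptions:

```python
import itertools

def init_grid(x_min, x_max, n_nodes):
    """Uniform n-dimensional grid of candidate points: n_nodes values per
    dimension, spaced by delta = (x_max - x_min) / n_nodes as in Eq. (2)."""
    delta = [(hi - lo) / n_nodes for lo, hi in zip(x_min, x_max)]
    axes = [[lo + k * d for k in range(n_nodes)] for lo, d in zip(x_min, delta)]
    return [list(p) for p in itertools.product(*axes)]

def best_point(points, cost):
    """The grid point at which the cost function evaluates lowest."""
    return min(points, key=cost)

pts = init_grid([-1.0, -1.0], [1.0, 1.0], 4)     # 4^2 = 16 grid points
sphere = lambda x: sum(v * v for v in x)
print(len(pts), best_point(pts, sphere))
```

In the real algorithm the grid is anchored at a randomly placed first point; here the anchor is simply the lower bound for clarity.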

2.2 Identify the Neighborhood Size

After the first iteration, the points selected from the previous iteration are sorted in ascending order with respect to their cost values. The best-ranked point is assigned the largest neighborhood, with \(P_{max}\) points; the second best gets a smaller neighborhood, and so on down the list. The neighborhood size P(i) as a function of rank i is:

$$P(i) = \left\lceil {P_{{{\text{max}}}} \left( {1 - \frac{{i - 1}}{{N - 1}}} \right)} \right\rceil$$
(3)

Assigning neighborhood spaces of different sizes to points with different cost values is the core mechanism by which the algorithm mimics a descending flier.
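Eq. (3) can be checked with a few lines of Python; the example values of N and \(P_{max}\) are arbitrary:

```python
import math

def neighborhood_size(i, N, P_max):
    """Eq. (3): the rank-1 point gets P_max neighbors; sizes shrink linearly
    with rank i, rounded up."""
    return math.ceil(P_max * (1 - (i - 1) / (N - 1)))

sizes = [neighborhood_size(i, N=5, P_max=8) for i in range(1, 6)]
print(sizes)  # [8, 6, 4, 2, 0]
```

Note that the worst-ranked point receives no neighborhood at all, which concentrates the sampling budget on the most promising points.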

2.3 Call Neighborhood Points into Being

With the previous step done, we turn our attention to generating new valid points. Figure 2 shows a demonstration: the dots filled in blue represent neighborhood points, and the white dots are locations where they might be placed. For each point \(\varvec{x} _i^{(m)}\), a vector \(\varvec{v} _i^{(m)}\) oriented toward the current solution is created. The coordinates of each neighborhood point are generated according to Eq. (4):

$$\begin{aligned} S_{1}(x_{i}^{(m)})&=\lbrace x_{i}^{(m)}-\delta x^{(m)},\, x_{i}^{(m)}\rbrace , \quad {\text{if}}\ v_{i}^{(m)}<0; \\ S_{2}(x_{i}^{(m)})&=\lbrace x_{i}^{(m)},\, x_{i}^{(m)}+\delta x^{(m)}\rbrace , \quad {\text{if}}\ v_{i}^{(m)}>0; \\ S_{3}(x_{i}^{(m)})&=S_{1}(x_{i}^{(m)})\cup S_{2}(x_{i}^{(m)}), \quad {\text{if}}\ v_{i}^{(m)}=0. \end{aligned}$$
(4)

The probability of finding the global optimum is further improved by generating new valid points in the vicinity of the optimal solution selected in the previous generation.
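Under the notation of Eq. (4), the per-coordinate candidate set might be sketched as follows; the function name and example values are illustrative:

```python
def neighborhood_interval(x, dx, v):
    """Eq. (4): per-coordinate candidate set, directed by the sign of v."""
    if v < 0:
        return {x - dx, x}          # S1: step toward smaller values
    if v > 0:
        return {x, x + dx}          # S2: step toward larger values
    return {x - dx, x, x + dx}      # S3 = S1 ∪ S2 (non-strictly directed)

print(sorted(neighborhood_interval(2.0, 0.5, v=-1)))  # [1.5, 2.0]
print(sorted(neighborhood_interval(2.0, 0.5, v=0)))   # [1.5, 2.0, 2.5]
```

A full neighborhood point is then assembled by picking one candidate per coordinate.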

Fig. 2
figure 2

Sketch of the \(N^{(m)}\) points and their neighborhoods

2.4 Generating Centroid and Random Points

A flier often prefers to land near the middle of the ground surface. Similarly, we compute a centroid to help locate the current search position and perform spatial localization effectively. Further, we choose a point at random within the bounds of the m-th iteration and add it to the search space as a random point. These steps effectively enhance the exploration ability of WFS.
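A minimal sketch of these two operations, with illustrative names (the exact sampling rule for the random point in WFS may differ):

```python
import random

def centroid(points):
    """Coordinate-wise mean of the current population of points."""
    n = len(points)
    dim = len(points[0])
    return [sum(p[d] for p in points) / n for d in range(dim)]

def random_point(x_min, x_max, rng=random):
    """A uniformly random point inside the current search box."""
    return [rng.uniform(lo, hi) for lo, hi in zip(x_min, x_max)]

print(centroid([[0, 0], [2, 0], [1, 3]]))  # [1.0, 1.0]
```

Both points are simply appended to the candidate population before the next ranking step.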

3 Spherical Evolution

SE is a recently developed NMH algorithm: a mathematical model built on search operators inspired by traditional search patterns. The authors of SE studied the search patterns of a large number of traditional algorithms and found that most of the operators they use can be understood as a hypercube search style, which motivated the development of spherical evolution. The SE operating mechanism can be briefly explained with three vectors—\({X}_{1}, {X}_{2}\), and \({X}_{3}\)—selected from the population, with \({X}_{1}\) as the initial solution. By searching a circular area with radius \(|{X}_{2}{X}_{3}|\), we obtain a new vector \({X}_{\text{new}}\) to replace the old solution \({X}_{\text{old}}\). In 2D space, the radius and angle are constantly adjusted to create update units so that the whole search area can be covered. This mechanism can be simply represented as:

$$X_{i,d}^{new} = X_{\gamma ,d} + \sum\limits_{k = 1}^{n} S\left( X_{\alpha ,d}^{k} ,X_{\beta ,d}^{k} \right)$$
(5)

where d is the dimension index, \(X_{i, d}^{\text{ new }}\) denotes the new i-th solution, \(X_{\alpha }, X_{\beta }\), and \(X_{\gamma }\) represent three solutions picked according to a specific strategy, and \(S\left( X_{\alpha ,d}^k,X_{\beta ,d}^k\right)\) stands for the update unit.


SE realizes the operation of the search operator in all dimensions. As the number of dimensions increases, the spherical search operates on the Euclidean distance, i.e., the true distance between two points in m-dimensional space. In the one-dimensional, two-dimensional, and high-dimensional cases, the search schema of SE is given by Eqs. (6), (7), and (8), respectively:

$$SS_{1} \left( {X_{{\alpha ,d}} ,X_{{\beta ,d}} } \right) = SF() \cdot \left| {X_{{\alpha ,d}} - X_{{\beta ,d}} } \right| \cdot \cos \left( \theta \right)$$
(6)
$$\begin{gathered} SS_{2} \left( X_{\alpha ,d} ,X_{\beta ,d} \right) = SF() \cdot \left| X_{\alpha ,*} - X_{\beta ,*} \right| \cdot \sin \left( \theta \right),\quad d = 1 \hfill \\ SS_{2} \left( X_{\alpha ,d} ,X_{\beta ,d} \right) = SF() \cdot \left| X_{\alpha ,*} - X_{\beta ,*} \right| \cdot \cos (\theta ),\quad d = 2 \hfill \\ \end{gathered}$$
(7)
$$\begin{gathered} SS_{ \ge 3} \left( X_{\alpha ,d} ,X_{\beta ,d} \right) = SF() \cdot ||X_{\alpha ,*} - X_{\beta ,*} ||_{2} \cdot \prod\limits_{k = d}^{dim - 1} \sin (\theta _{k} ),\quad d = 1 \hfill \\ SS_{ \ge 3} \left( X_{\alpha ,d} ,X_{\beta ,d} \right) = SF() \cdot ||X_{\alpha ,*} - X_{\beta ,*} ||_{2} \cdot \cos (\theta _{d - 1} ) \cdot \prod\limits_{k = d}^{dim - 1} \sin (\theta _{k} ),\quad 1 < d \le dim - 1 \hfill \\ SS_{ \ge 3} \left( X_{\alpha ,d} ,X_{\beta ,d} \right) = SF() \cdot ||X_{\alpha ,*} - X_{\beta ,*} ||_{2} \cdot \cos (\theta _{d - 1} ),\quad d = dim \hfill \\ \end{gathered}$$
(8)

where \(\left| X_{\alpha , d}-X_{\beta , d}\right|\) stands for the absolute distance between the two solutions in one dimension, \(\left\| X_{\alpha , *}-X_{\beta , *}\right\| _{2}\) represents their Euclidean distance in the high-dimensional case, and \(\theta\) denotes a random angle between \(X_{\alpha , *}\) and \(X_{\beta , *}\).
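A 2-D sketch of this update, combining Eq. (5) with the schema of Eq. (7): the new point is \(X_{\gamma}\) shifted along a random angle by a scaled radius. Treating SF() as a uniform draw in (0, 1] is our assumption here:

```python
import math
import random

def spherical_update_2d(x_gamma, x_alpha, x_beta, rng=random):
    """2-D spherical search unit: shift x_gamma by a radius |x_alpha - x_beta|,
    scaled by SF() and decomposed into sin/cos components of a random angle."""
    radius = math.hypot(x_alpha[0] - x_beta[0], x_alpha[1] - x_beta[1])
    theta = rng.uniform(0.0, 2.0 * math.pi)   # random angle
    sf = rng.uniform(0.0, 1.0)                # scaling factor SF()
    return [x_gamma[0] + sf * radius * math.sin(theta),
            x_gamma[1] + sf * radius * math.cos(theta)]

random.seed(0)
print(spherical_update_2d([0.0, 0.0], [1.0, 0.0], [0.0, 0.0]))
```

By construction, every generated point lies inside the disc of radius \(|X_{\alpha}X_{\beta}|\) around \(X_{\gamma}\), which is exactly the circular coverage described above.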

The mechanism of SE is simple and the search range is large. Therefore, it is an effective algorithm that can be applied to a variety of problems [71].

4 Spherical Mechanism-Driven Cooperative Coevolution Wingsuit Flying Search

4.1 Motivation

By studying the mechanisms of metaheuristic algorithms, we find that most of them share two characteristics. The first concerns search patterns: a simple and efficient search pattern can help each individual in the population find a better solution. The second is the way individuals are selected; several methods have been proposed, such as stochastic universal sampling, tournament selection, and roulette-wheel selection [4].

The search pattern has always been a core problem of algorithm design and has a very important impact on performance. The characteristics of different search patterns cannot be captured by a single template. For example, in grey wolf optimization (GWO) [72], grey wolf i updates its position through differential perturbation between itself and the three best wolves. In DE, an initial point j has its position updated using randomly selected individuals. In PSO, a particle k updates its position with a velocity term composed of two update units. In fact, many metaheuristic algorithms implement their search patterns through perturbation differences between individuals or binary crossover.

To conclude, the search behavior of an algorithm usually plays a crucial role in its operation. WFS uses a population-based search pattern that updates candidate solution points, while SE is a simple and efficient heuristic approach with greater potential to search the whole promising solution space. Since WFS and SE have their own operating characteristics, this inspires us to incorporate the spherical search scheme into the design of the algorithm. Next, we explain specifically how to combine WFS and SE through coevolution so that their advantages complement each other.

4.2 Issues of WFS

WFS can independently optimize different regions of the whole solution space. However, in the iterative process, several candidate solutions are lost because of the search mechanism. By contrast, SE possesses excellent search capability, while its problem-solving ability and convergence speed are unsatisfactory. Therefore, we attempt to incorporate SE into the procedure by which WFS explores the local space, hoping to achieve an effective combination of both.

From Eq. (4), we can determine the size of the largest neighborhood. Depending on the value of the vector \(\varvec{v} _i^{(m)}\), there are two types of neighborhoods: one strictly directed and one non-strictly directed. Thus, when choosing neighborhood points, there are also two choices: points are selected from the directed neighborhood, or some remaining points are selected from the non-strictly directed neighborhood.

When generating neighborhood points, WFS uses the Halton sequence and a rectangular search to determine the location of the solution. This process inevitably misses opportunities to obtain the best solution. To enhance the exploration capacity of the algorithm, a random point is selected by a specific strategy and added to the solution space. Therefore, to improve the probability of obtaining the best solution, we improve the original method of generating neighborhood points, and SE is chosen for this purpose because of its powerful search capability.

4.3 Spherical Search Scheme

Most of the existing mainstream algorithms adopt a hypercube search scheme. As shown in Fig. 3, the hypercube is represented as a rectangular space in 2D. By adjusting the radius \(|B_{i}D|\) or \(|B_{i}A|\), the rectangular search space can be covered. The black line with the arrow (\(B_{i}C_{i}\)) represents the updating search trajectory of DE when the crossover rate (CR) is 1 in 2D space. The blue line indicates the updating search trajectory of DE when only one dimension is chosen, and the red line the updating search trajectory of PSO when the dimensional scale values are different.

Fig. 3
figure 3

Hypercube search in 2D space

Fig. 4
figure 4

Spherical search in 2D space

It is well known that, for a given perimeter, a circle encloses the greatest area. Inspired by this, spherical evolution was invented. In contrast to the hypercube, the spherical search pattern covers the solution space by continually adjusting a vector. To illustrate the mechanism of SE, an example in 2D is shown in Fig. 4, where the dashed lines with arrows \(OA_{i}\) and \(OB_{i}\) represent two different solution vectors. The entire region occupied by the circle is regarded as the complete solution space. By rotating the angle \(\alpha\) and adjusting the radius \(|A_{i}B_{i}|\), an updated point \(B'_{i}\) is generated. Evidently, by rotating the angle \(\alpha\) and varying the radius from 0 to \(|A_{i}B_{i}|\), we can search the entire area of the circle. Comparing Figs. 3 and 4, the spherical search mechanism covers the solution area more fully than the hypercube does; thus, the spherical mechanism possesses better exploration capability. The search mechanism of SE can therefore address the shortcomings of WFS regarding insufficient use of the search space and can also increase population diversity.

4.4 Cooperative Coevolution

As the complexity of systems increases, the concept of modularity can be introduced into the solution process. CC solves complex optimization problems in the form of co-adapted subcomponents: complex problems are decomposed into subproblems, which are solved by evolving subpopulations. Individual evaluation depends on cooperation between subpopulations, and a complete solution is obtained by combining representative individuals of each subpopulation [73].

The original cooperative coevolution can be briefly outlined as follows: (1) problem decomposition, in which the n-dimensional solution vector is split into multiple low-dimensional subparts; (2) random initialization of the subpopulations; (3) a cycle that generates progeny subpopulations and assesses individual fitness values; (4) updating the solution vector and selecting the next-generation subpopulations.

The loop includes the complete evolution of all subcomponents. When applying CC, the epistatic interaction between variables is an important aspect to consider, as it may have a negative impact on the convergence rate. Compared with conventional evolutionary algorithms, a salient advantage of CC is the good diversity and maneuverability afforded by using several subcomponents.
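The outline above might be sketched as follows. This is a toy version under our own assumptions: the subsolver is a random Gaussian perturbation rather than a real evolutionary subpopulation, and all names and parameters are illustrative:

```python
import random

def cc_optimize(cost, dim, group_size, iters=200, rng=random):
    """Minimal cooperative-coevolution sketch: split the variables into
    groups and improve each group in turn while the rest stay fixed."""
    x = [rng.uniform(-5, 5) for _ in range(dim)]          # shared context vector
    groups = [list(range(i, min(i + group_size, dim)))
              for i in range((0), dim, group_size)]       # problem decomposition
    best = cost(x)
    for _ in range(iters):
        for g in groups:                                  # optimize one subcomponent
            trial = x[:]
            for d in g:
                trial[d] = x[d] + rng.gauss(0.0, 0.5)
            if (c := cost(trial)) < best:                 # cooperate via shared context
                x, best = trial, c
    return x, best

random.seed(1)
sphere = lambda v: sum(t * t for t in v)
x, best = cc_optimize(sphere, dim=6, group_size=2)
print(round(best, 3))
```

Each group is evaluated in the context of the current values of all other groups, which is the cooperation step that distinguishes CC from independent per-variable search.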

4.5 Spherical Mechanism-Driven Cooperative Coevolution Wingsuit Flying Search

Considering the different characteristics of the algorithms, we propose the new hybrid algorithm CCWFSSE. The differences between the mechanisms of WFS and SE result in differences between their performance. In WFS, each iteration of the population shifts the focus to lower regions; the main feature of this algorithm is its emphasis on the most promising points. That is to say, exploitation of the solution space receives more attention in this process, while exploration is a relatively lower priority. In contrast, in SE the search is more directional than in other metaheuristic algorithms because the population is updated based on the prior best (pbest) and global best (gbest) positions. In summary, WFS and SE have their own advantages and disadvantages. Our objective is to unite the strengths of both algorithms to improve overall capability: for example, introducing the SE mechanism to enhance the global search capability of WFS while avoiding a reduction in search speed. Simultaneously adding CC to the search process can increase the potential to search the entire space and improve population diversity.

WFS uses Halton sequences to generate random points, which is a standard hypercube mechanism. Into this process we introduce the phasor difference and the spherical search pattern. Based on two randomly selected solution vectors \({X} _ {1}\) and \({X} _ {2}\), an updated vector \({X} _ {{new}}\) is produced and added to the population. Owing to its ability to search a comprehensive solution space, SE uses \(\delta\) for local space exploitation, which entails a risk of neighborhood points entering a local optimum. However, the spherical mechanism promotes the exploration of the solution space in addition to balancing exploration and exploitation.

figure a

Therefore, the combination of parallelized computation and rapid convergence enables the proposed algorithm to achieve good performance in both exploration and exploitation. This can be considered a distinctive property of the algorithm because CCWFSSE is essentially parameter-free. The general steps of CCWFSSE are as follows:

(1) The algorithm generates an initial population based on a certain strategy.

(2) The flier threshold is set, and a candidate solution is selected via ranking.

(3) Neighborhood points are generated for each selected point using the phasor difference and the spherical search pattern and are added to the solution space. In addition to these two strategies, the cooperative coevolution operator is more likely to be adopted than SE when the iteration number is relatively high.

(4) The eligible points are added to the population.

(5) Steps (2)–(4) are repeated until the stopping condition is reached.
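The steps above can be condensed into an illustrative loop. This is not the authors' pseudocode (Algorithm 1): the Halton sequence, the centroid point, and the exact switching rule are omitted or simplified, and all parameter choices are assumptions:

```python
import random

def ccwfsse_sketch(cost, x_min, x_max, pop=20, iters=100, rng=random):
    """Sketch of the step sequence: rank the population, spawn neighbors for
    the best points via a spherical-style step early on, and switch to a
    grouped (CC-like) single-dimension step in later iterations."""
    dim = len(x_min)
    rand_pt = lambda: [rng.uniform(lo, hi) for lo, hi in zip(x_min, x_max)]
    X = [rand_pt() for _ in range(pop)]                   # step (1): initialize
    for m in range(iters):
        X.sort(key=cost)                                  # step (2): rank
        late = m > iters // 2
        children = []
        for x in X[:pop // 2]:                            # step (3): neighbors
            a, b = rng.sample(X, 2)
            radius = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
            child = x[:]
            dims = [rng.randrange(dim)] if late else range(dim)  # CC-like late
            for d in dims:
                child[d] += rng.uniform(-1, 1) * radius
            children.append(child)
        X = sorted(X + children, key=cost)[:pop]          # step (4): keep eligible
    return X[0], cost(X[0])                               # step (5): loop done

random.seed(2)
best, val = ccwfsse_sketch(lambda v: sum(t * t for t in v), [-5] * 4, [5] * 4)
print(round(val, 4))
```

Because the perturbation radius is the distance between two population members, the step size shrinks automatically as the population converges, which mirrors the self-adjusting behavior described above.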

Fig. 5
figure 5

The flowchart of CCWFSSE

Figure 5 shows the operating flowchart of the algorithm in detail. The CCWFSSE pseudocode is shown in Algorithm 1. Lines 11–16 represent the process of sorting X and selecting the appropriate point. Subsequently, we generate the neighborhood points: as shown in lines 18–34, we use the vector difference to select a random point when the iteration number is relatively small, whereas when the iteration number is relatively high we tend to use cooperative coevolution to select the point. That point and the centroid point are then added to the population.

5 Experimental Results

We expect the improved algorithm to be significantly competitive. Therefore, the performance of the proposed CCWFSSE algorithm was verified on the IEEE CEC2017 suite. In this section, first, the CEC2017 benchmark functions are introduced. Second, a comparison between CCWFSSE and other metaheuristic algorithms is presented, and a series of experimental results in terms of the mean error with respect to the known optimal values [74,75,76] and statistical tests are analyzed. Third, the performance of CCWFSSE on some real-world problems is described and analyzed.

5.1 Benchmark Functions

IEEE CEC2017 is recognized as a valid test suite for evaluating algorithm performance. The benchmark comprises 30 different problems. The F2 function was excluded in this study because of its unstable behavior in high dimensions. Therefore, 29 functions were used, including unimodal functions (F1 and F3), simple multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30).

5.2 Experiment Setup

The parameters were set as follows: the population size N was 30; the dimension of the functions was 30, so MaxFEs was \(10^{4}\,{\times}\,30\). All algorithms were run on a PC with an Intel(R) Core(TM) i5-7400 CPU at 3.00 GHz and 16 GB of RAM. Each algorithm was run 51 times independently on each function to obtain accurate results.

5.3 Performance Comparison

We performed three different evaluations to assess the performance of CCWFSSE. The first was the Wilcoxon rank-sum test, which was conducted to compare the performance of CCWFSSE with that of WFS, PSO, DE, and the butterfly optimization algorithm (BOA) [77]. BOA is a nature-inspired algorithm based on the foraging behavior of butterflies and has high convergence accuracy. The rank-sum test determines whether there is a significant difference between two groups of data with unknown distributions and possibly different sample sizes. The parameter settings are shown in Table 1.
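As a reminder of what the rank-sum test computes, here is a pure-Python sketch using the normal approximation; in practice one would call scipy.stats.ranksums. The sample data are invented for illustration:

```python
import math
from itertools import chain

def rank_sum_test(a, b):
    """Wilcoxon rank-sum (Mann-Whitney) via the normal approximation:
    returns (z, two-sided p)."""
    data = sorted(chain(((v, 0) for v in a), ((v, 1) for v in b)))
    ranks = {}
    i = 0
    while i < len(data):                      # assign average ranks over ties
        j = i
        while j < len(data) and data[j][0] == data[i][0]:
            j += 1
        r = (i + 1 + j) / 2.0
        for k in range(i, j):
            ranks[k] = r
        i = j
    n1, n2 = len(a), len(b)
    w = sum(ranks[k] for k, (_, grp) in enumerate(data) if grp == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p-value
    return z, p

z, p = rank_sum_test([1.2, 0.9, 1.1, 1.3], [2.1, 2.4, 1.9, 2.2])
print(round(p, 4))
```

A small p (e.g. below 0.05) indicates a significant difference between the two result samples, which is the criterion behind the "+", "≈", and "−" marks in the tables.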

Table 1 Parameter settings of the heuristic algorithms
Table 2 Experimental results of CCWFSSE and other algorithms on CEC2017 benchmark functions
Table 3 The experimental results of CCWFSSE and other algorithms on real-world problems

Table 2 reveals the experimental results. The best result for each tested problem among all compared algorithms is highlighted in bold. The signs "\(+\)", "\(\approx\)", and "−" indicate that the target algorithm is better than, similar to, or worse than its competitor, respectively, and the notation "w/t/l" denotes the number of times CCWFSSE performs better than, the same as, or worse than the other algorithms. The wins, ties, and losses of CCWFSSE are 13/13/3, 29/0/0, 29/0/0, and 29/0/0 against WFS, PSO, DE, and BOA, respectively. There is no significant difference between CCWFSSE and the original WFS on the unimodal functions, because these functions contain only a global optimum. However, for multimodal functions, where local optima may cause WFS to stall prematurely, the improved CCWFSSE performs significantly better than WFS. This shows that our approach is effective in avoiding entrapment in local optima. From the data, CCWFSSE performs particularly well on separable benchmark functions as well as on multimodal and non-separable benchmark functions.

The second evaluation uses convergence curves, which display the convergence rates of the different algorithms by tracking the current optimal solution. Figure 6 shows six typical functions: F5, F7, F9, F16, F20, and F28. The X-axis represents the number of evaluations, and the Y-axis the average fitness obtained up to a given time. The results show that CCWFSSE converges clearly in the initial phase, while the other four algorithms converge considerably more slowly, and its curve then flattens earlier than theirs. It is worth noting that a common concern is whether the solution provided by an NMH algorithm is valid for the problem at hand; this validity is partly reflected in the final converged result. The fact that CCWFSSE converges efficiently supports the validity of the algorithm. Thus, CCWFSSE can quickly converge to the desired target and effectively escape local optima.

Fig. 6
figure 6

The convergence graphs of CCWFSSE and other algorithms on F5, F7, F9, F16, F20, and F28

The last evaluation is the box-and-whisker plot, which reflects the distribution and dispersion of the data. Figure 7 presents box-and-whisker plots for F5, F7, F9, F16, F25, and F28. A shorter distance between the limit values indicates a smaller deviation and a more stable search performance. The red crosses are outliers, which allow the causes of anomalies to be analyzed and eliminated. The top and bottom black lines represent the boundary fitness values, and the middle line indicates the median of the data. Figure 7 reveals that almost all box-plot indices of CCWFSSE are lower than those of the other algorithms, which shows that CCWFSSE is able to find better solutions. With respect to the simple multimodal functions F5 and F9 and the composition function F28, CCWFSSE has better stability than the other algorithms.

Fig. 7

The box-and-whisker diagrams on F5, F7, F9, F16, F20 and F28

Based on these experiments, we conclude that CCWFSSE is effective for numerical function optimization. Overall, CCWFSSE is a considerable improvement over the original algorithm in terms of convergence speed, mean error, and stability.

5.4 Real-World Problems

To assess the performance of CCWFSSE more thoroughly, the 22 IEEE CEC2011 real-world problems are selected as a test set; whether an algorithm can adapt to diverse, realistically complex problems is also an important metric. Table 3 shows the mean, standard deviation, and final result of this test. We compare CCWFSSE with a control group of WFS, PSO, DE, and GWO. The results show that CCWFSSE outperforms the other algorithms.

In particular, satisfactory results were achieved for F1, F5, F7, F10, and F13. F1 is a six-dimensional optimization problem that tunes the parameters of frequency-modulated (FM) sound waves, a key problem in sound synthesis. F5 involves evaluating the atomic potential of covalent systems, especially silicon covalent systems, and has recently attracted considerable research interest. F7 concerns radar pulse design: when designing a radar system, choosing the proper waveform is of paramount importance. F10 is a spherical antenna array problem with applications in sonar, sensing, communication, and other fields. F13 is a spacecraft trajectory optimization problem that seeks the best safe path for the vehicle. Based on these findings, CCWFSSE can cope with complex real-world problems to a considerable extent.

6 Discussion

6.1 Exploitation and Exploration

During the operation of an algorithm, the different stages can be understood abstractly as a dynamic development process, whose two main aspects are exploitation and exploration. Exploitation refers to probing a specific local area in which a suitable solution has been observed or is predicted, while exploration refers to generating diverse solutions on a global scale [78]. These two aspects explain the behavior pattern of the algorithm in different stages and regions. When searching for an optimal solution, local exploitation can speed up convergence to the optimum, while diverse exploration can prevent the algorithm from falling into a local optimum [79]. An appropriate balance between the two ensures that a global optimum is reached while accelerating the convergence rate.

Fig. 8

Search history of individuals of CCWFSSE

To assess the exploration and exploitation behavior of CCWFSSE, typical graphs of the population distribution on three types of two-dimensional functions were plotted, as shown in Fig. 8. These functions include a unimodal function (F1), a multimodal function (F8), and a composition function (F27), where the population size is 30,000, the maximum number of iterations is 10, and the search range of each dimension is [-100, 100]. The variable t denotes the current iteration number. The blue point represents the global optimum, and the red points denote the current location of each individual. The distribution of individuals in the initial iteration (\(t = 1\)) is uniform: these individuals are generated from the Halton sequence and enter the WFS mechanism to find relatively suitable individuals, so the algorithm begins with an exploration process. The first clear turning point occurs at t = 4, when the individuals converge to the best sub-region and eventually reach a stagnant state. Subsequently, the spherical search mechanism searches the neighborhoods of the stagnant crowd, so the number of available individuals increases; this constitutes a suitable exploitation process. Finally, the individuals converge to the global optimal region. CCWFSSE compensates for the one-sidedness of other algorithms by controlling the switching between these two search mechanisms.
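The uniform initial distribution described above comes from the Halton sequence, a low-discrepancy quasi-random sequence. As a minimal sketch (the choice of bases 2 and 3 and the [-100, 100] scaling follow the experimental setup; the helper names are ours, not the paper's), one Halton point per individual index can be generated as follows:

```python
def radical_inverse(index, base):
    """Van der Corput radical inverse of `index` in the given base."""
    fraction, result = 1.0, 0.0
    while index > 0:
        fraction /= base
        result += fraction * (index % base)
        index //= base
    return result


def halton_point(index, bases=(2, 3), low=-100.0, high=100.0):
    """One Halton point (index >= 1), scaled to the search range [low, high]."""
    return [low + (high - low) * radical_inverse(index, b) for b in bases]
```

Generating `halton_point(i)` for i = 1..N yields points that cover the square far more evenly than pseudo-random sampling, which is why such sequences are favored for population initialization.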

6.2 Conceptual Comparison Between CCWFSSE and Other Heuristic Algorithms

Currently, most mainstream metaheuristic algorithms use a pattern matrix, which contains the solutions considered within a single iteration and across multiple global iterations [80]. The optimal solution can be rapidly improved through iteration, and the rate of improvement is called the convergence speed. In genetic algorithms, for example, a chromosome corresponds to a row of the pattern matrix; in PSO, the matrix is the whole swarm, with each pattern corresponding to a particle. Several heuristic algorithms improve the pattern matrix using basic genetic rules and operators. Nevertheless, the variety in the properties of these algorithms decreases as iteration progresses, and the main difficulty lies in generating new forms or properties for the pattern matrix [81].

In CCWFSSE, the pattern matrix includes all the points that the vehicle can explore in one iteration. CCWFSSE eschews the traditional crossover and mutation rules and instead generates new solutions by finding new neighborhoods. Through a fast per-generation screening mechanism, CCWFSSE can theoretically find the global optimum of the objective function in exponential time. To sustain diversity, the Halton sequence and grid logic are used during initialization. In addition to the pseudo-random points in the specified neighborhood, random points and the center of mass are adopted at each iteration to promote diversity.
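To illustrate the idea of generating new candidates from a neighborhood rather than from crossover, the sketch below samples a point uniformly inside a hypersphere around a stagnant individual. This is only an illustrative neighborhood operator under our own assumptions (isotropic Gaussian direction, d-th-root radius trick); it is not the paper's exact spherical search rule:

```python
import math
import random


def sample_sphere_neighbor(center, radius, rng=random):
    """Sample one point uniformly inside a hypersphere around `center`.

    A Gaussian vector gives a uniformly random direction; raising a
    uniform variate to the power 1/d makes the distance uniform in
    volume rather than clustered near the center.
    """
    d = len(center)
    direction = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in direction))
    dist = radius * rng.random() ** (1.0 / d)
    return [c + dist * x / norm for c, x in zip(center, direction)]
```

Repeatedly calling this on the best stagnant individuals, then screening the samples by fitness, reproduces the "search the neighbors of the stagnant crowd" step in miniature.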

6.3 Population Diversity Analysis

Population diversity is another important metric in algorithm evaluation: it indicates whether an algorithm is prone to premature stagnation, i.e., falling into a local optimum [82]. This indicator, Div, is calculated using Eq. (9):

$$\begin{aligned} Div=\frac{1}{N} \sqrt{\sum _{i=1}^{N}\left( \left\| X_{i}-X_{\text{ mean }}\right\| \right) ^{2}} \end{aligned}$$
(9)

where N represents the size of population. \(X_{i}\) is the i-th individual. \(X_{\text{ mean }}\), which is defined as \(X_{\text{ mean }}=\) \(\frac{1}{N} \sum _{i=1}^{N} X_{i}\), is the average of the population.
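Eq. (9) translates directly into code. The following sketch computes Div for a population stored as a list of coordinate lists (the function name is ours):

```python
import math


def population_diversity(population):
    """Div = (1/N) * sqrt( sum_i ||X_i - X_mean||^2 ), as in Eq. (9)."""
    n = len(population)
    dims = len(population[0])
    # X_mean: component-wise average of the population.
    mean = [sum(ind[j] for ind in population) / n for j in range(dims)]
    # Sum of squared Euclidean distances from each individual to the mean.
    total = sum(
        sum((ind[j] - mean[j]) ** 2 for j in range(dims))
        for ind in population
    )
    return math.sqrt(total) / n
```

A population collapsed onto a single point yields Div = 0, while a widely spread population yields a large Div, which is exactly the premature-stagnation signal discussed above.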

Fig. 9

Population diversity on F4, F9, F13, F14, F17, and F28

However, owing to the particularity of the CCWFSSE mechanism, in which relatively suitable candidate solutions are quickly selected from the initial population of 30,000 and neighborhood points are then generated from the Halton sequence, it would be unfair to compare its population diversity with that of other algorithms under common settings. Instead, Fig. 9 compares the diversity trends of CCWFSSE and WFS on six representative functions. CCWFSSE maintains a high level of diversity, especially in the middle iteration stage, and as the iterative process continues its diversity remains stable at a high level. Throughout the optimization process, SE's vector difference and cooperative coevolution provide each individual with a diffusive driving force, expanding the search space. This design affords individuals more opportunities and ensures a higher level of population diversity throughout the search phase.

6.4 Time Complexity

Having demonstrated the effectiveness of CCWFSSE on the benchmark functions, we conclude by analyzing the algorithm's computational complexity. The exact execution time of an algorithm cannot be computed theoretically, but it can be estimated from the number of statements executed and the structure of the program, and this estimate is a useful reflection of an algorithm's strengths and weaknesses. The main steps of this analysis are as follows:

  1. Population initialization costs O(n);

  2. Generating the extract point and updating the population require \(O(n^{2})\);

  3. Generating the neighborhood size costs O(n);

  4. Selecting points using the vector difference and spherical evolution strategy requires \(O(n^{2})\);

  5. Updating the random point and the centroid point costs O(n).

The execution time of the remaining program segments, which is independent of the problem size n, can be denoted O(1). In summary, with T iterations, the computational complexity of CCWFSSE is:

$$\begin{aligned} T(N)=O(n)+T\left[ O(n^{2})+O(n)+O(n^{2})+O(n)\right] =(2T+1)O(n)+2T\,O(n^{2}) \end{aligned}$$
(10)

Since only the highest-order term is retained, the time complexity T(N) of CCWFSSE is identified as \(O(N^{2})\), the same as that of WFS but better than that of SE. This means there is still room to improve CCWFSSE's program design.

7 Conclusion

A novel hybrid algorithm termed CCWFSSE was presented in this article. The algorithm uses spherical search to improve exploration ability and cooperative coevolution to enhance population diversity and exploitation ability. The sufficiently large IEEE CEC2017 test suite and the CEC2011 real-world problems were chosen to assess various aspects of CCWFSSE's performance. CCWFSSE uses the Halton sequence to generate points in the distribution space, and uses vector-difference and spherical search patterns to select and generate neighboring points, thereby exploiting the entire search space more effectively. To verify how well CCWFSSE actually performs, we chose WFS, PSO, DE, GWO, and BOA as baselines. The experimental data show that CCWFSSE has good robustness and feasibility. Given its structural simplicity, specifically the fact that it is essentially parameter-free, the findings are encouraging: CCWFSSE competes successfully with related algorithms.

Although CCWFSSE was verified to be effective, it appeared to face difficulties when addressing complex, high-dimensional problems. In future work, we will focus on simplifying the multiple diversity-driven strategies and on processing search information. Moreover, CCWFSSE can be applied to more practical problems, such as multi-task learning [83], dynamic community detection [84], routing problems [85, 86], and multi-objective optimization [71, 87, 88].