1 Introduction

As a powerful alternative to classical optimization methods, meta-heuristic algorithms have attracted considerable attention from researchers over the past decades. Compared with classical optimization methods, meta-heuristic algorithms are relatively simple, less computationally expensive and capable of global search. Most importantly, meta-heuristic algorithms impose no restrictions on the objective function, which may be continuous or discontinuous, differentiable or non-differentiable, convex or non-convex, etc. Nowadays, many kinds of meta-heuristic algorithms have been proposed, for instance, differential evolution (DE) [34], biogeography-based optimization (BBO) [40], particle swarm optimization (PSO) [23], artificial bee colony [43], bacteria foraging optimization [17], the Remora optimization algorithm (ROA) [20], the coronavirus optimization algorithm (COA) [25, 26], hunter–prey optimization [31], the marine predators algorithm (MPA) [14] and so on.

As a competitive meta-heuristic algorithm, the imperialist competition algorithm (ICA) was suggested by Atashpaz-Gargari and Lucas. ICA simulates the imperialistic competition among countries in the world [5]. Different from many other meta-heuristic algorithms, ICA intrinsically divides the individuals into several sub-swarms. In the evolutionary process, the sub-swarms exchange information via imperialistic competition. Thus, ICA can be viewed as a multi-swarm evolutionary algorithm. Owing to this characteristic, ICA exhibits excellent optimization performance on different engineering problems such as the job shop scheduling problem [27], multi-layer perceptron training [29], the node placement problem in wireless sensor networks [7], fuzzy controller coefficient optimization [38], data clustering [4], and so on.

Though ICA has succeeded in many real applications, it shares several common shortcomings with most other meta-heuristic algorithms. Specifically, ICA tends to get stuck in local optima, resulting in premature convergence. On the other hand, ICA sometimes struggles to maintain population diversity, which prevents it from searching more promising regions of the search space. To overcome these drawbacks, researchers have made great efforts and proposed different schemes to improve the performance of ICA.

The first class of improving schemes changes the way of assimilation, including the assimilation coefficients and the assimilation formula. In [2], a dynamic adjusting scheme for the assimilation coefficients was designed on the basis of the attraction and repulsion principle. The authors in [9] proposed a scheme to dynamically change the assimilation coefficient via an interval type-2 fuzzy system (T2FS). In [24], an adaptive deviation angle adjusting scheme was designed using a fuzzy inference system (FIS), and in [1], a dynamic adjusting approach for the deviation angle was designed using a Gaussian probability model. The colonies’ moving formula has also been re-designed by several researchers. For example, in [19, 35], the velocity-position model of PSO was adopted as the colonies’ moving formula in the assimilation phase. In Gaussian bare-bone ICA (GBB-ICA) [10], the Gaussian sampling strategy of bare-bone particle swarm optimization (BBPSO) was borrowed as the assimilation scheme. In [39], the moving formula of the firefly algorithm (FA) was utilized as the assimilation formula of ICA. In [12, 33], the authors proposed to use the crossover and mutation of the DE algorithm as the assimilation strategy.

The second class of improving schemes adds mutation operators to ICA. For example, in [42], a mutation-based imperialist competition algorithm was proposed, in which the imperialist is mutated via three operators: Gaussian, Cauchy and Lévy. In [3], two new candidate positions are generated for each colony using the unimodal normal distribution crossover (UNDX) operator, and the better of the two is taken as the new position of the colony. In [30], a chaotic mutation-based ICA was proposed, in which a chaotic map serves as the mutation strategy to generate new individuals that replace old ones in the revolution stage, helping the algorithm escape from local optima. The third class of improving schemes hybridizes ICA with other evolutionary algorithms, for example, with GA [6, 18], with simulated annealing (SA) [32], and with PSO [28].

Among the various improved ICAs, GBB-ICA is an interesting variant. GBB-ICA has fewer parameters to adjust than the original ICA and the other ICA variants, which makes it easy for engineers to apply. Most importantly, GBB-ICA shows superior performance to many other ICA variants. However, in the search process, GBB-ICA generates the new candidate position of a colony only around the imperialist. If the imperialist is located in an unproductive region, GBB-ICA cannot search effectively and cannot locate a high-quality candidate position for the colony. In addition, the search range of GBB-ICA is relatively small, so it cannot explore further promising regions where the global optimum may lie.

To overcome the drawbacks of GBB-ICA, we propose a new double Gaussian sampling strategy to enhance its searching ability. Specifically, an extra Gaussian sampling point is added to GBB-ICA. The mean and standard deviation of the extra Gaussian sampling are calculated using the positions of the second best colony in the empire and the current colony itself. To further speed up convergence and explore informative search regions, the quasi-oppositional learning technique is also incorporated into GBB-ICA for two purposes: one is to generate a higher quality initial population, and the other is to produce more potential candidates in the assimilation step. The proposed algorithm is called quasi-oppositional learning-based double Gaussian sampling bare-bone imperialist competitive algorithm (QOLBDGSBB-ICA). The main contributions of our paper include the following aspects.

  • A new double Gaussian sampling strategy is proposed to enhance the searching ability of GBB-ICA so that it can effectively avoid premature convergence.

  • The quasi-oppositional learning strategy is incorporated into GBB-ICA to further speed up its convergence and help it explore more informative search regions to produce high-quality solutions.

  • The proposed QOLBDGSBB-ICA exhibits superior performance in terms of global searching ability and convergence speed.

The remainder of this paper is arranged as follows. In Sect. 2, GBB-ICA is briefly reviewed. The proposed QOLBDGSBB-ICA is explained in Sect. 3. The experimental results and comparative analysis are reported in Sect. 4. Lastly, Sect. 5 presents the concluding remarks.

2 Review of GBB-ICA

The main steps of GBB-ICA are similar to those of ICA and mainly consist of three evolutionary processes: initialization, assimilation and imperialistic competition. In the assimilation stage of GBB-ICA, Gaussian sampling is used to generate new positions for the colonies, instead of the random moving of ICA. In the following, the evolutionary processes of GBB-ICA are introduced in detail.

2.1 Initialization

Let us consider a minimization problem with d variables,

$$\begin{aligned} \begin{array}{ll} \min &{} f(\varvec{x}), \varvec{x} = [x_1, x_2, \ldots , x_d]\\ \text{ s.t. } &{} l_j \le x_j \le u_j. \end{array} \end{aligned}$$
(1)

In ICA, each candidate solution of the optimization problem (1) is called a country. Mathematically, a d-dimensional vector \(\varvec{x} = [x_1, x_2, \ldots , x_d]\) denotes the country’s position. In the initialization stage, N initial countries are produced randomly in the search range as

$$\begin{aligned} x_{ij} = l_j + (u_j - l_j)\times r, \quad j = 1,2,\ldots , d;\ i=1,2,\ldots , N, \end{aligned}$$
(2)

where \(x_{ij}\) stands for the jth component of the ith country, r is a uniformly distributed random number in [0, 1], and \(l_j\) and \(u_j\) are the minimum and maximum values allowed for variable \(x_j\). Then, the cost or power of each country can be computed as

$$\begin{aligned} c_{i}=f(\varvec{x}_i). \end{aligned}$$
(3)

In the next step, the N countries are sorted in ascending order according to their cost. The first \(N_{\text {imp}}\) countries with the best cost are taken as imperialists while the remaining \(N-N_{\text {imp}}\) are colonies. Each imperialist possesses several colonies. To determine how many colonies the nth imperialist can possess, one first calculates the normalized cost or power of the nth imperialist as

$$\begin{aligned} C_{n}=c_{n}-\max \limits _{i}\{c_{i}\}. \end{aligned}$$
(4)

The nth imperialist can possess \(NC_n\) colonies, which is computed as

$$\begin{aligned} NC_{n}=\text {round}\{p_{n} \cdot N_{\text {col}}\}, n=1,2,\ldots , N_{\text {imp}}, \end{aligned}$$
(5)

where

$$\begin{aligned} p_{n}=\left| \frac{C_{n}}{\sum \limits _{i=1}^{N_{\text {imp}}}C_{i}}\right| . \end{aligned}$$
(6)

After assigning the colonies to each imperialist, \(N_{\text {imp}}\) empires are initially formed and the initialization process is complete.
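The initialization stage described by Eqs. (2) and (4)–(6) can be sketched in NumPy as follows; `init_countries` and `form_empires` are illustrative names and signatures, not from the paper:

```python
import numpy as np

def init_countries(N, l, u, rng):
    """Random initial countries, Eq. (2): x_ij = l_j + (u_j - l_j) * r."""
    l, u = np.asarray(l, float), np.asarray(u, float)
    return l + (u - l) * rng.random((N, l.size))

def form_empires(costs, n_imp):
    """Assign colonies to imperialists following Eqs. (4)-(6)."""
    order = np.argsort(costs)                # ascending cost: best countries first
    C = costs[order][:n_imp] - costs.max()   # normalized imperialist cost, Eq. (4)
    p = np.abs(C / C.sum())                  # normalized power, Eq. (6)
    n_col = costs.size - n_imp               # total number of colonies
    nc = np.round(p * n_col).astype(int)     # colonies per imperialist, Eq. (5)
    nc[-1] += n_col - nc.sum()               # give any rounding remainder to the last empire
    return order, nc
```

Because Eq. (5) rounds, the colony counts may not sum to \(N-N_{\text {imp}}\) exactly; the sketch assigns the remainder to the last empire, which is one common convention rather than something the paper specifies.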

2.2 Assimilation

In this stage, each colony moves toward its imperialist to improve its power and thus strengthen the whole power of the empire. This process is illustrated in Fig. 1.

Fig. 1
figure 1

Assimilation process

In Fig. 1, a colony approaches the imperialist by moving a distance \(\Delta \varvec{x}\) from its current position with a deviation angle \(\theta \). The length of \(\Delta \varvec{x}\) is component-wise random, that is to say,

$$\begin{aligned} \Delta {x}_j \thicksim U(0,\beta \times L), \end{aligned}$$
(7)

where \(\Delta {x}_j\) is the jth component of \(\Delta \varvec{x}\), L is the distance between the ith colony and its imperialist, and \(\beta \) is a real number greater than 1 called the assimilation coefficient. Similarly, \(\theta \) is a number randomly selected from \([-\pi /4, \pi /4]\), that is [5],

$$\begin{aligned} \theta \thicksim U(-\pi /4, \pi /4). \end{aligned}$$

In GBB-ICA, a Gaussian sampling process is utilized to produce the new candidate position of a colony. For convenience of notation, let \(\varvec{x}^{\text {imp}}=[x_{1}^{\text {imp}}, x_{2}^{\text {imp}}, \ldots , x_{d}^{\text {imp}}]\) be the imperialist’s position and \(\varvec{x}_i^{\text {col}}= [x_{i1}^{\text {col}}, x_{i2}^{\text {col}}, \ldots , x_{id}^{\text {col}}] \) be the ith colony’s position. The new position of the ith colony is denoted as \(\varvec{z}_i^{\text {col}}=[z_{i1}^{\text {col}}, z_{i2}^{\text {col}}, \ldots , z_{id}^{\text {col}}]\). GBB-ICA generates the new position using the Gaussian sampling technique as

$$\begin{aligned} z_{ij}^{\text {col}} = \left\{ \begin{array}{ll} \mathcal {N}(\mu _j, \sigma _j), &{} \hbox {if}\,\text {rand}(0,1) >0.5; \\ x_{ij}^{\text {col}}+ U(0, \beta \times L), &{} \hbox {otherwise.} \end{array} \right. , \end{aligned}$$
(8)

where \(\mathcal {N}(\mu _j, \sigma _j)\) represents a Gaussian distribution with mean \(\mu _j\) and standard deviation \(\sigma _j\), calculated as

$$\begin{aligned} \mu _j = \frac{x_{j}^{\text {imp}}+x_{ij}^{\text {col}}}{2}, \sigma _j =|x_{j}^{\text {imp}} - x_{ij}^{\text {col}}|. \end{aligned}$$

Since GBB-ICA generates the new position of a colony only around the imperialist, it runs the risk of getting stuck in local optima and missing the chance to explore more productive regions.
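The per-dimension sampling of Eq. (8) can be sketched in NumPy as follows; the function name, the vectorized form and the default \(\beta =2\) are illustrative assumptions:

```python
import numpy as np

def gbb_assimilate(x_col, x_imp, beta=2.0, rng=None):
    """Per-dimension sampling of Eq. (8): bare-bone Gaussian move toward the imperialist."""
    rng = np.random.default_rng() if rng is None else rng
    mu = (x_imp + x_col) / 2.0                 # mean midway between colony and imperialist
    sigma = np.abs(x_imp - x_col)              # spread equals their component-wise distance
    L = np.linalg.norm(x_imp - x_col)          # colony-imperialist distance
    gauss = rng.normal(mu, sigma)              # Gaussian branch of Eq. (8)
    uniform = x_col + rng.uniform(0.0, beta * L, size=x_col.shape)  # ICA-style branch
    mask = rng.random(x_col.shape) > 0.5       # choose a branch per dimension
    return np.where(mask, gauss, uniform)
```

Note that when a colony coincides with its imperialist, both branches degenerate and the colony stays put, which is exactly the stagnation risk the text describes.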

2.3 Imperialistic Competition

After assimilation, the imperialistic competition between empires begins. In this stage, the stronger empires seize the colonies of the weaker empires, so the strong become stronger and stronger while the weak become weaker and weaker. To do this, the weakest colony of the weakest empire is seized by one of the stronger empires. The probability that a stronger empire occupies the lost colony is calculated as follows. First, the total cost of the nth empire is calculated as

$$\begin{aligned} TC_{n}= c_n +\xi \frac{\sum _{i=1}^{NC_n}c_i }{NC_n}, \end{aligned}$$
(9)

where \(\xi \) is a positive real number smaller than 1 called the cost ratio coefficient, \(c_n\) is the cost of the nth imperialist, and \(c_i\) is the cost of the ith colony in the empire. The normalized total cost \(NTC_{n}\) of the nth empire is defined as

$$\begin{aligned} NTC_{n}=TC_{n}- \max \limits _{i} \{TC_{i}\}. \end{aligned}$$
(10)

All the empires have chances to possess the lost colony with a probability \(p_n\), which is computed as

$$\begin{aligned} p_{n}=\left| \frac{NTC_{n}}{\sum \limits _{j=1}^{N_{\text {imp}}} NTC_{j}} \right| , n=1,2,\ldots , N_{\text {imp}}. \end{aligned}$$
(11)

Denote

$$\begin{aligned} \varvec{p} =[p_{1},p_{2},...,p_{N_{\text {imp}}}], \end{aligned}$$
(12)

as the probability vector, and generate a random vector \(\varvec{r} =[r_{1},r_{2},...,r_{N_{\text {imp}}}]\) with \(r_i \in [0,1]\). The difference between \(\varvec{p}\) and \(\varvec{r}\) is computed as

$$\begin{aligned} \varvec{m} =[p_{1}-r_{1},p_{2}-r_{2},...,p_{{N_{\text {imp}}}}-r_{N_{\text {imp}}}]. \end{aligned}$$
(13)

In \(\varvec{m}\), the index corresponding to the maximum value among \(p_i-r_i, i=1,2,\ldots , N_{\text {imp}}\) is denoted as \(\kappa \). Then, the \(\kappa \)th empire possesses the lost colony. Figure 2 illustrates the imperialistic competition.

Fig. 2
figure 2

Imperialist competition of ICA
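The selection rule of Eqs. (10)–(13) can be sketched as follows; `pick_winner` is an illustrative name, and its argument holds the total costs \(TC_n\) of Eq. (9):

```python
import numpy as np

def pick_winner(total_costs, rng):
    """Roulette-like selection of Eqs. (10)-(13): which empire seizes the weakest colony."""
    NTC = total_costs - total_costs.max()    # normalized total cost, Eq. (10)
    p = np.abs(NTC / NTC.sum())              # possession probabilities, Eq. (11)
    r = rng.random(p.size)                   # random vector of Eq. (13)
    return int(np.argmax(p - r))             # index kappa of the largest p_i - r_i
```

This differs from a plain roulette wheel in that the empire maximizing \(p_i - r_i\) wins, so stronger empires are favored but every empire retains a chance.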

3 The Proposed QOLBDGSBB-ICA

In this section, the proposed QOLBDGSBB-ICA is explained in detail. First, the proposed double Gaussian sampling strategy is introduced, and then the quasi-oppositional learning is explained.

3.1 Double Gaussian Sampling Strategy

As stated before, GBB-ICA generates the new position of a colony only around the imperialist, which carries the risk of trapping in local minima and missing the chance to explore more productive regions. Observing this, we propose a double Gaussian sampling strategy, in which an extra Gaussian sampling is added to generate the new position of the current colony. Let \(\varvec{x}_k^{\text {col}}= [x_{k1}^{\text {col}}, x_{k2}^{\text {col}}, \ldots , x_{kd}^{\text {col}}]\) denote the best colony in an empire. The double Gaussian sampling strategy generates the new position of a colony in the following way,

$$\begin{aligned} \varvec{z}_{i}^{\text {col}} = r_1 \varvec{z}_{i1}^{\text {col}} + r_2 \varvec{z}_{i2}^{\text {col}}, \end{aligned}$$
(14)

where \(r_1\) and \(r_2\) are two random numbers uniformly distributed in [0,1], and

$$\begin{aligned} \varvec{z}_{i1}^{\text {col}}= & {} \mathcal {N}\left( \frac{\varvec{x}_i^{\text {col}}+ \varvec{x}_k^{\text {col}}}{2}, |\varvec{x}_i^{\text {col}} -\varvec{x}_k^{\text {col}}| \right) , \end{aligned}$$
(15)
$$\begin{aligned} \varvec{z}_{i2}^{\text {col}}= & {} \mathcal {N}\left( \frac{\varvec{x}^{\text {imp}} + \varvec{x}_i^{\text {col}}}{2}, |\varvec{x}^{\text {imp}} -\varvec{x}_i^{\text {col}}| \right) . \end{aligned}$$
(16)

Compared with GBB-ICA, the double Gaussian sampling strategy utilizes additional information within the empire to generate the new position of a colony, which enlarges the search range and can effectively avoid premature convergence.
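Under the same NumPy conventions, the double Gaussian sampling of Eqs. (14)–(16) can be sketched as follows (the function name and vectorized form are illustrative):

```python
import numpy as np

def double_gaussian_sample(x_col, x_imp, x_best_col, rng):
    """Eqs. (14)-(16): blend two bare-bone Gaussian samples.

    x_col      : current colony position
    x_imp      : imperialist position
    x_best_col : best colony position in the empire
    """
    z1 = rng.normal((x_col + x_best_col) / 2.0,
                    np.abs(x_col - x_best_col))   # Eq. (15): sample around the best colony
    z2 = rng.normal((x_imp + x_col) / 2.0,
                    np.abs(x_imp - x_col))        # Eq. (16): sample around the imperialist
    r1, r2 = rng.random(2)                        # random weights in [0, 1)
    return r1 * z1 + r2 * z2                      # Eq. (14): weighted combination
```

Because the two samples are centered on different attractors, the combined point can land outside the segment between colony and imperialist, which is what enlarges the search range.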

3.2 Quasi-Oppositional Learning Strategy

Here, the quasi-oppositional learning technique is used to further strengthen the performance of DGSBB-ICA. Several research results have shown that, in most evolutionary algorithms, generating new candidate solutions using oppositional guesses is more effective than using purely random ones [41]. On this basis, Tizhoosh proposed the concept of quasi-oppositional learning [36]. Quasi-oppositional learning has been widely adopted to enhance the performance of different evolutionary algorithms. In the following, several concepts related to quasi-oppositional learning are first introduced.

Opposite number The opposite number of a real number x in the interval [a, b] is defined as

$$\begin{aligned} \breve{x} = a+b-x. \end{aligned}$$
(17)

Opposite point The opposite point of the point \(\varvec{x} = [x_1, x_2, \ldots , x_d]\) in d-dimensional space, with each \(x_i \in [a_i, b_i]\), is denoted as \(\breve{{\varvec{x}}}=[\breve{x}_1, \breve{x}_2, \ldots , \breve{x}_d]\), in which \(\breve{x}_i\) is defined as

$$\begin{aligned} \breve{x}_i=a_i + b_i -x_i. \end{aligned}$$
(18)

Quasi-opposite number Let r be a uniformly distributed random number in [0, 1]. The quasi-opposite number of a real number x in the interval [a, b] is defined as

$$\begin{aligned} \bar{x} = \left\{ \begin{array}{ll} \frac{a+b}{2}+r(\breve{x}-\frac{a+b}{2}), &{} \breve{x}< \frac{a+b}{2}; \\ \breve{x}+r(\frac{a+b}{2}-\breve{x}), &{} \breve{x}\ge \frac{a+b}{2}. \end{array} \right. \end{aligned}$$
(19)

Quasi-oppositional point The quasi-oppositional point of \(\varvec{x}=[x_1, x_2, \ldots , x_d]\) is denoted as \(\bar{\varvec{x}}=[\bar{x}_1, \bar{x}_2, \ldots , \bar{x}_d]\), whose components are defined as

$$\begin{aligned} \bar{x}_i = \left\{ \begin{array}{ll} \frac{a_i+b_i}{2}+r(\breve{x}_i-\frac{a_i+b_i}{2}), &{} \breve{x}_i< \frac{a_i+b_i}{2}; \\ \breve{x}_i+r(\frac{a_i+b_i}{2}-\breve{x}_i), &{} \breve{x}_i\ge \frac{a_i+b_i}{2}. \end{array} \right. \end{aligned}$$
(20)

Quasi-opposition-based optimization Let \({\varvec{x}}=(x_1, x_2, \ldots , x_d)\) be a point in d-dimensional space and \(f({\varvec{x}})\) the fitness function. The quasi-oppositional point of \(\varvec{x}\) is denoted as \(\bar{{\varvec{x}}}\). If \(f(\bar{{\varvec{x}}})\) is better than \(f({\varvec{x}})\), then point \({\varvec{x}}\) is replaced with \(\bar{{\varvec{x}}}\); otherwise, point \(\varvec{x}\) is retained.
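Since both branches of Eq. (20) draw a random point between the interval centre and the opposite point, the quasi-oppositional point can be sketched with a single expression; the function name is illustrative, and per-dimension bounds a, b are assumed to be arrays:

```python
import numpy as np

def quasi_opposite(x, a, b, rng):
    """Quasi-oppositional point of x in the box [a, b], following Eqs. (17)-(20)."""
    x, a, b = (np.asarray(v, float) for v in (x, a, b))
    breve = a + b - x                        # opposite point, Eq. (18)
    centre = (a + b) / 2.0                   # interval centre
    r = rng.random(x.shape)                  # uniform random in [0, 1)
    return centre + r * (breve - centre)     # random point between centre and breve
```

Each component therefore always lies between the interval centre and the opposite number, and hence inside the original bounds.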

3.3 Details of QOLBDGSBB-ICA

In this section, the double Gaussian sampling strategy and the quasi-oppositional learning technique are incorporated into GBB-ICA to enhance its optimization performance. The resulting algorithm is called QOLBDGSBB-ICA. The quasi-oppositional learning technique is used in the initialization and assimilation stages to enhance the algorithm’s performance.

3.3.1 Quasi-Oppositional Learning-Based Initialization

In the initialization stage, quasi-oppositional learning is used to generate a higher quality initial population. The detailed process is as follows.

  1. Generate an initial population \(\varvec{P}_0\) with N countries.

  2. Generate the quasi-oppositional population \(\varvec{QOP}_0\) of \(\varvec{P}_0\) according to the method shown in (20).

  3. Calculate the costs of the populations \(\varvec{P}_0\) and \(\varvec{QOP}_0\), denoted as \(\varvec{F}_0\) and \(\varvec{QOF}_0\), respectively.

  4. Sort the costs \(\varvec{F}_0 \bigcup \varvec{QOF}_0\) in ascending order. Correspondingly, the 2N countries are sorted in the same order as \(\varvec{F}_0 \bigcup \varvec{QOF}_0\).

  5. From the sorted 2N countries, select the first N countries and their corresponding costs as the initial population.
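The five steps above can be sketched as one function; `qol_init` and its signature are illustrative, and the quasi-opposite computation reuses Eq. (20):

```python
import numpy as np

def qol_init(f, N, a, b, rng):
    """Quasi-oppositional initialization: keep the N best of P0 and QOP0."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    P0 = a + (b - a) * rng.random((N, a.size))            # step 1: random countries, Eq. (2)
    centre = (a + b) / 2.0
    QOP0 = centre + rng.random((N, a.size)) * ((a + b - P0) - centre)  # step 2: Eq. (20)
    pool = np.vstack([P0, QOP0])                          # steps 3-4: merge the 2N countries
    costs = np.array([f(x) for x in pool])                # costs F0 and QOF0
    keep = np.argsort(costs)[:N]                          # step 5: the N best countries
    return pool[keep], costs[keep]
```

One evaluation of f per country in the doubled pool is the price paid for the better-seeded population.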

3.3.2 Quasi-Oppositional Learning-Based Assimilation

After generating new positions for the colonies using double Gaussian sampling, the quasi-oppositional learning technique is adopted to help the algorithm find better solutions. Let \(\varvec{z}_i^{\text {col}}\) be the newly generated position of the ith colony, and generate its quasi-oppositional point \(\varvec{qoz}_i^{\text {col}}\). If \(f(\varvec{qoz}_i^{\text {col}}) < f(\varvec{z}_i^{\text {col}}) \), then \(\varvec{z}_i^{\text {col}}\) is replaced with \(\varvec{qoz}_i^{\text {col}}\); otherwise, \(\varvec{z}_i^{\text {col}}\) is retained as the new position of the ith colony.
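This accept-the-better rule can be sketched as follows; the function name is illustrative, and the quasi-opposite point is computed as in Eq. (20):

```python
import numpy as np

def qol_refine(f, z, a, b, rng):
    """Keep z or its quasi-oppositional point, whichever has the lower cost f."""
    z, a, b = (np.asarray(v, float) for v in (z, a, b))
    centre = (a + b) / 2.0
    qoz = centre + rng.random(z.shape) * ((a + b - z) - centre)  # quasi-opposite of z
    return qoz if f(qoz) < f(z) else z       # greedy selection, never worse than z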

The proposed QOLBDGSBB-ICA is summarized in Algorithm 1.

Algorithm 1
figure a

QOLB-DGSBB-ICA algorithm

4 Experimental Results

To demonstrate the performance of the proposed QOLBDGSBB-ICA, optimization experiments on 20 benchmark functions and several engineering design problems are conducted. Several well-performing improved ICAs, i.e., AR-ICA, ICA-PSO, and GBB-ICA, are selected as the comparison algorithms. The common parameters of the referenced algorithms are set to the same values: the population size is 50, the initial number of empires is 8, and the assimilation coefficient is 2. All the referenced algorithms are independently run 30 times. The best, the worst and the standard deviation (std) of the objective values are used as the evaluation metrics. Furthermore, comparisons between the referenced algorithms via a nonparametric test are provided.

4.1 Experiments on Benchmark Function

First, optimization experiments are conducted on benchmark functions to verify the performance of the proposed QOLBDGSBB-ICA.

4.1.1 Description of Benchmark Function

Here, 20 well-known benchmark functions are adopted to examine the performance of the proposed QOLBDGSBB-ICA. Their details are listed in Tables 11 and 12 in the Appendix. The adopted benchmark functions can be divided into two classes, i.e., unimodal and multimodal functions. A unimodal function has only one global optimum and is relatively easy to optimize. On the contrary, a multimodal function has many local optima and is more difficult to optimize than a unimodal one. In Tables 11 and 12, functions \(f_1\) and \(f_2\) are unimodal, and the other 18 functions, i.e., \(f_{3}\)–\(f_{20}\), are multimodal.

4.1.2 Experimental Results of 30-Dimensional Case

The optimization results of the 30-dimensional benchmark functions over the 30 independent runs are statistically reported in Tables 1 and 2 in terms of the best, the worst, the mean and the std (standard deviation). In general, one can see from Tables 1 and 2 that the performance of the proposed QOLBDGSBB-ICA is superior to that of the other referenced ICAs.

For the unimodal functions \(f_1\) and \(f_2\), the best, the worst, the mean and the std of the proposed QOLBDGSBB-ICA are all 0, which indicates that QOLBDGSBB-ICA achieves the global minimum 0 in each of the 30 runs. This illustrates the superior performance of QOLBDGSBB-ICA in optimizing unimodal functions. For \(f_1\), AR-ICA, ICA-PSO and GBB-ICA achieve acceptable optimization results, whereas ICA fails to give a satisfactory result because its mean is 1.0052, which is far from the global minimum. For \(f_2\), ICA, AR-ICA, ICA-PSO and GBB-ICA all fail to give satisfactory optimization results: they do not converge to the global minimum within the maximum number of iterations.

Finding the global minimum of a multimodal function is very difficult because many local optima exist in the search space. The proposed QOLBDGSBB-ICA achieves perfect optimization results on functions \(f_4\), \(f_5\), \(f_7\), \(f_9\), \(f_{10}\), \(f_{11}\), \(f_{12}\), \(f_{13}\), \(f_{15}\), \(f_{16}\) and \(f_{17}\). On these functions, the best, the worst, the mean and the std obtained by QOLBDGSBB-ICA all approach 0. This shows that QOLBDGSBB-ICA finds the global minimum in each of the 30 runs, which indicates a powerful capacity to escape from local optima during the search. For \(f_3\), the result obtained by QOLBDGSBB-ICA is not the best among ICA, AR-ICA, ICA-PSO, GBB-ICA and DGSBB-ICA; AR-ICA achieves the best result. For function \(f_6\), the best value obtained by QOLBDGSBB-ICA approaches the global minimum \(-\)1.2279E+04, which is superior to those obtained by the other referenced ICAs. However, the mean is \(-\)1.1646E+04, which is not equal to the global minimum, and the std is not 0, which indicates that QOLBDGSBB-ICA does not reach the global minimum in every one of the 30 runs. For function \(f_8\), QOLBDGSBB-ICA and DGSBB-ICA find the same result, that is, the best, the worst and the mean are all 4.4409E\(-\)16 and the std is 0. This illustrates that QOLBDGSBB-ICA and DGSBB-ICA find the result 4.4409E\(-\)16 in each of the 30 runs. Though 4.4409E\(-\)16 is not the global minimum, it is a satisfactory result. For function \(f_9\), the global minimum is \(-\)1 and all the referenced ICAs obtain satisfactory results, that is, the best values found by all the referenced ICAs approach \(-\)1; however, the performance of DGSBB-ICA and QOLBDGSBB-ICA is the best. As far as function \(f_{14}\) is concerned, its global minimum is \(-\)1174.9797. The best value obtained by QOLBDGSBB-ICA is \(-\)1.1749E+03, which is the closest to the global minimum, and its mean is also the closest to the global minimum. For functions \(f_{19}\) and \(f_{20}\), none of the referenced ICAs achieves the global minimum; however, the results of QOLBDGSBB-ICA are the best among the compared algorithms. For function \(f_{19}\), QOLBDGSBB-ICA gives the best value 1.30667E\(-\)16, which is better than 7.6723E\(-\)09 and 9.5570E\(-\)09 obtained by GBB-ICA and AR-ICA, respectively. For function \(f_{20}\), QOLBDGSBB-ICA gives the best value 3.02014E\(-\)13, which is better than 3.1536E\(-\)09 and 3.5010E\(-\)09 given by ICA-PSO and GBB-ICA.

Table 1 Statistical results of benchmark function for 30-dimensional case (\(f_1-f_{10}\))
Table 2 Statistical results of benchmark function for 30-dimensional case (\(f_{11}-f_{20}\))
Table 3 Statistical results of benchmark function for 50-dimensional case (\(f_1-f_{10}\))
Table 4 Statistical results of benchmark function for 50-dimensional case (\(f_{11}-f_{20}\))

4.1.3 Experimental Results on 50-Dimensional Functions

The optimization results on the 50-dimensional benchmark functions are reported here. Usually, as the dimension increases, the optimization problem becomes more difficult. The results are statistically listed in Tables 3 and 4. Obviously, as the dimension increases, the optimization performance of the referenced ICA, AR-ICA, ICA-PSO and GBB-ICA degrades dramatically, for both unimodal and multimodal functions. On the contrary, our proposed QOLBDGSBB-ICA still maintains good optimization performance. A detailed explanation is given in the following.

First, for the unimodal functions \(f_1\) and \(f_2\), the best, the worst, the mean and the std obtained by QOLBDGSBB-ICA are all equal to 0, which shows that QOLBDGSBB-ICA finds the global minimum in each of the 30 runs. DGSBB-ICA gives satisfactory optimization results. However, ICA, AR-ICA, ICA-PSO and GBB-ICA give unacceptable results because their means are far from the global minimum.

For the multimodal functions \(f_4\), \(f_5\), \(f_7\), \(f_9\), \(f_{10}\sim f_{13}\), \(f_{15}\) and \(f_{17}\), the best, the worst, the mean and the std obtained by QOLBDGSBB-ICA are all equal to 0, which illustrates that QOLBDGSBB-ICA finds the global minimum of these functions in each of the 30 runs and demonstrates its superior optimization performance. For function \(f_3\), all the referenced ICAs fail to give satisfactory results. For functions \(f_4\), \(f_7\), \(f_9\), \(f_{13}\) and \(f_{17}\), DGSBB-ICA also finds the global minimum every time, as QOLBDGSBB-ICA does. For function \(f_5\), ICA, AR-ICA, ICA-PSO and GBB-ICA all give poor results. For function \(f_6\), the global minimum is \(-\)2.0949E+04. None of the referenced algorithms finds the global minimum; however, the result found by QOLBDGSBB-ICA is the closest to it. For function \(f_8\), ICA, AR-ICA, ICA-PSO and GBB-ICA all give poor optimization results, while DGSBB-ICA and QOLBDGSBB-ICA achieve the best value 4.4409E\(-\)16, which is satisfactory. For functions \(f_{11}\) and \(f_{15}\), ICA, AR-ICA, ICA-PSO and GBB-ICA all give poor results, while DGSBB-ICA presents a satisfactory result; the proposed QOLBDGSBB-ICA finds the global minimum of \(f_{11}\) every time. For function \(f_{14}\), the global minimum is \(-\)1.9582E+03. The result obtained by ICA-PSO is the closest to the global minimum and that obtained by QOLBDGSBB-ICA is the second best. For function \(f_{16}\), QOLBDGSBB-ICA can find the global minimum within the 30 runs; however, the worst, the mean and the std are not equal to 0, which indicates that QOLBDGSBB-ICA does not find the global minimum every time. Nevertheless, ICA, AR-ICA, ICA-PSO, GBB-ICA and DGSBB-ICA cannot find an acceptable result. A similar phenomenon occurs for function \(f_{18}\). For functions \(f_{19}\) and \(f_{20}\), none of the referenced algorithms finds the global minimum; however, the results obtained by QOLBDGSBB-ICA are the best among the compared algorithms.

From the optimization results listed in Tables 1, 2, 3 and 4 and the analysis above, one can conclude that the performance of QOLBDGSBB-ICA is the best among the compared algorithms. It can effectively overcome premature convergence and explore more informative regions in the search process.

4.1.4 Convergence Analysis

In this section, the convergence of the proposed QOLBDGSBB-ICA is investigated. To this end, the evolutionary curves of all the algorithms are plotted.

Figures 3 and 4 show the evolutionary curves for the 30-dimensional benchmark functions. For the unimodal functions \(f_1\) and \(f_2\), one can see that ICA, AR-ICA, ICA-PSO and GBB-ICA exhibit stagnation behavior during the search, whereas DGSBB-ICA and QOLBDGSBB-ICA exhibit excellent searching performance. This shows that the introduced double sampling strategy can indeed improve the searching performance of GBB-ICA. Compared with DGSBB-ICA, QOLBDGSBB-ICA converges to the global minimum within 400 generations, while DGSBB-ICA does not converge to the global minimum within the maximum number of generations, i.e., 1000.

For the multimodal functions, DGSBB-ICA and QOLBDGSBB-ICA also demonstrate superior convergence performance. For functions \(f_4\), \(f_7\sim f_9\), \(f_{13}\) and \(f_{17}\), DGSBB-ICA and QOLBDGSBB-ICA converge to the global minimum at an extremely fast speed, while the other referenced ICA variants exhibit slow convergence. For functions \(f_5\), \(f_{10}\), \(f_{11}\), \(f_{12}\) and \(f_{15}\), the convergence speed of DGSBB-ICA and QOLBDGSBB-ICA is considerably faster than that of ICA, AR-ICA, ICA-PSO and GBB-ICA. It seems that ICA, AR-ICA, ICA-PSO and GBB-ICA get trapped in local minima and cannot escape from them. For functions \(f_6\), \(f_{14}\), \(f_{18}\), \(f_{19}\) and \(f_{20}\), the situation is more complex: QOLBDGSBB-ICA does not converge to the global minimum within 1000 generations, DGSBB-ICA exhibits stagnation behavior, and several other referenced ICA variants exhibit good convergence performance. But overall, the convergence speed of QOLBDGSBB-ICA is the fastest among the compared ICA variants. From the above analysis, one can conclude that the proposed double sampling strategy can indeed improve the exploration ability of GBB-ICA, and that the quasi-oppositional learning can effectively help the algorithm explore valuable regions during the search and therefore improves the convergence speed.

Figures 5 and 6 show the evolutionary curves for the 50-dimensional benchmark functions. It is well known that the difficulty of finding the global minimum increases with the dimension. From Figs. 5 and 6, one can see convergence behavior similar to the 30-dimensional case. Specifically, for the unimodal functions \(f_1\) and \(f_2\), ICA, AR-ICA, ICA-PSO and GBB-ICA all exhibit stagnation behavior during the search, while the proposed QOLBDGSBB-ICA converges to the global minimum within 400 generations. For the multimodal functions \(f_4\), \(f_7\sim f_9\), \(f_{13}\) and \(f_{17}\), the proposed QOLBDGSBB-ICA and DGSBB-ICA converge to the global minimum at an extremely fast speed, and for \(f_5\), \(f_{10}\sim f_{12}\) and \(f_{15}\), their convergence speed is considerably faster than that of the other ICA variants. For \(f_6\), \(f_{14}\), \(f_{19}\) and \(f_{20}\), DGSBB-ICA demonstrates stagnation behavior while QOLBDGSBB-ICA still exhibits good convergence performance.

Fig. 3

Evolutionary curves for 30-D case (\(f_1\sim f_{10}\))

Fig. 4

Evolutionary curves for 30-D case (\(f_{11}\sim f_{20}\))

Fig. 5

Evolutionary curves for 50-D case (\(f_{1}\sim f_{10}\))

Fig. 6

Evolutionary curves for 50-D case (\(f_{11}\sim f_{20}\))

Table 5 Wilcoxon test results for 30-dimensional benchmark function

4.1.5 Wilcoxon Test

To further verify the superior performance of QOLBDGSBB-ICA, the Wilcoxon signed-rank test is conducted. The test examines two hypotheses: the null hypothesis \(H_0\), that QOLBDGSBB-ICA performs similarly to GBB-ICA, and the alternative hypothesis \(H_1\), that QOLBDGSBB-ICA is better than GBB-ICA. The data used in the test are the best function values found by each algorithm over the 30 runs.
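The test procedure above can be sketched in a few lines of Python. This is only an illustration: the run values below are placeholders rather than the recorded experimental data, and `scipy.stats.wilcoxon` is assumed as the implementation of the one-sided signed-rank test.

```python
# One-sided Wilcoxon signed-rank test over paired best values from repeated runs.
# The two lists below are illustrative placeholders, NOT the paper's data.
from scipy.stats import wilcoxon

qolb_best = [3.2e-8, 1.1e-7, 5.4e-8, 2.9e-8, 8.7e-8,
             4.1e-8, 6.6e-8, 1.9e-8, 7.3e-8, 2.4e-8]
gbb_best  = [4.5e-3, 2.1e-3, 3.3e-3, 6.0e-3, 1.8e-3,
             2.7e-3, 5.2e-3, 3.9e-3, 2.2e-3, 4.8e-3]

# H0: both algorithms perform similarly; H1: QOLBDGSBB-ICA finds smaller
# best values, hence alternative='less' on the paired differences.
stat, p = wilcoxon(qolb_best, gbb_best, alternative='less')
h = 1 if p < 0.05 else 0  # h = 1 means H0 is rejected at the 5% level
```

With all paired differences negative, the exact one-sided p-value is well below 0.05, so the test reports \(h=1\) for these placeholder data.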

The test results are listed in Tables 5 and 6, where \(h=1\) denotes that QOLBDGSBB-ICA is superior, in the statistical sense, to the algorithm involved in the comparison. From Table 5, one can see that QOLBDGSBB-ICA is superior to GBB-ICA on 19 functions and to DGSBB-ICA on 12 functions. This indicates that introducing the quasi-oppositional learning and the double sampling strategy can indeed improve the optimization performance of GBB-ICA. For the 50-dimensional case, the number of functions on which QOLBDGSBB-ICA is superior to DGSBB-ICA increases to 15, which shows that QOLBDGSBB-ICA is more powerful than DGSBB-ICA when handling more complex optimization problems.

Table 6 Wilcoxon test results for 50-dimensional benchmark function

4.2 Application to Engineering Design Problems

In this subsection, the proposed QOLBDGSBB-ICA is further applied to solve four engineering design problems in order to test its performance.

4.2.1 Pressure Vessel Design Problem

A pressure vessel (PV) has capped ends with hemispherical heads. The pressure vessel design problem aims to minimize the total cost of the cylindrical PV, which comprises material, forming and welding costs [22]. The design variables are the thickness of the shell (\(T_s\)), the thickness of the head (\(T_h\)), the inner radius (R), and the length of the cylindrical section excluding the head (L). The mathematical expression of this problem is formulated as follows.

$$\begin{aligned} \begin{array}{ll} \text {Consider} &{} \varvec{x}=[x_{1}\ x_{2} \ x_{3} \ x_{4}]=[T_{s} \ T_{h} \ R\ L] \\ \text {min} &{} f(\varvec{x}) =0.6224 x_{1} x_{3} x_{4}+1.7781 x_{2}x_{3}^{2} \\ &{}+3.1661 x_{1}^{2}x_{4}+19.84 x_{1}^{2} x_{3} \\ \text {s.t.} &{}g_{1}(\varvec{x})=-x_{1}+0.0193 x_{3} \le 0 \\ &{}g_{2}(\varvec{x})=-x_{2}+0.00954x_{3} \le 0 \\ &{}g_{3}(\varvec{x})=-\pi x_{3}^{2}x_{4}-\frac{4}{3} \pi x_{3}^{3}+1296000 \le 0, \\ &{}g_{4}(\varvec{x})= x_{4}-240 \le 0\\ \text {with} &{}0 \le x_{1}, x_{2}\le 99, \\ &{}10 \le x_{3},x_{4} \le 200. \end{array} \end{aligned}$$
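For concreteness, the objective and constraints above translate directly into code; a minimal Python sketch follows. The static-penalty fitness `pv_penalized` is an assumption made for illustration (the paper does not restate its constraint-handling scheme here), and the design evaluated in the usage note is a classical reference solution from the PV literature.

```python
import math

def pv_cost(x):
    """Objective of the pressure-vessel problem (material + forming + welding cost)."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pv_constraints(x):
    """Values of g1..g4; each must be <= 0 for a feasible design."""
    x1, x2, x3, x4 = x
    return [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0,
        x4 - 240.0,
    ]

def pv_penalized(x, mu=1e6):
    """Static-penalty fitness often paired with meta-heuristics (an assumption;
    the constraint-handling scheme is not restated in this section)."""
    return pv_cost(x) + mu * sum(max(0.0, g) ** 2 for g in pv_constraints(x))
```

For example, evaluating the classical reference design \(x=(0.8125,\ 0.4375,\ 42.0984,\ 176.6366)\) gives a cost near 6059.7.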

This problem has been solved by several approaches, for example, PSO, GA, ES and the harmony search algorithm; see [11] for details. The optimization results of the proposed and referenced algorithms are reported in Table 7. It can be seen from Table 7 that OLBDGSBB-ICA obtains the minimum value 4543.5956 and QOLBDGSBB-ICA obtains 4624.8766, both smaller than the values obtained by IHS, PSO, GA, ES, WOA and BWOA.

Table 7 Optimization results of PV problem

4.2.2 Gear Train Design Problem

The gear train (GT) design problem is an unconstrained discrete design problem in mechanical engineering. The design goal is to bring the gear ratio, defined as the ratio of the angular velocity of the output shaft to that of the input shaft, as close as possible to the required value 1/6.931. The design variables are the numbers of teeth of the gears, \(N_a\), \(N_b\), \(N_d\) and \(N_f\). This problem can be expressed as the following optimization problem.

$$\begin{aligned} \begin{array}{ll} \text {Consider} &{} \varvec{x}=[x_{1}\ x_{2}\ x_{3} \ x_{4}]=[N_a\ N_b\ N_d \ N_f] \\ \text {min} &{} f(\varvec{x})= \left( \frac{1}{6.931}-\frac{x_2x_3}{x_1x_4} \right) ^2\\ \text {s.t.} &{} 12 \le x_i \le 60 \ \text {and}\ x_i \in \mathbb {Z}, \ i=1,2,3,4. \\ \end{array} \end{aligned}$$
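Because the search space is small and purely discrete (49 admissible tooth counts per variable), this particular problem can even be solved exactly by enumeration, which gives a useful reference point for the meta-heuristic results. A short Python sketch, exploiting the symmetry of the objective in \((N_b, N_d)\) and in \((N_a, N_f)\):

```python
def gear_error(na, nb, nd, nf):
    # squared deviation of the realised ratio (nb*nd)/(na*nf) from 1/6.931
    return (1.0 / 6.931 - (nb * nd) / (na * nf)) ** 2

teeth = range(12, 61)
# f is symmetric in (nb, nd) and in (na, nf), so ordered pairs suffice
best_err, best_x = min(
    (gear_error(na, nb, nd, nf), (na, nb, nd, nf))
    for na in teeth for nf in teeth if na <= nf
    for nb in teeth for nd in teeth if nb <= nd
)
```

The enumeration recovers the best-known solution, with a squared ratio error on the order of \(10^{-12}\).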

This problem has been solved by several different algorithms such as the Lagrange multiplier method [22], the cuckoo search algorithm [16] and genetic adaptive search (GAS) [21]. The optimization results of the gear train design problem are shown in Table 8. The optimization results obtained by the proposed OLBDGSBB-ICA and QOLBDGSBB-ICA are slightly better than those of the methods in the literature.

Table 8 Optimization results of Gear design

4.2.3 Multiple Disk Clutch Brake Design Problem

Table 9 Optimization results of multiple disk clutch brake design problem
Table 10 Optimization results of car side impact design problem

The multiple disk clutch brake design problem is a constrained optimization problem with discrete variables. Its primary goal is to minimize the mass of a multiple disk clutch brake [8]. The design variables are the inner radius \((x_1)\), the outer radius \((x_2)\), the thickness of the disks \((x_3)\), the actuating force \((x_4)\) and the number of friction surfaces \((x_5)\). The mathematical formulation of the problem is as follows.

$$\begin{aligned} \begin{array}{ll} \text {min} &{} f(\varvec{x})= \pi \left( x_{2}^{2}-x_{1}^{2}\right) x_{3}\left( x_{5}+1\right) \rho \\ \text {s.t.} &{} g_{1}(\varvec{x})=x_{2}-x_{1}-\Delta R \ge 0 \\ &{} g_{2}(\varvec{x})=L_{\max }-\left( x_{5}+1\right) \left( x_{3}+\delta \right) \ge 0 \\ &{} g_{3}(\varvec{x})=p_{\max }-p_{rz} \ge 0 \\ &{} g_{4}(\varvec{x})=p_{\max } V_{sr\max }-p_{rz} V_{sr}\ge 0 \\ &{} g_{5}(\varvec{x})=V_{sr\max }-V_{sr} \ge 0 \\ &{} g_{6}(\varvec{x})=T_{\max }-T \ge 0 \\ &{} g_{7}(\varvec{x})=M_{h}-s M_{s} \ge 0 \\ &{} g_{8}(\varvec{x})=T \ge 0 \\ \text {where} &{} M_{h}=\frac{2}{3} \mu x_{4} x_{5} \frac{x_{2}^{3}-x_{1}^{3}}{x_{2}^{2}-x_{1}^{2}} \mathrm {~N} \cdot \textrm{mm} \\ &{} T =\frac{n\pi I_z}{30(M_h-M_f)} \mathrm {~s} \\ &{} A=\pi \left( x_{2}^{2}-x_{1}^{2}\right) \textrm{mm}^{2} \\ &{} p_{rz}=\frac{x_{4}}{A} \mathrm {~N} / \textrm{mm}^{2} \\ &{} V_{sr}=\frac{n\pi R_{sr}}{30} \mathrm {~mm} / \textrm{s}\\ &{} R_{sr}=\frac{2}{3} \frac{x_{2}^{3}-x_{1}^{3}}{x_{2}^{2}-x_{1}^{2}} \mathrm {~mm} \\ &{} \Delta R=20 \mathrm {~mm}, \quad L_{\max }=30 \mathrm {~mm}, \quad \mu =0.6 \\ &{} p_{\max }=1 \textrm{MPa}, \quad \rho =0.0000078 \mathrm {~kg} / \textrm{mm}^{3} \\ &{} V_{sr\max }=10 \mathrm {~m} / \textrm{s}, \quad \delta =0.5 \mathrm {~mm}, \quad s=1.5 \\ &{} T_{\max }=15 \mathrm {~s}, \quad n=250 \textrm{rpm}, \quad I_{z}=55 \textrm{kg} \cdot \textrm{m}^{2} \\ &{} M_{s}=40 \textrm{Nm}, \quad M_f=3 \textrm{Nm} \\ \text {with} &{} 60 \le x_{1} \le 80, \quad 90 \le x_{2} \le 110, \quad 1 \le x_{3} \le 3 \\ &{} 0 \le x_{4} \le 1000, \quad 2 \le x_{5} \le 9 \\ &{} x_{i} \in \mathbb {Z}, \quad i=1,2,3,4,5. \end{array} \end{aligned}$$
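The objective and the two purely geometric constraints translate directly into code. The Python sketch below evaluates the mass together with \(g_1\) and \(g_2\) for a candidate design; the design used in the usage note is only an illustrative point that satisfies these two constraints, not claimed to be the optimum.

```python
import math

RHO = 7.8e-6      # material density rho, kg/mm^3
DELTA = 0.5       # disk clearance delta, mm
DELTA_R = 20.0    # required radial clearance Delta R, mm
L_MAX = 30.0      # maximum stack length L_max, mm

def brake_mass(x):
    """Objective: mass of the multiple disk clutch brake, in kg."""
    x1, x2, x3, x4, x5 = x
    return math.pi * (x2**2 - x1**2) * x3 * (x5 + 1) * RHO

def g1(x):
    """Radial clearance constraint: x2 - x1 - Delta R >= 0."""
    return x[1] - x[0] - DELTA_R

def g2(x):
    """Stack length constraint: L_max - (x5 + 1)(x3 + delta) >= 0."""
    return L_MAX - (x[4] + 1) * (x[2] + DELTA)
```

For example, the design \(x=(70,\ 90,\ 1,\ 900,\ 2)\) satisfies \(g_1\) and \(g_2\) and has a mass of about 0.235 kg.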

In the literature, constrained iterative topographical global optimization (C-ITGO) [15], teaching–learning-based optimization (TLBO) [37], the water cycle algorithm (WCA) [13], improved accelerated PSO (IAPSO) [8] and APSO [8] have been applied to this problem. The optimization results are shown in Table 9. It can be seen that the proposed QOLBDGSBB-ICA and OLBDGSBB-ICA obtain satisfactory results, although slightly inferior to those of C-ITGO, WCA and IAPSO.

4.2.4 Car Side Impact Design Problem

This problem aims to minimize the total weight of the door. The design variables are the thickness of the B-Pillar inner (\(x_1\)), the thickness of the B-Pillar reinforcement \((x_2)\), the thickness of the floor side inner \((x_3)\), the thickness of the cross members \((x_4)\), the thickness of the door beam \((x_5)\), the thickness of the door beltline reinforcement \((x_6)\), the thickness of the roof rail \((x_7)\), the material of the B-Pillar inner \((x_8)\), the material of the floor side inner \((x_9)\), the barrier height \((x_{10})\) and the hitting position \((x_{11})\). This problem is mathematically expressed as follows.

$$\begin{aligned} \begin{array}{ll} \text {Consider} &{} \varvec{x}=[x_{1}\ x_{2}\ x_{3} \ x_{4}\ x_5 \ x_6 \ x_7 \ x_8 \ x_9 \ x_{10} \ x_{11}]\\ \text {min} &{} f(\varvec{x})= 1.98+4.90 x_{1}+6.67 x_{2}+6.98 x_{3}\\ &{} +4.01 x_{4} +1.78 x_{5}+2.73 x_{7} \\ \text {s.t.} &{} g_{1}(\varvec{x})=1.16-0.3717 x_{2} x_{4}-0.00931 x_{2} x_{10} \\ &{} \quad -0.484 x_{3} x_{9}+0.01343 x_{6} x_{10} -1\le 0 \\ &{} g_{2}(\varvec{x})=0.261-0.0159 x_{1} x_{2}\\ &{}\quad -0.0188 x_{1} x_{8}-0.0191 x_{2} x_{7} \\ &{}\quad +0.0144 x_{3} x_{5}+0.0008757 x_{5} x_{10}+0.08045 x_{6} x_{9} \\ &{}\quad +0.00139 x_{8} x_{11}+0.00001575 x_{10} x_{11}-0.32 \le 0 \\ &{} g_{3}(\varvec{x})=0.214+0.00817 x_{5}-0.131 x_{1} x_{8} \\ &{}\quad -0.0704 x_{1} x_{9}+0.03099 x_{2} x_{6}-0.018 x_{2} x_{7}\\ &{}\quad +0.0208 x_{3} x_{8}+0.121 x_{3} x_{9} -0.00364 x_{5} x_{6}\\ &{}\quad +0.0007715 x_{5} x_{10}-0.0005354 x_{6} x_{10} \\ &{}\quad +0.00121 x_{8} x_{11}+0.00184x_9x_{10}-0.02x_2^2 -0.32 \le 0 \\ &{}g_{4}(\varvec{x})=0.74-0.61 x_{2}-0.163 x_{3} x_{8}+0.001232 x_{3} x_{10}\\ &{}\quad -0.166 x_{7} x_{9}+0.227 x_{2}^{2} -0.32\le 0 \\ &{}g_{5}(\varvec{x})=28.98+3.818 x_{3}-4.2 x_{1} x_{2}+0.0207 x_{5} x_{10} \\ &{}\quad +6.63 x_{6} x_{9}-7.7 x_{7} x_{8}+0.32 x_{9} x_{10} -32 \le 0 \\ &{}g_{6}(\varvec{x})=33.86+2.95 x_{3}+0.1792 x_{10}-5.057 x_{1} x_{2} \\ &{}\quad -11.0 x_{2} x_{8}-0.0215 x_{5} x_{10} \\ &{}\quad -9.98 x_{7} x_{8}+22.0 x_{8} x_{9}-32 \le 0 \\ &{}g_{7}(\varvec{x})=46.36-9.9 x_{2}-12.9 x_{1} x_{2} \\ &{}\quad +0.1107 x_{3} x_{10} -32\le 0 \\ &{}g_{8}(\varvec{x})=4.72-0.5 x_{4}-0.19 x_{2} x_{3}-0.0122 x_{4} x_{10} \\ &{}\quad +0.009325 x_{6} x_{10}+0.000191 x_{11}^{2} -4\le 0 \\ &{}g_{9}(\varvec{x})=10.58-0.674 x_{1} x_{2}-1.95 x_{2} x_{8} \\ &{}\quad +0.02054 x_{3} x_{10}-0.0198 x_{4} x_{10}\\ &{}\quad +0.028 x_{6} x_{10} -9.9\le 0 \\ &{}g_{10}(\varvec{x})=16.45-0.489x_3x_7-0.843x_5x_6 \\ &{}\quad +0.0432x_9x_{10}-0.0556 x_{9}x_{11} \\ &{} \quad -0.000786 x_{11}^{2} -15.7\le 0\\ \text {with} &{} 0.5 \le x_{i} \le 1.5, i=1,2,3,4,5,6,7 \\ &{} x_i \in \{0.192,0.345\}, i =8,9 \\ &{} -30 \le x_{i} \le 30, i = 10,11 \\ \end{array} \end{aligned}$$
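As with the previous problems, the formulation maps directly to code. The Python sketch below implements the weight objective and, as an example, the first constraint; indices are zero-based, so `x[0]` corresponds to \(x_1\).

```python
def door_weight(x):
    """Weight objective of the car side-impact problem; note that x6 does
    not appear in f, matching the formulation above."""
    return (1.98 + 4.90 * x[0] + 6.67 * x[1] + 6.98 * x[2]
            + 4.01 * x[3] + 1.78 * x[4] + 2.73 * x[6])

def g1(x):
    """First constraint of the formulation above, g1(x) <= 0."""
    return (1.16 - 0.3717 * x[1] * x[3] - 0.00931 * x[1] * x[9]
            - 0.484 * x[2] * x[8] + 0.01343 * x[5] * x[9]) - 1.0
```

Evaluating at the midpoint of the thickness bounds, with both material variables set to 0.345 and the barrier variables at zero, gives a weight of about 15.5 and satisfies \(g_1\).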

This problem has been solved by several different algorithms, for example, PSO, DE, FA, CS and SNS. The optimization results are shown in Table 10. It can be seen from Table 10 that the proposed algorithms obtain a smaller cost value than the referenced algorithms.

Table 11 Benchmark functions (\(f_{01}-f_{10})\)

5 Conclusion

In this paper, an improved GBB-ICA is proposed to enhance the searching ability and speed up the convergence of GBB-ICA. First, a double Gaussian sampling strategy is designed to enhance the searching ability of GBB-ICA. This strategy generates new positions according to the current positions of the imperialist and the best colony in an empire simultaneously, which ensures a good search direction while avoiding getting stuck in local optima. Second, the quasi-oppositional learning strategy is incorporated into the initialization and assimilation steps, which leads to faster convergence and a more powerful ability to explore valuable regions of the search space. Extensive experiments were conducted on 20 benchmark functions and 4 engineering design problems. The experimental results validate that the proposed algorithm exhibits superior optimization performance in terms of both convergence speed and global search ability.
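The two ingredients summarized above can be sketched compactly in Python. The snippet is an illustration only: it assumes the classical bare-bones Gaussian parameterisation (midpoint mean, coordinate-wise distance as standard deviation) and the usual quasi-oppositional point definition, details not restated in this conclusion.

```python
import random

def bare_bones_sample(colony, guide):
    # classical bare-bones move: mean at the midpoint, std the coordinate-wise
    # distance between colony and guide (an assumed parameterisation)
    return [random.gauss((c + g) / 2.0, abs(c - g))
            for c, g in zip(colony, guide)]

def double_sample(colony, imperialist, best_colony, f):
    # double Gaussian sampling: one candidate guided by the imperialist,
    # one by the best colony; keep the better one under objective f
    a = bare_bones_sample(colony, imperialist)
    b = bare_bones_sample(colony, best_colony)
    return min(a, b, key=f)

def quasi_opposite(x, low, high):
    # quasi-oppositional point: uniform between the interval centre and
    # the opposite point low + high - x (the standard QOBL definition)
    return [random.uniform((l + h) / 2.0, l + h - xi)
            for xi, l, h in zip(x, low, high)]
```

Both candidates stay anchored to promising individuals, which is why the double move tends to preserve a good search direction while still injecting Gaussian diversity.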

In the future, we will strive to design more powerful Gaussian sampling strategies to further increase the global search capacity of GBB-ICA on more complex optimization problems.