Introduction

Over the past years, inspired by animal and social behaviors in nature, researchers have proposed different evolutionary algorithms (EAs) to deal with complex optimization problems. As a branch of EAs, swarm intelligence has attracted extensive interest from many scholars. To date, many excellent algorithms have been developed, such as the firefly algorithm (FA) [1], genetic algorithm (GA) [2], differential evolution (DE) [3], artificial bee colony algorithm (ABC) [4], and particle swarm optimization (PSO) [5]. These methods display good ability in solving optimization problems.

Motivated by the cooperative foraging behavior of bees, Karaboga first proposed the ABC algorithm in 2005 [6]. Compared with other algorithms, ABC has a simple structure, fewer control parameters and a more powerful search capability. Thus, it has been widely studied by many scholars and used to deal with complex problems. Moreover, ABC has been widely adopted for real-world optimization problems in fields such as electrical engineering [7], engineering physics [8], path planning [9], feature classification [10, 11] and vehicular networks [12].

Related work

Although ABC achieves good performance in many areas, it does a poor job of maintaining a balance between global and local search. In other words, it is likely to be trapped in a local optimum, yielding low-precision solutions and slow convergence. Therefore, to overcome these shortcomings, it is necessary to develop improved ABC algorithms. To this end, many excellent ABC variants (ABCs) have been presented from different perspectives in recent years.

According to the strategies used in the literature, these variants can be grouped into three categories: parameter-based, probability-based and movement-equation-based strategies. Brief reviews are given as follows:

(1) Parameter-based strategy. As pointed out in Ref. [13], the convergence of EAs can be adjusted well by introducing reasonable control parameters. Therefore, some scholars have proposed adjustable strategies for the parameters of EAs. For example, Akay and Karaboga [14] analyzed and discussed the influence of the control parameters in ABC. In Ref. [15], a modified version of ABC based on a new control parameter, the modification rate (MR), was proposed, which controls the frequency of perturbation. Using the movement information of the previous solution, Kiran et al. [16] designed a control parameter d to guide the bees in a directional movement. For solving binary problems, Durgut and Aydin [17] proposed an adaptive hybrid ABC algorithm, which includes an adaptive multi-operator selection strategy and a credit assignment mechanism.

(2) Probability-based strategy. As is known, the onlooker bees in ABC select individuals according to a probability defined by the fitness ratio. However, this probability selection strategy may cause great selection pressure in the late stage. To overcome this deficiency, based on knowledge fusion, Wang et al. [18] presented an ABC variant named KFABC, which wisely utilizes three kinds of knowledge strategies to search for new solutions through selection rules. In Ref. [19], Cui et al. proposed a novel ABC named DFSABC_elite, which designs two modified movement equations and a depth-first search framework (DFS); the selection of solutions is implemented through a new parameter mechanism. For constrained optimization problems, Liu et al. [20] proposed an ABC variant named DPLABC, which includes a dynamic penalty method, Levy flight with a logistic map, a further search mechanism and a new boundary handling mechanism. A novel ABC named ABC-AHC was proposed by Chu et al. [21]; this study designed a new selection probability model and presented an adaptive heterogeneous competition augmented ABC algorithm. For multi-objective problems, Chaudhuri and Sahu [22] proposed a multi-objective ABC-based feature weighting technique for Naïve Bayes, in which a mutation probability controls the selection of movement equations in the employed bee and onlooker bee phases, and the selection probability is calculated on the basis of the sorting number of each food source.

(3) Movement-equation-based strategy. In EAs, how to better trade off global search and local search is a constant concern. Many studies have found that this can be achieved by modifying the movement equation. Zhou et al. [23] proposed an enhanced ABC with multi-elite guidance (MGABC), which chooses elite solutions to construct elite groups and designs two modified search equations using the information of these groups. Inspired by PSO, Zhu et al. [24] presented a Gbest-guided ABC algorithm (GABC), in which a modified update equation uses the current global best solution; the experimental results show that GABC is superior to the conventional ABC to some extent. By utilizing the best individual of the previous iteration, Luo et al. [25] provided a new ABC algorithm named COABC, in which a novel update equation is designed for the onlooker bee phase. To improve the performance of ABC, Zhou et al. [26] proposed a modified search strategy using the current global optimal solution and designed a new Gaussian perturbation with an evolutionary rate. Using a best-neighbor-guided movement strategy, Peng et al. [27] presented an algorithm named NABC. Based on a factor library and dynamic search balance, Yu et al. [28] proposed a hybrid, fast and enhanced ABC, called HFEABC.

Table 1 summarizes some representative ABC variants, including their advantages, disadvantages and classification.

Table 1 Summary of related work

Motivation

Since ABC has excellent performance in solving optimization problems, it has been widely studied and applied in many fields [29, 30]. However, it still has some shortcomings when solving complex optimization problems. For example, it suffers from great selection pressure in the late stage, which may be caused by the probability mechanism based on the fitness ratio [31]. To alleviate this problem, two new rules were designed in Refs. [18, 19], which use previous experience to implement different strategies. However, purely empirical guidance may make the algorithm oscillate, so it is meaningful to retain the randomness of the probability mechanism. To reduce the blindness of choosing food sources to a certain extent, a novel probability is designed in this paper.

In addition, the movement equation plays a key role in maintaining a balance between global search and local search. To enhance the exploitation ability of ABC, many excellent ABCs that modify the movement equations or design good learning strategies have been proposed. These methods use useful information such as the current best solution or the elite solutions [23, 32]. However, using only high-quality solutions without controlling the search range may cause the algorithm to over-exploit and fall into a local optimum. Therefore, reasonable neighbor selection and parameter design are considered and discussed in our approach.

Based on the above considerations, to compensate for the shortcomings of conventional ABC, a novel ABC algorithm based on Bayesian estimation [33] (BEABC) is proposed in this paper. The major contributions are listed below:

  1. A new selection probability is designed. Based on Bayesian estimation, the posterior probability of each food source is calculated and used to select those food sources that are likely to produce better offspring.

  2. A novel directional guidance strategy is presented. In the onlooker bee and scout bee phases, two solution search equations are designed using the location information of neighbors and the current optimal solution, which can guide the bees to search in the right direction.

  3. Two dynamically and adaptively adjusted parameters, MR and \(\lambda \), are introduced. By automatically controlling the perturbation frequency and adjusting the search region, respectively, the algorithm can keep a balance between exploration and exploitation.

The rest of the paper is organized as follows: “Artificial bee colony algorithm (ABC)” introduces the conventional ABC framework. In “ABC based on Bayesian estimation (BEABC)”, the motivation and the specific procedures of BEABC are given. “Numerical and engineering experiments” presents and analyzes the experimental results of BEABC and other excellent EAs; the effectiveness and superiority of BEABC are verified by these experiments. In “Conclusion”, the conclusion is drawn. Note that all the abbreviations involved in this paper are explained in the appendix.

Artificial bee colony algorithm (ABC)

ABC mainly contains four phases: the initialization phase, employed bee phase, onlooker bee phase and scout bee phase. The different kinds of bees change their roles iteratively until the termination condition is met. Note that each food source has an associated counter. If a food source is not improved, its counter is incremented by 1; otherwise, the counter is reset to 0. If the quality of a solution has not been improved for more than limit (a preset parameter) trials, the corresponding employed bee is transformed into a scout bee.

Initialization phase

Let \({X} =\{ {x_{1}},{x_{2}}, \ldots ,{x_{\text {SN}}} \}\) be the initial population, which is generated randomly in the entire search space. Each component \(x_{ij}\) of an initial food source is determined by the following equation:

$$\begin{aligned} x_{ij} = x_j^{\min } + r\times (x_{j}^{\max } - x_{j}^{\min }), \end{aligned}$$
(1)

where \(i = 1,2, \ldots , SN\); \(j = 1,2, \ldots , D\); and r is a random number uniformly distributed in [0, 1].
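For illustration, a minimal Python sketch of the initialization phase of Eq. (1) is given below; the population size, dimension, bounds and function name are example assumptions, not values taken from the paper.

```python
import numpy as np

# Sketch of the initialization phase of Eq. (1): every component of every food
# source is sampled uniformly at random inside [x_min, x_max].
def initialize_population(SN, D, x_min, x_max, rng=np.random.default_rng()):
    r = rng.random((SN, D))                   # r ~ U(0, 1) for each dimension
    return x_min + r * (x_max - x_min)        # Eq. (1) applied component-wise

# Example: 20 food sources in a 30-dimensional space bounded by [-100, 100]
X = initialize_population(SN=20, D=30, x_min=-100.0, x_max=100.0)
```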

Employed bee phase

In this phase, to generate new solutions \(v_{i}\), the employed bee executes a random neighborhood search around \(x_{i}\) using the following equation:

$$\begin{aligned} v_{ij} = {x_{ij}} + \varphi _{ij}\times (x_{ij} - x_{kj}), \end{aligned}$$
(2)

where k and j are randomly selected from \(\left\{ {1,2, \ldots , SN}\right\} \) and \(\left\{ {1,2, \ldots ,D} \right\} \), respectively, with \(k\ne i\), and \(\varphi _{ij}\) is a random number in \([-1, 1]\). In (2), only one dimension of \(x_i\) is modified. If \({v}_{i}\) is better than \({x}_{i}\), \({x}_{i}\) is replaced by \({v}_{i}\) and its counter is reset to 0; otherwise, \({x}_{i}\) remains unchanged and its counter is increased by 1.
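A sketch of the employed bee phase built from Eq. (2) follows; the greedy selection and counter update mirror the description above, while the objective function f, the bound handling and all identifiers are illustrative assumptions.

```python
import numpy as np

# Employed bee phase: perturb one random dimension j of x_i towards/away from a
# random neighbour x_k (Eq. (2)), then apply greedy selection (minimization).
def employed_bee_phase(X, f_values, counters, f, bounds, rng=np.random.default_rng()):
    SN, D = X.shape
    lo, hi = bounds
    for i in range(SN):
        k = rng.choice([s for s in range(SN) if s != i])   # neighbour k != i
        j = rng.integers(D)                                # a single dimension j
        phi = rng.uniform(-1.0, 1.0)                       # phi_ij in [-1, 1]
        v = X[i].copy()
        v[j] = np.clip(X[i, j] + phi * (X[i, j] - X[k, j]), lo, hi)   # Eq. (2)
        fv = f(v)
        if fv < f_values[i]:                               # improvement: accept and reset counter
            X[i], f_values[i], counters[i] = v, fv, 0
        else:
            counters[i] += 1                               # unsuccessful trial
    return X, f_values, counters
```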

Onlooker bee phase

In this phase, the fitness ratio will be used as the selection probability to choose food sources. The probability can be calculated by (3):

$$\begin{aligned} P_{i} = \frac{{{\text {Fit}}_{i}}}{{\sum _{{i} = 1}^{\text {SN}} {{\text {Fit}}_{i}} }}. \end{aligned}$$
(3)

The fitness value of food source \(x_{i}\) is defined by (4):

$$\begin{aligned} {\text {Fit}}_{i}= \left\{ \begin{array}{llll} \frac{1}{1+f(x_{i})}, &{}\quad {\text {if}} \, \quad f(x_{i})\ge 0, \\ 1+|f(x_{i})|, &{}\quad \mathrm{else} . \end{array} \right. \end{aligned}$$
(4)

From (3) and (4), it is easy to see that a food source with a larger fitness value has a higher probability of being selected by the onlooker bees. Similar to the employed bee phase, new solutions are generated by (2), and the associated counters are updated accordingly.
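A small sketch of the fitness transformation (4) and the roulette-wheel selection of Eq. (3) may help; the function names are illustrative assumptions.

```python
import numpy as np

# Fitness of Eq. (4): smaller objective values map to larger fitness.
def fitness(f_values):
    f_values = np.asarray(f_values, dtype=float)
    return np.where(f_values >= 0, 1.0 / (1.0 + f_values), 1.0 + np.abs(f_values))

# Roulette-wheel selection with the probability of Eq. (3).
def onlooker_selection(f_values, rng=np.random.default_rng()):
    fit = fitness(f_values)
    p = fit / fit.sum()                  # probability proportional to fitness
    return rng.choice(len(p), p=p)       # index of the food source an onlooker visits

# Example: sources with smaller objective values are picked more often
print(onlooker_selection([3.2, 0.5, 10.0, -1.2]))
```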

Scout bee phase

In this phase, if the counter of a solution exceeds limit, the solution is considered exhausted and abandoned, and the corresponding employed bee is transformed into a scout bee. The location of the scout bee is randomly regenerated in the entire search space according to (1).
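A minimal sketch of this abandonment rule, assuming the counters and the preset limit described above, is:

```python
import numpy as np

# Scout bee phase: any source whose counter exceeds `limit` is abandoned and
# re-initialized with Eq. (1). Identifiers are assumptions for illustration.
def scout_bee_phase(X, counters, limit, x_min, x_max, rng=np.random.default_rng()):
    SN, D = X.shape
    for i in range(SN):
        if counters[i] > limit:
            X[i] = x_min + rng.random(D) * (x_max - x_min)   # random restart, Eq. (1)
            counters[i] = 0
    return X, counters
```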

Based on the above description, the pseudo-code of ABC is briefly displayed in Algorithm 1:

Algorithm 1 Pseudo-code of the conventional ABC

ABC based on Bayesian estimation (BEABC)

In this section, we propose a new ABC variant based on Bayesian estimation, called BEABC, which tries to address the following issues: (1) how to maximize the probability that better solutions are selected by the roulette wheel; (2) how to design reasonable movement equations to enhance the exploitation capability; (3) how to keep a balance between exploration and exploitation. The details of BEABC are given in the following subsections.

Selection probability based on Bayesian estimation

In ABC, to generate new offspring, the bees select food sources by (3), which maintains population diversity to a certain extent. However, it may suffer from selection pressure and premature convergence, especially at the late stage. To overcome these shortcomings, two probability mechanisms were presented in [20, 21]. However, they lack pertinence towards solutions of good quality. Thus, how to choose better solutions with high probability deserves study. From a mathematical point of view, the probability that a particular solution was selected, conditioned on the selection producing a better offspring, is a posterior conditional probability, so Bayesian estimation is a suitable way to calculate it. According to Bayesian estimation, if events \(B_1, B_2, \ldots, B_n\) satisfy \(\bigcup \nolimits _{i = 1}^n {{B_i}} = \Omega \), \({B_i} \bigcap {B_{j}} = \emptyset \) for \(i \ne j\), and \({P(B_i) > 0}\), then \({B_1},{B_2}, \ldots ,{B_n}\) form an exhaustive sequence of events. Given the occurrence of event A, the conditional probability \(P(B_{i}|A)\) can be calculated by (5):

$$\begin{aligned} P({B_{i}}|A) = \frac{{P({B_i})P(A|{B_i})}}{{\sum \nolimits _{i = 1}^n {P(A|{B_i})P({B_i})} }}. \end{aligned}$$
(5)

When this is extended to ABC, without loss of generality, let \(\left\{ {{x_{1}},{x_{2}},\ldots ,{x_{\text {SN}}}} \right\} \) be the exhaustive sequence of events, and A be the event that the selected food source produces a better offspring. Assuming the samples are sufficiently numerous, by the law of large numbers the prior probability \(P(B=x_{i})\) and the conditional probability \(P(A|B=x_{i})\) can be estimated as below:

$$\begin{aligned}&P(B = {x_{i}}) = \frac{1}{\text {SN}}, \end{aligned}$$
(6)
$$\begin{aligned}&P(A|B = {x_{i}}{) = }\frac{{{I(A,{B} = {x_{i}})} }}{{{I({B} = {x_{i}})} }}. \end{aligned}$$
(7)

Substituting (6) and (7) into (5), the posterior probability \(P(B{=}{x_{i}}{|A)}\) can be calculated, where \(I(B = x_{i})\) counts how often \(x_{i}\) has been selected and \(I(A, B = x_{i})\) counts how often such a selection produced a better offspring. Clearly, this probability helps the algorithm choose better solutions of high quality, which may improve the convergence rate of the algorithm.

In addition, note that the denominator of the Bayesian equation (5) is a constant for all food sources; therefore, to reduce the amount of calculation, this paper omits the denominator in the computational process.
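The resulting selection weights can be sketched as below. The counts I(B = x_i) and I(A, B = x_i) are read here as the number of times source i has been selected and the number of those selections that produced a better offspring; this bookkeeping and all identifiers are assumptions made for illustration, not the authors' exact implementation.

```python
import numpy as np

# Posterior selection weights from Eqs. (5)-(7), with the constant denominator of
# Eq. (5) omitted and the result normalized so it can drive roulette selection.
def bayesian_selection_weights(select_counts, success_counts, SN):
    prior = 1.0 / SN                                               # Eq. (6)
    likelihood = np.divide(success_counts, select_counts,
                           out=np.zeros(SN),
                           where=np.asarray(select_counts) > 0)    # Eq. (7), 0 if never selected
    weights = prior * likelihood           # numerator of Eq. (5)
    if weights.sum() == 0:                 # no history yet: fall back to uniform selection
        return np.full(SN, 1.0 / SN)
    return weights / weights.sum()

# Example usage
w = bayesian_selection_weights(select_counts=[4, 2, 0, 5], success_counts=[3, 0, 0, 1], SN=4)
print(w)
```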

Directional guidance mechanism

As pointed out above, the original movement equation of ABC performs well in exploration but weakly in exploitation. To adjust the global and local search capacities, inspired by different swarm intelligence algorithms, scholars have proposed new movement equations [5, 34]. From (2), we know that the purpose of randomly selecting neighbors is to maintain population diversity. However, this movement equation is largely blind and lacks directional guidance. Thus, to reduce this blindness, a directional guidance mechanism is proposed as follows. Let \(x_{n}\) be the neighbor selected randomly for \(x_{i}\). Two cases are considered:

Case 1: \(f(x_{n})<f(x_{i})\). Under this case, the new solution \(v_{i}\) is produced around \(x_{i}\) as below:

$$\begin{aligned} v_{ij} = \left\{ \begin{array}{ll} (1-\lambda )\times x_{ij}-\lambda \times \psi _{ij}\times (x_{ij}-x_{nj}), &{} \mathrm{if} \, x_{ij}>x_{nj},\\ (1-\lambda )\times x_{ij}+\lambda \times \psi _{ij}\times (x_{nj}-x_{ij}), &{} \mathrm{if} \, x_{ij}<x_{nj}. \end{array}\right. \end{aligned}$$
(8)

In this case, the location of \(x_{n}\) may be closer to the global best solution than \(x_{i}\), so it makes sense for \(x_{i}\) to move towards \(x_{n}\). From (8), we can see that \(x_{i}\) is guaranteed to move towards \(x_{n}\) no matter where \(x_i\) lies. To help readers better understand, the following figure illustrates this process:

Fig. 1 The process of movement

In Fig. 1, it can be seen that, when \(x_{ij}<x_{nj}\), the location of \(x_{nj}\) has a great chance of lying to the left of the high-quality solution, so moving \(x_{ij}\) towards \(x_{nj}\) may bring it closer to the global best solution. In the other case, \(x_{ij}>x_{nj}\) means that \(x_{ij}\) lies to the right of \(x_{nj}\), so it makes more sense for \(x_{ij}\) to move left, which may generate a superior candidate solution. In a word, under the guidance of its neighbor \(x_{n}\), \(x_{i}\) always moves towards the global best solution.

Case 2: \(f(x_{n})>f(x_{i})\). This case means that the neighbor \(x_{n}\) is worse than \(x_{i}\). In this case, by adopting the information of the current global optimal solution, a modified movement equation is presented below:

$$\begin{aligned} v_{ij} = \left\{ \begin{array}{ll} (1-\lambda )\times x_{gj} + \lambda \times \varphi _{ij}\times (x_{gj}-x_{ij}), &{} \mathrm{if} \, \mathrm{rand} \, < \mathrm{MR},\\ x_{ij},&{} \mathrm{else.} \end{array} \right. \end{aligned}$$
(9)

From (9), the frequency of perturbation is controlled by MR: when rand is less than MR, the candidate solution \(v_{ij}\) is generated around \(x_{gj}\), which guarantees that the new solution has relatively high quality; otherwise, \(v_{ij}\) is equal to \(x_{ij}\).

In the scout bee phase, the task of the bee is to discard the obsolete solution and regenerate a new one in the whole solution space. In ABC, although the new food source generated by (1) maintains population diversity, it is blind to some extent and may reduce the convergence rate. Based on the above analysis, a better food source may be obtained if the information of the current global best solution is adopted. Thus, to generate a better solution, the following equation is presented:

$$\begin{aligned} x_{ij}=(1-\lambda )\times x_{gj} + \lambda \times \varphi _{ij} \times (x_{gj}-x_{ij}). \end{aligned}$$
(10)

From (10), the new food source \(x_{i}\) is generated around \(x_{g}\). This strategy may increase the convergence rate of BEABC.
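A hedged sketch of the directional guidance equations (8)–(10) is given below. The ranges assumed for \(\psi _{ij}\) (non-negative, so that \(x_i\) moves towards \(x_n\)) and \(\varphi _{ij}\), as well as all function names, are illustrative assumptions.

```python
import numpy as np

# x_i: current source, x_n: its random neighbour, x_g: current global best;
# only dimension j is updated, as in the original ABC movement equation.
def guided_update(x_i, x_n, x_g, f_i, f_n, j, lam, MR, rng=np.random.default_rng()):
    v = x_i.copy()
    if f_n < f_i:                                   # Case 1: neighbour is better, Eq. (8)
        psi = rng.random()                          # assumed psi_ij in [0, 1]
        if x_i[j] > x_n[j]:
            v[j] = (1 - lam) * x_i[j] - lam * psi * (x_i[j] - x_n[j])
        else:
            v[j] = (1 - lam) * x_i[j] + lam * psi * (x_n[j] - x_i[j])
    else:                                           # Case 2: neighbour is worse, Eq. (9)
        if rng.random() < MR:
            phi = rng.uniform(-1.0, 1.0)
            v[j] = (1 - lam) * x_g[j] + lam * phi * (x_g[j] - x_i[j])
        # otherwise v keeps x_i unchanged in dimension j
    return v

# Eq. (10): an abandoned source is regenerated around the global best x_g.
def scout_reinitialize(x_i, x_g, lam, rng=np.random.default_rng()):
    phi = rng.uniform(-1.0, 1.0, size=x_i.shape)
    return (1 - lam) * x_g + lam * phi * (x_g - x_i)
```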

Parameters design

In Ref. [15], MR was first proposed and set to a fixed constant. Generally, linearly varying parameters have some advantages over constant parameters in adjusting an algorithm [35]. Furthermore, a nonlinear control parameter updating rule was proposed for EAs [36], which performs better than linearly varying parameters. In this paper, to strengthen the performance of BEABC, a new adaptively varying parameter MR is proposed as follows:

$$\begin{aligned} \mathrm{MR}=\mathrm{MR}_{\max }\times \exp \left( \frac{-\mathrm{Dt}}{\mathrm{MaxDt}}\right) . \end{aligned}$$
(11)

From (11), MR decreases exponentially as the number of iterations increases, which means the candidate solutions have more chances of being generated near the current global optimal solution at an early stage. This may improve the convergence rate to some extent. In the late stage of the algorithm, \(v_{i}\) is more often generated around \(x_{i}\), which enhances the local search ability.

Meanwhile, it should be noted that \(\lambda \) plays a key role in adjusting the search region in (8) and (9). In this paper, \(\lambda \) is defined as follows:

$$\begin{aligned} \lambda =\frac{\mathrm{MaxDt}-\mathrm{Dt}+1}{\mathrm{MaxDt}}. \end{aligned}$$
(12)

As shown in (12), at the beginning more weight is placed on the second term of the movement equations (8) and (9), which strengthens the exploration capacity of BEABC. As the algorithm proceeds, \(\lambda \) becomes smaller and smaller, so the search process focuses on the first term of the movement equations, which means that the exploitation ability of BEABC is enhanced.
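The two schedules (11) and (12) are simple to implement; a sketch follows, where Dt is the current iteration, MaxDt is the maximum iteration count, and the default MR_max of 0.9 reflects the sensitivity test reported later.

```python
import numpy as np

def mr_schedule(Dt, MaxDt, MR_max=0.9):
    return MR_max * np.exp(-Dt / MaxDt)        # Eq. (11): exponential decay of MR

def lambda_schedule(Dt, MaxDt):
    return (MaxDt - Dt + 1) / MaxDt            # Eq. (12): roughly linear decay towards 0

# Example: both parameters shrink as the search proceeds
MaxDt = 1000
for Dt in (1, 500, 1000):
    print(Dt, round(float(mr_schedule(Dt, MaxDt)), 3), round(lambda_schedule(Dt, MaxDt), 3))
```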

On the basis of the description above, the detailed pseudo-code of BEABC is shown in Algorithm 2. The source code is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/104205-an-improved-artificial-bee-colony-algorithm-based-on-bayesia.

Algorithm 2 Pseudo-code of BEABC

The flowchart of BEABC is shown in Fig. 2. As shown in the figure, the selection probability is calculated by (5). In the onlooker bee and scout bee phases, the food sources are updated according to the new movement equations (8)–(10). In addition, the evolutionary perturbations are controlled by (11) in each iteration.

Fig. 2 Flowchart of BEABC

Numerical and engineering experiments

To validate the performance of BEABC, 31 classical test functions (24 single-objective and 7 multi-objective test functions) and 2 real-world optimization problems are used in this section. The experimental results are given and discussed. All the algorithms are coded in Matlab R2017a and run on a PC with a Core i5 CPU, 2 GB of memory and Windows 7.

In “Sensitivity test for the parameter MR”, a sensitivity test of MR is discussed. In “Experiment 1”, the selected benchmark functions are tested by BEABC and other excellent ABCs, and the discussion and analysis of the results are given. In “Experiment 2”, BEABC is compared with some excellent EAs on 2 practical engineering problems.

Sensitivity test for the parameter MR

As pointed out in Ref. [15], the control parameter modification rate (MR) can effectively enhance the performance of ABC. From (11), we can see that a large MR means that the candidate solution \(v_{i}\) obtains more information from its parents, which can improve the quality of \(v_{i}\); however, population diversity may be sacrificed. A small MR, on the other hand, has limited room to improve the performance of the algorithm, especially in the early stage. Thus, an exponentially varying MR that gradually decreases from \({\text {MR}}_{\text {max}}\) is introduced, and a sensitivity test is carried out to study the effect of \({\text {MR}}_{\text {max}}\). Several complex test functions with D=30 are selected for this test. Considering the improvement of the exploitation capacity of BEABC, different values of \({\text {MR}}_{\text {max}}\), i.e., 0.5, 0.6, 0.7, 0.8 and 0.9, are tested. The numerical results and the box-plots are given in Table 2 and Fig. 3, respectively. From Table 2, it can be seen that BEABC performs better on most functions when \({\text {MR}}_{\text {max}}\) takes a larger value. The reason may be that the candidate solutions have more opportunities to inherit the parents’ information. Figure 3 also shows that BEABC performs best when \({\text {MR}}_{\text {max}}\) is 0.9. From the above discussion, setting \({\text {MR}}_{\text {max}}\) to 0.9 may be a good choice.

Table 2 Test results of BEABC with different \({\text {MR}}_{\text {max}}\)
Fig. 3 Sensitivity test for MR

Experiment 1

Single-objective test functions

In this experiment, 24 classical test functions [1, 37] are selected to validate the effectiveness of BEABC. Table 3 gives the detailed information of these test functions: \(f_{1}\)–\(f_{8}\) are unimodal functions, \(f_{9}\)–\(f_{16}\) are multimodal functions, and \(f_{17}\)–\(f_{24}\) are rotated and shifted functions. Several excellent ABCs are chosen for comparison with BEABC, including dABC [16], MGABC [23], GABC [24] and COABC [25]. The comparison results of all algorithms are shown in Tables 5, 6 and 7, respectively. The evaluation indicators include the minimum (Min), mean (Mean), standard deviation (Std), rank and t test. Based on Mean, the ranking makes it very clear how the algorithms perform. The t test is used to ascertain whether the results obtained by BEABC are statistically different from those of its competitors; the significance level \(\alpha \) is set to 0.05. ‘+’, ‘\(\approx \)’ and ‘−’ mean that BEABC is statistically better than, equal to, or significantly inferior to its peers, respectively. To visually show the convergence rate of these algorithms, the convergence curves on the benchmark functions are displayed in Figs. 4, 5 and 6.

Table 3 Benchmark test functions

To ensure fairness, the basic parameters and running environment of all the algorithms are the same. Table 4 displays the specific parameters of these algorithms, which are the same as in the corresponding literature. Meanwhile, each numerical function is independently run 30 times with each algorithm.

Fig. 4 Convergence curves of \(f_{1}\)–\(f_{8}\)

Fig. 5 Convergence curves of \(f_{9}\)–\(f_{16}\)

Fig. 6 Convergence curves of \(f_{17}\)–\(f_{24}\)

Unimodal functions: Table 5 gives the numerical results on the unimodal functions. From the table, BEABC obtains the global best solution on each function and dominates all ABCs in terms of Min. According to Mean, BEABC takes first place among all algorithms, and the t test implies that BEABC is statistically significantly superior to the compared algorithms on \(f_{1}\), \(f_{2}\), \(f_{4}\), \(f_{6}\) and \(f_{8}\). For \(f_{3}\) and \(f_{5}\), BEABC performs equally to MGABC but is superior to GABC, dABC and COABC. Although all ABCs obtain the global best solution and perform equally well on \(f_{7}\), BEABC and MGABC have a smaller Std. Figure 4 shows that all these algorithms reached a local optimum on \(f_{6}\), but BEABC has the fastest convergence rate. The reason may be that the new probability mechanism adopted by BEABC prefers to select good solutions.

Table 4 Parameters for compared EA algorithms

Multimodal functions: From Table 6, for \(f_{10}\), \(f_{12}\), \(f_{14}\) and \(f_{15}\), BEABC outperforms the compared ABCs in terms of Min and Mean and takes first place. Meanwhile, the t test indicates that BEABC is statistically significantly better than all compared ABCs on most functions. For \(f_{9}\) and \(f_{13}\), BEABC is superior to MGABC and dABC and performs equally to GABC and COABC. However, for \(f_{16}\), BEABC only performs better than dABC. From Fig. 5, although more than one algorithm reaches the theoretical global best solution on \(f_{9}\), \(f_{11}\) and \(f_{13}\), the convergence rate of all peers is much lower than that of BEABC. Unfortunately, all these algorithms are caught in a poor location on \(f_{16}\). Based on the above discussion, although BEABC does well on most multimodal functions, there is still room to enhance its search ability.

Rotated and shifted functions: From Table 7, BEABC performs well on \(f_{17}\)–\(f_{20}\) and obtains the global best solutions. Moreover, the t test implies that it is significantly better than all its competitors. For \(f_{23}\)–\(f_{24}\), BEABC does not perform very well on the shifted functions, ranking second to MGABC and COABC. However, although the performance of BEABC is inferior to MGABC there, it is still significantly superior to its peers in most cases according to Rank and the t test. From Fig. 6, BEABC converges rapidly on the rotated functions but shows signs of premature convergence on the shifted functions. The reason may be that the shifting does not fit the search process of the algorithm.

The above experimental results indicate that BEABC performs well on most single-objective functions. Although oscillations are inevitable when solving some complex optimization problems, its convergence speed is clearly dominant. Due to the particularity of some functions, especially the shifted functions, the performance of the proposed algorithm needs to be further improved.

Significance statistics

To analyze the significant differences between BEABC and the other ABCs, the comprehensive statistical results obtained by the t test at the \(\alpha =0.05\) level are given in Table 8. From the table, compared with its peers, BEABC is significantly better in 73 cases, and the overall experimental success ratio is 76\(\%\). The statistical results show that BEABC performs better than the comparison algorithms.

Table 5 Computational results of unimodal test functions
Table 6 Computational results of multimodal test functions
Table 7 Computational results of rotated and shifted test functions

Multi-objective functions

To further investigate the comprehensive performance of the proposed algorithm, seven continuous multi-objective optimization problems (MOPs), UF1–UF7 [38], are carefully selected as test functions. All MOPs are unconstrained (bound-constrained) two-objective test problems, and the number of decision variables n is 30. Tables 9 and 10 give the details of UF1–UF7, including their function definitions, PF and PS.

BEABC is compared with several different algorithms, including MOEA/D [39], NSGAIILS [40], MTS [41], OMOEAII [42] and MO-ABC/DE [43]. Let M be a set of uniformly distributed points along the Pareto front in the objective space, and C be an approximation set to the Pareto front. IGD is used as the comparison indicator to evaluate the results obtained, which is defined as follows:

$$\begin{aligned} \mathrm{IGD}(C,M)=\frac{\sum _{v\in M}d(v,C)}{|M|}. \end{aligned}$$
(13)

Here, \(d(v,C)\) denotes the minimum Euclidean distance from a point v to the set C. Due to space limits, the results in this paper are taken directly from [43], except for BEABC. The average IGD is obtained by running each algorithm 30 times independently on each function. The results are given in Table 11. From the table, we find that BEABC takes first place on most test functions. For UF3 and UF7, the performance of BEABC is slightly worse than that of MOEA/D. In addition, the IGD values obtained by BEABC are very close to 0, which means that the PFs obtained by BEABC are very close to the true Pareto front.
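For reference, a small sketch of the IGD computation is given below; the vectorized implementation and the example data are assumptions chosen for illustration.

```python
import numpy as np

# IGD of Eq. (13): average, over the reference set M sampled on the true Pareto
# front, of the minimum Euclidean distance from each reference point to C.
def igd(C, M):
    C, M = np.asarray(C, float), np.asarray(M, float)
    dists = np.linalg.norm(M[:, None, :] - C[None, :, :], axis=-1)  # |M| x |C| distances
    return dists.min(axis=1).mean()

# Example with a two-objective front
M = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
C = np.array([[0.1, 0.9], [0.9, 0.1]])
print(igd(C, M))
```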

Computational time complexity

The computational time complexity of our algorithm and the other ABCs is analyzed in this subsection. The details are presented in Table 12. For a problem f, assume that O(f) is the computational time complexity of evaluating its function value, and SN is the population size. The time complexity of conventional ABC is \(O(\mathrm{SN}*f+\mathrm{SN}*f)=O(\mathrm{SN}*f)\) per iteration [44]. BEABC designs a dynamic control parameter MR to determine the frequency of search, and the complexity of the initialization phase is \(O(\mathrm{SN}*f)\). The complexity of the employed and onlooker bee phases is \(O(\mathrm{MR}*(\mathrm{SN}*f+\mathrm{SN}*f))\). Therefore, the total computational complexity of BEABC is \(O(\mathrm{SN}*f)\) at each iteration.

From Table 12, we can see that most ABCs have the same complexity as conventional ABC, except for dABC. The main reason is that they do not have redundant loops for calculating function values. Moreover, BEABC does not need to calculate the fitness values of the function, which may save time to a certain extent.

Table 8 Significance statistics

Experiment 2

According to [45, 46], real-world engineering optimization problems are good choices for verifying the effectiveness of an optimization algorithm. So, to evaluate BEABC effectively, two engineering optimization problems are selected in this subsection. Since these two practical problems are constrained, they need to be transformed into unconstrained optimization problems to facilitate the calculation. Several methods can be used, including the rejection method, repair method, operator correction method and penalty function method. In this paper, the penalty function method is used, as sketched below.
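A minimal sketch of a static penalty transformation follows; the squared-violation form, the coefficient rho and the function names are assumptions for illustration, since the paper does not state its exact penalty formulation.

```python
# Turn min f(x) s.t. g_k(x) <= 0 into an unconstrained problem by adding the
# squared constraint violations, weighted by a (large) penalty coefficient rho.
def penalized(f, constraints, rho=1e6):
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + rho * violation
    return F

# Usage: F = penalized(objective, [g1, g2, ...]); F can then be minimized by
# BEABC or any other EA as an ordinary unconstrained objective.
```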

Structural design of tension–pressure spring

This optimization design problem is described in Ref. [47]; it minimizes the weight of a tension spring under constraints such as shear stress and frequency of vibration. There are three main design variables: the coil diameter \(d(x_{1})\), the mean diameter of the spring coil \(D(x_{2})\) and the effective number of coils \(N(x_{3})\). Figure 7 shows the schematic diagram of this design problem. The engineering optimization problem is formulated as:

$$\begin{aligned}&\min \quad f(x)=x_{2}x_{1}^2(x_{3}+2) \\&\begin{array}{r@{\quad }l@{}l@{\quad }l} \mathrm{s.t.}&{} g_{1}(x)=1-\frac{x_{2}^3x_{3}}{71785x_{1}^4}\le 0,\\ &{}g_{2}(x)=\frac{4x_{2}^2-x_{1}x_{2}}{12566(x_{2}x_{1}^3-x_{1}^4)}+\frac{1}{5108x_{1}^2}-1\le 0,\\ &{}g_{3}(x)=1-\frac{140.45x_{1}}{x_{2}^2x_{3}}\le 0,\\ &{}g_{4}(x)=\frac{x_{1}+x_{2}}{1.5}-1\le 0,\\ &{}\mathrm{where}\\ &{}0.05\le x_{1} \le 2,\quad 0.25\le x_{2} \le 1.3,\\ &{}2\le x_{3} \le 15. \end{array} \end{aligned}$$
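For illustration, this problem can be coded as plain Python functions and combined with the penalty wrapper sketched above; the identifiers are assumptions, and the same pattern applies to the speed reducer problem in the next subsection.

```python
# Tension-compression spring problem with x = (x1, x2, x3) = (d, D, N).
def spring_objective(x):
    x1, x2, x3 = x
    return x2 * x1**2 * (x3 + 2)

spring_constraints = [
    lambda x: 1 - (x[1]**3 * x[2]) / (71785 * x[0]**4),                        # g1
    lambda x: (4*x[1]**2 - x[0]*x[1]) / (12566 * (x[1]*x[0]**3 - x[0]**4))
              + 1 / (5108 * x[0]**2) - 1,                                      # g2
    lambda x: 1 - 140.45 * x[0] / (x[1]**2 * x[2]),                            # g3
    lambda x: (x[0] + x[1]) / 1.5 - 1,                                         # g4
]

spring_bounds = [(0.05, 2.0), (0.25, 1.3), (2.0, 15.0)]   # box constraints on (x1, x2, x3)

# Example usage with the hypothetical `penalized` helper from the previous sketch:
# F = penalized(spring_objective, spring_constraints)
# print(F([0.05, 0.37, 10.0]))
```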
Table 9 Multi-objective benchmark functions
Table 10 Multi-objective benchmark functions (continue)
Table 11 Comparison results using the Mean (IGD) and Std (IGD) of MOPs
Table 12 The computational time complexity
Fig. 7 Structure design of tension–pressure spring

Several optimization algorithms have been developed to solve this problem, such as CNPSO [5], SPA [48], MPM [49], IOD [50], TGA [51] and CPSO [52]. Table 13 contains the optimum solutions and the corresponding constraint values obtained by these algorithms. Besides, Min, Mean and Std are used as the evaluation indicators; Table 14 shows the statistical results. Except for BEABC, the solutions are taken directly from [5]. From the results, although the constraint \(g_{1}(x)\) is only approximately satisfied, the violation of 1.5814040E−07 is small enough to be ignored in practice. In terms of Min (f(x)), BEABC outperforms the other algorithms, so we can conclude that BEABC is efficient in solving this optimization problem.

Table 13 Comparison of best solution for this problem
Table 14 Comparison of statistical results for this problem

Design of a speed reducer gearing system

The gear design problem is to minimize the weight of a speed reducer subject to constraints on the bending stress of the gear teeth, the transverse deflections of the shafts, the surface stress and the stresses in the shafts [53]. This optimization problem involves seven variables, denoted \(x_{1}\)–\(x_{7}\). All the variables are continuous except the number of teeth (\(x_{3}\)), which is an integer. The reducer diagram is depicted in Fig. 8. The design objective function is described as:

$$\begin{aligned}&\min \quad f(x)=0.7854x_{1}x_{2}^2(3.3333x_{3}^2\\&\quad \quad \quad +14.9334x_{3}-43.0934)-1.508x_{1}(x_{6}^2\\&\quad \quad \quad \quad +x_{7}^2)+7.4777(x_{6}^3+x_{7}^3)+0.7854(x_{4}x_{6}^2+x_{5}x_{7}^2) \\&\begin{array}{r@{\quad }l@{}l@{\quad }l} s.t.&{} g_{1}(x)=\frac{27}{x_{1}x_{2}^2x_{3}}-1\le 0,\\ &{}g_{2}(x)=\frac{397.5}{x_{1}x_{2}^2x_{3}^2}-1\le 0,\\ &{}g_{3}(x)=\frac{1.93x_{4}^3}{x_{2}x_{6}^4x_{3}}-1\le 0,\\ &{}g_{4}(x)=\frac{1.93x_{5}^3}{x_{2}x_{7}^4x_{3}}-1\le 0,\\ &{}g_{5}(x)=\frac{((745(\frac{x_{4}}{x_{2}x_{3}}))^2+16.9*10^6)^\frac{1}{2}}{110x_{6}^3}-1\le 0,\\ &{}g_{6}(x)=\frac{((745(\frac{x_{5}}{x_{2}x_{3}}))^2+157.5*10^6)^\frac{1}{2}}{85x_{7}^3}-1\le 0,\\ &{}g_{7}(x)=\frac{x_{2}x_{3}}{40}-1\le 0,\\ &{}g_{8}(x)=\frac{5x_{2}}{x_{1}}-1\le 0,\\ &{}g_{9}(x)=\frac{x_{1}}{12x_{2}}-1\le 0,\\ &{}g_{10}(x)=\frac{1.5x_{6}+1.9}{x_{4}}-1\le 0,\\ &{}g_{11}(x)=\frac{1.1x_{7}+1.9}{x_{5}}-1\le 0,\\ &{}where\\ &{}2.6\le x_{1} \le 3.6,\quad 0.7\le x_{2} \le 0.8,\\ &{}17\le x_{3} \le 28,\quad {}7.3\le x_{4},x_{5} \le 8.3,\\ &{}2.9\le x_{6} \le 3.9,\quad 5\le x_{7} \le 5.5.\\ \end{array} \end{aligned}$$
Fig. 8 Design of a speed reducer gearing system

Table 15 Comparison of best solution for speed reducer design problem

By reviewing the literature, we find that several intelligent algorithms have been applied to this design problem, including DELC [54], DEDS [55], HEAA [56], PSO-DE [57], MDE [58] and MBA [59]. The optimal solutions and the statistical results obtained by each algorithm, taken directly from [59], are given in Tables 15 and 16, respectively. From the results, although the Min and Mean of BEABC are equal to those of DELC and DEDS, its Std is the smallest among all competitors. It is easy to conclude that BEABC is capable of finding the best solution and has good robustness on this problem.

Table 16 Comparison of statistical results for speed reducer design problem

Conclusion

A novel ABC algorithm named BEABC is presented in this paper. First, BEABC uses a new probability based on Bayesian estimation instead of the fitness ratio to accelerate the convergence rate. With the new probability selection mechanism, the onlooker bees prefer to choose the solutions that can produce good offspring. Second, in the onlooker bee and scout bee phases, a directional guidance strategy is presented to balance exploration and exploitation. Finally, the sensitivity test of MR, the experimental results on the numerical functions and the two practical optimization problems show the effectiveness and superiority of BEABC. Although BEABC outperforms its competitors, there is still room to improve its performance on the rotated and shifted problems. In the future, we will further develop the potential of BEABC.