An improved artificial bee colony algorithm based on Bayesian estimation

Artificial bee colony (ABC) algorithm was proposed by mimicking the cooperative foraging behavior of bees. As a member of the swarm intelligence family, ABC has advantages in handling optimization problems. However, its exploration capacity outweighs its exploitation capacity, which may lead to slow convergence and lower solution accuracy. Hence, to enhance the performance of the algorithm, a novel ABC based on Bayesian estimation (BEABC) is presented in this paper. First, instead of using the fitness ratio, the selection probability in ABC is replaced with a new probability calculated by Bayesian estimation. Second, to help the bees exploit more useful information when updating food sources, a directional guidance mechanism is designed for the onlooker bees and scout bees. Finally, the comprehensive performance of BEABC is evaluated on 24 single-objective test functions. The numerical results indicate that BEABC dominates its peers on most test functions, and the statistics show that the significant excellence rate of BEABC is 76% in the overall comparison. In addition, to further test the performance of BEABC, seven multi-objective problems and two real-world optimization problems are solved. The comparison results show that BEABC achieves better results than the other EA competitors.


Introduction
Over the past years, inspired by animal and social behaviors in nature, researchers have proposed various evolutionary algorithms (EAs) to deal with complex optimization problems. As a branch of EAs, swarm intelligence has attracted extensive interest from many scholars. To date, many excellent algorithms have been developed, such as the firefly algorithm (FA) [1], genetic algorithm (GA) [2], differential evolution (DE) [3], artificial bee colony algorithm (ABC) [4], particle swarm optimization (PSO) [5], and so on.

Related work
Although ABC achieves good performance in many areas, it does a poor job of maintaining a balance between global and local search capacity. In other words, it is likely to be caught in a local optimum and to yield low-precision solutions as well as slow convergence. Therefore, to overcome these shortcomings, it is necessary to develop improved ABC algorithms. To this end, many ABC variants (ABCs) have been presented from different aspects in recent years.
According to the strategies used in the literature, they can be grouped into three categories: parameter-based, probability-based and movement-equation-based strategies. Brief reviews are as follows. (1) Parameter-based strategy. As pointed out in Ref. [13], the convergence of EAs can be adjusted well by introducing reasonable control parameters. Hence, some scholars have proposed adjustable parameter strategies for EAs. For example, by analyzing the control parameters, Akay and Karaboga [14] discussed the influence of parameters in ABC. In Ref. [15], based on a new control parameter, the modification rate (MR), a modified version of ABC was proposed that controls the frequency of perturbation. Using the movement information of the previous solution, Kiran et al. [16] designed a control parameter d to guide the bees in a directional movement. For solving binary problems, Durgut and Aydin [17] proposed an adaptive hybrid ABC that includes an adaptively selected multi-operator strategy and a credit assignment mechanism.
(2) Probability-based strategy. As is known, the onlooker bees select individuals according to the probability defined by the fitness ratio in ABC, but this selection strategy may cause great selection pressure in the late stage. To overcome this deficiency, based on knowledge fusion, Wang et al. [18] presented an ABC variant, KFABC, which utilizes three kinds of knowledge strategies to search new solutions via selection rules. In Ref. [19], Cui et al. proposed a novel ABC named DFSABC_elite, which designs two modified movement equations and a depth-first search (DFS) framework, and implements solution selection through a new parameter mechanism. For constrained optimization problems, Liu et al. [20] proposed an ABC variant, DPLABC, which includes a dynamic penalty method, Lévy flight with a logistic map, a further-search mechanism and a new boundary handling mechanism. A novel ABC named ABC-AHC was proposed by Chu et al. [21]; this study designed a new selection probability model and presented an adaptive heterogeneous competition augmented ABC algorithm. For multi-objective problems, Chaudhuri and Sahu [22] proposed a multi-objective ABC-based feature weighting technique for Naïve Bayes, in which a mutation probability controls the selection of movement equations in the employed bee and onlooker bee phases, and the selection probability is calculated on the basis of the sorting number of each food source.
(3) Movement-equation-based strategy. In EAs, how to better balance the capacities of global search and local search is a constant concern. Many studies have found that this can be achieved by modifying the movement equation. Zhou et al. [23] proposed an enhanced ABC with multi-elite guidance (MGABC), which chooses elite solutions to construct elite groups and designs two modified search equations using the elite groups' information. Inspired by the PSO algorithm, Zhu et al. [24] presented a Gbest-guided ABC algorithm (GABC), in which a modified update equation uses the current global optimum solution; experimental results showed that GABC can be superior to conventional ABC to some extent. By utilizing the best individual of the previous iteration, Luo et al. [25] provided a new ABC algorithm named COABC, in which a novel update equation was designed for the onlooker bee phase. Aiming to improve the performance of ABC, Zhou et al. [26] proposed a modified search strategy using the current global optimal solution and designed a new Gaussian perturbation with an evolutionary rate. Using a best-neighbor-guided movement strategy, Peng et al. [27] presented an algorithm named NABC. Based on a factor library and dynamic search balance, Yu et al. [28] proposed a hybrid, fast, and enhanced ABC, called HFEABC. Table 1 summarizes some representative ABC variants, including their advantages, disadvantages and classification.

Motivation
Since ABC performs excellently in solving optimization problems, it has been widely studied and applied in many fields [29,30]. Nevertheless, when solving some complex optimization problems, it still has shortcomings. For example, it suffers great selection pressure in the late stage, which may be caused by the probability mechanism based on the fitness ratio [31]. To alleviate this problem, two new rules were designed in Refs. [18,19], which use previous experience to implement different strategies. However, purely empirical guidance may make the algorithm oscillate, so it is meaningful to retain the randomness of the probability mechanism. To reduce the blindness of choosing food sources to a certain extent, a novel probability is designed in this paper.
In addition, as we know, the movement equation plays a key role in maintaining a balance between global search and local search. To enhance the exploitation ability of ABC, many excellent ABCs focusing on modified movement equations or good learning strategies have been proposed. In these methods, useful information is exploited, such as the information of the current best solution or the elite solutions [23,32]. However, using only high-quality solutions without controlling the search range may cause the algorithm to over-exploit and fall into a local optimum. Therefore, reasonable neighbor selection and parameter design are considered and discussed in our approach.
Based on the above considerations, to compensate for the shortcomings of conventional ABC, a novel ABC algorithm based on Bayesian estimation [33] (BEABC) is proposed in this paper. The major contributions are listed below:
1. A new selection probability is designed. Based on Bayesian estimation, the posterior probability of each food source is calculated, which is used to select those food sources that are likely to produce better offspring.
2. A novel directional guidance strategy is presented. In the onlooker bee and scout bee phases, using the location information of neighbors and the current optimal solution, two solution search equations are designed, which can guide the bees to search in the right direction.
3. Two dynamically and adaptively adjusted parameters, MR and λ, are introduced. By automatically controlling the disturbance frequency and adjusting the search region, respectively, the algorithm can keep a balance between exploration and exploitation.
The rest of the paper is outlined as follows: "Artificial bee colony algorithm (ABC)" introduces the conventional ABC framework. In "ABC based on Bayesian estimation (BEABC)", the motivation and the specific processes of BEABC are given. "Numerical and engineering experiments" presents and analyzes the experimental results of BEABC and other excellent EAs; the effectiveness and superiority of BEABC are verified by these experiments.
In "Conclusion", the conclusion is drawn. Note that all the abbreviations involved in this paper are explained in the appendix.

Artificial bee colony algorithm (ABC)
ABC mainly contains four phases: the initialization phase, employed bee phase, onlooker bee phase and scout bee phase. The different kinds of bees change their roles iteratively until the termination condition is met. Note that each food source has an associated counter: if a food source is not improved, its counter is incremented by 1; otherwise the counter is reset to 0. If the quality of a solution has not been improved after more than limit (a preset parameter) trials, the corresponding employed bee is transformed into a scout bee.

Initialization phase
Let X = {x_1, x_2, …, x_SN} be the initial population, generated randomly in the entire search space. The j-th component x_ij of each initial food source is determined by the following equation:

$$x_{ij} = x_j^{\min} + \mathrm{rand}(0, 1)\,\left(x_j^{\max} - x_j^{\min}\right), \qquad (1)$$

where i = 1, 2, …, SN, j = 1, 2, …, D, and [x_j^min, x_j^max] is the range of the j-th dimension.

Employed bee phase
In this phase, to generate a new solution v_i, the employed bee performs a random neighborhood search around x_i using the following equation:

$$v_{ij} = x_{ij} + \phi_{ij}\,\left(x_{ij} - x_{kj}\right), \qquad (2)$$

where k and j are randomly selected from {1, 2, …, SN} and {1, 2, …, D}, respectively, with k ≠ i, and φ_ij is a uniform random number in [−1, 1]. In (2), only one dimension of x_i is modified. If v_i is better than x_i, x_i is replaced by v_i and the counter for x_i is reset to 0; otherwise x_i remains unchanged and its counter increases by 1.
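The neighborhood search of (2) can be sketched as follows (a minimal Python illustration of the standard ABC update; the function and variable names are ours, not the paper's):

```python
import random

def employed_bee_update(X, i, lb, ub):
    """Standard ABC neighborhood search (Eq. 2): perturb one random
    dimension of x_i relative to a randomly chosen neighbor x_k, k != i."""
    SN, D = len(X), len(X[0])
    k = random.choice([n for n in range(SN) if n != i])  # neighbor index, k != i
    j = random.randrange(D)                              # one random dimension
    phi = random.uniform(-1.0, 1.0)                      # phi_ij in [-1, 1]
    v = list(X[i])
    v[j] = X[i][j] + phi * (X[i][j] - X[k][j])
    v[j] = min(max(v[j], lb[j]), ub[j])                  # clip to the search bounds
    return v
```

Greedy selection between the returned `v` and `X[i]` then decides whether the counter is reset or incremented.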

Onlooker bee phase
In this phase, the fitness ratio is used as the selection probability for choosing food sources. The probability is calculated by (3):

$$p_i = \frac{\mathrm{fit}_i}{\sum_{n=1}^{SN} \mathrm{fit}_n}. \qquad (3)$$

The fitness value of food source x_i is defined by (4):

$$\mathrm{fit}_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i \ge 0, \\[4pt] 1 + |f_i|, & f_i < 0, \end{cases} \qquad (4)$$

where f_i is the objective value of x_i. From (3) and (4), it is easy to see that a food source with a larger fitness value has a higher probability of being selected by the onlooker bees. Similar to the employed bee phase, the new solutions are generated by (2), and the associated counters are updated accordingly.
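Equations (3) and (4) translate directly into code; a short Python sketch (helper names are ours):

```python
def fitness(f):
    """Eq. (4): map an objective value f_i to a fitness value fit_i."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def selection_probabilities(objectives):
    """Eq. (3): fitness-ratio probabilities used by the onlooker bees."""
    fits = [fitness(f) for f in objectives]
    total = sum(fits)
    return [ft / total for ft in fits]
```

For minimization, smaller objective values yield larger fitness values and therefore higher selection probabilities.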

Scout bee phase
In this phase, if the counter of a certain solution is bigger than limit, the solution will be considered as a terrible solution, and the corresponding employed bee will be transformed into a scout bee. The location of the scout bee will be randomly generated in the entire space according to (1).
Based on the above description, the pseudo-code of ABC is displayed briefly in Algorithm 1:

Algorithm 1 The pseudo-code of conventional ABC
1: /*Initialization phase*/
2: Randomly generate an initial population by (1);
3: while the termination condition is not met do
4:    /*Employed bee phase*/
5:    The employed bees update new solutions v_i by (2);
6:    Apply greedy selection between v_i and x_i, and update the counters;
7:    /*Onlooker bee phase*/
8:    Calculate the selection probabilities by (3) and (4);
9:    The onlooker bees select food sources by roulette and update them by (2);
10:   Apply greedy selection and update the counters;
11:   /*Scout bee phase*/
12:   if the counter of a food source exceeds limit, regenerate it by (1);
13: end while

ABC based on Bayesian estimation (BEABC)
In this section, we propose a new ABC variant based on Bayesian estimation, called BEABC, which tries to address the following issues: (1) how to maximize the probability that the better solutions are selected by roulette; (2) how to reasonably design movement equations to enhance the exploitation capability; (3) how to keep a balance between the exploration ability and the exploitation ability. The details of BEABC are given in the following subsections.

Selection probability based on Bayesian estimation
In ABC, to generate new offspring, the bees select food sources by (3), which maintains population diversity to a certain extent. However, it may suffer from selection pressure and premature convergence, especially in the late stage. To overcome these shortcomings, two probability mechanisms were presented in Refs. [20,21]; however, they lacked pertinence to the solutions of good quality, so how to choose better solutions with high probability deserves study. From a mathematical point of view, the probability that a solution is selected given that it produces an improvement is a posterior conditional probability, so Bayesian estimation is a suitable way to calculate it. According to Bayesian estimation, if events B_1, B_2, …, B_n form an exhaustive sequence of events, then, given the occurrence of event A, the conditional probability P(B_i | A) can be calculated by (5):

$$P(B_i \mid A) = \frac{P(B_i)\,P(A \mid B_i)}{\sum_{n} P(B_n)\,P(A \mid B_n)}. \qquad (5)$$

When extended to ABC, without loss of generality, let {x_1, x_2, …, x_SN} be the exhaustive sequence of events, and let A be the event that a food source producing a better offspring is selected. Assume that the trials are finite and sufficient. According to the law of large numbers, the probability P(B = x_i) and the conditional probability P(A | B = x_i) can be given as in (6) and (7). Substituting (6) and (7) into (5), the posterior probability P(B = x_i | A) can be calculated. Clearly, this probability helps the algorithm choose the better solutions with high likelihood, which may improve the convergence rate of the algorithm.
In addition, note that the denominator of the Bayesian equation (5) is a constant; therefore, to reduce the amount of calculation, this paper omits the denominator in the computational process.
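Since the paper's exact frequency estimates (6) and (7) are not reproduced above, the following sketch only illustrates the general idea: estimating P(B = x_i) and P(A | B = x_i) from per-source counts and combining them through the unnormalized numerator of (5), with the constant denominator omitted as described. The counters `selected` and `improved` are our assumptions, not the paper's notation.

```python
def bayesian_scores(selected, improved):
    """Unnormalized posterior scores P(B = x_i) * P(A | B = x_i)
    (the numerator of Eq. 5; the constant denominator is omitted).
    selected[i]: times food source i was chosen so far;
    improved[i]: times a choice of source i produced a better offspring.
    Observed frequencies stand in for the probabilities, per the
    law of large numbers."""
    total = sum(selected)
    scores = []
    for s, t in zip(selected, improved):
        prior = s / total if total > 0 else 1.0 / len(selected)  # P(B = x_i)
        likelihood = t / s if s > 0 else 0.0                     # P(A | B = x_i)
        scores.append(prior * likelihood)
    return scores
```

A roulette wheel built on these scores then favors food sources with a history of producing better offspring.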

Directional guidance mechanism
As pointed out above, the original movement equation in ABC performs very well in exploration but weakly in exploitation. To adjust the global and local search capacities, inspired by different swarm intelligence algorithms, scholars have proposed some new movement equations [5,34]. From (2), we know that the purpose of randomly selecting neighbors is to maintain population diversity. However, this movement equation is largely blind and lacks directional guidance. Thus, to reduce the blindness, a directional guidance mechanism is proposed as follows. Let x_n be the neighbor selected randomly for x_i. Consider two cases.

Case 1: the neighbor x_n is better than x_i. In this case, the location of x_n may be closer to the global best solution than x_i, so it makes sense for x_i to move towards x_n, which leads to the movement equation (8). From (8), we can see that x_i moves towards x_n no matter where x_i lies. Figure 1 illustrates this process: when x_ij < x_nj, the location of x_nj has a great chance to appear on the left of the high-quality solution, so moving x_ij towards x_nj may bring it closer to the global best solution. In the other case, x_ij > x_nj means that x_ij lies to the right of x_nj, so it makes more sense for x_ij to move left, which may generate a superior candidate solution. In a word, under the guidance of its neighbor x_n, x_i always moves towards the global best solution.

Case 2: the neighbor x_n is worse than x_i. For this case, by adopting the information of the current global optimal solution, a modified movement equation (9) is presented. In (9), the frequency of perturbation is controlled by MR: when rand is less than MR, the candidate solution v_ij is generated around x_gj, which guarantees that the new solution has relatively high quality; otherwise, v_ij is equal to x_ij.
In the scout bee phase, the task of the bee is to discard the obsolete solution and regenerate a new one in the whole solution space. From ABC, it can be seen that, although the new food source generated by (1) can maintain population diversity, it may be blind to some extent and reduce the convergence rate. Based on the above analysis, if the information of the current global best solution is adopted, a better food source may be obtained. Thus, to generate a better solution, equation (10) is presented, in which the new food source x_i is generated around x_g. This strategy may increase the convergence rate of BEABC.
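Since equations (8)-(10) are not reproduced above, the following sketch only illustrates the control flow of the directional guidance mechanism; the λ-weighted moves towards x_n and x_g are plausible stand-ins for the paper's exact update rules, not the rules themselves.

```python
import random

def guided_update(x_i, x_n, x_g, f_i, f_n, lam, MR):
    """Directional guidance sketch: move towards a better neighbor x_n
    (Case 1, stand-in for Eq. 8); otherwise perturb around the global
    best x_g with probability MR per dimension (Case 2, stand-in for
    Eq. 9). Assumes minimization: smaller f means better."""
    v = list(x_i)
    if f_n < f_i:  # Case 1: neighbor is better, so move towards it
        for j in range(len(x_i)):
            v[j] = x_i[j] + lam * random.random() * (x_n[j] - x_i[j])
    else:          # Case 2: neighbor is worse, so exploit x_g, gated by MR
        for j in range(len(x_i)):
            if random.random() < MR:
                v[j] = x_g[j] + lam * random.uniform(-1, 1) * (x_g[j] - x_i[j])
            # else: v[j] stays equal to x_i[j], as described for Eq. (9)
    return v
```

With MR = 0 in Case 2, the candidate is identical to x_i, matching the "otherwise v_ij = x_ij" branch described in the text.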

Parameters design
In Ref. [15], MR was first proposed and set to a fixed constant. Generally, linearly varying parameters have some advantages over constant parameters in adjusting an algorithm [35]. Furthermore, for EAs, a nonlinear control-parameter updating rule was proposed [36], which performs better than linearly varying parameters. In this paper, to strengthen the performance of BEABC, a new adaptively varying parameter MR is proposed, given by (11). From (11), MR decreases exponentially from MR_max as the number of iterations increases, which means the candidate solutions have more chances of being generated near the current global optimal solution at an early stage. This may improve the convergence rate to some extent. In the late stage of the algorithm, v_i is generated around x_i, which enhances the local search ability.
Meanwhile, it should be noted that λ plays a key role in adjusting the search region in (8) and (9). In our paper, λ is introduced by (12). As shown in (12), more weight is placed on the second term of the movement equations (8) and (9) at the beginning, which is more likely to strengthen the exploration capacity of BEABC. As the algorithm proceeds, λ becomes smaller and smaller, so the search process focuses on the first term of the movement equations, which means that the exploitation ability of BEABC is enhanced.
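The exact forms of (11) and (12) are not reproduced above; the sketch below shows one plausible exponential-decay schedule consistent with the description (MR decreasing exponentially from MR_max, and λ shrinking so that weight shifts from exploration to exploitation). The decay constant 5.0 and the initial value lam0 are our assumptions.

```python
import math

def mr_schedule(T, max_iter, mr_max=0.9):
    """Illustrative stand-in for Eq. (11): MR decays exponentially
    from mr_max as the iteration counter T grows."""
    return mr_max * math.exp(-5.0 * T / max_iter)

def lambda_schedule(T, max_iter, lam0=1.0):
    """Illustrative stand-in for Eq. (12): lambda shrinks over the run,
    de-emphasizing the second (exploratory) term of Eqs. (8)-(9)."""
    return lam0 * math.exp(-5.0 * T / max_iter)
```

Any monotonically decreasing schedule with MR(0) = MR_max would reproduce the qualitative behavior described in the text.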
On the basis of the description above, the detailed pseudo-code of BEABC is shown in Algorithm 2. The source code is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/104205-an-improved-artificialbee-colony-algorithm-based-on-bayesia.

Algorithm 2
The pseudo-code of BEABC
1: /*Initialization phase*/
2: Randomly generate an initial population by (1);
3: Obtain the global best solution x_g of the food sources at the initial iteration;
4: for T = 1 to MaxDt do
5:    /*Employed bee phase*/
6:    Calculate MR by (11);
7:    for i = 1 to SN do
8:       Generate new food sources v_i by (2);
9:       Apply greedy selection between v_i and x_i, and update the counters;
10:   end for
11:   /*Onlooker bee phase*/
12:   Calculate the selection probability by (5)-(7);
13:   Select food sources by roulette and generate new solutions by (8) or (9);
14:   Apply greedy selection and update the counters;
15:   /*Scout bee phase*/
16:   if the counter of a food source exceeds limit, regenerate it by (10);
17:   Update x_g;
18: end for

The flowchart of BEABC is shown in Fig. 2. From the figure, the selection probability is calculated by (5). In the onlooker bee and scout bee phases, the food sources are updated according to the new movement formulas (8)-(10), respectively. In addition, the evolutionary perturbations are controlled by (11) in each iteration.

Numerical and engineering experiments
To validate the performance of BEABC, 31 classical test functions (24 single-objective and 7 multi-objective test functions) and 2 real-world optimization problems are used in this section, and the experimental results are given and discussed. All algorithms are coded in Matlab R2017a and run on a PC with a Core i5 CPU, 2 GB memory, and Windows 7.
In "Sensitivity test for the parameter MR", a sensitivity test of MR is discussed. In "Experiment 1", the benchmark functions selected are tested by BEABC and other excellent ABCs. Meanwhile, the discussion and analysis of the results are given. In "Experiment 2", BEABC are compared with some excellent EAs in dealing with 2 practical engineering problems.

Sensitivity test for the parameter MR
As pointed out in Ref. [15], the control parameter modification rate (MR) can effectively enhance the performance of ABC. From (11), we can see that a large MR means that the candidate solution v_i inherits more information from its parents, which can improve the quality of v_i; however, population diversity may be sacrificed. Meanwhile, a small MR has limited room to improve the performance of the algorithm, especially in the early stage. Thus, an exponentially varying MR that gradually decreases from MR_max is introduced, and a sensitivity test is carried out to study the effect of MR_max. To test the sensitivity, several complex test functions with D = 30 are selected. Considering the improvement of the exploitation capacity of BEABC, different values of MR_max are tested, i.e., 0.5, 0.6, 0.7, 0.8 and 0.9. The numerical results and the box-plots are given in Table 2 and Fig. 3, respectively. From Table 2, it can be seen that BEABC performs better on most functions when MR_max takes a larger value; the reason may be that the candidate solutions have more opportunities to inherit the parents' information. Fig. 3 also shows that BEABC performs best when MR_max is 0.9. From the above discussion, setting MR_max to 0.9 is a good choice.

Single-objective test functions
In this experiment, 24 classical test functions [1,37] are selected to validate the effectiveness of BEABC. Table 3 gives the detailed information of these test functions: f_1–f_8 are unimodal functions, f_9–f_16 are multimodal functions, and f_17–f_24 are rotated and shifted functions. Some excellent ABCs are chosen for comparison with BEABC, including dABC [16], MGABC [23], GABC [24] and COABC [25]. The comparison results obtained by all algorithms are shown in Tables 5, 6 and 7, respectively. The evaluation indicators include the minimum (Min), mean (Mean), standard deviation (Std), rank and t test. Based on Mean, the ranking makes clear how the algorithms perform. The t test is used to ascertain whether the results obtained by BEABC are statistically different from those of its competitors; the significance level α is set to 0.05. '+', '≈' and '−' mean that BEABC is statistically better than, equal to or significantly inferior to its peers, respectively. To visually show the convergence rate of these algorithms, the convergence curves of the benchmark functions are displayed in Figs. 4, 5 and 6.
To ensure fairness, the basic parameters and running environment of all the algorithms are the same. Table 4 displays the specific parameters of these algorithms, which are the same as the corresponding literatures. Meanwhile, each numerical function is independently run 30 times with each algorithm.
Unimodal functions: In Table 5, the numerical results of the unimodal functions are given. From the table, BEABC obtains the global best solution on each function and dominates all ABCs in terms of Min. According to Mean, BEABC takes the first place among all algorithms, and the t test implies that BEABC is statistically significantly superior to the compared algorithms. The optimal results are marked in bold.

Fig. 3 Sensitivity test for MR
Figure 4 shows that all these algorithms obtained a local optimum on f_6, but BEABC has the fastest convergence rate. The reason may be that the new probability mechanism adopted by BEABC prefers to select good solutions. Multimodal functions: From Table 6, for f_10, f_12, f_14 and f_15, in terms of Min and Mean, BEABC outperforms the compared ABCs and takes the first place. Meanwhile, the t test indicates that BEABC is statistically significantly better than all compared ABCs on most functions. For f_9 and f_13, BEABC is superior to MGABC and dABC while performing on par with GABC and COABC. However, for f_16, BEABC only performs better than dABC. From Fig. 5, although more than one algorithm reaches the theoretical global best solutions on f_9, f_11 and f_13, the convergence rate of all peers is much lower than that of BEABC. Unfortunately, all these algorithms are caught in a poor location on f_16. Based on the above discussion, although BEABC does well on most multimodal functions, there is still room to enhance its search ability.
Rotated and shifted functions: From Table 7, on f_17–f_20, BEABC performs well and obtains the global best solutions. Moreover, the t test implies that it is significantly better than all its competitors. For f_23 and f_24, BEABC does not perform very well on the shifted functions, ranking second to MGABC and COABC. However, though the performance of BEABC is inferior to that of MGABC, it is still significantly superior to its peers in most cases in terms of Rank and the t test. From Fig. 6, BEABC converges rapidly on rotated functions but shows signs of premature convergence on shifted functions. The reason may be that the shift operation does not fit the search process of the algorithm. The results of the above experiments indicate that BEABC performs well on most single-objective functions. Although oscillations are inevitable when solving some complex optimization problems, the convergence speed is clearly dominant. Due to the particularity of some functions, especially the shifted functions, the performance of the proposed algorithm needs further improvement.

Significance statistics
To analyze the significant difference between BEABC and the other ABCs, the comprehensive statistical results obtained by the t test at the α = 0.05 level are given in Table 8. From the table, compared with its peers, BEABC is significantly better 73 times, and the overall success ratio is 76%. These statistics show that BEABC performs better than the comparison algorithms.

Multi-objective functions
To further investigate the comprehensive performance of the proposed algorithm, seven continuous multi-objective optimization problems (MOPs), UF1–UF7 [38], are carefully selected as test functions. All MOPs are unconstrained (bound-constrained) two-objective test problems, and the number of decision variables n is 30. Tables 9 and 10 give the details of UF1–UF7, including their function representations, PF and PS. BEABC is compared with several different algorithms, including MOEA/D [39], NSGAIILS [40], MTS [41], OMOEAII [42] and MO-ABC/DE [43]. Let M be a set of uniformly distributed points along the Pareto front in the feasible space, and C be an approximation set to the Pareto front. IGD is used as the comparative indicator to evaluate the obtained results, and is defined as follows:

$$\mathrm{IGD}(M, C) = \frac{1}{|M|} \sum_{v \in M} \min_{c \in C} d(v, c),$$

where d(v, c) is the Euclidean distance between v and c. Due to the space limit, the results in this paper are taken directly from [43], except for BEABC. The average IGD is obtained by running each algorithm 30 times independently on each function. The results are given in Table 11. From the table, we find that BEABC takes the first place on most test functions. For UF3 and UF7, the performance of BEABC is slightly worse than that of MOEA/D. In addition, the IGD values obtained by BEABC are very close to 0, which means that the PF obtained by BEABC is very close to the true Pareto front.
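The IGD indicator described above can be computed directly; a minimal Python sketch using Euclidean distance (the function name is ours):

```python
import math

def igd(M, C):
    """Inverted generational distance: the average, over reference points
    in M (sampled from the true Pareto front), of the distance to the
    nearest point in the approximation set C. Lower is better; 0 means
    every reference point is matched exactly."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(v, c) for c in C) for v in M) / len(M)
```

An IGD near 0 thus indicates both good convergence to and good coverage of the true Pareto front, which is how the results in Table 11 are interpreted.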

Computational time complexity
The computational time complexity of our algorithm and other ABCs is analyzed in this subsection; the details are presented in Table 12. BEABC performs O(SN * f) function evaluations in each of the employed and onlooker bee phases, so the total computational complexity of BEABC is O(SN * f) at each iteration. From Table 12, we can see that most ABCs have the same complexity as conventional ABC, except for dABC. The main reason is that they do not have redundant loops to calculate function values. Moreover, BEABC does not need to calculate the fitness value of the function, which may save time to a certain extent.

Experiment 2
To verify the effectiveness of an optimization algorithm, real-world engineering optimization problems are good choices [45,46]. So, to evaluate BEABC effectively, two engineering optimization problems are selected in this subsection. Since these two practical optimization problems are constrained, we need to transform them into unconstrained optimization problems to facilitate the calculations. Several methods can be used, including the rejection method, repair method, operator correction method and penalty function method. In this paper, the penalty function method is used.
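The penalty-function transformation can be sketched as follows. This is a generic static-penalty form for constraints g_k(x) ≤ 0; the quadratic violation term and the coefficient `rho` are our assumptions, as the paper does not specify its exact penalty formulation.

```python
def penalized(f, constraints, rho=1e6):
    """Turn a constrained problem  min f(x) s.t. g_k(x) <= 0  into an
    unconstrained one by adding a static penalty for violated constraints.
    Feasible points (all g_k(x) <= 0) are left with their original value."""
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + rho * violation
    return F
```

The resulting unconstrained objective F can then be minimized directly by BEABC or any of the compared EAs.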

Structural design of tension-pressure spring
This optimization design problem is described in Ref. [47]; it minimizes the weight of a tension spring under constraints such as shear stress and frequency of vibration. There are three main design variables: the coil diameter d (x_1), the average diameter of the spring coil D (x_2) and the effective number of coils N (x_3). Figure 7 shows the schematic diagram of this design problem. The engineering optimization problem is depicted as:

$$\begin{aligned} &\min\ f(x) = (x_3 + 2)\,x_2\,x_1^2 \\ &\text{s.t.}\quad g_1(x) = 1 - \frac{x_2^3 x_3}{71785\,x_1^4} \le 0, \\ &\qquad\ \ g_2(x) = \frac{4x_2^2 - x_1 x_2}{12566\,(x_2 x_1^3 - x_1^4)} + \frac{1}{5108\,x_1^2} - 1 \le 0, \\ &\qquad\ \ g_3(x) = 1 - \frac{140.45\,x_1}{x_2^2 x_3} \le 0, \\ &\qquad\ \ g_4(x) = \frac{x_1 + x_2}{1.5} - 1 \le 0. \end{aligned}$$

Several optimization algorithms have been developed to solve this problem, such as CNPSO [5], SPA [48], MPM [49], IOD [50], TGA [51] and CPSO [52]. Table 13 contains the optimum solutions and the constraint values obtained by these algorithms; NA represents results not given in Ref. [5]. Besides, Min, Mean and Std are used as the evaluation indicators; Table 14 shows the statistical results. Except for BEABC, the solutions are taken directly from [5]. From the results, though the constraint g_1(x) is only approximately satisfied, the difference of 1.5814040E−07 is small enough to be ignored in practice. In terms of Min (f(x)), BEABC outperforms the other algorithms, and we can conclude that BEABC is efficient in solving this optimization problem.

Design of a speed reducer gearing system
The gear design problem is to minimize the weight of a speed reducer subject to constraints on the bending stress of the gear teeth, transverse deflections of the shafts, surface stress and stresses in the shafts [53]. This optimization problem involves seven variables, denoted x_1–x_7. All variables are continuous except the number of teeth (x_3), which takes an integer value. The reducer diagram is depicted in Fig. 8. The design objective function is described as:

$$\begin{aligned} f(x) = {}& 0.7854\,x_1 x_2^2\,\left(3.3333\,x_3^2 + 14.9334\,x_3 - 43.0934\right) \\ & - 1.508\,x_1\,\left(x_6^2 + x_7^2\right) + 7.4777\,\left(x_6^3 + x_7^3\right) \\ & + 0.7854\,\left(x_4 x_6^2 + x_5 x_7^2\right), \end{aligned}$$

subject to the constraints given in Ref. [53]. By reviewing the literature, we find that several intelligent algorithms have been applied to this design problem, including DELC [54], DEDS [55], HEAA [56], PSO-DE [57], MDE [58] and MBA [59]. The optimal solutions and the statistical results obtained by each algorithm, taken directly from [59], are given in Tables 15 and 16, respectively. From the results, although the Min and Mean of BEABC are equal to those of DELC and DEDS, its Std is the smallest among all competitors. It is easy to conclude that BEABC is competent to find the best optimum solution and has good robustness on this problem.

Conclusion
A novel ABC algorithm named BEABC is presented in this paper. First, BEABC uses a new probability based on Bayesian estimation instead of the fitness ratio to accelerate the convergence rate; with this selection mechanism, the onlooker bees prefer the solutions that can produce good offspring. Second, in the onlooker bee phase, a directional guidance strategy is presented to balance exploration and exploitation. Finally, the sensitivity test of MR, the experimental results on the numerical functions and the two practical optimization problems demonstrate the effectiveness and superiority of BEABC. Although BEABC outperforms its competitors, there is still room to improve its performance on rotated and shifted problems, and in the future we will further develop the potential of BEABC.