Abstract
Artificial bee colony (ABC) algorithm was proposed by mimicking the cooperative foraging behaviors of bees. As a member of the family of swarm intelligence algorithms, ABC has advantages in handling optimization problems. However, its exploration capacity outweighs its exploitation capacity, which may lead to slow convergence and low solution accuracy. Hence, to enhance the performance of the algorithm, a novel ABC based on Bayesian estimation (BEABC) is presented in this paper. First, instead of using the fitness ratio, the selection probability in ABC is replaced with a new probability calculated by Bayesian estimation. Second, to help the bees exploit more useful information when updating food sources, a directional guidance mechanism is designed for the onlooker bees and scout bees. Finally, the comprehensive performance of BEABC is evaluated on 24 single-objective test functions. The numerical results indicate that BEABC dominates its peers on most test functions, and the statistics show that the significant excellence rate of BEABC is \(76\%\) in the overall comparison. In addition, to further test the performance of BEABC, seven multi-objective problems and two real-world optimization problems are solved. The comparison results show that BEABC achieves better results than the other EA competitors.
Introduction
Over the past years, inspired by animal and social behaviors in nature, researchers have proposed various evolutionary algorithms (EAs) to deal with complex optimization problems. As a branch of EAs, swarm intelligence has attracted extensive interest from many scholars. To date, many excellent algorithms have been developed, such as the firefly algorithm (FA) [1], genetic algorithm (GA) [2], differential evolution (DE) [3], artificial bee colony algorithm (ABC) [4], particle swarm optimization (PSO) [5], and so on. These methods display good ability in solving optimization problems.
Motivated by the cooperative foraging behaviors of bees, Karaboga first proposed the ABC algorithm in 2005 [6]. Compared with other algorithms, ABC has a simple structure, fewer control parameters and a powerful search capability. Thus, it has been widely studied by many scholars and used to deal with complex problems. Moreover, ABC has been widely adopted in real-world optimization problems, such as electrical engineering [7], engineering physics [8], path planning [9], feature classification [10, 11] and vehicular networks [12].
Related work
Although ABC achieves good performance in many areas, it does a poor job of maintaining a balance between global search capacity and local search capacity. In other words, it is likely to be caught in a local optimum, yielding low-precision solutions and slow convergence. Therefore, to overcome these shortcomings, it is necessary to develop improved ABC algorithms. To this end, many effective ABC variants (ABCs) have been presented from different perspectives in recent years.
According to the strategies used in the literature, they can be grouped into three categories: parameter-based strategies, probability-based strategies and movement-equation-based strategies. Brief reviews are as follows:
(1) Parameter-based strategy. As pointed out in Ref. [13], the convergence of EAs can be adjusted well by introducing reasonable control parameters. So, to adjust the parameters in EAs, scholars have proposed many tuning strategies. For example, by analyzing the control parameters, Akay and Karaboga [14] discussed their influence in ABC. In Ref. [15], based on a new control parameter, the modification rate (MR), a modified version of ABC was proposed, which controls the frequency of perturbation. Using the movement information of the previous solution, Kiran et al. [16] designed a control parameter d to guide the bees in a directional movement. For solving binary problems, Durgut and Aydin [17] proposed an adaptive hybrid ABC algorithm, which includes an adaptive multi-operator selection strategy and a credit assignment mechanism.
(2) Probability-based strategy. As we know, the onlooker bees in ABC select individuals according to a probability defined by the fitness ratio. But this selection strategy may cause great selection pressure in the late stage. To overcome this deficiency, based on knowledge fusion, Wang et al. [18] presented an ABC variant, KFABC, which utilizes three kinds of knowledge strategies to search for new solutions via selection rules. In Ref. [19], Cui et al. proposed a novel ABC named DFSABC_elite, which designs two modified movement equations and a depth-first search framework (DFS); this method implements solution selection through a new parameter mechanism. For constrained optimization problems, Liu et al. [20] proposed an ABC variant, DPLABC, which includes a dynamic penalty method, Levy flight with a logistic map, a further search mechanism and a new boundary handling mechanism. A novel ABC named ABC-AHC was proposed by Chu et al. [21]; this study designed a new selection probability model and presented an adaptive-heterogeneous-competition-augmented ABC algorithm. For multi-objective problems, Chaudhuri and Sahu [22] proposed a multi-objective ABC-based feature weighting technique for Naïve Bayes, in which a mutation probability controls the selection of movement equations in the employed bee and onlooker bee phases, and the selection probability is calculated on the basis of the rank of each food source.
(3) Movement-equation-based strategy. In EAs, how to better balance the capacities of global search and local search is a constant concern. Many studies have found that this can be achieved by modifying the movement equation. Zhou et al. [23] proposed an enhanced ABC with multi-elite guidance (MGABC), which chooses elite solutions to construct elite groups and designs two modified search equations using the elite groups' information. Inspired by the PSO algorithm, Zhu et al. [24] presented a Gbest-guided ABC algorithm (GABC), in which a modified update equation uses the current global optimum. Experimental results showed that GABC is superior to the conventional ABC to a certain extent. By utilizing the best individual of the previous iteration, Luo et al. [25] provided a new ABC algorithm named COABC, in which a novel update equation was designed for the onlooker bee phase. Aiming to improve the performance of ABC, Zhou et al. [26] proposed a modified search strategy using the current global optimal solution and designed a new Gaussian perturbation with an evolutionary rate. Using a best-neighbor-guided movement strategy, Peng et al. [27] presented an algorithm named NABC. Based on a factor library and dynamic search balance, Yu et al. [28] proposed a hybrid, fast, and enhanced ABC called HFEABC.
Table 1 summarizes some representative ABC variants, including their advantages, disadvantages and classification.
Motivation
Since ABC has excellent performance in solving optimization problems, it has been widely studied and applied in many fields [29, 30]. However, when solving complex optimization problems, it still has some shortcomings. For example, it suffers great selection pressure in the late stage, which may be caused by the probability mechanism based on the fitness ratio [31]. To alleviate this problem, two new rules were designed in Refs. [18, 19], which use previous experience to implement different strategies. However, purely empirical guidance may make the algorithm oscillate, so it is meaningful to retain the randomness of the probability mechanism. To reduce the blindness of choosing food sources to a certain extent, a novel probability is designed in this paper.
In addition, as we know, the movement equation plays a key role in maintaining a balance between global search and local search. To enhance the exploitation ability of ABC, many excellent ABCs have been proposed that modify the movement equations or design good learning strategies. In these methods, useful information is exploited, such as the information of the current best solution or the elite solutions [23, 32]. However, using only high-quality solutions without controlling the search range may cause the algorithm to over-exploit and fall into a local optimum. Therefore, reasonable neighbor selection and parameter design are considered and discussed in our approach.
Based on the above considerations, to compensate for the shortcomings of the conventional ABC, a novel ABC algorithm based on Bayesian estimation [33] (BEABC) is proposed in this paper. The major contributions are listed below:
1. A new selection probability is designed. Based on Bayesian estimation, the posterior probability of each food source is calculated, which is used to select those food sources that are likely to produce better offspring.
2. A novel directional guidance strategy is presented. In the onlooker bee and scout bee phases, using the location information of neighbors and the current optimal solution, two solution search equations are designed, which can guide the bees to search in the right direction.
3. Two dynamically and adaptively adjusted parameters, MR and \(\lambda \), are introduced. By automatically controlling the disturbance frequency and adjusting the search region, respectively, the algorithm can keep a balance between its exploration and exploitation capacities.
The rest of the paper is outlined as follows: “Artificial bee colony algorithm (ABC)” introduces the conventional ABC framework. In “ABC based on Bayesian estimation (BEABC)”, the motivation and the specific processes of BEABC are given. “Numerical and engineering experiments” presents and analyzes the experimental results of BEABC and other excellent EAs; the effectiveness and superiority of BEABC are verified by these experiments. In “Conclusion”, the conclusion is drawn. Note that all the abbreviations used in this paper are explained in the appendix.
Artificial bee colony algorithm (ABC)
ABC mainly contains four phases: the initialization phase, employed bee phase, onlooker bee phase and scout bee phase. Different kinds of bees change their roles iteratively until the termination condition is met. Note that each food source has an associated counter. If a food source is not improved, its counter is incremented by 1; otherwise, the counter resets to 0. If the quality of a solution has not been improved after more than limit (a preset parameter) trials, the corresponding employed bee is transformed into a scout bee.
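The trial-counter rule above can be sketched as follows; the function and variable names are illustrative, not from the paper:

```python
def update_counter(improved: bool, counter: int, limit: int):
    """Sketch of ABC's trial-counter rule: the counter resets to 0 on
    improvement, is incremented otherwise, and the food source is abandoned
    (its employed bee becomes a scout) once the counter exceeds `limit`."""
    counter = 0 if improved else counter + 1
    becomes_scout = counter > limit
    return counter, becomes_scout
```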
Initialization phase
Let \({X} =\{ {x_{1}},{x_{2}}, \ldots ,{x_{\text {SN}}} \}\) be the initial population, generated randomly in the entire search space. Each component \(x_{ij}\) of the initial population is determined by the following equation:
$$\begin{aligned} x_{ij} = x_{j}^{\text {min}} + \text {rand}(0,1)\left( x_{j}^{\text {max}} - x_{j}^{\text {min}}\right) , \end{aligned}$$
(1)
where \(i = 1,2, \ldots , \text {SN}\); \(j = 1,2, \ldots , D\); \(x_{j}^{\text {min}}\) and \(x_{j}^{\text {max}}\) are the lower and upper bounds of dimension j; and \(\text {rand}(0,1)\) is a uniform random number in [0, 1].
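The random initialization of Eq. (1) can be sketched as follows (names are illustrative):

```python
import random

def initialize_population(SN, D, lower, upper):
    """Eq. (1): x_ij = lower_j + rand(0,1) * (upper_j - lower_j),
    generating SN food sources uniformly over the search space."""
    return [[lower[j] + random.random() * (upper[j] - lower[j])
             for j in range(D)] for _ in range(SN)]
```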
Employed bee phase
In this phase, to generate a new solution \(v_{i}\), the employed bee executes a random neighborhood search around \(x_{i}\) using the following equation:
$$\begin{aligned} v_{ij} = x_{ij} + \phi _{ij}\left( x_{ij} - x_{kj}\right) , \end{aligned}$$
(2)
where k and j are randomly selected from \(\left\{ {1,2, \ldots , \text {SN}}\right\} \) and \(\left\{ {1,2, \ldots ,D} \right\} \), respectively, \(k\ne i\), and \(\phi _{ij}\) is a uniform random number in \([-1,1]\). In (2), only one dimension of \(x_i\) is modified. If \({v}_{i}\) is better than \({x}_{i}\), \({x}_{i}\) is replaced by \({v}_{i}\) and the counter for \(x_{i}\) is reset to 0; otherwise, \({x}_{i}\) remains unchanged and its counter increases by 1.
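The single-dimension neighborhood search of Eq. (2) can be sketched as follows; boundary clamping is an assumed implementation detail:

```python
import random

def employed_bee_update(x, i, lower, upper):
    """Eq. (2): v_ij = x_ij + phi * (x_ij - x_kj) with phi ~ U(-1, 1),
    k != i; only one randomly chosen dimension j is modified."""
    SN, D = len(x), len(x[0])
    k = random.choice([s for s in range(SN) if s != i])
    j = random.randrange(D)
    phi = random.uniform(-1.0, 1.0)
    v = list(x[i])                                # copy; change one dim only
    v[j] = x[i][j] + phi * (x[i][j] - x[k][j])
    v[j] = min(max(v[j], lower[j]), upper[j])     # clamp to the search space
    return v
```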
Onlooker bee phase
In this phase, the fitness ratio is used as the selection probability for choosing food sources. The probability is calculated by (3):
$$\begin{aligned} p_{i} = \frac{\text {fit}_{i}}{\sum \nolimits _{n=1}^{\text {SN}} \text {fit}_{n}}. \end{aligned}$$
(3)
The fitness value of food source \(x_{i}\) is defined by (4):
$$\begin{aligned} \text {fit}_{i} = {\left\{ \begin{array}{ll} \dfrac{1}{1+f_{i}}, &{} f_{i} \ge 0,\\ 1+\left| f_{i}\right| , &{} f_{i} < 0, \end{array}\right. } \end{aligned}$$
(4)
where \(f_{i}\) is the objective function value of \(x_{i}\).
From (3) and (4), it is easy to see that a food source with a larger fitness value has a higher possibility to be selected by the onlooker bees. Similar to the employed bee phase, the new solutions are determined by (2), and the associated counters will be changed accordingly.
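Eqs. (3) and (4) together amount to a fitness-proportional roulette wheel, which can be sketched as:

```python
import random

def fitness(f):
    """Eq. (4): fit = 1/(1+f) if f >= 0, else 1 + |f|."""
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def roulette_select(fits):
    """Eq. (3): p_i = fit_i / sum(fit); select one index by roulette wheel,
    so larger fitness values are chosen with higher probability."""
    total = sum(fits)
    r = random.uniform(0.0, total)
    acc = 0.0
    for i, ft in enumerate(fits):
        acc += ft
        if r < acc:
            return i
    return len(fits) - 1          # guard against floating-point edge cases
```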
Scout bee phase
In this phase, if the counter of a solution exceeds limit, the solution is considered exhausted, and the corresponding employed bee is transformed into a scout bee. The location of the scout bee is then randomly regenerated in the entire search space according to (1).
Based on the above description of ABC, its pseudo-code is displayed briefly in Algorithm 1:
![figure a](http://media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs40747-022-00746-1/MediaObjects/40747_2022_746_Figa_HTML.png)
ABC based on Bayesian estimation (BEABC)
In this section, we propose a new ABC variant based on Bayesian estimation, called BEABC, which tries to address the following issues: (1) how to maximize the probability that better solutions are selected by roulette; (2) how to reasonably design movement equations to enhance the exploitation capability; (3) how to keep a balance between the exploration ability and the exploitation ability. The details of BEABC are given in the following subsections.
Selection probability based on Bayesian estimation
In ABC, to generate new offspring, the bees select food sources by (3), which maintains population diversity to a certain extent. But it may suffer selection pressure and premature convergence, especially in the late stage. To overcome such shortcomings, two probability mechanisms were presented in Refs. [20, 21]. However, they lacked focus on the solutions of good quality. So, how to choose better solutions with high probability deserves study. Mathematically, the probability that a solution is selected given that it improves solution quality is a posterior conditional probability. Thus, Bayesian estimation is a suitable way to calculate it. According to Bayesian estimation, if events \(B_{i}\) satisfy \(\bigcup \nolimits _{i = 1}^n {{B_i}} = \Omega , {B_i} \bigcap {B_{j}} = \emptyset , i \ne j, {P(B_i) > 0}\), then \({B_1},{B_2}, \ldots ,{B_n}\) form an exhaustive sequence of events. Given the occurrence of event A, the conditional probability \(P(B_{i}|A)\) can be calculated by (5):
$$\begin{aligned} P(B_{i}|A) = \frac{P(B_{i})P(A|B_{i})}{\sum \nolimits _{j=1}^{n} P(B_{j})P(A|B_{j})}. \end{aligned}$$
(5)
When it is extended to ABC, without loss of generality, let \(\left\{ {{x_{1}},{x_{2}},\ldots ,{x_{\text {SN}}}} \right\} \) be the exhaustive sequence of events, and A be the event that the food source with a better offspring is selected. Assume that the solutions are finite and sufficient. According to the law of large numbers, the probability \(P(B=x_{i})\) and the conditional probability \(P(A|B=x_{i})\) can be given as below:
Substituting (6) and (7) into (5), the posterior probability \(P(B = x_{i}|A)\) can be calculated. Clearly, this probability helps the algorithm choose better solutions of high quality, which may improve the convergence rate of the algorithm.
In addition, note that the denominator of the Bayesian equation (5) is a constant; therefore, to reduce the amount of computation, this paper omits the denominator in the computational process.
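The unnormalized posterior used for selection can be sketched as follows. The exact forms of the prior and likelihood in Eqs. (6) and (7) are not reproduced here, so the `prior` and `likelihood` inputs below are stand-ins:

```python
def bayesian_selection_scores(prior, likelihood):
    """Unnormalized posterior score_i ∝ P(B = x_i) * P(A | B = x_i),
    i.e. the numerator of Bayes' rule (Eq. (5)); the constant denominator
    is dropped, as noted in the text. `prior` and `likelihood` stand in
    for Eqs. (6) and (7)."""
    return [p * l for p, l in zip(prior, likelihood)]
```

A food source with both a high prior and a high improvement likelihood thus gets the largest score and is preferred by the onlooker bees.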
Directional guidance mechanism
As pointed out above, the original movement equation in ABC performs very well in exploration but weakly in exploitation. To adjust the global and local search capacities, inspired by different swarm intelligence algorithms, scholars have proposed some new movement equations [5, 34]. From (2), we know that the purpose of randomly selecting neighbors is to maintain population diversity. However, this movement equation is largely blind and lacks directional guidance. Thus, to reduce this blindness, a directional guidance mechanism is proposed as follows. Let \(x_{n}\) be the neighbor selected randomly for \(x_{i}\). Consider two cases:
Case 1: \(f(x_{n})<f(x_{i})\). Under this case, the new solution \(v_{i}\) is produced around \(x_{i}\) as below:
In this case, the location of \(x_{n}\) may be closer to the global best solution than \(x_{i}\), so it makes sense for \(x_{i}\) to move towards \(x_{n}\). From (8), we can see that \(x_{i}\) moves towards \(x_{n}\) no matter where \(x_i\) lies. To help readers better understand, the following graph shows this process:
In Fig. 1, it can be seen that, when \(x_{ij}<x_{nj}\), the location of \(x_{nj}\) has a great chance of appearing on the left of the high-quality solution, so moving \(x_{ij}\) towards \(x_{nj}\) may bring it closer to the global best solution. In the other case, \(x_{ij}>x_{nj}\) means that \(x_{ij}\) lies to the right of \(x_{nj}\), so it makes more sense for \(x_{ij}\) to move left, which may generate a superior candidate solution. In a word, under the guidance of its neighbor \(x_{n}\), \(x_{i}\) always moves towards the global best solution.
Case 2: \(f(x_{n})>f(x_{i})\). This case means that the neighbor \(x_{n}\) is worse than \(x_{i}\). For this case, by adopting the information of the current global optimal solution, a modified movement equation is presented below:
From (9), the frequency of perturbation is controlled by MR: when rand is less than MR, the candidate solution \(v_{ij}\) is generated around \(x_{gj}\), which guarantees that the new solution has relatively high quality; otherwise, \(v_{ij}\) is equal to \(x_{ij}\).
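The two-case logic above can be sketched as follows. The exact forms of Eqs. (8) and (9) are not reproduced here, so the update rules below (moving \(x_i\) toward a better neighbor, or perturbing around the global best \(x_g\) with probability MR) are illustrative stand-ins, not the paper's equations:

```python
import random

def guided_update(x_i, x_n, x_g, f_i, f_n, MR, lam):
    """Illustrative stand-in for the directional guidance mechanism.
    Case 1 (neighbor better, f_n < f_i): move x_i toward x_n, scaled by lam.
    Case 2 (neighbor worse): for each dimension, with probability MR,
    perturb around the current global best x_g; otherwise keep x_i."""
    v = list(x_i)
    if f_n < f_i:                                            # Case 1
        for j in range(len(x_i)):
            v[j] = x_i[j] + lam * random.random() * (x_n[j] - x_i[j])
    else:                                                    # Case 2
        for j in range(len(x_i)):
            if random.random() < MR:
                v[j] = x_g[j] + lam * random.uniform(-1, 1) * (x_g[j] - x_i[j])
    return v
```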
In the scout bee phase, the task of the bee is to discard the obsolete solution and generate a new one in the whole solution space. In ABC, though the new food source generated by (1) maintains population diversity, it is somewhat blind and may reduce the convergence rate. Based on the above analysis, if the information of the current global best solution is adopted, a better food source may be obtained. Thus, to generate a better solution, the following equation is presented:
From (10), the new food source \(x_{i}\) is generated around \(x_{g}\). This strategy may increase the convergence rate of BEABC.
Parameters design
In Ref. [15], MR was first proposed and set to a fixed constant. Generally, linearly varying parameters have advantages over constant parameters in adjusting an algorithm [35]. Furthermore, a nonlinear control parameter updating rule for EAs was proposed in Ref. [36], which performs better than linearly varying parameters. In this paper, to strengthen the performance of BEABC, a new adaptively varying parameter MR is proposed, given as follows:
From (11), MR decreases exponentially as the number of iterations increases, which means the candidate solutions have more chances of being generated near the current global optimal solution at an early stage. This may improve the convergence rate to some extent. In the late stage of the algorithm, \(v_{i}\) will be generated around \(x_{i}\), which can enhance the local search ability.
Meanwhile, it should be noted that \(\lambda \) plays a key role in adjusting the search region at (8) and (9). In our paper, \(\lambda \) is introduced as follows:
As shown in (12), more weight is placed on the second term of the movement equations (8) and (9) at the beginning, which strengthens the exploration capacity of BEABC. As the algorithm proceeds, \(\lambda \) becomes smaller and smaller, so the search process focuses on the first term of the movement equations, which means that the exploitation ability of BEABC is enhanced.
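The described shapes of the two schedules can be sketched as follows. The exact forms of Eqs. (11) and (12) are not reproduced here; only their qualitative behavior (exponential decrease from \({\text {MR}}_{\text {max}}\) and a shrinking \(\lambda \)) is modeled, and the decay constant is an assumption:

```python
import math

def mr_schedule(t, T, mr_max=0.9, decay=5.0):
    """Illustrative exponentially decreasing MR, starting at MR_max
    (the shape described for Eq. (11); `decay` is assumed)."""
    return mr_max * math.exp(-decay * t / T)

def lambda_schedule(t, T, lam_max=1.0, decay=5.0):
    """Illustrative decreasing lambda, shifting weight from the second
    to the first term of the movement equations over time (Eq. (12))."""
    return lam_max * math.exp(-decay * t / T)
```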
On the basis of the description above, the detailed pseudo-code of BEABC is shown in Algorithm 2. The source code is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/104205-an-improved-artificial-bee-colony-algorithm-based-on-bayesia.
![figure b](http://media.springernature.com/lw685/springer-static/image/art%3A10.1007%2Fs40747-022-00746-1/MediaObjects/40747_2022_746_Figb_HTML.png)
The flowchart of BEABC is shown in Fig. 2. From the figure, the selection probability is calculated by (5). In the onlooker bee and scout bee phases, the food sources are updated according to the new movement formulas (8)–(10), respectively. In addition, evolutionary perturbations are controlled by (11) in each iteration.
Numerical and engineering experiments
To validate the performance of BEABC, 31 classical test functions (24 single-objective and 7 multi-objective test functions) and 2 real-world optimization problems are used in this section. The experimental results are given and discussed. All the algorithms are coded in Matlab R2017a and run on a PC with a Core i5 CPU, 2 GB memory and Windows 7.
In “Sensitivity test for the parameter MR”, a sensitivity test of MR is discussed. In “Experiment 1”, the selected benchmark functions are tested by BEABC and other excellent ABCs, and the discussion and analysis of the results are given. In “Experiment 2”, BEABC is compared with some excellent EAs on 2 practical engineering problems.
Sensitivity test for the parameter MR
As pointed out in Ref. [15], the control parameter modification rate (MR) can effectively enhance the performance of ABC. From (11), we can see that a large MR means that the candidate solution \(v_{i}\) inherits more information from its parents, which can improve the quality of \(v_{i}\); however, population diversity may be sacrificed. Conversely, a small MR leaves limited room to improve the performance of the algorithm, especially in the early stage. Thus, an exponentially varying MR that gradually decreases from \({\text {MR}}_{\text {max}}\) is introduced, and a sensitivity test is carried out to study the effect of \({\text {MR}}_{\text {max}}\). To test the sensitivity of MR, several complex test functions with D=30 are selected. Considering the improvement of the exploitation capacity of BEABC, different values of \({\text {MR}}_{\text {max}}\) are tested, i.e., 0.5, 0.6, 0.7, 0.8 and 0.9. The numerical results and box-plots are given in Table 2 and Fig. 3, respectively. From Table 2, it can be seen that BEABC performs better on most functions when \({\text {MR}}_{\text {max}}\) has a larger value. The reason may be that the candidate solutions have more opportunities to inherit the parents' information. Figure 3 also shows that BEABC performs best when \({\text {MR}}_{\text {max}}\) is 0.9. From the above discussion, setting \({\text {MR}}_{\text {max}}\) to 0.9 appears to be a good choice.
Experiment 1
Single-objective test functions
In this experiment, 24 classical test functions [1, 37] are selected to validate the effectiveness of BEABC. Table 3 gives the detailed information of these test functions: \(f_{1}\)–\(f_{8}\) are unimodal functions, \(f_{9}\)–\(f_{16}\) are multimodal functions, and \(f_{17}\)–\(f_{24}\) are rotated and shifted functions. Several excellent ABCs are chosen for comparison with BEABC, including dABC [16], MGABC [23], GABC [24] and COABC [25]. The comparison results of all algorithms are shown in Tables 5, 6 and 7, respectively. The evaluation indicators include minimum (Min), mean (Mean), standard deviation (Std), rank and the t test. Based on Mean, the ranking makes clear how the algorithms perform. The t test is used to ascertain whether the results obtained by BEABC are statistically different from those of its competitors; the significance level \(\alpha \) is set to 0.05. ‘+’, ‘\(\approx \)’ and ‘-’ mean that BEABC is statistically better than, equal to, or significantly inferior to its peers, respectively. To visually show the convergence rate of these algorithms, the convergence curves on the benchmark functions are displayed in Figs. 4, 5 and 6.
To ensure fairness, the basic parameters and running environment of all the algorithms are the same. Table 4 displays the specific parameters of these algorithms, which are the same as in the corresponding literature. Meanwhile, each numerical function is run independently 30 times with each algorithm.
Unimodal functions: Table 5 gives the numerical results on the unimodal functions. From the table, BEABC obtains the global best solution on each function and dominates all ABCs in terms of Min. According to Mean, BEABC takes first place among all algorithms, and the t test implies that BEABC is statistically significantly superior to the compared algorithms on \(f_{1}\), \(f_{2}\), \(f_{4}\), \(f_{6}\) and \(f_{8}\). For \(f_{3}\) and \(f_{5}\), BEABC performs equally to MGABC but is superior to GABC, dABC and COABC. Although all ABCs obtain the global best solution and perform equally well on \(f_{7}\), BEABC and MGABC have smaller Std. Figure 4 shows that all these algorithms reach a local optimum on \(f_{6}\), but BEABC has the fastest convergence rate. The reason may be that the new probability mechanism adopted by BEABC prefers to select good solutions.
Multimodal functions: From Table 6, for \(f_{10}\), \(f_{12}\), \(f_{14}\) and \(f_{15}\), in terms of Min and Mean, BEABC outperforms the compared ABCs and takes first place. Meanwhile, the t test indicates that BEABC is statistically significantly better than all compared ABCs on most functions. For \(f_{9}\) and \(f_{13}\), BEABC is superior to MGABC and dABC, while performing equally to GABC and COABC. However, for \(f_{16}\), BEABC performs better only than dABC. From Fig. 5, although more than one algorithm reaches the theoretical global best solution on \(f_{9}\), \(f_{11}\) and \(f_{13}\), the convergence rate of all peers is much lower than that of BEABC. Unfortunately, all these algorithms are caught in a poor location on \(f_{16}\). Based on the above discussion, although BEABC does well on most multimodal functions, there is still room to enhance its search ability.
Rotated and shifted functions: From Table 7, on \(f_{17}\)–\(f_{20}\), BEABC performs well and obtains the global best solutions. Moreover, the t test implies that it is significantly better than all its competitors. For \(f_{23}\)–\(f_{24}\), BEABC does not perform very well on the shifted functions, and it is second to MGABC and COABC in terms of Rank. However, though the performance of BEABC is inferior to MGABC, it is still significantly superior to its peers in most cases in terms of Rank and the t test. From Fig. 6, BEABC converges rapidly on the rotated functions but shows signs of premature convergence on the shifted functions. The reason may be that the shift does not fit the search process of the algorithm.
The above experiments indicate that BEABC performs well on most single-objective functions. Although oscillations are inevitable in solving some complex optimization problems, the convergence speed is clearly dominant. Due to the particularity of some functions, especially the shifted functions, the performance of the proposed algorithm needs to be further improved.
Significance statistics
To analyze the significant differences between BEABC and the other ABCs, the comprehensive statistical results obtained by the t test at the \(\alpha =0.05\) level are given in Table 8. From the table, compared with its peers, BEABC is significantly better in 73 cases, and the overall success ratio is 76\(\%\). The statistical results show that BEABC performs better than its comparison algorithms.
Multi-objective functions
To further investigate the comprehensive performance of the proposed algorithm, seven continuous multi-objective optimization problems (MOPs), UF1–UF7 [38], are carefully selected as test functions. All MOPs are unconstrained (bound-constrained) two-objective test problems, and the number of decision variables n is 30. Tables 9 and 10 give the details of UF1–UF7, including their function representations, PF and PS.
BEABC is compared with several different algorithms, including MOEA/D [39], NSGAIILS [40], MTS [41], OMOEAII [42] and MO-ABC/DE [43]. Let M be a set of uniformly distributed points along the Pareto front in the objective space, and C be an approximation set of the Pareto front. The inverted generational distance (IGD) is used as the comparative indicator to evaluate the results, defined as follows:
$$\begin{aligned} \text {IGD}(C, M) = \frac{\sum \nolimits _{v \in M} d(v, C)}{|M|}, \end{aligned}$$
where \(d(v, C)\) is the minimum Euclidean distance between v and the points in C.
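As a concrete sketch, the IGD indicator (average minimum Euclidean distance from each reference point in M to the approximation set C) can be computed as follows:

```python
import math

def igd(M, C):
    """Inverted generational distance: for each reference point v in M,
    take the minimum Euclidean distance to the approximation set C,
    then average over M. Smaller values mean C is closer to the front."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(v, c) for c in C) for v in M) / len(M)
```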
Due to space limits, the results in this paper are taken directly from [43], except for BEABC. The average IGD is obtained by running each algorithm 30 times independently on each function. The results are given in Table 11. From the table, we find that BEABC takes first place on most test functions. For UF3 and UF7, the performance of BEABC is slightly worse than that of MOEA/D. In addition, the IGD values obtained by BEABC are very close to 0, which means that the PF obtained by BEABC is very close to the true Pareto front.
Computational time complexity
The computational time complexity of our algorithm and the other ABCs is analyzed in this subsection. The details are presented in Table 12. For a problem f, assume that O(f) is the time complexity of evaluating its function value, and SN is the population size. The time complexity of conventional ABC is \(O(\mathrm{SN}*f+\mathrm{SN}*f)=O(\mathrm{SN}*f)\) [44]. BEABC uses a dynamic control parameter MR to determine the frequency of search, and the complexity of the initialization phase is \(O(\mathrm{SN}*f)\). The complexity of the employed and onlooker bee phases is \(O(\mathrm{MR}*(\mathrm{SN}*f+\mathrm{SN}*f))\). Therefore, the total computational complexity of BEABC is \(O(\mathrm{SN}*f)\) per iteration.
From Table 12, we can see that most ABCs have the same complexity as conventional ABC except for dABC. The main reason is that they do not have redundant loops to calculate function values. Moreover, BEABC does not need to calculate the fitness value of the function, which may save time to a certain extent.
Experiment 2
To verify the effectiveness of an optimization algorithm, according to [45, 46], real-world engineering optimization problems are good choices. So, to evaluate BEABC effectively, two engineering optimization problems are selected in this subsection. Since these two practical optimization problems are constrained, we need to transform them into unconstrained optimization problems to facilitate the calculations. Several methods can be used, including the rejection method, repair method, operator correction method and penalty function method. In this paper, the penalty function method is used.
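A static penalty transformation of the kind referred to above can be sketched as follows; the quadratic penalty form and the coefficient `rho` are assumptions, not the paper's exact scheme:

```python
def penalized(f, constraints, x, rho=1e6):
    """Static penalty method: for inequality constraints g_i(x) <= 0,
    add rho * sum(max(0, g_i(x))^2) to the objective, turning the
    constrained problem into an unconstrained one. Feasible points
    incur no penalty; violations are penalized quadratically."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + rho * violation
```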
Structural design of tension–pressure spring
The optimization design problem is described in Ref. [47]: minimize the weight of a tension spring under constraints such as shear stress, frequency of vibration, and so on. There are three design variables: the wire diameter \(d(x_{1})\), the mean coil diameter \(D(x_{2})\) and the number of active coils \(N(x_{3})\). Figure 7 shows the schematic diagram of this design problem. The engineering optimization problem is depicted as:
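The formulation referenced above is not reproduced in this text; the following sketch uses the standard tension/compression spring design formulation from the literature, so the exact coefficients should be treated as assumptions:

```python
def spring_weight(x1, x2, x3):
    """Objective of the standard spring design problem: minimize
    (N + 2) * D * d^2, with d = x1, D = x2, N = x3."""
    return (x3 + 2.0) * x2 * x1 ** 2

def spring_constraints(x1, x2, x3):
    """Standard inequality constraints g_i(x) <= 0 from the literature
    (deflection, shear stress, surge frequency, outer-diameter limit)."""
    g1 = 1.0 - (x2 ** 3 * x3) / (71785.0 * x1 ** 4)
    g2 = ((4.0 * x2 ** 2 - x1 * x2) / (12566.0 * (x2 * x1 ** 3 - x1 ** 4))
          + 1.0 / (5108.0 * x1 ** 2) - 1.0)
    g3 = 1.0 - 140.45 * x1 / (x2 ** 2 * x3)
    g4 = (x1 + x2) / 1.5 - 1.0
    return [g1, g2, g3, g4]
```

Near-optimal designs reported in the literature have \(d \approx 0.0517\), \(D \approx 0.3567\), \(N \approx 11.29\), giving a weight of about 0.01267 with all constraints (near-)satisfied.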
Several optimization algorithms have been developed to solve this problem, such as CNPSO [5], SPA [48], MPM [49], IOD [50], TGA [51] and CPSO [52]. Table 13 contains the optimal solutions and the constraint values obtained by these algorithms. Min, Mean and Std are used as the evaluation indicators, and Table 14 shows the statistical results. Except for BEABC, the solutions are taken directly from [5]. From the results, though the constraint \(g_{1}(x)\) is only approximately satisfied, the violation of 1.5814040E−07 is small enough to be ignored in practice. In terms of Min (f(x)), BEABC outperforms the other algorithms, and we can conclude that BEABC is efficient in solving this optimization problem.
Design of a speed reducer gearing system
The gear design problem is to minimize the weight of a speed reducer subject to constraints on the bending stress of the gear teeth, transverse deflections of the shafts, surface stress and stresses in the shafts [53]. This optimization problem involves seven variables, denoted \(x_{1}\)–\(x_{7}\). All the variables are continuous except the number of teeth (\(x_{3}\)), which is an integer. The reducer diagram is depicted in Fig. 8. The design objective function is described as:
A review of the literature shows that several intelligent algorithms have been applied to this design problem, including DELC [54], DEDS [55], HEAA [56], PSO-DE [57], MDE [58] and MBA [59]. The optimal solutions and the statistical results obtained by each algorithm, taken directly from [59], are given in Tables 15 and 16, respectively. Although the Min and Mean of BEABC are equal to those of DELC and DEDS, its Std is the smallest among all competitors. We therefore conclude that BEABC is capable of finding the best optimum solution and shows good robustness on this problem.
Conclusion
A novel ABC algorithm named BEABC is presented in this paper. First, BEABC replaces the fitness-ratio selection probability with a new probability based on Bayesian estimation to accelerate convergence; under this selection mechanism, the onlooker bees prefer the solutions that tend to produce good offspring. Second, a directional guidance strategy is introduced in the onlooker bee phase to balance exploration and exploitation. Finally, the sensitivity test on MR, the experimental results on the numerical functions, and the two practical optimization problems demonstrate the effectiveness and competitiveness of BEABC. Although BEABC outperforms its competitors, there is still room to improve its performance on rotated and shifted problems, and we will further develop the potential of BEABC in future work.
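The Bayesian selection idea summarized above can be illustrated with a small sketch. This is only an approximation built from the counters defined in the Appendix (\(I(B=x_i)\), the number of times \(x_i\) occurs, and \(I(A,B=x_i)\), the number of better solutions obtained from \(x_i\)); the smoothing constants `alpha` and `beta` and the exact probability formula are illustrative assumptions, not the paper's equations:

```python
import random

def selection_probabilities(successes, trials, alpha=1.0, beta=1.0):
    """Estimate each food source's chance of producing a better solution
    via a smoothed (Beta-posterior-mean) success rate, then normalize.
    successes[i] ~ I(A, B = x_i); trials[i] ~ I(B = x_i)."""
    rates = [(s + alpha) / (t + alpha + beta)
             for s, t in zip(successes, trials)]
    total = sum(rates)
    return [r / total for r in rates]

def roulette_select(probs, rng=random):
    """Roulette-wheel selection used by the onlooker bees."""
    u, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u <= acc:
            return i
    return len(probs) - 1
```

In contrast to the fitness-ratio rule of the original ABC, a source that has historically produced improved solutions receives a higher selection probability even if its current fitness is similar to its neighbors'.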
References
Wang C, Liu K (2019) A randomly guided firefly algorithm based on elitist strategy and its applications. IEEE Access 7:130373–130387
Das AK, Pratihar DK (2019) A directional crossover (DX) operator for real parameter optimization using genetic algorithm. Appl Intell 49(5):1841–1865
Hu Z, Gao C, Su Q (2021) A novel evolutionary algorithm based on even difference grey model. Expert Syst Appl 176:114898
dos Santos Coelho L, Alotto P (2011) Gaussian artificial bee colony algorithm approach applied to Loney’s solenoid benchmark problem. IEEE Trans Magn 47(5):1326–1329
Wang C, Song W (2019) A modified particle swarm optimization algorithm based on velocity updating mechanism. Ain Shams Eng J 10(4):847–866
Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical report-tr06, Erciyes University, Engineering Faculty, Computer Engineering Department
Subramanyam G (2019) An improved artificial bee colony algorithm based harmonic control for multilevel inverter. J Control Eng Appl Inform 21(4):59–70
Luo H, Wang C, Zhi L, Yan S (2020) Prestack AVO inversion using the improved artificial bee colony algorithm based on exact Zoeppritz equations. SEG technical program expanded abstracts 2020. Society of Exploration Geophysicists, pp 345–349
Li B, Gong L, Yang W (2014) An improved artificial bee colony algorithm based on balance evolution strategy for unmanned combat aerial vehicle path planning. Sci World J 2014:95–104
Jacob MS, Selvi Rajendran P (2021) Fuzzy artificial bee colony-based CNN-LSTM and semantic feature for fake product review classification. Concurr Comput Pract Exp 34:1–16
Zorarpaci E, Özel SA (2021) Privacy preserving rule-based classifier using modified artificial bee colony algorithm. Expert Syst Appl 183:115437
Thilak KD, Amuthan A, Rajkamal S (2021) Mitigating DDoS attacks in VANETs using a variant artificial bee colony algorithm based on cellular automata. Soft Comput 25:12191–12201
De Jong K (2007) Parameter setting in EAs: a 30 year perspective. Springer, Berlin, pp 1–18
Akay B, Karaboga D (2009) Parameter tuning for the artificial bee colony algorithm. Springer, Berlin, pp 608–619
Akay B, Karaboga D (2012) A modified artificial bee colony algorithm for real-parameter optimization. Inf Sci 192:120–142
Kiran MS, Findik O (2015) A directed artificial bee colony algorithm. Appl Soft Comput 26:454–462
Durgut R, Aydin ME (2021) Adaptive binary artificial bee colony algorithm. Appl Soft Comput 101:107054
Wang H, Wang W, Zhou X, Zhao J, Wang Y, Xiao S, Xu M (2021) Artificial bee colony algorithm based on knowledge fusion. Complex Intell Syst 7(3):1139–1152
Cui L, Li G, Li Q, Du Z, Gao W, Chen J, Lu N (2016) A novel artificial bee colony algorithm with depth-first search framework and elite-guided search equation. Inf Sci 367:1012–1044
Liu F, Sun Y, Wang G, Wu T (2018) An artificial bee colony algorithm based on dynamic penalty and Levy flight for constrained optimization problems. Arab J Sci Eng 43(12):7189–7208
Chu X, Cai F, Gao D, Li L, Cui J, Xu S, Qin Q (2020) An artificial bee colony algorithm with adaptive heterogeneous competition for global optimization problems. Appl Soft Comput 93:106391
Chaudhuri A, Sahu TP (2021) Feature weighting for naïve Bayes using multi objective artificial bee colony algorithm. Int J Comput Sci Eng 24(1):74–88
Zhou X, Lu J, Huang J, Zhong M, Wang M (2021) Enhancing artificial bee colony algorithm with multi-elite guidance. Inf Sci 543:242–258
Zhu G, Kwong S (2010) Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl Math Comput 217(7):3166–3173
Luo J, Wang Q, Xiao X (2013) A modified artificial bee colony algorithm based on converge-onlookers approach for global optimization. Appl Math Comput 219(20):10253–10262
Zhou X, Wu Z, Wang H, Rahnamayan S (2016) Gaussian bare-bones artificial bee colony algorithm. Soft Comput 20(3):907–924
Peng H, Deng C, Wu Z (2019) Best neighbor-guided artificial bee colony algorithm for continuous optimization problems. Soft Comput 23(18):8723–8740
Yu W, Li X, Cai H, Zeng Z, Li X (2018) An improved artificial bee colony algorithm based on factor library and dynamic search balance. Math Probl Eng 2018:1–16
Karaboga D, Gorkemli B, Ozturk C, Karaboga N (2014) A comprehensive survey: artificial bee colony (ABC) algorithm and applications. Artif Intell Rev 42(1):21–57
Liu J, Zhu H, Ma Q, Zhang L, Xu H (2015) An artificial bee colony algorithm with guide of global and local optima and asynchronous scaling factors for numerical optimization. Appl Soft Comput 37:608–618
Karaboga D, Gorkemli B (2014) A quick artificial bee colony (qABC) algorithm and its performance on optimization problems. Appl Soft Comput 23:227–238
Jadon SS, Bansal JC, Tiwari R, Sharma H (2018) Artificial bee colony algorithm with global and local neighborhoods. Int J Syst Assur Eng Manag 9(3):589–601
Al Mutairi AO (2018) Bayesian estimation using (Linex) for generalized power function distribution. Lobachevskii J Math 39(3):297–303
Mezura-Montes E, Coello CAC (2005) Useful infeasible solutions in engineering optimization with evolutionary algorithms. Springer, Berlin, pp 652–662
Ratnaweera A, Halgamuge SK, Watson HC (2004) Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans Evol Comput 8(3):240–255
Tang B, Xiang K, Pang M (2020) An integrated particle swarm optimization approach hybridizing a new self-adaptive particle swarm optimization with a modified differential evolution. Neural Comput Appl 32(9):4849–4883
Zhang Y, Liu X, Bao F, Chi J, Zhang C, Liu P (2020) Particle swarm optimization with adaptive learning strategy. Knowl Based Syst 196:105789
Zhang Q, Zhou A, Zhao S, Suganthan PN, Liu W, Tiwari S (2008) Multiobjective optimization test instances for the CEC 2009 special session and competition. University of Essex, Colchester, UK and Nanyang Technological University, Singapore, special session on performance assessment of multi-objective optimization algorithms, technical report 264, pp 1–30
Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11(6):712–731
Sindhya K, Sinha A, Deb K, Miettinen K (2009) Local search based evolutionary multi-objective optimization algorithm for constrained and unconstrained problems. In: IEEE congress on evolutionary computation. IEEE, pp 2919–2926
Tseng LY, Chen C (2008) Multiple trajectory search for large scale global optimization. In: IEEE Congress on evolutionary computation (IEEE world congress on computational intelligence). IEEE, pp 3052–3059
Zeng S, Yao S, Kang L, Liu Y (2005) An efficient multi-objective evolutionary algorithm: OMOEA-II. In: International conference on evolutionary multi-criterion optimization. Springer, Berlin, Heidelberg, pp 108–119
Rubio-Largo Á, González-Álvarez DL, Vega-Rodríguez MA, Gómez-Pulido JA, Sánchez-Pérez JM (2012) MO-ABC/DE: multiobjective artificial bee colony with differential evolution for unconstrained multiobjective optimization. In: IEEE 13th international symposium on computational intelligence and informatics (CINTI). IEEE, pp 157–162
Wang H, Wu Z, Rahnamayan S, Sun H, Liu Y, Pan JS (2014) Multi-strategy ensemble artificial bee colony algorithm. Inf Sci 279:587–603
Sharma TK, Abraham A (2020) Artificial bee colony with enhanced food locations for solving mechanical engineering design problems. J Ambient Intell Humaniz Comput 11(1):267–290
Panagant N, Pholdee N, Bureerat S, Kaen K, Yildiz AR, Sait SM (2020) Seagull optimization algorithm for solving real-world design optimization problems. Mater Test 62(6):640–644
Rao SS (1996) Engineering optimization. Wiley, New York
Coello CAC (2000) Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 41(2):113–127
Belegundu AD (1982) A study of mathematical programming methods for structural optimization. The University of Iowa, Iowa City
Arora JS (2004) Introduction to optimum design. Elsevier, Amsterdam
Coello CAC, Montes EM (2002) Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform 16(3):193–203
He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intell 20(1):89–99
Ku KJ, Rao SS, Chen L (1998) Taguchi-aided search method for design optimization of engineering systems. Eng Optim 30(1):1–23
Wang L, Li L (2010) An effective differential evolution with level comparison for constrained engineering design. Struct Multidiscip Optim 41(6):947–963
Zhang M, Luo W, Wang X (2008) Differential evolution with dynamic stochastic selection for constrained optimization. Inf Sci 178(15):3043–3074
Wang Y, Cai Z, Zhou Y, Fan Z (2009) Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-handling technique. Struct Multidiscip Optim 37(4):395–413
Liu H, Cai Z, Wang Y (2010) Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl Soft Comput 10(2):629–640
Mezura-Montes E, Coello CAC, Velázquez-Reyes J (2006) Increasing successful offspring and diversity in differential evolution for engineering design. In: Proceedings of the seventh international conference on adaptive computing in design and manufacture, pp 131–139
Sadollah A, Bahreininejad A, Eskandar H, Hamdi M (2013) Mine blast algorithm: a new population based algorithm for solving constrained engineering optimization problems. Appl Soft Comput 13(5):2592–2612
Acknowledgements
The authors are grateful to the responsible editor and the anonymous referees for their valuable comments and suggestions, which have improved the earlier version of this paper. This study was funded by the National Natural Science Foundation NSFC (12071133); the Key Cultivation Project of Xianyang Normal University (XSYK21044); the Key Project of Henan Educational Committee (20A110021); the Basic Research Program of Natural Science in Shaanxi Province (2022JM-349).
Ethics declarations
Conflict of interest
All authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix: Description of parameters and abbreviations
| Category | Symbol | Description |
| Sets | \(SN\) | Set of the food sources |
| | \(D\) | Set of the vector dimension |
| Variables | \(x_{ij}\) | The \(j\)th dimension of the food source \(i\) |
| | \(x_{kj}\) | The \(j\)th dimension of the neighbor solution \(k\) |
| | \(v_{ij}\) | The \(j\)th dimension of the candidate solution \(i\) |
| | \(P_{i}\) | The selection probability |
| | \({\text{Fit}}_{i}\) | The fitness value of \(x_{i}\) |
| | \(f(x_{i})\) | The function value of \(x_{i}\) |
| | \(P(B=x_{i})\) | The probability that \(x_{i}\) occurs |
| | \(I(B=x_{i})\) | The number of times \(x_{i}\) occurs |
| | \(I(A,B=x_{i})\) | The number of better solutions obtained based on \(x_{i}\) |
| | \(x_{nj}\) | The \(j\)th dimension of the neighbor solution \(n\) |
| | \(x_{gj}\) | The \(j\)th dimension of the current global solution \(g\) |
| Parameters | MR | Modification rate |
| | \({\text{MR}}_{\text{max}}\) | The upper bound of the modification rate |
| | \(x_{j}^{\text{max}}\) | The upper bound of the \(j\)th dimension of \(x_{i}\) |
| | \(x_{j}^{\text{min}}\) | The lower bound of the \(j\)th dimension of \(x_{i}\) |
| | \(r\), rand, \(\psi_{ij}\) | Uniform random numbers in [0, 1] |
| | \(\varphi_{ij}\) | A uniform random number in [−1, 1] |
| | \(\lambda\) | Adjustment parameter |
| | MaxDt | The maximum iteration number |
| | Dt | The current iteration number |
| | \(\text{trial}_{i}\) | The counter of the food source \(i\) |
| | limit | The counter threshold |
| | \(d(v, C)\) | The minimum Euclidean distance between \(v\) and the points in \(C\) |
| Abbreviations | ABC | Artificial bee colony |
| | ABC-AHC | Adaptive heterogeneous competition augmented ABC |
| | ABCM | ABC with memory |
| | ABCs | Artificial bee colony variants |
| | BEABC | ABC based on Bayesian estimation |
| | CNPSO | PSO based on velocity updating mechanism |
| | COABC | Converge-onlookers ABC |
| | CPSO | Co-evolutionary PSO |
| | dABC | Directed ABC |
| | DE | Differential evolution |
| | DEDS | DE with dynamic stochastic selection |
| | DELC | DE with level comparison |
| | DFS-ABC | Depth-first search framework ABC |
| | DPLABC | ABC with dynamic penalty and Levy flight |
| | EAs | Evolutionary algorithms |
| | FA | Firefly algorithm |
| | GA | Genetic algorithm |
| | GABC | Gbest-guided ABC |
| | HEAA | Hybrid evolutionary algorithm |
| | IOD | Introduction to optimum design |
| | KFABC | ABC based on knowledge fusion |
| | MBA | Mine blast algorithm |
| | MDE | Modified DE |
| | MGABC | ABC with multi-elite guidance |
| | MO-ABC/DE | Multi-objective artificial bee colony with differential evolution |
| | MOEA/D | Multi-objective evolutionary algorithm based on decomposition |
| | MPM | Mathematical programming methods |
| | MR | Modification rate |
| | MTS | Multiple trajectory search |
| | NABC | Best neighbor-guided ABC |
| | NSGAIILS | Fast non-dominated sorting genetic algorithm with local search |
| | OMOEAII | Orthogonal multi-objective evolutionary algorithm |
| | PSO | Particle swarm algorithm |
| | PSO-DE | PSO with DE |
| | SPA | Self-adaptive penalty approach |
| | TGA | GA with dominance-based tournament selection |
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wang, C., Shang, P. & Shen, P. An improved artificial bee colony algorithm based on Bayesian estimation. Complex Intell. Syst. 8, 4971–4991 (2022). https://doi.org/10.1007/s40747-022-00746-1