Introduction

The backtracking search algorithm (BSA) [1] is one of the most efficient and widely used tools for solving complex nonlinear optimization problems because of its simplicity and user-friendly structure. Owing to these advantages, numerous attempts have been made to further improve BSA, and the improved versions have been applied to a variety of optimization problems in the past few years [2,3,4,5,6,7,8,9]. For example, many BSA variants based on self-adaptive strategies [10,11,12,13] and hybridization mechanisms [2, 3, 11, 14] have been studied in the recent past. These improved and hybridized variants have been applied to obtain optimum solutions of problems from diverse fields, such as parameter estimation of photovoltaic models [11], identification of soil parameters [15], job-shop scheduling [9], and fuzzy multiproduct multistage scheduling [16].

Beyond BSA, many other biologically inspired algorithms have been used in numerous real-life applications. Because these algorithms can solve various types of optimization problems that are otherwise very hard to tackle with conventional methods, many researchers have introduced a variety of bio-inspired algorithms, and these algorithms and their variants have been used to solve numerous problems from both academia and industry. For example, Saha et al. [17] applied the HSOS algorithm to the pseudo-dynamic bearing capacity of shallow strip footings, where the critical failure mechanisms were determined via optimization with a genetic algorithm (GA). Different bio-inspired algorithms have been used to optimize the factor of safety and other related parameters in slope stability analysis [18]. The weight and cost of a cantilever retaining wall have been minimized using a hybrid algorithm, h-BOASOS, in [19]. Sharma et al. proposed MPBOA [20] by appending mutualism and commensalism phases to BOA and used it to segment images with a multilevel thresholding strategy. Chakraborty et al. [21] updated the whale optimization algorithm (WOA) by modifying its parameters, increasing the exploration ability through random initialization of solutions during the global search phase, and reducing the population; the resulting method, mWOAPR, was then used to segment COVID-19 X-ray images. Mohammadzadeh et al. [22] improved the grey wolf optimization (GWO) algorithm with a hill-climbing method and chaos theory to solve workflow scheduling in green cloud computing. Trinh et al. [23] used the moth-flame optimization (MFO) algorithm to tune fuzzy logic controllers to improve life span, reduce retransmission, and normalize the packet loss percentage in wireless sensor networks. Other modifications of bio-inspired algorithms can be found in [24,25,26].

One of the notable findings in the field of nature-inspired algorithms is the “No Free Lunch” (NFL) theorem [27]. It proves that no optimization algorithm is ideal for solving all optimization problems: a specific algorithm can produce promising results on one set of optimization problems yet perform poorly on another. This theorem keeps the field open for further studies that strengthen existing methods and propose new algorithms.

BSA is executed through four significant steps: the selection-I, mutation, crossover, and selection-II operators. It has two algorithm-specific control parameters: the scaling factor (F) and the mixrate parameter (M). The scaling factor controls the direction of the population in the mutation strategy, and the mixrate parameter controls the crossover process. Proper execution of BSA therefore requires appropriate values of these parameters [13], and these values can be set in many ways [8]. Some parameter settings enhance the convergence speed [12], whereas others are effective for solving separable functions [28]. Moreover, solving a specific optimization problem may require a setting in which the parameters vary automatically during the execution process [4].

The works carried out on BSA indicate that the algorithm produces adequate results on unimodal and multimodal problems but does not always reach the optimum solution; in some cases it converges to local optima, and on some test functions its performance degrades as the dimension increases. Therefore, in this work, the core search technique of the traditional BSA has been updated to address these issues and to achieve a better trade-off between exploration and exploitation.

Several researchers have also studied the “multi-swarm” and “multi-population island model” [29] concepts on state-of-the-art algorithms to improve their performance. In those works, the initial population is partitioned into several subpopulations [17]. The core objective is to maintain the diversity of the entire population over the search space while preserving good local and global search abilities.

Motivated by the arguments mentioned above, a novel enhanced BSA variant (ImBSA, in short) is proposed in this work by utilizing a multi-population-based approach with multiple mutation operators. During the evolutionary process, the whole population is split into multiple subpopulations. The study’s main objective is to combine an automated parameter-tuning approach with the mutation strategies applied to these subpopulations to produce a new variant of BSA. ImBSA has been applied to the single-objective CEC 2014 [30] test functions with dimension 50 and the CEC 2017 [31] test functions with dimension 10. An extensive performance comparison of ImBSA is carried out against several state-of-the-art standard algorithms, improved BSA variants, efficient DE variants, PSO variants, and some other hybrid variants. Moreover, the proposed algorithm is applied to solve seven real-world engineering design problems, namely the frequency modulation sounds parameter identification problem, the Lennard–Jones potential problem, the Tersoff potential function minimization problem, the spread spectrum radar polyphase code design problem, the cantilever beam design problem, the I-beam design problem, and the speed reducer design problem. Through the experimental study, we seek to better understand the behavior of the algorithm, in particular how exploratory and exploitative processes are balanced.

The remaining part of the paper is arranged in the following way: a brief discussion of the basic BSA is presented in “Overview of the Backtracking Search Algorithm” section. The “The Proposed Algorithm” section offers a detailed description of ImBSA. The “Performance Results Analysis of CEC 2014 and CEC 2017 Test Functions” section reports the performance analysis of the proposed ImBSA on CEC 2014 and CEC 2017 test functions. The application of ImBSA to real-world optimization problems has been presented in the “Application of ImBSA to Real-World Problem” section. Finally, the “Conclusion” section summarizes the findings of the present study.

Overview of the Backtracking Search Algorithm

The BSA is a population-based iterative, evolutionary algorithm that executes the process through five significant components: initialization, selection-I, mutation, crossover, and selection II.

The initial population (Pop) is generated from a uniform distribution \(U(lb,ub)\), as represented by Eq. (1).

$${Pop}_{i,j}=lb+rand\left(\mathrm{0,1}\right)*\left(ub-lb\right)$$
(1)

where \(i=1,2,3,\dots ,NP;\ j=1,2,3,\dots ,D\); \(lb\) and \(ub\) are the lower and upper bounds of the optimization problem; rand is a uniformly distributed random number in (0, 1); NP is the population size; and D is the dimension of the optimization problem.

The initial historical population (OldPop) is obtained using Eq. (2). The selection-I operator then redefines OldPop at the beginning of each iteration through Eq. (3). Once OldPop is determined, a random shuffling function is applied to change the position of each of its individuals using Eq. (4).

$${OldPop}_{i,j}=lb+rand\left(\mathrm{0,1}\right)*\left(ub-lb\right)$$
(2)
$$\text{If}\ a<b,\ \text{then}\ OldPop=Pop;\quad a,b\in \left(0,1\right)$$
(3)
$$OldPop=permuting \left(OldPop\right)$$
(4)

Here, permuting randomly changes the positions of the individuals of OldPop.

To generate the search-direction matrix, OldPop is used, which means that BSA takes partial advantage of previous experiences for generating a new trial population. BSA uses the mutation operator to generate a trial population (Mutant) during the evolutionary process using Eq. (5).

$$Mutant=Pop+F*\left(OldPop - Pop\right)$$
(5)

Here, \(\left(OldPop - Pop\right)\) is called the search-direction matrix, and the scaling factor F controls it; the value of F is defined by Eq. (6).

$$F=3*rndn,\ where\ rndn\in N\left(\mathrm{0,1}\right)$$
(6)

Here, \(N\left(0,1\right)\) denotes the standard normal distribution.

After the mutation operator, the BSA crossover operator is applied to generate the final trial population (T). It is executed in two steps. First, an NP×D binary matrix (map) is computed; the map is defined by Eqs. (7) and (8) and is controlled by the mixrate parameter (M). Second, the trial population (T) is determined according to Eq. (9).

$${map}_{i,u\left(1:\lceil M*rand*D\rceil \right)}=0 \mid u=\text{permuting}\left(\langle 1, 2, 3, \dots , D\rangle \right);$$
(7)
$${map}_{i,randi(D)}=1$$
(8)

Here, M is the mixrate parameter, whose value is 1 in the basic BSA; rand is a uniformly distributed random number in (0, 1); and randi(D) is a random integer between 1 and D.

$${T}_{i,j}=\left\{\begin{array}{ll}{Mutant}_{i,j}& if\ {map}_{i,j}=1\\ {Pop}_{i,j}& if\ {map}_{i,j}=0\end{array}\right.$$
(9)

Next, the trial population T is compared with the population Pop using the selection-II operator. In this operator, individuals of T with better objective function values replace the corresponding individuals of Pop, producing the population used in the next iteration, as given in Eq. (10).

$${\mathrm{Pop}}_{\mathrm{i},\mathrm{j}}=\left\{\begin{array}{lr}{\mathrm{T}}_{\mathrm{i},\mathrm{j}}& if\ f\left({\mathrm{T}}_{\mathrm{i},\mathrm{j}}\right)<f\left({\mathrm{Pop}}_{\mathrm{i},\mathrm{j}}\right)\\ {\mathrm{Pop}}_{\mathrm{i},\mathrm{j}}& \mathrm{Otherwise}\end{array}\right.$$
(10)
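For illustration, a minimal MATLAB sketch of one iteration of the basic BSA described by Eqs. (1)–(10) is given below. The function name bsa_iteration, the objective handle fobj, and the simple bound-clamping step are illustrative assumptions rather than part of the original BSA code; following the original algorithm, one of the two map-generation strategies of Eqs. (7) and (8) is chosen at random.

function [Pop, fPop, OldPop] = bsa_iteration(fobj, Pop, fPop, OldPop, lb, ub, M)
% One iteration of basic BSA (a sketch). Pop: NP-by-D population, fPop: NP-by-1
% objective values, OldPop: historical population, lb/ub: scalar bounds, M: mixrate.
[NP, D] = size(Pop);

if rand < rand, OldPop = Pop; end            % Eq. (3): selection-I
OldPop = OldPop(randperm(NP), :);            % Eq. (4): permuting

F = 3 * randn;                               % Eq. (6): scaling factor, rndn ~ N(0,1)
Mutant = Pop + F .* (OldPop - Pop);          % Eq. (5)

map = ones(NP, D);                           % Eqs. (7)-(8): binary crossover map
if rand < rand
    for i = 1:NP
        u = randperm(D);
        map(i, u(1:ceil(M * rand * D))) = 0; % Eq. (7)
    end
else
    map = zeros(NP, D);
    for i = 1:NP
        map(i, randi(D)) = 1;                % Eq. (8)
    end
end
T = map .* Mutant + (1 - map) .* Pop;        % Eq. (9): trial population
T = min(max(T, lb), ub);                     % simple clamping (illustrative only)

fT = arrayfun(@(i) fobj(T(i, :)), (1:NP)');  % evaluate the trial individuals
better = fT < fPop;                          % Eq. (10): selection-II
Pop(better, :) = T(better, :);
fPop(better) = fT(better);
end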

The Proposed Algorithm

This section discusses the newly proposed improved backtracking search algorithm (ImBSA), which incorporates a multi-population strategy, adaptive parameter settings, and a new mutation operator into the original BSA.

Modified Mutation Operator

For an optimization algorithm to succeed, a proper balance between its local and global search capabilities is essential [36, 37]. Global exploration capability indicates that the population visits the entire search region, whereas local exploitation capability indicates that the population searches in the neighborhood of the promising regions it has visited previously. The BSA mutation operation uses a historical population (\(OldPop\)) to produce new mutants and to avoid local optima through its exploration capability, and BSA remembers this historical population until a new population is created. Because only \(OldPop\) is used to determine the search direction, BSA takes only partial advantage of previous experience and does not exploit the best-performing individual found so far (\(BestPop\)), even though this best individual is available in every generation. For these reasons, we have modified the mutation operator of BSA and formulated the new mutation operator as follows:

$$Mutant=Pop+F*\left(OldPop-Pop\right)+F*\left(BestPop-Pop\right)$$
(11)

Multi-Population-Based Ensemble Mutation Strategy Approach

The ensemble of BSA mutation strategies can provide an efficient algorithm for solving various complex global optimization problems. In this study, two mutation strategies, “Mutation strategy 1” and “Mutation strategy 2,” are considered for producing a mutant, each of which has its respective advantage. “Mutation strategy 1” is used to explore the whole search space of the optimization problem, and “Mutation strategy 2” is used to exploit the population near the potential solutions.

Mutation strategy 1:

$${V}_{1}=Mutant = Pop + F* \left(OldPop - Pop\right)$$
(12)

Mutation strategy 2:

$${V}_{2}=Mutant=Pop+F*\left(OldPop-Pop\right)+F*\left(BestPop-Pop\right)$$
(13)

To exploit the effectiveness of the ensemble of the two mutation strategies, the whole population is divided into two sub-populations in each iteration, which may be equal or unequal in size; in this study, equal sub-populations are considered. If Pop denotes the entire population and \({Pop}_{1}\) and \({Pop}_{2}\) represent the two sub-populations, then \(Pop={Pop}_{1}\cup {Pop}_{2}\). After that, a shuffle function is applied to the whole population to randomly change each individual’s position. Thus, during the execution of ImBSA, individuals can exchange information by being processed with different mutation strategies in different iterations.
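A minimal MATLAB sketch of this multi-population ensemble mutation is given below, assuming equal sub-population sizes and a per-individual scaling factor vector F (see Eq. (14)); the function name ensemble_mutation and the variable names are illustrative.

function Mutant = ensemble_mutation(Pop, OldPop, BestPop, F)
% Ensemble mutation of ImBSA (a sketch): the shuffled population indices are
% split into two equal halves; the first half uses Eq. (12) (exploration) and
% the second half uses Eq. (13), which also exploits the best-so-far individual.
% Pop, OldPop: NP-by-D; BestPop: 1-by-D; F: NP-by-1 per-individual scaling factors.
[NP, D] = size(Pop);
idx  = randperm(NP);                         % shuffling changes membership every iteration
half = floor(NP / 2);
s1 = idx(1:half);
s2 = idx(half+1:end);

Fs = repmat(F(:), 1, D);                     % expand per-individual F across dimensions
B2 = repmat(BestPop, numel(s2), 1);

Mutant = zeros(NP, D);
Mutant(s1, :) = Pop(s1, :) + Fs(s1, :) .* (OldPop(s1, :) - Pop(s1, :));            % Eq. (12)
Mutant(s2, :) = Pop(s2, :) + Fs(s2, :) .* (OldPop(s2, :) - Pop(s2, :)) ...
              + Fs(s2, :) .* (B2 - Pop(s2, :));                                    % Eq. (13)
end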

Parameter Adaptation of BSA

BSA has three algorithm-specific parameters (NP, M, and F). F scales the search-direction matrix \(\left(OldPop - Pop\right)\), while the crossover operator is controlled by the mixrate parameter M, which determines how many components of an individual mutate in a new trial population [3]. However, an appropriate value of the control parameters, viz. the scaling factor F and the mixrate M, that guarantees good performance of BSA on different optimization problems has not been established so far [4, 7, 8, 10]. Hence, finding appropriate values of F and M is a difficult task in the study of BSA. Under this consideration, self-adaptation schemes for F and M are suggested in this work; they automatically maintain appropriate values of these two parameters during execution. The objective is to develop a self-adaptive algorithm based on BSA that can solve a wide range of real-world optimization problems. The initial value of the scaling factor F is chosen within 0.45 and 2.0 for the population Pop. Popi denotes the ith solution in the population, and the value of F is determined by Eq. (14) [4].

$${F}_{i}^{new}=\left\{\begin{array}{lc}{F}_{l}+rand\left(\mathrm{0,1}\right)*\left({F}_{u}-{F}_{l}\right),& f\left({T}_{i}\right)>f\left({Pop}_{i}\right)\\ {F}_{i},& otherwise\end{array}\right.$$
(14)

If \(F=0\), the new trial population is generated by the crossover operator only, without any mutation. The lower and upper values of F are taken as \({F}_{l}=0.45\) and \({F}_{u}=2.0\). Here, \(f({T}_{i})\) and \(f({Pop}_{i})\) represent the objective function values of the ith trial individual and the ith individual of the current population, respectively. From Eq. (14), it can be seen that the value of F always remains between 0.45 and 2.0. The value of F is determined before the mutation operator of BSA is performed.

The value of the mixrate parameter “M” for the ith population is determined by Eq. (15).

$${\mathrm{M}}_{i}=1-0.1*\left(1-{r}_{i}\right)$$
(15)

Here, \({r}_{i}\in [0, 1]\) is a uniformly distributed random number. From Eq. (15), the value of M always lies between 0.9 and 1. Thus, in the proposed ImBSA, the user does not need to guess suitable values of F and M.
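A small MATLAB sketch of this self-adaptation scheme is given below; the function name adapt_parameters and the vectorized form are illustrative assumptions.

function [F, M] = adapt_parameters(F, fT, fPop)
% Self-adaptive parameter update of ImBSA (a sketch of Eqs. (14)-(15)).
% F, fT, fPop: NP-by-1 vectors of current scaling factors and of the objective
% values of the trial and current populations; Fl and Fu follow the paper.
Fl = 0.45;  Fu = 2.0;
worse = fT > fPop;                                  % trial individual did not improve
F(worse) = Fl + rand(sum(worse), 1) .* (Fu - Fl);   % Eq. (14): resample F, otherwise keep it
M = 1 - 0.1 .* (1 - rand(numel(F), 1));             % Eq. (15): mixrate M_i between 0.9 and 1
end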

Boundary Control Operator

If an individual moves outside the search space, it should be returned into the search space (feasible region). To keep the individuals in the feasible region, the following rule has been introduced:

$${Pop}_{i,j}=\left\{\begin{array}{cc}lb+0.5*rand \left(\mathrm{0,1}\right)*\left(ub-lb\right),& if\ {Pop}_{i,j}<lb\\ ub-0.5*rand\left(0, 1\right)*\left(ub-lb\right) ,& if\ {Pop}_{i,j}> ub\end{array}\right.$$
(16)

where \(i=1,2,3,\dots ,NP;\ j=1,2,3,\dots ,D.\)
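A MATLAB sketch of this rule is shown below, assuming scalar bounds (as for the benchmark functions used here); boundary_control is an illustrative name.

function Pop = boundary_control(Pop, lb, ub)
% Boundary control of Eq. (16) (a sketch): components that leave [lb, ub] are
% reinitialized at a random point in the half of the range adjacent to the
% violated bound; lb and ub are scalars.
low  = Pop < lb;
high = Pop > ub;
r = rand(size(Pop));
Pop(low)  = lb + 0.5 .* r(low)  .* (ub - lb);
Pop(high) = ub - 0.5 .* r(high) .* (ub - lb);
end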

Incorporating the above modifications into the basic BSA, an improved algorithm called ImBSA is proposed to search the maximum possible region and to exploit the population information. The framework of ImBSA is given in Algorithm 1.
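Before the pseudo-code, a compact MATLAB sketch of the overall ImBSA loop, assembled from the component sketches above (ensemble_mutation, adapt_parameters, and boundary_control), is given for illustration. The ordering of the steps reflects our reading of the text, variable names are illustrative, and Algorithm 1 remains the authoritative description.

function [BestPop, fBest] = imbsa_sketch(fobj, NP, D, lb, ub, maxFEs)
% A high-level sketch of ImBSA (illustrative, not the authors' reference code).
Pop    = lb + rand(NP, D) .* (ub - lb);            % Eq. (1)
OldPop = lb + rand(NP, D) .* (ub - lb);            % Eq. (2)
fPop   = arrayfun(@(i) fobj(Pop(i, :)), (1:NP)');
F      = 0.45 + rand(NP, 1) .* (2.0 - 0.45);       % initial F within [0.45, 2.0]
M      = ones(NP, 1);                              % initial mixrate values
FEs    = NP;

while FEs < maxFEs
    [~, b] = min(fPop);  BestPop = Pop(b, :);      % best-so-far individual

    if rand < rand, OldPop = Pop; end              % Eq. (3): selection-I
    OldPop = OldPop(randperm(NP), :);              % Eq. (4)

    Mutant = ensemble_mutation(Pop, OldPop, BestPop, F);   % Eqs. (12)-(13)

    map = ones(NP, D);                             % crossover (Eqs. (7)-(9)), per-individual M
    for i = 1:NP
        u = randperm(D);
        map(i, u(1:ceil(M(i) * rand * D))) = 0;
    end
    T = map .* Mutant + (1 - map) .* Pop;
    T = boundary_control(T, lb, ub);               % Eq. (16)

    fT  = arrayfun(@(i) fobj(T(i, :)), (1:NP)');
    FEs = FEs + NP;

    [F, M] = adapt_parameters(F, fT, fPop);        % Eqs. (14)-(15)

    better = fT < fPop;                            % Eq. (10): selection-II
    Pop(better, :) = T(better, :);
    fPop(better)   = fT(better);
end
[fBest, b] = min(fPop);  BestPop = Pop(b, :);
end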

Algorithm 1 Pseudo-code of ImBSA

Performance Results Analysis of CEC 2014 and CEC 2017 Test Functions

The proposed ImBSA is applied to the CEC 2014 and CEC 2017 functions to assess its performance and is compared with various state-of-the-art algorithms. In the following subsections, we discuss the results obtained by ImBSA and the other algorithms on the test functions cited above.

Result and Discussion on CEC 2014 Test Function

The proposed ImBSA is executed on the suite of thirty test functions of the CEC 2014 special session and competition; a detailed description of these functions can be found in [30]. For a comprehensive comparison, each comparative algorithm is run 51 times on each test function with dimension 50, population size 100, and a maximum of D*1000 function evaluations. All the algorithms are run on the MATLAB R2010a platform. The lower and upper bounds of each test function are −100 and 100, respectively.

Comparison with State-of-the-Art Algorithms

Computational results of thirty CEC 2014 test functions with dimension 50, obtained by BSA [1], PSO [32], ABC[33], DE [34], ABSA [13], and ImBSA, are presented in Table 1. Results obtained by ImBSA are given in boldface if they are the best among the compared algorithms. From this table, several observations and conclusions can be made. Table 2 presents the performance results of ImBSA where it is inferior to, similar to, and superior to other compared algorithms. From Table 2, it is seen that the performance of ImBSA is superior to DE, PSO, ABC, BSA, and ABSA on 27, 27, 26, 29, and 29 test functions, respectively; inferior to DE, PSO, ABC, and BSA on 3, 3, 4, and 1 test functions respectively; and equal to ABSA on one occasion.

Table 1 Performance results of DE, PSO, ABC, BSA, ABSA, and ImBSA at dimension (D) 50 after reaching D*1000 FEs of CEC 2014 test functions over 51 runs with 100 population size
Table 2 The number of occasions where ImBSA is superior to, inferior to, or similar to DE, PSO, ABC, BSA, and ABSA on CEC 2014 functions with dimension 50

The Friedman rank test is conducted utilizing the mean results obtained by all algorithms on each benchmark function. For each algorithm, the average rank and the variance of ranks are presented in Table 3. At the 95% confidence level, the critical value, i.e., the F-statistic, is 2.576. The Friedman statistics, i.e., the F-Score values calculated from the average ranks over all CEC 2014 test functions, are reported in Table 3. It is noticed that, at the 95% confidence level, the calculated F-Score value (3.625) is greater than the critical value (2.576). Hence, the null hypothesis is rejected, which indicates that the average ranks obtained from the mean performance of all the benchmark test functions are statistically different.

Table 3 Friedman test on CEC 2014 benchmark functions with 50 dimensions of the algorithms DE, PSO, ABC, BSA, ABSA, and ImBSA using their mean performances

Next, a pairwise post hoc Bonferroni-Dunn test [35] is applied to identify whether the performance of the proposed ImBSA differs significantly from that of the other existing algorithms. According to the Bonferroni-Dunn test, “if the average rank difference is larger than the critical difference (CD) value, then the performance difference is statistically significant.” From Table 3, it is seen that, at the 95% confidence level, the obtained average rank differences are larger than the calculated CD (1.239978804). Therefore, from the above discussion and from Fig. 1, it can be concluded that the performance of ImBSA on the thirty CEC 2014 special session and competition test functions is statistically significant. Some representative convergence curves of ImBSA are shown in Fig. 2.
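The rank and critical-difference computations used throughout this section can be reproduced with a short MATLAB sketch such as the one below; it is an assumption about the exact procedure (ties are ignored), where results holds the N-by-k matrix of mean errors and q_alpha is the Bonferroni-Dunn critical value at the chosen confidence level.

function [avgRank, CD] = friedman_ranks(results, q_alpha)
% Average Friedman ranks and Bonferroni-Dunn critical difference (a sketch).
% results: N-by-k matrix (N test functions, k algorithms) of mean errors.
[N, k] = size(results);
ranks = zeros(N, k);
for p = 1:N
    [~, order] = sort(results(p, :));   % smaller mean error -> better (lower) rank
    ranks(p, order) = 1:k;              % ties are ignored in this sketch
end
avgRank = mean(ranks, 1);               % average rank of each algorithm
CD = q_alpha * sqrt(k * (k + 1) / (6 * N));   % Bonferroni-Dunn critical difference
end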

Fig. 1 Ranks obtained by the Friedman test on the CEC 2014 test functions at D = 50

Fig. 2 Convergence graphs of four functions (F4, F6, F18, and F24) with dimension D = 50 and D*1000 FEs (fitness value vs. function evaluations)

The results in Fig. 2 show that ImBSA converges substantially faster than BSA on all four functions. The convergence of the two methods is comparable in the early stages for several functions, whereas ImBSA converges better in the later stages.

Comparison with Some Self-Adaptive DE Variants

Computational results of the thirty test functions with dimension 50, obtained by DE/best/1/bin [36], DE/rand/2/bin [37], DE/target-to-best/2/bin [37], DE/target-to-best/1/bin [38], jDE [39], CoDE [40], and ImBSA, are reported in Table 4. Results obtained by ImBSA are given in boldface if they are the best among the compared algorithms. The numbers of occasions on which ImBSA is inferior to, equal to, and superior to the other comparative algorithms are presented in Table 5. From Table 5, it is seen that ImBSA is superior to DE/rand/2/bin, DE/best/1/bin, DE/target-to-best/1/bin, DE/target-to-best/2/bin, jDE, and CoDE on 28, 23, 24, 24, 28, and 26 test functions, respectively; inferior to them on 1, 6, 5, 5, 1, and 3 test functions, respectively; and equal to each algorithm on one test function.

Table 4 Performance results of DE/rand/2/bin, DE/best/1/bin, DE/target-to-best/1/bin, DE/target-to-best/2/bin, jDE, CoDE, and ImBSA at dimension (D) 50 after reaching D*1000 FEs of CEC 2014 test functions over 51 runs with 100 population size. Boldface represents the best result among the compared algorithms
Table 5 The number of occasions where ImBSA is superior to, inferior to, or similar to DE/rand/2/bin, DE/best/1/bin, DE/target-to-best/1/bin, DE/target-to-best/2/bin, jDE, and CoDE on CEC 2014 benchmark suite with dimension 50

The Friedman rank test is also applied to the thirty CEC 2014 special session and competition test functions (P) and the seven algorithms (A) in this comparison. For each algorithm, the average rank, the variance of ranks, and the average rank differences obtained from the mean results of all test functions are presented in Table 6. At the 95% confidence level, the critical value, i.e., the F-statistic, is 2.638. The calculated F-Score, i.e., the Friedman statistic based on the average ranks over all CEC 2014 test functions, is shown in Table 6. It is observed that, at the 95% confidence level, the calculated F-Score value (3.543640898) is greater than the critical value (2.638). Hence, the null hypothesis is rejected, indicating that the mean ranks obtained from the average results of all the test functions are statistically different. Table 6 also shows that, at the 95% confidence level, the computed critical difference (1.4714061) is smaller than the mean rank differences. Hence, from the analysis of the results and Fig. 3, it is observed that ImBSA’s performance on the thirty CEC 2014 special session and competition test functions is statistically significant at the 95% confidence level.

Table 6 Friedman test on CEC 2014 functions with 50 dimensions of DE/rand/2/bin, DE/best/1/bin, DE/target-to-best/1/bin, DE/target-to-best/2/bin, jDE, CoDE, and ImBSA using mean performances
Fig. 3 Ranks obtained by the Friedman test on the CEC 2014 test functions at D = 50

Comparison with some PSO variants

Table 7 compares the computational results obtained by PSO-cf [32], FI-PS, CLPSO [41], DNLPSO, PSOGSA [41], DE-PSO, and ImBSA on the thirty CEC 2014 test functions with dimension 50. Results obtained by ImBSA are given in boldface if they are the best among the compared algorithms. The numbers of occasions on which ImBSA is inferior to, equal to, and superior to the above algorithms are presented in Table 8. From Table 8, it is seen that the performance of ImBSA is superior to PSO-cf, FI-PS, CLPSO, DNLPSO, PSOGSA, and DE-PSO on 24, 27, 29, 24, 23, and 22 test functions; inferior on 6, 2, 0, 6, 7, 8, and 6 occasions; and similar to FI-PS, CLPSO, and DNLPSO on one occasion each.

Table 7 Experimental results of PSO-cf, FI-PSO, CLPSO, DNLPSO, PSOGSA, DE-PSO, and ImBSA at 50 D after reaching D*1000 FEs on CEC 2014 test functions over 51 runs and 100 population size
Table 8 The number of occasions where ImBSA is superior to, inferior to, or similar to PSO-cf, FI-PSO, CLPSO, DNLPSO, PSOGSA, and DE-PSO on CEC 2014 benchmark suite with dimension 50

Next, the Friedman rank test is conducted on the thirty CEC 2014 benchmark functions for ImBSA and the six PSO variants above. For each algorithm, the average rank and variance of ranks obtained from the mean results of all test functions are presented in Table 9, together with the critical value (F-statistic) at the 95% confidence level and the calculated F-Score (Friedman statistic). From Table 9, it is seen that, at the 95% confidence level, the calculated F-Score value (4.243870263) is greater than the critical value (2.638). Hence, the null hypothesis is rejected, indicating that the average ranks obtained from the mean results of all the test functions are statistically different. It is also observed from Table 9 and Fig. 4 that the mean rank differences are larger than the calculated CD value (1.4714061). The analysis thus confirms that ImBSA presents acceptable performance on the thirty CEC 2014 special session and competition test functions.

Table 9 Friedman test on CEC 2014 functions with 50 dimensions of the algorithms PSO-cf, FI-PSO, CLPSO, DNLPSO, PSOGSA, DE-PSO, and ImBSA using their mean performances
Fig. 4 Ranks obtained by the Friedman test on the CEC 2014 test functions at D = 50

Results and Discussion on CEC 2017 Test Function

The CEC 2017 test suite contains unimodal functions, simple multimodal functions, hybrid functions, and composition functions; details of these functions can be found in [31]. The results obtained by ImBSA are compared with ABSA [13], IBSA [4], OBSA [42], HBSA [2], SCA [43], and HSCA [44]. For a fair comparison, the parameter settings of all the algorithms are kept the same. The performance results of these algorithms on the test functions are presented in Table 10. Table 11 reports the number of functions on which ImBSA is better than, worse than, and similar to the compared algorithms; it shows that ImBSA is superior to ABSA, IBSA, OBSA, HBSA, SCA, and HSCA on 27, 28, 24, 26, 29, and 27 test functions, respectively.

Table 10 Experimental results of ImBSA, ABSA, IBSA, OBSA, HBSA, SCA, and HSCA for 10-dimensional CEC 2017 benchmark functions
Table 11 The number of occasions where ImBSA is superior to, inferior to, and similar to the compared algorithms on 10-dimension CEC 2017 benchmark functions

For statistical analysis, the Friedman rank test is conducted based on the mean performance of each algorithm on the CEC 2017 functions. The average ranks and variance of ranks obtained from the mean results of all CEC 2017 test functions are presented in Table 12, together with the critical value (F-statistic) and the Friedman statistic (F-Score) at the 95% confidence level. Table 12 shows that, at the 95% confidence level, the calculated F-Score value (37.86853147) is greater than the critical value (2.638), which indicates that the average ranks obtained from the mean results of all the test functions are statistically different. It is also observed from Table 12 and Fig. 5 that the mean rank differences are larger than the calculated CD value (1.496560163). Hence, the analysis shows that ImBSA presents outstanding performance on the CEC 2017 special session and competition test functions.

Table 12 Friedman test on CEC2017 real-world benchmark functions with 10 dimensions of ABSA, IBSA, OBSA, HBSA, SCA, HSCA, and ImBSA based on their mean performances
Fig. 5 Ranks obtained by the Friedman test on the CEC 2017 test functions at D = 10

Complexity Analysis

This section details the complexity of the ImBSA algorithm with dimensions 50 and 10 for the IEEE CEC 2014 and CEC 2017 test functions, respectively. Analyzing an algorithm’s complexity is useful for comparing the proposed algorithm with other algorithms in terms of the time consumed to solve a problem; the algorithm with lower complexity is considered more efficient. The present work calculates the complexity according to the definitions given in IEEE CEC 2014 and CEC 2017. Algorithms 2 and 3 show the simple code used to evaluate the value of \(T0\) for the IEEE CEC 2014 and CEC 2017 functions, respectively; these codes are recommended in the problem definitions and evaluation criteria of IEEE CEC 2014 and CEC 2017. The calculated algorithm complexities on the IEEE CEC 2014 [30] and CEC 2017 [31] test functions are shown in Tables 13 and 14.

Algorithm 2

for i = 1:1000000
    x = 0.55 + double(i);
    x = x + x; x = x/2; x = x*x;
    x = sqrt(x); x = log(x); x = exp(x); x = x/(x + 2);
end

Algorithm 3

x = 0.55;
for i = 1:1000000
    x = x + x; x = x/2; x = x*x; x = sqrt(x); x = log(x); x = exp(x); x = x/(x + 2);
end

  • \(T1\) is the time to execute 200,000 evaluations of benchmark function F18 by itself with D dimensions.

  • \(T2\) is the time to execute ImBSA with 200,000 evaluations of F18 in D dimensions.

  • \(\widehat{T2}\) is the average \(T2\) value of 5 runs.

Table 13 Runtime complexity of IEEE CEC 2014 test function with D = 50 (in seconds)
Table 14 Runtime complexity of IEEE CEC 2017 test function with D = 10 (in seconds)

According to Tables 13 and 14, the values of \(\widehat{T2}\) and of the complexity measure \(( \widehat{T2}-T1)/T0\) obtained for BSA are higher than those obtained for ImBSA at the respective dimensions.
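A MATLAB sketch of this complexity measurement is given below; cec_f18 and run_imbsa are hypothetical placeholders for benchmark function F18 and for ImBSA run with a fixed evaluation budget, and timing with tic/toc is an implementation assumption.

% T0: timing of the basic arithmetic loop (Algorithm 3 style)
tic;
x = 0.55;
for i = 1:1000000
    x = x + x; x = x/2; x = x*x; x = sqrt(x); x = log(x); x = exp(x); x = x/(x + 2);
end
T0 = toc;

% T1: 200,000 stand-alone evaluations of benchmark function F18
D = 50;
X = -100 + 200 * rand(200000, D);
tic;
for i = 1:200000
    y = cec_f18(X(i, :));               % cec_f18 is a placeholder
end
T1 = toc;

% T2hat: average runtime of ImBSA over 5 runs with 200,000 evaluations of F18
T2 = zeros(5, 1);
for r = 1:5
    tic;
    run_imbsa(@cec_f18, D, 200000);     % run_imbsa is a placeholder
    T2(r) = toc;
end
T2hat = mean(T2);

complexity = (T2hat - T1) / T0;         % value reported in Tables 13 and 14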

Figures 6 and 7 depict the complexity comparison graphically. From these two figures, it can be observed that the height of the cone for ImBSA is lower than that for BSA, which indicates that BSA is more complex than ImBSA. Thus, in terms of runtime complexity, ImBSA is superior to BSA.

Fig. 6 Algorithm complexity on the CEC 2014 test functions with 50D

Fig. 7 Algorithm complexity on the CEC 2017 test functions with 10D

State-of-the-Art Recent BSA Variants

In this section, the proposed ImBSA is compared with some recently proposed variants of BSA, namely GWO-BSA [45], e-SOSBSA [46], BS-DE [47], LFBSA [11], IBS [10], and hDEBSA [48]. For this comparison, the CEC 2017 test functions with ten dimensions have been considered.

The details of these functions can be seen in [31]. For a fair comparison, the parameter settings of all the algorithms are kept the same. The performance results of these algorithms on the test functions are presented in Table 15. Table 16 presents the number of functions where the proposed ImBSA is better than, worse than, and similar to the compared algorithms. This table shows that ImBSA is superior to GWO-BSA, BS-DE, LFBSA, IBS, and hDEBSA on all twenty-nine test functions, whereas it is superior to e-SOSBSA on 20 test functions.

Table 15 Experimental results of ImBSA, GWO-BSA, e-SOSBSA, BS-DE, LFBSA, IBS, and hDEBSA for 10-dimensional CEC 2017 benchmark functions
Table 16 The number of occasions where ImBSA is superior to, inferior to, and similar to the compared algorithms on 10-dimension CEC 2017 benchmark functions

For statistical analysis, the Friedman rank test is conducted based on the mean performance of each algorithm on the CEC 2017 functions. The average ranks and variance of ranks obtained from the mean results of all CEC 2017 test functions, together with the critical value (F-statistic) and the Friedman statistic (F-Score) at the 95% confidence level, are presented in Table 17.

Table 17 Friedman test on CEC 2017 real-world benchmark functions with 10 dimensions of GWO-BSA, e-SOSBSA, BS-DE, LFBSA, IBS, hDEBSA, and ImBSA based on their mean performances

Table 17 shows that, at the 95% confidence level, the calculated F-Score value (13.149847095) is greater than the critical value (2.638), which indicates that the average ranks obtained from the mean results of all the test functions are statistically different. It is also observed from Table 17 and Fig. 8 that the mean rank differences are larger than the calculated CD value (1.496560163). Hence, it can be concluded that ImBSA outperforms the competitors considered in this study on the CEC 2017 special session and competition test functions.

Fig. 8 Results of the Friedman rank test for the CEC 2017 test functions with 10D

Application of ImBSA to Real-World Problem

In this part, seven recognized engineering test problems [49] are solved with the proposed updated version of BSA, namely, (i) the frequency modulation sounds parameter identification problem, (ii) the Lennard–Jones potential problem, (iii) the Tersoff potential function minimization problem, (iv) the spread spectrum radar polyphase code design problem, (v) the cantilever beam design problem, (vi) the I-beam design problem, and (vii) the speed reducer design problem. These numerical engineering test problems can be summarized as follows:

Frequency Modulation Sounds Parameter Identification Problem [49]

Sound wave synthesis is a widespread optimization problem and plays a significant part in many modern music systems. Frequency-modulation (FM) synthesis can produce distinctive sounds but is not always easy to regulate: a small number of parameters plays a crucial role in delivering an extensive range of sound tones. Since sounds comparable to a target sound can be produced, an FM sound-matching approach is used here, in which ImBSA identifies the optimum parameter values. Initially, a set of parameters is proposed by ImBSA, and the FM synthesizer generates the corresponding sound.

The distance between the target and synthesized sounds is treated as the fitness (objective function) value of this task. This engineering benchmark problem is a difficult multimodal problem, and its dimension is six, corresponding to the six parameters of the sound wave. The objective function is defined as follows:

$${\varvec{Minimize}}\ f\left(y\right)=\sum\nolimits_{t=0}^{100}{\left(y\left(t\right)-{y}_{0}(t)\right)}^{2}$$
(17)

where Eqs. (18) and (19) give the primary sound wave and target sound.

$$y\left(t\right)={a}_{1}sin\left({\omega }_{1}t\theta +{a}_{2}sin\left({\omega }_{2}t\theta +{a}_{3}sin\left({\omega }_{3}t\theta \right)\right)\right)$$
(18)
$${y}_{0}\left(t\right)=1.0*sin\left(5.0*t\theta +1.5*sin\left(4.8*t\theta +2.0*sin\left(4.9*t\theta \right)\right)\right)$$
(19)

The six variables are \(y=\left\{{a}_{1},{\omega }_{1},{a}_{2},{\omega }_{2},{a}_{3},{\omega }_{3}\right\}\), and the domain of each variable is [− 6.4, 6.35]. Here, \(\theta =2\pi /100.\)
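A MATLAB sketch of this objective function, following Eqs. (17)–(19), is given below; fm_objective is an illustrative name.

function err = fm_objective(y)
% FM sound parameter identification objective (Eqs. (17)-(19)), a sketch.
% y = [a1 w1 a2 w2 a3 w3], each component in [-6.4, 6.35].
theta = 2 * pi / 100;
t = 0:100;
est = y(1) .* sin(y(2) .* t .* theta + y(3) .* sin(y(4) .* t .* theta + ...
      y(5) .* sin(y(6) .* t .* theta)));                        % Eq. (18)
tgt = 1.0 .* sin(5.0 .* t .* theta + 1.5 .* sin(4.8 .* t .* theta + ...
      2.0 .* sin(4.9 .* t .* theta)));                          % Eq. (19)
err = sum((est - tgt).^2);                                      % Eq. (17)
end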

Lennard–Jones Potential Problem [49]

The Lennard–Jones (LJ) potential problem is to minimize the molecular potential energy of a pure LJ cluster. The structure of an LJ cluster consists of an icosahedral core and a combination of surface lattice points. ImBSA is employed to find the cluster configuration with the lowest potential energy, i.e., the most stable molecular arrangement of the atoms. This multimodal energy-minimization problem involves a very large number of local minima. In LJ problems, the configuration of a cluster of “n” atoms is given by its Cartesian coordinates:

$$\overrightarrow{{p}_{i}}=\left\{\overrightarrow{{x}_{i}}, \overrightarrow{{y}_{i}}, \overrightarrow{{z}_{i}}\right\},i=\mathrm{1,2},3,\dots ,N$$
(20)

The pair potential of an N-atom Lennard–Jones cluster is defined mathematically as:

$${V}_{N}\left(p\right)=\sum\nolimits_{i=1}^{N-1}\sum\nolimits_{j=i+1}^{N}\left({r}_{i,j}^{-12}-2.{r}_{i,j}^{-6}\right)$$
(21)
$$\mathrm{Where},\ {r}_{i,j}=\Vert \overrightarrow{{p}_{j}}-\overrightarrow{{p}_{i}}\Vert$$
(22)

And the gradient is

$${\nabla }_{j}{V}_{N}\left(p\right)=-12\sum\nolimits_{i=1,i\ne j}^{N-1}\left({r}_{i,j}^{-14}-2.{r}_{i,j}^{-8}\right)\left(\overrightarrow{{p}_{j}}-\overrightarrow{{p}_{i}}\right),j=\mathrm{1,2},3,\dots ,N$$
(23)

Ten-atom clusters are considered, giving a total of 30 variables. Any relabeling or translation of the atoms \(\{p_j\}\) leads to the identical function value, as \({V}_{N}\left(p\right)\) only depends on the pair distances. The search ranges of the first three variables are (0, 4), (0, 4), and (0, π), and for the other variables the range is \(\left[-4-\frac{1}{4}\bigg\lceil\frac{i-4}{3}\bigg\rceil,4+\frac{1}{4}\bigg\lceil\frac{i-4}{3}\bigg\rceil\right]\), where \(\lceil r \rceil\) is the nearest integer of r.
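A MATLAB sketch of the cluster potential of Eqs. (21)–(22) is shown below; the coordinate layout [x1 y1 z1 x2 y2 z2 ...] is an assumption about the encoding, and lj_potential is an illustrative name.

function V = lj_potential(x)
% Lennard-Jones cluster potential (Eq. (21)), a sketch. x is a 3N-element vector
% of Cartesian coordinates; N = 10 gives the 30-variable problem used in the paper.
P = reshape(x(:), 3, []).';              % N-by-3 matrix of atom positions
N = size(P, 1);
V = 0;
for i = 1:N-1
    for j = i+1:N
        r = norm(P(j, :) - P(i, :));     % pair distance r_ij (Eq. (22))
        V = V + r^(-12) - 2 * r^(-6);    % pair contribution
    end
end
end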

Tersoff Potential Function Minimization Problem for Model Si(B) [49]

The total Tersoff potential energy function of the system is the sum of individual potentials of atoms and is defined as

$$f\left(\overrightarrow{{X}_{1}}, \overrightarrow{{ X}_{2}}, \overrightarrow{{X}_{3}},\dots ..,\overrightarrow{{X}_{N}}\right)={E}_{1}\left(\overrightarrow{{X}_{1}},\overrightarrow{{X}_{2}},\overrightarrow{{X}_{3}},\dots ..,\overrightarrow{{X}_{N}}\right)+\dots +{E}_{N}\left(\overrightarrow{{X}_{1}},\overrightarrow{{X}_{2}},\overrightarrow{{X}_{3}},\dots ..,\overrightarrow{{X}_{N}}\right)$$
(24)

The Tersoff potential of individual atoms can be formally defined as follows:

$${E}_{i}=\frac{1}{2}\sum\nolimits_{i\ne j}{f}_{c}({r}_{i,j})({V}_{R}({r}_{i,j})-{B}_{i,j}{V}_{A}({r}_{i,j})), \forall i$$
(25)

Here \({r}_{i,j}\) is the distance between atoms \(i\) and \(j\), \({V}_{R}\) is a repulsive term, \({V}_{A}\) is an attractive term, \({f}_{c}({r}_{i,j})\) is a switching function, and \({B}_{i,j}\) is a many-body term that depends on the positions of atoms \(i\) and \(j\) and the neighbors of atom \(i\). The term \({B}_{i,j}\) is given by

$${B}_{i,j}={\left(1+{\gamma }^{{n}_{1}}{\xi }_{i,j}^{{n}_{1}}\right)}^{-1/2{n}_{1}}$$
(26)

In Eq. (26), \({n}_{1}\) and γ are known fitted parameters, and the term \({\xi }_{i,j}\) for atoms, \(i\) and \(j\) (i.e., for bond), are given by:

$${\xi }_{i,j}=\sum\nolimits_{k\ne i}{f}_{c}\left({r}_{i,j}\right)g({\theta }_{ijk})\mathrm{exp}\left({\lambda }_{3}^{3}-{\left({r}_{ij}-{r}_{ik}\right)}^{3}\right)$$
(27)

In Eq. (27), the term \({\theta }_{ijk}\) is the bond angle between bonds \(ij\) and \(ik\), and the function \(g\) is given by:

$$g\left({\theta }_{ijk}\right)=1+{c}^{2}/{d}^{2}-{c}^{2}/\left({d}^{2}+{\left(h-\mathrm{cos}({\theta }_{ijk})\right)}^{2}\right)$$
(28)

The quantities \({\lambda }_{3}\) and \(c, d, h\) which appear in Eqs. (27) and (28) are also known as fitted parameters. The terms \({V}_{R}\left({r}_{i,j}\right)\), \({V}_{A}\left({r}_{i,j}\right)\) and the switching function \({f}_{c}\left({r}_{i,j}\right)\), appear in Eq. (25) are given by:

$${V}_{R}\left({r}_{i,j}\right)=A{e}^{-{\lambda }_{1}{r}_{ij}}$$
(29)
$${V}_{A}\left({r}_{i,j}\right)=B{e}^{-{\lambda }_{2}{r}_{ij}}$$
(30)
$${f}_{c}\left({r}_{i,j}\right)=\left\{\begin{array}{lrc}1& ,& {r}_{i,j}\le R-D\\ \frac{1}{2}-\frac{1}{2}sin\left(\frac{\pi \left({r}_{i,j}-R\right)}{D}\right)& ,& R-D<{r}_{i,j}<R+D\\ 0& ,& {r}_{i,j}\ge R+D\end{array}\right.$$
(31)

where A, B, \({\lambda }_{1}\), \({\lambda }_{2}\), R, and D are given fitted parameters.

Thus, the cost function now can be redefined as:

$$f\left(x\right)={E}_{1}\left(x\right)+{E}_{2}\left(x\right)+\dots +{E}_{N}\left(x\right), x\in\Omega$$
(32)

In general, for N atoms, the no. of unknown variables is n = 3 × N − 6. Now the cluster X of N atoms can be redefined as

$$x=\left\{{x}_{1},{x}_{2},{x}_{3},\dots {x}_{n}\right\}, x\in {IR}^{3N-6}$$
(33)

The search region for Si(B) model of Tersoff potential is

$$\Omega =\left\{\left\{{x}_{1},{x}_{2},{x}_{3},\dots {x}_{n}\right\}: -4.25\le {x}_{1},{x}_{i}\le 4.25; i=\mathrm{4,5},\dots ,n; 0\le {x}_{2}\le 4; 0\le {x}_{3}\le \pi \right\}.$$

The number of atoms is taken to be 10, with the total number of variables being 30. Reference [49] provides a complete description of this problem.

Spread Spectrum Radar Polyphase Code Design Problem [49]

The spread spectrum radar polyphase code design problem is one of the most frequently used design optimization problems for testing global optimization techniques. When a radar system based on pulse compression is designed, close attention must be given to the choice of a proper waveform. Several methods of radar pulse modulation allow pulse compression. Polyphase codes are appealing because they give lower sidelobes in the compressed signal and make digital processing techniques easier to apply. The synthesis considered here is based on the properties of the aperiodic autocorrelation function and the assumption of coherent radar pulse processing at the receiver. The problem has continuous variables and is a min–max nonlinear non-convex optimization problem with numerous local optima. It is mathematically formulated as follows:

$$Minimize\ f\left(X\right)=\mathrm{max}\{{\phi }_{1}\left(x\right), { \phi }_{2}\left(x\right), { \phi }_{3}\left(x\right),\dots ,{\phi }_{2m}\left(x\right)\}$$
(34)

where \(X=\left\{\left({x}_{1},{x}_{2},{x}_{3},\dots ,{x}_{D}\right)\in {R}^{D} \mid 0\le {x}_{j}\le 2\pi ,\ j=1,2,3,\dots ,D\right\}\) and \(m=2D-1\), with

$${\phi }_{2i-1}\left(X\right)=\sum\nolimits_{j=i}^{D}cos\left(\sum\nolimits_{k=\left|2i-j-1\right|+1}^{j}{X}_{k}\right), i=\mathrm{1,2},3,\dots ,D$$
(35)
$${\phi }_{2i}\left(X\right)=0.5+\sum\nolimits_{j=i+1}^{D}cos\left(\sum\nolimits_{k=\left|2i-j\right|+1}^{j}{X}_{k}\right), i=\mathrm{1,2},3,\dots ,D-1$$
(36)
$${\phi }_{m+i}\left(X\right)=-{\phi }_{i}\left(X\right), i=1, 2, 3,\dots ,m;$$
(37)

Here, the objective is to minimize the largest among the moduli of the autocorrelation function samples related to the compressed radar pulse at the optimum receiver output, while the variables represent symmetrized phase differences. This is an NP-hard optimization problem; owing to its objective function, it belongs to a class of continuous min–max global optimization problems. The problem includes 20 variables, and the variable domain is [0, 2π].
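A MATLAB sketch of the objective of Eqs. (34)–(37) is given below; radar_polyphase is an illustrative name.

function f = radar_polyphase(X)
% Spread spectrum radar polyphase code design objective (Eqs. (34)-(37)), a sketch.
% X: vector of D phase variables in [0, 2*pi]; the paper uses D = 20.
D = numel(X);
m = 2 * D - 1;
phi = zeros(1, 2 * m);
for i = 1:D                                          % Eq. (35)
    s = 0;
    for j = i:D
        s = s + cos(sum(X(abs(2*i - j - 1) + 1 : j)));
    end
    phi(2*i - 1) = s;
end
for i = 1:D-1                                        % Eq. (36)
    s = 0.5;
    for j = i+1:D
        s = s + cos(sum(X(abs(2*i - j) + 1 : j)));
    end
    phi(2*i) = s;
end
phi(m+1:2*m) = -phi(1:m);                            % Eq. (37)
f = max(phi);                                        % Eq. (34)
end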

Cantilever Beam Design Engineering Problem [49]

This problem concerns minimizing the weight of the square cross-sectional cantilever beam depicted in Fig. 9. The beam is rigidly supported at node 1, and a vertical force acts at node 5. The five design variables are the heights (or widths) of the beam elements, while the thickness is held fixed. The problem is formulated as follows:

Fig. 9 Cantilever beam design problem

$${\varvec{Minimize}}\ \boldsymbol{ }{f}_{5}\left(\overrightarrow{u}\right)=0.0624({u}_{1}+{u}_{2}+{u}_{3}+{u}_{4}+{u}_{5})$$
(38)

Subject to,

$${g}_{1}\left(\overrightarrow{u}\right)=\frac{61}{{u}_{1}^{3}}+\frac{37}{{u}_{2}^{3}}+\frac{19}{{u}_{3}^{3}}+\frac{7}{{u}_{4}^{3}}+\frac{1}{{u}_{5}^{3}}-1\le 0$$
(39)
$$\mathrm{Where}\ 0.01\le {u}_{j}\le 100 \left(j=\mathrm{1,2},\mathrm{3,4},5\right), \overrightarrow{u}=\left({u}_{1},{u}_{2},{u}_{3},{u}_{4},{u}_{5}\right)$$
(40)
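A compact MATLAB sketch of the objective and constraint of Eqs. (38)–(40) is given below; cantilever_beam is an illustrative name, and constraint handling (e.g., a penalty approach) is left to the optimizer.

function [f, g] = cantilever_beam(u)
% Cantilever beam design (Eqs. (38)-(40)), a sketch: weight objective and the
% single deflection constraint g <= 0; u has five components in [0.01, 100].
f = 0.0624 * sum(u(1:5));                                            % Eq. (38)
g = 61/u(1)^3 + 37/u(2)^3 + 19/u(3)^3 + 7/u(4)^3 + 1/u(5)^3 - 1;     % Eq. (39)
end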

I-beam Design Engineering Problem [46]

To further address real engineering difficulties, the ImBSA algorithm is tested on another design problem, the I-beam design problem, which involves four variables. The goal of I-beam design optimization is to minimize the vertical deflection of the I-beam shown in Fig. 10 while simultaneously satisfying the cross-sectional area and stress constraints under the specified loads. The I-beam optimization problem is formulated as follows:

Fig. 10 I-beam design problem

$${\varvec{Minimize}}\ {f}_{6}\left(\overrightarrow{u}\right)=\frac{5000}{\frac{{u}_{3}{\left({u}_{1}-2{u}_{4}\right)}^{3}}{12}+\frac{{u}_{2}{u}_{4}^{3}}{6}+2{u}_{2}{u}_{4}{\left(\frac{{u}_{1}-{u}_{4}}{2}\right)}^{2}}$$
(41)

Subject to,

$${g}_{1}\left(\overrightarrow{u}\right)=2{u}_{2}{u}_{3}+{u}_{3}\left({u}_{1}-2{u}_{4}\right)-300\le 0$$
(42)
$${g}_{2}\left(\overrightarrow{u}\right)=\frac{18{u}_{1}\times {10}^{4}}{{u}_{3}{({u}_{1}-2{u}_{4})}^{3}+2{u}_{2}{u}_{3}(4{u}_{4}^{2}+3{u}_{1}({u}_{1}-2{u}_{4}))}+\frac{15{u}_{2}\times {10}^{3}}{\left({u}_{1}-2{u}_{4}\right){u}_{3}^{3}+2{u}_{3}{{u}_{2}}^{3}}-56\le 0$$
(43)
$$\text{where}\ 10\le {u}_{1}\le 80,\ 10\le {u}_{2}\le 50,\ 0.9\le {u}_{3}\le 5,\ \text{and}\ 0.9\le {u}_{4}\le 5,$$
(44)
$$\overrightarrow{u}=\left({u}_{1},{u}_{2},{u}_{3},{u}_{4}\right)=(h,b,{t}_{w},{t}_{f})$$

Speed Reducer Design Engineering Problem

The proposed ImBSA is also employed to solve the speed reducer design problem [46]. The design variables of this problem are the number of teeth on the pinion \((z)\), the module of teeth \((m)\), the face width \((b)\), the diameter of shaft 1 (\({d}_{1}\)), the diameter of shaft 2 (\({d}_{2}\)), the length of shaft 1 between bearings (\({l}_{1}\)), and the length of shaft 2 between bearings (\({l}_{2}\)), as shown in Fig. 11. The objective of this engineering benchmark design optimization problem is to minimize the total weight of the speed reducer, subject to constraints on bending stress, surface stress, and transverse deflections of the shafts. The analytical expressions of the problem are as follows:

Fig. 11 Speed reducer design problem

$$\begin{array}{c}{\varvec{Minimize}}\ {f}_{7}\left(\overrightarrow{u}\right)=0.7854{u}_{1}{u}_{2}^{2}\left(3.3333{u}_{3}^{2}+14.933{u}_{3}-43.0934\right)-1.508{u}_{1}\left({u}_{6}^{2}+{u}_{7}^{2}\right)\\+7.4777\left({u}_{6}^{3}+{u}_{7}^{3}\right)+0.7854\left({u}_{4}{u}_{6}^{2}+{u}_{5}{u}_{7}^{2}\right),\end{array}$$
(45)

Subject to

$${g}_{1}\left(\overrightarrow{u}\right)=\frac{27}{{u}_{1}{u}_{2}^{2}{u}_{3}}-1\le 0$$
(46)
$${g}_{2}\left(\overrightarrow{u}\right)=\frac{397.5}{{u}_{1}{u}_{2}^{2}{u}_{3}^{2}}-1\le 0$$
(47)
$${g}_{3}\left(\overrightarrow{u}\right)=\frac{1.93{u}_{4}^{3}}{{u}_{2}{u}_{6}^{4}{u}_{3}}-1\le 0$$
(48)
$${g}_{4}\left(\overrightarrow{u}\right)=\frac{1.93{u}_{5}^{3}}{{u}_{2}{u}_{7}^{4}{u}_{3}}-1\le 0$$
(49)
$${g}_{5}\left(\overrightarrow{u}\right)=\frac{{[{\left(\frac{745{u}_{4}}{{u}_{2}{u}_{3}}\right)}^{2}+16.9\times {10}^{6}]}^\frac{1}{2}}{110.{u}_{6}^{3}}-1\le 0$$
(50)
$${g}_{6}\left(\overrightarrow{u}\right)=\frac{{\left[{\left(\frac{745{u}_{5}}{{u}_{2}{u}_{3}}\right)}^{2}+157.5\times {10}^{6}\right]}^\frac{1}{2}}{85.0{u}_{7}^{3}}-1\le 0$$
(51)
$${g}_{7}\left(\overrightarrow{u}\right)=\frac{{u}_{2}{u}_{3}}{40}-1\le 0$$
(52)
$${g}_{8}\left(\overrightarrow{u}\right)=\frac{{5u}_{2}}{{u}_{1}}-1\le 0$$
(53)
$${g}_{9}\left(\overrightarrow{u}\right)=\frac{{u}_{1}}{12{u}_{2}}-1\le 0$$
(54)
$${g}_{10}\left(\overrightarrow{u}\right)=\frac{1.5{u}_{6}+1.9}{{u}_{4}}-1\le 0$$
(55)
$${g}_{11}\left(\overrightarrow{u}\right)=\frac{1.1{u}_{7}+1.9}{{u}_{5}}-1\le 0$$
(56)

where \(2.6\le {u}_{1}\le 3.6\), \(0.7\le {u}_{2}\le 0.8\), \(17\le {u}_{3}\le 28\), \(7.3\le {u}_{4}\le 8.3\), \(7.3\le {u}_{5}\le 8.3\), \(2.9\le {u}_{6}\le 3.9\), \(5.0\le {u}_{7}\le 5.5\), and \(\overrightarrow{u}=\left({u}_{1},{u}_{2},{u}_{3},{u}_{4},{u}_{5},{u}_{6},{u}_{7}\right)=(b, m, z, {l}_{1},{l}_{2}, {d}_{1}, {d}_{2})\).
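A MATLAB sketch of the objective and the eleven constraints of Eqs. (45)–(56) is given below; speed_reducer is an illustrative name, and constraint handling is again left to the optimizer.

function [f, g] = speed_reducer(u)
% Speed reducer design (Eqs. (45)-(56)), a sketch; u = [u1 ... u7] within the
% bounds listed after Eq. (56); all constraints are of the form g(i) <= 0.
f = 0.7854*u(1)*u(2)^2*(3.3333*u(3)^2 + 14.933*u(3) - 43.0934) ...
    - 1.508*u(1)*(u(6)^2 + u(7)^2) + 7.4777*(u(6)^3 + u(7)^3) ...
    + 0.7854*(u(4)*u(6)^2 + u(5)*u(7)^2);                             % Eq. (45)
g = zeros(11, 1);
g(1)  = 27/(u(1)*u(2)^2*u(3)) - 1;                                    % Eq. (46)
g(2)  = 397.5/(u(1)*u(2)^2*u(3)^2) - 1;                               % Eq. (47)
g(3)  = 1.93*u(4)^3/(u(2)*u(6)^4*u(3)) - 1;                           % Eq. (48)
g(4)  = 1.93*u(5)^3/(u(2)*u(7)^4*u(3)) - 1;                           % Eq. (49)
g(5)  = sqrt((745*u(4)/(u(2)*u(3)))^2 + 16.9e6)/(110*u(6)^3) - 1;     % Eq. (50)
g(6)  = sqrt((745*u(5)/(u(2)*u(3)))^2 + 157.5e6)/(85*u(7)^3) - 1;     % Eq. (51)
g(7)  = u(2)*u(3)/40 - 1;                                             % Eq. (52)
g(8)  = 5*u(2)/u(1) - 1;                                              % Eq. (53)
g(9)  = u(1)/(12*u(2)) - 1;                                           % Eq. (54)
g(10) = (1.5*u(6) + 1.9)/u(4) - 1;                                    % Eq. (55)
g(11) = (1.1*u(7) + 1.9)/u(5) - 1;                                    % Eq. (56)
end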

Results and Discussions on Real-World Problems

For the first four problems, ImBSA is executed with 1000 function evaluations, a population size of 50, and 25 independent runs. Tables 18, 19, 20, and 21 report the optimal solutions obtained for these four problems; the results of the compared algorithms are taken from reference [49].

Table 18 Parameter estimation for frequency-modulated (FM) sound waves
Table 19 Lennard–Jones potential problem
Table 20 Tersoff potential function minimization problem
Table 21 Spread spectrum radar polyphase problem

For the frequency-modulated (FM) parameter estimation problem, the results achieved by ImBSA are compared with those of ISOS, SOS, ASOS_BF1, ASOS_BF2, ASOS_BF1&2, BSA, ABSA, and DE. Table 18 lists the best, median, worst, mean, and SD values obtained by ImBSA and the other algorithms on this problem. Table 18 shows that ImBSA obtains the best solution; therefore, the findings suggest that the performance of ImBSA on this problem is the best among the compared methods.

For the Lennard–Jones potential problem, the outcomes achieved by the proposed algorithm are compared with ISOS, SOS, ASOS_BF1, ASOS_BF2, ASOS_BF1&2, BSA, ABSA, and DE. Table 19 presents the best, median, worst, mean, and SD values obtained by ImBSA and the aforesaid algorithms. Table 19 shows that ImBSA obtains the best, median, and mean results. Thus, it can readily be concluded that the performance of ImBSA is the best among the compared techniques.

For the Tersoff Si(B) potential problem, the findings are compared with ISOS, SOS, ASOS_BF1, ASOS_BF2, ASOS_BF1&2, BSA, ABSA, and ImBSA. Table 20 lists the best, median, worst, mean, and SD results for this optimization task. Table 20 shows that BSA obtains the best solution, although ImBSA obtains the best median and mean outcomes. The results suggest that ImBSA performs better than most of the algorithms compared here.

Table 21 presents the results obtained by ISOS, SOS, ASOS_BF1, ASOS_BF2, ASOS_BF1&2, BSA, ABSA, DE, and ImBSA for the radar polyphase code design problem. Table 21 shows that ImBSA obtains the best solution, while ASOS_BF2 gives the worst outcome. It can therefore be said that ImBSA works better than the other approaches on this problem, which suggests its overall suitability for solving it.

The proposed ImBSA has been employed to solve the cantilever beam design problem with the same configuration as in reference [46]. The optimal output attained is shown in Table 22; results other than those of ImBSA are taken from [46]. In comparison with m-SCA, SCA, CS, GCA(I), GCA(II), MMA, CONLIN, and e-SOSBSA, the optimal result of the proposed ImBSA is better. The comparative study shown in Table 22 thus demonstrates ImBSA’s efficient performance relative to the other optimizers considered in this study.

Table 22 Comparison of performance result of the cantilever beam design problem

The I-beam design engineering optimization problem has previously been solved by Cauchy-GWO, ARSM, improved ARSM, CS, GWO, and e-SOSBSA. The proposed ImBSA is also employed to solve this optimization problem with the same parameter configuration as used in reference [46]. The best outcome achieved by ImBSA is presented in Table 23, and the results other than ImBSA are taken from [46]. The optimized values of the four design variables are 80, 50, 1.7646138, and 5.0000000, and the optimal vertical deflection of the I-beam is 0.0071004879203201. Table 23 demonstrates that ImBSA handles this beam design problem with better performance than the other algorithms listed in the table.

Table 23 Comparison of performance result of the I-beam design problem

The speed reducer design problem has previously been optimized with different algorithms, as reported in [46]. Table 24 summarizes the best solutions of the reported methods. To compare the performance of ImBSA, the results were obtained with the same parameter setting as in reference [46]. The optimum output is compared with e-SOSBSA, SC-GWO, CS, SCA, PSO, wPSO, GWO, Ray and Saini, mGWO, wGWO, m-SCA, OBSCA, SSA, MFO, WOA, ISCA, Chaotic SSA, Akhtar et al., Ku et al., Montes, and Coello; the results of these methods are taken from [46]. The optimum values of the decision variables obtained by ImBSA are 3.5008989, 0.7000000, 17.0000000, 7.3007000, 7.7159679, 3.3538547, and 5.2846039, and the corresponding optimum weight of the speed reducer is 2994.4710501. The comparison with the other algorithms in the same table demonstrates the superiority of the proposed ImBSA.

Table 24 Comparison of performance result of the speed reducer problem

It is evident from the preceding discussion, findings, and comparisons that ImBSA offers improved quality, efficiency, and robustness in addressing real-life challenges. In addition, ImBSA may be used as an alternative optimizer for practical applications.

Conclusion

In this study, a new mutation strategy for BSA has been proposed, which better exploits the search capability of the algorithm. A new modified parameter scheme has been considered in ImBSA, and an ensemble of mutation strategies with a multi-population concept has also been incorporated. Combining these concepts, a novel BSA variant, called multi-population-based self-adaptive BSA (ImBSA, in short), has been proposed. The performance of ImBSA on the CEC 2014 and CEC 2017 test functions has been tested extensively by comparing it with several algorithms from the literature; the comparison results show that ImBSA outperforms several other efficient algorithms. For statistical analysis, a nonparametric Friedman test followed by a pairwise post hoc Bonferroni-Dunn test at the 95% confidence level is utilized to highlight the significant performance differences between ImBSA and the other compared algorithms. The statistical analysis also confirms that ImBSA is statistically more effective than the compared algorithms.

The study also addresses the efficiency of the proposed algorithm on seven engineering design optimization problems. The results on these engineering problems likewise confirm that ImBSA improves significantly on other optimizers from the literature.

Based on the findings of experimental results, statistical testing, and real-world applications, some concluding remarks are given below:

  • Compared with conventional BSA, the proposed ImBSA achieves an improved balance between exploitation and exploration. With the self-adaptive scaling factor and mixrate parameters, the stability and convergence speed of the suggested ImBSA have been improved.

  • The numerical findings of the CEC 2014 and CEC 2017 benchmark suites demonstrate that the proposed ImBSA has increased search capacity compared to some of the competitors.

  • The convergence curves drawn from the fitness values at each generation illustrate the stability of the suggested methodology during exploitation and its ability to identify the optimal value throughout the search process.

  • The convergence and statistical validation studies of the data confirm the improved performance of ImBSA when addressing optimization problems compared with the traditional BSA and the other comparative optimization methods.

  • The numerical findings on the engineering test problems, which include diverse levels of complexity and different search spaces, also confirm the efficiency of the suggested ImBSA.

Limitation and Future Direction

There is hardly any study without limitations. Although we have tried to enhance the efficiency of BSA, this study has several limitations. Some of these limitations and corresponding future directions are given below:

  • Only an adaptive parameter-setting scheme for the control parameter values has been suggested here. However, this can be done in many other ways, such as fuzzy adaptive schemes, neighborhood-based schemes, schemes based on the topology of the population, and fitness-distance-based schemes. These are some future directions of the current work.

  • The algorithm’s stochastic analysis, such as convergence analysis and stability analysis, may also be analyzed so that the proposed algorithm may have a strong mathematical base.

  • The best-guided search-direction matrix is used in the proposed method, but this can be further improved in different ways.

  • This method is applied only on single-objective constrained and unconstrained optimization problems which may be extended to the multi-objective scenario.

  • This approach may be used to solve numerous optimization problems from academia and industry.