Six Novel Hybrid Extreme Learning Machine–Swarm Intelligence Optimization (ELM–SIO) Models for Predicting Backbreak in Open-Pit Blasting

Backbreak (BB) is one of the serious adverse consequences of blasting in open-pit mines, because it frequently reduces economic benefits and seriously affects mine safety. Therefore, rapid and accurate prediction of BB is of great significance to mine blasting design and other production activities. For this purpose, six different swarm intelligence optimization (SIO) algorithms were used to optimize the extreme learning machine (ELM) model for BB prediction, i.e., ELM-based particle swarm optimization (ELM–PSO), ELM-based fruit fly optimization (ELM–FOA), ELM-based whale optimization algorithm (ELM–WOA), ELM-based lion swarm optimization (ELM–LSO), ELM-based seagull optimization algorithm (ELM–SOA) and ELM-based sparrow search algorithm (ELM–SSA). In total, 234 data records from blasting operations in the Sungun mine in Iran were used in this study, including six input parameters (special drilling, spacing, burden, hole length, stemming, powder factor) and one output parameter (i.e., BB). To evaluate the predictive performance of the different optimized and initial models in the training and testing phases, six performance indicators were used: the root mean square error (RMSE), Pearson correlation coefficient (R), determination coefficient (R2), variance accounted for (VAF), mean absolute error (MAE) and sum of square error (SSE). The results show that ELM–LSO was the best model to predict BB, with RMSE of 0.1129 (R: 0.9991, R2: 0.9981, VAF: 99.8135%, MAE: 0.0706 and SSE: 2.0917) in the training phase and 0.2441 in the testing phase (R: 0.9949, R2: 0.9891, VAF: 98.9806%, MAE: 0.1669 and SSE: 4.1710). Hence, ELM techniques combined with SIO algorithms are an effective method to predict BB.


INTRODUCTION
Blasting has been regarded as the main effective technique for rock excavation in open-pit and underground mines (Bhandari 1997; Armaghani et al. 2016; Wang et al., 2018a, b; Huo et al., 2020; Du et al., 2022). Nevertheless, only a small amount (20-30%) of the explosive energy is used to break rock with current blasting techniques, and most (70-80%) of the explosive energy is lost, producing varying degrees of adverse consequences (Fig. 1) such as flyrock, backbreak and air blast (Agrawal and Mishra 2018; Uyar and Aksoy 2019; Fang et al., 2020; Fattahi and Hasanipanah 2021; Ramesh et al., 2021; Ye et al., 2021; Zhou et al., 2021a, b, c; Dai et al., 2022). Among these unwanted consequences, backbreak (BB) is one of the continuous and focused concerns of blasting engineers and scholars (e.g., Shirani et al., 2016). BB is defined as the damaged rock beyond the limits of the last row of holes (Konya and Walter 1991; Jimeno et al., 1995). Different studies have investigated various parameters associated with BB, including controllable and uncontrollable blasting parameters (Konya and Walter 1991; Bhandari 1997; Konya 2003; Monjezi and Dehghani 2008; Monjezi et al., 2010a, b). Controllable parameters include blast design parameters and explosive properties such as burden (B), spacing (S), stemming (ST), subdrilling (SU), blasthole length (BL), blasthole diameter (BD), stiffness ratio (SR), explosive type, explosive density, explosive strength, powder factor (PF) and coupling ratio (Sari et al., 2014; Ghasemi 2017). Konya (2003) found that BB is positively correlated with ST and B. Monjezi and Dehghani (2008) considered that ST/B, PF, charge per delay and other parameters have the greatest influence on BB. Monjezi et al. (2010a) reported the different influences of ST, B, S and depth of the hole (DH) on BB. Sari et al. (2014) showed that reducing explosive strength and PF can effectively reduce BB.
Several researchers reported the effects of different explosive materials and coupling ratios on BB (Wilson and Moxon 1988; Firouzadj et al., 2006; Iverson et al., 2009; Enayatollahi and AghajaniBazzazi 2010). Uncontrollable blasting parameters refer to the physical and mechanical properties of rock masses, such as rock density, rock porosity, rock strength, discontinuity orientation, and discontinuity strength (Ghasemi 2017). Bhandari and Badal (1990) investigated the relationship between the orientation of discontinuities and BB. Bhandari (1997) showed the effects of rock mass characteristics on BB. Jia et al. (1998) concluded, based on numerical simulation results, that joints with a certain dip angle were one of the main causes of BB.
Therefore, this study aimed to develop novel hybrid ELM models by using six SIO algorithms to predict BB in an open pit, i.e., ELM-based particle swarm optimization (ELM-PSO), ELM-based fruit fly optimization (ELM-FOA), ELM-based whale optimization algorithm (ELM-WOA), ELM-based lion swarm optimization (ELM-LSO), ELM-based seagull optimization algorithm (ELM-SOA) and ELM-based sparrow search algorithm (ELM-SSA). The rest of this study is organized as follows. The section "Methodology" introduces the ELM model and the six SIO algorithms. The section "Dataset and Preparation" describes the data sources and presents a detailed data analysis. The section "Performance Indicators" introduces the six indicators used to evaluate the performance of the different models. The section "Results and Discussion" describes the development of all models and presents their results for BB prediction. The section "Conclusions and Summary" gives the main concluding remarks of this study.

METHODOLOGY

Extreme Learning Machine

Huang et al. (2006) proposed a special neural network model, called the ELM, as one of the single-layer feed-forward neural network (SLFN) architectures. This model has one hidden layer and can easily handle optimization problems by simply adjusting the number of neurons in that layer (Zhang et al., 2021a, b). Assume a training set D that contains K-dimensional input vectors x_i = [x_i1, x_i2, ..., x_iK]^T and L-dimensional output vectors t_i = [t_i1, t_i2, ..., t_iL]^T. An effective ELM model is built to simulate the internal connection between the input and output vectors according to the following three steps.
Step 1: Building an SLFN. The purpose of this step is to establish preliminarily the input weights W_q and biases B_q between the input layer and the hidden layer, and the output weights β_q between the hidden layer and the output layer. An SLFN with M neurons in the hidden layer can be written as:

u_j = Σ_{q=1}^{M} β_q g(W_q · x_j + B_q), j = 1, 2, ..., P

where g(x) represents the activation function, W_q = [W_q1, W_q2, ..., W_qK]^T is the weight vector connecting the inputs to the q-th hidden neuron, and B_q is the bias of the q-th hidden neuron.
Step 2: Selecting weights and biases. These have an important effect on the output for a given number of neurons in the hidden layer. The goal is to minimize the output error

Σ_{i=1}^{P} ||t_i − u_i|| = 0

where u_i = [u_i1, u_i2, ..., u_iL]^T represents the network output vector. The SLFN in Step 1 can then be written compactly in terms of the hidden-layer output matrix H and the output weights β as:

H β = T

Step 3: Estimating the weights between the hidden and output layers. The optimal output weights can be solved through the inverse of the hidden-layer output matrix (Shariati et al., 2019), so that the target vector T_v is closest to the real vector. Therefore, the output weight vector β̂ can be expressed as:

β̂ = H† T

where H† is the Moore-Penrose generalized inverse of H.
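The three training steps above can be condensed into a few lines of NumPy. This is an illustrative sketch, not the exact configuration of this study: the tanh activation, the random-weight distribution and the toy dataset are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_train(X, T, n_hidden=60):
    """Train a single-hidden-layer ELM: random input weights and biases,
    output weights solved via the Moore-Penrose pseudoinverse."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))   # input weights (random, fixed)
    B = rng.standard_normal(n_hidden)                 # hidden biases (random, fixed)
    H = np.tanh(X @ W + B)                            # hidden-layer output matrix, g = tanh
    beta = np.linalg.pinv(H) @ T                      # output weights: beta = H† T
    return W, B, beta

def elm_predict(X, W, B, beta):
    return np.tanh(X @ W + B) @ beta

# toy check: fit y = sum of the six inputs
X = rng.random((200, 6))
T = X.sum(axis=1, keepdims=True)
W, B, beta = elm_train(X, T, n_hidden=60)
pred = elm_predict(X, W, B, beta)
print(float(np.sqrt(np.mean((pred - T) ** 2))))  # training RMSE
```

Because only beta is learned (by a single least-squares solve), training is essentially instantaneous compared with gradient-based networks; the price is that prediction quality depends on the randomly drawn W and B, which is exactly what the SIO algorithms later optimize.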

Swarm Intelligence Optimization
Particle Swarm Optimization

Kennedy and Eberhart (1995) proposed the particle swarm optimization (PSO) algorithm, inspired by the predation behavior of birds, to solve optimization problems. The core of PSO comprises massless particles with velocity and position: velocity indicates how fast a bird searches for food, and position determines the direction of its search. Each bird (particle) is independent but shares the position of the food with the others. Throughout the search space, the individual extremum is the best food position found so far by each bird, and the birds move toward the best food location by comparing the shared food positions.
The velocity and position of each bird in the (n + 1)-th iteration can be expressed by two formulas, thus:

v_i^{n+1} = u v_i^n + c_1 r_1 (P_individual,i^n − x_i^n) + c_2 r_2 (P_group,i^n − x_i^n)

x_i^{n+1} = x_i^n + v_i^{n+1}, i = 1, 2, ..., N

where N is the number of particles, u is an inertia factor not less than zero, c_1 and c_2 are the individual and social learning factors (c_1 = c_2 = 2 in this study), r_1 and r_2 are random numbers between 0 and 1, and P_individual,i^n and P_group,i^n are the optimal positions found so far by the individual and the group, respectively.
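These update rules can be sketched as a minimal optimizer. The learning factors c_1 = c_2 = 2 follow this study; the linearly decreasing inertia schedule, search bounds and sphere test function are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(f, dim, n_particles=30, n_iter=200, c1=2.0, c2=2.0):
    """Minimal PSO: velocity/position updates with personal and global bests."""
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    p_best = x.copy()                                 # individual extrema
    p_best_val = np.array([f(p) for p in x])
    g_best = p_best[p_best_val.argmin()].copy()       # group extremum
    for n in range(n_iter):
        u = 0.9 - 0.5 * n / n_iter                    # inertia factor (assumed schedule)
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = u * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < p_best_val
        p_best[improved] = x[improved]
        p_best_val[improved] = vals[improved]
        g_best = p_best[p_best_val.argmin()].copy()
    return g_best, float(p_best_val.min())

best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=3)
print(best_val)  # near-zero minimum of the sphere function
```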

Fruit Fly Optimization Algorithm
Pan (2012) proposed a new algorithm, named the fruit fly optimization algorithm (FOA), based on the foraging behavior of fruit flies to solve global optimization problems. The fruit fly is considered one of the best foragers in nature because of its excellent senses of smell and vision. The body structure and foraging process of the fruit fly are depicted in Figure 2. Within a certain search space, the fruit fly first activates its olfactory function to search for food; after approaching, it uses its keen vision to locate the food precisely and finally determines the position. Therefore, there are two main steps in the FOA.
Step 1: Osphresis search. Assume the position of the i-th fruit fly is (x_i, y_i), from which it searches randomly for food based on olfactory feedback; the position of the food is not known in advance. The smell concentration (Smell_i) is assumed to be inversely proportional to the distance (Dist_i) of the i-th fruit fly from the origin (0, 0). The osphresis foraging can therefore be expressed as:

Dist_i = sqrt(x_i^2 + y_i^2), Smell_i = 1 / Dist_i

Step 2: Vision search. The olfactory search determines the fly with the best smell concentration (Smellbest), and the swarm then uses vision to move toward that fly's position (x_axis, y_axis).
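The two steps can be sketched as a greedy loop; the step size, initial search range and the toy objective defined on the smell value are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def foa_minimize(fitness, n_flies=30, n_iter=200, step=1.0):
    """Minimal FOA sketch: osphresis (random steps around the swarm location)
    followed by vision (relocating the swarm to the best-smelling fly)."""
    x_axis, y_axis = rng.uniform(-10, 10, 2)       # initial swarm location
    best_val = np.inf
    best_pos = (float(x_axis), float(y_axis))
    for _ in range(n_iter):
        # Step 1: osphresis search near the current swarm location
        xs = x_axis + step * rng.uniform(-1, 1, n_flies)
        ys = y_axis + step * rng.uniform(-1, 1, n_flies)
        dist = np.sqrt(xs ** 2 + ys ** 2)          # distance to the origin
        smell = 1.0 / dist                         # smell concentration judgment value
        vals = np.array([fitness(s) for s in smell])
        # Step 2: vision search - move the swarm to the best fly
        i = int(vals.argmin())
        if vals[i] < best_val:
            best_val = float(vals[i])
            best_pos = (float(xs[i]), float(ys[i]))
            x_axis, y_axis = xs[i], ys[i]
    return best_pos, best_val

# toy objective: find a position whose smell value is closest to 0.5
pos, val = foa_minimize(lambda s: abs(s - 0.5))
print(pos, val)
```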

Whale Optimization Algorithm
Mirjalili and Lewis (2016) developed the whale optimization algorithm (WOA) by mimicking the predatory behavior of humpback whales in the ocean. Whales are relatively intelligent creatures, thanks to having more than twice as many spindle cells as humans; humpback whales have even developed their own language (Hof et al., 2007). The most interesting behavior of humpback whales is their foraging, called bubble-net hunting, as shown in Figure 3a. In this foraging mode, humpback whales dive 12-15 m to the bottom of a shoal and then attack by creating bubbles along a circular or '9'-shaped path (Mirjalili and Lewis 2016; Fan et al., 2020; Zhou et al. 2022c). Before hunting, humpback whales are very good at locating and encircling prey, and this behavior can be expressed mathematically as:

E = |C_2 X*_w(n) − X(n)|

X(n + 1) = X*_w(n) − C_1 E

C_1 = 2 e u − e, C_2 = 2 u

where C_1 and C_2 are coefficient vectors, X*_w and X represent the best and current positions of the whales, respectively, in the n-th iteration, E is the absolute value of the distance between a whale and the prey, e decreases from 2 to 0 in the course of the iterations, and u changes randomly in [0, 1].
After encircling their prey, humpback whales shrink the encircling circle and constantly reposition themselves to complete the bubble-net feeding behavior. As shown in Figure 3b, the shrinking encirclement is achieved by decreasing e in the encircling equations. Meanwhile, the positions of the humpback whales (X, Y) also change along a spiral, as shown in Figure 3c. The new position of a whale is expressed as:

X(n + 1) = E' e^{s l} cos(2π l) + X*_w(n)

where E' = |X*_w(n) − X(n)| is the distance between the whale and the prey, l changes randomly in [−1, 1], and s is a constant that defines the spiral shape.
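A condensed sketch of the encirclement and spiral moves follows; the 50/50 choice between the two moves, the spiral constant s = 1 and the sphere test function are assumptions, and the random-whale exploration step of the full WOA is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)

def woa_minimize(f, dim=3, n_whales=30, n_iter=300, s=1.0):
    """Simplified WOA sketch: shrinking encirclement and spiral updates
    around the best position found so far."""
    X = rng.uniform(-5, 5, (n_whales, dim))
    vals = np.array([f(x) for x in X])
    best = X[vals.argmin()].copy()
    best_val = float(vals.min())
    for n in range(n_iter):
        e = 2.0 * (1 - n / n_iter)                   # decreases linearly from 2 to 0
        for i in range(n_whales):
            if rng.random() < 0.5:                   # shrinking encirclement
                C1 = e * (2 * rng.random() - 1)      # C1 = 2eu - e
                C2 = 2 * rng.random()                # C2 = 2u
                E = np.abs(C2 * best - X[i])
                X[i] = best - C1 * E
            else:                                    # spiral bubble-net move
                l = rng.uniform(-1, 1)
                E = np.abs(best - X[i])
                X[i] = E * np.exp(s * l) * np.cos(2 * np.pi * l) + best
        vals = np.array([f(x) for x in X])
        if vals.min() < best_val:
            best_val = float(vals.min())
            best = X[vals.argmin()].copy()
    return best, best_val

_, bv = woa_minimize(lambda x: float(np.sum(x ** 2)))
print(bv)
```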

Lion Swarm Optimization
Liu et al. (2018) proposed the lion swarm optimization (LSO) based on the hunting behavior of a lion swarm in nature. There is a strict social hierarchy within a lion swarm. The first echelon is the king lion, called the leader, who is responsible for assigning tasks to the other lions, distributing food and accepting status challenges. The second echelon is the lioness, called a predator, who is responsible for hunting, including searching for, tracking, trapping and attacking prey. In addition, the predator is the direct communication link between the leader and the other lions and is responsible for passing on instructions and feedback. The lion cubs, called followers, are at the bottom of the swarm, and their main job is to learn how to hunt from a predator. Once in adulthood, followers are driven out of the group and trained to challenge the leader. Assume a lion swarm has N lions, where the ratio of leaders is less than 0.5; the positions of the lions in the different echelons are expressed as follows.
(a) The position of the leader is updated near the food:

p_i^{k+1} = g^k (1 + γ ||l_i^k − g^k||)

(b) A predator often moves with the help of another lioness:

p_i^{k+1} = (l_i^k + l_c^k) / 2 × (1 + α_f γ)

(c) The positions of the cubs are determined by the leader and the predator:

p_i^{k+1} = (g^k + l_i^k) / 2 × (1 + α_c γ), if q ≤ 1/3
p_i^{k+1} = (l_m^k + l_i^k) / 2 × (1 + α_c γ), if 1/3 < q ≤ 2/3
p_i^{k+1} = (ĝ^k + l_i^k) / 2 × (1 + α_c γ), otherwise

where p_i^{k+1} represents the position of the i-th lion at the (k + 1)-th iteration, g^k and l_i^k represent the optimal position of the swarm and the historical best position of the i-th lion in the k-th iteration, respectively, γ and q are random numbers in [0, 1], l_c^k and l_m^k represent the historical best positions of a cooperating predator and of the mother lioness at the k-th iteration, respectively, ĝ^k denotes the position toward which a driven-out follower moves, and α_f and α_c are disturbance factors defining the movement ranges of predators and followers.
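The three-echelon hierarchy can be mimicked with a deliberately simplified sketch; the leader ratio, the shrinking disturbance schedule and the pairing rules below are assumptions for illustration, not the exact LSO equations.

```python
import numpy as np

rng = np.random.default_rng(3)

def lso_minimize(f, dim=3, n_lions=30, n_iter=300, leader_ratio=0.2):
    """Simplified LSO sketch: leaders search near the best position,
    predators (lionesses) move cooperatively in pairs, and cubs follow
    either the leader or a lioness."""
    n_lead = max(1, int(leader_ratio * n_lions))          # leader ratio < 0.5
    X = rng.uniform(-5, 5, (n_lions, dim))
    vals = np.array([f(x) for x in X])
    g_best = X[vals.argmin()].copy()
    g_val = float(vals.min())
    for k in range(n_iter):
        step = 0.1 * (1 - k / n_iter)                     # shrinking disturbance
        for i in range(n_lions):
            if i < n_lead:                                # leader: local search near best
                X[i] = g_best * (1 + step * rng.standard_normal(dim))
            elif i < n_lions // 2:                        # predator: pair with a lioness
                j = rng.integers(n_lead, n_lions // 2)
                X[i] = (X[i] + X[j]) / 2 * (1 + step * rng.standard_normal(dim))
            else:                                         # cub: follow leader or predator
                guide = g_best if rng.random() < 0.5 else X[rng.integers(0, n_lions // 2)]
                X[i] = (X[i] + guide) / 2 * (1 + step * rng.standard_normal(dim))
        vals = np.array([f(x) for x in X])
        if vals.min() < g_val:
            g_val = float(vals.min())
            g_best = X[vals.argmin()].copy()
    return g_best, g_val

_, lso_val = lso_minimize(lambda x: float(np.sum(x ** 2)))
print(lso_val)
```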

Seagull Optimization Algorithm
Dhiman and Kumar (2019) proposed a new algorithm, called the seagull optimization algorithm (SOA), based on the migration and attacking behaviors of seagulls (Fig. 4). Seagulls rely on unique intelligence to catch prey, such as imitating the sound of rain to lure fish to the surface (Dhiman and Kumar 2019). As seasonal migrants, seagulls need to obtain food to replenish energy during the migration to their destination (Avise 2017). However, the migrating population is very large, so it is important that the seagulls avoid colliding with each other. This collision-avoiding movement behavior can be expressed as:

x_s(t) = C × N_s(t), C = 2 − 2t / m_iter

where x_s and N_s represent the updated and current positions of a seagull, respectively, t represents the iteration time, m_iter indicates the maximum number of iterations, and C is a control factor of the movement that decreases linearly from 2 to 0. To obtain good food, seagulls need to constantly adjust their positions to keep moving toward the best individual. This behavior can be described as:

D_s(t) = F × (N_bs(t) − N_s(t)), A_s(t) = |x_s(t) + D_s(t)|

where N_bs(t) represents the current best position of the seagulls, D_s indicates the direction from the current seagull toward the best seagull, A_s represents the distance between the updated seagull and the best seagull, and F is a balance factor that can be estimated as:

F = 2 C^2 h

where h changes randomly in [0, 1]. During the attack, the seagulls maintain a spiral motion, changing angle and speed through their wings and weight. This attack pattern can be written in the x, y, z planes, thus:

x' = r cos(K), y' = r sin(K), z' = r K, r = c_1 e^{K c_2}

where r indicates the radius of each turn of the spiral, K represents a random number in the range [0, 2π], c_1 and c_2 represent constants describing the shape of the spiral, and e is the base of the natural logarithm. Therefore, the best position of the seagulls is calculated as:

N_s(t) = A_s(t) × x' × y' × z' + N_bs(t)
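The migration and attack phases can be sketched as follows; the spiral constants c_1 = 1 and c_2 = 0.1, the greedy best-position memory and the sphere test function are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def soa_minimize(f, dim=3, n_gulls=30, m_iter=300, c1=1.0, c2=0.1):
    """Simplified SOA sketch: migration (collision avoidance + movement
    toward the best seagull) followed by a spiral attack around it."""
    P = rng.uniform(-5, 5, (n_gulls, dim))
    vals = np.array([f(p) for p in P])
    best = P[vals.argmin()].copy()
    best_val = float(vals.min())
    for t in range(m_iter):
        C = 2.0 * (1 - t / m_iter)               # control factor, 2 -> 0
        for i in range(n_gulls):
            F = 2 * C ** 2 * rng.random()        # balance factor F = 2 C^2 h
            x_s = C * P[i]                       # collision-avoided position
            D_s = F * (best - P[i])              # motion toward the best seagull
            A_s = np.abs(x_s + D_s)              # distance to the best seagull
            K = rng.uniform(0, 2 * np.pi)
            r = c1 * np.exp(K * c2)              # spiral radius
            P[i] = A_s * (r * np.cos(K)) * (r * np.sin(K)) * (r * K) + best
        vals = np.array([f(p) for p in P])
        if vals.min() < best_val:
            best_val = float(vals.min())
            best = P[vals.argmin()].copy()
    return best, best_val

_, soa_val = soa_minimize(lambda x: float(np.sum(x ** 2)))
print(soa_val)
```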

Sparrow Search Algorithm

The sparrow search algorithm (SSA) was proposed based on the foraging and anti-predation behaviors of sparrow swarms (Xue et al., 2020). Sparrows are gregarious birds with powerful memories that help them better find food. In the SSA, the sparrows are mainly divided into two types: the producers are responsible for searching for high-energy food, and the food of the scroungers comes from the producers. Interestingly, the identities of producers and scroungers are flexibly interchangeable, but the ratio of producers to scroungers is fixed in the sparrow swarm (Barta et al., 2004; Xue et al., 2020). This strategy helps both producers and scroungers find higher-energy food (Liker and Barta 2002). The natural curiosity of sparrows helps the producers and the scroungers evade attackers: when one or more individuals spot an attacker and sing, the entire swarm flies away (Pulliam 1973).
Assume there are n sparrows in a J-dimensional search space with a warning (safety) threshold W_s. In the SSA, the producers are responsible not only for finding food but also for guiding the scroungers; therefore, the producers search a wider area for energy-dense food, and their positions are updated as:

S_{i,j}^{k+1} = S_{i,j}^k × exp(−i / (a × iter_max)), if R_2 < W_s
S_{i,j}^{k+1} = S_{i,j}^k + Q L, if R_2 ≥ W_s

where S_{i,j}^{k+1} represents the position of the i-th producer in the j-th dimension at the (k + 1)-th iteration, Q represents a random number that follows a normal distribution, L represents a (1 × d) matrix in which each element is 1 and whose number of columns d equals the dimension of J, a and R_2 are random numbers in (0, 1], and iter_max indicates the maximum number of iterations. When R_2 > W_s, individuals have detected attackers, and the producers and the scroungers quickly fly to safe places; otherwise, the producers continue to search for food. The positions of the scroungers are related to those of the producers. The scroungers compete with the producers for higher-energy food and update their positions according to:

S_{i,j}^{k+1} = Q × exp((S_worst^k − S_{i,j}^k) / i^2), if i > n/2
S_{i,j}^{k+1} = S_b^{k+1} + |S_{i,j}^k − S_b^{k+1}| A^+ L, otherwise

where S_b^{k+1} and S_worst^k represent the current best position of the producers and the global worst position, respectively, A represents a (1 × d) matrix in which each element is randomly assigned 1 or −1, and A^+ = A^T (A A^T)^{−1}. If i > n/2, the i-th scrounger is most likely a starving sparrow. As soon as one or more individuals spot attackers, the sparrows move to safety according to:

S_{i,j}^{k+1} = S_best^k + β |S_{i,j}^k − S_best^k|, if f_i > f_b
S_{i,j}^{k+1} = S_{i,j}^k + κ (|S_{i,j}^k − S_worst^k| / (f_i − f_w + ε)), if f_i = f_b

where S_best^k represents the current global best position at iteration k, β is a step-size control factor drawn from a normal distribution with mean 0 and variance 1, κ represents a random number in [−1, 1], f_i, f_b and f_w are the fitness values of the present sparrow and of the current global best and worst sparrows, respectively, and ε is a small constant that avoids division by zero when f_i = f_w.
To put it simply, f_i > f_b indicates that a sparrow is at the edge of the group and highly vulnerable to attack, while f_i = f_b indicates that sparrows in the center of the group are also aware of the attacker and begin to approach the others to reduce their risk.
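A simplified sketch of the producer, scrounger and danger-aware updates is given below; the producer ratio, warning threshold, the elitist best-position memory and several update details are assumptions for illustration rather than the exact SSA formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

def ssa_minimize(f, dim=3, n=30, iter_max=300, pd=0.2, sd=0.1, ws=0.8):
    """Simplified SSA sketch: producers explore widely, scroungers follow
    the best producer, and a few danger-aware sparrows jump toward the best."""
    X = rng.uniform(-5, 5, (n, dim))
    vals = np.array([f(x) for x in X])
    n_prod = max(1, int(pd * n))                          # producer ratio (assumed)
    g_best = X[vals.argmin()].copy()
    g_val = float(vals.min())
    for k in range(iter_max):
        order = vals.argsort()                            # best sparrows become producers
        X, vals = X[order], vals[order]
        worst = X[-1].copy()
        R2 = rng.random()                                 # alarm value
        for i in range(n_prod):                           # producers
            if R2 < ws:                                   # safe: wide search
                X[i] = X[i] * np.exp(-(i + 1) / (rng.random() * iter_max + 1e-9))
            else:                                         # attacker detected: fly away
                X[i] = X[i] + rng.standard_normal(dim)
        for i in range(n_prod, n):                        # scroungers
            if i > n // 2:                                # starving: search elsewhere
                X[i] = rng.standard_normal(dim) * np.exp((worst - X[i]) / (i + 1) ** 2)
            else:                                         # follow the best producer
                X[i] = X[0] + np.abs(X[i] - X[0]) * rng.choice([-1, 1], dim) / dim
        for i in rng.choice(n, max(1, int(sd * n)), replace=False):  # danger-aware jump
            X[i] = g_best + rng.standard_normal(dim) * np.abs(X[i] - g_best)
        vals = np.array([f(x) for x in X])
        if vals.min() < g_val:
            g_val = float(vals.min())
            g_best = X[vals.argmin()].copy()
    return g_best, g_val

_, ssa_val = ssa_minimize(lambda x: float(np.sum(x ** 2)))
print(ssa_val)
```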

Dataset and Preparation
The Sungun mine is one of the largest open-pit mines in Iran (Fig. 5). Investigations have shown that the maximum BB in this mine was 10 m. In total, 234 blasting operations recorded by Khandelwal and Monjezi (2013) in the Sungun mine were used as the dataset in this study. Before a blasting operation, controllable blasting parameters can be actively set and recorded by blasting engineers. Therefore, B, hole length (HL), ST, S, PF and special drilling (SD) were used as input parameters to predict BB. Figure 6 shows details of the six input parameters in boxplots, and the correlations between the different parameters and BB are shown in Figure 7. The dataset was divided randomly into two groups: the training set (70%) was used to construct the prediction models with a certain precision, and the testing set (30%) was used to evaluate their prediction performance.

Performance Indicators
In order to assess accurately the prediction performance of the ELM and the six novel ELM-SIO models, the root mean square error (RMSE), Pearson correlation coefficient (R), determination coefficient (R^2), mean absolute error (MAE), variance accounted for (VAF) and sum of square error (SSE) were used to evaluate these models in the training and testing phases. This was done not only to verify the optimization effect of the swarm intelligence algorithms but also to select the best model for application in practical engineering. The six performance indicators are defined as follows (Hasanipanah et al., 2016; Zhou et al., 2020a, b, 2021a; Jahed et al., 2021; Li et al., 2021a, b, c; Xie et al., 2021a, b):

RMSE = sqrt((1/N) Σ_{i=1}^{N} (y_i − t_i)^2)

R = Σ_{i=1}^{N} (y_i − ȳ)(t_i − t̄) / sqrt(Σ_{i=1}^{N} (y_i − ȳ)^2 Σ_{i=1}^{N} (t_i − t̄)^2)

R^2 = 1 − Σ_{i=1}^{N} (y_i − t_i)^2 / Σ_{i=1}^{N} (y_i − ȳ)^2

VAF = (1 − var(y_i − t_i) / var(y_i)) × 100%

MAE = (1/N) Σ_{i=1}^{N} |y_i − t_i|

SSE = Σ_{i=1}^{N} (y_i − t_i)^2

where N represents the number of samples, y_i and ȳ indicate the observed and mean observed BB, respectively, and t_i and t̄ indicate the predicted and mean predicted BB, respectively.
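The six indicators can be computed directly from paired observed/predicted vectors; the following helper is a straightforward sketch of the standard definitions, with made-up toy values for the usage check.

```python
import numpy as np

def evaluate(y, t):
    """Compute the six performance indicators.
    y: observed BB values, t: predicted BB values."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    err = y - t
    rmse = float(np.sqrt(np.mean(err ** 2)))
    r = float(np.corrcoef(y, t)[0, 1])                     # Pearson correlation
    r2 = 1 - float(np.sum(err ** 2) / np.sum((y - y.mean()) ** 2))
    vaf = float((1 - np.var(err) / np.var(y)) * 100)       # variance accounted for, %
    mae = float(np.mean(np.abs(err)))
    sse = float(np.sum(err ** 2))
    return {"RMSE": rmse, "R": r, "R2": r2, "VAF": vaf, "MAE": mae, "SSE": sse}

obs = [1.0, 2.0, 3.0, 4.0]
pred = [1.1, 1.9, 3.2, 3.8]
print(evaluate(obs, pred))
```

Note that R^2 as defined here (one minus the ratio of residual to total sum of squares) can differ from the square of the Pearson R when the predictions are biased, which is why both are reported.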

RESULTS AND DISCUSSION
In total, seven models were considered for BB prediction. Figure 8 depicts the entire prediction process, including input data, model development, and performance evaluation. In addition to stability analysis, model development and performance evaluation were emphasized.

ELM Model
Six swarm intelligence models were developed based on ELM, so it was first necessary to determine the optimal ELM structure. ELM is a special neural network with a single hidden layer, and the number of neurons in the hidden layer determines the prediction performance. In this study, the RMSE index was used to evaluate the performance of ELM in the training and testing phases. The initial number of neurons was 10, and each subsequent experiment increased the count by 10 up to 150. Table 1 shows the prediction performance and corresponding RMSE values of the considered model with different numbers of neurons in the 15 experiments. As shown in this table, the lowest RMSE occurred with 60 neurons in the hidden layer in both the training and testing phases. Therefore, an ELM model with 60 neurons in the single-layer feed-forward neural network (SLFN) architecture was developed (Fig. 9).
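The neuron-count search can be reproduced in outline as follows; the synthetic stand-in dataset and the tanh ELM are assumptions, so the best count found here need not be 60 as in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def elm_rmse(X_tr, y_tr, X_te, y_te, n_hidden):
    """Train a tanh ELM with n_hidden neurons; return (train, test) RMSE."""
    W = rng.standard_normal((X_tr.shape[1], n_hidden))
    B = rng.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(X_tr @ W + B)) @ y_tr
    rmse = lambda X, y: float(np.sqrt(np.mean((np.tanh(X @ W + B) @ beta - y) ** 2)))
    return rmse(X_tr, y_tr), rmse(X_te, y_te)

# synthetic stand-in for the 234-record blasting dataset (6 inputs, 1 output)
X = rng.random((234, 6))
y = (X @ rng.random(6) + 0.1 * rng.standard_normal(234)).reshape(-1, 1)
split = int(0.7 * len(X))                       # 70/30 train/test split
results = {}
for n in range(10, 151, 10):                    # 15 experiments: 10, 20, ..., 150
    results[n] = elm_rmse(X[:split], y[:split], X[split:], y[split:], n)
best = min(results, key=lambda n: results[n][1])  # lowest testing RMSE
print(best, results[best])
```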

ELM-SIO Models
Six ELM-SIO models were developed with the same architecture as the ELM (i.e., a hidden layer with 60 neurons in the SLFN). Thus, the optimization problem was to obtain the best input weights and bias values. SIO is a relatively new idea, proposed by imitating the behavior of insects and animals (Saghatforoush et al., 2016). Different from other methods, SIO only needs to train the most appropriate population to solve an optimization problem. For example, Shariati et al. (2020) considered 75 wolves in an ELM-GWO model to predict the compressive strength of concrete with partial replacements for cement. For a similar purpose, the six ELM-SIO models (ELM-PSO, ELM-FOA, ELM-WOA, ELM-LSO, ELM-SOA, ELM-SSA) were tuned here to obtain the optimal population size for a given number of iterations. The RMSE index was again used to evaluate model performance, and the calculation time of each iteration was recorded. The fitness values for different population sizes are shown in Figure 10, which illustrates that the fitness value was no longer affected by the population size after 400 iterations in the ELM-PSO model; the corresponding numbers were 500 in FOA (Fig. 10b), 300 in WOA (Fig. 10c), 500 in LSO (Fig. 10d), 400 in SOA (Fig. 10e) and 500 in SSA (Fig. 10f). Figure 11 shows the lowest RMSE and the corresponding iteration time for each ELM-SIO model at different population sizes. As can be seen, 60 birds were considered in the ELM-PSO model, 50 fruit flies in FOA (Fig. 11b), 50 whales in WOA (Fig. 11c), 50 lions in LSO (Fig. 11d), 70 seagulls in SOA (Fig. 11e) and 40 sparrows in SSA (Fig. 11f).
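The population-size tuning can be organized as a generic harness; the random-search stand-in optimizer and the candidate sizes below are assumptions used only to illustrate the procedure, with any of the six SIO algorithms substitutable for the stand-in.

```python
import numpy as np

rng = np.random.default_rng(6)

def tune_population(optimizer, fitness, sizes, n_iter=500):
    """Run an SIO optimizer at each candidate population size and keep
    the size that reaches the lowest final fitness (e.g., RMSE)."""
    scores = {m: optimizer(fitness, n_pop=m, n_iter=n_iter) for m in sizes}
    best = min(scores, key=scores.get)
    return best, scores

def random_search(fitness, n_pop, n_iter):
    """Stand-in optimizer (in place of PSO/FOA/WOA/LSO/SOA/SSA):
    sample n_pop random points per iteration, keep the best value seen."""
    best = np.inf
    for _ in range(n_iter):
        best = min(best, min(fitness(x) for x in rng.uniform(-5, 5, (n_pop, 3))))
    return best

best_size, scores = tune_population(random_search,
                                    lambda x: float(np.sum(x ** 2)),
                                    sizes=range(10, 71, 10))
print(best_size, scores[best_size])
```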

Comparison of Results
As discussed earlier, the numbers of neurons and the population sizes of the six ELM-SIO models were tuned, and these hybrid models were used to predict BB. The prediction performances of the six models were compared by plotting the predicted values against the observed values in regression diagrams, and the six evaluation indices (RMSE, R, R^2, VAF, MAE, SSE) were calculated. The regression diagrams of the six ELM-SIO models in the training phase are shown in Figure 12. The vertical and horizontal axes represent the predicted and observed BB values, respectively, and the perfect fitting line separates the predicted values from the observed values. If the predicted value is greater than the observed value, the point falls above the perfect fitting line; on the contrary, it falls below. Only when the predicted value is equal to the observed value does the point fall on the line, and a model with more points on the line has higher predictive performance. At the same time, a boundary of 10% off the perfect fitting line was set to cover more points when comparing performance. As can be seen, the points distributed on the perfect fitting line were fewest in the ELM-FOA model, and some of its predicted values even exceeded the 10% range. The prediction performances of the other five models were obviously better than that of ELM-FOA, but it was difficult to distinguish the optimal model by the naked eye. Moreover, good performance in the training phase does not guarantee good generalization (Shariati et al., 2020); a model that performs well in training may perform poorly in the testing phase. To avoid misevaluation, a boundary of 30% off the perfect fitting line was added to the original 10%. Figure 13 illustrates the regression diagrams of the six ELM-SIO models in the testing phase. Compared with the training phase, the predictive performance of each model decreased in the testing phase. As shown in this figure, all models except ELM-LSO had prediction points outside the 30% boundary lines.
Table 3 records the performance evaluation indicators per model in the testing phase. As can be observed in this table, the RMSE (0.2441), MAE (0.1669) and SSE (4.1710) of the ELM-LSO model were higher than those in the training phase, but still the lowest among the six models in the testing phase. The R (0.9949), R^2 (0.9891) and VAF (98.9806%) of ELM-LSO were the highest. As mentioned earlier, the ELM-SOA model performed second only to ELM-LSO, while the ELM-FOA model again had the worst predictive performance in the testing phase. Furthermore, the ranking scores of the six ELM-SIO models were obtained to offer more intuition about their performance indicators, as shown in Figure 14. In terms of ranking scores, the ELM-LSO model was the best in both the training (36) and the testing (36) phases. To further verify model performance, Figure 15 depicts the predicted curve versus the observed curve per model in the testing phase for BB prediction. Overall, the predicted curve of each model was consistent with the observed curve. However, some local details are very important for evaluating the predictive performance of each model. As shown in this figure, the ELM-PSO model had obvious prediction errors for sample numbers 6, 37 and 63; the ELM-FOA model for sample numbers 33-35, 42, 46-49 and 65-66; the ELM-WOA model for sample numbers 18, 37 and 63; the ELM-SOA model for sample numbers 9 and 49; and the ELM-SSA model for sample numbers 9, 15, 42 and 63. In contrast, the ELM-LSO model showed no obvious errors in these details. Therefore, the results indicate that ELM-LSO was the best model for BB prediction. Figure 16 clearly shows the difference between the ELM model and the ELM-LSO model. In this figure, the abscissa is the sample number, and the ordinate is the ratio of the predicted value to the observed value.
A ratio of 1 indicates that the predicted value is equal to the observed value, and the closer it is to 1, the better the prediction performance is. Figure 16a and c shows the prediction results in the training phase. The ELM-LSO model significantly improved the prediction performance of the ELM model and made the predicted value closer to the observed value. Also, in the testing phase, there were more targets close to 1 in the ELM-LSO model. While this model is not perfect, the outliers were not very far away.
The research results show not only that the performance of the ELM model can be improved by using SIO algorithms for BB prediction, but also that ELM-LSO was the best model. A comparison between the six hybrid models proposed in this study and previous studies is shown in Table 4. As can be seen in this table, the performance of the ELM-LSO model was the best among all models, especially considering that it is based on the same dataset proposed and used by Khandelwal and Monjezi (2013), Zhou et al. (2021c) and Dai et al. (2022).

Relative Importance of Influence Variables
Sensitivity analysis is of great help in judging the influence of the input variables on BB. According to the comparison results of the different SIO models, the variable importance was extracted from the ELM-LSO model in this study. The variable importance test mechanism is reflected in the amount of impurity reduction after randomly changing the values of the variables (Qi et al., 2018). Therefore, a global sensitivity analysis method called PAWN, introduced by Pianosi and Wagener (2015, 2018), was used in this study. Different from variance-based sensitivity analysis methods, PAWN computes the importance score (S) as:

S_i = stat | F_y(y) − F_{y|x_i}(y) |

where stat is a statistic (e.g., maximum, mean or median; the maximum was selected in this study), and F_y(y) and F_{y|x_i}(y) are the unconditional and conditional cumulative distribution functions (CDFs) of the output variable y, respectively. The conditional CDFs are evaluated over ũ equally spaced subintervals of each input variable x_i, where ũ is usually set to the default value of 10 (Xue et al., 2021). Figure 17 shows the results of the variable importance scores. B and ST were the most sensitive variables for BB, with the highest importance score of 0.8717. The variable with the lowest score was SD (0.4817). However, there are no definitive studies showing that SD is necessarily the least important variable, especially given the amount of data covered in this study. In contrast, Faradonbeh et al. (2016) reported that PF and B are the most influential variables on BB, while other research suggested that HL and ST have greater effects on BB (Zhou et al., 2021a, b, c; Dai et al., 2022). Based on the data of this study, the order of importance of the influence variables on BB is B and ST → HL and S → PF → SD.
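A hedged sketch of the PAWN index (maximum Kolmogorov-Smirnov distance between the conditional and unconditional output CDFs) might look like this; the toy model, sample size and empirical-CDF construction are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def pawn_scores(model, X, n_u=10):
    """For each input: split its range into n_u subintervals, compare the
    conditional output CDF in each subinterval against the unconditional
    CDF, and take the maximum Kolmogorov-Smirnov distance."""
    y = model(X)
    y_sorted = np.sort(y)
    y_grid = y_sorted
    F_y = np.searchsorted(y_sorted, y_grid, side="right") / len(y)  # unconditional CDF
    scores = []
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0, 1, n_u + 1))
        ks_max = 0.0
        for a, b in zip(edges[:-1], edges[1:]):
            mask = (X[:, i] >= a) & (X[:, i] <= b)
            if mask.sum() < 2:
                continue
            F_c = np.searchsorted(np.sort(y[mask]), y_grid, side="right") / mask.sum()
            ks_max = max(ks_max, float(np.max(np.abs(F_c - F_y))))  # KS statistic
        scores.append(ks_max)
    return np.array(scores)

# toy model: output depends strongly on x0 and not at all on x1
X = rng.random((5000, 2))
s = pawn_scores(lambda X: 10 * X[:, 0] + 0.0 * X[:, 1], X)
print(s)  # the first score should be much larger than the second
```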

CONCLUSIONS AND SUMMARY
Predicting BB is an interesting and productive exercise. In this study, 234 cases of BB were involved, with six input variables (hole length, spacing, burden, stemming, powder factor and specific drilling) and one output (BB). Taking advantage of the ELM algorithm combined with SIO, six hybrid models were developed to predict BB. It is concluded that ELM-LSO was the best of the six ELM-SIO models for BB prediction. It is worth noting that the burden and the stemming (0.8717) were the most influential input variables, followed by hole length and spacing (0.8205), powder factor (0.5575) and specific drilling (0.4871). This conclusion is supported only by the data in this study and can be used as a reference, but it is not immutable. Moreover, more informative parameters such as rock quality designation (RQD), geological strength index (GSI) and weathering index (WI) could be taken into account in future BB prediction tasks, even though these parameters are difficult to obtain in actual blasting investigations. Meanwhile, advanced and effective optimization algorithms need to be developed to improve BB prediction accuracy as much as possible, which has always been the difficulty of this kind of work.

ACKNOWLEDGMENTS
This research is partially supported by the National Natural Science Foundation Project of China (42177164) and the Innovation-Driven Project of Central South University (2020CX040). The first author was funded by China Scholarship Council (Grant No. 202106370038).

FUNDING
Open Access funding enabled and organized by CAUL and its Member Institutions.

Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

OPEN ACCESS
This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecom mons.org/licenses/by/4.0/.