A stacked generalisation methodology for estimating the uniaxial compressive strength of rocks

Uniaxial compressive strength (UCS) is an essential strength parameter in the mining, civil and geomechanical industries, and accurately estimating it for rocks is a matter of great practical concern. Consequently, many studies have sought to estimate the UCS of rocks directly or indirectly. This study introduces a novel stacked generalisation methodology for estimating the UCS of rocks in geomechanics. A generalised regression neural network (GRNN), a radial basis function neural network (RBFNN), and random forest regression (RF) were used as the base learners, and the multivariate adaptive regression spline (MARS) served as the meta-learner of the proposed stacking method. The proposed 3-Base learner stack model outperformed the single AI methods (GRNN, RBFNN and RF) on the same datasets when evaluated with performance metrics including the Nash–Sutcliffe Efficiency Index (NSEI), Root Mean Squared Error (RMSE), Performance Index (PI), Scatter Index (SI) and Bayesian Information Criterion (BIC). The proposed 3-Base learner stack model recorded the lowest RMSE, PI and SI scores of 1.02775, 0.50691 and 0.00788, respectively, for the testing datasets. It also produced the highest NSEI value of 0.99969 and the lowest BIC value of 16.456 compared with the competing models (GRNN, RBFNN and RF), reaffirming its capability in forecasting the UCS of rocks in geomechanical engineering.


Introduction
Uniaxial Compressive Strength (UCS) is central to rock mass characterisation and classification in geomechanical, civil and mining engineering. It serves as the main input parameter in evaluating the strength of intact rocks for slope stability analysis, underground tunnelling, blast prediction assessment and related mining and geomechanical works [14,53]. Given this importance, it has become necessary to accurately estimate UCS either from laboratory tests [8,29,47] or from empirical techniques [4,40,54]. Laboratory testing, however, requires that the geometry of the core samples remain intact, which is often difficult to achieve. The test procedure is also expensive to execute because it demands intensive labour and careful sample preparation [2,56,60]. Empirical techniques, in turn, cannot accurately predict the strength of all rock types, as rock characteristics can differ significantly [38,64]. This often leads to over- or underestimation of the strength and, consequently, to wrong judgments in real-life engineering applications.
In recent times, artificial intelligence (AI) algorithms such as artificial neural networks (ANNs) and recurrent neural networks (RNNs) have been extensively employed in various fields for solving pattern recognition and function approximation problems [12,44,50]. Their competency as powerful prediction tools in strength research has been well acknowledged [7,22,28,31]. Parashar et al. [48] reported that ANNs are effective tools for predicting the UCS of rocks because of their ability to learn the complex relationships between input and output data. In their study, they employed ANNs to predict the UCS of rocks from input parameters such as mineralogy, texture and petrography. In addition, Sahin and Kesimal utilised ANNs to predict UCS values of various rock types, using Schmidt hammer rebound number, point load strength, density and porosity as input variables, and concluded that ANNs can accurately capture the correlation between UCS and the inputs. Similarly, Li et al. [36] argued that RNNs are well suited to predicting the UCS of rocks since they can handle sequential data. In their work, they employed an RNN-based model, the Long Short-Term Memory (LSTM) network, to predict the UCS of rocks using density, P-wave velocity and porosity as input variables. Although both ANNs and RNNs have their strengths and limitations in strength prediction, ANNs are good at learning complex, non-linear relationships in data but may struggle with sequential data, whereas RNNs are well suited to sequential data but may be limited by the vanishing gradient problem in recurrent networks. Moreover, despite the potent advantages of machine learning algorithms, it has been argued that no single AI method can claim superiority for resolving all problems involving earth data [46]. This reaffirms the no-free-lunch theorem in the AI domain [13,19]. As a result, it is necessary to evaluate the predictive accuracy of different machine learning algorithms for strength evaluation. Given these considerations, a plethora of hybrid models have been employed to overcome the weaknesses of standalone AI models and achieve enhanced strength prediction [6,41,42,55,57,61]. For instance, traditional ANN techniques suffer from setbacks such as slow learning speed, convergence to local minima caused by gradient optimisers, and the manual effort required to set adjustable hyperparameters [24]. It is noteworthy that the hybrid models proposed in strength-related studies [6,41,42,55,57,61] concentrate either on dimensionality reduction techniques as a pre-processing step or on advanced nature-inspired optimisation algorithms to tune the hyperparameters of standalone AI methods. Although the prediction performance of existing hybrid AI models outweighs that of standalone models, one of the recent hybrid methods making waves in other scientific fields is the stacked generalisation methodology, developed by Wolpert [59].
In the stacked generalisation methodology, an AI method uses the outcomes of base learners as inputs to a meta-learner to obtain enhanced prediction results. In the literature, Wang et al. [56] developed a stacking method for forecasting the productivity of cutter suction dredgers based on data mining. Their study revealed that the proposed stacking method gave the best productivity predictions compared with the base learners (gradient-boosting decision tree (GBDT), Lasso, elastic net (ENet), Light Gradient Boosting Machine (LightGBM) and extreme gradient boosting (XGBoost)). Preethaa et al. [49] used a stacked generalisation model to enhance the estimation of earthquake-induced soil liquefaction using data from the Korean Geomechanical Information database system in South Korea. Their proposed model produced the best performance compared with the applied base learners (linear regressor, support vector regression, and multilayer perceptron neural network). In addition, Koopialipoor et al. [32] developed stacked generalisation methods for the prediction of rock deformation and found that the proposed stacking method had the best predictive capability compared with competing AI models (k-nearest neighbours (KNN), multilayer perceptron neural network, and random forest). Similarly, Kadingdi et al. [30] employed a stacked methodology to enhance the prediction of ground vibration from blasting in an open pit mine and reported that it outperformed the base learners (i.e. Gaussian process, gradient boosting machine, and random forest). Stacked generalisation has also been employed in petroleum reservoir characterisation and log interpretation [5,25] and in photovoltaic power prediction in renewable energy engineering [21,46]. To this end, the stacked generalisation method can be considered a viable estimation tool owing to its versatility and its opening of new frontiers in hybrid model development.
Given the enumerated strengths, the current study employed a stacked generalisation method to forecast the UCS of rocks using data from the Banket Formation in Tarkwa. In the proposed stacked method, Random Forest (RF), Radial Basis Function Neural Network (RBFNN), and Generalised Regression Neural Network (GRNN) were applied as the base learners, while the Multivariate Adaptive Regression Spline (MARS) served as the meta-learner. The RBFNN and RF were selected for their effectiveness in handling both regression and classification problems [20,45]. GRNN was considered for its ability to handle linear and non-linear regression tasks accurately [37,51]. The MARS model was chosen because it is a distribution-free, non-parametric regression approach that constructs various linear regression models across a range of predictor values and can handle regression problems efficiently [18].
Moreover, it is noteworthy that the use of AI methods as meta-learners has been gaining recognition in scientific research in recent times [10,62]. Furthermore, there is no widely accepted guideline for the appropriate number of base learners to use in the stacking methodology; researchers have, however, indicated that two or more methods can be used as base learners [36,43]. Ntow et al. [46] and Wolpert [59] reported that the most important consideration in the stacking process is the diversity of the base learners, which guards against over-estimation of the predicted results in the final prediction phase. In this work, such diversity was accounted for in the selection of the base learners employed. Hence, considering three base AI learners was reasonable for the stacking method for UCS estimation.
Given the aforestated benefits of the stacked generalisation method in the literature, it is clear that its prediction capabilities outweigh those of conventional standalone machine learning methods. It is therefore important to explore its potential for predicting the UCS of rocks in geomechanical engineering. Hence, the main contributions of this work to the existing literature are as follows:
i. To investigate the competency of the stacked generalisation method for UCS prediction of rocks using data from the Tarkwaian Formation, Ghana.
ii. To establish that the stacked generalisation method enhances the estimation performance of conventional standalone machine learning methods and could be employed as a dependable computational tool in UCS estimation.
iii. To establish the prediction potency of the stacked generalisation model by comparing it with benchmark methods (RBFNN, GRNN and RF) that are commonly employed in geomechanical engineering for UCS prediction.
iv. To assess the prediction potency of the stacking method by varying the base learners and comparing the results with standalone machine learning models.
Therefore, the proposed stacked generalisation methodology is useful and applicable for predicting the UCS of rocks in geomechanical and allied fields with high prediction accuracy.

Study area
The rock samples needed to achieve the aims of this study were obtained from a mining company located on the Banket Series of the Tarkwaian Formation, Ghana (Fig. 1). This Formation hosts the majority of the palaeo-placer gold deposits in Ghana [17] and comprises four distinct stratigraphic series [33,58], as presented in Table 1. As evident from the geological map (Fig. 1), the study area is underlain by palaeo-placer deposits, which mainly consist of grits, quartzites and conglomerates of low-grade metamorphic facies. Extensive literature on the local and regional geology of the study area can be found in Ibrahim et al. [26] and Wilson and Opuni [58].

Data source
As stated earlier, the samples collected were core samples from palaeo-placer deposits, namely conglomerates and quartzites. The collected samples were transported to the laboratory and prepared following the protocols of the International Society for Rock Mechanics [27]. The prepared samples were then subjected to laboratory tests for Porosity (n), Brazilian Tensile strength (BT), Schmidt Hammer rebound Number (N), Bulk Density (ρ), Point Load Index (Is(50)), and Uniaxial Compressive Strength (UCS). Whenever a sample failed along a pre-existing fracture or other plane of weakness, the associated test result was excluded from the data set, since it could not accurately represent the strength of intact rock. In total, 168 sample events were used for the soft computing analysis. The summary statistics of the data events are given in Table 2.

Methodology
As stated earlier, the current study focuses on developing a stacked generalisation model for the prediction of the UCS of rocks in geomechanical engineering. The following machine learning algorithms were used in the developed methodology: GRNN, RBFNN, RF and MARS. A brief account of each applied algorithm is presented in the subsequent sections.

Random forest
RF was developed as a supervised AI algorithm for solving both regression and classification problems [64]. It is an ensemble method that combines numerous decision tree models to deliver a collection of predictions for the corresponding observations. The technique comprises a collection of tree-structured predictors {g(q, p), p = 1, …}, where the p are independent, identically distributed random vectors and each tree casts a single vote for the most popular outcome at the input q. In the regression setting, several regression trees are grown on bootstrap samples of the training data. The method further introduces randomness into the tree-growing process by randomly selecting a subset of the predictor variables to consider when choosing split points at each node. This procedure prevents the regression trees from becoming highly correlated, since it limits the chance that the same dominant predictors are selected whenever a split is made. To limit estimation errors and increase precision, the individual regression tree estimators are then combined, and the technique forecasts a value as the mean prediction of the individual trees. To employ RF as a regression tool, some hyperparameters must be tuned, including the proportion of observations sampled for each tree, the number of trees, and the number of predictors randomly selected at each node. These adjustable parameters are tuned through cross-validation. In general, there is little need to limit the number of trees; rather, it is recommended to use a very large number so that the estimation error converges to a stable minimum.
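The workflow above can be sketched with scikit-learn (assumed available). The data here are synthetic stand-ins for the rock-index measurements, and the hyperparameter values are illustrative, not the ones tuned in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the rock-index data: 168 samples, 5 predictors.
rng = np.random.default_rng(42)
X = rng.uniform(0.0, 10.0, size=(168, 5))
y = 20.0 + 8.0 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(0.0, 2.0, 168)

# n_estimators (number of trees) and max_features (predictors tried per split)
# are the adjustable hyperparameters discussed above; a large tree count lets
# the estimation error converge to a stable minimum.
rf = RandomForestRegressor(n_estimators=500, max_features="sqrt",
                           oob_score=True, random_state=0)
rf.fit(X, y)
print(round(rf.oob_score_, 3))  # out-of-bag R^2, an internal validation score
```

The out-of-bag score exploits the bootstrap sampling itself: each tree is evaluated on the observations it never saw, giving a cross-validation-like estimate without a separate split.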

Radial basis function neural network
Broomhead and Lowe [9] introduced the RBFNN as a three-layer network, consisting of an input layer, a hidden layer and an output layer, for regression problems. One distinctive characteristic of the RBFNN is that it requires less computation time, demands little human intervention, and eliminates the possibility of becoming trapped in local minima [52,63]. In the RBFNN framework, the input layer is connected to the hidden layer without weights. Weights are only attached between the hidden neurons, which apply a Gaussian activation function, and the output neuron, which uses a simple linear activation function. The final operation at the output neuron is a plain summation of these weighted terms, which is why the RBFNN framework is considered simple [35], as illustrated in Fig. 2.
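A minimal numerical sketch of this architecture (illustrative, not the study's implementation): the hidden layer applies Gaussian activations to unweighted distances from stored centres, and the single weighted output layer is solved in one linear least-squares step, which is why no gradient descent, and hence no local minimum, is involved. The centre count and width below are arbitrary choices for the demonstration.

```python
import numpy as np

def rbf_hidden(X, centers, sigma):
    # Hidden layer: Gaussian activation of the (unweighted) squared distances
    # between each input and each stored centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(120, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]          # smooth synthetic target

centers = X[rng.choice(len(X), size=15, replace=False)]  # hidden neurons
H = rbf_hidden(X, centers, sigma=0.8)
# Output layer: a plain weighted sum, fitted by linear least squares.
w, *_ = np.linalg.lstsq(H, y, rcond=None)
r2 = float(np.corrcoef(H @ w, y)[0, 1] ** 2)
print(round(r2, 3))
```

Because only the output weights are learned, training reduces to one matrix solve; the width σ and the centre locations are the quantities a practitioner would tune.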

Multivariate adaptive regression spline
Cook et al. [11] stated that MARS is a spline regression technique for characterising relationships that are approximately additive. The technique can select and transform variables automatically and can directly identify their likely interactions. It is built on the piecewise linear basis functions (r − d)+ and (d − r)+, expressed in Eq. (1) as

(r − d)+ = max(0, r − d) and (d − r)+ = max(0, d − r)   (1)

where "+" indicates the positive part and d is a candidate knot location. MARS splits the data into piecewise functions joined at the knot d, producing a non-linear relationship. The two functions are mirror images of each other, and together they generate a collection of basis functions, C, of this form for each input r_j, with a candidate knot at each observed value r_ij of that input (Eq. (2)) [1].
The MARS model structure involves a forward selection stage and a backward pruning stage. In the forward stage, it builds a piecewise functional model by partitioning the predictor space into regions of the estimator variables. This is accomplished by creating knots through the basis functions described above, and the knot-building procedure terminates when a specified threshold is reached. Backward pruning is then applied to eliminate redundant terms when the MARS model overfits the data. The final equation of the MARS model is given in Eq. (3):

f(x) = C_o + Σ_n C_n k_n(x)   (3)
where n indexes the basis functions, k_n(x) represents the nth basis function, C_o is a constant, and C_n is the coefficient of the corresponding k_n(x). The MARS model uses the generalised cross-validation (GCV) criterion to select the best model in the final phase, as presented in Eq. (4):

GCV = [(1/m) Σ_i (y_i − k(x_i))²] / [1 − h(λ)/m]²   (4)
where λ denotes the number of terms in the model, k(x_i) is the model's predicted response, y_i is the ith observed value, m is the size of the data set, and h(λ) is the effective number of parameters in the model. It is noteworthy that h(λ) is generally related to the number of basis functions r and the method's penalty term d, such that h(λ) = r + pd, where p reflects the model's complexity. It has been reported that d typically lies between 2 and 3 [15].
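The hinge-function mechanics of Eqs. (1) and (3) can be illustrated numerically. The knot location and piecewise-linear target below are invented for the demonstration; a real MARS implementation would search over candidate knots in the forward stage rather than being told where the kink is.

```python
import numpy as np

def hinge(r, d):
    # The mirror-image basis pair of Eq. (1): (r - d)+ and (d - r)+
    return np.maximum(0.0, r - d), np.maximum(0.0, d - r)

rng = np.random.default_rng(1)
r = rng.uniform(0.0, 10.0, size=200)
# Piecewise-linear target with a kink at r = 4.
y = np.where(r < 4.0, 2.0 * r, 8.0 + 0.5 * (r - 4.0))

# With one knot at d = 4, the model of Eq. (3),
# f(r) = C0 + C1*(r - d)+ + C2*(d - r)+, reproduces the target exactly.
pos, neg = hinge(r, 4.0)
B = np.column_stack([np.ones_like(r), pos, neg])
C, *_ = np.linalg.lstsq(B, y, rcond=None)
max_err = float(np.abs(B @ C - y).max())
print(max_err < 1e-8)
```

The mirror pair is what lets a single knot bend the fitted line in both directions; summing many such terms yields the flexible yet piecewise-linear surfaces MARS is known for.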

Generalised regression neural network
Mohamed et al. [23] indicated that GRNN is a one-pass learning algorithm that requires no iterative training. It comprises four layers, namely an input layer, a pattern layer, a summation layer and an output layer (Fig. 3) [34]. The neurons are connected in a feed-forward manner. The input neurons receive information from the external environment and pass it to the pattern layer, which calculates the Euclidean distances between each input and the stored pattern vectors. A radial basis activation function is then applied to the obtained Euclidean distances.
The output is distributed to the S- and D-summation neuron layers. The D-summation neuron computes the unweighted sum of the pattern-layer outputs, while the S-summation neuron computes their weighted sum. The final output, Y(x), is generated in the output layer by dividing the S-summation neuron's output by that of the D-summation neuron, as given in Eq. (5):

Y(x) = Σ_i w_i k(x, x_i) / Σ_i k(x, x_i)   (5)

where k(x, x_i) is the radial basis function kernel and w_i is the activation weight for the ith pattern-layer neuron. A Gaussian activation function with width parameter σ, whose kernel k(x, x_i) is defined in Eq. (6), was applied in this study:

k(x, x_i) = exp(−‖x − x_i‖² / (2σ²))   (6)
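A compact numerical sketch of the GRNN computation (illustrative, with invented data): the prediction is a kernel-weighted average of the stored targets, so the only quantity to tune is the width σ of the Gaussian kernel of Eq. (6).

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    # Pattern layer: Gaussian kernel of Eq. (6) on the Euclidean distances.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    # Output layer, Eq. (5): S-summation (kernel-weighted targets)
    # divided by D-summation (plain kernel sum). No iterative training.
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(150, 2))
y = 3.0 * X[:, 0] + X[:, 1] ** 2          # smooth synthetic target
pred = grnn_predict(X, y, X, sigma=0.05)
r2 = float(np.corrcoef(pred, y)[0, 1] ** 2)
print(round(r2, 3))
```

Because every training point is stored as a pattern neuron, fitting is instantaneous, but prediction cost grows with the size of the training set.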

The stack generalisation methodology
The stacking method, normally referred to as a stacked ensemble, is employed to minimise the biases produced by one or more base learners. In this study, GRNN, RF and RBFNN are the base learners and the MARS model is the meta-learner. Each individual AI model (base learner) is used to estimate the UCS, and its output is then employed as an input variable of the meta-learner, yielding three input variables for the meta-learner. The essence of combining base learners in this way is to limit the biases of any one of the base algorithms and obtain more reliable outcomes [16]. The architecture of the developed stacking model for this study is illustrated in Fig. 4.
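The stacking workflow can be sketched with scikit-learn (assumed available). scikit-learn ships no GRNN, RBFNN or MARS, so a k-nearest-neighbours regressor, an RBF-kernel SVR and a linear meta-learner are illustrative stand-ins here, not the paper's exact models; the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 10.0, size=(168, 5))
y = 15.0 + 6.0 * X[:, 0] + 2.0 * X[:, 1] * X[:, 2] + rng.normal(0.0, 3.0, 168)

# The base learners produce cross-validated (out-of-fold) predictions that
# become the meta-learner's three input variables, as described above.
stack = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=5)),
        ("svr", SVR(kernel="rbf", C=100.0)),
    ],
    final_estimator=LinearRegression(),
    cv=5,
)
stack.fit(X, y)
print(round(stack.score(X, y), 3))  # training R^2 of the stacked model
```

Using out-of-fold predictions (`cv=5`) to train the meta-learner is what protects the stack against the base learners simply memorising the training data.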

Data description and normalisation
In this current study, a total of 168 data events were used. The data comprise Brazilian Tensile Strength (BT) in MPa, Porosity (n) in %, Schmidt hammer Rebound Number (N), Point Load Index (Is(50)) in MPa and Bulk Density (ρ) in kg/m³ as the input parameters, with UCS (MPa) as the output parameter. These input parameters were chosen because of their influence on the strength of intact rocks. To develop the UCS estimation models, the data events were first split into training and testing data sets using the widely used hold-out cross-validation criterion [57]. The training set comprises 118 data events, representing about 70% of the entire data, and was employed for training and fitting the models. The optimal learned algorithm was then validated using the remaining 30% (50 data events) as the test data set.
To ensure consistency within the data sets, the data were normalised to a common scale. The core purpose is to improve computational efficiency and guarantee stability during the model development stages. Therefore, the min-max technique (Eq. (7)) of data normalisation was employed in this study, with the data scaled between −1 and 1:

d_scaled = c_i + (d − d_j)(c_j − c_i)/(d_i − d_j)   (7)

where d is the input value, d_i is its maximum value, d_j is its minimum value, and c_i and c_j denote the minimum and maximum of the selected range (here, −1 and 1, respectively).
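The min-max scaling of Eq. (7) amounts to a linear map of each column onto the chosen range; the toy values below are illustrative, not the study's data.

```python
import numpy as np

def min_max_scale(d, d_min, d_max, c_min=-1.0, c_max=1.0):
    # Eq. (7): map [d_min, d_max] linearly onto the chosen range [c_min, c_max].
    return c_min + (d - d_min) * (c_max - c_min) / (d_max - d_min)

# Toy column of, say, UCS values in MPa (illustrative numbers only).
x = np.array([60.0, 90.0, 120.0, 180.0])
lo, hi = x.min(), x.max()
xs = min_max_scale(x, lo, hi)
print(xs.tolist())  # [-1.0, -0.5, 0.0, 1.0]
```

In practice the minimum and maximum should be taken from the training split only and reused to scale the test split, so that no test information leaks into model fitting.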

Performance metrics
To assess the performance of the developed models, four statistical metrics were employed to evaluate their prediction accuracy: the Nash-Sutcliffe Efficiency Index (NSEI), Root Mean Squared Error (RMSE), Performance Index (PI), and Scatter Index (SI). These are mathematically presented in Eqs. (8)-(11). The Bayesian Information Criterion (BIC) (Eq. (12)) was also used as the criterion for selecting the best-performing model.
where p̄ is the average of the predicted values, n is the test data size, p_k are the predicted values, ā is the average of the actual values, a_k are the observed values, and α is the number of variables estimated by each model. Here, the variables denote the number of input parameters used in the development of each model.
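The metrics can be sketched as follows. NSEI, RMSE and SI follow their standard definitions; the BIC form shown is a common regression formulation assumed here (the paper's exact Eq. (12) is not reproduced in the text), and PI is omitted for the same reason.

```python
import numpy as np

def rmse(a, p):
    return float(np.sqrt(np.mean((a - p) ** 2)))

def nsei(a, p):
    # 1 minus residual sum of squares over total sum of squares about the mean;
    # 1 is a perfect fit, values near 0 indicate a weak model.
    return float(1.0 - np.sum((a - p) ** 2) / np.sum((a - a.mean()) ** 2))

def scatter_index(a, p):
    return rmse(a, p) / float(a.mean())

def bic(a, p, n_params):
    # A common regression form of the BIC; assumed, not quoted from the paper.
    n = len(a)
    return float(n * np.log(np.mean((a - p) ** 2)) + n_params * np.log(n))

a = np.array([100.0, 120.0, 90.0, 110.0])   # observed UCS (illustrative)
p = np.array([101.0, 118.0, 92.0, 109.0])   # predicted UCS (illustrative)
print(round(rmse(a, p), 3), round(nsei(a, p), 3))
```

Lower RMSE, SI and BIC indicate a better model, while NSEI is better the closer it is to 1; BIC additionally penalises models that use more parameters.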

Model building
In this study, the base learner models (GRNN, RF and RBFNN) were constructed first. These individual AI base learners were selected for their well-documented capability to accurately forecast the strength of intact rocks in geomechanical engineering [3,39]. Porosity (n), Schmidt hammer Rebound Number (N), Point Load Index (Is(50)), Brazilian Tensile Strength (BT) and Bulk Density (ρ) were used as the input variables, with UCS as the response variable. To construct the stacked model, the individual predictions of the base learners were then employed as the input variables of the meta-learner (MARS), with UCS again as the response variable. Variant stacked models based on 2 and 3 base learners (drawn from GRNN, RF and RBFNN) were developed to ascertain the effect of varying the number of base learners in the stacked generalisation methodology. As reported earlier, 118 data events constituted the training data set used for the learning process, and the testing data set of 50 data events was used for model validation.
The two stacked model types (2- and 3-base learners) were compared with the single AI algorithms (GRNN, RF and RBFNN) to evaluate their efficacy in UCS prediction. Their performance was measured with the key metrics presented in Eqs. (8) to (12). It is important to note that a model is considered good when it can fit the training data and generalise correctly to the testing data. Therefore, because this is a regression problem, all discussions and conclusions on the best model are based on the outcomes for the testing data set.

Comparison of the developed stacked models
It is important to note that the analysis in this section is based on the testing data results.
Different base learners were combined, and the performance metric scores were computed to measure the effect of varying the number of base learners in the stacking methodology. This was performed for the 2-base and 3-base learner configurations. Summary results of the performance metrics of the stacked models with the different base learners are presented in Table 3.
In Table 3, varying 2-Base learner stack model types were tested. The 2-Base learner model RF-GRNN had an NSEI score of 0.99968, followed by RBFNN-GRNN with 0.99960 and RBFNN-RF with 0.9984 in the testing stage. Thus, for the 2-Base learner stacking, the RF-GRNN combination had the highest NSEI score and was adjudged the best 2-Base learner model. However, inter-comparison between the 2-Base learner models and the 3-Base learner model (GRNN-RF-RBFNN) indicates that the latter obtained the best NSEI value of 0.99969, showing an improvement over the prediction performance of the 2-Base learner models.
Similarly, the PI values for the RF-GRNN, RF-RBFNN and GRNN-RBFNN stacked models were 0.50795, 0.75125 and 0.5341, respectively (Table 3). Comparing these, the RF-GRNN stack model had the lowest PI value. The PI value for the 3-Base learner stacked model, however, was 0.50691. It can therefore be stated that the 3-Base learner stacked model produced UCS predictions closest to the observed UCS, which is further confirmed by the respective SI and RMSE values obtained (Table 3). Although the 2-Base learner stack models produced competitive results, this study proposes the 3-Base learner stack model as the best model for UCS prediction. To this end, the 3-Base learner stack model has shown strong calibration and generalisation in UCS prediction. To further confirm its superiority over the corresponding variant models, a ranking assessment was performed.
The ranking was performed on the scores of the performance metrics for the testing data results. In the literature, the model with the lowest total rank value is reported to be the best prediction model. From Table 4, the proposed 3-Base learner stack model had the lowest total rank value of 4, showing that it is the best-performing prediction model among the competing 2-Base learner stack models. This high performance of the proposed 3-Base learner model arises from the collective computational capabilities of the selected base learners, which enhance the meta-learner's performance.
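The total-ranking procedure can be sketched as follows. The scores below are invented placeholders, not the values in Table 4; NSEI is negated so that "lower is better" holds for every column.

```python
import numpy as np

models = ["RBFNN-RF", "RBFNN-GRNN", "RF-GRNN", "3-Base stack"]
# Hypothetical metric scores (columns: RMSE, PI, SI, negated NSEI).
scores = np.array([
    [3.2, 1.5, 0.050, -0.85],
    [2.1, 1.1, 0.030, -0.92],
    [1.8, 0.9, 0.020, -0.95],
    [1.0, 0.5, 0.008, -0.99],
])

# Rank each metric column (1 = best), then total the ranks per model;
# the model with the smallest total wins, as in the analysis above.
ranks = scores.argsort(axis=0).argsort(axis=0) + 1
totals = ranks.sum(axis=1)
print(models[int(totals.argmin())], totals.tolist())
```

With four metrics, a model that ranks first on every one attains the minimum possible total of 4, which is the pattern the 3-Base learner stack exhibits in Table 4.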

Comparison of proposed 3-Base learner stacked model with single AI models
The capability of a model to learn and adapt to the behaviour of the training, testing and validation data sets signifies its predictive potency. First, the NSEI (Eq. (8)) was used to assess the goodness of fit of the models. For the NSEI, the closer the value is to 1, the better the model, and the closer it is to 0, the weaker the model.
From Table 5, the single AI model with the highest NSEI value was the GRNN, which scored 0.998403, followed by RF and RBFNN with 0.99791 and 0.985427, respectively, for the testing data results. When the NSEI value of the GRNN is compared with that of the proposed 3-Base learner stack model, the latter is seen to score higher than the former, as depicted in Fig. 5.
The RMSE (Eq. (9)) and PI (Eq. (10)), which describe the inherent errors associated with the models, are presented in Tables 6 and 7, respectively, for the testing and training data sets. RMSE and PI values closer to zero indicate a good model. From Table 6, the GRNN had the lowest RMSE value of 2.27711, followed by RF and RBFNN with 2.60512 and 6.8782 in the testing stage, as shown in Fig. 6. The 3-Base learner stack model recorded an RMSE value of 1.02775, meaning that its predictions deviate only slightly from the observed UCS data compared with the other investigated methods. A similar pattern was observed for the PI results, where the 3-Base learner stack model achieved the lowest values in both the training and testing stages. These can additionally be viewed in Figs. 6 and 7.
The scatter index (SI) (Eq. (11)), which represents the extent to which values deviate from their true values, was also computed. In the literature, it is reported that the closer the SI value is to 0, the better the prediction performance of the model. In Table 8, the testing SI values of the RBFNN, RF and GRNN were 0.052719, 0.019967 and 0.017453, respectively, so the GRNN had the lowest SI value and the best performance among the single AI models. For the 3-Base learner stack model, the calculated SI value was 0.00788. It is clear that the 3-Base learner stack model predictions agree more closely with the observed UCS data, deviating from them only marginally. Hence, as indicated in the literature, the stacking methodology is capable of enhancing the prediction potential of the applied single AI models. Figure 8 illustrates the SI values for the testing results.
Furthermore, a total ranking analysis was also performed on the evaluation metrics, with the lowest-ranked model again chosen as the best performer. The calculated ranking values for each single AI model and the proposed 3-Base learner stack model for the testing data sets are given in Table 9. In Table 9, the lowest total rank value of 4 was attained by the 3-Base learner stack model, indicating that the stack model predicted the UCS more adequately than the other competing models. The correlations between the measured UCS and the UCS predicted by the individual AI methods and the 2-Base and 3-Base learner stack models are presented in Figs. 9, 10, 11, 12, 13, 14 and 15. Based on the coefficient of determination (R²) values, the 3-Base learner model had the highest value of 0.9998, followed by the 2-Base models (RF-GRNN > RBFNN-GRNN > RBFNN-RF) and the single AI algorithms in the order GRNN > RF > RBFNN. This correlation analysis further corroborates the ranking obtained from the other performance metrics.

Model selection
The BIC was used to select the best-established model for UCS prediction based on the testing data. The selection considered all developed models, and the model with the lowest BIC value was chosen as the best performer.
From Table 10, the proposed 3-Base learner stack model had the lowest BIC score of 16.456; hence, it was chosen as the best prediction model for the UCS of rocks.

Conclusions
The proposed stacked generalisation methodology has proved appropriate for predicting the UCS of rock using data from the Banket Series of the Tarkwaian Formation, Ghana. The technique depends on the ability of the base learners to produce good outcomes, which serve as inputs to the meta-learner. In this study, the stacking technique was established by applying RBFNN, RF and GRNN as base learners and MARS as the meta-learner. The computed performance metrics showed that the proposed stacked technique achieved better prediction outputs in estimating the UCS of rocks than the other competing models (i.e. RBFNN, GRNN, RF and the 2-Base learner stacks). The proposed stacked technique performed best on all statistical performance metrics for both training and testing. Hence, the proposed stacked generalisation methodology will be beneficial to engineers in the mining, civil and geomechanical fields as a tool for estimating the UCS of rocks.

Fig. 1
Fig. 1 Geological map of the study area

Fig. 2
Fig. 2 Framework of the RBFNN. The terms in the figure are represented as follows: C_1, …, C_3 are input variables; Z_1, …, Z_4 are hidden neurons; and D_1, D_2, …, D_Pz are the weights of the RBFNN

Fig. 3
Fig. 3 Architecture of the GRNN. The terms in the figure are described as follows: I_1, …, I_n are input variables; N and D are summation neurons; and N/D is the output of the GRNN

Fig. 4
Fig. 4 Structure of the stacked generalisation model. The terms in the figure are described as follows: O_1, O_2, O_3, …, O_n are the new predictor variables from the base models (RBFNN, GRNN and RF), and Y_n is the response variable (UCS)

Fig. 5
Fig. 5 NSEI testing results for the various models

Fig. 7
Fig. 7 PI testing values for the various models

Fig. 8
Fig. 8 SI testing values for the various models

Table 1
Summary of the stratigraphic order of the Tarkwaian Formation

Table 2
Statistical summary of the data used

Table 3
Performance metrics of the different base learners for the 2-Base learner and 3-Base learner stacked models

Table 4
Total ranking scores of the different base learners for the 2-Base learner and 3-Base learner stacked models

Table 5
NSEI values for the single AI and proposed stacked models for the training and testing results

Table 6
RMSE values for the single AI and proposed stacked models for the training and testing results

Table 7
PI values for the single AI and proposed stacked models for the training and testing results

Fig. 6 RMSE testing values for the various models

Table 8
SI values for the single AI and proposed stacked models for the training and testing results

Table 9
Ranking scores of the single AI and the proposed 3-Base learner stack models

Table 10
BIC scores of the developed models