Cluster-based Kriging Approximation Algorithms for Complexity Reduction

Kriging, or Gaussian Process Regression, is applied in many fields as a non-linear regression model, as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, which are cubic and quadratic in the number of data points respectively, become a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for complexity reduction, called Cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a well-defined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.

Kriging is widely used in surrogate model based optimization [2]. Many other regression models exist, such as parametric models, which are easy to interpret but may lack the expressive power to model complex functions.
On the other hand, regression tree based methods like Random Forests [3] or Gradient Boosted Decision Trees lack the advantage of interpretability [4] but have more expressive power. Another method is Linear Model Trees [5], which uses a tree structure with linear models at the leaves of the tree. There are also more complex algorithms like Neural Networks, or Extreme Learning Machines [6], that are able to model very complex functions but are usually not easy to work with in practice. There are also different kernel based methods such as Support Vector Machines [7] and Radial Basis Functions [8]. The main advantage of Kriging over other regression methods is that Kriging provides not only the estimate of the value of a function, but also the mean squared error of the estimation, the so-called Kriging variance. The Kriging variance can be seen as the uncertainty assessment of the model and has been exploited in surrogate model based optimization and many other applications. Despite the clear advantage of the Kriging variance, Kriging suffers from one major problem: its high training time and space complexity, which are O(n^3) and O(n^2) respectively, where n denotes the number of data points. To overcome this complexity problem, Kriging approximation algorithms such as [9] and [10] have been introduced.
Unfortunately, these approximation algorithms are usually less accurate than the original Kriging algorithm.
Contributions. An overview of Kriging approximation methods is presented and a novel divide-and-conquer based approach, Cluster Kriging (CK), is introduced. The novel Cluster Kriging framework contains three steps: Partitioning, Modeling and Predicting. Each of these steps can be implemented using a wide range of approaches, which are explained in detail in this paper.
Using these approaches, four algorithms are implemented and compared against each other and the state of the art. One particularly interesting and novel algorithm that uses the Cluster Kriging methodology is the proposed Model Tree Cluster Kriging (MTCK). MTCK uses a regression tree with a specified number of leaf nodes to partition the data in the objective space. A Kriging model is then built on each partition defined by the tree's leaves. To predict an unseen record, MTCK uses only one of the trained Kriging models, depending on which leaf node the unseen record is assigned to. The proposed algorithms are evaluated and compared to several state-of-the-art alternative Kriging approximation algorithms. A well-defined testing framework for Kriging approximation algorithms [11] is adopted for the comparisons.

II. KRIGING
Notation. Throughout this paper, we shall use n, k, d to denote the number of data points, the number of clusters and the dimensionality of the input space, respectively. In addition, the regression function is denoted as f : R^d → R. In complexity statements in this paper we ignore d, since Kriging is generally used on low-dimensional data sets. Without loss of generality, the column vector convention is adopted throughout this paper.
Loosely speaking, Kriging is a stochastic interpolation method in which the output value of a stochastic process is predicted as a linear function of the observed output values [12], [13].
In particular, Kriging is the best linear unbiased predictor (BLUP), and the corresponding mean squared error of prediction is used for uncertainty quantification. Kriging originates from the field of spatial analysis/geostatistics and more recently is widely used in Bayesian optimization and the design and analysis of computer experiments (DACE) [14], [15]. A distinguishing feature of the model is that it provides a theoretical uncertainty measurement of its estimations.
When the stochastic process is assumed to be Gaussian, Kriging is equivalent to Gaussian Process Regression (GPR), where the posterior distribution of the regression function (posterior Gaussian process) is inferred through Bayesian statistics. In this paper, we shall consider this special case and adopt the mathematical treatment of the Gaussian process. Assume that the input data points are summarized in the set X = {x^(1), x^(2), ..., x^(n)} ⊆ R^d and the corresponding output variables are represented as y = [y(x^(1)), y(x^(2)), ..., y(x^(n))]^T. Specifically, the most used variant of Kriging, Ordinary Kriging, models the regression function f as a random process that is a combination of an unknown constant trend µ with a centered Gaussian process ε. The output variables are considered as the "noisy" observation of f, perturbed by a Gaussian random noise γ:

y(x) = f(x) + γ = µ + ε(x) + γ,  γ ∼ N(0, σ²_γ).  (1)

Note that the noises (error terms) γ are assumed to be homoscedastic (identically distributed) and independent, both from each other and from the Gaussian process ε. The centered Gaussian process ε is a stochastic process which possesses zero mean everywhere, and any finite collection of its random variables has a joint Gaussian distribution [1]. It can be completely specified by providing a covariance function k(·, ·) to calculate the pairwise covariance:

Cov[ε(x), ε(x′)] = k(x, x′).  (2)

The covariance function k(·, ·) is a kernel function performing the so-called "kernel trick", which computes the inner product in the feature space as a function in the input space. In this paper, the covariance function is chosen to be stationary, meaning that the kernel k is a function of x − x′ and invariant to translations in the input space. Consequently, the variance σ²_ε(x) of the Gaussian process ε is independent of the input x and thus denoted as σ²_ε in the following. In practice, a common choice is the Gaussian covariance function (also known as the squared exponential kernel):

k(x, x′) = σ²_ε exp( − Σ_{i=1}^{d} θ_i (x_i − x′_i)² ),  (3)

where the θ_i's are called hyper-parameters, that are either predetermined or estimated through model fitting, and σ²_ε is inferred by the maximum likelihood method. We omit the likelihood function in this paper. To infer the output value y^(t) = y(x^(t)) at an unobserved data point x^(t), the joint distribution of y^(t) and the observed outputs y is derived, conditioning on the input data set X, x^(t) and the unknown prior mean µ. Such a joint distribution is a multivariate Gaussian and is expressed as follows:

[y, y^(t)]^T | X, x^(t), µ ∼ N( µ1_{n+1}, [[Σ, k(x^(t))], [k(x^(t))^T, σ²_ε]] ),

where Σ = K + σ²_γ I_n with K the n × n matrix of entries K_ij = k(x^(i), x^(j)), k(x^(t)) = [k(x^(1), x^(t)), ..., k(x^(n), x^(t))]^T, and 1_{n+1} denotes a column vector of length n + 1 that contains only 1's. The homogeneous variance σ²_γ of the noise can be either determined by the user or estimated through the maximum likelihood method. The posterior distribution of y^(t) can be calculated by marginalizing µ out and conditioning on the observed output variables y [1]. Without any derivations, the posterior distribution for Ordinary Kriging is again Gaussian [16], with posterior mean and variance expressed as:

m(x^(t)) = μ̂ + k(x^(t))^T Σ⁻¹ (y − μ̂ 1_n),  (4)

s²(x^(t)) = σ²_ε − k(x^(t))^T Σ⁻¹ k(x^(t)) + (1 − 1_n^T Σ⁻¹ k(x^(t)))² / (1_n^T Σ⁻¹ 1_n),  (5)

where the estimation of the trend, μ̂ = (1_n^T Σ⁻¹ y) / (1_n^T Σ⁻¹ 1_n), is obtained by the maximum a posteriori (MAP) principle.
The posterior mean function (Eq. 4) is used as the predictor, while the posterior variance (Eq. 5) is the so-called Kriging variance that measures the uncertainty of the prediction.
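The posterior equations (Eq. 4 and 5) can be sketched directly in NumPy. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian kernel with a single shared hyper-parameter `theta`, unit process variance, and a small jitter term standing in for the noise variance.

```python
import numpy as np

def gauss_kernel(A, B, theta=1.0, sigma2=1.0):
    # Squared exponential kernel (Eq. 3) with one shared theta.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return sigma2 * np.exp(-theta * d2)

def ordinary_kriging_predict(X, y, x_new, theta=1.0, sigma2=1.0, jitter=1e-10):
    # Posterior mean (Eq. 4) and Kriging variance (Eq. 5) of Ordinary Kriging.
    n = len(X)
    Sigma = gauss_kernel(X, X, theta, sigma2) + jitter * np.eye(n)
    Si = np.linalg.inv(Sigma)
    ones = np.ones(n)
    mu_hat = (ones @ Si @ y) / (ones @ Si @ ones)  # MAP estimate of the trend
    k = gauss_kernel(X, x_new[None, :], theta, sigma2).ravel()
    m = mu_hat + k @ Si @ (y - mu_hat * ones)
    s2 = sigma2 - k @ Si @ k + (1 - ones @ Si @ k) ** 2 / (ones @ Si @ ones)
    return m, s2
```

At a training point the predictor interpolates the observed output and the Kriging variance collapses to (almost) zero, which is the behavior Eq. 4 and 5 guarantee for the noise-free case.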

III. RELEVANT RESEARCH
Despite the theoretically sound development of the Kriging model, it suffers from several issues when applied to large data sets. The major bottleneck is the high time complexity of the model fitting process: the inverse of the covariance matrix Σ⁻¹ needs to be computed for both the posterior mean and variance (Eq. 4 and 5), which has roughly O(n^3) time complexity. Moreover, when optimizing the hyper-parameters of the kernel function, the log likelihood function of those parameters is again calculated through Σ⁻¹, resulting in an O(n^3) computational cost per optimization iteration. Thus, for a large data set, such a high overhead in model fitting renders Kriging inapplicable in practice. Various attempts have been made to overcome the computational complexity issue of Kriging [1]: Subset of Data (SoD) [17] is a very simple approach to reducing complexity by taking m < n data points, usually chosen at random. Obviously this is a waste of information, but it might work well if a sufficient number of data points is available.
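Subset of Data amounts to nothing more than random subsampling before fitting; a sketch (the function name is hypothetical):

```python
import numpy as np

def subset_of_data(X, y, m, seed=None):
    # Fit-time shortcut: keep m < n randomly chosen points, so a
    # subsequent Kriging fit costs O(m^3) instead of O(n^3).
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    return X[idx], y[idx]
```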
Subset of Regressors (SoR) [18] approximates Kriging by a linear combination of kernel functions on a set of basis points.The basis points are linearly weighted to construct the predictor.
The choice of the basis points does influence the final outcome.As noted also in [19], there are only m (number of basis points) degrees of freedom in the model because the model is degenerate (finite linear-in-the-parameters), which might be too restrictive.
The Fully Independent Training Conditional (FITC) approximation [20], [21] summarizes the training data by a small set of inducing points and assumes the training outputs to be conditionally independent given these points, yielding a sparse approximation of the covariance matrix. Fast Kriging with Gaussian Markov Random Fields [10] is an algorithm that approximates the covariance matrix with a sparse precision matrix. It uses Gaussian Markov Random Fields (GMRF) on a reasonably dense grid to exploit the computational benefits of a Markov field while keeping the formula of the Kriging weights. This method reduces the complexity for simple and ordinary Kriging, but might not always be efficient with universal Kriging.
Bayesian Committee Machines (BCM) [9] is an algorithm similar to the one we propose, but developed from a completely different perspective. The basic motivation is to divide a huge training set into several relatively small subsets and then construct Kriging models on each subset. The benefit of this approach is that the training time on each subset is satisfactory and the training task can be easily parallelized. After training, the prediction is made by a weighted combination of the estimations from all the Kriging models. BCM uses batch prediction to speed up the computation even further. However, BCM does not seem to correct for different hyper-parameters per module, nor for badly fitted modules, which becomes a major problem when the number of modules increases.
Several other attempts have been made to divide the Kriging model into sub-models [22], [23], each tailored to a different domain. In [22], a Bagging [24] method is proposed to increase the robustness of the Kriging algorithm, rather than to speed up the algorithm's training time. In [23], a partitioning method is introduced to separate the data points into local Kriging models and combine the different models using a distance metric.
All of these approximation algorithms have their advantages and disadvantages, and they are compared to our Cluster Kriging algorithms. The algorithms listed above can be divided into three categories: 1) methods that approximate the covariance matrix (using sparsity), 2) methods that divide the training data into several clusters and build a model for each cluster, and 3) methods that take only a subset of the training data into account.
For the empirical study, three state-of-the-art algorithms (SoD, FITC and BCM) are selected to compare with the approaches proposed in this paper, as they seem to be the most used in their respective categories.

IV. CLUSTER KRIGING
The main idea behind our proposed methodology, Cluster Kriging, is to use a clustering algorithm to partition the data set into several clusters and build Kriging models on each cluster.
Loosely speaking, if the whole data set is partitioned into clusters of similar sizes, Cluster Kriging reduces the time complexity by a factor of k², resulting in O(k(n/k)³) = O(n³/k²) (where k is the number of clusters) if the Kriging models are fitted sequentially. When exploiting k CPU processes in parallel, the time complexity is further reduced to O((n/k)³). In practice this means that if we take k proportional to √n, our algorithm becomes quadratic in time, and using k proportional to n^(2/3) clusters in parallel it even reaches linear time complexity. For the output value y^(t) at an unobserved data point x^(t), each Kriging model provides a (local) prediction for y^(t). To obtain a global prediction, it is proposed to either combine the predictions from all the Kriging models or to select the most appropriate Kriging model for the prediction.
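The claimed savings are easy to tabulate. The helper below is illustrative only: it returns the training cost of Cluster Kriging relative to a single full Kriging fit under the idealized O(n³) cost model, assuming k equally sized clusters.

```python
def relative_training_cost(n, k, parallel=False):
    # Cost of Cluster Kriging relative to one O(n^3) Kriging fit,
    # assuming k equally sized clusters of n/k points each.
    full = n ** 3
    if parallel:
        # k workers each fit one cluster: (n/k)^3
        return (n / k) ** 3 / full
    # sequential fitting: k * (n/k)^3 = n^3 / k^2
    return (n ** 3 / k ** 2) / full
```

For n = 1000 and k = 10 this gives a sequential cost of 1% of the full fit, and 0.1% when the cluster fits run in parallel.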
There are many options for the data partitioning, e.g. K-means and Gaussian mixture models (GMM), and the Kriging models on the clusters can also be combined in different manners. By varying the options in each step of Cluster Kriging, many algorithms can be generated. Four of them will be explained in the next section. In this section, the options in each step of the algorithms are introduced gradually.

A. Clustering
The first step in the Cluster Kriging methodology is the clustering of the input data X (and the output variables) into several smaller data sets.In general, the goal is to obtain a set S containing k clusters on the input data set X .
In addition, the output values y are also grouped according to the clustering of X: y = [y_1, y_2, ..., y_k]^T. The clustering can be done in many ways, the simplest feasible approach being random clustering. For our framework, however, we introduce three more sophisticated partitioning methods that are used in the experiments later on.

1) Hard Clustering:
Hard clustering splits the data into k smaller disjoint data sets:

X = X_1 ∪ X_2 ∪ · · · ∪ X_k,  X_i ∩ X_j = ∅ for i ≠ j.  (6)

This can be achieved by various methods, for instance the K-means algorithm (Eq. 7). K-means clustering minimizes the within-cluster sum of squares, that is expressed as:

argmin_{X_1,...,X_k} Σ_{i=1}^{k} Σ_{x ∈ X_i} ||x − µ^(i)||²,  (7)

where µ^(i) is the centroid of cluster i and is calculated as the mean of the points in X_i. Each iteration of the minimization of the within-cluster sum of squares takes only O(nkd) execution time.
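A minimal NumPy sketch of the K-means partitioning step (Lloyd's algorithm; the helper is hypothetical and not the implementation used in the paper):

```python
import numpy as np

def kmeans_partition(X, y, k, iters=50, seed=0):
    # Minimal Lloyd's algorithm: each iteration costs O(n*k*d) for the
    # reassignment step, matching the complexity quoted above.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign every point to its nearest centroid
        labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1),
                           axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    # return the induced partition of both inputs and outputs
    return [(X[labels == j], y[labels == j]) for j in range(k)]
```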
2) Soft Clustering: Instead of using a hard clustering approach, a fuzzy clustering algorithm can be used to introduce a slight overlap between the various smaller data sets, which might increase the final model accuracy. To incorporate fuzzy clustering, instead of directly applying cluster labels, the probabilities that a point belongs to each cluster are calculated (Eq. 9) and, for each cluster, the (n · o)/k points with the highest membership values are assigned, where o is a user-defined setting that defines the overlap. o is set between 1.0 (no overlap) and 2.0 (completely overlapping clusters).
In principle, any fuzzy clustering algorithm can be used for the partitioning. In this paper, the Fuzzy C-Means (FCM) [25] clustering algorithm and Gaussian Mixture Models (GMM) [26] are used. FCM is a clustering algorithm very similar to the well known K-means. The algorithm differs from K-means in that it has additional membership coefficients and a fuzzifier.
The membership coefficients of a given point give the degrees to which this point belongs to each cluster. These coefficients are normalized so they sum up to one. The algorithm can be fitted on a given dataset and returns the coefficients for each data point to each cluster. The number of clusters is a user-defined parameter. Fuzzy C-means optimizes the objective function given in Eq. 8 iteratively. In each iteration, the membership coefficients of each point being in the clusters are computed using Eq. 9. Subsequently, the centroid of each cluster µ^(j) is computed as the center of mass of all data points, taking the membership coefficients as weights. The objective of fuzzy C-means is to find a set of centroids that minimizes the following function:

Σ_{i=1}^{n} Σ_{j=1}^{k} w_ij^m ||x^(i) − µ^(j)||²,  (8)

where the w_ij are the membership values (see Eq. 9) and m is the so-called fuzzifier (set to 2 in this paper). The fuzzifier determines the level of cluster fuzziness in the membership coefficients, which are computed as follows:

w_ij = 1 / Σ_{c=1}^{k} ( ||x^(i) − µ^(j)|| / ||x^(i) − µ^(c)|| )^{2/(m−1)}.  (9)

The other fuzzy clustering procedure used is Gaussian Mixture Models. GMM are used together with the expectation-maximization (EM) algorithm for fitting the Gaussian models. The mixture models are fitted on the training data and later used in the weighted combination of the Kriging models by estimating the cluster membership probabilities of the unseen data points.
The advantage of this clustering technique is that it is fairly robust and that the number of clusters can be specified by the user. For the GMM method one could use the full covariance matrix whenever the dimensionality of the input data is small. However, when working with high-dimensional data, a diagonal covariance matrix can be used instead. The time complexity of GMM depends on the underlying EM algorithm. In each iteration of EM, it takes O(nk) operations to re-estimate the model parameters.
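The membership computation of Eq. 9 can be sketched in a few NumPy lines, assuming the centroids have already been obtained and with a small constant added to the distances to avoid division by zero:

```python
import numpy as np

def fcm_memberships(X, centroids, m=2.0):
    # w_ij = 1 / sum_c (||x_i - mu_j|| / ||x_i - mu_c||)^(2/(m-1));
    # each row of the result sums to one (Eq. 9).
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1) + 1e-12
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=2)
```

With the default fuzzifier m = 2, this reduces to inverse-squared-distance weights normalized per point.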
3) Regression Tree Partitioning: The third method used is partitioning by means of a Regression Tree [27] built on the complete training set. The regression tree splits the dataset recursively at the best splitting point using the variance reduction criterion. Each leaf node of the Regression Tree represents a cluster of data points. The number of leaves (or the number of records per leaf) can be set by the user. By reducing the variance in each leaf node, and therefore the variance in each local dataset, the Kriging models can be fitted to the local datasets much better, as will be presented later on. The time complexity of using a Regression Tree for the partitioning is O(n), given that the depth of the tree or the number of leaf nodes is set by the user.
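The variance reduction criterion can be illustrated as follows. The sketch (a hypothetical helper) scans a single feature for the threshold that maximally reduces the summed target variance; a full regression tree applies this search recursively over all features and nodes.

```python
import numpy as np

def best_split(X, y, feature):
    # Scan candidate thresholds on one feature and pick the split that
    # maximizes variance reduction in the target y.
    order = np.argsort(X[:, feature])
    xs, ys = X[order, feature], y[order]
    best, best_gain = None, 0.0
    total = len(ys) * ys.var()
    for i in range(1, len(ys)):
        if xs[i] == xs[i - 1]:
            continue  # cannot split between identical feature values
        left, right = ys[:i], ys[i:]
        gain = total - (len(left) * left.var() + len(right) * right.var())
        if gain > best_gain:
            best_gain, best = gain, (xs[i - 1] + xs[i]) / 2
    return best, best_gain
```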

B. Modeling
After partitioning the data set into several clusters, Kriging models are fitted on each of the smaller data sets. The Kriging algorithm is applied on each cluster individually; this way, each model is optimized on its own training set and has its own hyper-parameters. For simplicity, we assume in this paper that the kernel functions used on each cluster are the same. As for the regression tree approach, the data set, or more precisely the input space, is partitioned by the tree algorithm and, for each leaf node, a Kriging model is computed using the data belonging to this node (Fig. 1). A similar technique is introduced in the context of combining linear regression models [28], [29], [5]. In general, the predictive (posterior) distribution of the target variable y^(t) on each cluster l is:

y^(t) | X_l, y_l, x^(t) ∼ N( m_l(x^(t)), σ²_l(x^(t)) ),  (10)

where m_l and σ²_l are specified again by Eq. 4 and 5, except that X, y are replaced by X_l, y_l here. Note that building the Kriging models can be easily parallelized, which gives an additional speedup to Cluster Kriging. Another benefit of building each model separately is that each model usually has a much better local fit than a single global Kriging model would obtain.

C. Prediction
After training the various Kriging models, unseen data points need to be predicted.For this prediction, there are several options.The first method which can be used is an optimally weighted combination of the Kriging models.
1) Optimal weighting procedure: When the input data set is separated by hard clustering methods, the Gaussian processes built on the different clusters are independent from each other. In this sense, it is possible to construct a global Gaussian process model as the superposition of the Gaussian processes from all the clusters. In addition, a weighting scheme is used to model how much "trust" should be put on the prediction from each cluster. The weighted superposition of all Gaussian processes is [30]:

ŷ(x^(t)) = Σ_{l=1}^{k} w_l m_l(x^(t)),  s²(x^(t)) = Σ_{l=1}^{k} w_l² σ²_l(x^(t)).  (11)

The overall prediction and its variance depend on the weights used in the equation above.
Intuitively, the optimal prediction is achieved when the variance of the estimation is minimal. To obtain such an optimal predictor, the overall Kriging variance should be optimized with respect to the weights, resulting in the following optimization task:

minimize Σ_{l=1}^{k} w_l² σ²_l(x^(t))  subject to  Σ_{l=1}^{k} w_l = 1.

The optimal weights are obtained by solving the problem above (see the previous work of the authors [30], [31] for details):

w_l = (1 / σ²_l(x^(t))) / Σ_{j=1}^{k} (1 / σ²_j(x^(t))).  (12)

The optimal weights are then used to construct the optimal predictor, which is the inner product of the model predictions with the optimal weights.
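The resulting inverse-variance weighting is straightforward to implement; a sketch (function names hypothetical):

```python
import numpy as np

def optimal_weights(variances):
    # Minimizing sum_l w_l^2 * s2_l under the constraint sum_l w_l = 1
    # yields inverse-variance weights.
    inv = 1.0 / np.asarray(variances, dtype=float)
    return inv / inv.sum()

def combine_predictions(means, variances):
    # Weighted superposition of the local predictors and its variance.
    w = optimal_weights(variances)
    return (w @ np.asarray(means, dtype=float),
            (w ** 2) @ np.asarray(variances, dtype=float))
```

Note that clusters whose local models are very uncertain at x^(t) automatically contribute little to the global prediction.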
2) Membership Probabilities: For GMM and other soft clustering approaches, the membership probabilities of unseen records can be used to define the weights for the combination of predictions. For each unseen record, the membership probabilities that this record belongs to each of the k clusters are calculated and directly used as the weights in the weighted sum of the predictions and variances given by the Kriging models:

ŷ(x^(t)) = Σ_{l=1}^{k} P(C = l | x^(t)) m_l(x^(t)),  (13)

where C is the cluster indicator variable ranging from 1 to k. The rationale behind such a weighting scheme can be shown from the following derivation. In general, the goal here is to express the predictive distribution of the variable y^(t), that is, the conditional density function on the whole data set X, using the posterior densities from all clusters. By applying the law of total probability with respect to the cluster indicator variable C, such a density function g can be written as [31]:

g(y^(t) | X, y, x^(t)) = Σ_{l=1}^{k} g_l(y^(t) | X_l, y_l, x^(t)) P(C = l | x^(t)).  (14)

The independence assumption between the Gaussian process models still holds approximately when the amount of overlap between clusters is small. Thus, the density function g(y^(t) | X, y, x^(t)) approximately equals Eq. 14. The first term inside the sum in Eq. 14 is the predictive density function obtained from each cluster. The second term represents the probability of data point x^(t) belonging to a cluster, which is the weight in Eq. 13. Consequently, the overall predictive density function is a mixture of the predictive distributions of all the Gaussian process models on the clusters. To predict y^(t), the expectation of the conditional density function g(y^(t) | X, y, x^(t)) is calculated:

E[y^(t) | X, y, x^(t)] = Σ_{l=1}^{k} P(C = l | x^(t)) m_l(x^(t)).  (15)

Note that m_l(x^(t)) is the mean function as shown in Eq. 10. Eq. 15 suggests that the overall prediction made on the whole data set can be expressed as a convex combination of the local predictions on each cluster of data, in which the combination weights are the membership probabilities of GMM or similar clustering approaches. Furthermore, the variance of the prediction (expectation) above is derived as follows:

Var[y^(t) | X, y, x^(t)] = Σ_{l=1}^{k} P(C = l | x^(t)) ( σ²_l(x^(t)) + m_l(x^(t))² ) − ( E[y^(t) | X, y, x^(t)] )².  (16)

Note that σ²_l(x^(t)) is again the Kriging variance at point x^(t) from cluster l.

3) Single Model Prediction: The last method which can be used to predict unseen data points is to use only one of the local Kriging models. First, the partitioning method is used to predict which cluster the new data point belongs to; then the Kriging model trained on this particular cluster is used to predict the mean and variance at the new data point. In the case of the Regression Tree procedure, the targets of new unseen data points are predicted by first deciding, using the Regression Tree, which model needs to be used. The target is then predicted using the specific Kriging model assigned to the leaf node (Figure 1). The main advantage of this method is that there is no combination of different predictions and only one of the local Kriging models needs to provide a prediction. This results in a significant speed-up for the prediction task.
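The membership-weighted combination of the local predictions, i.e. the mean and variance of the mixture of local Gaussian predictive distributions, can be sketched as:

```python
import numpy as np

def mixture_predict(weights, means, variances):
    # Mean and variance of a Gaussian mixture of the k local predictions,
    # with the membership probabilities as mixture weights.
    w, m, s2 = (np.asarray(a, dtype=float) for a in (weights, means, variances))
    mean = w @ m
    var = w @ (s2 + m ** 2) - mean ** 2  # E[y^2] - E[y]^2
    return mean, var
```

The variance term correctly grows when the local means disagree, even if every local model is individually confident.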

V. FLAVORS OF CLUSTER KRIGING
Using the three stages and the various components for each stage of the Cluster Kriging methodology, various algorithms can be implemented. In this paper we assess four different flavors of Cluster Kriging: Optimally Weighted Cluster Kriging (OWCK), which uses a hard (K-means) clustering technique to partition the data into k clusters. Subsequently, a Kriging model is trained on each cluster and, to predict unseen data points, the predictions and variances of each model are combined using the Optimal Weighting Procedure (Section IV-C1).
Optimally Weighted Fuzzy Cluster Kriging (OWFCK), which uses a soft clustering technique (Fuzzy C-Means) to partition the data into k overlapping clusters and also uses the Optimal Weighting Procedure for combining the different predictions (Section IV-C1).
Gaussian Mixture Model Cluster Kriging (GMMCK), which uses Gaussian Mixture Models to partition the data into k overlapping clusters; the trained Kriging models are weighted using the membership probabilities assigned to the unseen data by the Gaussian Mixture Model (Section IV-C2).
Model Tree Cluster Kriging (MTCK), the proposed novel algorithm, uses a regression tree with a fixed number of leaf nodes to partition the data in the objective space. A Kriging model is then trained on each partition defined by the tree's leaves. MTCK uses only one of the trained Kriging models per unseen record to predict (Section IV-C3), depending on which leaf node the unseen record is assigned to.
First, a decision tree regressor is constructed using the complete dataset. The tree is generated from the root node by recursively splitting the training data using the target variable and the variance reduction criterion. Once a node contains fewer than the minimum number of samples needed to split, or the node contains only one record, the splitting stops and the node is called a leaf. To control the number of clusters, the user can set the maximum number of leaves or the minimum leaf size. Next, each leaf node is assigned a unique index and each record belonging to the leaf is assigned to this index. For each leaf, a Kriging model is computed using only those records assigned to this leaf. Each Kriging model is now able to predict in a particular region defined by the Regression Tree.
For the prediction of the target for unseen records, the regression tree decides which Kriging model should be used.The final predicted mean and variance is provided by this Kriging model.
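Assuming scikit-learn's regression tree and Gaussian process regressor are acceptable stand-ins for the paper's components, the MTCK procedure above can be sketched as follows (an illustration, not the authors' code):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

class ModelTreeClusterKriging:
    def __init__(self, max_leaf_nodes=4, min_samples_leaf=20):
        self.tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes,
                                          min_samples_leaf=min_samples_leaf)
        self.models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)          # recursive variance-reduction splits
        leaves = self.tree.apply(X)  # leaf index of every training record
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            self.models[leaf] = GaussianProcessRegressor(
                normalize_y=True).fit(X[mask], y[mask])
        return self

    def predict(self, X):
        # Route every record to its leaf, query only that leaf's model.
        leaves = self.tree.apply(X)
        out = np.empty(len(X))
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            out[mask] = self.models[leaf].predict(X[mask])
        return out
```

Because each unseen record touches exactly one local model, prediction is as cheap as a tree lookup plus a single small Kriging evaluation.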

VI. EXPERIMENTAL SETUP AND RESULTS
A broad variety of experiments is executed to compare Optimally Weighted Cluster Kriging and its Fuzzy and Model Tree variants to a wide set of other Kriging approximation algorithms. The algorithms included in the test are: Bayesian Committee Machines, both with shared parameters (BCM sh.) and with individual parameters (BCM), Subset of Data (SoD), Fully Independent Training Conditional (FITC), Optimally Weighted Cluster Kriging (OWCK) using K-means clustering, Fuzzy Cluster Kriging using Fuzzy C-means (OWFCK), Fuzzy Cluster Kriging with Gaussian Mixture Models (GMMCK) and, finally, Model Tree Cluster Kriging (MTCK).
The above algorithms are evaluated on three different data sets from the UCI machine learning repository [32]: • Concrete Strength [33], a data set with 1030 records, 8 attributes and one target attribute.
The task is to predict the strength of concrete.
• Combined Cycle Power Plant (CCPP) [34], a data set of 9568 records, 3 attributes and one target attribute. The target is the hourly electrical energy output and the task is to predict this target.
• SARCOS [35], a data set from gaussianprocess.org with a training set of 44484 records, 21 attributes and 7 target attributes. The task is to predict the joint torques of an anthropomorphic robot arm. All 21 attributes are used as training data, but only the first target attribute is used as the target. The dataset comes with a predefined test set of 4449 records.
In addition, 8 synthetic datasets, each with 10,000 records, 20 attributes and one target attribute, are used. The synthetic datasets are generated using benchmark functions from the Deap Python Package [36] that are often used in optimization. The functions are Ackley, Schaffer, Schwefel, Rastrigin, H1, Rosenbrock, Himmelblau and Diffpow.

A. Hyper-Parameters
Each of the Kriging approximation algorithms has a hyper-parameter that can be tuned by the user to define the number of data points, clusters or inducing points, essentially defining the trade-off between complexity and accuracy. For each of the algorithms, a wide range of these hyper-parameters is used to see the effect and to make a fair comparison between the different algorithms. The overlap for each of the fuzzy algorithms is set to 10%, since from empirical experience we know that 10% works well. Although higher percentages (above 10%) usually increase accuracy, the increase is not significant and costs additional training time as well. For the Model Tree variant, the number of leaves is enforced by setting a minimum number of data points per leaf and an optional maximum number of leaves. For the Concrete Strength dataset and all synthetic datasets: FITC is set to a range of inducing points starting from 32 and increasing in powers of 2 to 512. SoD is set to the same range as FITC, but for SoD this means the number of data points. BCM, both the shared and non-shared versions, and all Cluster Kriging variants are set to a range from 2 to 32 clusters, increasing in powers of 2.
For the Combined Cycle Power Plant dataset: FITC is set to a range of inducing points starting from 64 and increasing in powers of 2 to 1024. SoD is set to a range from 256 to 4092 data points. BCM, both the shared and non-shared versions, and all Cluster Kriging variants are set to a range from 4 to 64 clusters.
Finally, for the SARCOS dataset, the range of FITC's inducing points stays the same as for the CCPP dataset, for SoD the range is from 512 to 8184 data points, and for all cluster based algorithms and the model tree variant, the range is set from 8 to 128 clusters.

B. Quality Measurements
The quality of the experiments is estimated with the help of 5-fold cross validation, except for the SARCOS dataset, which uses its predefined test set. The experiments are performed in a test framework similar to the framework proposed by Chalupka et al. [11], i.e. several quality measurements are used to evaluate the performance of each algorithm. The coefficient of determination (R² score), the Mean Standardized Log Loss (MSLL) (see [1], Chapter 8.1) and the Standardized Mean Squared Error (SMSE) are measured for each test run. The Mean Standardized Log Loss is a measurement that takes both the predicted mean and the predicted variance into account, penalizing wrong predictions that have a small predicted variance more than wrong predictions with a large variance.
The standardized log loss of a test record x_i is:

SLL(x_i) = (1/2) log(2π σ²_i) + (y_i − ŷ_i)² / (2 σ²_i) − triv(x_i),

where σ²_i is the predicted variance for record x_i and ŷ_i the predicted mean, and where triv is the trivial score, simulating a predictor that always predicts the overall mean ȳ and standard deviation σ_y of the training targets:

triv(x_i) = (1/2) log(2π σ²_y) + (y_i − ȳ)² / (2 σ²_y).

MSLL is the mean of SLL over all test records. For MSLL and SMSE lower scores are better; for R², 1.0 is the best possible score, meaning a perfect fit, and everything lower is worse.
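A sketch of the SMSE and MSLL computations, assuming the trivial predictor uses the training targets' mean and variance:

```python
import numpy as np

def smse(y_true, y_pred):
    # Mean squared error standardized by the variance of the true targets.
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def msll(y_true, y_pred, var_pred, train_mean, train_var):
    # Standardized log loss: negative log predictive density per record,
    # minus the loss of the trivial Gaussian predictor N(train_mean, train_var).
    nll = 0.5 * np.log(2 * np.pi * var_pred) \
        + (y_true - y_pred) ** 2 / (2 * var_pred)
    triv = 0.5 * np.log(2 * np.pi * train_var) \
        + (y_true - train_mean) ** 2 / (2 * train_var)
    return np.mean(nll - triv)
```

A model that predicts perfectly while matching the trivial predictor's variance scores an MSLL of −0.5, i.e. strictly better (lower) than the trivial baseline at 0.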

C. Results
The results of the experiments on the real-world datasets Concrete Strength, CCPP and SARCOS are shown in Figure 2. The results are shown with both objectives, time and accuracy (x and y axis respectively), in mind, to show the trade-off and to show that some algorithms perform better in both objectives. The R² scores of each dataset per algorithm, averaged over all folds, are shown in Table I.

D. Parameter Setting Recommendations
To use the Cluster Kriging algorithms, the minimum cluster size or the number of clusters has to be set as a user-defined parameter. It is recommended to set this parameter in such a way that each individual cluster contains between 100 and 1000 records. 1000 records is still computationally tractable for Kriging in terms of execution time, and 100 records is in most cases still sufficient for fitting the Kriging model. Selecting smaller cluster sizes is likely to result in poorly fitted models, and selecting cluster sizes larger than 1000 will in most cases not increase accuracy but will only increase execution time. These recommendations are purely based on empirical observations and depend highly on the dataset one is working with. For MTCK, smaller cluster sizes are usually still fine because of the low variance in the records per leaf, due to the splitting criterion of the Regression Tree.
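The recommendation translates into a one-line rule of thumb; the target size of 500 below is an arbitrary middle ground within the suggested 100-1000 range, not a value prescribed by the paper:

```python
import math

def choose_k(n, target_cluster_size=500):
    # Pick the number of clusters so each cluster holds roughly
    # target_cluster_size records.
    return max(1, math.ceil(n / target_cluster_size))
```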

VII. CONCLUSIONS AND FURTHER RESEARCH
A novel Kriging approximation methodology, Cluster Kriging, is proposed, using a combination of data partitioning methods and multiple local Kriging models.

Fig. 1 :
Fig. 1: Visualisation of a Model Tree. The top node is the root and the bottom nodes are the leaves with attached models. Each record in the data (on the left) is assigned to a leaf node of the regression tree.

Fig. 2 :
Fig. 2: Quality measurements of each algorithm with the hyper-parameters increasing in sample sizes for FITC and SoD, and decreasing in cluster sizes for the cluster based algorithms, as explained in Section VI-A. The results are shown for the Concrete, CCPP and Sarcos datasets and the generated dataset for the H1 function. The training time is given on the x axis and the R² score on the y axis. The dashed green line indicates the non-dominated set.

Table I. The MSLL scores are provided in Table II and the SMSE scores in Table III. The best results for each dataset are shown in bold face.

TABLE I :
Average R² score per dataset for each algorithm

TABLE II :
Average MSLL score per dataset for each algorithm

TABLE III :
Average SMSE score per dataset for each algorithm