Band depth based initialization of K-means for functional data clustering

The k-Means algorithm is one of the most popular choices for clustering data but is well known to be sensitive to its initialization. A substantial number of methods aim at finding optimal initial seeds for k-Means, though none of them is universally valid. This paper presents an extension to longitudinal data of one such method, the BRIk algorithm, which relies on clustering a set of centroids derived from bootstrap replicates of the data and on the use of the versatile Modified Band Depth. In our approach we improve the BRIk method by adding a step in which we fit appropriate B-splines to the observations, and a resampling process that ensures computational feasibility and handles issues such as noise or missing data. We have derived two techniques for providing suitable initial seeds, each of them stressing respectively the multivariate or the functional nature of the data. Our results with simulated and real data sets indicate that our Functional data Approach to the BRIk method (FABRIk) and our Functional Data Extension of the BRIk method (FDEBRIk) are more effective than previous proposals at providing seeds to initialize k-Means, in terms of clustering recovery.


Introduction
Amongst all non-hierarchical clustering algorithms, k-Means is the most widely used in every research field, from signal processing to molecular genetics. It is an iterative method that works by allocating each data point to the cluster with the nearest gravity center until assignments no longer change or a maximum number of iterations is reached. Despite converging quickly to a minimum of the distortion (i.e., the sum of squared Euclidean distances between each data point and its nearest cluster center) [25], the method has well-known disadvantages, including its dependence on the initialization process. If inappropriate initial points are chosen, the method can get stuck in a bad local minimum, converge more slowly or produce empty clusters [5]. The existence of multiple local optima, which has been shown to depend on the dataset size and on the overlap of clusters, greatly influences the performance of k-Means [27]. Also, the problem is known to be NP-hard [10]; this has motivated numerous efforts to find techniques that provide sub-optimal solutions. Hence, there are several methods for initializing k-Means with suitable seeds, though none of them is universally accepted; see [12,26,6], for example.
Recently, the BRIk method (Bootstrap Random Initialization for k-Means) has been proposed in [30] as a relevant alternative. This technique has two separate stages. In the first one, the input dataset is bootstrapped B times. k-Means is then run over these bootstrap replicates, with randomly chosen initial seeds, to obtain a set of cluster centers that form more compact groups than those in the original dataset. In the second stage these cluster centers are partitioned into groups and the deepest point of each grouping is calculated. These prototypes are used as the initialization points for the k-Means clustering algorithm.
The BRIk method is flexible, as it allows the user to make different choices regarding the number B of bootstrap replicates, the technique used to cluster the bootstrap centers and the depth notion used to find the most representative seed for each cluster. There is a variety of data depth definitions, all of which generalize the unidimensional median and ranks to the multivariate or the functional data context. The main difference between the two contexts is that the latter models longitudinal data as continuous functions and thus incorporates much more information into the analysis. Over the last two decades there has been substantial growth in the functional data analysis (FDA) literature, including techniques for cluster analysis [8,15], classification [20,18], analysis of variance [32] and principal components analysis [11], among others. In particular, the computationally feasible Modified Band Depth (MBD) [21] has been successfully applied in FDA to shape outlier detection [3], functional boxplot construction [28] and time warping [2]; for BRIk, the multivariate version of the MBD is the recommended choice.
In this work we consider the FDA context and propose an extension of BRIk that we have called the Functional data Approach to BRIk (FABRIk) method. The underlying idea is simple: we fit a continuous function to the longitudinal data and sample a number D of time points, which are then clustered using D-dimensional seeds, thus providing the final output labels. This offers computational feasibility and several advantages over standard multivariate techniques, including the possibility of smoothing the data to reduce noise or of including observations with missing features (time points) in the analysis.
The paper is organized as follows. Section 2 describes in detail our algorithm and the methods it will be compared to. Section 3 specifies the data, both simulated and real, and the quality measures used to assess FABRIk. Section 4 presents the overall results and in Section 5 we summarize our findings.

Methods
Consider a multivariate dataset X = {x_1, ..., x_N} of N observations in dimension d, x_i = (x_{i1}, ..., x_{id}), i = 1, 2, ..., N, that have to be partitioned into K clusters G_1, ..., G_K by means of the k-Means algorithm [9]. Once K initial seeds (centers) c_1, ..., c_K have been obtained, k-Means works in the following way. Each data point x_i is assigned to exactly one of the clusters, G_{i_0}, where i_0 = arg min_{1≤j≤K} d(x_i, c_j). Next, for the j-th cluster, the center is updated by calculating the component-wise mean of the elements in G_j. The assignment of elements to clusters and the computation of centers are repeated until there are no changes in the assignment or a maximum number of iterations is reached.
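The iteration just described can be sketched in a few lines. The following is an illustrative Python implementation (not the stats::kmeans code used in the paper), with the initial seeds passed in explicitly:

```python
import numpy as np

def kmeans(X, seeds, max_iter=100):
    """Lloyd-style k-Means: assign each point to its nearest center,
    then recompute each center as the mean of its cluster."""
    centers = np.array(seeds, dtype=float)
    for _ in range(max_iter):
        # squared Euclidean distances from every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(len(centers))
        ])
        if np.allclose(new_centers, centers):  # assignments have stabilized
            break
        centers = new_centers
    return labels, centers

# two obvious clusters, seeded with one point from each
labels, centers = kmeans(np.array([[0., 0.], [0., 1.], [10., 10.], [10., 11.]]),
                         seeds=[[0., 0.], [10., 10.]])
```

Note that the guard against empty clusters simply keeps the old center, one of several conventions in use.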
Since the k-Means output strongly depends on the initial seeds, there have been numerous efforts in the literature to obtain suitable initial centers for k-Means. In this work we focus on extending one of these methods, the BRIk algorithm, summarized as follows.
S1. FOR (b in 1 : B) DO
- Obtain the b-th bootstrap sample X_b of the set X.
- Run k-Means, randomly initialized with K seeds, on X_b and store the final K centroids.
S2. Group the dataset of K × B centroids into K clusters, using some non-hierarchical algorithm.
S3. Find the deepest point of each cluster from step S2, using some data depth notion; these points are used as the initial seeds of k-Means.
Specifically, it is recommended to use the MBD depth notion. Thus, the third step of the BRIk algorithm relies on finding the MBD-deepest point of each cluster (center grouping) from S2. The method is designed to admit any clustering technique in step S2; here we used the Partitioning Around Medoids (PAM) method [16]. Note that the Ward algorithm [31] also reported a good performance in the experiments of [30].
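The BRIk steps can be sketched as follows. This is an illustrative Python translation (the paper's implementation is the R package briKmeans): scikit-learn's KMeans stands in both for the randomly initialized inner runs and, in step S2, for the PAM grouping, while the multivariate MBD of a finite sample is computed from coordinate-wise ranks.

```python
import numpy as np
from sklearn.cluster import KMeans

def mbd(sample):
    """Multivariate MBD from coordinate-wise ranks (ties broken arbitrarily):
    the fraction of bands, formed by pairs of sample points, covering each point."""
    n, _ = sample.shape
    r = sample.argsort(axis=0).argsort(axis=0) + 1.0   # ranks per coordinate
    pairs = n * (n - 1) / 2.0
    covered = pairs - (r - 1) * (r - 2) / 2.0 - (n - r) * (n - r - 1) / 2.0
    return covered.mean(axis=1) / pairs

def brik_seeds(X, K, B=25, seed=0):
    rng = np.random.default_rng(seed)
    cands = []
    for _ in range(B):                                  # S1: bootstrap + k-Means
        Xb = X[rng.integers(0, len(X), len(X))]
        km = KMeans(n_clusters=K, init="random", n_init=1,
                    random_state=int(rng.integers(10**6))).fit(Xb)
        cands.append(km.cluster_centers_)
    C = np.vstack(cands)                                # K*B candidate centers
    groups = KMeans(n_clusters=K, n_init=10,            # S2 (PAM in the paper;
                    random_state=seed).fit_predict(C)   # plain k-Means stand-in here)
    return np.array([g[mbd(g).argmax()] if len(g) > 1 else g[0]   # S3: deepest point
                     for g in (C[groups == j] for j in range(K))])
```

On well-separated data the returned seeds land near the true group centers, which is what makes the subsequent k-Means run stable.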
The method that we present here is an extension of BRIk. When the data come from observations of functions we can add another stage, a functional approximation, to enhance the behavior of the method. In particular, we take the B-splines that best fit the original data in the least squares sense. This process sets a basis of (continuous) piecewise polynomial functions of a given degree and constructs the linear combination of these that best fits the data, providing an approximation to the original function that is continuous and differentiable up to a certain order.
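As an illustration of this least-squares fit, the following sketch uses SciPy's make_lsq_spline rather than the R splines package used in the paper; the curve, noise level and DF value are arbitrary choices:

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)                           # observation times
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)   # noisy samples of a curve

df, k = 10, 3                            # cubic splines; DF = 10 -> 7 interior knots
interior = np.linspace(x[0], x[-1], df - 3 + 2)[1:-1]    # equally spaced interior knots
knots = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]
spline = make_lsq_spline(x, y, knots, k)                 # least-squares B-spline fit

mse = np.mean((spline(x) - np.sin(2 * np.pi * x)) ** 2)  # error against the true curve
```

The fitted curve is markedly closer to the underlying function than the raw noisy samples, which is the denoising effect exploited later.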
The use of MBD with continuous functions is straightforward. Given a set of N real functions x_1, ..., x_N that are continuous on the compact interval I, the MBD of a function x within such a set is given by

MBD(x) = C(N,2)^{-1} Σ_{1 ≤ i_1 < i_2 ≤ N} λ_r(A(x; x_{i_1}, x_{i_2})),

where A(x) ≡ A(x; x_{i_1}, x_{i_2}) = {t ∈ I : min_{r=i_1,i_2} x_r(t) ≤ x(t) ≤ max_{r=i_1,i_2} x_r(t)}, λ_r(A(x)) = λ(A(x))/λ(I), and λ is the Lebesgue measure on I. Thus, λ_r(A(x)) represents the proportion of time that x is in the band defined by the functions x_{i_1} and x_{i_2}. With this approach, we have all the advantages of continuous functions. From a computational perspective, we can evaluate the fitted functions at new time points (on a grid as dense as desired) to obtain a new dataset in dimension D. These resampled data are then input to the BRIk algorithm to find the initial seeds of k-Means.
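For curves observed on a finite grid, λ_r(A(x)) becomes the fraction of grid points at which the curve lies inside the band, and the MBD can be computed directly from the definition. The following is a naive O(N²d) sketch; practical implementations such as depthTools use a rank-based shortcut:

```python
import numpy as np
from itertools import combinations

def mbd_naive(curves):
    """MBD of each row of `curves` (one sampled curve per row): average over
    all pairs of curves of the fraction of time spent inside the pair's band."""
    n, _ = curves.shape
    depth = np.zeros(n)
    pairs = list(combinations(range(n), 2))
    for i1, i2 in pairs:
        lo = np.minimum(curves[i1], curves[i2])   # lower envelope of the band
        hi = np.maximum(curves[i1], curves[i2])   # upper envelope
        depth += ((curves >= lo) & (curves <= hi)).mean(axis=1)
    return depth / len(pairs)

# three constant curves: the middle one lies inside every band
d = mbd_naive(np.array([[0.0] * 5, [1.0] * 5, [2.0] * 5]))
```

The middle curve attains the maximum depth of 1, while each extreme curve is covered by only two of the three bands.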
To check the performance of our method we have selected, as a benchmark, the classical Forgy approach [9], where the initial seeds are selected at random; we refer to this as the KM initialization.
Next, we have considered a widely used algorithm, k-Means++ (KMPP) [4], which aims at improving the random selection of the initial seeds in the following way. A random data point c_1 is first picked from the data set. This conditions the selection of the remaining initial seeds c_j, j = 2, ..., K, which are sequentially chosen among the remaining observations x with probability proportional to d^2(x, c_{j_x}), the squared distance between the point x and its closest already-selected seed c_{j_x}, where j_x = arg min_{1≤i≤j−1} d(x, c_i). Following this procedure, the initial centers are typically well separated from each other and yield more accurate groupings.
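A minimal sketch of this seeding rule (illustrative only, not the implementation compared in the experiments):

```python
import numpy as np

def kmeanspp_seeds(X, K, seed=0):
    """k-Means++ seeding: first seed uniform at random; each subsequent seed
    drawn with probability proportional to the squared distance to the
    closest seed chosen so far."""
    rng = np.random.default_rng(seed)
    seeds = [X[rng.integers(len(X))]]
    for _ in range(K - 1):
        # squared distance of every point to its nearest current seed
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in seeds], axis=0)
        seeds.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(seeds)

# two far-apart pairs of points: with overwhelming probability
# the two seeds land in different pairs
S = kmeanspp_seeds(np.array([[0., 0.], [0.1, 0.], [100., 100.], [100.1, 100.]]), K=2)
```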
Additionally, in order to assess the potential improvement brought by our method, the functional approximation stage of FABRIk is also added before the KM and KMPP methods are run. This way we can make a complete and fair comparison of how the FDA approach affects the BRIk method against how it improves KM or KMPP.
Hence, on the one hand, we will compare six different k-Means initialization techniques: KM and its FDA version (designated as FKM), BRIk, FABRIk, KMPP and KMPP with the functional approximation (denoted by FKMPP). On the other hand, we want to compare our strategy of replacing the d-dimensional data by those estimated in dimension D against the popular proposal of clustering the B-spline coefficients [1]. This proposal is motivated by the fact that two functions in the same cluster are expected to have similar vectors of coefficients, and that in most situations these vectors are considerably smaller in size than the D-dimensional ones. Therefore, we also ran k-Means on the set of computed coefficients, initialized with KM, BRIk and KMPP. We will refer to these approaches as C-KM, C-BRIk and C-KMPP.
In order to explore the advantages of our method, we have also carried out experiments where the data points had missing observations, which translates into "sparse data" in the functional data context. For each simulated dataset we randomly removed a proportion p ∈ {0.1, 0.25, 0.5} of the coordinates of each d-dimensional vector. For FKM, FABRIk and FKMPP, we estimated each missing value in a given vector by means of the corresponding B-spline. For KM, BRIk and KMPP, we imputed the missing data by linear interpolation; note that, for simplicity, the removal of coordinates was therefore not applied to the first and last values. We then analyzed the resulting data with each of the nine methods mentioned.
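The linear-interpolation imputation used for the multivariate methods can be sketched as follows, with np.interp playing the role of R's approx; the curve and missing positions are arbitrary choices:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 11)          # time grid
y = np.sin(2 * np.pi * t)              # a fully observed curve
missing = np.zeros(t.size, dtype=bool)
missing[[2, 5, 7]] = True              # interior coordinates removed
                                       # (first and last values are always kept)
y_imputed = y.copy()
y_imputed[missing] = np.interp(t[missing], t[~missing], y[~missing])
```

Because the endpoints are always observed, every missing coordinate has neighbors on both sides and the interpolation is well defined.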

Experimental Setup
Our experiments were carried out in the R statistical language [24], using the implementation of k-Means included in the stats package, that of B-splines in the splines package and the MBD implementation provided in the depthTools package [29]. For each dataset, the number of clusters in the k-Means algorithm was set equal to the number of groups. For FABRIk and BRIk, we used a bootstrap size of B = 25; a larger bootstrap size slows down the FABRIk and BRIk algorithms, while only slightly improving the distortion. Cubic B-splines with no intercept and a varying number of equally spaced knots, depending on the model to be analyzed, were chosen to approximate our data; then new evenly spaced observations were obtained through an oversampling process with different oversampling factors. An oversampling factor of m means that the number D of time points at which the approximated function is evaluated is m times the number of original input samples: D = m × d. The knots are defined through the degrees of freedom (DF) parameter: a DF value of n with cubic B-splines means that n − 3 internal knots are placed uniformly on the horizontal axis. The resemblance of the approximated function to the real one in each of the models is determined by the DF parameter of the B-splines.
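The oversampling step then amounts to evaluating the fitted spline on a grid m times denser. Again a SciPy-based sketch (the R packages above are what the paper actually uses; the grid sizes here are illustrative):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

d, m = 51, 4                                   # original samples and oversampling factor
x = np.linspace(0.0, 1.0, d)
y = np.cos(2 * np.pi * x)
k, df = 3, 8                                   # cubic splines; DF = 8 -> 5 interior knots
interior = np.linspace(0.0, 1.0, df - 3 + 2)[1:-1]
spline = make_lsq_spline(x, y, np.r_[[0.0] * (k + 1), interior, [1.0] * (k + 1)], k)

D = m * d                                      # D = m x d resampled time points
x_dense = np.linspace(0.0, 1.0, D)
y_dense = spline(x_dense)                      # row of the new dataset in dimension D
```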

Datasets
We conducted experiments involving simulated and real datasets. For the simulated ones we chose the four models described in Table 1; the functions giving origin to each of the clusters are shown in Figure 1. Models 1 and 2 consider polynomial and sinusoidal functions; the former is designed to assess the effect of rapidly changing signals on the clustering quality, whereas the latter could be used, for instance, to mimic monthly average temperatures in different climates. Model 3 consists of (raw and transformed) Gaussian functions and is used to test the impact of sudden peaks on signal clustering. Finally, Model 4, taken from [19], attempts to model swimmers' progression curves. The time vector (x coordinate) varies from model to model, while the number of simulated functions per cluster is 25 for all of them. To construct the clusters, additive white Gaussian noise is incorporated into each model to mimic the randomness of the data collection process.
The DF parameter requires a careful choice for each model. Higher values of this parameter can account for larger variations of a function, and therefore Models 2 and 3 require higher DFs. In our experiments, the specific value for each situation was selected in the set {4, ..., 50} according to an elbow-like rule applied to the plot of the (average) distortion against the DFs; the selected values are provided in Table 1.
For each model we generated 1000 independent datasets that were clustered with the nine methods, considering different levels of noise, σ ∈ {0.5, 1, 1.5, 2}. The results reported in this manuscript correspond to σ = 1. Other values of σ produced similar relative outputs; note that increasing the standard deviation beyond σ = 2 renders a very poor cluster accuracy for every method tested, although FABRIk and FKMPP present slightly higher accuracy measures than the alternatives, as the functional stage smooths out noise.

Table 1: Description of the simulated models. The first column includes the DFs used in each model, according to an elbow-like rule. The second column provides the time vector where the functions are observed; the third column describes the signal defining each cluster.
To complete the study of our algorithm, we used real data to assess whether it is of practical use. First, we considered a dataset containing 200 single-electrode electrocardiogram (ECG) signals, 133 labeled as normal and 67 as abnormal (myocardial infarction), as formatted in [23] (units not provided). Each observation reflects a heartbeat and consists of 96 measurements. The dataset is available at the UCR Time Series Classification Archive [7].
Secondly, the Gyroscope dataset was recorded with a Xiaomi Pocophone F1. The mobile phone was laid on a table and moved to follow four patterns: a straight line, a curvy line, an arch, and a rotation from side to side on the spot. The yaw angular velocity, in rad/s, was recorded using the Sensor Record app available on the Google Play Store.
Each recording of each pattern was truncated to 527 registered time points, spaced 10 ms apart, so that all data points have the same length; their duration is thus approximately 5 seconds. The dataset consists of 11 recordings per pattern and is available as Supplementary material.
Since all the methods tested in this study are random, in the sense that different runs generally produce distinct centroids, we ran each of them 1000 times on each dataset.

Performance evaluation
The overall performance of these methods has been evaluated according to five different measures that fall into four categories:
• Accuracy: We measure how similar the clusters are to the true groups by means of the Adjusted Rand Index (ARI) [13] and the clustering correctness, which is computed as the percentage of label agreement (i.e., correctly assigned elements) under the label permutation that yields the maximum set similarity.
• Dispersion: The obvious choice to determine how compact the clusters G_1, ..., G_K are is the distortion, Σ_{j=1}^{K} Σ_{x ∈ G_j} ||x − c_j||^2, where c_j is the gravity center of cluster G_j. It is evaluated by identifying each cluster from the partitioning labels and calculating the corresponding centroid in the original (multivariate) data space.
• Convergence: We assess the convergence speed through the number of iterations required by the k-Means algorithm to converge after being initialized.
• Computational cost: Finally, we consider the execution time, in seconds, used by each algorithm from start to finish. Calculations were carried out on an Intel Core i7-6700HQ CPU at 2.60 GHz with 8 GB RAM.
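The correctness measure above can be computed by exhausting the K! label permutations, which is perfectly feasible for the small K used here. The helper below is a hypothetical sketch, not the paper's code:

```python
import numpy as np
from itertools import permutations

def correctness(true_labels, pred_labels, K):
    """Percentage of correctly assigned elements under the best
    matching between predicted and true cluster labels."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    best = 0.0
    for perm in permutations(range(K)):        # try every relabeling of the clusters
        relabeled = np.array([perm[p] for p in pred_labels])
        best = max(best, float((relabeled == true_labels).mean()))
    return 100.0 * best
```

For larger K, the same matching can be found in polynomial time with the Hungarian algorithm on the confusion matrix.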
The performance of the methods we consider is assessed in terms of the median, the mean and the standard deviation of all five measures.

Results
Simulated data. We used the four models to evaluate the performance of FABRIk in different situations.
For Model 1 the FABRIk method, followed by BRIk, outperforms the alternatives with respect to all the evaluation measures except for the execution time, as shown in Table 2, where statistics for the distortion have been rounded to four significant figures. In particular, all the techniques based on clustering the vectors of coefficients are drastically worse than the others. The same situation is observed in all the scenarios we have considered (different models, levels of noise, and presence or absence of missing data), and thus, for compactness, we do not include them in the subsequent tables.
Notably, in this model the variability of the first four measures is remarkably smaller for FABRIk. Here, the synthetic groups 2 and 3 are easily confounded, and inappropriate initial seeds lead k-Means to merge these two groups into a single cluster and, consequently, to split one of the other groups into two clusters. This situation considerably reduces the ARI of the corresponding algorithms, whose distributions become bimodal. As an example, we considered a single dataset following Model 1 and compared the output of FABRIk, with B = 25, against 25 runs of k-Means with random initialization. Our method correctly allocates all the elements (ARI = 1), whereas none of the 25 runs of standard k-Means is capable of retrieving the correct grouping (average ARI = 0.9177), with clusters 2 and 3 confused in four of the runs.
Figure 3, upper panel, depicts violin plots of the ARI distributions; those corresponding to the correctness (not shown) display a similar pattern. The effect of wrong allocations is also reflected in the distribution of the distortion: all the methods except BRIk and FABRIk have bimodal densities or heavy upper tails, as shown in Figure 3, bottom panel. This behavior is observed in a significant number of runs of the methods, but FABRIk, followed by BRIk, finds the right partitioning more often.
Although a median test does not reject the equality of medians for the accuracy measures (correctness and ARI), we conducted pairwise t-tests for equality of means. As expected from their asymmetric distributions, we obtained p-values lower than 10^{-45} for the comparison of FABRIk with the other methods, except for BRIk, with p-values of the order of 10^{-3} and 10^{-6}.
With respect to the missing data case (sparse data), we report in Table 3 the results for a proportion p = 25% of missing data; similar relative outputs are observed in the other cases. FABRIk is again the best method in terms of correctness, ARI and number of iterations to reach convergence, followed by BRIk and FKMPP. With respect to the distortion, FABRIk is only slightly surpassed by BRIk. Regarding the execution time, KM, BRIk and KMPP required longer times because of the interpolation step, and hence the methods based on FDA are, globally, a more suitable option. In contrast to the previous case, in Models 2-4 FABRIk generally shows a distortion slightly higher than that of KM, BRIk and KMPP but smaller than that of the other initialization methods, as shown in Tables 4-9.
This can be explained by the two different data spaces we are considering and by our process for computing the distortion. From the point of view of the functional data space, FABRIk is simply the initialization of k-Means with appropriate seeds. However, from the point of view of the initial multivariate data space, the clustering obtained with FABRIk does not necessarily (and in fact not even frequently) correspond to a local minimum of the distortion in this space, thus yielding higher values of the objective function. Nevertheless, FABRIk is the best option among the functional methods. Moreover, it consistently provides remarkably higher accuracy measures and faster convergence in terms of the number of iterations. It also has a longer execution time, as expected; however, its ranking improves again when missing data are considered.
To assess the relevance of this increment in the computational cost, we compared these results with the strategy of initializing k-Means several times and choosing the set of seeds providing the lowest distortion. For instance, in Model 2, KM with 200 random starts increases the average ARI from 0.4200 to 0.4549, which is far from the 0.6467 obtained with our method, and requires 0.1263 seconds on average.
In these models, all the pairwise tests for equality of means and medians for correctness, ARI and distortion, comparing FABRIk with the other methods, yielded p-values with orders of magnitude between 10^{-262} and 10^{-9}. In summary, we can report a significant improvement over the alternatives.

For the ECG dataset, the DFs were set to 15 according to the elbow rule and we chose an oversampling factor of 1 for speed, since using a denser time grid produces a similar output. Table 10 summarizes our results. Note that the quality of the clustering recovery in terms of ARI is low, and we do not find prominent differences across methods. In particular, all of them require a single iteration to converge and have the same median for correctness, ARI and distortion. Yet, FABRIk leads to the best average correctness and the second best average ARI and distortion, and has the smallest standard deviations. This corresponds to a single-mode distribution: a scenario similar to that depicted in Figure 3 for simulated data. As usual, FABRIk is largely outperformed in terms of execution time by those methods that do not rely on the B-spline approximation.

For the Gyroscope dataset, the DFs were again set to 15 according to the elbow rule, and the oversampling factor was set to 1; a similar performance is observed for higher values of this parameter, which do not influence the final results. In contrast to the previous case, the values of correctness and ARI are much higher. The FABRIk method finds more accurate groups, obtaining ARI values larger than 0.9 in roughly 60% of the runs, whereas this percentage is around 35% and 20% for KM and FKMPP, respectively. For the distortion it is the second best option (after BRIk) and shows the least variability. In fact, for these two methods, the accuracy and dispersion measures have bimodal distributions, while those corresponding to the other algorithms present three or more peaks. With respect to the number of iterations all methods have similar values, with BRIk and FABRIk slightly better on average. However, the computational cost of our method is the largest.

Implementation
We have implemented an R package, briKmeans, providing the basic tools to run both BRIk and FABRIk. Users can tune the different parameters of the methods through the functions' parameters and retrieve the corresponding initial seeds and the resulting k-Means output, which includes the partitioning of the data set. For instance, the simple call > fabrik(exampleM1, k=4, degFr=10) will run FABRIk with the DFs set to 10 and the rest of the parameters set to their defaults, and return k=4 clusters for the dataset exampleM1. The clusters, including the final centroids, can be visualized individually in parallel coordinates [14] by means of the plotKmeansClustering function. In Figure 4 we illustrate this representation for a dataset following Model 1 with σ = 1. Note that users can also turn to the elbowRule function, which plots the distortion associated with FABRIk against the DFs, in order to optimize this parameter.

Conclusion
In this work we have developed FABRIk, an initialization method for k-Means that extends the BRIk algorithm to the functional data case. It takes d-dimensional longitudinal observations from continuous functions as the input dataset and returns D-dimensional initial seeds for k-Means after a functional approximation process via B-splines and a resampling stage.
Similarly to its precursor BRIk, our method is flexible in several ways. The number of bootstrap replicates B can be tuned by the user; in general, low values of B are enough to produce a relevant improvement over the alternatives. Additionally, the DFs and the oversampling factor m can be chosen to best adapt to the data. An oversampling factor of 1 has proven to yield results similar to those of higher values of this parameter, while remaining less computationally expensive. The DFs, in particular, are selected according to the elbow rule; nevertheless, our experiments show that a wide range of values for these parameters is also suitable. The clustering algorithm used to partition the cluster centers is an extra feature that can be determined by the user. Finally, one could potentially use any feasible data depth definition, but our recommendation is MBD, for its fast computation, its applicability to both functional and multivariate data, and its proven high scores in the accuracy measures.
We have made both methods publicly available through the R package briKmeans (on the CRAN repository), which also allows following the elbow-like rule to select suitable values of the DF parameter and representing the clusters, along with the final centroids, in parallel coordinates.
We have compared our functional initialization strategy to its multivariate version and to two more techniques, each with and without the FDA approach. Furthermore, we have assessed the behavior of the methods based on clustering the B-spline coefficients obtained for each data point, which have proven to be poor competitors.
Generally speaking, FABRIk works well with both synthetic and real data. It offers higher quality in terms of clustering recovery at the cost of a longer computational time and, commonly, a slightly larger distortion. In addition, we have shown that in some situations, particularly with the real data we have considered, FABRIk rises as a more reliable way of initializing k-Means, consistently providing better accuracy results with lower variance. Moreover, like any technique based on a functional approximation of the observations, it allows denoising and imputation of missing data.

Figure 1: Functions originating each of the clusters for the simulated models; the clusters are generated by adding Gaussian noise to each sampled component independently.

Figure 2: Real data. Left panel: heart electrical activity recorded during a cardiac cycle for patients with a normal heart or with a cardiac condition (units not provided). Right panel: gyroscope yaw velocity readings (in rad/s) for four different patterns, registered at steps of 0.01 s.

Figure 3: Violin plots, in red, of the distribution of the ARI (top) and the distortion (bottom) in Model 1, with σ = 1. The corresponding wider boxplots are superimposed in gray. BRIk and FABRIk have a consistent behaviour, whereas the other methods have more spread or bimodal distributions, with heavy lower and upper tails, respectively.

Figure 4: Representation of the four clusters retrieved by FABRIk for a dataset following Model 1 with σ = 1, along with the final centroids (solid lines). Our method correctly allocates all the elements (ARI = 1).

Table 2: Summary statistics for Model 1. The median, mean and variance over 1000 independent datasets of the five performance evaluation measures are provided. Best medians and means are bold-faced.

Table 3: Summary statistics for Model 1 with 25% missing values. The median, mean and variance over 1000 independent datasets of the five performance evaluation measures are provided. Best medians and means are bold-faced.

Table 4: Summary statistics for Model 2. The median, mean and variance over 1000 independent datasets of the five performance evaluation measures are provided. Best medians and means are bold-faced.
Real data. We next applied all the initialization methods to the real data.

Table 5: Summary statistics for Model 2 with 25% missing values. The median, mean and variance over 1000 independent datasets of the five performance evaluation measures are provided. Best medians and means are bold-faced.

Table 6: Summary statistics for Model 3. The median, mean and variance over 1000 independent datasets of the five performance evaluation measures are provided. Best medians and means are bold-faced.

Table 7: Summary statistics for Model 3 with 25% missing values. The median, mean and variance over 1000 independent datasets of the five performance evaluation measures are provided. Best medians and means are bold-faced.

Table 8: Summary statistics for Model 4. The median, mean and variance over 1000 independent datasets of the five performance evaluation measures are provided. Best medians and means are bold-faced.

Table 9: Summary statistics for Model 4 with 25% missing values. The median, mean and variance over 1000 independent datasets of the five performance evaluation measures are provided. Best medians and means are bold-faced.

Table 10: Summary statistics for the ECG dataset. The median, mean and variance over 1000 runs of each initialization method of the five performance evaluation measures are provided. Best medians and means are bold-faced.

Table 11: Summary statistics for the Gyroscope dataset. The median, mean and variance over 1000 runs of each initialization method of the five performance evaluation measures are provided. Best medians and means are bold-faced.