1 Introduction

Among all non-hierarchical clustering algorithms, k-Means is the most widely used across research fields, from signal processing to molecular genetics. It is an iterative method that allocates each data point to the cluster with the nearest gravity center until assignments no longer change or a maximum number of iterations is reached. Despite converging quickly to a minimum of the distortion, i.e., the sum of squared Euclidean distances between each data point and its nearest cluster center (Selim and Ismail 1984), the method has well-known disadvantages, including its dependence on the initialization process. If inappropriate initial points are chosen, the method can get stuck in a poor local minimum, converge more slowly or produce empty clusters (Celebi 2011). The existence of multiple local optima, which has been shown to depend on the dataset size and on the overlap of clusters, greatly influences the performance of k-Means (Steinley 2006). Moreover, the underlying minimization problem is known to be NP-hard (Garey and Johnson 1979), which has motivated numerous efforts to find techniques that provide sub-optimal solutions. Consequently, several methods for initializing k-Means with suitable seeds have been proposed, though none of them is universally accepted; see, for example, He et al (2004), Steinley and Brusco (2007) or Celebi et al (2013).

Recently, the BRIk method (Bootstrap Random Initialization for k-Means) was proposed in Torrente and Romo (2021) as a relevant alternative. This technique has two separate stages. In the first one, the input dataset is bootstrapped B times and k-Means is run over these bootstrap replicates, with randomly chosen initial seeds, to obtain a set of cluster centers that form more compact groups than those in the original dataset. In the second stage, these cluster centers are partitioned into groups and the deepest point of each group is computed. These prototypes are used as the initialization points for the k-Means clustering algorithm.

The BRIk method is flexible, as it allows the user to make different choices regarding the number B of bootstrap replicates, the technique used to cluster the bootstrap centers and the depth notion used to find the most representative seed for each cluster. There is a variety of data depth definitions, all of which generalize the unidimensional median and ranks to the multivariate or the functional data contexts. The main difference between these two contexts is that the latter models longitudinal data as continuous functions and thus incorporates much more information into the analysis. Furthermore, the study of functional data presenting “phase variation” (in addition to “amplitude variation”) can commonly benefit from the procedure known as curve registration or time warping, which aligns functions by identifying the timing of prominent features in the curves and transforming time so that these features occur at the same instants of (transformed) time (Ramsay and Li 1998).

Over the last two decades there has been substantial growth in the functional data analysis (FDA) literature, including techniques for cluster analysis (Ferreira and Hitchcock 2009; Jacques and Preda 2014), classification (López-Pintado and Romo 2003; Leng and Müller 2006), analysis of variance (Zhang 2013) and principal components analysis (Hall 2018), among others. In particular, the computationally feasible Modified Band Depth (MBD) (López-Pintado and Romo 2009) has been successfully applied in FDA to shape outlier detection (Arribas-Gil and Romo 2014), functional boxplot construction (Sun and Genton 2011) and time warping (Arribas-Gil and Romo 2011); moreover, the multivariate version of the MBD is the depth recommended for BRIk.

In this work we consider the FDA context and propose two alternative extensions of BRIk, which we respectively call the Functional data Approach to BRIk (FABRIk) and the Functional Data Extension of BRIk (FDEBRIk). The idea underlying both options is simple. We follow BRIk’s pattern of bootstrapping and clustering a collection of tighter groups of centroids, but, at the initial stage, we additionally fit a continuous function to the longitudinal data. Then, at a later phase, we return to a multivariate vector space by sampling the functions at a number D of time points. This offers computational feasibility and several advantages over standard multivariate techniques, including the possibility of smoothing data to reduce noise or of including observations with missing features (time points) in the analysis. Note that the focus of this work is to present an innovative initialization method for k-Means and not a novel clustering algorithm. Within this framework, our approach can be classified as a filtering method, as designated by Jacques and Preda (2014).

The paper is organized as follows. Section 2 describes in detail our algorithms and the methods they will be compared to. Section 3 specifies the data, both simulated and real, and the quality measures used to assess FABRIk and FDEBRIk. Section 4 presents the overall results and in Sect. 5 we summarize our findings.

2 Methods

Consider a multivariate dataset \(X=\{{\textbf{x}}_1,\dots ,{\textbf{x}}_N\}\) of N observations in d dimensions, \({\textbf{x}}_i=(x_{i1}, \dots , x_{id})\), \(i=1, \dots , N\), that has to be partitioned into K clusters \(G_1,\dots ,G_K\) by means of the k-Means algorithm (Forgy 1965). Once K initial seeds (centers) \({\textbf{c}}_1,\dots ,{\textbf{c}}_K\) have been obtained, k-Means works in the following way. Each data point \({\textbf{x}}_i\) is assigned to exactly one of the clusters, \(G_{i_0}\), where \(i_0=\arg \min _{1\le j\le K} {{\mathbb {d}}}({\textbf{x}}_i,{\textbf{c}}_j)\), with \({{\mathbb {d}}}\) the Euclidean distance. Next, for the j-th cluster, the center is updated by calculating the component-wise mean of the elements in \(G_j\). The assignment of elements to clusters and the computation of centers are repeated until there are no changes in the assignment or a maximum number of iterations is reached.
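A minimal R sketch of this iteration (illustrative only; our experiments use the kmeans implementation in the stats package) could be:

# Sketch of the k-Means iteration described above; X is an N x d matrix of
# observations and C a K x d matrix of initial seeds (empty clusters not handled)
lloyd <- function(X, C, max_iter = 10) {
  labels <- rep(0, nrow(X))
  for (it in seq_len(max_iter)) {
    # assignment step: each point goes to the cluster with the nearest center
    new_labels <- apply(X, 1, function(x) which.min(colSums((t(C) - x)^2)))
    if (all(new_labels == labels)) break   # assignments unchanged: stop
    labels <- new_labels
    # update step: each center becomes the component-wise mean of its cluster
    C <- t(sapply(seq_len(nrow(C)), function(j)
      colMeans(X[labels == j, , drop = FALSE])))
  }
  list(cluster = labels, centers = C)
}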

Since the k-Means output strongly depends on the initial seeds, there have been numerous efforts in the literature to obtain suitable initial centers. In this work we focus on extending one of these methods, the BRIk algorithm, summarized as follows (a minimal R sketch is given after the steps).

S1. FOR (b in 1 : B) DO

• Obtain the b-th bootstrap sample \(X_b\) of the set X.

• Run k-Means, randomly initialized with K seeds, on \(X_b\) and store the final K centroids.

S2. Group the dataset of \(K \times B\) centroids into K clusters, using some non-hierarchical algorithm.

S3. Find the deepest point of each cluster from step S2, using some data depth notion; these deepest points are used as the initial seeds of k-Means.
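The following sketch translates steps S1–S3 into R, using kmeans from stats, pam from the cluster package for step S2 and an mbd helper for step S3 (a direct implementation of the latter is given after the MBD formula below); the default B = 25 matches the value used in our experiments.

library(cluster)   # provides pam, used here for step S2

# Sketch of BRIk's seed construction; X is an N x d data matrix
# (each group from step S2 is assumed to contain at least two centroids)
brik_seeds <- function(X, K, B = 25) {
  # S1: collect the k-Means centroids of B bootstrap replicates of X
  centroids <- do.call(rbind, lapply(seq_len(B), function(b) {
    Xb <- X[sample(nrow(X), replace = TRUE), , drop = FALSE]
    kmeans(Xb, centers = K)$centers          # randomly initialized k-Means
  }))
  # S2: partition the K * B centroids into K groups
  groups <- pam(centroids, K)$clustering
  # S3: the MBD-deepest centroid of each group becomes an initial seed
  t(sapply(seq_len(K), function(j) {
    Cj <- centroids[groups == j, , drop = FALSE]
    Cj[which.max(mbd(Cj)), ]
  }))
}

Calling kmeans(X, centers = brik_seeds(X, K)) then runs k-Means from these seeds.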

Steps S2 and S3 grant flexibility to BRIk. Specifically, the method is designed to work with any clustering technique in step S2. Here we use the Partitioning Around Medoids (PAM) method (Kaufman and Rousseeuw 1990), though the Ward algorithm (Ward Jr 1963) also exhibited good performance in the experiments of Torrente and Romo (2021). Also, in step S3 it is recommended to use the MBD depth notion (López-Pintado and Romo 2009), whose multivariate version, for bands formed by pairs of distinct elements of X, is given by the expression

$$\begin{aligned} MBD({\textbf{x}}_i)=\displaystyle \frac{1}{d\,\binom{N}{2}} \sum _{1\le i_{1}<i_2\le N}\ \sum _{n=1}^{d} I_{\left\{ \min \left\{ x_{i_{1} n},x_{i_2 n}\right\} \le x_{in}\le \max \left\{ x_{i_{1}n},x_{i_2n}\right\} \right\} } , \end{aligned}$$

where \(I_A\) is the indicator function of set A. Thus, the BRIk algorithm’s third step relies on finding the MBD-deepest point of each cluster (center grouping) from S2.
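For illustration, a direct (unoptimized) transcription of this expression into R, which also serves as the mbd helper in the sketch above, could be as follows; the depthTools package provides an efficient implementation.

# Direct transcription of the MBD formula; rows of X are observations
mbd <- function(X) {
  N <- nrow(X); d <- ncol(X)
  depths <- numeric(N)
  for (p in utils::combn(N, 2, simplify = FALSE)) {
    lo <- pmin(X[p[1], ], X[p[2], ])   # lower envelope of the band
    hi <- pmax(X[p[1], ], X[p[2], ])   # upper envelope of the band
    # count the coordinates of every observation lying inside the band
    depths <- depths + rowSums(sweep(X, 2, lo, ">=") & sweep(X, 2, hi, "<="))
  }
  depths / (d * choose(N, 2))
}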

In this work we adapt BRIk to the case where the data come from observations of functions, in order to provide better initial seeds for k-Means by taking advantage of the nature of the data. In particular, we first use B-splines to approximate the original data in the least squares sense. This process sets a basis of (continuous) piecewise polynomial functions of a given degree and constructs the linear combination of these that best fits the data, providing an approximation to the original function that is continuous and differentiable up to a certain order, with all the advantages this entails.

At this stage, we propose two variants, which respectively exploit the multivariate or the functional properties of the original data. The differences between the two versions can be seen through the left and right paths in Fig. 1.

The most straightforward technique, FABRIk (left path in Fig. 1), takes a plain computational perspective and evaluates the fitted functions at new time points (on a grid as dense as desired) to obtain a new dataset in D dimensions. These re-sampled (multivariate) data are then input to the BRIk algorithm to find the initial seeds of k-Means.
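As an illustration of this stage for a single curve (toy data and parameter values; the splines2 package is the one used in our implementation), one could write:

library(splines2)
d <- 50; m <- 2; D <- m * d                      # oversampling factor m = 2
t <- seq(0, 1, length.out = d)                   # original time grid
y <- sin(2 * pi * t) + rnorm(d, sd = 0.3)        # one noisy observation
basis <- bSpline(t, df = 10, degree = 3, intercept = FALSE)  # cubic B-splines, DF = 10
coefs <- coef(lm(y ~ basis - 1))                 # least-squares fit, no intercept
t_new <- seq(0, 1, length.out = D)               # denser, evenly spaced grid
y_new <- c(predict(basis, newx = t_new) %*% coefs)  # re-sampled D-dimensional vector

Stacking the vectors y_new of all curves as rows yields the dataset passed to BRIk.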

Fig. 1 Workflow of the FABRIk (left path) and FDEBRIk (right path) methods. Black (red) boxes indicate procedures whose output is multivariate (functional) data. The first three steps of FABRIk are the ones used in FKM and FKMPP (color figure online)

Alternatively, relevance can be given to the functional nature of the data by using the analytical expression of the fitted curves (or their derivatives) to compute distances between them. In that case, to develop the FDEBRIk method (right path in Fig. 1), step S1 of BRIk is taken with the only modification of replacing k-Means by any technique suited for clustering functional data. In particular, we suggest using the k-mean alignment (kma) method for curve clustering (Sangalli et al 2010). This technique generalizes k-Means by allowing curve alignment to a given template in an iterative process of template identification (the templates corresponding to cluster centroids); assignment and, possibly, alignment of curves to such templates; and normalization to avoid cluster drifting. In the assignment and alignment step, if necessary, phase variability is removed by means of appropriate warping functions, while amplitude variability is accounted for with some similarity measure. In particular, in kma the similarity between two (continuous, differentiable) functions \(x_i\) and \(x_j\) can be measured by the cosine of the angle between the functions

$$\begin{aligned} \rho _0(x_i,x_j) = \dfrac{\int _{\mathbb {R}}x_i(s) x_j(s) ds}{\sqrt{\int _{\mathbb {R}} x_i^2(s) ds} \sqrt{\int _{\mathbb {R}} x_j^2(s) ds}}, \end{aligned}$$
(1)

or between their derivatives

$$\begin{aligned} \rho _1(x_i,x_j)= \dfrac{\int _{\mathbb {R}}x'_i(s) x'_j(s) ds}{\sqrt{\int _{\mathbb {R}} (x'_i(s))^2 ds} \sqrt{\int _{\mathbb {R}} (x'_j(s))^2 ds}}, \end{aligned}$$
(2)

which are efficiently evaluated for B-splines. The kma process stops when the increments in the similarities fall below a given threshold; the initial templates (seeds) are chosen at random among the original curves.
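For curves available only on a common sampling grid t, these similarities can be approximated numerically; a minimal sketch based on trapezoidal integration (in kma, the integrals are evaluated efficiently for B-splines) is:

# Trapezoidal approximation of rho_0 in (1) for curves xi, xj sampled on t;
# rho_1 in (2) is obtained by passing (numerical) derivatives instead
trapz <- function(t, y) sum(diff(t) * (head(y, -1) + tail(y, -1)) / 2)
rho0 <- function(t, xi, xj)
  trapz(t, xi * xj) / sqrt(trapz(t, xi^2) * trapz(t, xj^2))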

Since this work focuses on finding suitable multivariate seeds to be used directly in k-Means, we have not considered the use of time warping, as this would also require modifying the original observations at each iteration, in addition to increasing the computational cost. We thus leave the integration of our methodology with time warping, to address phase variation, for a future study.

The FDEBRIk method, then, retrieves from the B runs of kma the non-aligned, functional centroids corresponding to each bootstrap sample and projects them into a D-dimensional (multivariate) vector space before proceeding with steps S2 and S3, which will provide the final collection of initial seeds for k-Means. In the cases where we need to specify which of the similarity measures (1) or (2) we are using, we will write \(\hbox {FDEBRIk}_0\) or \(\hbox {FDEBRIk}_1\), respectively.

To assess the performance of our methods we have selected several initialization techniques for comparison. First, as a benchmark, the classical Forgy approach (Forgy 1965), where the initial seeds are selected at random; we refer to this as the KM initialization.

Next, we have considered a widely used algorithm, k-Means++ (KMPP) (Arthur and Vassilvitskii 2007), which aims at improving the random selection of the initial seeds in the following way. A first data point \({\textbf{c}}_1\) is picked at random from the dataset. This choice conditions the selection of the remaining initial seeds \({\textbf{c}}_j\), \(j=2,\dots ,K\), which are sequentially chosen among the remaining observations \({\textbf{x}}\) with probability proportional to \({{\mathbb {d}}}^2({\textbf{x}},{\textbf{c}}_{j_{{\textbf{x}}}})\), the squared Euclidean distance between the point \({\textbf{x}}\) and its closest already chosen seed \({\textbf{c}}_{j_{{\textbf{x}}}}\), where \(j_{{\textbf{x}}}=\arg \min _{1\le l<j} {{\mathbb {d}}}({\textbf{x}},{\textbf{c}}_l)\). Following this procedure, the initial centers are typically separated from each other and yield more accurate groupings.
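A compact R sketch of this seeding rule (illustrative; not the implementation used in our experiments) is:

# k-Means++ seeding; X is an N x d data matrix, K the number of seeds
kmpp_seeds <- function(X, K) {
  centers <- X[sample(nrow(X), 1), , drop = FALSE]   # c_1 picked uniformly
  for (j in 2:K) {
    # squared distance from each point to its closest already chosen seed
    d2 <- apply(X, 1, function(x) min(colSums((t(centers) - x)^2)))
    centers <- rbind(centers, X[sample(nrow(X), 1, prob = d2), , drop = FALSE])
  }
  centers
}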

Additionally, in order to assess the potential improvement brought by our methods, the functional approximation stage of FABRIk and FDEBRIk is also prepended to the KM and KMPP methods. That is, we apply the B-spline fitting and oversampling steps and then run KM and KMPP on the new D-dimensional dataset. This way we can make a complete and fair comparison of how the FDA approach affects the BRIk method against how it improves KM or KMPP.

Hence, on the one hand, we will compare eight different k-Means initialization techniques: KM and its FDA version with B-spline fitting and oversampling (designated FKM); KMPP and KMPP with the functional approximation (denoted FKMPP); and BRIk and its functional variants, FABRIk and FDEBRIk with \(\rho _0\) and \(\rho _1\). On the other hand, we want to compare our strategy of replacing the d-dimensional data by those estimated in D dimensions against the popular proposal of clustering the B-spline coefficients (Abraham et al 2003). This is because two functions in the same cluster are expected to have similar vectors of coefficients, and also because in most situations these vectors are considerably smaller in size than the D-dimensional ones. Therefore, we ran k-Means on the set of computed coefficients, initialized with KM, KMPP and BRIk. We will refer to these approaches as C-KM, C-KMPP and C-BRIk.

In order to explore the advantages of our method, we have also carried out experiments where the data points had missing observations, which translates into “sparse data” in the functional data context. For each simulated dataset we randomly removed a proportion \(p\in \{0.1, 0.25, 0.5\}\) of the coordinates of each d-dimensional vector. For FKM, FKMPP, FABRIk, FDEBRIk and the methods based on clustering the B-spline coefficients, we estimated each missing value in a given vector by means of the corresponding B-spline. For KM, BRIk and KMPP, we imputed the missing data by linear interpolation; note that, for simplicity, the removal of coordinates was therefore not applied to the first and last values. We then performed the analysis of the resulting data with each of the eleven methods mentioned.
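For a single vector y observed at times t, the linear interpolation used for the multivariate methods reduces to a call to approx; the functional methods instead evaluate the fitted B-spline at the missing time points:

# Impute the NAs of one d-dimensional vector by linear interpolation;
# the first and last values are assumed present, as described above
miss <- is.na(y)
y[miss] <- approx(t[!miss], y[!miss], xout = t[miss])$y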

3 Experimental setup

Our experiments were carried out in the R statistical language (R Core Team 2017), using the implementation of k-Means included in the stats package; the fitting of B-splines in the splines2 package; the kma algorithm in the fdakma package; and the MBD implementation provided in the depthTools package (Torrente et al 2013). For each dataset, the number of clusters in the k-Means algorithm was set equal to the number of groups and the maximum number of iterations was left at its default value (10). For BRIk, FABRIk and FDEBRIk, we used a bootstrap size of \(B=25\); a larger bootstrap size slows down these algorithms while only slightly improving the distortion. Cubic B-splines with no intercept and a varying number of equally spaced knots, depending on the model to be analyzed, were chosen to approximate our data; new evenly spaced observations were then obtained through an oversampling process with different oversampling factors. An oversampling factor of m means that the number D of time points at which the approximated function is evaluated is m times the number of original input samples: \(D=m\times d\). The knots are defined through the degrees of freedom (DF) parameter: a DF value of n with cubic B-splines corresponds to \(n-3\) internal knots placed uniformly along the horizontal axis. How closely the approximating function resembles the true one in each of the models is governed by the DF parameter. Finally, as mentioned before, kma was used with no alignment.

3.1 Datasets

We conducted experiments involving simulated and real datasets. For the simulated ones we chose the four models described in Table 1; the functions generating each of the clusters are shown in Fig. 2. Models 1 and 2 consider polynomial and sinusoidal functions; the former is designed to assess the effect of rapidly changing signals on the clustering quality, whereas the latter could be used, for instance, to mimic monthly average temperatures in different climates. Model 3 consists of (raw and transformed) Gaussian functions and is used to test the impact of sudden peaks on signal clustering. Finally, Model 4, taken from Leroy et al (2018), models swimmers’ progression curves. The time vector (x coordinate) varies from model to model, while the number of simulated functions per cluster is 25 for all of them. To construct the clusters, additive white Gaussian noise, whose intensity is described by the standard deviation \(\sigma \), is added to each model to mimic the randomness of the data collection process.

Table 1 Description of the simulated models. The second column provides the time vector at which the functions are observed, whereas the third column describes the signal defining each cluster, whose values are computed coordinate-wise. The fourth column includes the DFs used in each model, according to an elbow-like rule. The last column displays the SNR of each cluster corresponding to a noise level \(\sigma =1\)
Fig. 2 Functions generating each of the clusters in the simulated models; the datasets are obtained by adding Gaussian noise independently to each sampled component (color figure online)

The DF parameter requires a careful choice for each model. Higher values of this parameter can accommodate larger variations of a function, so Models 2 and 3 require higher DFs. In our experiments, the specific value for each situation was selected from the set {4,..., 50} according to an elbow-like rule applied to the plot of the (average) distortion against the DFs; the chosen values are provided in the fourth column of Table 1.

For each model, and using \(\sigma = 1\), we generated 1000 independent datasets that were clustered with the eleven methods. To understand how this level of noise affects each particular model, we provide in the last column of Table 1 the signal-to-noise ratio (SNR), calculated from the classical signal processing definition as \(10 \times \log _{10}(P_{signal}/P_{noise})\), where \(P_{signal}\) is the mean of the squared signal and \(P_{noise} = \sigma ^2\) (i.e., the noise variance). Nevertheless, in Fig. 2 it is apparent that, for each model, some clusters are more prone to confusion than others –see, for instance, clusters 2 (red) and 3 (green) of Model 4–, regardless of the particular SNR value.

Other values of \(\sigma \left( \in \{0.5, 1.5, 2 \}\right) \) produced similar relative outputs. Increasing the standard deviation beyond \(\sigma = 2\) renders very poor cluster accuracy for every method tested; even so, FABRIk, \(\hbox {FDEBRIk}_0\) and FKMPP present slightly higher accuracy measures than the alternatives, as the functional stage smooths out noise.

To complete the study of our algorithms we used real data to assess whether they are of practical use.

First, we have considered a dataset containing 200 electrocardiogram (ECG) signals recorded by a single electrode, 133 labeled as normal and 67 as abnormal (myocardial infarction), as formatted in Olszewski (2001) (units not provided). Each observation reflects a heartbeat and consists of 96 measurements. The dataset is available at the UCR Time Series Classification Archive (Dau et al 2018). See Fig. 3, left panel.

Secondly, the Gyroscope dataset was recorded using a Xiaomi Pocophone F1. The mobile phone was laid on a table and moved to follow four patterns: a straight line, a curvy line, an arch and a rotation from side to side on the spot. The yaw angular velocity in rad/s was recorded using the Sensor Record app available in Google Play Store. This dataset was used to test the method’s applicability to sensor data, specifically targeting a potential use case in robotic applications.

Each recording for each pattern was truncated to 527 registered time points, spaced by 10ms, in order for all data points to have the same length. Thus, their duration is approximately 5 seconds. See Fig. 3, right panel. The dataset consists of 11 recordings for each pattern and is available as Supplementary material.

Since all the methods tested in this study are random, in the sense that different runs produce in general distinct centroids, we ran each of them 1000 times for each dataset.

Fig. 3 Real data. Left panel: heart electrical activity recorded during a cardiac cycle for patients with a normal heart or with a cardiac condition; units not provided. Right panel: gyroscope yaw velocity readings (in rad/s) for four different patterns, registered at steps of 0.01 s (color figure online)

3.2 Performance evaluation

The overall performance of these methods has been evaluated according to five different measures that fall into four categories:

  • Accuracy: We measure how similar the clusters are to the true groups by means of the Adjusted Rand Index (ARI) (Hubert and Arabie 1985) and the clustering correctness, which is computed as the percentage of label agreement (i.e., correctly assigned elements) under the label permutation that yields the maximum set similarity; a sketch of both measures is given after this list.

  • Dispersion: The obvious choice to determine how compact the clusters \(G_1,\dots ,G_K\) are is the distortion \(\sum _{j=1}^{K}\sum _{{\textbf{x}}\in G_j} {{\mathbb {d}}}^2({\textbf{x}},{\textbf{c}}_j)\), where \({\textbf{c}}_j\) is the gravity center of cluster \(G_j\). It is evaluated by identifying each cluster from the partitioning labels and calculating the corresponding centroid in the original (multivariate) data space.

  • Convergence: We assess the convergence speed with the number of iterations required by the k-Means algorithm to converge after being initialized.

  • Computational cost: Finally, we consider the execution time, in seconds, of each algorithm from start to finish. Calculations are carried out on an Intel Core i7-6700HQ CPU at 2.60 GHz with 8 GB RAM.
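The two accuracy measures can be computed, for instance, as follows (a sketch assuming integer labels 1,..., K; adjustedRandIndex is provided by the mclust package and permutations by gtools):

library(mclust)   # adjustedRandIndex
library(gtools)   # permutations

# ARI between the true groups (truth) and a k-Means partition (labels)
ari <- adjustedRandIndex(truth, labels)

# correctness: percentage of label agreement under the best label permutation
correctness <- function(truth, labels, K) {
  perms <- permutations(K, K)
  100 * max(apply(perms, 1, function(p) mean(p[labels] == truth)))
}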

The performance of the methods we consider is assessed in terms of the median \({\tilde{x}}\), the mean \({\bar{x}}\) and the standard deviation s for all five measures.

4 Results

4.1 Simulated data

We used the four models to evaluate the performance of FABRIk and FDEBRIk in different situations. For Model 1, the FABRIk and \(\hbox {FDEBRIk}_0\) methods, followed by BRIk and \(\hbox {FDEBRIk}_1\), outperform the alternatives with respect to all the evaluation measures except for the execution time, as shown in Table 2, where statistics for the distortion have been rounded to four significant figures. In particular, it is clear that FDEBRIk has a much larger computational burden, especially when using the similarity \(\rho _1\).

Table 2 Summary statistics for Model 1. The median, mean and variance of 1000 independent datasets for the five performance evaluation measures are provided. Best and second-best medians and means are in boldface and italics, respectively

All the techniques based on clustering the vectors of coefficients are drastically worse than the other ones. The same situation is observed for all the scenarios we have considered (different models and levels of noise and presence or absence of missing data) and thus we do not include them in the subsequent tables for compactness.

Notably, in this model the variability of the first four measures is remarkably smaller for FABRIk and \(\hbox {FDEBRIk}_0\). Here, the synthetic groups 2 and 3 are easily confounded, and inappropriate initial seeds lead k-Means to merge these two groups into a single cluster and, consequently, to split one of the other groups into two clusters. This situation considerably reduces the ARI of the corresponding algorithms, whose distributions become bimodal. As an example, we considered a single dataset following Model 1 and compared the output of FABRIk and FDEBRIk, with \(B=25\), against 25 runs of k-Means with random initialization. Our methods correctly allocate all the elements (\(ARI= 1\)), whereas none of the 25 runs of standard k-Means retrieves the correct grouping (average \(ARI= 0.9177\)), with clusters 2 and 3 confused in four of the runs.

Fig. 4 Violin plots, in red, for the distribution of the ARI (top) and the distortion (bottom) in Model 1, with \(\sigma =1\). The corresponding wider boxplots are superimposed in gray. BRIk, FABRIk and FDEBRIk have a consistent behaviour, whereas the other methods have more spread or bimodal distributions, with heavy lower and upper tails, respectively (color figure online)

Figure 4, upper panel, depicts violin plots of the ARI distributions; those corresponding to the correctness (not shown) display a similar pattern. The effect of wrong allocations is also reflected in the distribution of the distortion: all the methods except those based on bootstrap and MBD have bimodal densities or heavy upper tails, as shown in Fig. 4, bottom panel. This behavior is observed in a significant number of runs of the methods, but FABRIk and \(\hbox {FDEBRIk}_0\), followed by BRIk and \(\hbox {FDEBRIk}_1\), find the right partitioning more often.

Although a median test does not reject the equality of medians for the accuracy measures (correctness and ARI), we conducted pairwise t-tests for equality of means. As expected from their asymmetric distributions, we obtained p-values lower than \(10^{-43}\) for the comparison of FABRIk and FDEBRIk with the other methods, except for BRIk, with p-values of the order of \(10^{-3}\) and \(10^{-6}\). In addition, when \(\hbox {FDEBRIk}_1\) is compared to FABRIk and \(\hbox {FDEBRIk}_0\), the p-values are also of the order of \(10^{-6}\), but the test clearly fails to reject the null hypothesis for FABRIk versus \(\hbox {FDEBRIk}_0\). As a consequence, the slight improvement in the correctness achieved by \(\hbox {FDEBRIk}_0\) over FABRIk in this particular model does not compensate for the increase in the computational cost.

Table 3 Summary statistics for Model 1 with 25% missing values. The median, mean and variance of 1000 independent datasets for the five performance evaluation measures are provided. Best and second-best medians and means are in boldface and italics, respectively

With respect to the missing data case (sparse data), we report in Table 3 the results for a proportion \(p=25\%\) of missing values; similar relative outputs are observed in the other cases. FDEBRIk and FABRIk are again the best methods in terms of correctness and ARI, whereas FABRIk and BRIk need fewer iterations to reach convergence. With respect to the distortion, FABRIk is only slightly surpassed by FDEBRIk and BRIk. Regarding the execution time, KM, BRIk and KMPP required longer times than before due to the interpolation step, whilst no additional computation is needed for the functional approaches; hence the methods based on FDA are, globally, a more suitable option (with the same caveat as before regarding FDEBRIk’s execution time).

In contrast to the previous case, in Models 2–4 FABRIk has in general a distortion slightly higher than that of KM, BRIk and KMPP, but smaller than that of the other initialization methods with the functional approximation, as shown in Tables 4 and 5 and Tables S1–S4 in the Supplementary material.

Table 4 Summary statistics for Model 2. The median, mean and variance of 1000 independent datasets for the five performance evaluation measures are provided. Best and second-best medians and means are in boldface and italics, respectively
Table 5 Summary statistics for Model 2 with 25% missing values. The median, mean and variance of 1000 independent datasets for the five performance evaluation measures are provided. Best and second-best medians and means are in boldface and italics, respectively

This difference in the distortion of the multivariate and functional methods can be explained by the two different data spaces we are considering and by our process of computing the distortion. In particular, from the point of view of the functional data space, FABRIk is simply the initialization of k-Means with appropriate seeds. However, from the point of view of the initial multivariate data space, the clustering obtained with FABRIk does not necessarily (and not even frequently) correspond to a local minimum of the distortion in this space, therefore yielding higher values of the objective function. A similar explanation applies to the other functional techniques. In exchange, FABRIk, \(\hbox {FDEBRIk}_0\) and \(\hbox {FDEBRIk}_1\) consistently provide remarkably higher accuracy measures, with FABRIk and \(\hbox {FDEBRIk}_0\) converging faster in terms of the number of iterations. They also have longer execution times, as expected. However, FABRIk’s ranking improves again if missing data are considered.

To assess the relevance of this increment in the computational cost, we compared these results with the strategy of initializing k-Means several times and choosing the set of seeds providing the lowest distortion. For instance, in Model 2, KM with 200 random starts increases the average ARI from 0.4200 to 0.4549, which is far from the 0.6396, 0.6530 and 0.6315 obtained with our methods, and requires 0.1263 seconds on average, which is not far from FABRIk’s execution time. This highlights the suitability, above all, of FABRIk for the common situation in which optimality in terms of an external quality measure (i.e., high values of correctness or ARI) is not necessarily in agreement with optimality in terms of an internal quality measure (low values of distortion). Nevertheless, if computational cost is not a critical issue, FDEBRIk surpasses FABRIk in many situations.

In these models, all the pairwise tests for equality of means and medians of the correctness, ARI and distortion, comparing FABRIk and FDEBRIk with the other methods and with each other, yielded p-values of order smaller than \(10^{-9}\). In summary, we can report a significant improvement over the alternatives.

4.2 Real data

We next applied all the initialization methods to the real data.

For the ECG dataset, the DFs were set to 15 according to the elbow rule and we chose an oversampling factor of 1 for speed, as using a denser time grid produces a similar output. Table 6 summarizes our results.

Table 6 Summary statistics for the ECG dataset. The median, mean and variance of 1000 runs of each initialization method for the five performance evaluation measures are provided. Best and second-best medians and means are in boldface and italics, respectively

Note that the quality of the clustering recovery in terms of ARI is low. Except for \(\hbox {FDEBRIk}_1\), whose performance values are clearly the worst, we do not find prominent differences across methods. In particular, all of them require a single iteration to converge and all except \(\hbox {FDEBRIk}_1\) have the same median for correctness, ARI and distortion. Yet, \(\hbox {FDEBRIk}_0\) leads to the best average correctness, ARI and distortion, followed by BRIk and FABRIk, and has the smallest standard deviations. This corresponds to a single-mode distribution: a scenario similar to that depicted in Fig. 4 for simulated data. As usual, FABRIk and FDEBRIk are largely outperformed in terms of execution time by those methods that do not rely on the B-spline approximation.

For the Gyroscope dataset, the DFs were set to 15, once more according to the elbow rule, and the oversampling factor was set to 1. Again, a similar performance is observed for higher values of this parameter, which do not influence the final results.

Table 7 Summary statistics for the Gyroscope dataset. The median, mean and variance of 1000 runs of each initialization method for the five performance evaluation measures are provided. Best and second-best medians and means are in boldface and italics, respectively

In contrast to the previous case, the values of correctness and ARI are much higher, as shown in Table 7. However, the FABRIk method finds more accurate groups, obtaining ARI values larger than 0.9 in roughly 60% of the runs, whereas this percentage is around 35% and 20% for KM and FKMPP, respectively. Also, for the distortion, it achieves low values (after FDEBRIk and BRIk) and shows, after \(\hbox {FDEBRIk}_1\), the least variability, followed by BRIk and \(\hbox {FDEBRIk}_0\). In fact, for BRIk, FABRIk and \(\hbox {FDEBRIk}_0\), the accuracy and dispersion measures have bimodal distributions, while for \(\hbox {FDEBRIk}_1\) these values are distributed around a single mode, which explains its lowest variance; however, this mode is clearly not the best local minimum. In contrast, the distributions corresponding to the other algorithms present three or more peaks, which highlights the more consistent performance of the techniques based on bootstrap and MBD. On the other hand, the computational costs of BRIk, FABRIk and FDEBRIk are the largest ones. With respect to the number of iterations, all methods have similar values, with BRIk and FABRIk slightly better on average.

4.3 Implementation

We have implemented an R package, briKmeans, to provide the basic tools to run BRIk, FABRIk and FDEBRIk, with \(\rho _0\) and \(\rho _1\). Users can tune the different parameters of the methods through the functions’ arguments and retrieve the corresponding initial seeds and the resulting k-Means output, which includes the partitioning of the dataset. For instance, the simple call

> fabrik(exampleM1, k=4, degFr=10)

will run FABRIk with the DFs set to 10 and the rest of the parameters set to their defaults, and return k=4 clusters for the dataset exampleM1. The clusters, including the final centroids, can be visualized individually in parallel coordinates (Inselberg 1985) by means of the plotKmeansClustering function. In Fig. 5 we illustrate this representation for a dataset following Model 1, with \(\sigma =1\). Note that users can also turn to the elbowRule function to plot the distortion associated with FABRIk or FDEBRIk against the DFs in order to optimize this parameter.

Fig. 5 Representation of the four clusters retrieved by FABRIk for a dataset following Model 1 with \(\sigma = 1\), along with the final centroid (solid line). Our method correctly allocates all the elements (\(ARI=1\)) (color figure online)

5 Conclusion

In this work we have developed FABRIk and FDEBRIk, two initialization methods for k-Means that extend the BRIk algorithm to the functional data case at two different levels. Both take d-dimensional longitudinal observations from continuous functions as an input dataset and return the D-dimensional initial seeds for k-Means after a functional approximation process via B-splines and a re-sampling stage. The difference between them is the step at which the re-sampling is carried out and, thus, the extent of use of the functional nature of the initial data.

Similarly to their precursor BRIk, our methods are flexible in several ways. The number of bootstrap replicates B can be tuned by the user; in general, low values of B are enough to produce a relevant improvement over the alternatives. Additionally, the oversampling factor m and the DFs can be chosen to best adapt to the data. An oversampling factor of 1 has proven to yield similar results to higher values of this parameter, while remaining less computationally expensive. The DFs are selected according to the elbow rule. Nevertheless, our experiments show that a wide range of values for these parameters are also suitable. The clustering algorithm used to partition the cluster centers is an extra feature that can be determined by the user. In particular, for the FDEBRIk method we have chosen the kma algorithm, which offers an additional degree of freedom through the selection of the similarity measure between functions. We have considered the measures \(\rho _0\) and \(\rho _1\), which account for the cosines of the angles between the functions or their derivatives, respectively. Our study demonstrates that the performance of the method is consistently better when the similarity is based on the cosine of the angle between the functions, as opposed to the cosine of the angle between their derivatives. Finally, one could potentially use any feasible data depth definition, but our recommendation is to choose MBD for its fast computation and because it has proven to score high in the accuracy measures.

We have compared our functional initialization strategies to their multivariate version and to two more techniques, with and without the FDA approach. Furthermore, we have assessed the behavior of the methods based on clustering the B-spline coefficients obtained for each data point, which have proven to be poor competitors.

Generally speaking, FABRIk works well with both synthetic and real data, though FDEBRIk with \(\rho _0\) commonly measures up to or surpasses it with respect to the correctness, ARI and distortion. Only for the Gyroscope dataset does FABRIk attain substantially better values of the distortion than FDEBRIk and each of the competitors. Nevertheless, FABRIk has a computational cost that is two orders of magnitude smaller than that of \(\hbox {FDEBRIk}_0\), and it competes with the other methods in the case of sparse data, which is a valuable aspect of the method. Thus, the major reason for choosing one over the other should be the computational cost. On the other hand, FDEBRIk with \(\rho _1\) ranks as the best option in only one of the datasets that we have considered; in addition, its computational cost is markedly larger, making it the least suitable of the methods that we have proposed. In summary, FABRIk and \(\hbox {FDEBRIk}_0\) provide an advantageous solution that offers higher quality (in terms of clustering recovery, i.e., ARI and correctness) than other techniques at the cost of a longer computational time and, commonly, a slightly larger distortion. However, these aspects can be alleviated, respectively, by using a lower or higher value of B.

Additionally, we have shown that in some situations, and particularly with the real data we have considered, FABRIk and \(\hbox {FDEBRIk}_0\) emerge as more reliable ways of initializing k-Means, consistently providing better accuracy results with lower variances. This reinforces the practical applicability of the methods for data analysis. Moreover, as with any technique based on a functional approximation of the observations, they allow denoising and imputation of missing data.