k-Means Clustering for Persistent Homology

Persistent homology is a methodology central to topological data analysis that extracts and summarizes the topological features within a dataset as a persistence diagram; it has recently gained much popularity from its myriad successful applications to many domains. However, its algebraic construction induces a metric space of persistence diagrams with a highly complex geometry. In this paper, we prove convergence of the k-means clustering algorithm on persistence diagram space and establish theoretical properties of the solution to the optimization problem in the Karush–Kuhn–Tucker framework. Additionally, we perform numerical experiments on various representations of persistent homology, including embeddings of persistence diagrams as well as diagrams themselves and their generalizations as persistence measures; we find that clustering performed directly on persistence diagrams and measures outperforms clustering on their vectorized representations.


Introduction
Topological data analysis (TDA) is a recently-emerged field of data science that lies at the intersection of pure mathematics and statistics. The core approach of the field is to adapt classical algebraic topology to the computational setting for data analysis. The roots of this approach first appeared in the early 1990s in work by Frosini (1992) for image analysis and pattern recognition in computer vision (Verri et al., 1993); soon afterwards, these earlier ideas by Frosini & Landi (2001) were reinterpreted, developed, and extended concurrently by Edelsbrunner et al. (2002) and Zomorodian & Carlsson (2005). These developments laid the foundations for a comprehensive set of computational topological tools which have since enjoyed great success in applications to a wide range of data settings in recent decades. Some examples include sensor network coverage problems (de Silva & Ghrist, 2007); biological and biomedical applications, including understanding the structure of protein data (Gameiro et al., 2014; Kovacev-Nikolic et al., 2016; Xia et al., 2016) and the 3D structure of DNA (Emmett et al., 2015), and predicting disease outcome in cancer (Crawford et al., 2020); robotics, including goal-directed path planning (Bhattacharya et al., 2015; Pokorny et al., 2016) and determining the cost of bipedal walking (Vasudevan et al., 2013); materials science, to understand the structure of amorphous solids (Hiraoka et al., 2016); characterizing chemical reactions and chemical structure (Murayama et al., 2023); financial market prediction (Ismail et al., 2020); quantifying the structure of music (Bergomi & Baratè, 2020); among many others (e.g., Otter et al., 2017).
Persistent homology is the most popular methodology from TDA due to its intuitive construction and interpretability in applications; it extracts the topological features of data and computes a robust statistical representation of the data under study. A particular advantage of persistent homology is that it is applicable to very general data structures, such as point clouds, functions, and networks. Notice that among these examples, the data may be endowed with only a notion of relatedness (point clouds) or may not even be a metric space at all (networks).
A major limitation of persistent homology, however, is that its output takes a somewhat unusual form of a collection of intervals; more specifically, it is a multiset of half-open intervals together with the set of zero-length intervals with infinite multiplicity, known as a persistence diagram. Statistical methods for interval-valued data (e.g., Billard & Diday, 2000) cannot be used to analyze persistence diagrams because their algebraic topological construction gives rise to a complex geometry, which hinders their direct applicability to classical statistical and machine learning methods. The development of techniques to use persistent homology in statistics and machine learning has been an active area of research in the TDA community. Largely, the techniques entail either modifying the structure of persistence diagrams to take the form of vectors (referred to as embedding or vectorization) so that existing statistical and machine learning methodology may be applied directly (e.g., Bubenik, 2015a; Adams et al., 2017a); or developing new statistical and machine learning methods to accommodate the algebraic structure of persistence diagrams (e.g., Reininghaus et al., 2015). The former approach is more widely used in applications, while the latter approach tends to be more technically challenging and requires taking into account the intricate geometry of persistence diagrams and the space of all persistence diagrams. Our work studies both approaches experimentally and contributes a new theoretical result in the latter setting.
In machine learning, a fundamental task is clustering, which groups observations of a similar nature to determine structure within the data. The k-means clustering algorithm is one of the most well-known clustering methods. For structured data in linear spaces, the k-means algorithm is widely used, with well-established theory on stability and convergence. However, for unstructured data in nonlinear spaces, such as persistence diagrams, it is a nontrivial task to apply the k-means framework. The key challenges are twofold: (i) how to define the mean of data in a nonlinear space, and (ii) how to compute the distance between the data points. These challenges are especially significant in the task of obtaining convergence guarantees for the algorithm. For persistence diagrams specifically, the cost function is neither differentiable nor subdifferentiable in the traditional sense because of their complicated metric geometry.
In this paper, we study the k-means algorithm on the space of persistence diagrams both theoretically and experimentally. Theoretically, we establish convergence of the k-means algorithm. In particular, we provide a characterization of the solution to the optimization problem in the Karush-Kuhn-Tucker (KKT) framework and show that the solution is a partial optimal point, a KKT point, as well as a local minimum. Experimentally, we study various representations of persistent homology in the k-means algorithm: on persistence diagram embeddings, on persistence diagrams themselves, as well as on their generalizations as persistence measures. To implement k-means clustering on persistence diagram and persistence measure space, we reinterpret the algorithm taking into account the appropriate metrics on these spaces as well as constructions of their respective means. Broadly, our work contributes to the growing body of increasingly important and relevant non-Euclidean statistics (e.g., Blanchard & Jaffe, 2022) that deals with, for example, spaces of matrices under nonlinear constraints (Dryden et al., 2009); quotients of Euclidean spaces defined by group actions (Le & Kume, 2000); and Riemannian manifolds (Miolane et al., 2020).
The remainder of this paper is organized as follows. We end this introductory section with an overview of existing literature relevant to our study. Section 2 provides the background on both persistent homology and k-means clustering. We give details on the various components of persistent homology needed to implement the k-means clustering algorithm. In Section 3, we present our main result on the convergence of the k-means algorithm in persistence diagram space in the KKT framework. Section 4 provides results of numerical experiments comparing the performance of k-means clustering on embedded persistence diagrams versus persistence diagrams and persistence measures. We close with a discussion of our contributions and ideas for future research in Section 5.
Related Work A previous study of k-means clustering in persistence diagram space compares the performance to other classification algorithms; it also establishes local minimality of the solution to the optimization problem in the convergence of the algorithm (Marchese et al., 2017). Our convergence study extends this previous study to the more general KKT framework, which provides a more detailed convergence analysis, and differs experimentally by studying the performance of k-means clustering specifically in the context of persistent homology on simulated data as well as a benchmark dataset.
The relationship between clustering and TDA more generally has also recently been surveyed, explaining how topological concepts may apply to the problem of clustering (Panagopoulos, 2022). Other work studies topological data analytic clustering for time series data as well as space-time data (Islambekov & Gel, 2019; Majumdar & Laha, 2020).

Background and Preliminaries
In this section, we provide details on persistent homology as the context in which our study takes place; we also outline the k-means clustering algorithm. We then provide more technical details on the specifics of persistent homology that are needed to implement the k-means clustering algorithm, as well as of Karush-Kuhn-Tucker optimization.

Persistent Homology
Algebraic topology is a field in pure mathematics that uses abstract algebra to study general topological spaces. A primary goal of the field is to characterize topological spaces and classify them according to properties that do not change when the space is subjected to smooth deformations, such as stretching, compressing, rotating, and reflecting; these properties are referred to as invariants. Invariants and their associated theory are relevant to complex and general real-world data settings in the sense that they provide a way to quantify and rigorously describe the "shape" and "size" of data; they can then serve as summaries for complex, non-Euclidean data structures and settings that are becoming increasingly important and challenging to study. A particular advantage of TDA, given its algebraic topological roots, is its flexibility and applicability both when there is a clear formalism to the data (e.g., rigorous metric space structure) and when there is a less obvious structure (e.g., networks and text).
Persistent homology adapts the theory of homology from classical algebraic topology to data settings in a dynamic manner. Homology algebraically identifies and counts invariants as a way to distinguish topological spaces from one another; various different homology theories exist depending on the underlying topological space of study. When considering the homology of a dataset as a finite topological space X, it is common to use simplicial homology over a field. A simplicial complex is a reduction of a general topological space to a discrete version, as a union of simpler components, glued together in a combinatorial fashion. An advantage of considering simplicial complexes and studying simplicial homology is that there are efficient algorithms to compute homology. More formally, these basic components are k-simplices, where each k-simplex is the convex hull of k + 1 affinely independent points x_0, x_1, …, x_k, denoted by [x_0, x_1, …, x_k]. A set constructed combinatorially from k-simplices is a simplicial complex, K.
Persistent homology is a dynamic version of homology, built on studying homology over a filtration. A filtration is a nested sequence of topological spaces, X_0 ⊆ X_1 ⊆ ⋯ ⊆ X_n = X; there exist different kinds of filtrations, which are defined by their nesting rule. In this paper, we study Vietoris-Rips (VR) filtrations for finite metric spaces (X, d_X), which are commonly used in applications and real data settings in TDA. Let ϵ_1 ≤ ϵ_2 ≤ ⋯ ≤ ϵ_n be an increasing sequence of parameters where each ϵ_i ∈ R_{≥0}. The Vietoris-Rips complex of X at scale ϵ_i, denoted VR(X, ϵ_i), is constructed by adding a node for each x_j ∈ X and a k-simplex for each set {x_{j_1}, x_{j_2}, …, x_{j_{k+1}}} whose diameter (the largest pairwise distance d_X(x_{j_a}, x_{j_b}) among its vertices) is smaller than ϵ_i. Since VR(X, ϵ_i) ⊆ VR(X, ϵ_{i+1}), we obtain a Vietoris-Rips filtration, which is a sequence of inclusions of simplicial complexes.
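For concreteness, the following is a minimal sketch (our own illustrative code, not from the paper) of building a Vietoris-Rips complex at a single scale ϵ using the diameter condition above; the function and variable names are hypothetical.

```python
# Illustrative sketch: the Vietoris-Rips complex VR(X, eps) of a small
# point cloud, up to dimension 2. A k-simplex is included whenever all
# pairwise distances among its k+1 vertices are below the scale eps.
from itertools import combinations

import numpy as np


def vietoris_rips(points, eps, max_dim=2):
    """Return the simplices of VR(points, eps) up to dimension max_dim."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    simplices = [(i,) for i in range(n)]  # every point is a 0-simplex
    for k in range(1, max_dim + 1):
        for verts in combinations(range(n), k + 1):
            # diameter condition: all pairwise distances below eps
            if all(dist[i, j] < eps for i, j in combinations(verts, 2)):
                simplices.append(verts)
    return simplices


# Four points on a unit square: at eps = 1.1 the four boundary edges
# (length 1) appear but the diagonals (length ~1.414) do not, so a
# 1-dimensional cycle is present at this scale.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
complex_ = vietoris_rips(square, eps=1.1)
edges = [s for s in complex_ if len(s) == 2]
print(edges)  # [(0, 1), (0, 3), (1, 2), (2, 3)]
```

Increasing eps past the diagonal length would add the diagonal edges and the triangles they span, filling the cycle in; tracking such events across scales is exactly what the filtration records.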
The sequence of inclusions given by the filtration induces maps in homology for any fixed dimension •. Let H_•(X, ϵ_i) be the homology group of VR(X, ϵ_i) with coefficients in a field. This gives rise to the following sequence of vector spaces:

H_•(X, ϵ_1) → H_•(X, ϵ_2) → ⋯ → H_•(X, ϵ_n).

The collection of vector spaces H_•(X, ϵ_i), together with their maps (vector space homomorphisms) H_•(X, ϵ_i) → H_•(X, ϵ_j) for i ≤ j, is called a persistence module.
The algebraic interpretation of a persistence module and how it corresponds to topological (more specifically, homological) information of data is formally given as follows. For each finite dimensional H_•(X, ϵ_i), the persistence module can be decomposed into rank one summands corresponding to birth and death times of homology classes (Chazal et al., 2016). The collection of these birth-death intervals [ϵ_i, ϵ_j) represents the persistent homology of the Vietoris-Rips filtration of X. Taking each interval as an ordered pair of birth-death coordinates and plotting each in the plane R² yields a persistence diagram; see Figure 3a. This algebraic structure and its construction generate the complex geometry of the space of all persistence diagrams, which must be taken into account when performing statistical analyses on persistence diagrams representing the topological information of data.
More intuitively, homology studies the holes of a topological space as a kind of invariant, since the occurrence of holes and how many there are do not change under continuous deformations of the space. Dimension 0 homology corresponds to connected components; dimension 1 homology corresponds to loops; and dimension 2 homology corresponds to voids or bubbles. These ideas can be generalized to higher dimensions and the holes are topological features of the space (or dataset). Persistent homology tracks the appearance of these features with an interval, where the left endpoint of the interval signifies the first appearance of a feature within the filtration and the right endpoint signifies its disappearance. The length of the interval can therefore be thought of as the lifetime of the topological feature. The collection of the intervals is known as a barcode and summarizes the topological information of the dataset (according to dimension). A barcode can be equivalently represented as a persistence diagram, which is a scatterplot of the birth and death points of each interval. See Figure 2 for a visual illustration. Intuitively, points that are far away from the diagonal are the long intervals corresponding to topological features that have a long lifetime in the filtration. These points are said to have high persistence and are generally considered to be topological signal, while points closer to the diagonal are considered to be topological noise.
Persistence Measures A more general representation of persistence diagrams that aims to facilitate statistical analyses and computations is the persistence measure. The construction of persistence measures arises from the equivalent representation of persistence diagrams as measures on Ω of the form Σ_{x ∈ D ∩ Ω} n_x δ_x, where x ranges over the off-diagonal points in a persistence diagram D, n_x is the multiplicity of x, and δ_x is the Dirac measure at x (Divol & Chazal, 2019). Considering then all Radon measures supported on Ω gives the following definition.
Definition 2 (Divol & Lacombe (2021b)). Let µ be a Radon measure supported on Ω. For 1 ≤ p < ∞, the p-total persistence in this measure setting is defined as

Pers_p(µ) = ∫_Ω ‖x − x⊤‖^p dµ(x),

where x⊤ is the projection of x to the diagonal ∂Ω. Any Radon measure with finite p-total persistence is called a persistence measure.
Persistence measures resemble heat map versions of persistence diagrams on the upper half-plane where individual persistence points are smoothed, so the intuition on their birth-death lifetimes as described previously and illustrated in Figure 2 is lost. However, they gain the advantage of a linear structure where statistical quantities are well-defined and much more straightforward to compute (Divol & Lacombe, 2021a).

Figure 2 (adapted from Ghrist, 2008): The panel on the left shows an annulus, which is the underlying space of interest; the topology of an annulus is characterized by 1 connected component (corresponding to H_0), 1 loop (corresponding to H_1), and 0 voids (corresponding to H_2). 17 points are sampled from the annulus and a simplicial complex (in order to be able to use simplicial homology) is constructed from these samples in the following way: the sampled points are taken to be centers of balls with radius ϵ, which grows from 0 to ∞. At each value of ϵ, whenever two balls intersect, they are connected by an edge; when three balls mutually intersect, they are connected by a triangle (face); this construction continues in higher dimensions. The simplicial complex resulting from this overlapping of balls as the radius ϵ grows is shown to the right of each snapshot at different values of ϵ. As ϵ grows, connected components merge, and cycles form and fill in. These events are tracked by a bar for each dimension of homology, shown on the right; the length of each bar corresponds to the lifetime of a topological feature as ϵ grows. Notice that as ϵ reaches infinity, there remains a single connected component, tracked by the longest bar for H_0. For H_1, there are bars of varying length, including several longer bars, which suggests irregular sampling of the annulus; the longest bar corresponds to the single loop of the annulus. Notice also that there is one short H_2 bar, which is likely to be a spurious topological feature corresponding to noisy sampling.
By generalizing persistence diagrams to persistence measures, a larger space of persistence measures is obtained, which contains the space of persistence diagrams as a proper subspace (Divol & Lacombe, 2021b). In other words, when it comes to discrete measures on the half plane, a persistence measure is nothing but a persistence diagram with a different representation. Thus, any distribution on the space of persistence diagrams is naturally defined on the space of persistence measures. The mean of distributions is also well defined for persistence measures if it is well defined for persistence diagrams (see Section 2.3). Note that on the space of persistence measures, the mean of a distribution is a persistence measure, which can be an abstract Radon measure that is not representable as a persistence diagram. In fact, in general, the empirical mean on the space of persistence measures does not lie in the subspace of persistence diagrams. However, it enjoys a computational simplicity that means defined directly on the space of persistence diagrams do not.
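As an illustration of this computational simplicity, the sketch below (our own, not from the paper) represents each discrete persistence measure as a dictionary from off-diagonal points to masses; the empirical mean is then a plain average of masses. Note how fractional masses appear, so the mean is generally not a persistence diagram.

```python
# Hedged sketch: the empirical mean of N discrete persistence measures,
# each represented as a dict mapping an off-diagonal point to its mass.
# Unlike the Fréchet mean of diagrams, this is a closed-form average,
# but the result can carry fractional masses and is then a genuine Radon
# measure rather than a persistence diagram.
from collections import defaultdict


def mean_persistence_measure(measures):
    mean = defaultdict(float)
    for mu in measures:
        for point, mass in mu.items():
            mean[point] += mass / len(measures)
    return dict(mean)


mu1 = {(0.0, 1.0): 1.0}
mu2 = {(0.0, 1.0): 1.0, (0.5, 2.0): 1.0}
print(mean_persistence_measure([mu1, mu2]))
# {(0.0, 1.0): 1.0, (0.5, 2.0): 0.5} -- the mass-0.5 atom is not a diagram point
```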
Metrics for Persistent Homology There exist several metrics for persistence diagrams; we focus on the following for its computational feasibility and the known geometric properties of the set of all persistence diagrams (persistence diagram space) induced by this metric.

Definition 3. For p > 0, the p-Wasserstein distance between two persistence diagrams D_1 and D_2 is defined as

W_{p,q}(D_1, D_2) = ( inf_γ Σ_{x ∈ D_1} ‖x − γ(x)‖_q^p )^{1/p},

where ‖·‖_q denotes the q-norm, 1 ≤ q ≤ ∞, and γ ranges over all bijections between D_1 and D_2.
For a persistence diagram D, its p-total persistence is W_{p,q}(D, D_∅), where D_∅ denotes the empty diagram consisting of only the diagonal. Notice here that the intuition is similar to that given in Definition 2, but technically different from the Radon measure setting. We refer to the space of persistence diagrams D_p as the set of all persistence diagrams with finite p-total persistence. Equipped with the p-Wasserstein distance, (D_p, W_{p,q}) is a well-defined metric space.
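For very small diagrams, the distance of Definition 3 can be evaluated by brute force over matchings, padding each diagram with the diagonal projections of the other's points; the sketch below (our own illustrative code, not the paper's) is exponential in the number of points, whereas practical implementations solve an optimal assignment problem instead.

```python
# Brute-force p-Wasserstein distance between tiny persistence diagrams:
# each point may be matched to a point of the other diagram or to its
# projection onto the diagonal. Feasible only for a handful of points.
from itertools import permutations


def wasserstein(D1, D2, p=2, q=2):
    proj = lambda x: ((x[0] + x[1]) / 2.0,) * 2  # projection onto the diagonal
    A = list(D1) + [proj(x) for x in D2]  # pad with projections of the other
    B = list(D2) + [proj(x) for x in D1]  # diagram so that |A| == |B|
    on_diag = lambda x: abs(x[1] - x[0]) < 1e-12

    def cost(a, b):
        if on_diag(a) and on_diag(b):
            return 0.0  # diagonal-to-diagonal transport is free
        if q == float("inf"):
            return max(abs(a[0] - b[0]), abs(a[1] - b[1]))
        return (abs(a[0] - b[0]) ** q + abs(a[1] - b[1]) ** q) ** (1.0 / q)

    best = min(
        sum(cost(a, B[j]) ** p for a, j in zip(A, perm))
        for perm in permutations(range(len(B)))
    )
    return best ** (1.0 / p)


# A single feature (0, 1) matched against the empty diagram must be
# transported to the diagonal, at Euclidean cost sqrt(1/2).
print(wasserstein([(0.0, 1.0)], []))  # 0.7071...
```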
The space of persistence measures is equipped with the following metric.
Definition 4. The optimal partial transport distance between persistence measures µ and ν is

OT_{p,q}(µ, ν) = ( inf_{π ∈ Π(µ,ν)} ∫_{Ω̄×Ω̄} ‖x − y‖_q^p dπ(x, y) )^{1/p}.

Here, Π(µ,ν) denotes the set of all Radon measures π on Ω̄ × Ω̄, with Ω̄ = Ω ∪ ∂Ω, such that for all Borel sets A, B ⊆ Ω, π(A × Ω̄) = µ(A) and π(Ω̄ × B) = ν(B). The space of persistence measures M_p, together with OT_{p,q}, is an extension and generalization of (D_p, W_{p,q}). In particular, (D_p, W_{p,q}) is a proper subset of (M_p, OT_{p,q}) and OT_{p,q} coincides with W_{p,q} on the subset D_p (Divol & Lacombe, 2021a).

k-Means Clustering
The k-means clustering algorithm is one of the most widely known unsupervised learning algorithms. It groups data points into k distinct clusters based on their similarities and without the need to provide any labels for the data (Hartigan & Wong, 1979). The process begins with a random sample of k points from the set of data points. Each data point is assigned to one of the k points, called centroids, based on minimum distance. Data points assigned to the same centroid form a cluster. The cluster centroid is updated to be the mean of the cluster. Cluster assignment and centroid update steps are repeated until convergence.
A significant limitation of the k-means algorithm is that the final clustering output is highly dependent on the initial choice of k centroids. If initial centroids are close in terms of distance, this may result in suboptimal partitioning of the data. Thus, when using the k-means algorithm, it is preferable to begin with centroids that are as far from each other as possible, such that they already lie in different clusters, which makes convergence to the global optimum rather than a local solution more likely. k-means++ (Arthur & Vassilvitskii, 2006) is an initialization algorithm that addresses this issue; we adopt it in our implementation of the k-means algorithm, in place of the original random initialization step.
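A sketch of the k-means++ seeding with a pluggable metric (our own illustrative code; in our setting `dist` would be the Wasserstein or optimal partial transport distance):

```python
# k-means++ seeding (Arthur & Vassilvitskii, 2006), metric-agnostic:
# after a uniformly random first centroid, each subsequent centroid is
# drawn with probability proportional to the squared distance to the
# nearest centroid chosen so far.
import random


def kmeans_pp_init(data, k, dist, rng=random.Random(0)):
    centroids = [rng.choice(data)]
    while len(centroids) < k:
        # squared distance of each datum to its nearest existing centroid
        d2 = [min(dist(x, c) ** 2 for c in centroids) for x in data]
        r, acc = rng.uniform(0, sum(d2)), 0.0
        for x, w in zip(data, d2):
            acc += w
            if acc >= r:  # sample proportionally to d2
                centroids.append(x)
                break
    return centroids


# Two well-separated groups of scalars; the seeding strongly favors
# picking one centroid from each group.
cs = kmeans_pp_init([0.0, 0.1, 10.0, 10.1], 2, lambda a, b: abs(a - b))
print(cs)
```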
Algorithm 1: k-means clustering for persistence diagrams and persistence measures.
1: Initialization: From (D_1, …, D_n), randomly select k points as the initial centroids (m_1, …, m_k).
2: Assignment: Assign each data point D_i to the cluster k*_i = argmin_{1 ≤ j ≤ k} dist(D_i, m_j), where dist(·, ·) stands for the p-Wasserstein distance for persistence diagrams and the p-optimal partial transport distance for persistence measures.
3: Update: Recompute each centroid m_j as the Fréchet mean or mean persistence measure of the data points belonging to its cluster.
4: Repeat the assignment and update steps until the cluster assignment does not change, i.e., the algorithm converges.
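As a minimal sketch of this loop (our own illustrative code, not the paper's implementation), the distance and mean are taken as pluggable functions, so the same skeleton serves Euclidean vectors, persistence diagrams with the Wasserstein distance and Fréchet mean, or persistence measures with the optimal partial transport distance and empirical mean; the toy run below uses scalars.

```python
# Metric-agnostic Lloyd-style k-means: `dist` and `mean` are placeholders
# for the choices appropriate to the representation being clustered.
def kmeans(data, centroids, dist, mean, max_iter=100):
    assignment = None
    for _ in range(max_iter):
        # assignment step: each datum goes to its nearest centroid
        new = [min(range(len(centroids)), key=lambda i: dist(x, centroids[i]))
               for x in data]
        if new == assignment:  # assignments stable: converged
            break
        assignment = new
        # update step: recompute each centroid as the mean of its cluster
        for i in range(len(centroids)):
            cluster = [x for x, a in zip(data, assignment) if a == i]
            if cluster:
                centroids[i] = mean(cluster)
    return assignment, centroids


# Toy run on scalars with |.| as the metric and the arithmetic mean.
labels, cents = kmeans(
    [0.0, 0.2, 9.8, 10.0], [0.0, 9.8],
    dist=lambda a, b: abs(a - b),
    mean=lambda xs: sum(xs) / len(xs),
)
print(labels, cents)  # [0, 0, 1, 1] [0.1, 9.9]
```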
Finding the true number of clusters k is difficult in theory. Practically, however, there are several heuristic ways to determine a near optimal number of clusters. For example, the elbow method seeks the cutoff point of the loss curve (Thorndike, 1953); the information criterion method constructs likelihood functions for the clustering model (Goutte et al., 2001); the silhouette method tries to optimize the silhouette function of the data with respect to the number of clusters (De Amorim & Hennig, 2015). An experimental comparison of different determination rules can be found in Pham et al. (2005).
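As a toy illustration of the elbow heuristic, with made-up loss values, one can locate the flattening of the loss curve via its largest second-order difference (one of several common operationalizations of the "elbow"):

```python
# Toy elbow heuristic: pick the k where the decrease of the loss curve
# flattens most sharply, i.e., the largest second-order difference.
# The loss values below are invented purely for illustration.
losses = {1: 100.0, 2: 70.0, 3: 20.0, 4: 15.0, 5: 13.0}  # k -> within-cluster loss
ks = sorted(losses)
elbow = max(
    ks[1:-1],  # interior k only, so both neighbors exist
    key=lambda k: (losses[k - 1] - losses[k]) - (losses[k] - losses[k + 1]),
)
print(elbow)  # 3
```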
The original k-means algorithm (Hartigan & Wong, 1979) takes vectors as input and calculates the squared Euclidean distance as the distance metric in the cluster assignment step. In this work, we adapt this algorithm to take representations of persistent homology as input (persistence diagrams, persistence measures, and embedded persistence diagrams), using the Wasserstein distance (Definition 3) and the optimal partial transport distance (Definition 4) as the metrics for persistence diagrams and persistence measures, respectively.

Means for Persistent Homology
The existence of a mean and the ability to compute it is fundamental in the k-means algorithm. This quantity and its computation are nontrivial in persistence diagram space. A previously proposed notion of means for sets of persistence diagrams is based on Fréchet means, generalizations of means to arbitrary metric spaces, where the metric is the p-Wasserstein distance (Turner et al., 2014).
Specifically, given a set of persistence diagrams D = {D_1, …, D_N}, define an empirical probability measure ρ = (1/N) Σ_{i=1}^N δ_{D_i} and the associated Fréchet function

F(D) = ∫ W_2(D, D′)² dρ(D′) = (1/N) Σ_{i=1}^N W_2(D, D_i)².

Since any persistence diagram D that minimizes the Fréchet function is a Fréchet mean of the set D, the Fréchet mean in general need not be unique. This is the case in (D_2, W_{2,2}), which is known to be an Alexandrov space with curvature bounded from below, meaning that geodesics, and therefore Fréchet means, need not even be locally unique (Turner et al., 2014). Nevertheless, they are computable as local minima, using a greedy algorithm that is a variant of the Hungarian algorithm, with caveats: the Fréchet function is not convex on D_2, which means the algorithm often finds only local minima; additionally, there is no initialization rule and arbitrary persistence diagrams are chosen as starting points, meaning that different initializations lead to unstable performance and potentially different local minima. In our work, we focus on the 2-Wasserstein distance and use the notation W_2 for short. For a set of persistence measures {µ_1, …, µ_N}, their mean is the mean persistence measure, which is simply the empirical mean of the set: µ̄ = (1/N) Σ_{i=1}^N µ_i. We outline the k-means algorithm for persistence diagrams and persistence measures in Algorithm 1.

The Karush-Kuhn-Tucker Conditions
The Karush-Kuhn-Tucker (KKT) conditions are first-order necessary conditions for a solution of a nonlinear optimization problem to be optimal.

Definition 5. Consider the following minimization problem:

min f(x) s.t. g_i(x) ≤ 0, i = 1, …, m; h_j(x) = 0, j = 1, …, r,

where f, g_i, and h_j are differentiable functions on R^d. The Lagrange dual function is given by

g(u, v) = min_x ( f(x) + Σ_{i=1}^m u_i g_i(x) + Σ_{j=1}^r v_j h_j(x) ).

The dual problem is defined as

max_{u,v} g(u, v) s.t. u ≥ 0.

The Karush-Kuhn-Tucker (KKT) conditions are given by

∇f(x) + Σ_{i=1}^m u_i ∇g_i(x) + Σ_{j=1}^r v_j ∇h_j(x) = 0 (stationarity);
u_i g_i(x) = 0 for all i (complementary slackness);
g_i(x) ≤ 0 and h_j(x) = 0 for all i, j (primal feasibility);
u_i ≥ 0 for all i (dual feasibility).

Given any feasible x and (u, v) with u ≥ 0, the difference f(x) − g(u, v) is called the duality gap. For convex problems, the KKT conditions are sufficient conditions for solutions to be optimal; they are also necessary conditions if strong duality holds, i.e., if the duality gap is equal to zero (Boyd & Vandenberghe, 2004).
The KKT conditions are applicable if all functions are differentiable over Euclidean spaces. The setup can be generalized for non-differentiable functions over Euclidean spaces but with well-defined subdifferentials (Boyd & Vandenberghe, 2004). In the case of persistent homology, the objective function is defined on the space of persistence diagrams, which is a metric space without linear structure and therefore requires special treatment. Specifically, we follow Selim & Ismail (1984) and define the KKT point for our optimization problem and show that the partial optimal points are always KKT points. These notions are formally defined and full technical details of our contributions are given in the next section.

Convergence of the k-Means Clustering Algorithm
Local minimality of the solution to the optimization problem has been previously established by studying the convergence of the k-means algorithm for persistence diagrams (Marchese et al., 2017). Here, we further investigate the various facets of the optimization problem in a KKT framework and derive properties of the solution, taking into account the subtleties and refinements of the KKT setting previously discussed in Section 2.4. We focus on the 2-Wasserstein distance W_2, as discussed in Section 2.3.
Problem Setup Let D = {D_1, …, D_n} be a set of n persistence diagrams and let G = {G_1, …, G_k} be a partition of D, i.e., G_i ∩ G_j = ∅ for i ≠ j and ∪_{i=1}^k G_i = D. Let Z = {Z_1, …, Z_k} be k persistence diagrams representing the centroids of a clustering. The within-cluster sum of distances is defined as

Σ_{i=1}^k Σ_{D_j ∈ G_i} W_2(Z_i, D_j)². (2)

Define a k × n matrix ω = [ω_ij] by setting ω_ij = 1 if D_j ∈ G_i and 0 otherwise. Then (2) can be equivalently expressed as

F(ω, Z) = Σ_{i=1}^k Σ_{j=1}^n ω_ij W_2(Z_i, D_j)².

We first define a domain for each centroid Z_i: let ℓ_i be the number of off-diagonal points of D_i, and let V be the convex hull of all off-diagonal points in ∪_{i=1}^n D_i. Define S to be the set of persistence diagrams with at most Σ_{i=1}^n ℓ_i off-diagonal points in V; the set S is relatively compact (Mileyko et al., 2011, Theorem 21). Our primal optimization problem is

min F(ω, Z) s.t. ω_ij ∈ {0, 1}, Σ_{i=1}^k ω_ij = 1 for all j, Z ∈ S^k. (P_0)

We relax the integer constraints in optimization problem (P_0) to continuous constraints by setting ω_ij ∈ [0, 1]. Let Θ be the set of all k × n column stochastic matrices, i.e., Θ = {ω ∈ R^{k×n} | ω_ij ≥ 0 and Σ_{i=1}^k ω_ij = 1 for all j}. The relaxed optimization problem is then

min F(ω, Z) s.t. ω ∈ Θ, Z ∈ S^k. (P)

Finally, we reduce F(ω, Z) to a univariate function f(ω) = min{F(ω, Z) : Z ∈ S^k} by minimizing over the second variable. The reduced optimization problem is

min f(ω) s.t. ω ∈ Θ. (RP)
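The cost F(ω, Z) above can be sketched numerically as follows; `dist` stands in for the 2-Wasserstein distance and scalars stand in for diagrams and centroids, purely for illustration (our own code, not the paper's).

```python
# Sketch of the cost F(omega, Z) = sum_ij omega_ij * dist(Z_i, D_j)^2
# with a pluggable distance in place of W_2.
def cost_F(omega, Z, data, dist):
    return sum(
        omega[i][j] * dist(Z[i], data[j]) ** 2
        for i in range(len(Z))
        for j in range(len(data))
    )


# A 0-1 column-stochastic omega (a hard assignment) recovers the
# within-cluster sum of squared distances of equation (2).
data, Z = [0.0, 1.0, 10.0], [0.5, 10.0]
omega = [[1, 1, 0],   # cluster 1 holds the first two data points
         [0, 0, 1]]   # cluster 2 holds the third
print(cost_F(omega, Z, data, lambda a, b: abs(a - b)))  # 0.5
```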
Lemma 6. The optimization problems (P_0), (P), and (RP) are equivalent in the sense that the minimum values of the cost functions are equal.
Proof. Problems (P) and (RP) are equivalent, since minimizing F(ω, Z) jointly over (ω, Z) is the same as first minimizing over Z and then over ω, so they have the same constraints and equal minimum cost values.
It suffices to prove that the reduced problem (RP) and the primal problem (P_0) are equivalent.
Claim 1: f(ω) is a concave function on the convex set Θ.
Let ω_1, ω_2 ∈ Θ and λ ∈ [0, 1]. Since F(ω, Z) is linear in ω for each fixed Z,

f(λω_1 + (1 − λ)ω_2) = min_{Z ∈ S^k} ( λF(ω_1, Z) + (1 − λ)F(ω_2, Z) ) ≥ λ min_{Z ∈ S^k} F(ω_1, Z) + (1 − λ) min_{Z ∈ S^k} F(ω_2, Z) = λf(ω_1) + (1 − λ)f(ω_2),

so f is concave. As a consequence, f(ω) always attains its minimum at an extreme point of Θ. Note that an extreme point is a point that cannot lie within the open interval of the convex combination of any two points of Θ.

Claim 2: The extreme points of Θ satisfy the constraints in (P_0).
The extreme points of Θ are 0-1 matrices which have exactly one 1 in each column (Cao et al., 2022); these are precisely the matrices satisfying the constraints in (P_0).
In summary, any optimal point for (RP) is also an optimal point for (P_0) and vice versa. Thus, all three optimization problems are equivalent.

Convergence to Partial Optimal Points
We prove that Algorithm 1 for persistence diagrams always converges to partial optimal points in a finite number of steps. Following Selim & Ismail (1984), a point (ω⋆, Z⋆) is a partial optimal point of (P) if F(ω⋆, Z⋆) ≤ F(ω, Z⋆) for all ω ∈ Θ and F(ω⋆, Z⋆) ≤ F(ω⋆, Z) for all Z ∈ S^k. The proof is an extension of Theorem 3.2 in Marchese et al. (2017).
Proof. The value of F only changes at two steps during each iteration. Fix an iteration t and let Z^(t) = {Z_1^(t), …, Z_k^(t)} be the k centroids from iteration t. At the first step of the (t + 1)st iteration, since we are assigning all persistence diagrams to a closest centroid, the assignment step cannot increase the cost: for any datum D_j, W_2(Z_{i′}^(t), D_j) ≤ W_2(Z_i^(t), D_j), where i′ is the new cluster of D_j and i is its previous cluster. At the second step, each centroid is updated to a Fréchet mean of its cluster, which minimizes the within-cluster sum of squared distances, so the update step cannot increase the cost either. Hence F is non-increasing across iterations. Since there are only finitely many possible assignments ω and the algorithm terminates once the assignment no longer changes, it converges in a finite number of steps, and the limit point is by construction a partial optimal point.
Theorem 8 is a revision of the original statement in Marchese et al. (2017), since the algorithm may not converge to local minima of (P).
Note that F(ω, Z) is differentiable as a function of ω but not differentiable as a function of Z. However, given that we are restricting to (D_2, W_2), there is a differential structure; in particular, the squared distance function admits gradients in the Alexandrov sense (Turner et al., 2014).
Given a point D ∈ D_2, let Σ_D be the set of all nontrivial unit-speed geodesics emanating from D. The angle between two geodesics γ_1, γ_2 ∈ Σ_D is defined as

∠_D(γ_1, γ_2) = lim_{s,t→0} arccos( (s² + t² − W_2(γ_1(s), γ_2(t))²) / (2st) ).
By identifying geodesics γ_1 ∼ γ_2 with zero angle between them, we define the space of directions (Σ_D, ∠_D) as the completion of Σ_D/∼ under the angular metric ∠_D. The tangent cone T_D is the Euclidean cone over Σ_D with the cone metric defined as

d_{T_D}((γ_1, s), (γ_2, t)) = ( s² + t² − 2st cos ∠_D(γ_1, γ_2) )^{1/2}.

The inner product of two tangent vectors is defined as ⟨(γ_1, s), (γ_2, t)⟩ = st cos ∠_D(γ_1, γ_2). For a function h: D_2 → R, the differential of h at D in the direction v ∈ T_D is

D_D h(v) = lim_{t→0^+} ( h(γ_v(t)) − h(D) ) / t,

if the limit exists and is independent of the selection of a geodesic γ_v with direction v. The gradient of h at D is a tangent vector ∇_D h ∈ T_D satisfying D_D h(v) ≤ ⟨∇_D h, v⟩ for all v ∈ T_D, with equality at v = ∇_D h.

Lemma 9 (Turner et al., 2014). Let Q_D(·) = W_2(D, ·)² denote the squared distance function. Then:
1. Q_D is Lipschitz continuous on any bounded set;
2. the differential and gradient are well-defined for Q_D.

Since F is a linear combination of squared distance functions, by Lemma 9, the gradient exists for any variable Z_i. Thus, we formally write the gradient of F as

∇_Z F(ω, Z) = ( ∇_{Z_1} F(ω, Z), …, ∇_{Z_k} F(ω, Z) ), where ∇_{Z_i} F(ω, Z) = Σ_{j=1}^n ω_ij ∇_{Z_i} W_2(Z_i, D_j)².

Following Selim & Ismail (1984), we define the KKT points for (P) as follows.

Definition 10. Let C(Z) denote the k × n matrix with entries C(Z)_ij = W_2(Z_i, D_j)². A point (ω⋆, Z⋆) ∈ Θ × S^k is a KKT point of (P) if there exist multipliers U ∈ R^{k×n} with U ≥ 0 and v ∈ R^n such that

C(Z⋆) − U + 1v^⊤ = 0, Σ_{i,j} U_ij ω⋆_ij = 0, and ∇_Z F(ω⋆, Z⋆) = 0,

where 1 is the all-1 vector.
Theorem 11. The partial optimal points of (P) are KKT points of (P).
Proof. Suppose (ω⋆, Z⋆) is a partial optimal point. Since ω⋆ is a minimizer of the function F(·, Z⋆), it satisfies the KKT conditions of the constrained optimization problem

min_ω F(ω, Z⋆) s.t. ω ∈ Θ,

which are exactly the first two conditions of being a KKT point. Similarly, since Z⋆ is a minimizer of the function F(ω⋆, ·), by Lemma 9, the gradient vector is zero, which is the third condition of a KKT point. Thus, (ω⋆, Z⋆) is a KKT point of (P).
Conversely, suppose (ω⋆, Z⋆) is a KKT point. Since F(·, Z⋆) is a linear function of ω and ω ∈ Θ is a linear constraint, the first two conditions are sufficient for ω⋆ to be a minimizer of F(·, Z⋆). Thus ω⋆ satisfies the first condition of a partial optimal point. However, the condition ∇_Z F(ω⋆, Z⋆) = 0 cannot guarantee that Z⋆ satisfies the second condition of a partial optimal point. Note that the original proof of Theorem 4 in Selim & Ismail (1984) is also incomplete.
Moreover, being a partial optimal point or a KKT point is not sufficient for being a local minimizer of the original optimization problem; a counterexample can be found in Selim & Ismail (1984).

Convergence to Local Minima
We now give a characterization of the local minimality of ω using directional derivatives of the objective function f(ω) in (RP). For any fixed element ω_0 ∈ Θ, let v ∈ R^{kn} be a feasible direction, i.e., ω_0 + tv ∈ Θ for all sufficiently small t > 0. The directional derivative of f(ω) along v is defined as

Df(ω_0; v) = lim_{t→0+} [f(ω_0 + tv) − f(ω_0)] / t,

if the limit exists. Let Z(ω_0) be the set of all Z minimizing the function F(ω_0, ·). We have the following explicit formula for the directional derivatives of f(ω).
Lemma 12. For any ω_0 ∈ Θ and any feasible direction v, the directional derivative Df(ω_0; v) exists, and

Df(ω_0; v) = min_{Z ∈ Z(ω_0)} ⟨∇_ω F(ω_0, Z), v⟩ =: L(ω_0; v).

Proof. For any Z⋆ ∈ Z(ω_0), we have f(ω_0 + tv) ≤ F(ω_0 + tv, Z⋆) and f(ω_0) = F(ω_0, Z⋆), so, since F(·, Z⋆) is linear in ω,

[f(ω_0 + tv) − f(ω_0)] / t ≤ ⟨∇_ω F(ω_0, Z⋆), v⟩.

Thus we have

lim sup_{t→0+} [f(ω_0 + tv) − f(ω_0)] / t ≤ L(ω_0; v).   (3)

It suffices to prove

lim inf_{t→0+} [f(ω_0 + tv) − f(ω_0)] / t ≥ L(ω_0; v).

To do this, let t_j be a sequence tending to 0 along which the lim inf is attained, and choose Z_j ∈ Z(ω_0 + t_j v) for each j. Since S_k is relatively compact, there is a subsequence of {Z_j} converging to some point Z⋆; we denote the subsequence by {Z_j} for the sake of convenience. By Lemma 9, F(ω, Z) is continuous. Thus, for any Z ∈ S_k,

F(ω_0, Z⋆) = lim_j F(ω_0 + t_j v, Z_j) ≤ lim_j F(ω_0 + t_j v, Z) = F(ω_0, Z).

That is, Z⋆ ∈ Z(ω_0). Therefore, since f(ω_0) ≤ F(ω_0, Z_j) and F(·, Z_j) is linear in ω, we have

[f(ω_0 + t_j v) − f(ω_0)] / t_j ≥ [F(ω_0 + t_j v, Z_j) − F(ω_0, Z_j)] / t_j = ⟨∇_ω F(ω_0, Z_j), v⟩ → ⟨∇_ω F(ω_0, Z⋆), v⟩ ≥ L(ω_0; v),

so lim inf_{t→0+} [f(ω_0 + tv) − f(ω_0)] / t ≥ L(ω_0; v). Combining with (3), we conclude that the limit exists and is equal to L(ω_0; v).
Since the function f(ω) is continuous and admits directional derivatives on a convex set, the first-order necessary condition for local minima, namely nonnegativity of all directional derivatives along feasible directions at a local minimizer, is a standard result in optimization.

Numerical Experiments
We experimentally evaluate and compare the performance of the various persistence diagram representations in the k-means algorithm on simulated data as well as a benchmark shape dataset. We remark that our experimental setup differs from that of the previous study by Marchese et al. (2017), which compares persistence-based clustering to other non-persistence-based machine learning clustering algorithms. Here, we are interested in the performance of the k-means algorithm in the context of persistent homology. The aim of our study is not to establish superiority of persistence-based k-means clustering over other existing statistical or machine learning clustering methods, but rather to understand the behavior of the k-means clustering algorithm in settings of persistent homology. The practical benefit of such a study is that it can make clustering by k-means feasible in settings where it would otherwise not be: a notable example, which we study later in Section 4.2, is data comprising sets of point clouds, where the natural Gromov-Hausdorff metric between point clouds is difficult to compute, and where, to the best of our knowledge, the Fréchet mean for sets of point clouds, which is crucial for the implementation of k-means, has not yet been defined or studied. Specifically, in our numerical experiments, we implement the k-means clustering algorithm on vectorized persistence diagrams, persistence diagrams, and persistence measures. In the spirit of machine learning, many embedding methods have been proposed for persistence diagrams so that existing statistical methods and machine learning algorithms can be applied to these vectorized representations, as discussed earlier.
We systematically compare the performance of k-means on three of the most popular embeddings. We also implement k-means on persistence diagrams themselves as well as on persistence measures, which requires adapting the algorithm to use the respective metrics and the appropriate notions of mean. In total, we study five representations of persistent homology in the k-means algorithm: three embeddings of persistence diagrams, persistence diagrams themselves, and their generalization to persistence measures.

Embedding Persistence Diagrams
The algebraic topological construction of persistence diagrams outlined in Section 2.1 results in a highly nonlinear data structure to which existing statistical and machine learning methods cannot be directly applied. The problem of embedding persistence diagrams has been extensively researched in TDA, with many proposals for vectorizations of persistence diagrams. In this paper, we study three of the most commonly used persistence diagram embeddings and their performance in the k-means clustering algorithm: Betti curves, persistence landscapes (Bubenik, 2015b), and persistence images (Adams et al., 2017b). These vectorized persistence diagrams can be given directly to the original, non-persistence-based k-means algorithm. In this sense, we compare the implementation of the persistent homology version of k-means clustering to the implementation of classical k-means clustering on vectors computed from the output of persistent homology (persistence diagrams). This experiment thus remains restricted to studying exclusively the k-means algorithm as a clustering method, and restricted to persistent homology, but nevertheless provides a comparison of the algorithm in two distinct metric spaces.
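For the vectorized representations, the classical algorithm is the standard Lloyd iteration on feature vectors. The following minimal sketch (our own illustration, with hypothetical function names, not the implementation used in our experiments) alternates the assignment and centroid-update steps on the rows of a feature matrix:

```python
import numpy as np

def lloyd_kmeans(X, k, n_iter=50, seed=0):
    """Classical Lloyd iteration on a feature matrix X (one row per
    vectorized diagram): assign each vector to its nearest centroid,
    then recompute centroids as cluster means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: nearest centroid in Euclidean distance.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: mean of each cluster (keep old centroid if empty).
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids
```

The persistence-based variants studied later replace the Euclidean distance by the Wasserstein distance and the arithmetic mean by the Fréchet mean or mean persistence measure.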
The Betti curve of a persistence diagram D is a function that takes any z ∈ R and returns the number of points (x, y) in D, counted with multiplicity, such that z ∈ [x, y). Figure 3a shows an example of a persistence diagram and Figure 3b shows its corresponding Betti curve. The Betti curve is a simplification of the persistence diagram in which information on the persistence of individual points is lost. It takes the form of a vector by sampling values of the function at regular intervals.
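As an illustration, a Betti curve vector can be computed directly from this definition; the sketch below is our own and simply counts, for each grid value z, the intervals [x, y) containing z:

```python
import numpy as np

def betti_curve(diagram, grid):
    """Evaluate the Betti curve of a persistence diagram at each grid
    value z: the number of points (x, y), counted with multiplicity,
    such that z lies in [x, y)."""
    diagram = np.asarray(diagram, dtype=float)
    births, deaths = diagram[:, 0], diagram[:, 1]
    # Half-open intervals: born at x (inclusive), dead at y (exclusive).
    return np.array([np.sum((births <= z) & (z < deaths)) for z in grid])
```

For example, the diagram {(0, 2), (1, 3)} evaluated on the grid (0, 1, 2, 3) yields the vector (1, 2, 1, 0).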
Persistence landscapes (Bubenik, 2015b) were introduced as functional summaries of persistence diagrams with the aim of integrability into statistical methodology. The persistence landscape is a sequence of functions {λ_k}_{k∈N}, where λ_k(t) is the kth largest value of max{0, min(t − x, y − t)} over all points (x, y) ∈ D. The construction of the persistence landscape vector is identical to that of the Betti curve, but applied to multiple λ_k functions for a single persistence diagram. The persistence image is the only representation we study that lies in Euclidean space; Betti curves and persistence landscapes are functions and therefore lie in function space.
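The kth landscape function can likewise be evaluated on a grid straight from its definition; the following sketch is our own illustration, not the implementation used in the experiments:

```python
import numpy as np

def landscape(diagram, k, grid):
    """Evaluate the k-th persistence landscape function lambda_k on a
    grid: lambda_k(t) is the k-th largest value of
    max(0, min(t - x, y - t)) over points (x, y) of the diagram
    (k is 1-indexed; the value is 0 when fewer than k points contribute)."""
    diagram = np.asarray(diagram, dtype=float)
    values = []
    for t in grid:
        # "Tent" function of each diagram point, evaluated at t.
        tents = np.maximum(0.0, np.minimum(t - diagram[:, 0], diagram[:, 1] - t))
        tents = np.sort(tents)[::-1]  # largest first
        values.append(tents[k - 1] if k <= len(tents) else 0.0)
    return np.array(values)
```

Stacking the sampled values of λ_1, λ_2, … produces the landscape vector, in the same way the Betti curve vector is built.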

Figure 3c continues the example of the persistence diagram in Figure 3a and shows its persistence landscape.
We remark that there is a notable difference in computation time between raw persistence diagrams and embedded persistence diagrams. This is because updating centroids of persistence diagrams (i.e., mean persistence measures and Fréchet means) involves expensive algorithms and techniques to approximate the Wasserstein distance and to search for local minima of the Fréchet function (1) (Turner et al., 2014; Flamary et al., 2021; Lacombe et al., 2018).
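To make this cost concrete, the sketch below (our own illustration, not the solvers cited above) computes the 2-Wasserstein distance between two very small diagrams by brute force: each diagram is augmented with diagonal partners for the other's points, and we minimize over all perfect matchings, which scales factorially and is only feasible for a handful of points. Practical implementations instead rely on optimal transport solvers.

```python
import itertools
import math

def diag_dist(p):
    # L2 distance from p = (x, y) to the diagonal y = x.
    return abs(p[1] - p[0]) / math.sqrt(2)

def wasserstein2(D1, D2):
    """2-Wasserstein distance between two small persistence diagrams by
    brute force over matchings of the diagonally augmented diagrams.
    O((m+n)!) -- illustrative only, for tiny inputs."""
    m, n = len(D1), len(D2)
    rows = [('pt', p) for p in D1] + [('diag', None)] * n
    cols = [('pt', q) for q in D2] + [('diag', None)] * m
    def cost2(r, c):
        # Squared cost: point-to-point, point-to-diagonal, or zero.
        if r[0] == 'pt' and c[0] == 'pt':
            return (r[1][0] - c[1][0]) ** 2 + (r[1][1] - c[1][1]) ** 2
        if r[0] == 'pt':
            return diag_dist(r[1]) ** 2
        if c[0] == 'pt':
            return diag_dist(c[1]) ** 2
        return 0.0
    best = min(sum(cost2(rows[i], cols[j]) for i, j in enumerate(perm))
               for perm in itertools.permutations(range(m + n)))
    return math.sqrt(best)
```

For instance, the distance from the one-point diagram {(0, 2)} to the empty diagram is the distance of that point to the diagonal, namely sqrt(2).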

Simulated Data
We generated datasets equipped with known labels, so any clustering output can be compared to these labels using the Adjusted Rand Index (ARI), a performance metric where a score of 1 indicates perfect alignment of the clustering output with the true labels and a score near 0 indicates random clustering (Hubert & Arabie, 1985).
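The ARI is computed from the contingency table of the two labelings; the following self-contained sketch (our own illustration of the standard formula of Hubert & Arabie, not the code used in our experiments) returns 1 for identical partitions, up to relabeling, and values near 0 for chance-level agreement:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand Index from the contingency table: 1 means the two
    labelings induce the same partition; ~0 means chance agreement."""
    n = len(labels_true)
    contingency = Counter(zip(labels_true, labels_pred))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_true).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_pred).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate case, e.g. trivial partitions
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```

Note that the ARI is invariant to permuting cluster labels, which is essential here since k-means assigns arbitrary cluster indices.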
We simulated point clouds sampled from the surfaces of 3 classes of common topological shapes: the circle S^1, sphere S^2, and torus T^2. In terms of homology, the circle is characterized by one connected component and one loop, so H_0(S^1) ≅ Z and H_1(S^1) ≅ Z. Similarly, the sphere is characterized by one connected component and one void (bubble), so H_0(S^2) ≅ Z and H_2(S^2) ≅ Z; there are no loops on a sphere, since every cycle traced on its surface can be continuously deformed to a single point, so H_1(S^2) ≅ 0. Finally, the torus has one connected component, so H_0(T^2) ≅ Z, and one void, so H_2(T^2) ≅ Z; notice that there are two cycles on the torus that cannot be continuously deformed to a point, one that traces the surface to enclose the "donut hole" and a smaller one that encircles the "thickness of the donut," so H_1(T^2) ≅ Z × Z.
We add noise drawn from the uniform distribution on [−s, s] coordinate-wise to each data point of the point clouds, where s denotes the noise scale. Figure 4 shows noisy point cloud data from the torus at different noise scales. Table 1a shows the results of k-means clustering with 3 clusters on the five persistent homology representations of the simulated dataset. We fix the number of clusters to 3, given our prior knowledge of the data and because our focus is on consistently comparing the performance accuracy of the k-means algorithms. Since circles, spheres, and tori differ markedly in topology, as described previously, the results of the k-means algorithm are consistent with our knowledge of these classes of topological shapes. Figure 5 presents the persistence diagrams of the three shapes we study.
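As an illustration of this data-generating process, the sketch below (our own; the function name and radius parameters are assumptions, and sampling uniformly in the two angles is not exactly uniform on the surface) draws a noisy point cloud from the torus:

```python
import numpy as np

def sample_torus(n, R=2.0, r=1.0, noise_scale=0.0, seed=0):
    """Sample n points from the torus T^2 with major radius R and minor
    radius r by drawing the two angles uniformly, then add
    coordinate-wise Uniform[-s, s] noise with s = noise_scale."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)  # angle around the tube
    phi = rng.uniform(0, 2 * np.pi, n)    # angle around the hole
    x = (R + r * np.cos(theta)) * np.cos(phi)
    y = (R + r * np.cos(theta)) * np.sin(phi)
    z = r * np.sin(theta)
    points = np.column_stack([x, y, z])
    s = noise_scale
    return points + rng.uniform(-s, s, size=points.shape)
```

Before noise is added, every sampled point satisfies the implicit torus equation (sqrt(x² + y²) − R)² + z² = r²; point clouds for the circle and sphere are generated analogously from their angular parametrizations.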
Tables 1b and 1c show the average results from 100 repetitions comparing the persistence landscapes and persistence images, and the persistence diagrams and persistence measures, respectively. We see that and Lion form the third, with other classes scattering across all these clusters. Our results are consistent with a similar clustering experiment previously performed by Lacombe et al. (2018), who find two main clusters on a smaller database with 6 classes. We also find that persistence diagrams perform slightly better than persistence measures on this dataset. Among embedded persistence diagrams, persistence landscapes give the best clustering results, followed by persistence images and Betti curves, which is consistent with the performance seen in the previous section on the simulated datasets.

Software and Data Availability
The code used to perform all experiments is publicly available at the GitHub repository: https://github.com/pruskileung/PH-kmeans.

Discussion
In this paper, we studied the k-means clustering algorithm in the context of persistent homology. We examined the subtleties of the convergence of the k-means algorithm in persistence diagram space within the KKT framework, a nontrivial problem given the highly nonlinear geometry of persistence diagram space. Specifically, we showed that the solution to the optimization problem is a partial optimal point, a KKT point, and a local minimum. These results refine, generalize, and extend the existing study by Marchese et al. (2017), which shows convergence to a partial optimal point, which need not be a local minimum. Experimentally, we studied and compared the performance of the algorithm on three embeddings of persistence diagrams, and modified the algorithm to take as input persistence diagrams themselves as well as their generalizations, persistence measures. We found empirically that clustering directly on persistence diagrams and persistence measures outperforms clustering on vectorized persistence diagrams, suggesting a loss of structural information in the most popular persistence diagram embeddings.
Our results inspire new directions for future study, such as other theoretical aspects of k-means clustering in the persistence diagram and persistence measure setting, for example, convergence to the "correct" clustering as the input persistence diagrams and persistence measures grow, and other properties of the algorithm such as convexity, other local or global optimizers, and analysis of the cost function. The problem of information preservation in persistence embeddings has been studied previously: statistically, an embedding based on tropical geometry results in no loss of statistical information via the concept of sufficiency (Monod et al., 2019). Additional studies from the perspective of entropy would facilitate a better understanding and quantification of the information lost in embedding persistence diagrams that negatively affects the k-means algorithm; this in turn would inspire more accurate persistence diagram embeddings for machine learning. Another possible direction for future research is to adapt other clustering methods, including deep learning-based methods, to the setting of persistent homology. This would then allow for a comprehensive study of a wide variety of statistical and machine learning approaches to clustering in the context of persistent homology.

Figure 1 :
Figure 1: Example of a VR filtration and VR complexes at various increasing scales on a point cloud.
with infinite multiplicity. Points in Ω are called off-diagonal points. The persistence diagram with no off-diagonal points is called the empty persistence diagram, denoted by D_∅.

Figure 2 :
Figure 2: Example illustrating the procedure of persistent homology (Ghrist, 2008). The panel on the left shows an annulus, the underlying space of interest; the topology of an annulus is characterized by 1 connected component (corresponding to H_0), 1 loop (corresponding to H_1), and 0 voids (corresponding to H_2). 17 points are sampled from the annulus and a simplicial complex (in order to be able to use simplicial homology) is constructed from these samples in the following way: the sampled points are taken to be centers of balls with radius ϵ, which grows from 0 to ∞. At each value of ϵ, whenever two balls intersect, they are connected by an edge; when three balls mutually intersect, they are connected by a triangle (face); this construction continues in higher dimensions. The simplicial complex resulting from this overlapping of balls as the radius ϵ grows is shown to the right of each snapshot at different values of ϵ. As ϵ grows, connected components merge, and cycles form and fill in. These events are tracked by a bar for each dimension of homology, shown on the right; the length of each bar corresponds to the lifetime of the corresponding topological feature as ϵ grows. Notice that as ϵ reaches infinity, there remains a single connected component, tracked by the longest bar for H_0. For H_1, there are bars of varying length, including several longer bars, which suggests irregular sampling of the annulus; the longest bar corresponds to the single loop of the annulus. Notice also that there is one short H_2 bar, which is likely a spurious topological feature arising from noisy sampling.

Figure 6 :
Figure 6: Samples from the shape matching database.Each column shows different poses of the same class.

Table 2 :
Clustering results for shape matching data.