A gradient sampling algorithm for stratified maps with applications to topological data analysis

We introduce a novel gradient descent algorithm refining the well-known Gradient Sampling algorithm on the class of stratifiably smooth objective functions, which are defined as locally Lipschitz functions that are smooth on some regular pieces—called the strata—of the ambient Euclidean space. On this class of functions, our algorithm achieves a sub-linear convergence rate. We then apply our method to objective functions based on the (extended) persistent homology map computed over lower-star filters, which is a central tool of Topological Data Analysis. For this, we propose an efficient exploration of the corresponding stratification by using the Cayley graph of the permutation group. Finally, we provide benchmarks and novel topological optimization problems that demonstrate the utility and applicability of our framework.


Motivation and related work
In its most general instance, nonsmooth nonconvex optimization seeks to minimize a locally Lipschitz objective (or loss) function $f : \mathbb{R}^n \to \mathbb{R}$. Without further regularity assumptions on $f$, most algorithms, such as the usual Gradient Descent with learning rate decay or the Gradient Sampling method, are only guaranteed to produce iterates whose subsequences are asymptotically stationary, without explicit convergence rates. Meanwhile, when restricted to the class of min-max functions (such as the maximum of finitely many smooth maps), stronger guarantees such as convergence rates can be obtained [43]. This illustrates a common paradigm in nonsmooth optimization: the richer the structure in the irregularities of $f$, the better the guarantees we can expect from an optimization algorithm. Note that there are algorithms specifically tailored to min-max functions, e.g. [8].
Another example is the family of bundle methods [37,38,42]. Roughly, these construct successive linear approximations of $f$ as a proxy for minimization. Their convergence guarantees are strong, especially when an additional semi-smoothness property of $f$ [10,57] can be assumed. Other types of methods, such as variable metric ones, also benefit from the semi-smoothness hypothesis [73]. In many cases, the convergence properties of an algorithm depend not only on the structure of $f$, but also on the amount of information about $f$ that can be computed in practice. For instance, the bundle method of [55] assumes that the Hessian matrix, when locally defined, can be computed. For accounts of the theory and practice of nonsmooth optimization, we refer the interested reader to [5,51,66].
The ability to cut $\mathbb{R}^n$ into well-behaved pieces where $f$ is regular is another important type of structure. Examples, in increasing order of generality, are semi-algebraic, (sub)analytic, definable, tame (w.r.t. an o-minimal structure), and Whitney stratifiable functions [11]. For such objective functions, the usual gradient descent (GD) algorithm, or a stochastic version of it, converges to stationary points [31]. In order to obtain further theoretical guarantees such as convergence rates, it is necessary to design optimization algorithms specifically tailored to regular maps, since these enjoy stronger properties: e.g., tame maps are semi-smooth [48], and the generalized gradients of Whitney stratifiable maps are closely related to the (restricted) gradients of the map along the strata [11]. Besides, strong convergence guarantees can be obtained under the Kurdyka-Łojasiewicz assumption [4,61], which covers the class of semi-algebraic maps. Our method is related to this line of work, in that we exploit the strata of $\mathbb{R}^n$ on which $f$ is smooth.
The motivation of this work stems from Topological Data Analysis (TDA), where geometric objects such as graphs are described by means of computable topological descriptors. Persistent Homology (PH) is one such descriptor, and has been successfully applied in various areas such as neuroscience [30,7], material science [44,69], signal analysis [63,72], shape recognition [54], and machine learning [23,18].
Persistent Homology describes graphs, and more generally simplicial complexes over $n$ nodes, by means of a signature called the barcode, or persistence diagram, $\mathrm{PH}(x)$. Here $x$ is a filter function, that is, a function on the nodes, which we view as a vector in $\mathbb{R}^n$. Loosely speaking, $\mathrm{PH}(x)$ is a finite multi-set of points in the upper half-plane $\{(b,d) \in \mathbb{R}^2,\ d \ge b\}$ that encodes topological and geometric information about the underlying simplicial complex and the function $x$.

[Figure 1: The function $f : \mathbb{R}^2 \to \mathbb{R}$ (blue surface) attains its minimum at $x^* = (0,0)$ and is not smooth along the line $\{z_1 = 0\}$. In particular, $\|\nabla f\| > 1$ around $x^*$, so the gradient norm cannot be used as a stopping criterion. Traditional GD, whose updates are $x_{k+1} = x_k - \frac{\lambda_0}{k+1} \nabla f(x_k)$, oscillates around $\{z_1 = 0\}$ due to the non-smoothness of $f$ and converges asymptotically toward $x^*$ because of the decaying learning rate $\frac{\lambda_0}{k+1}$. Meanwhile, nonsmooth optimization methods that sample points around $x_k$ in order to produce reliable descent directions converge in finite time: the classical Gradient Sampling method randomly samples 3 points and reaches an $(\epsilon,\eta)$-stationary point of $f$ in $\approx 20.6 \pm 3.9$ iterations (averaged over 100 experiments), while our stratified approach improves on this by leveraging explicit access to the two strata $\{z_1 < 0\}$ and $\{z_1 > 0\}$ where $f$ is differentiable: we sample additional points only when $x_k$ is $\epsilon$-near the line $\{z_1 = 0\}$, and reach an $(\epsilon,\eta)$-stationary point in 18 iterations. Right plots show the evolution of the objective value $f(x_k)$ and of the distance to the minimum $\|x_k - x^*\|$ across iterations. Parameters: $x_0 = (0.8, 0.8)$, $\lambda_0 = 10^{-1}$, $\epsilon = 10^{-1}$, $\eta = 10^{-2}$.]
Barcodes form a metric space $\mathrm{Bar}$ when equipped with the standard metrics of TDA, the so-called bottleneck and Wasserstein distances, and the persistence map $\mathrm{PH} : \mathbb{R}^n \to \mathrm{Bar}$ is locally Lipschitz [27,28]. However, $\mathrm{Bar}$ is neither Euclidean nor a smooth manifold, which hinders the use of these topological descriptors in standard statistical or machine learning pipelines. Still, there exist natural notions of differentiability for maps in and out of $\mathrm{Bar}$ [53]. In particular, the persistence map $\mathrm{PH} : \mathbb{R}^n \to \mathrm{Bar}$ restricts to a locally Lipschitz, smooth map on a stratification of $\mathbb{R}^n$ by polyhedra. If we compose the persistence map with a smooth, Lipschitz map $V : \mathrm{Bar} \to \mathbb{R}$, the resulting objective (or loss) function is itself Lipschitz and smooth on the various strata. From [31], and as recalled in [17], classical (Stochastic) Gradient Descent on $f$ asymptotically converges to stationary points. Similarly, the Gradient Sampling (GS) method asymptotically converges. See [68] for an application of GS to topological optimization.
Nonetheless, it is important to design algorithms that take advantage of the structure in the irregularities of the persistence map $\mathrm{PH}$, in order to obtain better theoretical guarantees. For instance, one can locally integrate the gradients of $\mathrm{PH}$, whenever defined, to stabilize the iterates [68], or add a regularization term to $f$ that acts as a smoothing operator [29]. In this work, we rather exploit the stratification of $\mathbb{R}^n$ induced by $\mathrm{PH}$, as it turns out to be easy to manipulate. We will show in particular that we can efficiently access points $x'$ located in strata neighboring the current iterate $x$, as well as estimate the distance to these strata.

Contributions and outline of contents
Our new method, called Stratified Gradient Sampling (SGS), is a variation of the established GS algorithm, whose main steps for updating the current iterate $x_k \in \mathbb{R}^n$ we recall in Algorithm 1 below.

Algorithm 1 An update step of the Gradient Sampling algorithm
1: Sample $x_k^1, \ldots, x_k^m$ uniformly in the ball $B(x_k, \epsilon)$, with $m \ge n+1$
2: Compute approximate subgradient $G_k := \{\nabla f(x_k), \nabla f(x_k^1), \cdots, \nabla f(x_k^m)\}$
3: Compute descent direction $g_k := \operatorname{argmin}\{\|g\|^2,\ g \text{ in the convex hull of } G_k\}$
4: Find step size $t_k \ge 0$ so that $f(x_k - t_k g_k) \le f(x_k) - \beta t_k \|g_k\|^2$ ($\beta \in (0,1)$ a hyperparameter)
5: Ensure that $f$ is differentiable at $x_{k+1} := x_k - t_k g_k$ by small perturbations

Our method is motivated by the closing remarks of a recent overview of the GS methodology [15], in which the authors suggest that GS theory and practice could be enhanced by assuming some extra structure on top of the non-differentiability of $f$.
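To make these steps concrete, here is a minimal numerical sketch of one GS update in Python/NumPy. This is not the authors' implementation: the Frank-Wolfe solver for the minimal-norm point of the convex hull and the halving line search are our own illustrative choices.

```python
import numpy as np

def min_norm_point(G, iters=200):
    """Frank-Wolfe iterations for the minimal-norm element of conv(G).
    G: (m, n) array whose rows are sampled gradients."""
    g = G[0].copy()
    for _ in range(iters):
        v = G[np.argmin(G @ g)]            # vertex most opposed to g
        d = g - v
        denom = d @ d
        if denom < 1e-16:
            break
        t = np.clip((g @ d) / denom, 0.0, 1.0)  # exact line search on [g, v]
        g = (1 - t) * g + t * v
    return g

def gs_update_step(f, grad_f, x, eps=0.1, m=None, beta=0.5, rng=None):
    """One Gradient Sampling update (sketch of Algorithm 1)."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = x.size
    m = n + 1 if m is None else m
    # Step 1: sample m >= n+1 points uniformly in the ball B(x, eps)
    dirs = rng.normal(size=(m, n))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = eps * rng.uniform(size=(m, 1)) ** (1.0 / n)
    samples = x + radii * dirs
    # Step 2: approximate subgradient G_k
    G = np.vstack([grad_f(x)] + [grad_f(s) for s in samples])
    # Step 3: descent direction = minimal-norm element of conv(G_k)
    g = min_norm_point(G)
    # Step 4: halve the step size until the descent criterion holds
    t = eps / (np.linalg.norm(g) + 1e-12)
    while f(x - t * g) > f(x) - beta * t * (g @ g) and t > 1e-12:
        t *= 0.5
    return x - t * g
```

On the nonsmooth toy function $f(z) = |z_1| + z_2^2$, a single such step from $(0.8, 0.8)$ already decreases the objective.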
In this work, we deal with stratifiably smooth maps, for which the non-differentiability is organized around smooth submanifolds that together form a stratification of $\mathbb{R}^n$. In Section 2, we review background material in nonsmooth analysis and define stratifiably smooth maps $f : \mathbb{R}^n \to \mathbb{R}$, a variant of the Whitney stratifiable functions of [11] for which we do not impose any Whitney regularity on the gluing between adjacent strata of $\mathbb{R}^n$, but rather require that there exist local $C^2$-extensions of the restrictions of $f$ to top-dimensional strata.
In order to update the current iterate $x_k$ when minimizing a stratifiably smooth objective function $f$, we introduce a new descent direction $g_k$. As in GS, $g_k$ is obtained in our SGS algorithm by collecting the gradients of samples around $x_k$ into an approximate subgradient $G_k$, and then taking the element with minimal norm in the convex set generated by $G_k$. A key difference with GS is that we only need to sample as many points around $x_k$ as there are distinct strata close by; compare with the $m \ge n+1$ samples of Algorithm 1. In Proposition 3, we show that we indeed obtain a descent direction, i.e., that the descent criterion $f(x_k - t_k g_k) \le f(x_k) - \beta t_k \|g_k\|^2$ (as in Line 4 of Algorithm 1) holds for a suitable choice of step size $t_k$.
Our SGS algorithm is detailed in Section 3.1 and analyzed in Section 3.2. The convergence of the original GS methodology crucially relies on the sample size $m \ge n+1$ in order to apply the Carathéodory theorem to subgradients. By contrast, our convergence analysis relies on the fact that the gradients of $f$, when restricted to neighboring strata, are locally Lipschitz. Hence, our proof of asymptotic convergence to stationary points (Theorem 2) is substantially different. In Theorem 3, we establish a convergence rate of our algorithm that holds for any proper stratifiably smooth map, an improvement over the guarantees of GS for general locally Lipschitz maps. Finally, in Section 3.3, we adapt our method and results to the case where only estimates of the distances to nearby strata are available.
In Section 4, we introduce the persistence map $\mathrm{PH}$ over a simplicial complex $K$, which gives rise to a wide class of stratifiably smooth objective functions with rich applications in TDA. We characterize the strata around the current iterate (i.e., filter function) $x_k$ by means of the permutation group over $n$ elements, where $n$ is the number of vertices of $K$. The Cayley graph associated to the permutation group then allows us to use Dijkstra's algorithm to efficiently explore the set of neighboring strata in increasing order of their distances to $x_k$, which are needed to compute descent directions.
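As an illustration of this exploration scheme, the sketch below (our own simplification, not the paper's implementation) runs a Dijkstra-style search on the Cayley graph of the permutation group generated by adjacent transpositions, starting from the ordering induced by the filter values. The crossing cost $|x_a - x_b|/\sqrt{2}$, the Euclidean distance from $x$ to the wall $\{x_a = x_b\}$, is a plausible proxy for the paper's distance estimate, not its exact formula.

```python
import heapq
import numpy as np

def neighboring_strata(x, radius):
    """Dijkstra-style exploration of the Cayley graph of the permutation
    group (generators: adjacent transpositions). Each vertex ordering of
    the filter values x corresponds to one stratum; we return the orderings
    whose estimated distance to x is at most `radius`, sorted by distance."""
    start = tuple(np.argsort(x))              # ordering induced by x
    dist = {start: 0.0}
    heap = [(0.0, start)]
    reachable = []
    while heap:
        d, sigma = heapq.heappop(heap)
        if d > dist[sigma]:
            continue                          # stale heap entry
        reachable.append((d, sigma))
        for i in range(len(sigma) - 1):
            a, b = sigma[i], sigma[i + 1]
            tau = sigma[:i] + (b, a) + sigma[i + 2:]
            # reaching tau's stratum requires crossing the wall {x_a = x_b}
            nd = max(d, abs(x[a] - x[b]) / np.sqrt(2))
            if nd <= radius and nd < dist.get(tau, np.inf):
                dist[tau] = nd
                heapq.heappush(heap, (nd, tau))
    return reachable
```

For instance, with filter values $(0, 0.5, 0.6)$ and radius $0.1$, only the swap of the two closest vertices is reachable, so exactly two strata are returned.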
Section 5 is devoted to the implementation of the SGS algorithm for the optimization of persistent homology-based objective functions $f$. In Section 5.1, we provide empirical evidence that SGS behaves better than GD and GS in a simple experiment on the minimization of total persistence. In Section 5.3 and Section 5.2, we consider two novel topological optimization problems which we believe are of interest in real-world applications. On the one hand, the Topological Template Registration of a filter function $x$ defined on a complex $K$ is the task of finding a filter function $x'$ over a smaller complex $K'$ that preserves the barcode of $x$. On the other hand, given a Mapper graph $G$, a standard visualization tool for arbitrary data sets [67], we can bootstrap the data set in order to produce multiple bootstrapped graphs $G_i$. The Topological Mean is then the task of finding a new graph $G^*$ whose barcode is as close as possible to the mean of the barcodes associated to the graphs $G_i$. As a result, we obtain a smoothed version $G^*$ of the Mapper graph $G$ in which spurious and non-relevant graph attributes are removed.

A direction of descent for stratifiably smooth maps
In this section, we define the class of stratifiably smooth maps whose optimization is at stake in this work.For such maps, we can define an approximate subgradient and a corresponding descent direction, which is the key ingredient of our algorithm.

Nonsmooth Analysis
We first recall some useful background in nonsmooth analysis, essentially from [25]. Throughout, $f : \mathbb{R}^n \to \mathbb{R}$ is a locally Lipschitz (not necessarily smooth or convex) and proper (i.e., with compact sublevel sets) function, which we aim to minimize.
First-order information of $f$ at $x \in \mathbb{R}^n$ in a direction $v \in \mathbb{R}^n$ is captured by its generalized directional derivative:

$$f^\circ(x; v) := \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + t v) - f(y)}{t}. \qquad (1)$$

Besides, the generalized gradient is the following set of linear subapproximations:

$$\partial f(x) := \{ w \in \mathbb{R}^n \mid \langle w, v \rangle \le f^\circ(x; v) \ \text{for all } v \in \mathbb{R}^n \}. \qquad (2)$$

Given an arbitrary set $S \subset \mathbb{R}^n$ of Lebesgue measure 0, we have an alternative description of the generalized gradient in terms of limits of surrounding gradients, whenever defined:

$$\partial f(x) = \overline{\mathrm{co}} \left\{ \lim_i \nabla f(x_i) \ \middle|\ x_i \to x,\ x_i \notin S,\ \nabla f(x_i) \text{ is defined},\ \lim_i \nabla f(x_i) \text{ exists} \right\}, \qquad (3)$$

where $\overline{\mathrm{co}}$ is the operation of taking the closure of the convex hull. The duality between generalized directional derivatives and gradients is captured by the equality:

$$f^\circ(x; v) = \max \{ \langle w, v \rangle \mid w \in \partial f(x) \}. \qquad (4)$$

The Goldstein subgradient [41] is an $\epsilon$-relaxation of the generalized gradient:

$$\partial_\epsilon f(x) := \overline{\mathrm{co}} \bigcup_{y \in \overline{B}(x, \epsilon)} \partial f(y). \qquad (5)$$

Given $x \in \mathbb{R}^n$, we say that $x$ is a stationary point (for $f$) if $0 \in \partial f(x)$.
Any local minimum is stationary, and conversely if $f$ is convex. We also have weaker notions: given $\epsilon, \eta > 0$, $x$ is $\epsilon$-stationary if $0 \in \partial_\epsilon f(x)$, and $x$ is $(\epsilon,\eta)$-stationary if $d(0, \partial_\epsilon f(x)) \le \eta$.
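As a simple illustration of these notions (our example, not taken from the source), take $f(x) = |x|$ on $\mathbb{R}$:

```latex
\partial f(x) = \{\operatorname{sign}(x)\} \ \text{for } x \neq 0,
\qquad
\partial f(0) = [-1, 1],
\qquad
\partial_\epsilon f(x) = \overline{\mathrm{co}} \bigcup_{|y - x| \le \epsilon} \partial f(y).
```

Hence $\partial_\epsilon f(x) = [-1, 1]$ whenever $|x| \le \epsilon$, so every such $x$ is $\epsilon$-stationary; while for $|x| > \epsilon$ we get $\partial_\epsilon f(x) = \{\operatorname{sign}(x)\}$ and $d(0, \partial_\epsilon f(x)) = 1$, so such a point is $(\epsilon,\eta)$-stationary only if $\eta \ge 1$.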

Stratifiably smooth maps
Desirable properties for an optimization algorithm are that it produces iterates $(x_k)_k$ that either converge to an $(\epsilon,\eta)$-stationary point in finitely many steps, or whose subsequences (or some of them) converge to an $\epsilon$-stationary point. For this, we work in the setting of objective functions that are smooth when restricted to submanifolds that together partition $\mathbb{R}^n$.

Definition 1. A stratification $\mathcal{X} = \{X_i\}_{i \in I}$ of a closed subset $X \subseteq \mathbb{R}^n$ is a locally finite partition of $X$ by smooth submanifolds $X_i$, called strata, such that for $i \neq j \in I$: if $X_i \cap \overline{X_j} \neq \emptyset$, then $X_i \subseteq \overline{X_j}$. This makes $(X, \mathcal{X})$ into a stratified space.
Note that we do not impose any of the (usually needed) gluing conditions between adjacent strata, as we do not require them in our analysis. In particular, semi-algebraic, subanalytic, or definable subsets of $\mathbb{R}^n$, as well as Whitney stratified sets, are stratified in the above weak sense. We next define the class of maps $f$ whose restrictions $f_{|X_i}$ to the strata $X_i$ of some stratification $\mathcal{X}$ are smooth, inspired by the Whitney stratifiable maps of [11] (where $\mathcal{X}$ is required to be Whitney) and the stratifiable functions of [34]; however, we further require that the restrictions $f_{|X_i}$ admit local extensions of class $C^2$.

Definition 2. The map $f : \mathbb{R}^n \to \mathbb{R}$ is stratifiably smooth if there exists a stratification $\mathcal{X}$ of $\mathbb{R}^n$ such that for each stratum $X_i \in \mathcal{X}$, the restriction $f_{|X_i}$ admits an extension $f_i$ of class $C^2$ on a neighborhood of $X_i$.
Remark 1. The slightly weaker assumption that each extension $f_i$ is continuously differentiable with locally Lipschitz gradient would also have been sufficient for our purposes.
We denote by $X_x$ the stratum containing $x$, and by $\mathcal{X}_x \subseteq \mathcal{X}$ the set of strata containing $x$ in their closures. More generally, for $\epsilon > 0$, we let $\mathcal{X}_{x,\epsilon} \subseteq \mathcal{X}$ be the set of strata $X_i$ such that the closure of the ball $B(x,\epsilon)$ has non-empty intersection with the closure of $X_i$. Local finiteness in the definition of a stratification implies that $\mathcal{X}_{x,\epsilon}$ (and $\mathcal{X}_x$) is finite.
If $f$ is stratifiably smooth and $X_i \in \mathcal{X}_x$ is a stratum, there is a well-defined limit gradient $\nabla_{X_i} f(x)$, the unique limit of the gradients $\nabla f_{|X_i}(x_n)$ where $x_n \in X_i$ is any sequence converging to $x$. Indeed, this limit exists and does not depend on the choice of sequence, since $f_{|X_i}$ admits a local $C^2$ extension $f_i$. The following result states that the generalized gradient at $x$ can be retrieved from these finitely many limit gradients along the various adjacent top-dimensional strata.

Proposition 1. If $f$ is stratifiably smooth, then for any $x \in \mathbb{R}^n$ we have:

$$\partial f(x) = \mathrm{co} \left\{ \nabla_{X_i} f(x) \mid X_i \in \mathcal{X}_x \text{ top-dimensional} \right\}.$$

More generally, for $\epsilon > 0$:

$$\partial_\epsilon f(x) = \overline{\mathrm{co}} \bigcup_{x' \in \overline{B}(x,\epsilon)} \left\{ \nabla_{X_i} f(x') \mid X_i \in \mathcal{X}_{x'} \text{ top-dimensional} \right\}.$$

Proof. We show the first equality only, as the second can be proven along the same lines. We use the description of $\partial f(x)$ in terms of limit gradients from Equation (3), which implies the inclusion of the right-hand side in $\partial f(x)$. Conversely, let $S$ be the union of the strata in $\mathcal{X}_x$ of positive codimension, which has measure 0. Let $(x_i)$ be a sequence avoiding $S$ and converging to $x$, such that $\nabla f(x_i)$ converges as well. Since $\mathcal{X}_x$ is finite, up to extracting a subsequence, we may assume that all the $x_i$ lie in a common top-dimensional stratum $X_i \in \mathcal{X}_x$. Consequently, $\nabla f(x_i)$ converges to $\nabla_{X_i} f(x)$.

Direction of descent
Thinking of $x$ as the current position, we look for a direction of (steepest) descent, in the sense that a perturbation of $x$ in this direction produces a (maximal) decrease of $f$. Given $\epsilon \ge 0$, we let $g(x,\epsilon)$ be the projection of the origin onto the convex set $\partial_\epsilon f(x)$. Equivalently, $g(x,\epsilon)$ solves the minimization problem:

$$g(x,\epsilon) = \operatorname{argmin} \{ \|g\| \mid g \in \partial_\epsilon f(x) \}. \qquad (6)$$
Introduced in [41], the direction $-g(x,\epsilon)$ is a good candidate descent direction, as we now explain. Since $g(x,\epsilon)$ is the projection of the origin onto the closed convex set $\partial_\epsilon f(x)$, we have the classical inequality $\langle g(x,\epsilon), g(x,\epsilon) - g \rangle \le 0$, which holds for any $g$ in the Goldstein subgradient at $x$. Equivalently:

$$\forall g \in \partial_\epsilon f(x), \quad \langle -g(x,\epsilon), g \rangle \le -\|g(x,\epsilon)\|^2. \qquad (7)$$
Informally, if we consider a small perturbation $x - t g(x,\epsilon)$ of $x$ along this direction, for $t > 0$ small enough, then $f(x - t g(x,\epsilon)) \approx f(x) - t \langle \nabla f(x), g(x,\epsilon) \rangle$. Using Equation (7), since $\nabla f(x) \in \partial_\epsilon f(x)$, we deduce that $f(x - t g(x,\epsilon)) \le f(x) - t \|g(x,\epsilon)\|^2$. So $f$ locally decreases at a linear rate in the direction $-g(x,\epsilon)$. This intuition relies on the fact that $\nabla f(x)$ is well-defined, so as to provide a first-order approximation of $f$ around $x$, and that $t$ is chosen small enough. In order to make this reasoning completely rigorous, we need the following well-known result (see for instance [25]):

Theorem 1 (Lebourg mean value theorem). Let $x, x' \in \mathbb{R}^n$. Then there exist some $y \in [x, x']$ and some $w \in \partial f(y)$ such that:

$$f(x') - f(x) = \langle w, x' - x \rangle.$$
Let $t > 0$ be smaller than $\epsilon / \|g(x,\epsilon)\|$, in order to ensure that $x' := x - t g(x,\epsilon)$ is $\epsilon$-close to $x$. Then, by the mean value theorem (and Proposition 1), we have $f(x - t g(x,\epsilon)) - f(x) = -t \langle w, g(x,\epsilon) \rangle$ for some $w \in \partial_\epsilon f(x)$. Equation (7) then yields:

$$f(x - t g(x,\epsilon)) \le f(x) - t \|g(x,\epsilon)\|^2, \qquad (8)$$

as desired.
In practical scenarios, however, it is unlikely that the exact descent direction $-g(x,\epsilon)$ can be determined. Indeed, by Equation (6), this would require knowledge of the set $\partial_\epsilon f(x)$, which consists of infinitely many (limits of) gradients in an $\epsilon$-neighborhood of $x$. Provided that $f$ is stratifiably smooth, we now build a faithful approximation $\hat\partial_\epsilon f(x)$ of $\partial_\epsilon f(x)$ by collecting gradient information in the strata that are $\epsilon$-close to $x$.
For each top-dimensional stratum $X_i \in \mathcal{X}_{x,\epsilon}$, let $x_i$ be an arbitrary point in $\overline{B}(x,\epsilon) \cap X_i$, and define:

$$\hat\partial_\epsilon f(x) := \mathrm{co} \left\{ \nabla f_i(x_i) \mid X_i \in \mathcal{X}_{x,\epsilon} \text{ top-dimensional} \right\}. \qquad (9)$$

Of course, $\hat\partial_\epsilon f(x)$ depends on the choice of each $x_i \in X_i$. But this will not matter for the rest of the analysis, as we will only rely on the following approximation result, which holds for arbitrary choices of the points $x_i$:

Proposition 2. Let $x \in \mathbb{R}^n$ and $\epsilon > 0$. Assume that $f$ is stratifiably smooth. Let $L$ be a Lipschitz constant of the gradients $\nabla f_i$ restricted to $\overline{B}(x,\epsilon) \cap X_i$, where $f_i$ is some local $C^2$ extension of $f_{|X_i}$ and $X_i \in \mathcal{X}_{x,\epsilon}$ is top-dimensional. Then we have:

$$\hat\partial_\epsilon f(x) \subseteq \partial_\epsilon f(x) \subseteq \hat\partial_\epsilon f(x) + \overline{B}(0, 2L\epsilon).$$

In particular, the distances from the origin to $\hat\partial_\epsilon f(x)$ and to $\partial_\epsilon f(x)$ differ by at most $2L\epsilon$. Note that, since the $f_i$ are of class $C^2$, their gradients are locally Lipschitz; hence, by compactness of $\overline{B}(x,\epsilon)$, the existence of the Lipschitz constant $L$ above is always guaranteed.
Proof. From Proposition 1, we have $\partial_\epsilon f(x) = \overline{\mathrm{co}} \bigcup_{x' \in \overline{B}(x,\epsilon)} \{\nabla_{X_i} f(x') \mid X_i \in \mathcal{X}_{x'} \text{ top-dimensional}\}$. Since each $x_i$ belongs to $\overline{B}(x,\epsilon)$, this yields the inclusion $\hat\partial_\epsilon f(x) \subseteq \partial_\epsilon f(x)$. Now, let $x' \in \mathbb{R}^n$ with $|x' - x| \le \epsilon$, and let $X_i \in \mathcal{X}_{x'} \subseteq \mathcal{X}_{x,\epsilon}$ be a top-dimensional stratum touching $x'$. By the definition of $x_i$ in Equation (9), $x'$ and $x_i$ both belong to $\overline{B}(x,\epsilon)$, and they both belong to (the closure of) the stratum $X_i$. Therefore, $|\nabla_{X_i} f(x') - \nabla_{X_i} f(x_i)| \le 2L\epsilon$, and so $\nabla_{X_i} f(x') \in \hat\partial_\epsilon f(x) + \overline{B}(0, 2L\epsilon)$. The result follows from the fact that $\hat\partial_\epsilon f(x) + \overline{B}(0, 2L\epsilon)$ is convex and closed.
Recall from Equation (8) that the (opposite of the) descent direction $-g(x,\epsilon)$ is built by projecting the origin onto $\partial_\epsilon f(x)$. Similarly, we define our approximate descent direction as $-\hat g(x,\epsilon)$, where $\hat g(x,\epsilon)$ is the projection of the origin onto the closed convex set $\hat\partial_\epsilon f(x)$:

$$\hat g(x,\epsilon) = \operatorname{argmin} \{ \|g\| \mid g \in \hat\partial_\epsilon f(x) \}. \qquad (10)$$

We show that this choice yields a direction of decrease of $f$, in a sense similar to Equation (8).
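On a toy stratifiably smooth map echoing Figure 1 (our own example, not the authors' code; the two-point minimal-norm projection below is the closed-form special case of the projection in Equation (10) when only two strata are nearby), the construction reads:

```python
import numpy as np

# Toy stratifiably smooth map: f(z) = 2|z_1| + z_2^2, smooth on the two
# top-dimensional strata {z_1 < 0} and {z_1 > 0}.
f = lambda z: 2 * abs(z[0]) + z[1] ** 2
grads = {  # gradients of the C^2 extensions of f to each stratum
    "left":  lambda z: np.array([-2.0, 2 * z[1]]),
    "right": lambda z: np.array([2.0, 2 * z[1]]),
}

def sgs_direction(x, eps):
    """Approximate descent direction: project the origin onto the convex
    hull of one gradient per eps-close top-dimensional stratum."""
    here = "right" if x[0] > 0 else "left"
    G = [grads[here](x)]
    if abs(x[0]) <= eps:  # the other stratum is eps-close: sample it too
        other = "left" if here == "right" else "right"
        xi = np.array([-np.sign(x[0]) * eps / 2, x[1]])  # a point of that stratum
        G.append(grads[other](xi))
    if len(G) == 1:
        return G[0]
    g1, g2 = G
    d = g1 - g2
    t = np.clip((g1 @ d) / (d @ d), 0.0, 1.0)  # minimal-norm point of [g1, g2]
    return (1 - t) * g1 + t * g2
```

At $x = (0.05, 0.8)$ with $\epsilon = 0.1$, the $\pm 2$ first components cancel and the direction is $(0, 1.6)$, along which the descent criterion of Proposition 3 holds with, e.g., $\beta = 1/2$.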
Proposition 3. Let $f$ be stratifiably smooth, and let $x$ be a non-stationary point. Let $0 < \beta < 1$ and $\epsilon_0 > 0$. Denote by $L$ a Lipschitz constant for all the gradients of the restrictions $f_i$ over the ball $\overline{B}(x, \epsilon_0)$ (as in Proposition 2). Then: (i) for $0 < \epsilon \le \epsilon_0$ small enough, we have $\epsilon \le \frac{1-\beta}{2L} \|\hat g(x,\epsilon)\|$; and (ii) for such $\epsilon$, we have:

$$\forall t \le \frac{\epsilon}{\|\hat g(x,\epsilon)\|}, \quad f(x - t \hat g(x,\epsilon)) \le f(x) - \beta t \|\hat g(x,\epsilon)\|^2.$$

Proof. Saying that $x$ is non-stationary is equivalent to the inequality $\|g(x,0)\| > 0$. We show that the non-increasing map $\epsilon \in \mathbb{R}_+ \mapsto \|g(x,\epsilon)\| \in \mathbb{R}_+$ is continuous at $0^+$. Let $\epsilon$ be small enough that the strata incident to $x$ are exactly those meeting the $\epsilon$-ball around $x$, i.e., $\mathcal{X}_{x,\epsilon} = \mathcal{X}_x$. Such an $\epsilon$ exists since there are finitely many strata, with closed closures, meeting a sufficiently small neighborhood of $x$; of course, all smaller values of $\epsilon$ enjoy the same property. By Proposition 1, we then have the nesting $\partial f(x) \subseteq \partial_\epsilon f(x) \subseteq \partial f(x) + \overline{B}(0, 2L\epsilon)$, where $L$ is a Lipschitz constant for the gradients in neighboring strata. In turn, $0 \le \|g(x,0)\| - \|g(x,\epsilon)\| \le 2L\epsilon$. In particular, $\|g(x,\epsilon)\| \to \|g(x,0)\| > 0$ as $\epsilon$ goes to 0, hence $\epsilon = o(\|g(x,\epsilon)\|)$. Besides, the inclusion $\hat\partial_\epsilon f(x) \subseteq \partial_\epsilon f(x)$ (Proposition 2) implies that $\|\hat g(x,\epsilon)\| \ge \|g(x,\epsilon)\| > 0$. This yields $\epsilon = o(\|\hat g(x,\epsilon)\|)$, and so item (i) is proved.
We now assume that $\epsilon$ satisfies the inequality of item (i), and let $0 \le t \le \epsilon / \|\hat g(x,\epsilon)\|$. By the Lebourg mean value theorem, there exist some $y \in [x, x - t \hat g(x,\epsilon)]$ and some $w \in \partial f(y)$ such that:

$$f(x - t \hat g(x,\epsilon)) - f(x) = t \langle w, -\hat g(x,\epsilon) \rangle.$$
Since $t \le \epsilon / \|\hat g(x,\epsilon)\|$, the point $y$ is at distance no greater than $\epsilon$ from $x$; in particular, $w$ belongs to $\partial_\epsilon f(x)$. By Proposition 2, there exists some $\hat w \in \hat\partial_\epsilon f(x)$ at distance no greater than $2L\epsilon$ from $w$. We then rewrite:

$$f(x - t \hat g(x,\epsilon)) - f(x) = t \langle w - \hat w, -\hat g(x,\epsilon) \rangle + t \langle \hat w, -\hat g(x,\epsilon) \rangle. \qquad (11)$$
On the one hand, by the Cauchy-Schwarz inequality:

$$t \langle w - \hat w, -\hat g(x,\epsilon) \rangle \le 2L\epsilon\, t\, \|\hat g(x,\epsilon)\| \le (1-\beta)\, t\, \|\hat g(x,\epsilon)\|^2, \qquad (12)$$

where the last inequality relies on the assumption that $\epsilon \le \frac{1-\beta}{2L} \|\hat g(x,\epsilon)\|$. On the other hand, since $\hat g(x,\epsilon)$ is the projection of the origin onto $\hat\partial_\epsilon f(x)$, we obtain $\langle \hat g(x,\epsilon) - \hat w, \hat g(x,\epsilon) \rangle \le 0$, or equivalently:

$$\langle \hat w, -\hat g(x,\epsilon) \rangle \le -\|\hat g(x,\epsilon)\|^2. \qquad (13)$$

Plugging the inequalities of Equations (12) and (13) into Equation (11) proves item (ii).
Stratified Gradient Sampling (SGS)

In this section, we develop a gradient descent algorithm for the optimization of stratifiably smooth functions, and then detail its convergence properties. We require that the objective function $f : \mathbb{R}^n \to \mathbb{R}$ has the following properties:

• (Proper) $f$ has compact sublevel sets.
• (Stratifiably smooth) $f$ is stratifiably smooth and, for each iterate $x$ and $\epsilon \ge 0$, we have an oracle $\mathrm{SampleOracle}(x,\epsilon)$ that samples one $\epsilon$-close element $x'$ in each $\epsilon$-close top-dimensional stratum $X'$.
• (Differentiability check) We have an oracle $\mathrm{DiffOracle}(x)$ checking whether an iterate $x \in \mathbb{R}^n$ belongs to the set $D \subset \mathbb{R}^n$ over which $f$ is differentiable.
That $f$ is a proper map is also needed in the original GS algorithm [16], but this condition can be omitted, as in [52], to allow the values $f(x_k)$ to decrease to $-\infty$. In our case, we keep the assumption because we need the gradient of $f$ (whenever defined) to be Lipschitz on sublevel sets. Similarly, the ability to check that an iterate $x_k$ belongs to $D$ is standard in the GS methodology. We use it to make sure that $f$ is differentiable at each iterate $x_k$. For this, we call a subroutine MakeDifferentiable, which slightly perturbs the iterate $x_k$ to achieve differentiability while maintaining a descent condition. Note that these considerations are mainly theoretical: generically, the iterates $x_k$ are points of differentiability, so MakeDifferentiable is unlikely to change anything.
The last requirement, that $f$ be stratifiably smooth, replaces the classical, weaker assumption of the GS algorithm that $f$ is locally Lipschitz and that the set $D$ where $f$ is differentiable is open and dense. There are many possible ways to design the oracle $\mathrm{SampleOracle}(x,\epsilon)$: for instance, the sampling could depend on arbitrary probability measures on each stratum, or it could be set by deterministic rules depending on the input $(x,\epsilon)$, as will be the case for the persistence map in Section 4. However, our algorithm and its convergence properties are oblivious to these degrees of freedom since, by Section 2.3, any such sampling allows us to approximate the Goldstein subgradient $\partial_\epsilon f(x_k)$ by $\hat\partial_\epsilon f(x_k)$, computed from finitely many neighboring points. In turn, we obtain an approximate descent direction $g_k$, which is used to produce the subsequent iterate $x_{k+1} := x_k - t_k g_k$, as in classical smooth gradient descent.

The algorithm
The details of the main algorithm SGS are given in Algorithm 2.
The algorithm SGS calls the method UpdateStep of Algorithm 3 as a subroutine to compute the descent direction $g_k$ and the step size $t_k$. Essentially, this method progressively reduces the exploration radius $\epsilon_k$ of the ball in which we compute the descent direction $g_k := \hat g(x_k, \epsilon_k)$, until the criteria of Proposition 3 ensuring that the loss sufficiently decreases along $g_k$ are met.
Given the iterate $x_k$ and the radius $\epsilon_k$, the calculation of $g_k := \hat g(x_k, \epsilon_k)$ is done by the subroutine ApproxGradient of Algorithm 4: points $x'$ in neighboring strata intersecting the ball $B(x_k, \epsilon_k)$ are sampled using $\mathrm{SampleOracle}(x_k, \epsilon_k)$ to compute the approximate Goldstein subgradient and, in turn, the descent direction $g_k$.
Much like the classical GS algorithm, our method behaves like the well-known smooth gradient descent where the gradient is replaced with a descent direction computed from gradients in neighboring strata.
A key difference, however, is that in order to find the right exploration radius $\epsilon_k$ and step size $t_k$, UpdateStep needs to maintain a constant $C_k$ approximating the ratio $\frac{1-\beta}{2L}$ of Proposition 3, as no Lipschitz constant $L$ may be explicitly available.
To this effect, UpdateStep maintains a relative balance between the exploration radius $\epsilon_k$ and the norm of the descent direction $g_k$, controlled by $C_k$, i.e., $\epsilon_k \simeq C_k \|g_k\|$. As we further maintain $C_k \simeq \frac{1-\beta}{2L}$, we know that the convergence properties of $\epsilon_k$ and $g_k$ are closely related. The utility of this controlling constant is thus mainly theoretical, ensuring convergence of the iterates $x_k$ towards stationary points in Theorem 2. In practice, we start with a large initial constant $C_0$, and decrease it on Line 11 of Algorithm 3 whenever it violates a property of the target constant $\frac{1-\beta}{2L}$ given by Proposition 3.
Algorithm 2 SGS$(f, x_0, \epsilon, \eta, C_0, \beta, \gamma)$
Require: loss function $f$, initial iterate $x_0 \in D$, exploration radius $\epsilon > 0$, initial constant $C_0 > 0$ controlling the exploration radius, critical distance to origin $\eta \ge 0$, descent rate $0 < \beta < 1$, step size decay rate $0 < \gamma < 1$.

Remark 2. Assume that we dispose of a common Lipschitz constant $L$ for all the gradients $\nabla f_i$ in the $\epsilon$-neighborhood of the current iterate $x_k$; recall that $f_i$ is any $C^2$ extension of the restriction $f_{|X_i}$ to the neighboring top-dimensional stratum $X_i \in \mathcal{X}_{x,\epsilon}$. Then we can simplify Algorithm 3 by decreasing the exploration radius $\epsilon_k$ progressively until $\epsilon_k \le \frac{1-\beta}{2L} \|\hat g(x_k, \epsilon_k)\|$, as done in Algorithm 6. By Proposition 3, this ensures that the resulting update step satisfies the descent criterion $f(x_k - t_k \hat g(x_k,\epsilon_k)) < f(x_k) - \beta t_k \|\hat g(x_k,\epsilon_k)\|^2$. In particular, the parameter $C_k$ is no longer needed, and the theoretical guarantees of the simplified algorithm are unchanged. Note that for objective functions from TDA (see Section 4), the stability theorems (e.g. from [27]) often provide global Lipschitz constants, enabling the use of the simplified update step described in Algorithm 6.
Remark 3. In the situation of Remark 2, let us further assume that $\epsilon \in \mathbb{R}_+ \mapsto \|\hat g(x,\epsilon)\| \in \mathbb{R}_+$ is non-increasing. This monotonicity property mimics the fact that $\epsilon \in \mathbb{R}_+ \mapsto \|g(x,\epsilon)\| \in \mathbb{R}_+$ is non-increasing, since increasing $\epsilon$ grows the Goldstein subgradient $\partial_\epsilon f(x)$, of which $g(x,\epsilon)$ is the element of minimal norm. If the initial exploration radius $\epsilon$ does not satisfy the termination criterion (Line 8), monotonicity allows us to find in constant time an $\epsilon_k$ that yields the descent criterion $f(x_k - t_k \hat g(x_k,\epsilon_k)) < f(x_k) - \beta t_k \|\hat g(x_k,\epsilon_k)\|^2$. This way, Algorithm 6 is further simplified, and the parameter $\gamma$ is no longer needed. A careful reading of the proofs provided in the following section shows that the convergence rate (Theorem 3) of the resulting algorithm is unchanged; however, the asymptotic convergence (Theorem 2), case (b), must be weakened: converging subsequences converge to $\epsilon$-stationary points instead of stationary points. We omit the details for the sake of concision.
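For the situation of Remark 2, where a Lipschitz constant $L$ is known, the simplified update step can be sketched as follows (our illustrative rendering of the loop described above, not the authors' code; `approx_direction` stands in for the ApproxGradient subroutine):

```python
import numpy as np

def simple_update_step(f, approx_direction, x, eps0, L, beta=0.5, gamma=0.5, eta=0.0):
    """Shrink the exploration radius until eps <= (1-beta)/(2L) * ||g(x, eps)||,
    then take the step of Proposition 3 (ii)."""
    eps = eps0
    while True:
        g = approx_direction(x, eps)
        gnorm = np.linalg.norm(g)
        if gnorm <= eta:                      # (eps, eta)-stationary: stop
            return x, 0.0, g
        if eps <= (1 - beta) / (2 * L) * gnorm:
            t = eps / gnorm                   # step size of Proposition 3 (ii)
            return x - t * g, t, g
        eps *= gamma                          # shrink the exploration radius
```

On a single smooth stratum, `approx_direction` reduces to the gradient: for $f(x) = \|x\|^2$ (so $L = 2$), starting from $x = (1, 0)$ with $\epsilon_0 = 1$ and $\beta = \gamma = 1/2$, the loop stops at $\epsilon = 0.25$ and the descent criterion holds.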

Convergence
We show convergence of Algorithm 2 towards stationary points in Theorem 2, while Theorem 3 provides a non-asymptotic sublinear convergence rate, which is by no means tight yet gives a first estimate of the number of iterations required to reach an approximate stationary point.

Theorem 2. If $\eta = 0$, then almost surely the algorithm either (a) converges in finitely many iterations to an $\epsilon$-stationary point, or (b) produces a bounded sequence of iterates $(x_k)_k$ whose converging subsequences all converge to stationary points.
As an intermediate result, we first show that the update step computed by Algorithm 3 is obtained after finitely many iterations, and we estimate its magnitude relative to the norm of the descent direction.
Lemma 1. UpdateStep$(f, x_k, \epsilon, \eta, C_k, \beta, \gamma)$ terminates in finitely many iterations. In addition, let $L$ be a Lipschitz constant for the restricted gradients $\nabla f_i$ (as in Proposition 2) in the $\epsilon$-neighborhood of $x_k$, and assume that $\frac{1-\beta}{2L} \le C_k$. If $x_k$ is not an $(\epsilon,\eta)$-stationary point, then the returned exploration radius $\epsilon_k$ satisfies:

$$\min\left( \gamma\, \min\left(C_{k+1},\ \tfrac{1-\beta}{2L}\right) \left\|\hat g\left(x_k, \tfrac{\epsilon_k}{\gamma}\right)\right\|,\ \epsilon \right) \ \le\ \epsilon_k \ \le\ \min\left( C_k \|\hat g(x_k, \epsilon_k)\|,\ \epsilon \right).$$

Moreover, the returned controlling constant $C_{k+1}$ satisfies:

$$\gamma\, \frac{1-\beta}{2L} \ \le\ C_{k+1} \ \le\ C_k.$$

Proof. If $x_k$ is a stationary point, then ApproxGradient returns the trivial descent direction $g_k = 0$, because the approximate subgradient $G_k$ contains $\nabla f(x_k)$ (Line 1); in turn, UpdateStep terminates at Line 4. Henceforth, we assume that $x_k$ is not a stationary point and that the breaking condition of Line 4 in Algorithm 3 is never reached (otherwise the algorithm terminates). Therefore, at each iteration of the main loop, either $C_{k+1}$ is replaced by $\gamma C_{k+1}$ (Line 9), or $\epsilon_k$ is replaced by $\gamma \epsilon_k$ (Line 12), until both of the following inequalities hold (Line 14): (A) $\epsilon_k \le C_{k+1} \|\hat g(x_k, \epsilon_k)\|$, and (B) the descent criterion of Proposition 3 (ii). Once $C_{k+1}$ becomes lower than $\frac{1-\beta}{2L}$, inequality (A) implies inequality (B) by Proposition 3 (ii). It then takes finitely many replacements $\epsilon_k \leftarrow \gamma \epsilon_k$ to reach inequality (A), by Proposition 3 (i). At that point (or sooner), Algorithm 3 terminates. This proves the first part of the statement, namely that UpdateStep terminates in finitely many iterations.
Next, we assume that $x_k$ is not an $(\epsilon,\eta)$-stationary point, which ensures that the main loop of UpdateStep cannot break at Line 5. We have the invariant $C_{k+1} \ge \gamma \frac{1-\beta}{2L}$: this holds at initialization ($C_{k+1} = C_k$) by assumption, and in later iterations $C_{k+1}$ is only replaced by $\gamma C_{k+1}$ whenever (A) holds but not (B), which forces $C_{k+1} \ge \frac{1-\beta}{2L}$ by Proposition 3 (ii). At the end of the algorithm, $\epsilon_k \le C_{k+1} \|\hat g(x_k, \epsilon_k)\|$ by inequality (A), and so we deduce the right inequality $\epsilon_k \le \min(C_k \|\hat g(x_k, \epsilon_k)\|, \epsilon)$.
Besides, if both (A) and (B) hold when entering the main loop (Line 11) for the first time, then ε_k = ε. Otherwise, consider the penultimate iteration of the main loop, for which the exploration radius is ε_k/γ. Then either condition (A) does not hold, namely ε_k/γ > C_{k+1}‖g(x_k, ε_k/γ)‖, or condition (B) does not hold, which by assertion (ii) of Proposition 3 implies ε_k/γ ≥ (1−β)/(2L)·‖g(x_k, ε_k/γ)‖. In either case, we deduce that ε_k ≥ γ·min(C_{k+1}, (1−β)/(2L))·‖g(x_k, ε_k/γ)‖, which together with the invariant on C_{k+1} yields the claimed lower bound.

Proof of Theorem 2. We assume that alternative (a) does not happen. By Lemma 1, Algorithm 3 terminates in finitely many iterations, and by Line 14 we have the guarantee of Equation (14). The subsequent iterate x_{k+1} is initialized at x_k − t_k g(x_k, ε_k) by MakeDifferentiable (see Algorithm 5) and replaced by a sample in a progressively shrinking ball B(x_k − t_k g(x_k, ε_k), r) until two conditions are met. The first condition is that f is differentiable at x_{k+1}; since the set of differentiability points has full measure by Rademacher's theorem, this requirement is almost surely satisfied in finitely many iterations. The second condition is the sufficient-decrease inequality of Equation (15), which by Equation (14) and continuity of f is satisfied in finitely many iterations as well. Therefore MakeDifferentiable terminates in finitely many iterations almost surely. In particular, the sequence of iterates (x_k)_k is infinite. By Equation (15) the sequence of values (f(x_k))_k is decreasing, and it must converge by compactness of the sublevel sets of f. Using Equation (15), we obtain that ε_k‖g(x_k, ε_k)‖ → 0, so that in particular either ε_k → 0 or ‖g(x_k, ε_k)‖ → 0.
Let also L be a Lipschitz constant for the restricted gradients ∇f_i (as in Proposition 2) in the ε-offset of the sublevel set {x : f(x) ≤ f(x_0)}. Up to taking L large enough, there is another Lipschitz constant L′ < L valid on the same region. By Lemma 1, upon termination of Algorithm 3, item (ii) of Proposition 3 prevents Line 9 in Algorithm 3 from ever occurring again, i.e., C_k = C′ is constant in later iterations; moreover, C′ satisfies C′ ≥ (1−β)/(2L′), just like C_0. A quick induction then yields a uniform lower bound on the constants C_k. Therefore, by Lemma 1, ε_k is sandwiched between multiples of ‖g(x_k, ε_k/γ)‖ and ‖g(x_k, ε_k)‖. In particular, using the rightmost inequality and Equation (16), we get ε_k → 0⁺; in turn, using the leftmost inequality, we get that ‖g(x_k, ε_k/γ)‖ → 0.

The sequence of iterates (x_k)_k is bounded; up to extracting a converging subsequence, we assume that it converges to some x*. Let ε′ > 0. We claim that 0 ∈ ∂_{ε′} f(x*), that is, x* is ε′-stationary. As x_k → x* and ε_k → 0, for k large enough we have B(x_k, ε_k/γ) ⊆ B(x*, ε′), which implies the corresponding inclusion of Goldstein subgradients. Besides, from Proposition 2, we have g(x_k, ε_k/γ) ∈ ∂_{ε_k/γ} f(x_k), hence g(x_k, ε_k/γ) ∈ ∂_{ε′} f(x*). In the limit, Equation (18) implies 0 ∈ ∂_{ε′} f(x*), as desired. Following [16], the intersection of the Goldstein subgradients ∂_{ε′} f(x*) over ε′ > 0 yields ∂f(x*). Hence 0 ∈ ∂f(x*) and x* is a stationary point.

Theorem 3. If η > 0, then Algorithm 2 produces an (ε, η)-stationary point using at most O(1/(η·min(η, ε))) iterations.

Proof. Assume that Algorithm 2 has run over k iterations without producing an (ε, η)-stationary point. From Algorithm 3 (Line 14), Algorithm 5 (Line 2) and the choice t_j = ε_j/‖g(x_j, ε_j)‖ of update step for j ≤ k, it holds that βε_j‖g(x_j, ε_j)‖ ≤ f(x_j) − f(x_{j+1}), and in turn Σ_{j=0}^{k−1} ε_j‖g(x_j, ε_j)‖ ≤ (f_0 − f*)/β, where f_0 := f(x_0) and f* is a minimal value of f. Besides, using Lemma 1, each ε_j is bounded below by a quantity of order min(ε, η), since ‖g(x_j, ε_j)‖ > η before stationarity is reached. The two estimates cannot simultaneously hold once k exceeds a bound of order (f_0 − f*)/(βη·min(η, ε)), which allows us to conclude.

Approximate distance to strata
The algorithm and its convergence analysis assume that strata X ∈ 𝒳 that are ε-close to an iterate x can be detected by the oracle SampleOracle(x, ε). In practice, however, computing distances d(x, X) to submanifolds may be expensive or even impossible. Instead we can hope for approximate distances d̃(x, X). Typically, when we have an assignment (x, X) ∈ R^n × 𝒳 ↦ x_X ∈ X ⊆ R^n at our disposal, we can define d̃(x, X) := d(x, x_X), and replace the exact oracle SampleOracle(x, ε) with a corresponding approximate oracle. Therefore, for the purpose of this section, we make the following assumption: to every iterate x ∈ R^n and stratum X we can associate an element x_X that belongs to X, in particular we have the corresponding oracle ApproxSampleOracle; moreover, there exists a constant a ≥ 1 such that the resulting approximate distance to strata d̃(x, X) := d(x, x_X) satisfies: ∀x ∈ R^n, ∀X ∈ 𝒳, d̃(x, X) ≤ a·d(x, X).
Note that we always have the reverse inequality d̃(x, X) ≥ d(x, X), since x_X ∈ X. In the case of the persistence map this will specialize to d(x, X) ≤ d̃(x, X) ≤ 2d(x, X), that is, a = 2; see Proposition 7.
We then replace the approximate Goldstein subgradient ∂_ε f(x) with ∂̃_ε f(x), defined in the exact same way except that it is computed from the strata satisfying d̃(x, X) ≤ ε; that is, ∂̃_ε f(x) contains ∇f(x_X) for each such stratum. The proof of Proposition 2 adapts straightforwardly to the following statement.

Figure 2: Sublevel set and superlevel set filtrations illustrated on graphs. (a) Input graph (V, E) along with the values of a function x : V → R (blue). (b, c, d) Sublevel sets for t = 1, 2, 3 respectively. (e, f, g) Superlevel sets for t = 3, 2, 1 respectively.

Proposition 4. Let x ∈ R^n and ε > 0. Assume that f is stratifiably smooth. Let L be a Lipschitz constant of the gradients ∇f_i restricted to B(x, aε) ∩ X_i, where f_i is some local C² extension of f|_{X_i} and X_i ∈ 𝒳_{x,ε} is top-dimensional. Then we have: ∂_ε f(x) ⊆ ∂̃_ε f(x) ⊆ ∂_{aε} f(x) + B(0, (a+1)Lε).

Proof. The inclusion ∂_ε f(x) ⊆ ∂̃_ε f(x) is clear. Conversely, let ∇_X f(x′) ∈ ∂̃_ε f(x), where X is a top-dimensional stratum incident to x′ and |x′ − x| ≤ ε. We then have d̃(x, X) ≤ aε, and hence x_X is a point of B(x, aε) which is (a+1)ε-close to x′. Therefore ∇_X f(x′) ∈ ∂_{aε} f(x) + B(0, (a+1)Lε). The rest of the proof is then conducted as in Proposition 2.
The vector ĝ(x, ε) with minimal norm in ∂̃_ε f(x) can then serve as the new descent direction in place of g(x, ε).

Proposition 5. Let f be stratifiably smooth, and let x be a non-stationary point. Let 0 < β < 1 and ε_0 > 0. Denote by L a Lipschitz constant for all gradients of the restrictions f_i in the ball B(x, aε_0) (as in Proposition 4). Then the conclusions of Proposition 3 hold for the direction ĝ(x, ε), with ε replaced by aε.
Proof. The proof of Proposition 3 can be replicated by replacing ε with aε and using Proposition 4 instead of Proposition 2.
In light of this result, we can use g_k = ĝ(x_k, ε_k) as a descent direction, which in practice simply amounts to replacing the exact oracle SampleOracle(x_k, ε_k) in Algorithm 4 with the approximate oracle ApproxSampleOracle(x_k, ε_k). The only difference is that the assignment of the update step in Algorithm 3 (Line 7) should take the constant a into account, the exploration radius being compared to aε_k rather than ε_k. The convergence analysis of Section 3.2 holds for this algorithm as well, and the proofs of Theorems 2 and 3 are unchanged.

Application to Topological Data Analysis
In this section, we define the persistence map PH : R^n → Bar, a central descriptor in TDA that gives rise to the prototypical stratifiably smooth objective functions f of this work. We refer the reader to [36,62,76] for full treatments of the theory of persistence. We then introduce the stratification that makes PH a stratifiably smooth map, by means of the permutation group. Using the associated Cayley graph, we give a way to implement the oracle SampleOracle(x, ε) that samples points in nearby top-dimensional strata, which is the key ingredient of Algorithm 4 for computing descent directions.

The Persistence Map
Persistent Homology and Barcodes. Let n ∈ N, and let {v_1, …, v_n} be a (finite) set of vertices. A simplicial complex K is a subset of the power set P({v_1, …, v_n}) whose elements are called simplices, and which is closed under inclusion: if σ ∈ K is a simplex and σ′ ⊆ σ, then σ′ ∈ K. The dimension of the complex is the maximal cardinality of its simplices minus one. In particular, a 1-dimensional simplicial complex is simply an undirected graph.
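The closure condition translates directly into code; the following sketch (our own illustrative helpers `is_complex` and `dim`, not part of the paper's implementation) checks it on simplices given as sets of vertex labels:

```python
from itertools import combinations

def is_complex(K):
    """Closure under inclusion: every nonempty proper subset of a
    simplex must itself be a simplex of K."""
    simplices = {frozenset(s) for s in K}
    return all(frozenset(face) in simplices
               for s in simplices
               for r in range(1, len(s))
               for face in combinations(s, r))

def dim(K):
    """Dimension of the complex: maximal simplex cardinality minus one."""
    return max(len(s) for s in K) - 1
```

For instance, the graph with simplices {1}, {2}, {3}, {1,2}, {2,3} is a 1-dimensional complex, while {1}, {1,2} is not a complex since the face {2} is missing.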
A filter function is a function on the vertices of K, which we equivalently view as a vector x ∈ R^n. Given t ∈ R, we have the subcomplexes K_{≤t} = {σ ∈ K : ∀v ∈ σ, x(v) ≤ t}. This yields a nested sequence of subcomplexes called the sublevel set filtration of K by x; see Figure 2 for an illustration on graphs. The (ordinary) persistent homology of x in degree p ∈ {0, …, dim K} records topological changes in Equation (19) by means of points (b, d) ∈ R², with b < d, called intervals. For instance, in degree p = 0, an interval (b, d) corresponds to a connected component that appears in K_{≤b} and is merged with an older component in K_{≤d}. In degrees p = 1 and p = 2, intervals track loops and cavities respectively, and more generally an interval (b, d) in degree p tracks a p-dimensional sphere that appears in K_{≤b} and persists up to K_{≤d}. Note that there are possibly infinite intervals (b, ∞) for p-dimensional cycles that persist forever in the filtration of Equation (19). Such intervals are not easy to handle in applications, and it is common to consider the (extended) persistent homology of x, for which they do not occur, i.e.
we append to Equation (19) the following sequence of pairs involving the superlevel sets K_{≥t} := {σ ∈ K : ∀v ∈ σ, x(v) ≥ t}:

K = (K, ∅) → ⋯ → (K, K_{≥s}) → (K, K_{≥t}) → ⋯ → (K, K), for s ≥ t.

The set Bar of barcodes can be made into a metric space as follows. Given two barcodes D := {(b, d)} and D′ := {(b′, d′)}, a partial matching γ : D → D′ is a bijective map from some subset A ⊆ D to some B ⊆ D′. For q ≥ 1, the q-th diagram distance W_q(D, D′) is the following cost of transferring intervals (b, d) to intervals (b′, d′), minimized over partial matchings γ between D and D′:

W_q(D, D′)^q = min_γ [ Σ_{(b,d)∈A} ‖(b, d) − γ(b, d)‖_∞^q + Σ_{(b,d)∉A∪B} ((d − b)/2)^q ].

In particular, the intervals that are not in the domain A and image B of γ contribute to the total cost relative to their distances to the diagonal {b = d} ⊂ R². The Stability Theorem [27,28] implies that the map PH : R^n → Bar, which we refer to as the persistence map in what follows, is Lipschitz continuous.
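For small barcodes, W_q can be evaluated by brute force via the standard reduction that augments each barcode with the diagonal projections of the other's intervals; the sketch below (our own illustrative code, assuming the ℓ∞ ground metric, exponential in barcode size) makes the definition concrete rather than efficient:

```python
import itertools

def wasserstein(D1, D2, q=1.0):
    """Brute-force q-th diagram distance between two small barcodes.

    Each interval may be matched to an interval of the other barcode or
    to its nearest diagonal point ((b+d)/2, (b+d)/2); two diagonal
    points match at zero cost (both intervals unmatched)."""
    proj = lambda p: ((p[0] + p[1]) / 2.0,) * 2

    def dist(p, r):
        if p[0] == p[1] and r[0] == r[1]:
            return 0.0  # diagonal-to-diagonal: no cost
        return max(abs(p[0] - r[0]), abs(p[1] - r[1]))  # l_inf ground metric

    A = list(D1) + [proj(p) for p in D2]
    B = list(D2) + [proj(p) for p in D1]
    best = min(
        sum(dist(A[i], B[j]) ** q for i, j in enumerate(perm))
        for perm in itertools.permutations(range(len(B)))
    )
    return best ** (1.0 / q)
```

For example, matching the single interval (0, 1) entirely to the diagonal costs 0.5 in W_1, in line with the convention that unmatched intervals pay their distance to the diagonal.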

Differentiability of Persistent Homology
Next we recall from [53] the notions of differentiability for maps in and out of Bar, and the differentiability properties of PH. Note that the results of [53] focus on ordinary persistence, yet they easily adapt to extended persistence; see e.g. [75].
Given r ∈ N, we define a local C^r coordinate system as a collection of C^r real-valued maps coming in pairs b_i, d_i : U → R, defined on some open Euclidean set U ⊆ R^n, indexed by a finite set I, and satisfying b_i(x) ≤ d_i(x) for all x ∈ U and i ∈ I. A local C^r coordinate system is thus equally represented as a barcode-valued map B : x ∈ U ↦ {(b_i(x), d_i(x))}_{i∈I} ∈ Bar, where each interval (b_i(x), d_i(x)) is identified and tracked in a C^r manner. Similarly, a map V : Bar → R^m is r-differentiable at D ∈ Bar if V ∘ B is C^r in a neighborhood of the origin for all n ∈ N and all local C^r coordinate systems B defined around the origin such that B(0) = D.
These notions compose via the chain rule, so for instance an objective function f = V ∘ B : R^n → R^m is differentiable in the usual sense as soon as B and V are.
We now define the stratification 𝒳 of R^n such that the persistence map B = PH is r-differentiable (for any r) over each stratum. Denote by Σ_n the group of permutations of {1, …, n}. Each permutation π ∈ Σ_n gives rise to a closed polyhedron S_π := {x ∈ R^n : x_{π(1)} ≤ ⋯ ≤ x_{π(n)}}, which is a cell in the sense that its (relative) interior is a top-dimensional stratum of our stratification 𝒳. The (relative) interiors of the various faces of the cells S_π form the lower-dimensional strata. In terms of filter functions, a stratum is simply a maximal subset whose functions induce the same pre-order on the vertices of K. We then have that any persistence-based loss is stratifiably smooth with respect to this stratification.
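Identifying the cell containing a given filter amounts to a single sort; a minimal sketch (0-indexed, under the convention S_π = {y : y_{π(1)} ≤ ⋯ ≤ y_{π(n)}}; `stratum_of` is our own illustrative helper):

```python
def stratum_of(x):
    """Permutation pi whose cell S_pi contains the filter x: reading the
    coordinates of x in the order pi[0], pi[1], ... gives nondecreasing
    values. A generic x (no ties) lies in exactly one such cell."""
    return tuple(sorted(range(len(x)), key=lambda i: x[i]))
```

Two filters inducing the same order on the vertices return the same permutation, i.e., they lie in the same stratum.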
Proposition 6. Let V : Bar → R be a 2-differentiable map. Then the objective function f := V ∘ PH is stratifiably smooth for the stratification 𝒳 induced by the permutation group Σ_n.
Proof. By Proposition 4.23 and Corollary 4.24 in [53], on each cell S_π we can define a local C² coordinate system that consists of linear maps b_i, d_i : S_π → R; in particular, it admits a C² extension on a neighborhood of S_π. Since V is globally 2-differentiable, the chain rule yields a local C² extension f_i of f|_{S_π} = (V ∘ PH)|_{S_π}.
Remark 4. Note that the condition that f = V ∘ PH is a proper map, as required for the analysis of Algorithm 2, sometimes fails because PH may not have compact level sets. The intuitive reason is that a filter function x can take an arbitrarily large value on two distinct entries (one simplex creating a homological cycle that the other destroys immediately) without this being reflected in the barcode PH(x).
Hence the fibers of PH are not bounded. However, when the simplicial complex K is (homeomorphic to) a compact oriented manifold, any filter function must reach its maximum at the simplex that generates the fundamental class of the manifold (or of one of its components); hence PH has compact level sets in this case. Finally, we note that it is always possible to turn a loss function f based on PH into a proper map by adding a regularization term that controls the norm of the filter function x.

Exploring the space of strata
In the setting of Proposition 6, the objective function f = V ∘ PH : R^n → R is a stratifiably smooth map, where the stratification involved is induced by the group Σ_n of permutations of {1, …, n}, with cells S_π as in Equation (22). In order to calculate the approximate subgradient ∂̃_ε f(x), we need to compute the set 𝒳_{x,ε} of cells S_π that are at Euclidean distance no greater than ε from x:

𝒳_{x,ε} := {S_π : d_2(x, S_π) ≤ ε}, where d_2(x, S_π) = min_{y ∈ S_π} ‖x − y‖_2. (23)

Estimating distances to strata. In practice, solving the quadratic problem of Equation (23) to compute d_2(x, S_π) can be done in O(n log n) time using solvers for isotonic regression [9]. Since we want to approximate many such distances to neighboring cells, we instead propose the following estimate, which boils down to O(1) computations per cell. For any π ∈ Σ_n, we consider the mirror of x in S_π, denoted by x_π ∈ R^n and obtained by permuting the coordinates of x according to π: (x_π)_{π(i)} := x_i for all 1 ≤ i ≤ n, so that x_π ∈ S_π by construction. In the rest of this section, we assume that the point x is fixed and has increasing coordinates, x_i ≤ x_{i+1}, which can always be achieved after a suitable re-ordering of these coordinates. The proxy d_2(x, x_π) then yields a good estimate of d_2(x, S_π), as expressed by the following result.
Proposition 7. For any permutation π ∈ Σ_n, we have: d_2(x, S_π) ≤ d_2(x, x_π) ≤ 2·d_2(x, S_π).

Proof. The left inequality is clear from the fact that x_π belongs to the cell S_π. To derive the right inequality, let x̂_π be the projection of x onto S_π. It is a well-known fact in the discrete optimal transport literature, or alternatively a consequence of Lemma 2 below, that d_2(x_π, x̂_π) ≤ d_2(x, x̂_π). Consequently, by the triangle inequality, d_2(x, x_π) ≤ d_2(x, x̂_π) + d_2(x_π, x̂_π) ≤ 2·d_2(x, x̂_π) = 2·d_2(x, S_π).
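The mirror construction is a simple re-indexing; the sketch below (our own illustrative helpers `mirror` and `d2`, 0-indexed) places the rank-r smallest value of a sorted x at coordinate pi[r], so that x_π lies in S_π by construction:

```python
def mirror(x, pi):
    """Mirror x_pi of a nondecreasingly sorted x in the cell S_pi:
    reading x_pi at coordinates pi[0], pi[1], ... gives the sorted
    values of x, hence x_pi belongs to S_pi."""
    x_pi = [0.0] * len(x)
    for rank, i in enumerate(pi):
        x_pi[i] = x[rank]
    return x_pi

def d2(u, v):
    """Euclidean distance between two vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```

For x = (0, 1) and the transposition pi = (1, 0), the mirror is (1, 0) while the projection of x onto S_π is (0.5, 0.5); here d_2(x, x_π) = 2·d_2(x, S_π), the extremal case of the factor a = 2 in Proposition 7.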
Our approximate oracle for estimating the Goldstein subgradient (see Section 3.3) computes the set of mirrors x_π that are at most ε-away from the current iterate x := x_k, that is: ApproxSampleOracle(x, ε) := {x_π : π ∈ Σ_n, d_2(x, x_π) ≤ ε}. Recall that the oracle is called several times in Algorithm 3 when updating the current iterate x_k with a decreasing exploration radius ε_k. For the oracle above we have ApproxSampleOracle(x, ε′) ⊆ ApproxSampleOracle(x, ε) whenever ε′ ≤ ε, so that once we have computed ApproxSampleOracle(x_k, ε_k) for an initial value ε_k and the current x_k, we can retrieve ApproxSampleOracle(x_k, ε′) for any ε′ < ε_k in a straightforward way. This avoids re-sampling neighboring points around x_k and recomputing the corresponding gradients each time ε_k decreases, saving a significant amount of computational resources.
Sampling in nearby strata. In order to implement the oracle ApproxSampleOracle(x, ε), we consider the Cayley graph with permutations Σ_n as vertices and edges between permutations that differ by an elementary transposition (one that swaps consecutive elements). In other words, the Cayley graph is the dual of the stratification of filter functions: a node corresponds uniquely to a cell S_π and an edge corresponds to a pair of adjacent cells. We explore this graph starting at the identity permutation using an arbitrary exploration procedure, for instance Depth-First Search (DFS). During the exploration, assume that the current node, a permutation π, has not yet been visited (otherwise we discard it). If d_2(x, x_π) ≤ ε, then we record the mirror point x_π. Otherwise, d_2(x, x_π) > ε, and in this case we do not explore the children of π. Note that given a child π′ of π, we retrieve x_{π′} and d_2(x, x_{π′}) from x_π and d_2(x, x_π) in O(1) time. The following result entails that this procedure indeed computes ApproxSampleOracle(x, ε).

Proposition 8. Let π′ ∈ Σ_n be a permutation different from the identity. Then there is at least one parent π of π′ in the Cayley graph such that d_2(x, x_π) ≤ d_2(x, x_{π′}).
Proposition 8 is a direct consequence of the following well-known, elementary lemma.

Lemma 2. Let x, y ∈ R^n be two vectors whose coordinates are sorted in the same order, namely x_i ≤ x_j ⇔ y_i ≤ y_j. Given a permutation π ∈ Σ_n, let inv(π) be its set of inversions, i.e. the pairs (i, j) satisfying (j − i)(π(j) − π(i)) < 0. Then d_2(x, y_π) is nondecreasing with respect to inclusion of inversion sets: if inv(π) ⊆ inv(π′), then d_2(x, y_π) ≤ d_2(x, y_{π′}).

Remark 6. For an arbitrary filter function x, the computation of the barcode PH(x) has complexity O(#K³), where #K is the number of vertices and edges in the graph K (or the number of simplices if K is a simplicial complex). In the SGS algorithm we need to compute PH(x_π) for each cell S_π near the current iterate x_k, which can quickly become too expensive. Below we describe two heuristics that we implemented in some of our experiments (see Section 5.3) to reduce the time complexity. The first method bounds the number of strata that can be explored with a hyper-parameter N ∈ N, enabling precise control of the memory footprint of the algorithm. In this case, exploring the Cayley graph of Σ_n with Dijkstra's algorithm is desirable, since it retrieves the N strata that are closest to the current iterate x_k. Note that in the original Dijkstra's algorithm for computing shortest-path distances to a source node, a node may be reinserted in the priority queue each time one of its neighbors is visited. In our case, however, we have the exact distance d_2(x, x_π) to the source as soon as we encounter a new node π, so we can simplify Dijkstra's algorithm by treating each node of the graph at most once. The second approach is memoization: inside a cell S_π, all filter functions induce the same pre-order on the n vertices of K, hence the knowledge of the barcode PH(x_π) of one of its filter functions allows one to compute PH(x′_π) for any other x′_π ∈ S_π in O(#K) time. We take advantage of this fact by recording the cells S_π (and the barcode PH(x_π) of one filter function x_π therein) that are met by the SGS (or GS)
algorithm during the optimization, thereby avoiding redundant computations whenever the cell S_π is met a second time.

Experiments
In this section we apply our approach to the optimization of objective functions based on the persistence map PH, and compare it with other methods. There are two natural classes of objective functions that we can build on top of the barcode PH(x). The first consists in turning PH(x) into a vector using one of the many existing vectorization techniques for barcodes [14,2,24,18] and then applying any standard objective function defined on a Euclidean vector space. In this work we focus on the second type of objective functions, which are based on direct comparisons of barcodes by means of the metrics W_q on Bar introduced in Section 4. We consider three experimental settings of increasing complexity. Section 5.1 is dedicated to the optimization of an elementary objective function in TDA that allows for explicit comparisons of SGS with other optimization techniques. Sections 5.2 and 5.3 introduce two novel topological optimization tasks: topological registration, for translating filter functions between two distinct simplicial complexes, and topological Fréchet means, for smoothing the Mapper graphs built on top of a data set.
Implementation. Our implementation is done in Python 3 and relies on TensorFlow [1] for automatic differentiation, Gudhi [56] for TDA-related computations (barcodes, distances W_q, Mapper graphs), and cvxpy [32] for solving the quadratic minimization problem involved in Algorithm 4. It handles both ordinary and extended persistence and complexes of arbitrary dimension, and can easily be tuned to accommodate general objective functions (assuming these are provided in an automatic differentiation framework). Our code is publicly available at https://github.com/tlacombe/topt.

Proof-of-concept: Minimizing total extended persistence
The goal of this experiment is to provide a simple yet instructive framework in which one can clearly compare different optimization methods. Here we consider vanilla Gradient Descent (GD), its variant with learning-rate Decay (GDwD), the Gradient Sampling (GS) methodology, and our Stratified Gradient Sampling (SGS) approach. Recall that GD is very well suited to smooth optimization problems, while GS deals with objective functions that are merely locally Lipschitz with a dense subset of differentiability. To some extent, SGS is specifically tailored to functions with an intermediate degree of regularity, since their restrictions to strata are assumed to be smooth, and this type of function arises naturally in TDA. We consider the elementary example of filter functions x on the graph obtained by subdividing the unit interval with n vertices, and the associated (extended) barcodes PH(x) = PH_0(x) in degree 0. When the target diagram is empty, D = ∅, the objective x ↦ W_1(PH(x), ∅) to minimize is also known in the TDA literature as the total extended persistence of PH(x), denoted Pers(x). Throughout the minimization, the sublevel sets of x are simplified until they become topologically trivial: Pers(x) is minimal if and only if x is constant. This elementary optimization problem enables a clear comparison of the GD, GS and SGS methods.
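To make the objective concrete: the degree-0 sublevel intervals of a filter on a graph can be computed with a union-find sweep implementing the elder rule. The sketch below is our own illustration (ordinary, not extended, persistence: the essential component yields an infinite interval, whereas extended persistence would pair it with the global maximum; zero-length intervals are dropped):

```python
def ph0(x, edges):
    """Ordinary degree-0 persistence of the sublevel-set filtration of a
    graph: vertex v enters at x[v], an edge at the max of its endpoint
    values; when an edge merges two components, the younger one (larger
    minimum) dies (elder rule)."""
    parent = list(range(len(x)))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    bars = []
    for u, v in sorted(edges, key=lambda e: max(x[e[0]], x[e[1]])):
        t = max(x[u], x[v])
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # the edge closes a cycle: nothing dies in degree 0
        if x[ru] > x[rv]:
            ru, rv = rv, ru  # ru now roots the older component
        if x[rv] < t:
            bars.append((x[rv], t))
        parent[rv] = ru
    bars += [(x[u], float("inf")) for u in range(len(x)) if find(u) == u]
    return sorted(bars)
```

On the experiment's path graph with x_0 = (0.4, 0.72, 0, 0.3, 0.14), the two finite intervals correspond to the two local minima 0.14 and 0.4 merging into the older component born at 0.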
For each mode ∈ {GD, GDwD, GS, SGS} we get a gradient g_k^mode and thus build a sequence of iterates x_{k+1} = x_k − ε_k g_k^mode. For GD, the update step ε_k = ε is constant; for GDwD it is set to ε_k = ε/(1+k); and for mode ∈ {GS, SGS} it is reduced until Pers(x_k − ε_k g_k^mode) < Pers(x_k) − βε_k‖g_k^mode‖² (and, in addition, ε_k < C_k‖g_k^SGS‖ for SGS). In each case the condition ‖g_k^mode‖ ≤ η is used as a stopping criterion. For the experiments, the graph consists of n = 5 vertices, x_0 = (0.4, 0.72, 0, 0.3, 0.14), ε = η = 0.01, and we also set the hyper-parameters γ = 0.5 and β = 0.5 for the GS and SGS algorithms. The results are illustrated in Figure 3.
Whenever differentiable, the objective Pers has gradient norm at least 1, so in particular it is not differentiable at its minima, which consist of the constant functions. Therefore GD oscillates around its optimal value: the stopping criterion ‖g_k^GD‖ ≤ η is never met, which prevents detecting convergence. Setting ε_k to decay at each iteration in GDwD theoretically ensures the convergence of the sequence (x_k)_k, but comes at the expense of a dramatic decrease in the convergence rate.
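The oscillation/decay dichotomy can be reproduced on a one-dimensional toy: f(z) = |z| has unit gradient norm away from 0, like Pers near its minima. The snippet below (our own toy, not the paper's Algorithm 2) runs a generic first-order loop with the two step-size schedules:

```python
def descend(grad, x0, step, n_iter=100):
    """Generic first-order loop; the modes differ only in how
    step(k, g) chooses the step length."""
    x = x0
    for k in range(n_iter):
        g = grad(x)
        x = x - step(k, g) * g
    return x

# On f(z) = |z| the (sub)gradient is sign(z): a constant step eps makes
# GD oscillate at distance ~eps/2 from the minimizer, while the
# eps/(1+k) decay of GDwD converges, but only at a logarithmic pace.
sign = lambda z: 1.0 if z >= 0 else -1.0
eps = 0.01
x_gd = descend(sign, 0.105, lambda k, g: eps)
x_gdwd = descend(sign, 0.105, lambda k, g: eps / (1 + k))
```

With these values, constant-step GD settles into a ±0.005 oscillation, while the decaying schedule keeps the iterate positive and, after 100 iterations, has only traveled about 0.01·H_100 ≈ 0.052 of the initial 0.105 gap.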
In contrast, the GS and SGS methods use a fixed step size ε, yet they converge, since they compute a descent direction by minimizing ‖g‖ over the convex hull of the surrounding gradients {∇Pers(x_k), ∇Pers(x^(1)), …, ∇Pers(x^(m))}, as described in Algorithms 1 and 4. Here x^(1), …, x^(m) are either sampled randomly around the current iterate x_k (with m = n+1) for GS, or in the strata around x_k (if any) for SGS. We observe that SGS takes fewer iterations to converge: 137 iterations versus ≈165 iterations for GS (averaged over 10 runs). This is because in GS the convex hull of the random sampling {x^(1), …, x^(m)} may be far from the actual generalized gradient ∂_ε f, incidentally producing sub-optimal descent directions and missing local minima, while in SGS the sampling takes all nearby strata into account, which guarantees a reliable direction (as in Proposition 3); in fact, since the objective Pers restricts to a linear map on each stratum, the approximate gradient ∂̃_ε f(x_k) equals ∂_ε f(x_k).
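The shared subproblem of GS and SGS is finding the minimum-norm vector in the convex hull of the collected gradients. The paper's implementation solves this small QP exactly with cvxpy; as a dependency-free sketch (our own, with only O(1/k) accuracy), a Frank-Wolfe iteration over the simplex of convex weights:

```python
def min_norm_in_hull(G, n_iter=2000):
    """Frank-Wolfe sketch of the descent-direction subproblem: minimize
    ||sum_i lam_i G[i]|| over convex weights lam on the gradients G
    (given as rows). Illustrative stand-in for the exact QP solve."""
    m, d = len(G), len(G[0])
    lam = [1.0 / m] * m

    def point(weights):
        return [sum(weights[i] * G[i][j] for i in range(m)) for j in range(d)]

    for k in range(n_iter):
        p = point(lam)
        # linear minimization oracle over the simplex: the vertex
        # (single gradient) most negatively correlated with p
        scores = [sum(p[j] * G[i][j] for j in range(d)) for i in range(m)]
        i_star = min(range(m), key=scores.__getitem__)
        gamma = 2.0 / (k + 2.0)
        lam = [(1 - gamma) * w for w in lam]
        lam[i_star] += gamma
    return point(lam)
```

With opposite gradients (±e_1), the returned point is close to 0, correctly signalling near-stationarity even though each individual gradient has norm 1.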
Another difference is that GS samples n+1 = 6 nearby points at each iteration k, while SGS samples as many points as there are nearby strata, and for early iterations there is just one such stratum. In practice, this results in a total running time of ≈2.7s for GS vs. ≈2.4s for SGS to reach convergence.

Topological Registration
We now present the more sophisticated optimization task of topological registration. This problem takes inspiration from registration experiments in shape and medical image analysis [49,33,35], where one wants to translate noisy real-world data (e.g. MRI images of a brain) into a simpler and unified format (e.g. a given template of the brain).
Problem formulation. In a topological analog of this problem, the observation consists of a filter function F defined on a simplicial complex K which may have, for instance, a large number of vertices, and the template consists of a simplicial complex K′ simpler than K (e.g. with fewer vertices). The goal is then to find a filter function x on K′ such that (K′, x) faithfully recovers the topology of the observation (K, F). Formally, we minimize the q-th distance (q ∈ [1, +∞]) between their barcodes,

x ↦ W_q(PH(x, K′), PH(F, K)), (25)

where we include the complexes in the notation PH(F, K) of the barcodes to make a clear distinction between filter functions defined on K and on K′.

Experiment. We minimize (25) using our SGS approach. The observed simplicial complex K is taken to be the subdivision of the unit circle with 120 vertices; see Figure 4. Let F = F_0 + ζ, where F_0 ∈ R^120 is a piecewise linear map satisfying F_0[0] = 0, F_0[30] = 1, F_0[45] = 0.05, F_0[60] = 0.35, F_0[75] = 0.1, F_0[90] = 0.8, and ζ is a uniform random noise in [0, 0.1]^120. The (extended) barcode of F_0 consists of two long intervals (0, 1), (0.05, 0.9) and one smaller interval (0.1, 0.35) that corresponds to the small variation of F_0 (Figure 4 (a), left of the plot). The stability of persistent homology implies that the barcode of F, which is a noisy version of F_0, contains a perturbation of these three intervals along with a collection of small intervals of the form (b, d) with d − b < 0.1, since 0.1 is the amplitude of the noise ζ. The persistence diagram representation of this barcode can be seen in Figure 4 (d, blue triangles): the three intervals are represented by the points away from the diagonal {x = y} ⊂ R², and the topological noise is accounted for by the points close to the diagonal. We propose to compute a topological registration x of (K, F) for two simpler circular complexes with n = 4 and n = 15 vertices respectively (Figure 4 (b, c)). We initialize the vertex values x_0 randomly (uniformly in [0, 1]^n),
and minimize (25) via SGS. We use q = 2, and the parameters of Algorithm 4 are set to ε = 0.01, η = 0.01, β = 0.5, γ = 0.5.
With n " 4 vertices, the final filter function x returned by Algorithm 4 reproduces the two main peaks of F that correspond to the long intervals p0, 1q, p0.05, 0.9q, but it fails to reproduce the small bump corresponding to p0.1, 0.35q as it lacks the degrees of freedom to do so.A fortiori the noise appearing in F is completely absent in x, as observed in Figure 4 (d) where the two points appearing in the barcode of x 0 are pushed towards the two points of the target barcode of F as it is the best way to reduce the distance W q .Using n " 15 vertices the barcode PHpxq retrieves the third interval p0.1, 0.35q as well and thus the final filter function x reaches a lower objective value.However x also fits some of the noise, as one of the interval in the diagram of x k is pushed toward a noisy interval close to the diagonal (see Figure 4 (d)).

Topological Mean of Mapper graphs
In our last experiment, we provide an application of our SGS algorithm to the Mapper data visualization technique [67]. Intuitively, given a data set X, Mapper produces a graph Map(X) whose attributes, such as connected components and loops, reflect similar structures in X. For instance, the branches of the Mapper graph in [65] correspond to the differentiation of stem cells into specialized cells. Besides its potential for applications, Mapper enjoys strong statistical and topological properties [12,21,20,19,59,6].
More precisely, we propose an optimization problem to overcome one of the main limitations of Mapper, namely the fact that it sometimes contains irrelevant features, and we solve it with the SGS algorithm. For a proper introduction to Mapper and its main parameters, we refer the reader to Appendix A.
Problem formulation. It is well known that the Mapper graph is not robust to certain changes of parameters, which may introduce artificial graph attributes; see [3] for an approach to curating Map(X) of its irrelevant attributes. In our case, we assume that Map(X) is a graph embedded in some Euclidean space R^d (d = 2 in our experiments), which is typically the case when the data set X is itself in R^d, and we modify the embedding of the nodes in order to cancel geometric outliers. For notational clarity, we distinguish between the embedded graph Map(X) ⊆ R^d and its underlying abstract graph K. Let n be the number of vertices of K.
We propose an elementary scheme inspired by [20] to produce a simplified version of Map(X). For this, we consider a family of bootstrapped data sets X̃_1, …, X̃_k obtained by sampling the data set card(X) times with replacement, from which we derive new Mapper graphs K_1, …, K_k, whose embeddings Map(X̃_1), …, Map(X̃_k) in R^d are fixed during the experiment. In particular, given a fixed unit vector e in R^d, the projection F_e onto the line parametrized by e induces filter functions on each K_i, hence barcodes PH(F_e, K_i).
We minimize the following objective over filter functions F̃_e ∈ R^n:

F̃_e ↦ Σ_{i=1}^k W_2(PH(F̃_e, K), PH(F_e, K_i))² ∈ R. (26)

By viewing the optimized filter function F̃_e as the coordinates of the vertices of K along the e-axis, we obtain a novel embedding of the Mapper graph Map(X) in R^d that is the topological barycenter of the family (F_e, Map(X̃_i))_i.
To further improve the embedding Map(X), we jointly optimize Eq. (26) over a family {e_j}_j of directions. Intuitively, irrelevant graph attributes do not appear in most of the subgraphs Map(X̃_i) and are thus removed in the optimized embedding of Map(X).
Remark 7. In some sense, the minimization of Equation (26) corresponds to pulling back to filter functions the well-known minimization problem D ∈ Bar ↦ Σ_{i=1}^k W_2(D, D_i)², which defines the barycenter, or Fréchet mean, of the barcodes D_1, …, D_k; see [70]. Indeed, a topological mean of a set of filter functions x_1, …, x_k on simplicial complexes K_1, …, K_k can be defined as a minimizer of x ∈ R^n ↦ Σ_i W_2(PH(x, K), PH(x_i, K_i))². In our experiment, x is interpreted as a radial projection onto the e-axis, and in fact, when considering several directions {e_j}_j, the mean resulting from the optimization is that of the so-called Persistent Homology Transform from [71].
Experiment. To illustrate this new method for Mapper, we consider a data set X of single cells characterized by chromatin folding [60]. Each cell is encoded by the squared distance matrix M of its DNA fragments. This data set was previously studied in [22], where it was shown that the Mapper graph could successfully capture the cell cycle, represented as a big loop in the graph. However, this attribute could only be observed by carefully tuning the parameters. Here we start with a Mapper graph computed from arbitrary parameters, and then curate the graph using bootstrap iterates as explained in the previous paragraphs.
Specifically, we processed the data set X with the stratum-adjusted correlation coefficient (SCC) [74], with 500kb resolution and convolution parameter h = 1 on all chromosomes. Then we ran a kernel PCA on the SCC matrix to obtain two lenses, and computed a Mapper graph from these lenses using resolution 15, gain 0.4 on both lenses, and hierarchical clustering with threshold 2 on the Euclidean distance. See Appendix A for a description of these parameters. The resulting Mapper graph Map(X), displayed in Figure 5 (upper left), contains the expected main loop associated with the cell cycle, but it also contains many spurious branches. Computing the Mapper graph with the same parameters on a bootstrap iterate results in fewer branches, but also in a coarser version of the graph (Figure 5, upper middle).
After using the SGS algorithm capped at 150 strata (see Remark 6), with $\epsilon = 0.01$, $\eta = 0.01$, $\beta = 0.5$, $\gamma = 0.5$, initialized with $\mathrm{Map}(X)$, and with the loss computed from 10 bootstrap iterates and 4 directions with angles $\{0, \pi/2, \pi/4, -\pi/4\}$, the resulting Mapper graph, shown in Figure 5 (upper right), offers a good compromise: its resolution remains high, and it is curated of irrelevant and artifactual features.

Figure 1: A proof-of-concept comparison between different gradient descent techniques. The objective function $f : (z_1, z_2) \in \mathbb{R}^2 \mapsto 10 \log(1 + |z_1|) + z_2^2 \in \mathbb{R}$ (blue surface) attains its minimum at $x^* = (0, 0)$ and is not smooth along the line $\{z_1 = 0\}$. In particular, $\|\nabla f\| > 1$ around $x^*$, thus the gradient norm cannot be used as a stopping criterion. The traditional GD, for which updates are given by $x_{k+1} = x_k - \frac{\lambda_0}{k+1} \nabla f(x_k)$, oscillates around $\{z_1 = 0\}$ due to the non-smoothness of $f$, and asymptotically converges toward $x^*$ because of the decaying learning rate $\frac{\lambda_0}{k+1}$. In the meantime, non-smooth optimization methods that sample points around $x_k$ in order to produce reliable descent directions converge in finite time. Namely, the classical Gradient Sampling method randomly samples 3 points and manages to reach an $(\epsilon, \eta)$-stationary point of $f$ in $\approx 20.6 \pm 3.9$ iterations (averaged over 100 experiments), while our stratified approach improves on this by leveraging the fact that we explicitly have access to the two strata $\{z_1 < 0\}$ and $\{z_1 > 0\}$ where $f$ is differentiable. In particular, we only sample additional points when $x_k$ is $\epsilon$-near the line $\{z_1 = 0\}$, and reach an $(\epsilon, \eta)$-stationary point in 18 iterations. Right plots showcase the evolution of the objective value $f(x_k)$ and the distance to the minimum $\|x_k - x^*\|$ across iterations. Parameters: $x_0 = (0.8, 0.8)$, $\lambda_0 = 10^{-1}$, $\epsilon = 10^{-1}$, $\eta = 10^{-2}$.
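The classical Gradient Sampling step used in this comparison can be sketched on the same toy objective. The descent direction is the minimum-norm element of the convex hull of gradients sampled in an $\epsilon$-ball around the iterate, followed by a backtracking line search; the solver choice (SLSQP for the small simplex-constrained quadratic program) and the helper names are ours, for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

def f(z):
    # toy objective of Figure 1: nonsmooth along {z_1 = 0}
    return 10 * np.log(1 + abs(z[0])) + z[1] ** 2

def grad_f(z):
    # gradient, well defined away from {z_1 = 0}
    return np.array([10 * np.sign(z[0]) / (1 + abs(z[0])), 2 * z[1]])

def min_norm_element(G):
    """Minimum-norm vector in the convex hull of the rows of G,
    obtained by minimizing ||G^T w||^2 over the probability simplex."""
    m = len(G)
    res = minimize(lambda w: np.sum((w @ G) ** 2), np.full(m, 1 / m),
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
                   bounds=[(0, 1)] * m, method="SLSQP")
    return res.x @ G

def gradient_sampling(x0, eps=1e-1, eta=1e-2, n_samples=3, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # gradients at x and at random points in the ball B(x, eps)
        samples = x + eps * rng.uniform(-1, 1, size=(n_samples, 2))
        G = np.array([grad_f(p) for p in [x, *samples]])
        g = min_norm_element(G)
        if np.linalg.norm(g) < eta:          # (eps, eta)-stationarity reached
            break
        # backtracking (Armijo) line search along -g
        t = 1.0
        while f(x - t * g) > f(x) - 0.5 * t * np.linalg.norm(g) ** 2 and t > 1e-12:
            t *= 0.5
        x = x - t * g
    return x

x = gradient_sampling([0.8, 0.8])
```

The stratified variant described in the caption differs only in when it samples: extra gradients are drawn only when the iterate is $\epsilon$-near the nonsmooth line, using one gradient per adjacent stratum instead of random samples.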

Together, the intervals $(b, d)$ form the (extended) barcode $\mathrm{PH}_p(x)$ of $x$ in degree $p$, which we simply denote by $\mathrm{PH}(x)$ when the degree is clear from the context.

Definition 3. A barcode is a finite multi-set of pairs $(b, d) \in \mathbb{R}^2$ called intervals, with $b \le d$. Two barcodes differing by intervals of the form $(b, b)$ are identified. We denote by $\mathrm{Bar}$ the set of barcodes.
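For a filter function on a path graph (the one-dimensional setting of Figure 3), the degree-0 barcode of the lower-star filtration can be computed with a small union-find pass over the vertices sorted by value. The sketch below is our illustrative implementation (the name `barcode_0d` is hypothetical); it pairs the essential class as $(\min(x), \max(x))$, matching the extended-persistence convention used in Figure 3:

```python
import numpy as np

def barcode_0d(x):
    """Degree-0 barcode of the lower-star filtration of x on a path
    graph (vertices 0..n-1, edges between consecutive vertices).
    Merges follow the elder rule; zero-length bars are discarded."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    parent = list(range(n))
    birth = list(x)                      # birth value of each component root
    processed = [False] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    bars = []
    for v in np.argsort(x, kind="stable"):
        processed[v] = True
        for u in (v - 1, v + 1):
            if 0 <= u < n and processed[u]:
                ru, rv = find(u), find(v)
                if ru != rv:
                    # elder rule: the younger component dies at x[v]
                    young, old = (ru, rv) if birth[ru] > birth[rv] else (rv, ru)
                    if birth[young] < x[v]:
                        bars.append((birth[young], x[v]))
                    parent[young] = old
    # essential component: pair the global min with the global max
    bars.append((x.min(), x.max()))
    return sorted(bars)
```

For instance, on $x = (1, 3, 0, 2, 1.5)$ the two local minima of value 1 and 1.5 die at their merge values 3 and 2, and the global minimum 0 is paired with the global maximum 3.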

Figure 3: Comparison of vanilla Gradient Descent (GD), Gradient Descent with decay (GDwD), Gradient Sampling (GS) and our Stratified Gradient Sampling (SGS) on a toy example. (a) The evolution of the filter functions $(x_k)_k$ as the total extended persistence $\mathrm{Pers}$ is minimized with the SGS method. Purple arrows indicate the descent direction at $k = 0$. As expected, the minimization tends to make $(x_k)_k$ topologically as trivial as possible, that is, flat in this context. (b) The barcodes $\mathrm{PH}(x_k)$ represented as persistence diagrams extended with the point $(\min(x), \max(x))$. (c) The value $\mathrm{Pers}(x_k)$ of the objective function across iterations $k$. (d) The corresponding gradient norms $(\|g_k\|)_k$. Only GS and SGS reach the stopping criterion $\|g_k\| < \eta$.
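The total extended persistence minimized in this experiment is the sum of bar lengths, and its (sub)gradient with respect to the filter values is sparse: $+1$ on the vertex realizing each death and $-1$ on the vertex realizing each birth. A self-contained sketch under our conventions (helper names `pairs_0d` and `pers_and_grad` are hypothetical; degree-0 persistence on a path graph, essential class paired $(\operatorname{argmin}, \operatorname{argmax})$):

```python
import numpy as np

def pairs_0d(x):
    """Elder-rule pairing (birth_vertex, death_vertex) for the degree-0
    lower-star barcode of x on a path graph."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    parent, bvtx = list(range(n)), list(range(n))
    processed = [False] * n

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs = []
    for v in np.argsort(x, kind="stable"):
        processed[v] = True
        for u in (v - 1, v + 1):
            if 0 <= u < n and processed[u]:
                ru, rv = find(u), find(v)
                if ru != rv:
                    young, old = (ru, rv) if x[bvtx[ru]] > x[bvtx[rv]] else (rv, ru)
                    pairs.append((bvtx[young], int(v)))
                    parent[young] = old
    pairs.append((int(np.argmin(x)), int(np.argmax(x))))   # essential class
    return pairs

def pers_and_grad(x):
    """Total extended persistence Pers(x) = sum of bar lengths, and its
    (sub)gradient: +1 at each death vertex, -1 at each birth vertex."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    pers = 0.0
    for b, d in pairs_0d(x):
        pers += x[d] - x[b]
        g[d] += 1.0
        g[b] -= 1.0
    return pers, g
```

As long as a gradient step does not change the pairing, moving along $-g$ shortens every bar simultaneously, which explains the flattening behavior seen in panel (a).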

Figure 4: Illustration of topological registration. (a) The target function, defined on a (circular) simplicial complex, whose topology we want to reproduce. (b) The registration obtained when using a template with $n = 4$ vertices. Purple arrows indicate the descent direction at $k = 0$. (c) The registration obtained when using a template with $n = 15$ vertices. (d) The target persistence diagram (blue triangles), along with the diagram trajectories through the iterations for both cases (green and brown, respectively). (e) The values of the objective function across iterations. Using a larger template makes it possible to attain lower objective values. (f) The corresponding gradient norms, both reaching the stopping criterion $\|g_k\| \le \eta$.

Figure 5: Different Mapper graphs colored by the first kernel PCA component. (Top row) Left: original Mapper graph, computed with a set of arbitrary parameters, with many spurious branches. Middle: Mapper graph obtained from bootstrap, with very low resolution. Right: curated Mapper graph obtained as the Fréchet mean of the bootstrap iterates. (Bottom row) Left: visualization of the data set with kernel PCA. Right: the evolution of the loss (26) during the optimization process.

Figure 6: Example of Mapper computation on a double torus with the height function covered by four intervals. Each interval is pulled back to the original space through $f^{-1}$ and then separated into its connected components with $C$. In particular, the cover element obtained as the preimage of the green interval is separated into its two connected components. The nerve of this new cover is then computed to obtain the Mapper. One can see that with only four intervals, the topology of the double torus is only partially captured, since only one loop is present in the Mapper instead of two.