\v{C}ech-Delaunay gradient flow and homology inference for self-maps

We call a continuous self-map that reveals itself through a discrete set of point-value pairs a sampled dynamical system. Capturing the available information with chain maps on Delaunay complexes, we use persistent homology to quantify the evidence of recurrent behavior. We establish a sampling theorem to recover the eigenspace of the endomorphism on homology induced by the self-map. Using a combinatorial gradient flow arising from the discrete Morse theory for \v{C}ech and Delaunay complexes, we construct a chain map to transform the problem from the natural but expensive \v{C}ech complexes to the computationally efficient Delaunay triangulations. The fast chain map algorithm has applications beyond dynamical systems.


1. Introduction
Suppose M is a compact subset of R^n and f : M → M is a continuous self-map with finite Lipschitz constant. We study the thus defined dynamical system in the setting in which f reveals itself through a sample, by which we mean a finite set X ⊆ M, a self-map g : X → X, and a real number ρ such that ‖g(x) − f(x)‖ ≤ ρ for every x ∈ X. We call ρ the approximation constant of the sample. Calling this setting a sampled dynamical system, we formalize a concept that appears already in [3]. It is less demanding than the classical discrete dynamical system, in which time is discrete but space is not [10]. We believe that this relaxation is essential to make inroads into experimental studies, in which pairs (x, f(x)) can be observed individually, while the self-map remains in the dark. The approximation constant models the experimental uncertainty, but it is also needed to accommodate a finite sample. Consider for example the map f : [0, 1] → [0, 1] defined by f(x) = x². Letting u be the smallest positive value in a finite set X ⊆ [0, 1], its image does not belong to X: f(u) ∉ X. We call

λ = max_{x,y ∈ X, x ≠ y} ‖g(x) − g(y)‖ / ‖x − y‖    (1)

the Lipschitz constant of g. It is not necessarily close to the Lipschitz constant of f, even in the case in which the ρ-neighborhoods of the points in X cover M. However, Kirszbraun proved that for every g : X → X there is a continuous extension f_0 : M → M that has the same Lipschitz constant. Specifically, this is a consequence of the more general Kirszbraun Extension Property [11, 12]. Let F be a fixed field and let H(M; F) denote the homology of M with coefficients in F. Hence, H(M; F) is a vector space. Throughout the paper we only use homology with coefficients in the field F, so we abbreviate the notation to H(M). The map f_0 induces a linear map H(f_0) : H(M) → H(M). A natural characterization of this linear map is given by its t-eigenvectors.
They capture homology classes invariant under the self-map up to a multiplicative factor t, called an eigenvalue. The t-eigenvectors span the t-eigenspace of the map. Starting with a finite filtration of the domain of the map, we get t-eigenspaces at every step, connected by linear maps, and therefore a finite path in the category of vector spaces, called an eigenspace module. The Stability Theorem in [3] implies a connection between the dynamics of g and f_0, namely that for every eigenvalue t the interleaving distance between the eigenspace modules induced by g and by f_0 is at most the Hausdorff distance between the graph of g and that of f_0. Furthermore, the Inference Theorem in the same paper implies that for small enough ρ and any eigenvalue, the eigenspace module for g gives the correct dimension of the corresponding eigenspace of the endomorphism on the homology groups of M induced by f_0.
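Equation (1) can be evaluated directly from the sampled pairs. The following minimal Python sketch does so for a toy sample of the map f(x) = x² discussed above; the nearest-sample-point rule used to define g is our own illustrative choice, not prescribed by the text.

```python
from math import dist

def lipschitz_constant(g):
    """Equation (1): the largest ratio |g(x)-g(y)| / |x-y| over distinct
    sample points; g maps coordinate tuples to coordinate tuples."""
    points = list(g)
    best = 0.0
    for i, x in enumerate(points):
        for y in points[i + 1:]:
            best = max(best, dist(g[x], g[y]) / dist(x, y))
    return best

# Toy sample of f(x) = x^2 on [0, 1]: five equally spaced points, with g(x)
# the sample point nearest to the true image f(x) (our own discretization).
X = [(i / 4,) for i in range(5)]
g = {x: min(X, key=lambda p: abs(p[0] - x[0] ** 2)) for x in X}
print(lipschitz_constant(g))  # 2.0
```

Here the sampled map attains λ = 2, matching the Lipschitz constant of f itself on [0, 1], although in general the two constants need not be close.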

1.1. Prior Work and Results.
We employ the discrete Morse theory for Čech and Delaunay complexes developed in [1] to address the computational problem of estimating the homology of a self-map from a finite sample. Our results continue the program started in [3], with the declared goal to embed the concept of persistent homology in the computational approach to dynamical systems. Specifically, we contribute by improving the computation of persistent recurrent dynamics. This improvement is based on several interacting innovations, which lead to better theoretical guarantees as well as better computational efficiency than in [3]:
1. We use the parallel filtrations of Čech and Delaunay complexes and the existence of a collapse from the former to the latter established in [1] to define chain maps between Delaunay complexes.
2. We construct the chain maps by implementing the collapse implicitly, avoiding the prohibitive construction of the Čech complex.
3. We establish inference results with a less stringent sampling condition than given in [3], depending only on the self-map and the domain.
The improved computational efficiency derives primarily from the use of Delaunay rather than Čech or Vietoris-Rips complexes. Indeed, in the targeted 2-dimensional case, the size of the Delaunay triangulation is at most six times the number of data points, while the Čech and Vietoris-Rips complexes reach exponential size for large radii. The improved theoretical guarantees rely on the use of chain maps that avoid the information loss caused by the interaction of local expansion and partial maps observed in [3]. The improvements are obtained using the refined mathematical and computational methods mentioned above.
We first explain how we use Čech complexes, namely as an intermediate step to construct the chain maps from one Delaunay complex to another. Recall the Kirszbraun intersection property for balls established by Gromov [8]: letting Q be a finite set of points in R^n, and g : Q → R^n a map that satisfies ‖g(x) − g(y)‖ ≤ ‖x − y‖ for all x, y ∈ Q, then ⋂_{x∈Q} B_r(x) ≠ ∅ implies ⋂_{x∈Q} B_r(g(x)) ≠ ∅, in which B_r(x) is the closed ball with radius r and center x. Similarly, if we weaken the condition to ‖g(x) − g(y)‖ ≤ λ‖x − y‖, for some λ > 1, then the common intersection of the balls B_{λr}(g(x)) is non-empty. This implies that the image of the Delaunay complex for radius r includes in the Čech complex for radius λr. To return to the Delaunay triangulation, we exploit the collapsibility of the Čech complex for radius λr to the Delaunay complex of radius λr recently established in [1]. Second, we explain how we collapse without explicit construction of the Čech complex. Starting with a simplex, we use a modification of Welzl's miniball algorithm [13] to follow the flow induced by the collapse step by step until we arrive at the Delaunay complex, where the image of the simplex is now a chain. The expected running time for a single step is linear in the number of points, so we have a fast algorithm provided the number of steps in the collapse is not large. While we do not have a bound on this number, our computational experiments provide evidence that it is typically small. In each column, we get the eigenspace by comparing the inclusion between Delaunay-Čech complexes with the chain map obtained with the Čech complex as intermediary. The map DČech_r(X) → Čech_{λr}(X) is induced by g, while the map Čech_{λr}(X) → DČech_{λr}(X) is a simplicial collapse, and similarly for s instead of r.
We give a global picture of our algorithm in Figure 1. In the top row, we see a filtration of Delaunay-Čech complexes, which are convenient substitutes for the better known Delaunay complexes (also called alpha complexes) with the same homotopy type. The left map down from the top row is inclusion, and the right map down is the chain map induced by g. As indicated, the right map is composed of the inclusion into theČech complex and the discrete flow induced by the collapse. In the bottom row, we see the eigenspace module computed by comparing the left and right vertical maps.
1.2. Outline. Section 2 describes the background in discrete Morse theory, its application to Čech and Delaunay complexes, and its extension to persistent homology. Section 3 addresses the algorithmic aspects of our method, which include the proof of collapsibility and the generalization of the miniball algorithm. Section 4 explains the circumstances under which the eigenspace of the self-map can be obtained from the eigenspace module of the discrete sample. Section 5 presents the results of our computational experiments, comparing them with the algorithm in [3]. Section 6 concludes this paper.

(Figure 2, caption fragment: In contrast, the Delaunay-Čech complex is the full simplex on the three vertices (not shown), as all simplices are Delaunay and enclosed by a sphere of radius r.)

2. Background
In this section, we introduce concepts from discrete Morse theory [6] and apply them to Čech as well as to Delaunay complexes of finite point sets [1]. We begin with the definition of the complexes and finish by complementing the picture with the theory of persistent homology.

2.1. Geometric Complexes. Our approach to dynamical systems is based on Čech complexes and Delaunay complexes - two common ingredients in topological data analysis - and the Delaunay-Čech complexes, which offer a convenient computational short-cut.
Čech complexes. Let X ⊆ R^n be finite, r ≥ 0, and B_r(x) be the closed ball of points at distance r or less from x ∈ X. The Čech complex of X for radius r consists of all subsets of X for which the balls of radius r have a non-empty common intersection: it is isomorphic to the nerve of the balls of radius r centered at the points in X. Equivalently, Čech_r(X) consists of all subsets Q ⊆ X having an enclosing sphere of radius at most r. For r smaller than half the distance between the two closest points, Čech_r(X) = X, and for r larger than √2/2 times the distance between the two farthest points, Čech_r(X) is the full simplex on the vertices X, denoted by ∆(X). The size of ∆(X) is exponential in the size of X, which motivates the following construction.
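For planar points, membership in Čech_r(X) up to dimension 2 can be tested with the classical fact that the smallest enclosing circle of a triangle is its circumcircle when the triangle is acute, and the diameter circle of the longest edge otherwise. The following brute-force Python sketch is our own illustration; the function names are hypothetical.

```python
from itertools import combinations
from math import dist

def enclosing_radius(Q):
    """Radius of the smallest circle enclosing up to three planar points."""
    if len(Q) == 1:
        return 0.0
    if len(Q) == 2:
        return dist(*Q) / 2
    p, q, s = Q
    a, b, c = sorted((dist(q, s), dist(p, s), dist(p, q)))
    if a * a + b * b <= c * c:      # right or obtuse: longest edge is a diameter
        return c / 2
    area = abs((q[0] - p[0]) * (s[1] - p[1]) - (s[0] - p[0]) * (q[1] - p[1])) / 2
    return a * b * c / (4 * area)   # acute: the circumcircle is smallest

def cech_complex(X, r):
    """All simplices of Cech_r(X) up to dimension 2, as vertex-index tuples."""
    return [s for k in (1, 2, 3) for s in combinations(range(len(X)), k)
            if enclosing_radius([X[i] for i in s]) <= r]

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(len(cech_complex(X, 0.6)), len(cech_complex(X, 0.75)))  # 5 7
```

At r = 0.6 only the two short edges join the three vertices; at r = 0.75 the long edge and the triangle enter as well, illustrating the growth of the complex with the radius.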
Delaunay triangulations. The Voronoi domain of a point x ∈ X consists of all points u ∈ R^n for which x minimizes the distance from u: dom(x, X) = {u ∈ R^n : ‖u − x‖ ≤ ‖u − y‖ for all y ∈ X}. The Voronoi tessellation of X is the set of Voronoi domains dom(x, X) with x ∈ X. Assuming general position of the points in X, any p + 1 Voronoi domains are either disjoint or they intersect in a common (n − p)-dimensional face. The Delaunay triangulation of X consists of all subsets of X for which the Voronoi domains have a non-empty common intersection: it is isomorphic to the nerve of the Voronoi tessellation. Equivalently, Del(X) consists of all subsets Q ⊆ X having an empty circumsphere (containing no points of X in its interior). Again assuming general position, the Delaunay triangulation is an n-dimensional simplicial complex with natural geometric realization in R^n. The Upper Bound Theorem for convex polytopes implies that the number of simplices in Del(X) is at most some constant times card X to the power ⌈n/2⌉. In n = 2 dimensions, this is linear in card X, which compares favorably to the exponentially many simplices in the Čech complexes.
Delaunay-Čech complexes. To combine the small size of the Delaunay triangulation with the scale-dependence of the Čech complex, we define the Delaunay-Čech complex of X for radius r as the intersection of the two: DČech_r(X) = Čech_r(X) ∩ Del(X). Observe that the Delaunay triangulation effectively curbs the explosive growth of simplex numbers, but does so only if the points are in general position. We will therefore assume that the points in X are in general position, justifying the assumption with computational simulations that enforce it [5].
Delaunay complexes. There is a more direct way to select subcomplexes of the Delaunay triangulation using r as a parameter. Specifically, the Delaunay complex of X for radius r consists of all subsets of X for which the restrictions of the Voronoi domains to the balls of radius r have a non-empty common intersection: it is isomorphic to the nerve of the restricted Voronoi domains. Equivalently, Del_r(X) consists of all subsets Q ⊆ X having an empty circumsphere of radius at most r. The Delaunay complexes, also known as alpha complexes, are the better known relatives of the Delaunay-Čech complexes. They satisfy Del_r(X) ⊆ DČech_r(X), and it is easy to exhibit sets X and radii r for which the two complexes are different. See Figure 2 for an illustrative example. As proved in [1], the Delaunay complex has the same homotopy type as the Delaunay-Čech complex for the same radius. This is indeed the reason we can freely use the latter as a substitute for the former.

2.2. Radius Functions.
Structural properties of the geometric complexes are conveniently expressed in terms of their radius functions. In each case, the function maps a simplex to the smallest radius, r, for which the simplex belongs to the complex: R_C(Q) = min{r : Q ∈ Čech_r(X)}, R_DC(Q) = min{r : Q ∈ DČech_r(X)}, and R_D(Q) = min{r : Q ∈ Del_r(X)}. All three functions are monotonic, by which we mean that the radius assigned to any simplex is greater than or equal to the radii assigned to its faces. This property is sufficient to define their persistence diagrams, as we will see shortly. However, we will need more, namely compatible discrete gradients of the radius functions. After introducing the discrete Morse theory of Forman [6] as a framework within which discrete gradients can be defined, we will return to the question of compatibility.
Discrete Morse theory. In a nutshell, a monotonic function on a simplicial complex, F : K → R, is a discrete Morse function if any two contiguous sublevel sets differ by a single elementary collapse or a critical simplex. Let us be more precise. A pair consists of two simplices, P ⊆ Q, with dim Q = 1 + dim P. A discrete vector field is a partition, V, of K into pairs and singletons. It is acyclic if there is a monotonic function, F : K → R, with F(P) = F(Q) iff P and Q belong to a pair in V. Such a function F is called a discrete Morse function, and V is its discrete gradient. A simplex is critical if it is in a singleton of V, and it is non-critical if it belongs to a pair of V.
The reason for our interest in this formalism is its connection to the homotopy type of complexes. To explain, suppose Q ∈ K maximizes F. If Q belongs to a pair (P, R) ∈ V, then we can remove both and obtain a smaller simplicial complex, K \ {P, R}. We refer to this operation as an elementary collapse, we say K collapses to the smaller complex, denoted K ↘ K \ {P, R}, and we note that both complexes have the same homotopy type. If on the other hand Q is a critical simplex, its removal changes the homotopy type of the complex.
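An elementary collapse is easy to simulate on small complexes. The following Python toy, our own illustration rather than the paper's gradient construction, greedily removes free pairs: a simplex P whose only simplex of K properly containing it is a single R with dim R = dim P + 1.

```python
def collapse(K):
    """Greedily perform elementary collapses on a complex given as a list of
    simplices (sets of vertices); returns the simplices that remain."""
    K = set(map(frozenset, K))
    changed = True
    while changed:
        changed = False
        for P in sorted(K, key=lambda s: (len(s), sorted(s))):
            cofaces = [R for R in K if P < R]          # proper cofaces of P
            if len(cofaces) == 1 and len(cofaces[0]) == len(P) + 1:
                K -= {P, cofaces[0]}                   # remove the free pair
                changed = True
                break
    return K

# The full triangle on three vertices collapses all the way to one vertex.
K = [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]
remaining = collapse(K)
print(remaining)
```

Each removal preserves the homotopy type, so the full triangle, being contractible, ends at a single vertex.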
Collapsing the geometric complexes. The radius functions are not necessarily discrete Morse functions, but they are amenable to discrete gradients. To explain what we mean, consider a monotonic function, F : K → R, and call Q ∈ K critical if F(Q) is different from the values of all proper faces and cofaces of Q. We say that an acyclic partition of K into pairs and singletons is compatible with F if every sublevel set of F is a union of pairs and singletons in this partition, and Q is in a singleton of the partition iff Q is a critical simplex of F. The proof of collapsibility in [1] hinges on the fact that there is an acyclic partition, W, of ∆(X) that is simultaneously compatible with R_C, R_D, and R_DC. Indeed, the existence of this acyclic partition is at the core of the proof of Theorem 5.10 in [1], which asserts that Čech_r(X) ↘ DČech_r(X) ↘ Del_r(X) (10) for every finite set X ⊆ R^n in general position, and every r ≥ 0. Observe that this implies that the three radius functions have the same set of critical simplices. Indeed, these are the sets Q ⊆ X for which the smallest enclosing sphere passes through all points of Q and no point of X lies inside this sphere.

2.3. Persistent Homology.
In its original conception, persistent homology starts with a filtration of a topological space, it applies the homology functor for coefficients in a field F, and it decomposes the resulting sequence of vector spaces into indecomposable summands [4,14]. This decomposition is unique and has an intuitive interpretation in terms of births and deaths of homology classes. We flesh out the idea using the filtration of Delaunay-Čech complexes as an example.
Let X ⊆ R^n be finite and in general position, and recall that R_DC : Del(X) → R is the radius function whose sublevel sets are the Delaunay-Čech complexes. R_DC is monotonic but not necessarily discrete Morse. The Delaunay triangulation is finite, which implies that R_DC has only finitely many sublevel sets. To index them consecutively, we write r_1 < r_2 < ... < r_N for the values and K_i = R_DC^{-1}[0, r_i] for the i-th Delaunay-Čech complex of X. Applying the homology functor, we get

H(K_1) → H(K_2) → ... → H(K_N),    (11)

in which we write H(K_i) for the direct sum of the homology groups of all dimensions. Together with the maps h_{i,j} : H(K_i) → H(K_j) induced by the inclusions K_i ⊆ K_j, which are linear, we call this diagram the persistent homology of the filtration. More generally, a diagram of vector spaces with this shape is called a persistence module. Such a module is indecomposable if all vector spaces are trivial, except for an interval of 1-dimensional vector spaces, F → F → ... → F, that are connected by isomorphisms. Indeed, (11), and more generally any persistence module of finite-dimensional vector spaces, can be written as the direct sum of indecomposable modules, and this decomposition is essentially unique. See [3, Basis Lemma] for a constructive proof. If an interval starts at position i and ends at position j − 1, then we say there is a homology class born at K_i that dies entering K_j. To allow for the case j − 1 = N, we introduce r_{N+1} = ∞ and represent the interval by the birth-death pair (r_i, r_j). Its dimension is the homological degree in which the class arises, and its persistence is r_j − r_i.
By construction, the rank of H(K_i) is the number of indecomposable modules whose intervals cover r_i. It is readily computed from the multiset of birth-death pairs, which we call the persistence diagram of the radius function, denoted Dgm(R_DC). More generally, we can use this diagram to compute the rank of the image of h_{i,j} for i ≤ j; see e.g. [2, page 152].
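These rank computations reduce to counting: a class contributes to the image of h_{i,j} iff its interval is born at or before r_i and dies strictly after r_j. A small Python sketch, with a hypothetical diagram of our own:

```python
from math import inf

def rank_image(diagram, r, s):
    """Rank of h_{r,s} : H(K_r) -> H(K_s) for r <= s, read off the persistence
    diagram: count birth-death pairs born by r that die strictly after s."""
    return sum(1 for birth, death in diagram if birth <= r and death > s)

# Hypothetical diagram: one essential class and two finite intervals.
diagram = [(0.0, inf), (0.3, 0.8), (0.5, 0.6)]
print(rank_image(diagram, 0.5, 0.5),   # rank of H at radius 0.5
      rank_image(diagram, 0.5, 0.7))   # rank of the image into radius 0.7
```

Setting r = s recovers the rank of H(K_r) itself, since every interval covering r contributes.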

3. Computing the Čech-Delaunay gradient flow
The main algorithmic challenge we face in this paper is the local computation of the gradient that induces the collapse of the Čech to the Delaunay-Čech complex. Specifically, we trace chains through the collapse, using their images to construct the chain map that is central to our analysis. We explain the algorithm in three stages: first sketching the relevant steps of the existence proof, second describing how we compute minimum separating spheres, and third explaining the discrete flow that constructs the chain map. Once we arrive at the eigenspaces, we compute their persistent homology with the software implementing the algorithms in [3].
3.1. Computing Separating Spheres. At the core of the discrete gradient flow is the construction of smallest separating spheres, which are defined as follows. Let X ⊆ R^n be a finite set of points in general position, let A ⊆ X be a subset, and let Q ⊆ X. A sphere S separates Q from A if
• all points of Q lie inside or on the sphere, and
• all points of A lie outside or on the sphere.
If a point belongs to both A and Q, then it must lie on the separating sphere. Given Q and A, a separating sphere may or may not exist, and if it exists, then there is a unique smallest separating sphere, which we denote S(Q, A).
The smallest separating sphere can be characterized in geometric terms as follows. For a sphere S, write Incl S, Excl S ⊆ X for the subsets of enclosed and excluded points, with On S = Incl S ∩ Excl S. Now assume that S is the smallest circumsphere of the points On S, i.e., the center z of S lies in their affine hull: z = Σ_{x ∈ On S} ρ_x x with Σ_{x ∈ On S} ρ_x = 1. By general position, the affine combination is unique, and ρ_x ≠ 0 for all x ∈ On S. We call Front S = {x ∈ On S : ρ_x > 0} and Back S = {x ∈ On S : ρ_x < 0} the front face and the back face of On S, respectively. The following lemma states necessary and sufficient conditions for a sphere to be a smallest separating sphere. It is a special case of the general Karush-Kuhn-Tucker conditions, expressed in geometric and combinatorial terms.
Lemma. Let X be a finite set of points in general spherical position, and let Q, A ⊆ X. Based on these optimality conditions, we can state a recursive formula for the smallest separating sphere.
Proof. We only show the first part, with x ∈ Q, the other part being analogous. We now turn these results into an algorithm for computing the smallest separating sphere of sets Q, A ⊆ X, or deciding that no separating sphere exists. We pattern the algorithm after the randomized algorithm for the smallest enclosing sphere described in [13], which we recall first.
Welzl's randomized miniball algorithm. The smallest enclosing sphere of a set Q ⊆ R^n is determined by at most n + 1 of the points. In other words, there is a subset R ⊆ Q of at most n + 1 points such that the smallest enclosing sphere of R is also the smallest enclosing sphere of Q. The algorithm below makes essential use of this observation. It partitions Q into two disjoint subsets: R containing the points we know lie on the smallest enclosing sphere, and P = Q \ R. Initially, R = ∅ and P = Q. In a general step, the algorithm removes a random point from P and tests whether it lies on or inside the recursively computed smallest enclosing sphere of the remaining points. If yes, the point is discarded, and if no, the point is added to R.

sphere Enclose(P, R):
  if P = ∅ or card R = n + 1 then return the smallest circumsphere of R;
  choose a random point x ∈ P;
  S = Enclose(P \ {x}, R);
  if x lies on or inside S then return S;
  return Enclose(P \ {x}, R ∪ {x}).

Since the algorithm makes random choices, its running time is a random variable. Remarkably, the expected running time is linear in the number of points in Q, and the reason is the high probability that the randomly chosen point, x, lies inside the recursively computed smallest enclosing sphere and can therefore be discarded.
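A compact Python version of this recursion for the planar case (n = 2) may look as follows; the basis solver circle_from assumes the boundary points are in general position (no three collinear), and the tolerance 1e-12 is our implementation choice.

```python
import random
from math import dist

def circle_from(R):
    """Smallest circle with every point of R on its boundary (|R| <= 3)."""
    if len(R) == 0:
        return (0.0, 0.0), 0.0
    if len(R) == 1:
        return R[0], 0.0
    if len(R) == 2:
        (x1, y1), (x2, y2) = R
        return ((x1 + x2) / 2, (y1 + y2) / 2), dist(R[0], R[1]) / 2
    (ax, ay), (bx, by), (cx, cy) = R            # circumcircle; assumes non-collinear
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy), dist((ux, uy), R[0])

def enclose(P, R=()):
    """Recursive step: smallest circle enclosing P with R on the boundary."""
    if not P or len(R) == 3:
        return circle_from(list(R))
    x = P[0]                                    # random after the initial shuffle
    center, radius = enclose(P[1:], R)
    if dist(center, x) <= radius + 1e-12:       # x already enclosed: discard it
        return center, radius
    return enclose(P[1:], R + (x,))             # otherwise x lies on the sphere

def miniball(points):
    P = list(points)
    random.shuffle(P)                           # randomization gives linear expected time
    return enclose(P)

center, radius = miniball([(0.0, 0.0), (2.0, 0.0), (1.0, 1.0)])
print(center, radius)  # (1.0, 0.0) 1.0
```

The three sample points happen to be cocircular, so every recursion order returns the same circle with center (1, 0) and radius 1.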
Generalization to smallest separating spheres. Rather than enclosing spheres, we need separating spheres to compute the collapse. Here we get an additional case, when the sphere does not exist, which we indicate by returning null. As before, we work with two sets of points: R containing the points we know lie on the smallest separating sphere, and P containing the rest. Initially, R = Q ∩ A and P = (Q ∪ A) \ R. Each point has enough memory to remember whether it belongs to Q and thus needs to lie on or inside the sphere, or to A and thus needs to lie on or outside the sphere. We say the point contradicts S if it lies on the wrong side. Since the smallest separating sphere is again determined by at most n + 1 of the points, the expected running time of the algorithm is linear in the number of points, as before. The correctness of the algorithm is warranted by Lemma 2.
Iterative version with move-to-front heuristic. Because finding separating spheres is at the core of our algorithm, we are motivated to improve its running time, even if only by a constant factor. Following the advice in [7], we turn the tail-recursion into an iteration and combine this with a move-to-front heuristic. Indeed, if a point contradicts the current sphere, it is likely to do the same to a later computed sphere. The earlier the point is tested, the faster this new sphere can be rejected. Storing the points in a linear list, early testing of this point can be enforced by moving it to the front of the list. Write L for the list, which contains all points of Q ∪ A, and write L(i) for the point stored at the i-th location. As before, each point remembers whether it belongs to Q, to A, or to both. In addition, we mark the points we know lie on the smallest separating sphere as members of R, initializing this set to R = Q ∩ A. Furthermore, we initialize m = card(Q ∪ A). Section 5 will present experimental evidence that the move-to-front heuristic accelerates the computations.

3.2. Collapsing non-Delaunay simplices.
Recall that the collapsing sequence in (10) is facilitated by a discrete gradient, W, that is compatible with all three radius functions. To collapse a Čech complex to the Delaunay-Čech complex, we only need the pairs in W that partition the difference: Čech_r(X) \ DČech_r(X) ⊆ ∆(X) \ Del(X). This difference is indeed partitioned solely by pairs because all singletons contain critical simplices, which belong to Del(X). The discrete gradient on the full simplex ∆(X) determined by those non-Delaunay pairs will be denoted by V.
Following [1, Lemma 5.8], we note that every pair of the discrete gradient V is of the form (P, R) with P ⊆ R ⊆ X and R \ P = {x} for a unique vertex x ∈ R. In other words, (P, R) ∈ V uniquely determines the vertex in which the two simplices differ, and given Q ∈ {P, R} together with this vertex, we can recover the pair as (P, R) = (Q \ {x}, Q ∪ {x}). We therefore introduce the map ψ : ∆(X) \ Del(X) → X defined by mapping the non-Delaunay simplex Q to the corresponding vertex, ψ(Q) = x, and we use this map to represent the discrete gradient V.
We now describe the construction of the map ψ from [1] that defines the discrete gradient V, whose pairs partition the non-Delaunay simplices. To this end, we choose an arbitrary but fixed total ordering x_1, x_2, ..., x_N of the points in X. For each 0 ≤ j ≤ N, we write X_j = {x_i : i ≤ j} for the prefix. Given a non-Delaunay simplex Q ∈ ∆(X) \ Del(X), let E_Q ⊆ X be the subset of points that lie on or outside of the smallest enclosing sphere of Q, and for each 0 ≤ j ≤ N, define A_j = E_Q ∪ X_j. The sequence A_0, A_1, ..., A_N starts with just the exterior points, A_0 = E_Q, and ends with all points, A_N = X. Since Q ∉ Del(X), there is a minimal index j ≤ N such that Q and A_j do not permit a separating sphere. We use the corresponding vertex x_j to define ψ(Q) = x_j. To compute ψ(Q), it thus suffices to iterate through the sequence A_0, A_1, ..., A_N and find the first index j such that there is no sphere separating Q from A_j. This can be determined using the algorithm described in Section 3.1.
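Since the smallest separating sphere is determined by few boundary points (cf. the lemma above), the planar case can also be handled by brute force over candidate boundary sets, which lets us sketch the computation of ψ end to end. The quartic running time and all names below are ours, for illustration only; the paper's algorithm uses the randomized procedure instead.

```python
from itertools import combinations
from math import dist

EPS = 1e-9

def circle_on(B):
    """Smallest circle through all points of B (1 <= |B| <= 3), or None."""
    if len(B) == 1:
        return B[0], 0.0
    if len(B) == 2:
        (x1, y1), (x2, y2) = B
        return ((x1 + x2) / 2, (y1 + y2) / 2), dist(B[0], B[1]) / 2
    (ax, ay), (bx, by), (cx, cy) = B
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < EPS:
        return None                              # collinear: no circumcircle
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return (ux, uy), dist((ux, uy), B[0])

def smallest_separating(Q, A):
    """Q inside-or-on, A outside-or-on; None if no separating circle exists."""
    best = None
    for k in (1, 2, 3):
        for B in combinations(set(Q) | set(A), k):
            c = circle_on(list(B))
            if c is None:
                continue
            z, r = c
            if (all(dist(z, q) <= r + EPS for q in Q)
                    and all(dist(z, a) >= r - EPS for a in A)
                    and (best is None or r < best[1])):
                best = (z, r)
    return best

def psi(Q, X):
    """First point of the ordered list X whose prefix, joined with the points
    outside the smallest enclosing circle of Q, blocks separation from Q."""
    z, r = smallest_separating(Q, [])            # smallest enclosing circle of Q
    A = [x for x in X if dist(z, x) >= r - EPS]  # E_Q: the on-or-outside points
    for x in X:
        if x not in A:
            A.append(x)
        if smallest_separating(Q, A) is None:
            return x
    return None                                  # Q is Delaunay: never blocked

Q = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5)]
X = Q + [(1.0, 0.5)]            # the fourth point lies inside Q's circumcircle
print(psi(Q, X))                # (1.0, 0.5)
```

Here Q is non-Delaunay because the fourth point lies inside its circumcircle, and ψ(Q) is exactly that point: once it joins A_j, every admissible circle through Q is blocked.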

3.3. Constructing the Chain Map.
We now have the necessary prerequisites for constructing the chain map. Specifically, given a cycle in DČech_r(X), we are interested in computing its image, which is a cycle in DČech_s(X), with r ≤ s ≤ ρ + λr. The construction of the chain map is an application of the discrete Morse theoretic formalism of a discrete gradient flow and the corresponding stabilization map, which we now review.
We follow the notation in [6], in which the discrete gradient flow is formulated as a map on chains. Let K be a simplicial complex and V a discrete gradient on K. In our situation, K = Čech_r(X), and V contains the pairs defined by the map ψ introduced in Section 3.2, which partition Čech_r(X) \ DČech_r(X). It is convenient to consider the discrete gradient as a chain map. Fixing an orientation on each simplex, this chain map is defined by linear extension of the map on the oriented simplices given by V(P) = ±R if (P, R) is a pair in V, and V(P) = 0 otherwise, where the sign is chosen so that P appears with coefficient −1 in the boundary of R. In terms of the map ψ defining the gradient V as discussed in Section 3.2, this definition can be rewritten as V(P) = ±(P ∪ {ψ(P)}) for a non-Delaunay simplex P with ψ(P) ∉ P, and V(P) = 0 otherwise. This map sends every oriented p-simplex to 0 or to an oriented (p+1)-simplex. The linear extension yields a homomorphism V : C(K) → C(K), which maps every p-chain to a possibly trivial (p+1)-chain. Recall that the boundary map, ∂ : C(K) → C(K), sends every p-chain to a possibly trivial (p−1)-chain. We use both to introduce Φ : C(K) → C(K), defined by Φ(c) = c + ∂(V(c)) + V(∂(c)), (14) in which c is a p-chain and its image, Φ(c), is a possibly trivial p-chain. We call Φ the discrete gradient flow induced by V. Importantly, it commutes with the boundary map: ∂Φ = Φ∂, which makes it a chain map; see [6, Theorem 6.4]. Moreover, the iteration of Φ stabilizes in the sense that Φ^M = Φ^N for large enough M and N [6, Theorem 7.2]. We call this chain map the stabilization map of Φ and denote it by Φ^∞.
In this paper, we apply the discrete flow exclusively to cycles. In other words, c ∈ C(K) satisfies ∂c = 0, which simplifies the above formula (14) to Φ(c) = c + ∂(V(c)). In order to evaluate the stabilization map Φ^∞, we simply iterate Φ until it stabilizes. The most demanding step in each iteration is the computation of smallest separating spheres, as discussed in Section 3.1.
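The simplification for cycles can be made concrete over Z_2 coefficients, where Φ(c) = c + ∂(V(c)) is a symmetric difference of simplex sets; the complex, pairing, and cycle below are toy choices of ours (the paper computes over larger fields, where signs matter).

```python
def flow(cycle, pairing, boundary):
    """One step of Phi on a Z_2 cycle: c + boundary(V(c)), where V sends a
    paired simplex P to its partner R and every other simplex to 0."""
    out = set(cycle)
    for P in cycle:
        if P in pairing:
            out ^= boundary(pairing[P])          # mod-2 addition of the boundary
    return frozenset(out)

def stabilize(cycle, pairing, boundary):
    """Iterate Phi until the chain stops changing (the map Phi^infinity)."""
    prev, cur = None, frozenset(cycle)
    while cur != prev:
        prev, cur = cur, flow(cur, pairing, boundary)
    return cur

boundary = lambda s: {s - {v} for v in s}        # mod-2 boundary of one simplex
pairing = {frozenset({0, 1}): frozenset({0, 1, 2})}   # V pairs edge 01 with triangle 012
square = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (0, 3)]}
print(sorted(map(sorted, stabilize(square, pairing, boundary))))
# [[0, 2], [0, 3], [2, 3]]
```

The flow reroutes the cycle away from the paired edge 01, replacing the path 0-1-2 by the diagonal edge 02; the result is a homologous cycle supported on the unpaired simplices, mirroring how Φ^∞ pushes a Čech cycle into the Delaunay-Čech complex.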

4. Eigenspace Inference
We use the chain maps connecting the Delaunay-Čech complexes to construct a persistence module of eigenspaces from the sample g : X → X, and specify properties of the sampled dynamical system under which the eigenspaces of the underlying self-map can be inferred from this module. Because of this specific goal, we typically work with coefficients in a finite field of larger order, in contrast to the typical setup in applied topology, where homology is often taken with coefficients in the field Z_2.

4.1. Eigenspaces. Given a finite set X ⊆ M ⊆ R^n, we recall that R_DC : Del(X) → R is the radius function whose sublevel sets are the Delaunay-Čech complexes of X. Let r_1 < r_2 < ... < r_N be the values of R_DC, and write DČech_r(X) = R_DC^{-1}[0, r] for the Delaunay-Čech complex at radius r. We construct the persistence diagram of this filtration, denoted Dgm(R_DC), which is a multi-set of intervals of the form [r_i, r_j). For each such interval, there is a unique homology class born at DČech_{r_i}(X) that maps to 0 when it dies entering DČech_{r_j}(X), and the collection of such classes gives a basis for the homology group of every complex in the filtration.
To define the eigenspace, for each r we consider two maps between homology groups, ι_r, κ_r : H(DČech_r(X)) → H(DČech_{r+q}(X)), in which ι_r is induced by the inclusion DČech_r(X) ⊆ DČech_{r+q}(X), κ_r is induced by the chain map composed of g followed by the stabilization map Φ^∞, and q ≥ 0 is chosen such that all generators of H(DČech_r(X)) have images under the chain map κ_r in H(DČech_{r+q}(X)). For geometric reasons, the corresponding radius satisfies r + q ≤ λr, in which λ is the Lipschitz constant of g. It is convenient to represent ι_r and κ_r by matrices that write the images of the generators of H(DČech_r(X)) in terms of the generators of H(DČech_{r+q}(X)). Following [3], we consider the generalized eigenspace of the two maps for an eigenvalue t: E_t(κ_r, ι_r). In words, E_t(κ_r, ι_r) is generated by the cycles in DČech_r(X) whose images under κ_r are homologous to t times their images under ι_r. Note that this is a slight modification of the classic eigenvalue problem, in which the domain and the range are identical. This is not the case for κ_r, so we compare it to ι_r to get the eigenspace. The maps between the eigenspaces, e^t_{r,s} : E_t(κ_r, ι_r) → E_t(κ_s, ι_s), are obtained as restrictions of the maps h_{r,s} : H(DČech_r(X)) → H(DČech_s(X)) induced by inclusion. For fixed t ∈ F, we have a sequence of eigenspaces, which together with the maps e^t_{r_i,r_j} form a persistence module. Recall from Section 2.3 that this persistence module has an essentially unique interval decomposition. We can therefore compute the persistence diagram, which we refer to as the eigenspace diagram of g for eigenvalue t, denoted Egm(g, t).
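As a sketch of the comparison of κ_r with ι_r, the following computes the dimension of the (ordinary, not generalized) eigenspace ker(κ − tι) over a prime field F_p by Gaussian elimination; the 2×2 matrices are hypothetical examples of ours, with rows and columns indexed by assumed generator bases.

```python
def eigenspace_dim(K, I, t, p):
    """Dimension of {v : K v = t * I v} over F_p, i.e. the nullity of the
    matrix K - t*I, computed by Gaussian elimination mod the prime p."""
    A = [[(K[i][j] - t * I[i][j]) % p for j in range(len(K[0]))]
         for i in range(len(K))]
    rank = 0
    for col in range(len(A[0])):
        piv = next((r for r in range(rank, len(A)) if A[r][col]), None)
        if piv is None:
            continue                               # no pivot in this column
        A[rank], A[piv] = A[piv], A[rank]
        inv = pow(A[rank][col], p - 2, p)          # Fermat inverse, p prime
        A[rank] = [v * inv % p for v in A[rank]]
        for r in range(len(A)):
            if r != rank and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[rank])]
        rank += 1
    return len(A[0]) - rank                        # nullity = #columns - rank

kappa = [[2, 0], [0, 3]]       # hypothetical matrix of kappa_r
iota = [[1, 0], [0, 1]]        # hypothetical matrix of iota_r
print(eigenspace_dim(kappa, iota, 2, 5))  # 1
```

With these matrices, t = 2 and t = 3 each yield a 1-dimensional eigenspace and every other t ∈ F_5 yields the trivial one, in line with the interpretation of t as a multiplicative factor on homology classes.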

4.2. Maps between Nerves.
We will relate the eigenspace of f for t with the eigenspace module in three steps. The second step will use results about nerves of covers, which we now review.
Let X be a topological space and U = (U_i)_{i∈I} a cover of X. U is closed or open if every U_i is closed or open, respectively, and U is good if the common intersection of any subset of cover elements is empty or contractible. Recall that the nerve of U is the collection of subsets of I whose cover elements have a non-empty common intersection: Nrv(U) = {B ⊆ I : ⋂_{i∈B} U_i ≠ ∅}. Calling B a simplex, the nerve is an abstract simplicial complex. A partition of unity subordinate to U is a collection of continuous non-negative functions φ_i : X → R_{≥0} such that Σ_{i∈I} φ_i(x) = 1 for every x ∈ X, and the support of φ_i is contained in U_i for every i ∈ I. Assuming a geometric realization of the nerve in which v_i denotes the vertex that represents the subset U_i ∈ U, we introduce the map r : X → |Nrv(U)| defined by r(x) = Σ_{i∈I} φ_i(x) v_i. (20) The Nerve Theorem as stated in [9] asserts that r is a homotopy equivalence provided U is a good cover that has a subordinate partition of unity. Such a partition exists for example if U is open and X is paracompact, which includes X ⊆ R^n. We expand on the Nerve Theorem, using the map r from (20) to relate a continuous map with a corresponding simplicial map between nerves. Proof. Let x ∈ X, and let τ(x) = conv{w_j : j ∈ J, f(x) ∈ V_j}, where w_j denotes the vertex corresponding to the subset V_j ∈ V. Note that we have s(f(x)) ∈ τ(x) by construction of s. Similarly, let σ(x) = conv{v_i : i ∈ I, x ∈ U_i} and note that r(x) ∈ σ(x) by construction of r. By assumption on the map g, we also have h(r(x)) ∈ τ(x), so the maps s ∘ f and h ∘ r are homotopic,

by a straight-line homotopy between s(f(x)) and h(r(x)) within τ(x).
We note that the commutativity up to homotopy of the diagram (21) does not require the covers of X and Y to be good.
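To make the construction concrete, the following sketch computes the nerve of a small closed cover of a subset of R by intervals, for which the non-emptiness of a common intersection reduces to a comparison of endpoints; for covers by balls one would substitute a smallest-enclosing-ball test. The cover in the usage example is made up for illustration:

```python
from itertools import combinations

def nerve(intervals):
    """Nerve of a finite cover of a subset of R by closed intervals.

    intervals: dict mapping a label i to a pair (l_i, r_i), i.e. U_i = [l_i, r_i].
    Returns the set of simplices B (as frozensets of labels) whose cover
    elements have a non-empty common intersection.
    """
    labels = list(intervals)
    simplices = set()
    for k in range(1, len(labels) + 1):
        for B in combinations(labels, k):
            lo = max(intervals[i][0] for i in B)  # common intersection is [lo, hi]
            hi = min(intervals[i][1] for i in B)
            if lo <= hi:  # non-empty common intersection
                simplices.add(frozenset(B))
    return simplices
```

For the cover of [0, 1] by [0, 0.4], [0.3, 0.7], [0.6, 1.0], the nerve consists of three vertices and the two edges {1, 2} and {2, 3}, a triangulation of an interval, as the Nerve Theorem predicts.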

Inference.
We now relate the eigenspace E_t(f) of the self-map f to a generalized eigenspace obtained from the sample g. The value of this comparison derives from the assumption that f remains unknown, beyond g, so its eigenspace can be approached only indirectly, through the properties of g. We begin by recalling the assumptions:
• f : M → M is a continuous self-map with Lipschitz constant λ;
• g : X → X is a finite sample of f with approximation constant ρ;
• the Hausdorff distance between X and M is δ = d_H(X, M).
Note that this implies ∥g(x) − f(y)∥ ≤ ρ + λ∥x − y∥ for all x ∈ X and y ∈ M, since the left-hand side is at most ∥g(x) − f(x)∥ + ∥f(x) − f(y)∥ ≤ ρ + λ∥x − y∥. Hence, writing η = ρ + λδ, g defines a simplicial map from Čech_δ(X) to Čech_η(X), and we get two maps in homology, γ, j : H(Čech_δ(X)) → H(Čech_η(X)), in which γ is induced by g and j is induced by inclusion.
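The Lipschitz constant λ of g in (1) is directly computable from the sample; a minimal sketch, in which the point set and map are illustrative:

```python
from itertools import combinations
from math import dist  # Euclidean distance between two points (Python >= 3.8)

def lipschitz_constant(X, g):
    """Lipschitz constant of the sampled self-map g, as in equation (1):
    the maximum of |g(x) - g(y)| / |x - y| over all pairs x != y in X.

    X: list of points (tuples); g: dict mapping each point of X to its image in X.
    """
    return max(dist(g[x], g[y]) / dist(x, y) for x, y in combinations(X, 2))
```

For three collinear points with a map that stretches the first gap by a factor of two, the constant evaluates to 2.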
We now consider the generalized eigenspace of the two maps for an eigenvalue t:

E_t(γ, j) = ker(γ − t·j) / (ker γ ∩ ker j), (24)

noting that this is a special case of the setting considered in Section 4.1. We show that under appropriate conditions this generalized eigenspace is isomorphic to E_t(f). We need some definitions to prepare the first step. Recall that B_δ(x) is the closed ball with radius δ centered at x ∈ R^n. For M ⊆ R^n, we call M^δ = ∪_{x∈M} B_δ(x) the δ-neighborhood of M. By the Kirszbraun Extension Property [11, 12], f : M → M extends to a map f^δ : M^δ → M^δ with the same Lipschitz constant. Similarly, f extends to a map f^θ : M^θ → M^η, again with the same Lipschitz constant, in which θ = max(η, 2δ), with η = ρ + λδ as before. The following diagram organizes the homology groups of the spaces relevant to our argument; apart from f_*, f^δ_*, and f^θ_*, every map in the diagram is induced by inclusion (25). Composing maps from the diagram, we set φ = b ∘ f^δ_* ∘ a and ι = b ∘ a and consider the generalized eigenspace E_t(φ, ι). We claim that this eigenspace is isomorphic to the one considered in (24).
Proof. By finiteness of X, there is ε > 0 such that the inclusion of X^δ in the interior of X^{δ+ε} is a homotopy equivalence and Čech_δ(X) is isomorphic to the nerve of the cover of X^{δ+ε} by open balls of radius δ + ε. We can thus apply (20) and get two commutative diagrams via Lemma 3. The diagrams imply φ ≅ γ and ι ≅ j, so the eigenspaces are also isomorphic, as claimed.
For the second step, we add two assumptions: that the map H(M) → H(M^δ) is an isomorphism, and that the map H(M^δ) → H(M^θ) is a monomorphism. This implies that a is surjective and that b is injective; see (25). We claim that under the combined assumptions, the eigenspace of f : M → M for t ∈ F is isomorphic to the eigenspace considered in Lemma 4.
Proof. We have ker a ⊆ ker φ simply because φ = b ∘ f^δ_* ∘ a, and we have ker a = ker ι because ι = b ∘ a with b injective. This implies ker φ ∩ ker ι = ker a. Hence,

E_t(φ, ι) = ker(φ − t·ι) / (ker φ ∩ ker ι) = ker(b ∘ (f^δ_* − t·id) ∘ a) / ker a. (30)

Since b is injective, the kernel in (30) is the preimage under a of ker(f^δ_* − t·id), and since a is surjective, the quotient by ker a is isomorphic to ker(f^δ_* − t·id) = E_t(f^δ_*). This concludes the proof since H(M) ≅ H(M^δ), by assumption, and therefore E_t(f) ≅ E_t(f^δ_*).

Summarizing Lemmas 4 and 5, we have a connection between the eigenspace of the given self-map and the eigenspace module (18).

Computational Experiments
In this section, we analyze the performance of our algorithm experimentally and compare the results with those reported in [3]. For ease of reference, we call the algorithm in [3] the Vietoris-Rips or VR-method and the algorithm in this paper the Delaunay-Čech or DČ-method. We begin with the introduction of the case-studies, self-maps on a circle and a torus, and end with statistics collected during our experiments.

5.1. Expanding Circle Map.
The first case-study is an expanding map from the circle to itself. To add noise, we extend it to a self-map on the plane, f : C → C defined by f(z) = z². While traversing the circle once, the image under f travels around the circle twice. To generate the data, we randomly choose N points on the unit circle, and letting z_i be the i-th such point, we pick a point x_i from an isotropic Gaussian distribution with center z_i and width σ = 0.1. Note that while the noise from a Gaussian distribution is unbounded, for large enough N and sufficiently small σ (in dependence on N), a noisy random sample still has a high probability of satisfying the sampling conditions from Section 4. Write X for the set of points x_i, and let the image of x_i be the point g(x_i) ∈ X that is closest to x_i². As explained earlier, we construct the filtration of Delaunay-Čech complexes of X and compute eigenspace diagrams for all eigenvalues in a sufficiently large finite field to avoid aliasing effects. Our choice is F = Z_1009. Recall that the definition of the eigenspace module in Section 4.1 required a choice of q ≥ 0. For our computations, we always choose the smallest admissible value.
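The data generation just described can be sketched as follows; the fixed seed and the brute-force nearest-neighbor search are our own choices for illustration, not prescribed by the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for reproducibility only

def sample_circle_map(N, sigma=0.1):
    """Noisy sample of the expanding circle map f(z) = z^2.

    Returns the points x_i (as complex numbers) and, for each i, the index
    of the sampled image g(x_i): the sample point closest to x_i^2.
    """
    z = np.exp(2j * np.pi * rng.uniform(size=N))  # N random points on the unit circle
    x = z + sigma * (rng.normal(size=N) + 1j * rng.normal(size=N))  # isotropic Gaussian noise
    g = np.abs(x[None, :] - (x ** 2)[:, None]).argmin(axis=1)  # nearest point to x_i^2
    return x, g
```

By construction, x[g[i]] is at least as close to x_i² as any other sample point, including x_i itself.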
Drawing N = 100 points, we compare the DČ-method of this paper with the VR-method in [3]. For eigenvalue t = 2, both methods give a non-empty eigenspace diagram consisting of a single point. Figure 3 illustrates the results by showing the generating cycle computed with the DČ-method on the left and its image on the right.

Torus Maps.
The second case-study consists of three self-maps on the torus, which we construct as a quotient of the Cartesian plane; see Figure 4.
The 1-dimensional homology group of the torus has only two generating cycles. Letting one wrap around the torus in meridian direction and the other in longitudinal direction, we see that f_1 doubles both generators, f_2 exchanges the generators, and f_3 adds them but also preserves the first generator. Correspondingly, f_1 has the sole eigenvalue t = 2, f_2 has the eigenvalues t = 1 and t = −1, and f_3 has the sole eigenvalue t = 1.

Circle map. Repeating the circle map experiment with N = 100 points ten times, we show the superimposed twenty eigenspace diagrams (ten each for the two methods) in the upper left panel of Figure 5. Points of the VR-method are marked blue, while points of the DČ-method are marked red. The eigenvector for t = 2 is detected each time. However, the DČ-method detects the recurrence consistently earlier than the VR-method, with smaller birth and death values but also with smaller average persistence. The shift of the birth values is easy to rationalize: a cycle arises for the same radius in both filtrations, but remains without image in the VR-method until the radius is large enough to capture the image of every edge in the cycle. The shift of the death values is more difficult to explain and perhaps related to the fact that the DČ-method maps a cycle in one complex, K_r, to a later complex, K_s with r ≤ s ≤ ρ + λr, in the filtration of Delaunay-Čech complexes. Monitoring r and s in 100 runs for a range of numbers of points, we show the average Lipschitz constant and the average ratio s/r in Table 1. We observe no false negatives in this experiment, but we see a small number of false positives reported by the VR-method (the points in the upper right corner of the first panel in Figure 5, all for eigenvalue t = 2). This indicates that the VR-method is more susceptible to noise than the DČ-method. To support our claim, we compute the eigenspace diagrams using the DČ-method with increased noise, and indeed find no false positives; see Figure 6.

Torus maps.
The situation is similar for the three torus maps, whose eigenspace diagrams are shown in the next three panels of Figure 5. The eigenvectors of f_1, f_2, f_3 are represented by points on the upper edges of the panels, indicating that their corresponding homology classes last until the last complex in the filtration. This is different in the VR-method because the Vietoris-Rips complex for large radii is less predictable than the Delaunay-Čech complex. In contrast to the circle map, we observe false positives also in the DČ-method. They show up as points with small to moderate persistence in the three diagrams. We also have false positives in the VR-method, but the results are difficult to compare because for complexity reasons we could not run the algorithm beyond N = 200 points. As another indication of improved accuracy of the DČ-method, we note that the eigenspace diagrams we observe in our experiments do not suffer the problem of abundant eigenvalues discussed in [3, Section 6.4].
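The expected eigenvalues for the three torus maps can be read off their action on the two generators of 1-dimensional homology described above. The sketch below encodes that action as 2×2 matrices in an assumed choice of basis and finds eigenvalues over Z_1009 by brute force on the characteristic polynomial; this is only an illustration, not the method used by either algorithm:

```python
def eigenvalues_mod_p(A, p):
    """Eigenvalues of a 2x2 integer matrix over Z_p, found by brute force
    on the characteristic polynomial det(A - t*I) = 0 (fine for small p)."""
    (a, b), (c, d) = A
    return sorted(t for t in range(p) if ((a - t) * (d - t) - b * c) % p == 0)

p = 1009
f1 = [[2, 0], [0, 2]]  # doubles both generators
f2 = [[0, 1], [1, 0]]  # exchanges the generators
f3 = [[1, 1], [0, 1]]  # adds the generators, preserves the first (assumed convention)
```

Over Z_1009, f_2 has eigenvalues 1 and 1008, i.e., t = ±1, consistent with the exchange of generators.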

Runtime Analysis.
We analyze the running time of the DČ-method for sets of N points, with N varying from 100 to 10000. For the persistent homology computation, we use coefficients in the field Z 1009 . The time is measured on a notebook class computer with 2.6GHz Intel Core i7-6600U processor and 16GB RAM.
Overall running time. We begin with a brief comparison of the two methods, first of the overall running time for computing eigenspace diagrams; see Table 2. As mentioned earlier, the VR-method uses Vietoris-Rips complexes, which grow fast with the number of points and the radius. We could therefore run this method for N = 100 and 150 points only, terminating the run for N = 200 points after half an hour. To get a better feeling for the running time of the DČ-method, we plot the results in Figure 7, adding curves to indicate the asymptotic experimental performance. The outcome suggests that the computational complexity of the DČ-method is between quadratic and cubic in the number of points. We note that more than half of the time is used to compute smallest separating spheres.
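Exponent estimates of this kind come from a straight-line least-squares fit in log-log coordinates; a minimal sketch, where the synthetic data in the test stands in for measured running times:

```python
import numpy as np

def loglog_slope(N, T):
    """Least-squares slope of log T against log N: if T is roughly c * N^s,
    the fitted slope estimates the exponent s."""
    slope, _intercept = np.polyfit(np.log(np.asarray(N, dtype=float)),
                                   np.log(np.asarray(T, dtype=float)), 1)
    return slope
```

On data generated exactly as T = c · N^s, the fit recovers s up to floating-point error.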
Flowing an edge. To gain further insight into the time needed to flow a cycle from the Čech to the Delaunay-Čech complex, we present statistics for collapsing random edges in a variety of settings. The edges are constructed from 100, 1000, and 10000 points chosen along the unit circle with added Gaussian noise, and from 100, 1000, and 10000 points chosen uniformly in [0, 1)². For each data set, we pick two points at random and monitor the effort it takes to flow this edge from the Čech complex to the Delaunay-Čech complex. Specifically, we iterate Φ on each edge individually until the result stabilizes. The statistics in Table 3 show how many times Φ is iterated and how many points are tested inside each call to compute the discrete gradient. The statistics for the circle and the square are similar, with consistently larger numbers when we pick the edges in the square.

Figure 7: The time needed to compute the eigenspace diagram of the expanding circle map with the DČ-method as a function of the number of sampled points. We also show the amount of time spent to compute separating spheres, which is more than half the overall running time. The time for computing the Delaunay-Čech complexes and the persistence diagrams is less than 0.5 seconds in all cases and therefore not shown. To estimate the asymptotic behavior, we use the least-squares technique to fit lines to the log-log data points (best fits 1.5e-08·N^2.66 for the overall running time and 5e-09·N^2.71 for computing spheres); see the right panel. Excluding the results for data with fewer than N = 5000 points, we get slopes 2.66 and 2.71, which suggests that the experimental running time of our algorithm is between quadratic and cubic in the input size.

Table 3: Statistics for flowing 1000 randomly chosen edges from the Čech to the Delaunay-Čech complex. Top two rows: the average and maximum number of iterations of Φ to flow an edge from the Čech to the Delaunay-Čech complex. Bottom two rows: the average and maximum number of points tested to find a set for which the separating sphere does not exist.
Smallest separating spheres. Our analysis shows that the DČ-method spends most of the time computing smallest separating spheres. For this reason, we compare the straightforward implementation (function Separate) with the heuristic improvement (function MoveToFront). We generate the points in [0, 1)² as described above. For both functions, we randomly pick 10000 edges from the Čech complex and another 10000 edges from the Delaunay-Čech complex, and we test for each edge whether or not there exists a sphere that separates the edge from the rest of the points. Figure 8 shows that the running time of both functions depends linearly on the number of points, which is to be expected. The best-fit linear functions suggest that the move-to-front heuristic is faster than the more naive extension of the miniball algorithm to finding smallest separating spheres. The difference is more pronounced for edges of the Čech complex (left panel), for which we expect more points inside the circumscribed spheres and an early contradiction to the existence of a separating sphere. In contrast, the difference in performance is negligible for edges sampled from the Delaunay-Čech complex, for which separating spheres exist by construction.
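As a much-simplified stand-in for the separation test, the following sketch checks only the diametric circle of an edge in the plane: if no other point lies inside it, that circle already separates the edge from the rest. This is a sufficient but not necessary condition (the Gabriel-edge test); the functions Separate and MoveToFront decide existence exactly by computing the smallest separating sphere, which may be larger and shifted:

```python
from math import dist

def diametric_circle_separates(a, b, others, eps=1e-12):
    """Sufficient test for a separating sphere of the edge {a, b} in R^2:
    the circle with the edge as diameter separates the edge from the rest
    if every other point lies strictly outside it."""
    center = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    radius = dist(a, b) / 2.0
    return all(dist(center, q) > radius + eps for q in others)
```

For example, the diametric circle of the edge from (0, 0) to (1, 0) excludes the points (0, 1) and (1, 1), but a point at (0.5, 0.2) lies inside it and makes the test fail.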

Discussion
The main contributions of this paper are the construction of a filtration-preserving chain map from a Čech filtration to the corresponding Čech-Delaunay filtration, the construction of a geometrically meaningful chain self-map on a Delaunay triangulation from a self-map on a point set, and its application to computing eigenspaces of sampled dynamical systems. Following the proof of collapsibility in [1], we get an efficient algorithm for the chain map through implicit treatment of the Čech complex. The reported research raises a number of questions:
• Can we give theoretical upper bounds on the number of individual collapses needed to flow a cycle to its image under the stabilization map of the Čech-Delaunay gradient flow?
• Can the computation of smallest separating spheres be further improved by customizing the procedure to small sets inside the sphere, or by taking advantage of the coherence between successive calls?
We expect that the fast chain map algorithm has applications beyond this paper, including the transport of structural information between meshes and the visualization of topological information shared by related high-dimensional datasets.