Abstract
We call a continuous self-map that reveals itself through a discrete set of point–value pairs a sampled dynamical system. Capturing the available information with chain maps on Delaunay complexes, we use persistent homology to quantify the evidence of recurrent behavior. We establish a sampling theorem to recover the eigenspaces of the endomorphism on homology induced by the self-map. Using a combinatorial gradient flow arising from the discrete Morse theory for Čech and Delaunay complexes, we construct a chain map to transform the problem from the natural but expensive Čech complexes to the computationally efficient Delaunay triangulations. The fast chain map algorithm has applications beyond dynamical systems.
1 Introduction
Suppose \({{\mathbb {M}}}\) is a compact subset of \({{\mathbb {R}}}^n\) and \(f :{{\mathbb {M}}}\rightarrow {{\mathbb {M}}}\) is a continuous self-map with finite Lipschitz constant. We study the thus defined dynamical system in the setting in which f reveals itself through a sample, by which we mean a finite set \(X \subseteq {{\mathbb {M}}}\), a self-map \(g :X \rightarrow X\), and a real number \(\rho \) such that \(\Vert {g(x)} - {f(x)}\Vert \le \rho \) for every \(x \in X\). We call \(\rho \) the approximation constant of the sample. Calling this setting a sampled dynamical system, we formalize a concept that appears already in Edelsbrunner et al. (2015). It is less demanding than the classical discrete dynamical system, in which time is discrete but space is not (Kaczynski et al. 2004). We believe that this relaxation is essential to make inroads into experimental studies, in which pairs (x, f(x)) can be observed individually, while the self-map remains in the dark. The approximation constant models the experimental uncertainty, but it is also needed to accommodate a finite sample. Consider for example the map \(f :[0,1] \rightarrow [0,1]\) defined by \(f(x)=\frac{x}{2}\). Letting u be the smallest positive value in a finite set \(X\subseteq [0,1]\), its image does not belong to X: \(f(u)\not \in X\). We call
\[ \mathrm{Lip}(g) = \max _{x \ne y \in X} \frac{\Vert {g(x)} - {g(y)}\Vert }{\Vert {x} - {y}\Vert } \]
the Lipschitz constant of g. It is not necessarily close to the Lipschitz constant of f, even in the case in which the \(\rho \)-neighborhoods of the points in X cover \({{\mathbb {M}}}\). However, Kirszbraun proved that for every \(g :X \rightarrow X\) there is a continuous extension \(f_0 :{{\mathbb {M}}}\rightarrow {{\mathbb {M}}}\) that has the same Lipschitz constant. Specifically, this is a consequence of the more general Kirszbraun Extension Property (Kirszbraun 1934; Wells and Williams 1975). Let \({{\mathbb {F}}}\) be a fixed field and let \({\textsf {H}}({{\mathbb {M}}};{{\mathbb {F}}})\) denote the homology of \({{\mathbb {M}}}\) with coefficients in \({{\mathbb {F}}}\). Hence, \({\textsf {H}}({{\mathbb {M}}};{{\mathbb {F}}})\) is a vector space. Throughout the paper we only use homology with coefficients in the field \({{\mathbb {F}}}\), so we abbreviate the notation to \({\textsf {H}}({{\mathbb {M}}})\). The map \(f_0\) induces a linear map \({\textsf {H}}(f_0) :{\textsf {H}}({{\mathbb {M}}}) \rightarrow {\textsf {H}}({{\mathbb {M}}})\). A natural characterization of this linear map is given by its t-eigenvectors. They capture homology classes invariant under the self-map up to a multiplicative factor t, called an eigenvalue. The t-eigenvectors span the t-eigenspace of the map. Starting with a finite filtration of the domain of the map, we get t-eigenspaces at every step, connected by linear maps, and therefore a finite path in the category of vector spaces, called an eigenspace module. The Stability Theorem in Edelsbrunner et al. (2015) implies a connection between the dynamics of g and \(f_0\), namely that for every eigenvalue t the interleaving distance between the eigenspace modules induced by g and by \(f_0\) is at most the Hausdorff distance between the graph of g and that of \(f_0\).
Furthermore, the Inference Theorem in the same paper implies that for small enough \(\rho \) and any eigenvalue, the eigenspace module for g gives the correct dimension of the corresponding eigenspace of the endomorphism on the homology of \({{\mathbb {M}}}\) induced by \(f_0\).
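To make the notion of t-eigenspaces concrete, here is a minimal sketch, not part of the paper's method: it computes the dimension of the t-eigenspace of a linear map with coefficients in the finite field \({\mathbb {Z}}/5\), where the matrix A is an invented example of an induced endomorphism on homology.

```python
# Sketch (not from the paper): dimensions of t-eigenspaces of a linear map
# on homology with coefficients in Z/5. The matrix A is an invented example.

P = 5  # order of the prime field

def rank_mod_p(rows, p=P):
    """Rank of an integer matrix over Z/p via Gaussian elimination."""
    m = [[x % p for x in row] for row in rows]
    nrows, ncols = len(m), len(m[0]) if m else 0
    rank, col = 0, 0
    while rank < nrows and col < ncols:
        pivot = next((r for r in range(rank, nrows) if m[r][col]), None)
        if pivot is None:
            col += 1
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], p - 2, p)  # multiplicative inverse mod prime p
        m[rank] = [(inv * x) % p for x in m[rank]]
        for r in range(nrows):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(x - f * y) % p for x, y in zip(m[r], m[rank])]
        rank, col = rank + 1, col + 1
    return rank

def t_eigenspace_dim(A, t, p=P):
    """dim ker(A - t*I), the dimension of the t-eigenspace."""
    n = len(A)
    M = [[(A[i][j] - (t if i == j else 0)) % p for j in range(n)] for i in range(n)]
    return n - rank_mod_p(M, p)

# A doubles the first homology generator and fixes the second.
A = [[2, 0],
     [0, 1]]
dims = {t: t_eigenspace_dim(A, t) for t in range(P)}
```

For this A the only nontrivial eigenspaces occur at t = 1 and t = 2, each of dimension 1.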
1.1 Prior work and results
We employ the discrete Morse theory for Čech and Delaunay complexes developed in Bauer and Edelsbrunner (2017) to address the computational problem of estimating the action on homology of a self-map from a finite sample. Our results continue the program started in Edelsbrunner et al. (2015), with the declared goal to embed the concept of persistent homology in the computational approach to dynamical systems. Specifically, we contribute by improving the computation of persistent recurrent dynamics. This improvement is based on several interacting innovations, which lead to better theoretical guarantees as well as better computational efficiency than in Edelsbrunner et al. (2015):

1.
We use the parallel filtrations of Čech and Delaunay complexes and the existence of a collapse from the former to the latter established in Bauer and Edelsbrunner (2017) to define chain maps between Delaunay complexes.

2.
We construct the chain maps by implementing the collapse implicitly, avoiding the prohibitive construction of the Čech complex.

3.
We establish inference results with a less stringent sampling condition than given in Edelsbrunner et al. (2015), depending only on the self-map and the domain.
The improved computational efficiency derives primarily from the use of Delaunay rather than Čech or Vietoris–Rips complexes. Indeed, in the targeted 2-dimensional case, the size of the Delaunay triangulation is at most six times the number of data points, while the Čech and Vietoris–Rips complexes reach exponential size for large radii. The improved theoretical guarantees rely on the use of chain maps that avoid the information loss caused by the interaction of local expansion and partial maps observed in Edelsbrunner et al. (2015). The improvements are obtained using refined mathematical and computational methods as mentioned above.
First, we explain how we use Čech complexes, namely as an intermediate step to construct the chain maps from one Delaunay complex to another. Recall the Kirszbraun intersection property for balls established by Gromov (1987): letting Q be a finite set of points in \({{\mathbb {R}}}^n\), and \(g :Q \rightarrow {{\mathbb {R}}}^n\) a map that satisfies \(\Vert {g(x)} - {g(y)}\Vert \le \Vert {x} - {y}\Vert \) for all \(x, y \in Q\), then
\[ \bigcap _{x \in Q} B_r(x) \ne \emptyset \quad \Longrightarrow \quad \bigcap _{x \in Q} B_r(g(x)) \ne \emptyset , \]
in which \(B_r(x)\) is the closed ball with radius r and center x. Similarly, if we weaken the condition to \(\Vert {g(x)} - {g(y)}\Vert \le \lambda \Vert {x} - {y}\Vert \), for some \(\lambda > 1\), then the common intersection of the balls \(B_{\lambda r} (g(x))\) is nonempty. This implies that the image of the Delaunay complex for radius r is included in the Čech complex for radius \(\lambda r\). To return to the Delaunay triangulation, we exploit the collapsibility of the Čech complex for radius \(\lambda r\) to the Delaunay complex of radius \(\lambda r\) recently established in Bauer and Edelsbrunner (2017). Second, we explain how we collapse without explicit construction of the Čech complex. Starting with a simplex, we use a modification of Welzl’s miniball algorithm (Welzl 1991) to follow the flow induced by the collapse step by step until we arrive at the Delaunay complex, where the image of the simplex is now a chain. The expected running time for a single step is linear in the number of points, so we have a fast algorithm provided the number of steps in the collapse is not large. While we do not have a bound on this number, our computational experiments provide evidence that it is typically small.
We give a global picture of our algorithm in Fig. 1. In the top row, we see a filtration of Delaunay–Čech complexes, which are convenient substitutes for the better known Delaunay complexes (also called alpha complexes) with the same homotopy type. The left map down from the top row is inclusion, and the right map down is the chain map induced by g. As indicated, the right map is composed of the inclusion into the Čech complex and the discrete flow induced by the collapse. In the bottom row, we see the eigenspace module computed by comparing the left and right vertical maps.
1.2 Outline
Section 2 describes the background in discrete Morse theory, its application to Čech and Delaunay complexes, and its extension to persistent homology. Section 3 addresses the algorithmic aspects of our method, which include the proof of collapsibility and the generalization of the miniball algorithm. Section 4 explains the circumstances under which the eigenspace of the self-map can be obtained from the eigenspace module of the discrete sample. Section 5 presents the results of our computational experiments, comparing them with the algorithm in Edelsbrunner et al. (2015). Section 6 concludes this paper.
2 Background
In this section, we introduce concepts from discrete Morse theory (Forman 1998) and apply them to Čech as well as to Delaunay complexes of finite point sets (Bauer and Edelsbrunner 2017). We begin with the definition of the complexes and finish by complementing the picture with the theory of persistent homology.
2.1 Geometric complexes
Our approach to dynamical systems is based on Čech complexes and Delaunay complexes—two common ingredients in topological data analysis—and the Delaunay–Čech complexes, which offer a convenient computational shortcut.
2.1.1 Čech complexes
Let \(X \subseteq {{\mathbb {R}}}^n\) be finite, \(r\ge 0\), and \(B_{r}{({x})}\) be the closed ball of points at distance r or less from \(x \in X\). The Čech complex of X for radius r consists of all subsets of X for which the balls of radius r have a nonempty common intersection:
\[ {{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} = \Bigl \{ Q \subseteq X \ \Big |\ \bigcap _{x \in Q} B_{r}{({x})} \ne \emptyset \Bigr \}; \]
it is isomorphic to the nerve of the balls of radius r centered at the points in X. Equivalently, \({{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}\) consists of all subsets \(Q \subseteq X\) having an enclosing sphere of radius at most r. For r smaller than half the distance between the two closest points, \({{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}=X\), and for r larger than \(\sqrt{2}/2\) times the distance between the two farthest points, \({{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}\) is the full simplex on the vertices X, denoted by \(\Delta (X)\). The size of \(\Delta (X)\) is exponential in the size of X, which motivates the following construction.
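As a concrete illustration of the definition, here is a brute-force sketch of our own (not the paper's algorithm) that builds the Čech complex of a planar point set: a subset belongs to \({{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}\) iff its smallest enclosing circle has radius at most r, and in the plane that circle is determined by two or three of the points.

```python
# Brute-force Cech complex in the plane (illustration only, cubic candidates).
from itertools import combinations
from math import dist

def circumcircle(a, b, c):
    """Center and radius of the circle through three points, or None if collinear."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), dist((ux, uy), a)

def min_enclosing_radius(Q):
    """Smallest enclosing circle radius, enumerating 2- and 3-point candidates."""
    Q = list(Q)
    if len(Q) == 1:
        return 0.0
    candidates = []
    for a, b in combinations(Q, 2):  # diametral circles
        candidates.append((((a[0] + b[0]) / 2, (a[1] + b[1]) / 2), dist(a, b) / 2))
    for a, b, c in combinations(Q, 3):
        cc = circumcircle(a, b, c)
        if cc is not None:
            candidates.append(cc)
    eps = 1e-9
    return min(r for center, r in candidates
               if all(dist(center, q) <= r + eps for q in Q))

def cech_complex(X, r):
    """All simplices (as vertex tuples) of the Cech complex for radius r."""
    return [Q for k in range(1, len(X) + 1) for Q in combinations(X, k)
            if min_enclosing_radius(Q) <= r]

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
```

For this right triangle, the two short edges appear at r = 0.5 and the long edge together with the full triangle at r = sqrt(2)/2.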
2.1.2 Delaunay triangulations
The Voronoi domain of a point \(x \in X\) consists of all points \(u \in {{\mathbb {R}}}^n\) for which x minimizes the distance from u: \({\mathrm{dom}}{({x},{X})} = \{ u \in {{\mathbb {R}}}^n \mid \Vert {x} - {u}\Vert \le \Vert {y} - {u}\Vert , \text{ for } \text{ all } y \in X\}\). The Voronoi tessellation of X is the set of Voronoi domains \({\mathrm{dom}}{({x},{X})}\) with \(x \in X\). Assuming general position of the points in X, any \(p+1\) Voronoi domains are either disjoint or they intersect in a common \((n-p)\)-dimensional face. The Delaunay triangulation of X consists of all subsets of X for which the Voronoi domains have a nonempty common intersection:
\[ {\mathrm{Del}}_{}{({X})} = \Bigl \{ Q \subseteq X \ \Big |\ \bigcap _{x \in Q} {\mathrm{dom}}{({x},{X})} \ne \emptyset \Bigr \}; \]
it is isomorphic to the nerve of the Voronoi tessellation. Equivalently, \({\mathrm{Del}}_{}{({X})}\) consists of all subsets \(Q \subseteq X\) having an empty circumsphere (containing no points of X in its interior). Again assuming general position, the Delaunay triangulation is an n-dimensional simplicial complex with natural geometric realization in \({{\mathbb {R}}}^n\). The Upper Bound Theorem for convex polytopes implies that the number of simplices in \({\mathrm{Del}}_{}{({X})}\) is at most some constant times \({\mathrm{card}}\,{X}\) to the power \(\lceil n/2 \rceil \). In \(n=2\) dimensions, this is linear in \({\mathrm{card}}\,{X}\), which compares favorably to the exponentially many simplices in the Čech complexes.
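The empty-circumsphere characterization translates directly into a brute-force construction in the plane. The sketch below is for illustration only (quartic running time; real implementations use incremental or flip algorithms), and all names are ours.

```python
# Brute-force planar Delaunay triangulation via the empty-circumcircle test.
from itertools import combinations
from math import dist

def circumcircle(a, b, c):
    """Center and radius of the circle through three points, or None if collinear."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), dist((ux, uy), a)

def delaunay_triangles(X):
    """Triangles with an empty circumcircle: the 2-simplices of Del(X)."""
    eps = 1e-9
    tris = []
    for a, b, c in combinations(X, 3):
        cc = circumcircle(a, b, c)
        if cc is None:
            continue
        center, r = cc
        if all(dist(center, x) >= r - eps for x in X if x not in (a, b, c)):
            tris.append((a, b, c))
    return tris

# Four points in general position: the triangulation has exactly two triangles.
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.1)]
tris = delaunay_triangles(X)
```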
2.1.3 Delaunay–Čech complexes
To combine the small size of the Delaunay triangulation with the scale-dependence of the Čech complex, we define the Delaunay–Čech complex of X for radius r as the intersection of the two:
\[ {{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} = {{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} \cap {\mathrm{Del}}_{}{({X})}. \]
Observe that the Delaunay triangulation effectively curbs the explosive growth of the number of simplices, but does so only if the points are in general position. We will therefore assume that the points in X are in general position, justifying the assumption with a computational simulation that enforces it (Edelsbrunner and Mücke 1990).
2.1.4 Delaunay complexes
There is a more direct way to select subcomplexes of the Delaunay triangulation using r as a parameter. Specifically, the Delaunay complex of X for radius r consists of all subsets of X for which the restriction of the Voronoi domains to the balls of radius r have a nonempty common intersection:
\[ {\mathrm{Del}}_{r}{({X})} = \Bigl \{ Q \subseteq X \ \Big |\ \bigcap _{x \in Q} \bigl ( B_{r}{({x})} \cap {\mathrm{dom}}{({x},{X})} \bigr ) \ne \emptyset \Bigr \}; \]
it is isomorphic to the nerve of the restricted Voronoi domains. Equivalently, \({\mathrm{Del}}_{r}{({X})}\) consists of all subsets \(Q \subseteq X\) having an empty circumsphere of radius at most r. The Delaunay complexes, also known as alpha complexes, are the better known relatives of the Delaunay–Čech complexes. They satisfy \({\mathrm{Del}}_{r}{({X})}\subseteq {{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}\), and it is easy to exhibit sets X and radii r for which the two complexes are different. See Fig. 2 for an illustrative example. As proved in Bauer and Edelsbrunner (2017), the Delaunay complex has the same homotopy type as the Delaunay–Čech complex for the same radius. This is indeed the reason we can freely use the latter as a substitute for the former.
2.2 Radius functions
Structural properties of the geometric complexes are conveniently expressed in terms of their radius functions. In each case, the function maps a simplex to the smallest radius, r, for which the simplex belongs to the complex:
\[ {\mathcal {R}}_{\mathrm{C}}(Q) = \min \{ r \mid Q \in {{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} \}, \quad {\mathcal {R}}_{\mathrm{DC}}(Q) = \min \{ r \mid Q \in {{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} \}, \quad {\mathcal {R}}_{\mathrm{D}}(Q) = \min \{ r \mid Q \in {\mathrm{Del}}_{r}{({X})} \}. \]
All three functions are monotonic, by which we mean that the radius assigned to any simplex is greater than or equal to the radii assigned to its faces. This property is sufficient to define their persistence diagrams, as we will see shortly. However, we will need more, namely compatible discrete gradients of the radius functions. After introducing the discrete Morse theory of Forman (1998) as a framework within which discrete gradients can be defined, we will return to the question of compatibility.
2.2.1 Discrete Morse theory
In a nutshell, a monotonic function on a simplicial complex, \(F:K \rightarrow {{\mathbb {R}}}\), is a discrete Morse function if any two contiguous sublevel sets differ by a single elementary collapse or a critical simplex. We are now more precise. A pair consists of two simplices, \(P \subseteq Q\), with dimensions \({\mathrm{dim}}\,{Q} = 1 + {\mathrm{dim}}\,{P}\). A discrete vector field is a partition, V, of K into pairs and singletons. It is acyclic if there is a monotonic function, \(F:K \rightarrow {{\mathbb {R}}}\), with \(F(P) = F(Q)\) iff P and Q belong to a pair in V. Such a function \(F\) is called a discrete Morse function, and V is its discrete gradient. A simplex is critical if it is in a singleton of V, and it is noncritical if it belongs to a pair of V.
The reason for our interest in this formalism is its connection to the homotopy type of complexes. To explain, suppose \(Q \in K\) maximizes \(F\). If Q belongs to a pair \((P, R) \in V\), then we can remove both and obtain a smaller simplicial complex, \(K {\setminus } \{P, R\}\). We refer to this operation as an elementary collapse, we say K collapses to the smaller complex, denoted \(K \searrow K {\setminus } \{P, R\}\), and we note that both complexes have the same homotopy type. If on the other hand Q is a critical simplex, its removal changes the homotopy type of the complex.
2.2.2 Collapsing the geometric complexes
The radius functions are not necessarily discrete Morse functions, but they are amenable to discrete gradients. To explain what we mean, consider a monotonic function, \(F:K \rightarrow {{\mathbb {R}}}\), and call \(Q \in K\) critical if \(F(Q)\) is different from the values of all proper faces and cofaces of Q. We say that an acyclic partition of K into pairs and singletons is compatible with \(F\) if every sublevel set of \(F\) is a union of pairs and singletons in this partition, and Q is in a singleton of the partition iff Q is a critical simplex of \(F\). The proof of collapsibility in Bauer and Edelsbrunner (2017) hinges on the fact that there is an acyclic partition, V, of \(\Delta (X)\) that is simultaneously compatible with \({\mathcal {R}}_{\mathrm{C}}\), \({\mathcal {R}}_{\mathrm{D}}\), and \({\mathcal {R}}_{\mathrm{DC}}\). Indeed, the existence of this acyclic partition is at the core of the proof of Theorem 5.10 in Bauer and Edelsbrunner (2017), which asserts that
\[ {{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} \searrow {{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} \searrow {\mathrm{Del}}_{r}{({X})} \qquad (10) \]
for every finite set \(X \subseteq {{\mathbb {R}}}^n\) in general position, and every \(r \ge 0\). Observe that this implies that the three radius functions have the same set of critical simplices. Indeed, these are the sets \(Q \subseteq X\) for which the smallest enclosing sphere passes through all points of Q and no point of X lies inside this sphere.
2.3 Persistent homology
In its original conception, persistent homology starts with a filtration of a topological space, it applies the homology functor for coefficients in a field \({{\mathbb {F}}}\), and it decomposes the resulting sequence of vector spaces into indecomposable summands (Edelsbrunner et al. 2002; Zomorodian and Carlsson 2005). This decomposition is unique and has an intuitive interpretation in terms of births and deaths of homology classes. We flesh out the idea using the filtration of Delaunay–Čech complexes as an example.
Let \(X \subseteq {{\mathbb {R}}}^n\) be finite and in general position, and recall that \({\mathcal {R}}_{\mathrm{DC}}:{\mathrm{Del}}_{}{({X})} \rightarrow {{\mathbb {R}}}\) is the radius function whose sublevel sets are the Delaunay–Čech complexes. \({\mathcal {R}}_{\mathrm{DC}}\) is monotonic but not necessarily discrete Morse. The Delaunay triangulation is finite, which implies that \({\mathcal {R}}_{\mathrm{DC}}\) has only finitely many sublevel sets. To index them consecutively, we write \(r_1< r_2< \cdots < r_N\) for the values and \(K_i = {\mathcal {R}}_{\mathrm{DC}}^{-1}[0,r_i]\) for the ith Delaunay–Čech complex of X. Applying the homology functor, we get
\[ {\textsf {H}}(K_1) \rightarrow {\textsf {H}}(K_2) \rightarrow \cdots \rightarrow {\textsf {H}}(K_N), \qquad (11) \]
in which we write \({\textsf {H}}(K_i)\) for the direct sum of the homology groups of all dimensions. Together with the maps \(h_{i,j} :{\textsf {H}}(K_i) \rightarrow {\textsf {H}}(K_j)\) induced by the inclusions \(K_i \subseteq K_j\), which are linear, we call this diagram the persistent homology of the filtration. More generally, a diagram of vector spaces with this shape is called a persistence module. Such a module is indecomposable if all vector spaces are trivial, except for an interval of 1-dimensional vector spaces, \({{\mathbb {F}}}\rightarrow {{\mathbb {F}}}\rightarrow \cdots \rightarrow {{\mathbb {F}}}\), that are connected by isomorphisms. Indeed, (11), and more generally, any persistence module of finite-dimensional vector spaces, can be written as the direct sum of indecomposable modules, and this decomposition is essentially unique. See Edelsbrunner et al. (2015, Basis Lemma) for a constructive proof. If an interval starts at position i and ends at position \(j-1\), then we say there is a homology class born at \(K_i\) that dies entering \(K_j\). To allow for the case \(j-1 = N\), we introduce \(r_{N+1} = \infty \) and represent the interval by the birth–death pair \((r_i, r_j)\). Its dimension is the homological degree in which the class arises, and its persistence is \(r_j-r_i\).
By construction, the rank of \({\textsf {H}}(K_i)\) is the number of indecomposable modules whose intervals cover \(r_i\). It is readily computed from the multiset of birth–death pairs, which we call the persistence diagram of the radius function, denoted \({\textsf {Dgm}}_{}{({{\mathcal {R}}_{\mathrm{DC}}})}\). More generally, we can use this diagram to compute the rank of the image of \(h_{i,j}\) for \(i \le j\); see e.g. Edelsbrunner and Harer (2010, p. 152).
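These rank computations are elementary once the diagram is available. A minimal sketch with our own conventions (intervals as (birth, death) pairs, death = inf for essential classes; the diagram itself is invented):

```python
# Ranks from a persistence diagram: Betti numbers and ranks of images.
from math import inf

def betti_at(diagram, r):
    """Rank of H(K_i) at value r: the number of intervals [birth, death) covering r."""
    return sum(1 for birth, death in diagram if birth <= r < death)

def rank_image(diagram, r_i, r_j):
    """Rank of the image of h_{i,j}: intervals born by r_i that survive past r_j."""
    return sum(1 for birth, death in diagram if birth <= r_i and r_j < death)

# Invented diagram: one essential class and two finite ones.
diagram = [(0.0, inf), (0.2, 0.9), (0.5, 0.7)]
```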
3 Computing the Čech–Delaunay gradient flow
The main algorithmic challenge we face in this paper is the local computation of the gradient that induces the collapse of the Čech to the Delaunay–Čech complex. Specifically, we trace chains through the collapse, using their images to construct the chain map that is central to our analysis. We explain the algorithm in three stages: first sketching the relevant steps of the existence proof, second describing how we compute minimum separating spheres, and third explaining the discrete flow that constructs the chain map. Once we arrive at the eigenspaces, we compute their persistent homology with the software implementing the algorithms in Edelsbrunner et al. (2015).
3.1 Computing separating spheres
At the core of the discrete gradient flow is the construction of smallest separating spheres, which are defined as follows. Let \(X \subseteq {{\mathbb {R}}}^n\) be a finite set of points in general position, and let \(A \subseteq X\) be a subset. An \((n-1)\)-dimensional sphere separates another subset \(Q \subseteq X\) from A if

all points of Q lie inside or on the sphere, and

all points of A lie outside or on the sphere.
If a point belongs to both A and Q, then it must lie on the separating sphere. Given Q and A, a separating sphere may or may not exist, and if it exists, then there is a unique smallest separating sphere, which we denote S(Q, A).
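In the plane, the definition can be turned into a brute-force search (our own sketch, not the randomized algorithm developed below): since a smallest separating circle is determined by at most three points lying on it, we enumerate all candidate boundary sets and keep the smallest valid circle.

```python
# Brute-force smallest separating circle in the plane (illustration only).
from itertools import combinations
from math import dist

def circumcircle(a, b, c):
    """Center and radius of the circle through three points, or None if collinear."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), dist((ux, uy), a)

def smallest_circle_on(T):
    """Smallest circle passing through every point of T (|T| <= 3), or None."""
    T = list(T)
    if len(T) == 1:
        return T[0], 0.0
    if len(T) == 2:
        return ((T[0][0] + T[1][0]) / 2, (T[0][1] + T[1][1]) / 2), dist(T[0], T[1]) / 2
    return circumcircle(*T)

def smallest_separating_circle(Q, A):
    """S(Q, A): Q inside-or-on, A outside-or-on; None if no separating circle exists."""
    eps, best = 1e-9, None
    pts = list(dict.fromkeys(list(Q) + list(A)))  # deduplicate, keep order
    for k in (1, 2, 3):
        for T in combinations(pts, k):
            sc = smallest_circle_on(T)
            if sc is None:
                continue
            center, r = sc
            if (all(dist(center, q) <= r + eps for q in Q)
                    and all(dist(center, p) >= r - eps for p in A)
                    and (best is None or r < best[1])):
                best = (center, r)
    return best
```

For Q = {(0, 0), (2, 0)} and A = {(3, 0)} the result is the circle with center (1, 0) and radius 1; for A = {(1, 0)} no separating circle exists and the function returns None.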
The smallest separating sphere can be characterized in geometric terms as follows. For a sphere S, write \({{\,\mathrm{Incl}\,}}{S}, {{\,\mathrm{Excl}\,}}{S} \subseteq X\) for the subsets of enclosed and excluded points, with \({{\,\mathrm{On}\,}}{S} = {{\,\mathrm{Incl}\,}}{S} \cap {{\,\mathrm{Excl}\,}}{S}\). Now assume that S is the smallest circumsphere of the points \({{\,\mathrm{On}\,}}{S}\), i.e., the center z of S lies in their affine hull:
\[ z = \sum _{x \in {{\,\mathrm{On}\,}}{S}} \rho _x\, x \quad \text {with} \quad \sum _{x \in {{\,\mathrm{On}\,}}{S}} \rho _x = 1. \]
By general position, the affine combination is unique, and \(\rho _x \ne 0\) for all \(x \in {{\,\mathrm{On}\,}}{S}\). We call
\[ {{\,\mathrm{Front}\,}}{S} = \{ x \in {{\,\mathrm{On}\,}}{S} \mid \rho _x > 0 \} \quad \text {and} \quad {{\,\mathrm{Back}\,}}{S} = \{ x \in {{\,\mathrm{On}\,}}{S} \mid \rho _x < 0 \} \]
the front face and the back face of \({{\,\mathrm{On}\,}}{S}\), respectively. The following lemma states necessary and sufficient conditions for a sphere to be a smallest separating sphere. It is a special case of the general Karush–Kuhn–Tucker conditions, expressed in geometric and combinatorial terms.
Lemma 1
(Combinatorial KKT Conditions Bauer and Edelsbrunner 2017) Let X be a finite set of points in general spherical position, and let \(Q, A \subseteq X\). A sphere S satisfies \(S = S(Q,A)\) iff

(i)
S is the smallest circumsphere of the points \({{\,\mathrm{On}\,}}{S}\),

(ii)
\({{\,\mathrm{Front}\,}}{S} \subseteq Q \subseteq {{\,\mathrm{Incl}\,}}{S}\), and

(iii)
\({{\,\mathrm{Back}\,}}{S} \subseteq A \subseteq {{\,\mathrm{Excl}\,}}{S}\).
Based on these optimality conditions, we can state a recursive formula for the smallest separating sphere.
Lemma 2
Assume that S(Q, A) exists. If \(x \in Q\), then
\[ S(Q, A) = \begin{cases} S(Q {\setminus } \{x\}, A) & \text {if } S(Q {\setminus } \{x\}, A) \text { encloses } x, \\ S(Q, A \cup \{x\}) & \text {otherwise.} \end{cases} \]
Similarly, if \(x \in A\), then
\[ S(Q, A) = \begin{cases} S(Q, A {\setminus } \{x\}) & \text {if } S(Q, A {\setminus } \{x\}) \text { excludes } x, \\ S(Q \cup \{x\}, A) & \text {otherwise.} \end{cases} \]
Proof
We only show the first part, with \(x \in Q\), the other part being analogous.
First, assume that \(S:=S({Q {\setminus } \{x\}, A})\) encloses x. Then we have \(Q \subseteq {{\,\mathrm{Incl}\,}}S\), and thus \(S({Q, A}) = S\) by Lemma 1.
On the other hand, if \(S({Q {\setminus } \{x\}, A})\) does not enclose x, then we must have \(S := S({Q, A}) \ne S({Q {\setminus } \{x\}, A})\), and thus Lemma 1 gives \({{\,\mathrm{Front}\,}}S \not \subseteq Q {\setminus } \{x\}\). But Lemma 1 also gives \({{\,\mathrm{Front}\,}}S \subseteq Q\), and so we must have \(x \in {{\,\mathrm{Front}\,}}S\). Since \({{\,\mathrm{Front}\,}}S \subseteq {{\,\mathrm{On}\,}}S \subseteq {{\,\mathrm{Excl}\,}}S\), it follows that \(A \cup \{x\} \subseteq {{\,\mathrm{Excl}\,}}S\), and thus \(S({Q, A \cup \{x\}}) = S\) by Lemma 1. \(\square \)
We now turn these results into an algorithm for computing the smallest separating sphere of sets \(Q, A \subseteq X\), or deciding that no separating sphere exists. We pattern the algorithm after the randomized algorithm for the smallest enclosing sphere described in Welzl (1991), which we recall first.
3.1.1 Welzl’s randomized miniball algorithm
The smallest enclosing sphere of a set \(Q \subseteq {{\mathbb {R}}}^n\) is determined by at most \(n+1\) of the points. In other words, there is a subset \(R \subseteq Q\) of at most \(n+1\) points such that the smallest enclosing sphere of R is also the smallest enclosing sphere of Q. The algorithm below makes essential use of this observation. It partitions Q into two disjoint subsets: R containing the points we know lie on the smallest enclosing sphere, and \(P = Q {\setminus } R\). Initially, \(R = \emptyset \) and \(P = Q\). In a general step, the algorithm removes a random point from P and tests whether it lies on or inside the recursively computed smallest enclosing sphere of the remaining points. If yes, the point is discarded, and if no, the point is added to R.
Since the algorithm makes random choices, its running time is a random variable. Remarkably, the expected running time is linear in the number of points in Q, and the reason is the high probability that the randomly chosen point, x, lies inside the recursively computed smallest enclosing sphere and can therefore be discarded.
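The recursion just described can be written compactly in the plane; the following sketch is ours (function names invented), under the general-position assumption that no three boundary points are collinear.

```python
# Welzl's randomized miniball algorithm, specialized to the plane.
import random
from math import dist

def circle_from(R):
    """Smallest circle with all points of R (|R| <= 3) on its boundary."""
    if not R:
        return (0.0, 0.0), 0.0
    if len(R) == 1:
        return R[0], 0.0
    if len(R) == 2:
        (ax, ay), (bx, by) = R
        return ((ax + bx) / 2, (ay + by) / 2), dist(R[0], R[1]) / 2
    (ax, ay), (bx, by), (cx, cy) = R
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), dist((ux, uy), (ax, ay))

def welzl(P, R=()):
    """Smallest circle enclosing P with the points of R forced onto the boundary."""
    P = list(P)
    if not P or len(R) == 3:
        return circle_from(list(R))
    x = P.pop(random.randrange(len(P)))  # remove a random point
    center, r = welzl(P, R)              # solve without it
    if dist(center, x) <= r + 1e-9:      # enclosed anyway: discard x
        return center, r
    return welzl(P, tuple(R) + (x,))     # otherwise x lies on the boundary

center, radius = welzl([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)])
# for the unit square: center (0.5, 0.5), radius sqrt(2)/2
```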
3.1.2 Generalization to smallest separating spheres
Rather than enclosing spheres, we need separating spheres to compute the collapse. Here we get an additional case, when the sphere does not exist, which we indicate by returning null. As before, we work with two sets of points: R containing the points we know lie on the smallest separating sphere, and P containing the rest. Initially, \(R = Q \cap A\) and \(P = (Q \cup A) {\setminus } R\). Each point has enough memory to remember whether it belongs to Q and thus needs to lie on or inside the sphere, or to A and thus needs to lie on or outside the sphere. We say the point contradicts S if it lies on the wrong side.
Since the smallest separating sphere is again determined by at most \(n+1\) of the points, the expected running time of the algorithm is linear in the number of points, as before. The correctness of the algorithm is warranted by Lemma 2.
3.1.3 Iterative version with movetofront heuristic
Because finding separating spheres is at the core of our algorithm, we are motivated to improve its running time, even if it is only by a constant factor. Following the advice in Gärtner (1999), we turn the tail recursion into an iteration and combine this with a move-to-front heuristic. Indeed, if a point contradicts the current sphere, it is likely that it does the same to a later computed sphere. The earlier the point is tested, the faster this new sphere can be rejected. Storing the points in a linear list, early testing of this point can be enforced by moving it to the front of the list. Write \({\mathcal {L}}\) for the list, which contains all points of \(Q \cup A\), and write \({\mathcal {L}}(i)\) for the point stored at the ith location. As before, each point remembers whether it belongs to Q, to A, or to both. In addition, we mark the points we know lie on the smallest separating sphere as members of R, initializing this set to \(R = Q \cap A\). Furthermore, we initialize \(m = {\mathrm{card}}\,{(Q \cup A)}\).
Section 5 will present experimental evidence that the move-to-front heuristic accelerates the computations.
3.2 Collapsing non-Delaunay simplices
Recall that the collapsing sequence in (10) is facilitated by a discrete gradient, W, that is compatible with all three radius functions. To collapse a Čech complex to the Delaunay–Čech complex, we only need the pairs in W that partition the difference: \({{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} {\setminus } {{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} \subseteq \Delta (X) {\setminus } {\mathrm{Del}}_{}{({X})}\). This difference is indeed partitioned solely by pairs because all singletons contain critical simplices, which belong to \({\mathrm{Del}}_{}{({X})}\). The discrete gradient on the full simplex \(\Delta (X)\) determined by those non-Delaunay pairs will be denoted by V.
Following Bauer and Edelsbrunner (2017, Lemma 5.8), we note that every pair of the discrete gradient V is of the form (P, R) with \(P \subseteq R \subseteq X\) and \(R {\setminus } P = \{x\}\) for a unique vertex \(x \in R\). In other words, \((P,R) \in V\) uniquely determines the vertex in which the two simplices differ, and given \(Q \in \{P, R\}\) together with this vertex, we can recover the pair as \((P, R) = (Q {\setminus } \{x\}, Q \cup \{x\})\). We therefore introduce the map \(\psi :\Delta (X) {\setminus } {\mathrm{Del}}_{}{({X})} \rightarrow X\) defined by mapping the non-Delaunay simplex Q to the corresponding vertex, \(\psi (Q) = x\), and we use this map to represent the discrete gradient V.
We now describe the construction of the map \(\psi \) from Bauer and Edelsbrunner (2017) that defines the discrete gradient V, whose pairs partition the non-Delaunay simplices. To this end, we choose an arbitrary but fixed total ordering \(x_1, x_2, \ldots , x_N\) of the points in X. For each \(0 \le j \le N\), we write \(X_j = \{x_i \mid i \le j\}\) for the prefix. Given a non-Delaunay simplex \(Q \in \Delta (X) {\setminus } {\mathrm{Del}}_{}{({X})}\), let \(E_Q \subseteq X\) be the subset of points that lie on or outside of the smallest enclosing sphere of Q, and for each \(0 \le j \le N\), define \(A_j = E_Q \cup X_j\). The sequence \(A_0, A_1, \ldots , A_N\) starts with just the exterior points, \(A_0 = E_Q\), and ends with all points, \(A_N = X\). Since \(Q \not \in {\mathrm{Del}}_{}{({X})}\), there is a minimal index \(j \le N\) such that Q and \(A_j\) do not permit a separating sphere. We use the corresponding vertex \(x_j\) to define \(\psi (Q) = x_j\). To compute \(\psi (Q)\), it thus suffices to iterate through the sequence \(A_0, A_1, \ldots , A_N\) and find the first index j such that there is no sphere separating Q from \(A_j\). This can be determined using the algorithm described in Sect. 3.1.
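Putting the pieces together, the following self-contained sketch computes \(\psi (Q)\) for a small planar example: it uses a brute-force separating-circle routine (enumeration over candidate boundary sets rather than the randomized algorithm of Sect. 3.1) and scans the sequence \(A_0, A_1, \ldots \) for the first index without a separating circle. The point configuration and the ordering are invented; for this configuration the edge \(\{a, b\}\) is non-Delaunay because c and d block every circumcircle.

```python
# Computing psi(Q) for a non-Delaunay simplex, planar toy example.
from itertools import combinations
from math import dist

def circumcircle(p, q, s):
    (ax, ay), (bx, by), (cx, cy) = p, q, s
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), dist((ux, uy), p)

def smallest_circle_on(T):
    T = list(T)
    if len(T) == 1:
        return T[0], 0.0
    if len(T) == 2:
        return ((T[0][0] + T[1][0]) / 2, (T[0][1] + T[1][1]) / 2), dist(T[0], T[1]) / 2
    return circumcircle(*T)

def separating_circle(Q, A):
    """Smallest circle with Q inside-or-on and A outside-or-on, or None."""
    eps, best = 1e-9, None
    pts = list(dict.fromkeys(list(Q) + list(A)))
    for k in (1, 2, 3):
        for T in combinations(pts, k):
            sc = smallest_circle_on(T)
            if sc is None:
                continue
            center, r = sc
            if (all(dist(center, q) <= r + eps for q in Q)
                    and all(dist(center, p) >= r - eps for p in A)
                    and (best is None or r < best[1])):
                best = (center, r)
    return best

def psi(Q, X, order):
    """First vertex x_j such that Q and A_j = E_Q + {x_1, ..., x_j} are not separable."""
    eps = 1e-9
    center, r = separating_circle(Q, [])              # smallest enclosing circle of Q
    E = [x for x in X if dist(center, x) >= r - eps]  # points on or outside it
    for j in range(1, len(order) + 1):
        if separating_circle(Q, E + list(order[:j])) is None:
            return order[j - 1]
    return None  # Q is separable from all of X, hence Delaunay

a, b, c, d = (-0.2, 0.0), (2.2, 0.0), (1.0, 1.0), (1.0, -1.0)
X, order = [a, b, c, d], [c, d, a, b]
vertex = psi((a, b), X, order)  # the non-Delaunay edge {a, b} is paired via vertex d
```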
3.3 Constructing the chain map
We now have the necessary prerequisites for constructing the chain map. Specifically, given a cycle in \({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}\), we are interested in computing its image, which is a cycle in \({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{s}{({X})}\), with \(r \le s \le \rho + \lambda r\). The construction of the chain map is an application of the discrete Morse theoretic formalism of a discrete gradient flow and the corresponding stabilization map, which we now review.
We follow the notation in Forman (1998), in which the discrete gradient flow is formulated as a map on chains. Let K be a simplicial complex and V a discrete gradient on K. In our situation, \(K = {{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}\), and V contains the pairs defined by the map \(\psi \) introduced in Sect. 3.2, which partition \({{ \check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} {\setminus } {{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}\). It is convenient to consider the discrete gradient as a chain map. Fixing an orientation on each simplex, this chain map is defined by linear extension of the map on the oriented simplices given by
\[ V(P) = \pm R \quad \text {if } (P, R) \in V, \qquad V(Q) = 0 \quad \text {otherwise}, \]
where the sign is chosen so that P appears with coefficient \(-1\) in the boundary of \(V(P)\). In terms of the map \(\psi \) defining the gradient V as discussed in Sect. 3.2, this definition can be rewritten as
\[ V(Q) = \begin{cases} \pm \,(Q \cup \{\psi (Q)\}) & \text {if } Q \notin {\mathrm{Del}}_{}{({X})} \text { and } \psi (Q) \notin Q, \\ 0 & \text {otherwise.} \end{cases} \]
This map sends every oriented p-simplex to 0 or to an oriented \((p+1)\)-simplex. The linear extension yields a homomorphism \(V :{\textsf {C}}(K) \rightarrow {\textsf {C}}(K)\), which maps every p-chain to a possibly trivial \((p+1)\)-chain. Recall that the boundary map, \(\partial :{\textsf {C}}(K) \rightarrow {\textsf {C}}(K)\), sends every p-chain to a possibly trivial \((p-1)\)-chain. We use both to introduce \(\Phi :{\textsf {C}}(K) \rightarrow {\textsf {C}}(K)\) defined by
\[ \Phi (c) = c + \partial V(c) + V(\partial c), \qquad (14) \]
in which c is a p-chain and its image, \(\Phi (c)\), is a possibly trivial p-chain. We call \(\Phi \) the discrete gradient flow induced by V. Importantly, it commutes with the boundary map: \(\partial \Phi = \Phi \partial \), which makes it a chain map; see Forman (1998, Theorem 6.4). Moreover, the iteration of \(\Phi \) stabilizes in the sense that \(\Phi ^M=\Phi ^N\) for all large enough M and N (Forman 1998, Theorem 7.2). We call this chain map the stabilization map of \(\Phi \) and denote it by \(\Phi ^\infty \).
In this paper, we apply the discrete flow exclusively to cycles. In other words, \(c \in {\textsf {C}}(K)\) satisfies \(\partial c = 0\), which simplifies the above formula (14) to \(\Phi (c) = c + \partial V(c)\).
In order to evaluate the stabilization map \(\Phi ^\infty \), we simply iterate \(\Phi \) until it stabilizes. The most demanding step in each iteration is the computation of smallest separating spheres, as discussed in Sect. 3.1.
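The maps \(\partial \), V, \(\Phi \), and the iteration to \(\Phi ^\infty \) can be sketched with chains stored as dictionaries from sorted vertex tuples to coefficients in \({{\mathbb {Z}}}_{1009}\), the field used in the experiments of Sect. 5. The pairing produced by \(\psi \) is assumed given as a precomputed map from a simplex to its signed cofacet; this is a hypothetical interface, not the paper's implementation:

```python
P = 1009  # coefficients in the field Z_p, as in Sect. 5

def boundary(chain):
    """Boundary of a chain; simplices are sorted tuples, coeffs in Z_p."""
    out = {}
    for simplex, c in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            out[face] = (out.get(face, 0) + (-1) ** i * c) % P
    return {s: c for s, c in out.items() if c}

def gradient(chain, pair):
    """V: send a paired p-simplex to its signed (p+1)-cofacet, others to 0.
    `pair` maps simplex -> (sign, cofacet), encoding Forman's convention."""
    out = {}
    for simplex, c in chain.items():
        if simplex in pair:
            sign, cofacet = pair[simplex]
            out[cofacet] = (out.get(cofacet, 0) + sign * c) % P
    return {s: c for s, c in out.items() if c}

def add(a, b):
    out = dict(a)
    for s, c in b.items():
        out[s] = (out.get(s, 0) + c) % P
    return {s: c for s, c in out.items() if c}

def flow(chain, pair):
    """One step of the discrete gradient flow: c + dV(c) + V(dc)."""
    return add(add(chain, boundary(gradient(chain, pair))),
               gradient(boundary(chain), pair))

def stabilize(chain, pair):
    """The stabilization map: iterate the flow until the chain is fixed."""
    prev, cur = None, chain
    while cur != prev:
        prev, cur = cur, flow(cur, pair)
    return cur
```

In Sect. 5 this iteration is applied edge by edge; the dominant cost hides in computing the gradient pairs, not in the iteration itself.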
4 Eigenspace Inference
We use the chain maps connecting the Delaunay–Čech complexes to construct a persistence module of eigenspaces from the sample \(g :X \rightarrow X\), and specify properties of the sampled dynamical system under which the eigenspaces of the underlying self-map can be inferred from this module. Because of this specific goal, we typically work with coefficients in a finite field of large order, in contrast to the typical setup in applied topology, where homology is often taken with coefficients in the field \({\mathbb {Z}}_2\).
4.1 Eigenspaces
Given a finite set \(X \subseteq {{\mathbb {M}}}\subseteq {{\mathbb {R}}}^n\), we recall that \({\mathcal {R}}_{\mathrm{DC}}:{\mathrm{Del}}_{}{({X})} \rightarrow {{\mathbb {R}}}\) is the radius function whose sublevel sets are the Delaunay–Čech complexes of X. Let \(r_1< r_2< \cdots < r_N\) be the values of \({\mathcal {R}}_{\mathrm{DC}}\), and write \({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} = {\mathcal {R}}_{\mathrm{DC}}^{-1} [0, r]\) for the Delaunay–Čech complex at radius r. We construct the persistence diagram of this filtration, denoted \({\textsf {Dgm}}_{}{({{\mathcal {R}}_{\mathrm{DC}}})}\), which is a multiset of intervals of the form \([r_i, r_j)\). For each such interval, there is a unique homology class born at \({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r_i}{({X})}\) that maps to 0 when it dies entering \({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r_j}{({X})}\), and the collection of such classes gives a basis for the homology group of every complex in the filtration.
To define the eigenspace, for each r we consider two maps between homology groups, \(\iota _r, \kappa _r :{\textsf {H}}({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}) \rightarrow {\textsf {H}}({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r+q}{({X})})\), in which \(\iota _r\) is induced by the inclusion \({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})} \subseteq {{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r+q}{({X})}\), \(\kappa _r\) is induced by the chain map composed of g followed by the stabilization map \(\Phi ^\infty \), and \(q \ge 0\) is chosen such that all generators of \({\textsf {H}}({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})})\) have images under the chain map \(\kappa _r\) in \({\textsf {H}}({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r+q}{({X})})\). It is convenient to represent \(\iota _r\) and \(\kappa _r\) by matrices that write the images of the generators of \({\textsf {H}}({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})})\) in terms of the generators of \({\textsf {H}}({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r+q}{({X})})\). Following Edelsbrunner et al. (2015), we consider the generalized eigenspace of the two maps for an eigenvalue t: \({{\textsf {E}}}^t (\kappa _r, \iota _r) = \ker \, (\kappa _r - t \, \iota _r)\).
In words, \({{\textsf {E}}}^t (\kappa _r, \iota _r)\) is generated by the cycles in \({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}\) whose images under \(\kappa _r\) are homologous to t times their images under \(\iota _r\). Note that this is a slight modification of the classical eigenvalue problem, in which the domain and the range are identical. This is not the case for \(\kappa _r\), so we compare it to \(\iota _r\) to get the eigenspace. The maps between the eigenspaces,
are obtained as restrictions of the maps \(h_{r,s} :{\textsf {H}}({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{r}{({X})}) \rightarrow {\textsf {H}}({{\mathrm{D}\check{\mathrm{C}}\mathrm{ech}}}_{s}{({X})})\) induced by inclusion. For fixed \(t \in {{\mathbb {F}}}\), we have a sequence of eigenspaces,
which together with the maps \(e_{{r_i},{r_j}}^t\) form a persistence module. Recall from Sect. 2.3 that this persistence module has an essentially unique interval decomposition. We can therefore compute the persistence diagram, which we refer to as the eigenspace diagram of g for eigenvalue t, denoted \({\textsf {Egm}}_{}{({g},{t})}\).
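In the chosen bases, \(\kappa _r\) and \(\iota _r\) are matrices over \({{\mathbb {Z}}}_p\), and the eigenspace for t is the kernel of \(\kappa _r - t \iota _r\). A self-contained sketch of this computation over \({{\mathbb {Z}}}_{1009}\), the field used in Sect. 5 (the function names are ours, and we compute the ordinary kernel rather than an iterated, generalized version):

```python
P = 1009  # a prime, so Z_p is a field and inverses exist

def nullspace_mod_p(M, p=P):
    """Basis of the kernel of M over Z_p via Gaussian elimination.
    M is a list of rows; returns a list of kernel basis vectors."""
    rows = [r[:] for r in M]
    ncols = len(rows[0]) if rows else 0
    pivots = {}   # column -> row holding its pivot
    r = 0
    for c in range(ncols):
        pr = next((i for i in range(r, len(rows)) if rows[i][c] % p), None)
        if pr is None:
            continue                      # no pivot: c is a free column
        rows[r], rows[pr] = rows[pr], rows[r]
        inv = pow(rows[r][c], p - 2, p)   # modular inverse (p prime)
        rows[r] = [(x * inv) % p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] % p:
                f = rows[i][c]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    basis = []
    for c in range(ncols):                # one basis vector per free column
        if c in pivots:
            continue
        v = [0] * ncols
        v[c] = 1
        for pc, prow in pivots.items():
            v[pc] = (-rows[prow][c]) % p
        basis.append(v)
    return basis

def eigenspace(K, I, t, p=P):
    """E^t(kappa, iota) = ker(K - t*I) over Z_p, with K and I the matrices
    of kappa_r and iota_r written in the chosen homology bases."""
    M = [[(kx - t * ix) % p for kx, ix in zip(krow, irow)]
         for krow, irow in zip(K, I)]
    return nullspace_mod_p(M, p)
```

Running this for every \(t \in {{\mathbb {F}}}\) and every pair of radii yields the data from which the eigenspace diagram is assembled.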
4.2 Maps between nerves
We will relate the eigenspace of f for t with the eigenspace module in two steps. The second step will use results about nerves of covers, which we now review.
Let \({{\mathbb {X}}}\) be a topological space and \({\mathcal {U}} = (U_i)_{i \in I}\) a cover of \({{\mathbb {X}}}\). \({\mathcal {U}}\) is closed or open if every \(U_i\) is closed or open, respectively, and \({\mathcal {U}}\) is good if the common intersection of any subset of cover elements is empty or contractible. Recall that the nerve of \({\mathcal {U}}\) is the collection of subsets with nonempty common intersection: \(N({\mathcal {U}}) = \{ {\mathcal {B}} \subseteq {\mathcal {U}} \mid \bigcap {\mathcal {B}} \ne \emptyset \}\).
Calling \({\mathcal {B}}\) a simplex, the nerve is an abstract simplicial complex. A partition of unity subordinate to \({\mathcal {U}}\) is a collection of continuous nonnegative functions \(\phi _i :{{\mathbb {X}}}\rightarrow {{\mathbb {R}}}_{\ge 0}\) such that \(\sum _{i \in I} \phi _i (x) = 1\) for every \(x \in {{\mathbb {X}}}\), and the support of \(\phi _i\) is contained in \(U_i\) for every \(i \in I\). Assuming a geometric realization of the nerve in which \(v_i\) denotes the vertex that represents the subset \(U_i \in {\mathcal {U}}\), we introduce the map \(r :{{\mathbb {X}}}\rightarrow N({\mathcal {U}})\) defined by \(r(x) = \sum _{i \in I} \phi _i (x) \, v_i\).
The Nerve Theorem as stated in Hatcher (2002) asserts that r is a homotopy equivalence provided \({\mathcal {U}}\) is a good cover that has a subordinate partition of unity. Such a partition exists for example if \({\mathcal {U}}\) is open and \({{\mathbb {X}}}\) is paracompact, which includes \({{\mathbb {X}}}\subseteq {{\mathbb {R}}}^n\). We expand on the Nerve Theorem, using the map r from (20) to relate a continuous map with a corresponding simplicial map between nerves.
Lemma 3
Let \({\mathcal {U}} = (U_i)_{i \in I}\) and \({\mathcal {V}} = (V_j)_{j \in J}\) be open covers of spaces \({{\mathbb {X}}}\) and \({{\mathbb {Y}}}\) with corresponding subordinate partitions of unity. Let \(f :{{\mathbb {X}}}\rightarrow {{\mathbb {Y}}}\) be continuous, let \(g :I \rightarrow J\) be such that \(f(U_i) \subseteq V_{g(i)}\) for every \(i \in I\), and write \(h :N({\mathcal {U}}) \rightarrow N({\mathcal {V}})\) for the linear simplicial map induced by g. Then the diagram
commutes up to homotopy, in which r and s are constructed as in (20).
Proof
Let \(x \in {{\mathbb {X}}}\), and let \(\tau (x) = {{\,\mathrm{conv}\,}}\{ w_j \mid j \in J,\, f(x) \in V_j \}\), where \(w_j\) denotes the vertex corresponding to the subset \(V_j \in {\mathcal {V}}\). Note that we have \(s (f(x)) \in \tau (x)\) by construction of s. Similarly, let \(\sigma (x) = {{\,\mathrm{conv}\,}}\{ v_i \mid i \in I,\, x \in U_i\}\) and note that \(r(x) \in \sigma (x)\) by construction of r. By assumption on the map g, \(x \in U_i\) implies \(f(x) \in V_{g(i)}\). Equivalently, if \(v_i\) is a vertex of \(\sigma (x)\), then \(h(v_i)=w_{g(i)}\) is a vertex of \(\tau (x)\). This implies that \(h (r(x)) \in \tau (x)\). Hence, \(s \circ f \simeq h \circ r\) by a straight-line homotopy between s(f(x)) and h(r(x)) within \(\tau (x)\). \(\square \)
We note that the commutativity up to homotopy of the diagram (21) does not require the covers of \({{\mathbb {X}}}\) and \({{\mathbb {Y}}}\) to be good. See also Chazal and Oudot (2008, Lemma 3.4) and Ferry et al. (2014, Proposition 4.2) for related statements about the functoriality of the nerve of a cover.
4.3 Inference
We now relate the eigenspace \({{\textsf {E}}}^t (f)\) of the self-map f with a generalized eigenspace obtained from the sample g. The value of this comparison derives from the assumption that f remains unknown, beyond g, so its eigenspace can be approached only indirectly, through the properties of g. We begin by recalling the assumptions:

\(f :{{\mathbb {M}}}\rightarrow {{\mathbb {M}}}\) is a continuous self-map with Lipschitz constant \(\lambda \);

\(g :X \rightarrow X\) is a finite sample of f with approximation constant \(\rho \);

the Hausdorff distance between X and \({{\mathbb {M}}}\) is \(\delta = d_H (X, {{\mathbb {M}}})\).
Note that this implies \(\Vert {g(x)} - {f(y)}\Vert \le \rho + \lambda \Vert {x} - {y}\Vert \) since the left-hand side is at most \(\Vert {g(x)} - {f(x)}\Vert + \Vert {f(x)} - {f(y)}\Vert \). Setting \(\eta = \rho + \lambda \delta \), we note that
for all \(x \in X\). Hence g defines a simplicial map from \({{ \check{\mathrm{C}}\mathrm{ech}}}_{\delta }{({X})}\) to \({{ \check{\mathrm{C}}\mathrm{ech}}}_{\eta }{({X})}\), and we get two maps in homology,
in which \(\gamma \) is induced by g and \(\jmath \) is induced by inclusion.
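To see why g is simplicial, the inequality above can be applied to a common point z of the \(\delta \)-balls of a simplex; a sketch of the standard argument, using that the Lipschitz extension \(f_\delta \) agrees with f on \(X \subseteq {{\mathbb {M}}}\):

```latex
\sigma \in \check{\mathrm{C}}\mathrm{ech}_{\delta}(X)
  \;\Longrightarrow\;
  \exists\, z \in \textstyle\bigcap_{x \in \sigma} B_\delta(x), \qquad
\|g(x) - f_\delta(z)\|
  \;\le\; \|g(x) - f(x)\| + \|f_\delta(x) - f_\delta(z)\|
  \;\le\; \rho + \lambda \delta \;=\; \eta ,
```

so the balls \(B_\eta (g(x))\), \(x \in \sigma \), share the point \(f_\delta (z)\), that is, \(g(\sigma ) \in {{ \check{\mathrm{C}}\mathrm{ech}}}_{\eta }{({X})}\).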
We now consider the generalized eigenspace of the two maps for an eigenvalue t:
noting that this is a special case of the setting considered in Sect. 4.1. We show that under appropriate conditions this generalized eigenspace is isomorphic to \({{\textsf {E}}}^t (f)\). We need some definitions to prepare the first step. Recall that \(B_\delta (x)\) is the closed ball with radius \(\delta \) centered at \(x \in {{\mathbb {R}}}^n\). For \({{\mathbb {M}}}\subseteq {{\mathbb {R}}}^n\), we call \({{\mathbb {M}}}_\delta = \bigcup _{x \in {{\mathbb {M}}}} B_\delta (x)\) the \(\delta \)neighborhood of \({{\mathbb {M}}}\). By the Kirszbraun Extension Property (Kirszbraun 1934; Wells and Williams 1975), \(f :{{\mathbb {M}}}\rightarrow {{\mathbb {M}}}\) extends to a map \(f_\delta :{{\mathbb {M}}}_\delta \rightarrow {{\mathbb {M}}}_\delta \) with the same Lipschitz constant. Similarly, f extends to a map \(f_\theta :{{\mathbb {M}}}_\theta \rightarrow {{\mathbb {M}}}_\theta \), again with the same Lipschitz constant, in which \(\theta = \max (2\delta , \eta )\), with \(\eta = \rho + \lambda \delta \) as before. The following diagram organizes the homology groups of the spaces relevant to our argument. Apart from \(f_*\), \({f_\delta }_*\), and \({f_\theta }_*\), any map in the diagram is induced by inclusion.
Consider \(\iota :{\textsf {H}}(X_\delta ) \rightarrow {\textsf {H}}(X_\theta )\), let \(\iota = b \circ a\) with \(a :{\textsf {H}}(X_\delta ) \rightarrow {\textsf {H}}({{\mathbb {M}}}_\delta )\) and \(b :{\textsf {H}}({{\mathbb {M}}}_\delta ) \rightarrow {\textsf {H}}(X_\theta )\), and define \(\phi = b \circ {f_\delta }_* \circ a :{\textsf {H}}(X_\delta ) \rightarrow {\textsf {H}}(X_{\theta })\). To compare \(\phi \) with \(\iota \), we consider their eigenspace,
We claim that this eigenspace is isomorphic to the one considered in (24).
Lemma 4
\({{\textsf {E}}}^t (\phi , \iota ) \cong {{\textsf {E}}}^t (\gamma ,\jmath )\).
Proof
By finiteness of X, there is \(\varepsilon > 0\) such that the inclusion of \(X_\delta \) in the interior of \(X_{\delta +\varepsilon }\) is a homotopy equivalence and \({{ \check{\mathrm{C}}\mathrm{ech}}}_{\delta }{({X})}\) is isomorphic to the nerve of the cover of \(X_{\delta +\varepsilon }\) by open balls of radius \(\delta +\varepsilon \). We can thus apply (20) and get two commutative diagrams via Lemma 3:
The diagrams imply \(\phi \cong \gamma \) and \(\iota \cong \jmath \), so the eigenspaces are also isomorphic, as claimed. \(\square \)
For the second step, we add two assumptions: that the map \({\textsf {H}}({{\mathbb {M}}}) \rightarrow {\textsf {H}}({{\mathbb {M}}}_\delta )\) induced by inclusion is an isomorphism, and that the induced map \({\textsf {H}}({{\mathbb {M}}}_\delta ) \rightarrow {\textsf {H}}({{\mathbb {M}}}_\theta )\) is a monomorphism. This implies that a is surjective and that b is injective; see (25). We claim that under the combined assumptions, the eigenspace of \(f :{{\mathbb {M}}}\rightarrow {{\mathbb {M}}}\) for \(t \in {{\mathbb {F}}}\) is isomorphic to the eigenspace considered in Lemma 4.
Lemma 5
\({{\textsf {E}}}^t (f) \cong {{\textsf {E}}}^t (\phi ,\iota )\).
Proof
We have \(\ker a \subseteq \ker \phi \) simply because \(\phi = b \circ {f_\delta }_* \circ a\), and we have \(\ker a = \ker \iota \) because \(\iota = b \circ a\) with b injective. This implies \(\ker \phi \cap \ker \iota = \ker a\). Hence,
Since b is injective, the kernel in (30) is isomorphic to \({{\textsf {E}}}^t ({f_\delta }_*)\). This concludes the proof since \({\textsf {H}}({{\mathbb {M}}}) \cong {\textsf {H}}({{\mathbb {M}}}_{\delta })\), by assumption, and therefore \({{\textsf {E}}}^t ({f_\delta }_*) \cong {{\textsf {E}}}^t(f)\). \(\square \)
Summarizing Lemmas 4 and 5, we obtain a sampling theorem for inferring the eigenspace of the given self-map from the sampled eigenspace module (18).
Theorem 1
Let \(f :{{\mathbb {M}}}\rightarrow {{\mathbb {M}}}\) be a self-map with Lipschitz constant \(\lambda \), and let \(g :X \rightarrow X\) be a finite sample of f with approximation constant \(\rho \) and Hausdorff distance \(\delta = d_H (X, {{\mathbb {M}}})\). Suppose that the inclusion \({{\mathbb {M}}}\hookrightarrow {{\mathbb {M}}}_\delta \) induces an isomorphism in homology, while the inclusion \({{\mathbb {M}}}_\delta \hookrightarrow {{\mathbb {M}}}_\theta \) for \(\theta = \max (2\delta ,\rho + \lambda \delta )\) induces a monomorphism. Then the dimension of the eigenspace \({{\textsf {E}}}^t (f)\) equals the dimension of the generalized eigenspace \({{\textsf {E}}}^t (\gamma ,\jmath )\).
5 Computational experiments
In this section, we analyze the performance of our algorithm experimentally and compare the results with those reported in Edelsbrunner et al. (2015). For ease of reference, we call the algorithm in Edelsbrunner et al. (2015) the Vietoris–Rips or VR-method and the algorithm in this paper the Delaunay–Čech or DČ-method. We begin with the introduction of the case studies — self-maps on a circle and a torus — and end with statistics collected during our experiments.
5.1 Expanding circle map
The first case study is an expanding map from the circle to itself. To add noise, we extend it to a self-map on the plane, \(f :{{\mathbb {C}}}\rightarrow {{\mathbb {C}}}\) defined by \(f(z) = z^2\). While traversing the circle once, the image under f travels around the circle twice. To generate the data, we randomly choose N points on the unit circle, and letting \(z_i\) be the ith such point, we pick a point \(x_i\) from an isotropic Gaussian distribution with center \(z_i\) and width \(\sigma = 0.1\). Note that while the noise from a Gaussian distribution is unbounded, for large enough N and sufficiently small \(\sigma \) (in dependence on N), a noisy random sample still has a high probability of satisfying the sampling conditions from Sect. 4. Write X for the set of points \(x_i\), and let the image of \(x_i\) be the point \(g(x_i) \in X\) that is closest to \(x_i^2\). As explained earlier, we construct the filtration of Delaunay–Čech complexes of X and compute eigenspace diagrams for all eigenvalues in a sufficiently large finite field to avoid aliasing effects. Our choice is \({{\mathbb {F}}}= {{\mathbb {Z}}}_{1009}\). Recall that the definition of the eigenspace module in Sect. 4.1 required a choice of \(q \ge 0\). For our computations, we always chose the smallest admissible value.
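The data generation step follows directly from this description. A minimal sketch (the function name is ours):

```python
import math
import random

def sample_circle_map(N, sigma=0.1, seed=0):
    """Noisy sample of the expanding circle map f(z) = z^2.

    Returns the points X (as complex numbers) and the sampled self-map g,
    where g(x) is the point of X nearest to x^2."""
    rng = random.Random(seed)
    X = []
    for _ in range(N):
        t = rng.uniform(0.0, 2.0 * math.pi)
        z = complex(math.cos(t), math.sin(t))   # random point on the circle
        noise = complex(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma))
        X.append(z + noise)                     # isotropic Gaussian noise
    g = {x: min(X, key=lambda y: abs(y - x * x)) for x in X}
    return X, g
```

The dictionary g is exactly the finite sample fed into the eigenspace computation.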
Drawing \(N=100\) points, we compare the DČ-method of this paper with the VR-method in Edelsbrunner et al. (2015). For eigenvalue \(t = 2\), both methods give a nonempty eigenspace diagram consisting of a single point. Figure 3 illustrates the results by showing the generating cycle computed with the DČ-method on the left and its image on the right.
5.2 Torus maps
The second case study consists of three self-maps on the torus, which we construct as a quotient of the Cartesian plane; see Fig. 4. For \(i = 1, 2, 3\), the map \(f_i :[0,1)^2 \rightarrow [0,1)^2\) sends a point \(x = (x_1, x_2)^T\) to \(f_i(x) = A_i x\), in which
The 1-dimensional homology group of the torus has only two generating cycles. Letting one wrap around the torus in the meridian direction and the other in the longitudinal direction, we see that \(f_1\) doubles both generators, \(f_2\) exchanges the generators, and \(f_3\) adds them but also preserves the first generator.
Correspondingly, \(f_1\) has two eigenvectors for the eigenvalue \(t = 2\), \(f_2\) has two distinct eigenvalues, \(t = 1\) and \(t = -1\), and \(f_3\) has only one eigenvector, for \(t = 1\). The input data for our algorithm, X, consists of 100 points uniformly chosen in \([0,1)^2\).
To define the image of a point \(x \in X\), we compute the point \(A_i x\) and let the image be the nearest point \(g_i(x) \in X\). The eigenspace diagrams of \(f_1, f_2, f_3\) for selected eigenvalues are shown in the last three panels of Fig. 5.
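The sampling step can be sketched as follows. The matrices below are our own choices, consistent with the described action on the generators (the paper's exact matrices may differ), and the nearest point is taken in the wraparound metric of the flat torus:

```python
import random

# Hypothetical matrices A_1, A_2, A_3, consistent with the described action:
A = {
    1: ((2, 0), (0, 2)),   # doubles both generators
    2: ((0, 1), (1, 0)),   # exchanges the two generators
    3: ((1, 1), (0, 1)),   # adds the generators, preserves the first
}

def torus_dist(x, y):
    """Distance on the flat torus [0,1)^2, wrapping around each coordinate."""
    d = [min(abs(a - b), 1.0 - abs(a - b)) for a, b in zip(x, y)]
    return (d[0] ** 2 + d[1] ** 2) ** 0.5

def sample_torus_map(i, N, seed=0):
    """Sample the torus map f_i(x) = A_i x (mod 1) on N uniform points."""
    rng = random.Random(seed)
    X = [(rng.random(), rng.random()) for _ in range(N)]
    M = A[i]
    def fi(x):
        return ((M[0][0] * x[0] + M[0][1] * x[1]) % 1.0,
                (M[1][0] * x[0] + M[1][1] * x[1]) % 1.0)
    g = {x: min(X, key=lambda y: torus_dist(y, fi(x))) for x in X}
    return X, g
```

As in the circle experiment, the dictionary g is the finite sample handed to the eigenspace computation.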
5.3 Accuracy
To study how accurate the two methods are, we look at false positives and false negatives, and the persistence of the recurrent features of the underlying smooth maps.
5.3.1 Circle map
Repeating the circle map experiment with \(N = 100\) points ten times, we show the twenty superimposed eigenspace diagrams (ten for each of the two methods) in the upper left panel of Fig. 5. Points of the VR-method are marked blue, while points of the DČ-method are marked red. The eigenvector for \(t = 2\) is detected each time. However, the DČ-method detects the recurrence consistently earlier than the VR-method, with smaller birth and death values but also with smaller average persistence. The shift of the birth values is easy to rationalize: a cycle arises for the same radius in both filtrations, but remains without image in the VR-method until the radius is large enough to capture the image of every edge in the cycle. The shift of the death values is more difficult to explain and perhaps related to the fact that the DČ-method maps a cycle in one complex, \(K_r\), to a later complex, \(K_s\) with \(r \le s \le \rho + \lambda r\), in the filtration of Delaunay–Čech complexes. Monitoring r and s in 100 runs for a range of point counts, we show the average Lipschitz constant and the average ratio \(\tfrac{s}{r}\) in Table 1.
There are no false negatives in this experiment, but we see a small number of false positives reported by the VR-method (the points in the upper right corner of the first panel in Fig. 5, all for eigenvalues \(t \ne 2\)). This indicates that the VR-method is more susceptible to noise than the DČ-method. To support our claim, we compute the eigenspace diagrams using the DČ-method with increased noise, and indeed find no false positives; see Fig. 6.
5.3.2 Torus maps
The situation is similar for the three torus maps, whose eigenspace diagrams are shown in the next three panels of Fig. 5. The eigenvectors of \(f_1, f_2, f_3\) are represented by points on the upper edges of the panels, indicating that their corresponding homology classes last until the last complex in the filtration. This is different in the VR-method because the Vietoris–Rips complex for large radii is less predictable than the Delaunay–Čech complex. In contrast to the circle map, we observe false positives also in the DČ-method. They show up as points with small to moderate persistence in the three diagrams. We also have false positives in the VR-method, but the results are difficult to compare because for complexity reasons we could not run the algorithm beyond \(N = 200\) points. As another indication of the improved accuracy of the DČ-method, we note that the eigenspace diagrams we observe in our experiments do not suffer the problem of abundant eigenvalues discussed in Edelsbrunner et al. (2015, Section 6.4).
5.4 Runtime analysis
We analyze the running time of the DČ-method for sets of N points, with N varying from 100 to 10000. For the persistent homology computation, we use coefficients in the field \({{\mathbb {Z}}}_{1009}\). The time is measured on a notebook-class computer with a 2.6 GHz Intel Core i7-6600U processor and 16 GB RAM.
5.4.1 Overall running time
We begin with a brief comparison of the two methods, first of the overall running time for computing eigenspace diagrams; see Table 2. As mentioned earlier, the VR-method uses Vietoris–Rips complexes, which grow fast with the number of points and the radius. We could therefore run this method for \(N = 100\) and 150 points only, terminating the run for \(N = 200\) points after half an hour.
To get a better feeling for the running time of the DČ-method, we plot the results in Fig. 7, adding curves to indicate the asymptotic experimental performance. The outcome suggests that the computational complexity of the DČ-method is between quadratic and cubic in the number of points. We note that more than half of the time is used to compute smallest separating spheres.
5.4.2 Flowing an edge
To gain further insight into the time needed to flow a cycle from the Čech to the Delaunay–Čech complex, we present statistics for collapsing random edges in a variety of settings. The edges are constructed from 100, 1000, and 10000 points chosen along the unit circle with added Gaussian noise, and from 100, 1000, and 10000 points chosen uniformly in \([0,1)^2\).
For each data set, we pick two points at random and monitor the effort it takes to flow this edge from the Čech complex to the Delaunay–Čech complex. Specifically, we iterate \(\Phi \) on each edge individually until the result stabilizes. The statistics in Table 3 show how many times \(\Phi \) is iterated and how many points are tested inside each call to compute the discrete gradient. The statistics for the circle and the square are similar, with consistently larger numbers when we pick the edges in the square.
5.4.3 Smallest separating spheres
Our analysis shows that the DČ-method spends most of the time computing smallest separating spheres. For this reason, we compare the straightforward implementation (function Separate) with the heuristic improvement (function MoveToFront). We generate the points in \([0,1)^2\) as described above. For both functions, we randomly pick 10000 edges from the Čech complex and another 10000 edges from the Delaunay–Čech complex, and we test for each edge whether or not there exists a sphere that separates the edge from the rest of the points. Figure 8 shows that the running time of both functions depends linearly on the number of points, which is to be expected. The best-fit linear functions suggest that the move-to-front heuristic is faster than the more naive extension of the miniball algorithm to finding smallest separating spheres. The difference is more pronounced for edges of the Čech complex (left panel), for which we expect more points inside the circumscribed spheres and an early contradiction to the existence of a separating sphere. In contrast, the difference in performance is negligible for edges sampled from the Delaunay–Čech complex, for which separating spheres exist by construction.
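The MoveToFront function itself is not reproduced here; the general idea behind such heuristics, familiar from Welzl-style miniball computations, is to reorder the input so that points found to violate the current candidate are tested early in later scans. A generic, hypothetical sketch, with `violates` standing in for the separating-sphere test:

```python
def move_to_front_filter(points, violates):
    """Scan the points once; whenever one violates the current candidate
    (per the black-box `violates` test), move it to the front of the list
    so that subsequent scans encounter it early.
    Returns the reordered list and the violators in the order found."""
    pts = list(points)
    found = []
    i = 0
    while i < len(pts):
        p = pts[i]
        if violates(p):
            pts.insert(0, pts.pop(i))  # move the violator to the front
            found.append(p)
        i += 1
    return pts, found
```

The payoff is precisely the asymmetry observed in Fig. 8: when violators are common, as for Čech edges, the reordering exposes a contradiction after very few tests.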
6 Discussion
The main contributions of this paper are the construction of a filtration-preserving chain map from a Čech filtration to the corresponding Čech–Delaunay filtration, the construction of a geometrically meaningful chain self-map on a Delaunay triangulation from a self-map on a point set, and its application to computing eigenspaces of sampled dynamical systems. Following the proof of collapsibility in Bauer and Edelsbrunner (2017), we get an efficient algorithm for the chain map through implicit treatment of the Čech complex. The reported research raises a number of questions:

Can we give theoretical upper bounds on the number of individual collapses needed to flow a cycle to its image under the stabilization map of the Čech–Delaunay gradient flow?

Can the computation of smallest separating spheres be further improved by customizing the procedure to small sets inside the sphere, or by taking advantage of the coherence between successive calls?
We expect that the fast chain map algorithm has applications beyond this paper, including the transport of structural information between meshes and the visualization of topological information shared by related high-dimensional datasets.
References
Bauer, U., Edelsbrunner, H.: The Morse theory of Čech and Delaunay complexes. Trans. Am. Math. Soc. 369(5), 3741–3762 (2017)
Chazal, F., Oudot, S.Y.: Towards persistencebased reconstruction in Euclidean spaces. In: Computational Geometry (SCG’08), pp. 232–241. ACM, New York (2008)
Edelsbrunner, H., Harer, J.: Computational Topology: An Introduction. American Mathematical Society, Providence (2010)
Edelsbrunner, H., Mücke, E.P.: Simulation of simplicity: a technique to cope with degenerate cases in geometric algorithms. ACM Trans. Graph. 9(1), 66–104 (1990)
Edelsbrunner, H., Letscher, D., Zomorodian, A.: Topological persistence and simplification. Discrete Comput. Geom. 28(4), 511–533 (2002)
Edelsbrunner, H., Jabłoński, G., Mrozek, M.: The persistent homology of a selfmap. Found. Comput. Math. 15(5), 1213–1244 (2015)
Ferry, S., Mischaikow, K., Nanda, V.: Reconstructing functions from random samples. J. Comput. Dyn. 1(2), 233–248 (2014)
Forman, R.: Morse theory for cell complexes. Adv. Math. 134(1), 90–145 (1998)
Gärtner, B.: Fast and robust smallest enclosing balls. In: Proceedings of the 7th Annual European Symposium on Algorithms, ESA ’99, pp. 325–338. Springer, Berlin (1999)
Gromov, M.: Monotonicity of the volume of intersection of balls. In: Geometrical Aspects of Functional Analysis (1985/1986), Volume 1267 of Lecture Notes in Mathematics, pp. 1–4. Springer, Berlin (1987)
Hatcher, A.: Algebraic Topology. Cambridge University Press, Cambridge (2002)
Kaczynski, T., Mischaikow, K., Mrozek, M.: Computational Homology. Applied Mathematical Sciences, vol. 157. Springer, New York (2004)
Kirszbraun, M.: Über die zusammenziehende und Lipschitzsche Transformationen. Fundam. Math. 22(1), 77–108 (1934)
Wells, J.H., Williams, L.R.: Embeddings and Extensions in Analysis. Springer, New York (1975). Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 84
Welzl, E.: Smallest enclosing disks (balls and ellipsoids). In: New Results and New Trends in Computer Science (Graz, 1991), Volume 555 of Lecture Notes in Computer Science, pp. 359–370. Springer, Berlin (1991)
Zomorodian, A., Carlsson, G.: Computing persistent homology. Discrete Comput. Geom. 33(2), 249–274 (2005)
Funding
Open Access funding provided by Projekt DEAL.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
This research has been supported by the DFG Collaborative Research Center SFB/TRR 109 “Discretization in Geometry and Dynamics”, by Polish MNiSzW Grant No. 2621/7.PR/12/2013/2, by the Polish National Science Center under Maestro Grant No. 2014/14/A/ST1/00453 and Grant No. DEC2013/09/N/ST6/02995.
Bauer, U., Edelsbrunner, H., Jabłoński, G. et al.: Čech–Delaunay gradient flow and homology inference for self-maps. J. Appl. Comput. Topol. 4, 455–480 (2020). https://doi.org/10.1007/s41468-020-00058-8