The Persistent Homology of a Self-Map
Abstract
Considering a continuous self-map and the induced endomorphism on homology, we study the eigenvalues and eigenspaces of the latter. Taking a filtration of representations, we define the persistence of the eigenspaces, effectively introducing a hierarchical organization of the map. The algorithm that computes this information for a finite sample is proved to be stable, and to give the correct answer for a sufficiently dense sample. Results computed with an implementation of the algorithm provide evidence of its practical utility.
Keywords
Discrete dynamical systems · Computational topology · Persistent homology · Category theory · Algebraic algorithms · Convergence · Stability · Computational experiments
Mathematics Subject Classification
55N35 · 37B99 · 18A99 · 55U99
1 Introduction
In recent years, the theory of persistent homology [11, 23] has become a useful tool in several areas, including shape analysis [12], scientific visualization [13], and high-dimensional data analysis [3], but also in mathematics itself [21]. The specific aim of this paper is an approach to the persistence of endomorphisms induced in homology by continuous self-maps. The long-term goal is to embed persistence in the computational analysis of dynamical systems, as pursued in [14] and the related literature.
In the case of finitely generated homology with field coefficients, the homomorphism induced by a continuous map between topological spaces is a linear map between finite-dimensional vector spaces. Such a map \(\varphi : Y \rightarrow X\) is characterized up to conjugacy by its rank. This is in contrast to a linear self-map, \(\phi : X \rightarrow X\), which in the case of an algebraically closed field^{1} is characterized up to conjugacy by its Jordan form. A weaker piece of information is the eigenvectors, which in our setting capture the homology classes that are invariant under the self-map. Therefore, it is natural to study the persistence of eigenvalues and eigenspaces as a first step toward the full understanding of the persistence of the map. We define it in terms of the persistence of vector spaces, a concept that has been around for some time. Specifically, it has been presented as the general idea of zigzag persistence [2], which is based on the theory of quivers [9]. Since we need an algorithm that provides not only the persistence intervals but also a special basis, we give an independent presentation of the concept. We believe this presentation is elementary and in the spirit of the theory of persistent homology. We also note that its generalization to zigzag persistence is straightforward. Beyond describing the algorithm for the persistence of eigenvalues and eigenspaces, we analyze its performance, proving that the persistence diagram it produces is stable under perturbations of the input and that the algorithm converges to the homology of the studied map, reaching the correct ranks for a sufficiently fine sample. In addition, we exhibit results obtained with a software implementation, which suggest that the persistent homology of eigenspaces picks up the important dynamics already from a relatively small sample.
We would hope that \(f\) extends to simplicial maps on these complexes, but this is unfortunately not the case in general. For instance, \(f\) maps the endpoints of the edge \(x_1 x_7\) in \(K_3\) to the points \(x_2\) and \(x_6\), but they are not endpoints of a common edge in \(K_3\). The reason for this situation is the expanding character of \(f\). To still make sense of the map, we construct the maximal partial simplicial maps, \(\kappa _i: K_i {\nrightarrow }K_i\) consistent with \(f\). Figure 1 shows the domains of these maps in the bottom row, and for \(i = 3, 4\), we can see how \(\kappa _i\) wraps the convex octagon twice around the hole in \(K_i\) shown right above. This reflects the fundamental feature of \(f\), namely that its image wraps around the circle twice. To see this more formally, we compare the homology classes in the domains with their images. For \(i = 1, 3, 5\), the inclusion of the domain of \(\kappa _i\) in \(K_i\) induces an isomorphism in homology. The comparison therefore reduces to the study of eigenvectors of an endomorphism. The lack of isomorphism for \(i = 2, 4\) may be overcome by the study of eigenvectors of pairs of linear maps. In this particular case, we are able to conclude that the eigenspace for eigenvalue \(t = 2\) appears in \(K_3\) and lasts to \(K_4\), thus reconstructing the essential character of \(f\) from a very small sample.
To summarize, there are differences between the partial simplicial maps and the underlying continuous map; see in particular the reorganization that takes place at \(i = 2\) and \(i = 4\). The hope to recover the properties of the latter from the former is based on the ability of persistence to provide a measure that transcends fluctuations and identifies what stays the same when things change.
Outline Section 2 introduces the categories of partial functions, matchings, and linear maps. Section 3 discusses towers within these categories and introduces the concept of persistence. Section 4 describes the algorithm that computes the persistent homology of an endomorphism from a hierarchical representation of the underlying selfmap. Section 5 proves that the algorithm converges and produces stable persistence diagrams. Section 6 presents results obtained with our implementation of the algorithm. Section 7 concludes the paper.
2 Categories
We find the language of category theory convenient to talk about persistent homology; see MacLane [16] for a standard reference. Most importantly, we introduce the category of finite sets and matchings, which will lead to an elementary exposition of persistence.
2.1 Partial Functions
We recall that a category consists of objects and (directed) arrows between objects. Importantly, there is the identity arrow from every object to itself, and arrows compose associatively. An arrow, \(\theta : K \rightarrow L\), is invertible if it has an inverse, \(\theta^{-1} : L \rightarrow K\), such that \(\theta^{-1} \theta\) and \(\theta \theta^{-1}\) are the identity arrows for \(K\) and \(L\). If there is an invertible arrow from \(K\) to \(L\), then the two objects are isomorphic. Every category in this paper contains a zero object, which is characterized by having exactly one arrow to and one arrow from every other object. It is unique up to isomorphisms. Two arrows \(\kappa : K \rightarrow K'\) and \(\lambda : L \rightarrow L'\) are conjugate if there are invertible arrows \(\theta : K \rightarrow L\) and \(\theta ' : K' \rightarrow L'\) that commute with \(\kappa\) and \(\lambda\); that is, \(\theta ' \kappa = \lambda \theta\). A functor is an arrow between categories, assigning to each object and each arrow of the first category an object and an arrow of the second category in such a way that identity arrows are mapped to identity arrows and the functor commutes with the composition of arrows.
We use a category whose arrows generalize functions between sets as the basis of other categories. Specifically, a partial function is a relation \(\xi \subseteq X \times Y\) such that every \(x \in X\) is either not paired or paired with exactly one element in \(Y\) [15]. We denote it by \(\xi : X \nrightarrow Y\), observing that there is a largest subset \(X' \subseteq X\) such that the restriction \(\xi : X' \rightarrow Y\) is a function. We call \(\mathsf{dom\,}{\xi } := X'\) the domain and \(\mathsf{ker\,}{\xi } := X \setminus X'\) the kernel of \(\xi\). For each \(x \in X'\), we write \(\xi (x)\) for the unique element \(y \in Y\) paired with \(x\), as usual. Similarly, we write \(\xi (A)\) for the set of elements \(\xi (x)\) with \(x \in A \cap X'\). The image of \(\xi\) is of course the entire reachable set, \(\mathsf{im\,}{\xi } := \xi (X)\). If \(\xi : X \nrightarrow Y\) and \(\eta : Y \nrightarrow Z\) are partial functions, then their composition is the partial function \(\eta \xi : X \nrightarrow Z\) consisting of all pairs \((x,z) \in X \times Z\) for which there exists \(y \in Y\) such that \(y = \xi (x)\) and \(z = \eta (y)\). Thus, we have a category of sets and partial functions, which we denote as \({\mathbf{Part}}\). The zero object in this category is the empty set, which is connected to all other sets by empty partial functions. It will be convenient to limit the objects in this category to finite sets.
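For the computationally minded reader, partial functions and their composition are straightforward to model in code. The following minimal Python sketch (ours, not part of the paper) represents a partial function as a dictionary, so the kernel is exactly the set of unpaired elements:

```python
# A partial function xi : X -/-> Y modeled as a dict: keys form the domain,
# unpaired elements of X form the kernel, and dict values form the image.

def dom(xi):
    """Domain: the elements of X paired with exactly one element of Y."""
    return set(xi)

def ker(xi, X):
    """Kernel: the unpaired elements of X."""
    return X - set(xi)

def im(xi):
    """Image: the reachable set xi(X)."""
    return set(xi.values())

def compose(eta, xi):
    """eta after xi: all pairs (x, z) with z = eta(xi(x)) defined."""
    return {x: eta[y] for x, y in xi.items() if y in eta}

# Example: xi : {1,2,3} -/-> {a,b} with 3 unpaired; eta : {a,b} -/-> {u}.
xi = {1: "a", 2: "b"}
eta = {"a": "u"}           # "b" lies in the kernel of eta
print(compose(eta, xi))    # {1: 'u'}
```

Note that the composition is again a partial function: an element \(x\) lies in its domain only if \(\xi(x)\) lies in the domain of \(\eta\).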
2.2 Matchings
2.3 Linear Maps
Assuming a fixed field, we now consider the category \({\mathbf{Vect}}\), whose objects are the finitedimensional vector spaces over this field, and whose arrows are the linear maps between these vector spaces. The dimension of a vector space, \(U\), is of course the cardinality of its basis, which we denote as \(\mathrm{dim\,}{U}\). Letting \(\upsilon : U \rightarrow V\) be a linear map, we write \(\mathsf{ker\,}{\upsilon } := \upsilon ^{1} (0)\) for the kernel, \(\mathsf{im\,}{\upsilon } := \upsilon (U)\) for the image, and \(\mathsf{rank\,}{\upsilon } := \mathrm{dim\,}{\upsilon }(U)\) for the rank of \(\upsilon \).
We have a functor \(\mathrm{Lin } : {\mathbf{Mch}}\rightarrow {\mathbf{Vect}}\) which sends a finite set \(A\) to the vector space spanned by \(A\), and a matching \(\alpha : A \nrightarrow B\) to the linear map defined on a basis vector \(a \in A\) as \(\alpha (a)\) if \(a\) is in the domain of \(\alpha\) and as zero otherwise. It follows from the discussion of the preceding paragraph that for every linear map \(\upsilon\) in \({\mathbf{Vect}}\), there exists a matching \(\alpha\) in \({\mathbf{Mch}}\) such that \(\upsilon = \mathrm{Lin }(\alpha )\), and any two matchings with this property are conjugate.
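The action of \(\mathrm{Lin}\) on arrows is easy to make concrete: with respect to the ordered bases \(A\) and \(B\), the matrix of \(\mathrm{Lin}(\alpha)\) has a \(1\) in the row of \(\alpha(a)\) and the column of \(a\) for every matched \(a\), and zeros elsewhere. A small Python sketch (function and variable names are ours, for illustration):

```python
# Lin on arrows: the matching alpha (a dict, i.e. a partial injection)
# becomes the |B| x |A| zero-one matrix sending a matched basis vector a
# to alpha(a) and every unmatched basis vector to zero.

def lin(alpha, A, B):
    """Matrix of Lin(alpha) with respect to the ordered bases A and B."""
    col = {a: j for j, a in enumerate(A)}
    row = {b: i for i, b in enumerate(B)}
    M = [[0] * len(A) for _ in B]
    for a, b in alpha.items():
        M[row[b]][col[a]] = 1
    return M

A, B = ["a1", "a2", "a3"], ["b1", "b2"]
alpha = {"a1": "b2", "a3": "b1"}   # a2 is unmatched, hence sent to zero
M = lin(alpha, A, B)
# The rank of M equals the number of matched pairs.
```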
2.4 Eigenvalues and Eigenspaces
2.5 Eigenspace Functor for Pairs
3 Towers and Persistence
In this section, we study what can persist along paths in a category, similarly to the very recent, independently obtained results [1]. We show that parallel paths in \({\mathbf{Vect}}\) and \({\mathbf{Mch}}\) will naturally lead to the concept of persistent homology, now within a more general framework than in the traditional literature. We note that the concept of tower introduced in the sequel corresponds to persistence module in [23]. We prefer to speak about towers to avoid phrases like ‘persistent module of persistence modules’.
3.1 Paths and Categories of Paths in a Category
Suppose \({{\mathcal Y}}= (Y_i, \eta _i)\) is a second tower in the same category, and there is a vector of arrows \(\varphi = (\varphi _i)\), with \(\varphi _i: X_i \rightarrow Y_i\), such that \(\eta _i \varphi _i = \varphi _{i+1} \xi _i\) for all \(i\). Referring to this vector of arrows as a morphism, we denote this by \(\varphi : {{\mathcal X}}\rightarrow {{\mathcal Y}}\). To verify that the towers and morphisms form a new category, we note that the identity morphism is the vector of identity arrows, and that morphisms compose naturally. The zero object is the tower consisting solely of zero objects. Finally, an isomorphism is an invertible morphism; it consists of invertible arrows between objects that commute with the arrows of the towers. We remark that arrows and morphisms are alternative terms for the same notion in category theory. We find it convenient to use both so we can emphasize different levels of the construction.
3.2 Persistence in a Tower of Matchings
This relationship motivates us to introduce the persistence diagram of \({{\mathcal A}}\) as the multiset of persistence intervals, which we denote as \(\mathrm{Dgm}({{\mathcal A}})\). Note that \(\#_{[i,j]}\) is the multiplicity of \([i,j]\). The number of intervals in the persistence diagram, counted with multiplicities, is therefore \(\# \mathrm{Dgm}({{\mathcal A}}) = \sum _{i \le j} \#_{[i,j]}\). It is important to observe that the persistence diagram characterizes the tower up to isomorphisms.
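The relation behind (11) is the standard inclusion-exclusion between the rank function and the interval multiplicities: writing \(r(i,j)\) for the rank of the arrow from index \(i\) to index \(j\), the multiplicity of \([i,j]\) is \(r(i,j) - r(i-1,j) - r(i,j+1) + r(i-1,j+1)\). A small Python sketch of this bookkeeping (illustrative, with our own names):

```python
# For a tower of matchings, r(i, j) counts the persistence intervals that
# contain [i, j]; inclusion-exclusion then isolates the multiplicity of
# the exact interval [i, j].

def rank_from_intervals(intervals, i, j):
    """r(i, j): number of intervals [b, d] with b <= i and j <= d."""
    if i < 1 or j < 1:
        return 0
    return sum(1 for b, d in intervals if b <= i and j <= d)

def multiplicity(intervals, i, j):
    """Multiplicity of [i, j] via inclusion-exclusion on the rank function."""
    r = lambda a, b: rank_from_intervals(intervals, a, b)
    return r(i, j) - r(i - 1, j) - r(i, j + 1) + r(i - 1, j + 1)

dgm = [(1, 3), (1, 3), (2, 2)]   # a multiset of persistence intervals
print(multiplicity(dgm, 1, 3))   # 2
print(multiplicity(dgm, 2, 2))   # 1
```

This also illustrates the Equivalence Theorem below: the rank function and the persistence diagram determine one another.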
Equivalence Theorem A
Let \({{\mathcal A}}\) and \({{\mathcal B}}\) be two towers in \({\mathbf{Mch}}\). Then the following statements are equivalent:
 (i)
\({{\mathcal A}}\) and \({{\mathcal B}}\) are isomorphic;
 (ii)
the rank functions of \({{\mathcal A}}\) and \({{\mathcal B}}\) coincide;
 (iii)
the persistence diagrams of \({{\mathcal A}}\) and \({{\mathcal B}}\) are the same.
Proof
(i) \(\Rightarrow \) (ii). Since \({{\mathcal A}}\) and \({{\mathcal B}}\) are isomorphic, we have invertible arrows \(\theta _i: A_i \rightarrow B_i\) that commute with the arrows in \({{\mathcal A}}\) and \({{\mathcal B}}\). It follows that \(\mathsf{rank\,}{\alpha _i^j} = \mathsf{rank\,}{\beta _i^j}\), for all \(i \le j\).
(ii) \(\Rightarrow \) (iii). The rank function determines the multiplicities of the intervals in the persistence diagram by (11).
(iii) \(\Rightarrow \) (i). To construct the required isomorphism, we first match up the intervals, and using the matching, we match up the underlying points. \(\square \)
3.3 Persistence in a Tower of Linear Maps
We return to assuming a fixed field, and consider a tower \({{\mathcal U}}= (U_i, \upsilon _i)\) in the category of vector spaces over this field.^{4} For each \(i\), let \(A_i\) be a basis of \(U_i\). Restricting \(\upsilon _i\) to \(A_i\) and \(A_{i+1}\), we get a partial function \(\alpha _i: A_i {\nrightarrow }A_{i+1}\), again for every \(i\).
Definition
We call the tower of partial functions \({{\mathcal A}}= (A_i, \alpha _i)\) a basis of the tower \({{\mathcal U}}\) if \(\alpha _i\) is a matching and \(\mathsf{rank\,}{\alpha _i} = \mathsf{rank\,}{\upsilon _i}\), for every \(i\).
Equivalence Theorem B
Let \({{\mathcal U}}\) and \({{\mathcal V}}\) be two towers in \({\mathbf{Vect}}\). Then the following statements are equivalent:
 (i)
\({{\mathcal U}}\) and \({{\mathcal V}}\) are isomorphic;
 (ii)
the rank functions of \({{\mathcal U}}\) and \({{\mathcal V}}\) coincide;
 (iii)
the persistence diagrams of \({{\mathcal U}}\) and \({{\mathcal V}}\) are the same.
Proof
Implications (i) \(\Rightarrow \) (ii) and (ii) \(\Rightarrow \) (iii) are trivial. To see that (iii) \(\Rightarrow \) (i) select bases \({{\mathcal A}}\) and \({{\mathcal B}}\), respectively, in \({{\mathcal U}}\) and \({{\mathcal V}}\) by the Basis Lemma below. Then \(\mathrm{Dgm}({{\mathcal A}}) = \mathrm{Dgm}({{\mathcal B}})\). Thus, Equivalence Theorem A implies that \({{\mathcal A}}\) and \({{\mathcal B}}\) are isomorphic, and the bijections between the bases \({{\mathcal A}}\) and \({{\mathcal B}}\) define the requested isomorphisms between \({{\mathcal U}}\) and \({{\mathcal V}}\). \(\square \)
3.4 Tower Bases
We now prove the technical result assumed above to get Equivalence Theorem B. It is not new and can be found in different words and with a less elementary proof in [2, 23].
Basis Lemma
Every tower in \({\mathbf{Vect}}\) has a basis.
Proof
In Phase 1, we use column operations to turn \(M_i\) into column echelon form, as sketched in Fig. 4 in the middle. We get a strictly descending staircase of nonzero entries, with only zeros above the staircase, while the entries below it may be zero or nonzero. Here, we call the collection of topmost nonzero entries in the columns the staircase, and we multiply with inverses so that all entries in the staircase are equal to \(1\). By definition, each column contains at most one element of the staircase, and by construction, each row contains at most one element of the staircase. The reduction to echelon form is done from right to left in the sequence of matrices; that is, in the order of decreasing index \(i\). Indeed, every column operation in \(M_i\) changes the basis of \(U_i\), so we need to follow up with the corresponding row operation in \(M_{i-1}\). Since \(M_{i-1}\) has not yet been transformed to echelon form, there is nothing else to do.
In Phase 2, we use row operations to turn the column echelon form into the normal form, as sketched in Fig. 4 on the right. Equivalently, we preserve the staircase and turn all nonzero entries below it into zeros. To do this for a single column, we add multiples of the row of its staircase element to lower rows. Processing the columns from left to right, this requires no backtracking. The reduction to normal form is done from left to right in the sequence of matrices; that is, in the order of increasing index \(i\). Each row operation in \(M_i\) changes the basis of \(U_{i+1}\), so we need to follow up with the corresponding column operation in \(M_{i+1}\). This operation is a right-to-left column addition, which preserves the echelon form. Since \(M_{i+1}\) has not yet been transformed to normal form, there is nothing else to do.
In summary, we have an algorithm that turns each matrix \(M_i\) into a matrix in which every row and every column contains at most one nonzero element, which is \(1\). This is the matrix of a matching. Since we use only row and column operations, the ranks of the matrices are the same as at the beginning. Each column operation in \(M_i\) has a corresponding operation on the basis of \(U_i\). Similarly, each row operation in \(M_i\) has a corresponding operation on the basis of \(U_{i+1}\). By performing these operations on the bases of the vector spaces, we arrive at a basis of the tower. \(\square\)
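To make the two phases concrete, here is a minimal Python sketch over GF(2) for a single matrix (our simplification; the actual algorithm interleaves the phases across the whole sequence of matrices as described above):

```python
# Phase 1: column additions bring the matrix to a form in which the topmost
# nonzero entries (the staircase) lie in distinct rows. Phase 2: row
# additions, processed in order of increasing staircase row, clear the
# entries below the staircase. The result has at most one nonzero entry
# per row and per column, i.e. it is the matrix of a matching.

def to_matching_form(M):
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0]) if M else 0

    def top(c):
        """Row index of the topmost nonzero entry of column c, or None."""
        for r in range(rows):
            if M[r][c]:
                return r
        return None

    # Phase 1: make the topmost nonzero entries lie in distinct rows.
    used = {}                      # staircase row -> its column
    for c in range(cols):
        r = top(c)
        while r is not None and r in used:
            for k in range(rows):  # add the earlier column (mod 2)
                M[k][c] ^= M[k][used[r]]
            r = top(c)
        if r is not None:
            used[r] = c

    # Phase 2: clear entries below each staircase element by row additions,
    # in order of increasing staircase row (no backtracking needed).
    for r, c in sorted(used.items()):
        for k in range(r + 1, rows):
            if M[k][c]:
                for j in range(cols):
                    M[k][j] ^= M[r][j]
    return M

M = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]
N = to_matching_form(M)
# N has at most one nonzero entry per row and per column, and the same
# rank over GF(2) as M.
```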
3.5 Persistent Homology and Derivations
Persistent homology as introduced in [11, 23] may be viewed as a special case of the persistence of towers of vector spaces. To see this, let \({{\mathcal C}}= (C_i, \gamma _i)\) be a tower of chain complexes, with \(\gamma _i\) the inclusion of \(C_i\) in \(C_{i+1}\), and obtain \({{\mathcal H}}= (H_i, \eta _i)\) by applying the homology functor. Assuming coefficients in a field, the latter is a tower of vector spaces. The persistent homology groups are the images of the \(\eta _i^j\). A version of persistent homology, recently introduced in [2], studies a sequence of vector spaces \(U_i\) and linear maps, some of which go forward, from \(U_i\) to \(U_{i+1}\), while others go backward, from \(U_{i+1}\) to \(U_i\). This is a zigzag module if we have exactly one map between any two contiguous vector spaces. It turns out that the theory of persistence generalizes to this setting. In view of our approach based on matchings, this is not surprising. Indeed, the inverse of a matching is again a matching, so that there is no difference at all in the category of matchings. To achieve the same in the category of vector spaces, we only need to adapt the above algorithm to obtain the zigzag generalization of the Basis Lemma. The adaptation is also straightforward, running the algorithm on a sequence of matrices that are the original matrices for the forward maps and the transposed matrices for the backward maps.
There are several ways one can derive towers from towers, and we discuss some of them. Letting \({{\mathcal U}}= (U_i, \upsilon _i)\) and \({{\mathcal V}}= (V_i, \nu _i)\) be towers in \({\mathbf{Vect}}\), we call \({{\mathcal V}}\) a subtower of \({{\mathcal U}}\) if \(V_i \subseteq U_i\) and \(\nu _i\) is the restriction of \(\upsilon _i\) to \(V_i\) and \(V_{i+1}\), for each \(i\). Given \({{\mathcal U}}\) and a subtower \({{\mathcal V}}\), we can take quotients and define the quotient tower, \({{\mathcal U}}/ {{\mathcal V}}= (U_i / V_i, \varrho _i)\), where \(\varrho _i\) is the induced map from \(U_i/V_i\) to \(U_{i+1}/V_{i+1}\). Similarly, we can construct towers from a morphism \(\varphi : {{\mathcal U}}\rightarrow {{\mathcal V}}\), where we no longer assume that \({{\mathcal V}}\) is a subtower of \({{\mathcal U}}\). Taking kernels and images, we get the tower of kernels, which is a subtower of \({{\mathcal U}}\), and the tower of images, which is a subtower of \({{\mathcal V}}\). Taking the quotients, \(U_i / \mathsf{ker\,}{\varphi _i}\) and \(V_i / \mathsf{im\,}{\varphi _i}\), we furthermore define the towers of coimages and of cokernels. In [7], towers of kernels are used in the analysis of sampled stratified spaces and introduced along with the towers of images and cokernels. The benefit of the general framework presented in this section is that persistence is now defined for all these towers, without the need to prove or define anything else.
Of particular interest are the quotient towers, \({{\mathcal U}}/\mathsf{gker\,}{\phi }\) and \({{\mathcal U}}/\mathsf{gim\,}{\phi }\), because of their relation to the Leray functor [18] and Conley index theory [8, 17]. They may be of interest in the future study of the persistence of the Conley index applied to sampled dynamical systems.
3.6 Tower of Eigenspaces
Of particular interest to this paper is the tower of eigenspaces. When studying the eigenvectors of the endomorphisms, we do this for each eigenvalue in turn. To begin, we note that \(\phi : {{\mathcal U}}\rightarrow {{\mathcal U}}\) is a tower in the category \(\mathbf{Endo}({\mathbf{Vect}})\). Indeed, each \(\phi _i: U_i \rightarrow U_i\) is an object, and \(\upsilon _i: U_i \rightarrow U_{i+1}\) commutes with \(\phi _i\) and \(\phi _{i+1}\). Applying the eigenspace functor, \(E_t\), we get the tower \({{\mathcal E}}_t (\phi ) = (E_t (\phi _i), \delta _{t,i})\) in \({\mathbf{Vect}}\). Its objects are the eigenspaces, \(E_t (\phi _i)\), and its arrows are the restrictions, \(\delta _{t,i}\), of the \(\upsilon _i\) to \(E_t (\phi _i)\) and \(E_t (\phi _{i+1})\). We refer to it as the eigenspace tower of \(\phi \) for eigenvalue \(t\).
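For a single endomorphism, the objects of this tower are plain eigenspaces, \(E_t(\phi_i) = \mathsf{ker\,}(\phi_i - t\,\mathrm{id})\), which can be computed by elementary row reduction. A small Python sketch over the rationals (illustrative only; names are ours):

```python
# dim E_t(phi) = nullity of (phi - t id), computed by Gaussian elimination
# with exact rational arithmetic.

from fractions import Fraction

def nullity(M):
    """Dimension of the kernel of a matrix, by row reduction."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    rank = 0
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        p = M[rank][c]
        M[rank] = [x / p for x in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c]:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[rank])]
        rank += 1
    return cols - rank

def eigenspace_dim(phi, t):
    """dim E_t(phi) for a square matrix phi."""
    n = len(phi)
    A = [[phi[i][j] - (t if i == j else 0) for j in range(n)] for i in range(n)]
    return nullity(A)

# An endomorphism that doubles one homology class and fixes another:
phi = [[2, 0],
       [0, 1]]
print(eigenspace_dim(phi, 2))   # 1
print(eigenspace_dim(phi, 1))   # 1
print(eigenspace_dim(phi, 3))   # 0
```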
Much of the technical challenge we face in this paper derives from the difficulty of constructing linear self-maps from sampled self-maps. This motivates us to extend the above construction to a pair of morphisms. Let \({{\mathcal V}}= (V_i, \nu _i)\) be a second tower in \({\mathbf{Vect}}\), let \(\varphi , \psi : {{\mathcal U}}\rightarrow {{\mathcal V}}\) be morphisms between the two towers, and recall that this gives a tower in \(\mathbf{Pairs}({\mathbf{Vect}})\). Its objects are the pairs \(\varphi _i, \psi _i : U_i \rightarrow V_i\), and its arrows are the commutative diagrams with vertical maps \(\upsilon _i\) and \(\nu _i\), as in (8). Similar to the single-map case, we apply the eigenspace functor, \(E_t\), now from \(\mathbf{Pairs}({\mathbf{Vect}})\) to \({\mathbf{Vect}}\). This gives the tower \({{\mathcal E}}_t (\varphi , \psi ) = (E_t (\varphi _i, \psi _i), \epsilon _{t,i})\) in \({\mathbf{Vect}}\). Its objects are the eigenspaces, and its arrows are the linear maps that map \([u] \in E_t (\varphi _i, \psi _i)\) to \([\upsilon _i (u)] \in E_t (\varphi _{i+1}, \psi _{i+1})\). We refer to it as the eigenspace tower of the pair \((\varphi , \psi )\) for the eigenvalue \(t\). This is the main new tool in our study of self-maps. Of particular interest will be the persistence module of this tower.
4 Algorithm
Assuming a hierarchical representation of an endomorphism, we explain how to compute the persistent homology of its eigenspaces in three steps. The general setting consists of a filtration and an increasing sequence of self-maps. In Step 1, we compute the bases of the two towers obtained by applying the homology functor to the filtrations of spaces and domains. In Step 2, we construct matrix representations of the linear maps in the morphism between the two towers. In Step 3, we compute the eigenvalues and the corresponding eigenspaces as well as their persistence.
4.1 Hierarchical Representation
Assuming the above hierarchical representation of the sampled map, we now carry out these three steps in turn.
Step 1: Spaces Applying the homology functor, we get the tower \({{\mathcal X}}= (X_i, \xi _i)\) from the filtration of domains \(\mathsf{dom\,}{\kappa _i}\), and the tower \({{\mathcal Y}}= (Y_i, \eta _i)\) from the filtration of complexes \(K_i\). In this step, we compute the bases of these towers, which we explain for \({{\mathcal X}}\). Importantly, we represent all domains and maps in a single data structure, and we compute the basis in a single step that considers all maps at once.
Call \(\mathsf{dom\,}{\kappa _i} \setminus \mathsf{dom\,}{\kappa _{i-1}}\) the \(i\)th block of simplices, and sort \(\mathsf{dom\,}{\kappa }\) as \({\sigma }_1, {\sigma }_2, \ldots , {\sigma }_m\) such that each simplex succeeds its faces, and the \(i\)th block succeeds the \((i-1)\)st block, for every \(1 \le i \le n\). Let \(D\) be the ordered boundary matrix of \(\mathsf{dom\,}{\kappa }\); that is, \(D [k,\ell ]\) is nonzero if \({\sigma }_k\) is a codimension-\(1\) face of \({\sigma }_\ell\), and \(D [k,\ell ] = 0\) otherwise. The ordering implies that the submatrix consisting of the first \(i\) blocks of rows and the first \(i\) blocks of columns is the boundary matrix of \(\mathsf{dom\,}{\kappa _i}\), for each \(i\). We use the original persistence algorithm [10, Chap. VII.1] to construct the basis. Similar to the echelon form, it creates a collection of distinguished nonzero entries, at most one per column and row, but to preserve the order, it does not arrange them in a staircase. Specifically, the algorithm uses left-to-right column additions to get \(D\) into reduced form, which is a matrix \(R\) such that the lowest nonzero entries of the columns belong to distinct rows.
Given an index \(1 \le i \le n\), we identify the persistence intervals that contain \(i\) and get a basis of \(X_i\) by gathering the vectors in the corresponding columns of \(R\). The collection of these bases forms a basis of \({{\mathcal X}}\); see Sect. 3. Running the same algorithm on the filtration of \(K\), we get a basis of \({{\mathcal Y}}\).
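The reduction used in Step 1 can be sketched compactly. The following Python fragment (ours) performs the left-to-right column additions over GF(2), storing each column as its set of nonzero row indices:

```python
# Left-to-right column additions bring the boundary matrix D into reduced
# form R, in which the lowest nonzero entries of the columns lie in
# distinct rows. Over GF(2), adding a column is a symmetric difference.

def reduce_boundary(columns):
    """columns[j] = set of row indices of the nonzero entries of column j."""
    R = [set(col) for col in columns]
    low_of = {}                            # lowest row index -> column index
    for j in range(len(R)):
        while R[j] and max(R[j]) in low_of:
            R[j] ^= R[low_of[max(R[j])]]   # add the earlier column (mod 2)
        if R[j]:
            low_of[max(R[j])] = j
    return R

# Boundary matrix of a hollow triangle: vertices 0,1,2 then edges 3,4,5.
D = [set(), set(), set(), {0, 1}, {1, 2}, {0, 2}]
R = reduce_boundary(D)
# Column 5 reduces to zero: the three edges form a 1-cycle.
```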
Step 2: Maps Let \(\varphi _i, \psi _i: {{\mathcal X}}\rightarrow {{\mathcal Y}}\) be the morphisms such that \(\varphi _i\) is induced by \(\iota _i : \mathsf{dom\,}{\kappa _i} \rightarrow K_i\) and \(\psi _i\) is induced by \(\kappa _i' : \mathsf{dom\,}{\kappa _i} \rightarrow K_i\). In the second step of the algorithm, we construct matrix representations of the two morphisms. Both matrices, \(\Phi \) for \(\varphi \) and \(\Psi \) for \(\psi \), have their columns indexed by the nonzero columns of the reduced matrix of \({{\mathcal X}}\) and their rows indexed by the nonzero columns of the reduced matrix of \({{\mathcal Y}}\). We explain the computations for \(\Psi \).
The two computed matrices represent the morphisms, \(\varphi \) and \(\psi \), from which the matrices \(\Phi _i\) and \(\Psi _i\) representing the arrows, \(\varphi _i\) and \(\psi _i\), can be extracted. To do so, we first find all persistence intervals that contain \(i\), as before. Second, we collect the intersections of all corresponding rows and columns, as illustrated in Fig. 6.
5 Analysis
Given a finite set of sample points and their images, we apply the algorithm of Sect. 4 to compute information about the otherwise unknown map acting on an unknown space. In this section, we prove that under mild assumptions—about the space, the map, and the sample—this information includes the correct dimension of the eigenspaces. We also show that the persistence diagrams of the eigenspace towers are stable under perturbations of the input.
5.1 Graphs and Distances
We note that there are functions \(f\) for which \({\mathrm{hfs}{({{{\mathbb X}}})}} < {\mathrm{hfs}{({{G{f}}})}}\), but there are also functions for which the relation is reversed. For example, the graph of the function that wraps the unit circle in \({{\mathbb R}}^2\) \(k\) times around itself is a curve on a torus in \({{\mathbb R}}^4\). For large values of \(k\), thickening this curve by a small radius suffices to get the same homotopy type as the torus, while thickening the circle by the same radius does not change its homotopy type.
5.2 Sublevel Sets
To apply the algorithms in Sect. 4, we use an indirect approach that encodes the sublevel sets in computationally more amenable simplicial complexes. To explain this connection, we construct a complex by drawing a ball of radius \(r\) around each point of \(S\), and let \(K_r = K_r (S)\) be the nerve of this collection. It is sometimes referred to as the Čech complex of \(S\) and \(r\); see [10, Chap. III]. Similarly, we let \(L_r = L_r ({G{g}})\) be the Čech complex of \({G{g}}\) and \(r\).^{7} While the complexes are abstract, they are constructed over geometric points, which we use to form maps. Specifically, we write \(p_r : L_r \rightarrow K_r\) for the simplicial map we get by projecting \({{\mathbb R}}^\ell \times {{\mathbb R}}^\ell\) to the first factor, and we write \(q_r : L_r \rightarrow K_r\) if we project to the second factor. Both are simplicial maps because for every simplex in \(L_r\), its projections to the two factors both belong to \(K_r\). Note that \(p_r\) is injective, which implies that its inverse is a partial simplicial map, \(p_r^{-1} : K_r \nrightarrow L_r\), whose restriction to its domain is a simplicial isomorphism. Composing it with \(q_r\), we get the partial simplicial map \(q_r p_r^{-1} : K_r \nrightarrow K_r\).
We are now ready to relate this construction to the setup we use for our algorithm in Sect. 4. There, we begin with a partial simplicial map, \(\kappa : K \nrightarrow K\), and a filtration of \(K\). The filtration is furnished by the sequence of Čech complexes of \(S\), which ends with the complete simplicial complex \(K\) over the points in \(S\), and \(\kappa\) is the partial simplicial map defined by \(g : S \rightarrow S\). In this case, \(\kappa\) happens to be a simplicial map because \(K\) is complete. For each radius, \(r\), we have defined \(\kappa _r : K_r \nrightarrow K_r\) as the restriction of \(\kappa\), which is a partial simplicial map. It is not difficult to prove that \(\kappa _r\) is equal to the map we obtained before by composing \(p_r^{-1}\) and \(q_r\). We state this result and its consequence for towers of eigenspaces without proof.
Projection Lemma
Let \(\kappa _r : K_r \nrightarrow K_r\) be the partial simplicial map obtained by restricting \(\kappa\) to \(K_r\), and let \(p_r, q_r : L_r \rightarrow K_r\) be the simplicial maps induced by projecting to the two factors. Then \(\kappa _r = q_r p_r^{-1}\), for every \(r \ge 0\).
Recall that \(\kappa _r'\) is the restriction of \(\kappa _r\) to the domain, and \(\iota _r: \mathsf{dom\,}{\kappa _r} \rightarrow K_r\) is the inclusion map. In view of the Projection Lemma, we can freely move between the tower of eigenspaces we get for \((\iota , \kappa ')\) and \((p,q)\), which we do in the sequel.
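At the level of vertices, the composition \(q_r p_r^{-1}\) is easy to trace. The following Python sketch (with invented sample data; not from the paper) restricts a sampled map to the vertices that survive at a given radius:

```python
# Vertices of L_r are the pairs (x, g(x)) whose two coordinates both lie
# in K_r. The first-factor projection p is injective, so its inverse is a
# partial function; composing with the second-factor projection q gives
# the partial self-map kappa_r on the vertices of K_r.

S = [0, 1, 2, 3]
g = {0: 0, 1: 2, 2: 0, 3: 2}         # a sampled self-map on S
K_r = {0, 1, 2}                      # vertices surviving at radius r
L_r = [(x, g[x]) for x in S if x in K_r and g[x] in K_r]

p = {v: v[0] for v in L_r}           # first-factor projection, injective
q = {v: v[1] for v in L_r}           # second-factor projection
p_inv = {x: v for v, x in p.items()} # partial inverse of p
kappa_r = {x: q[p_inv[x]] for x in p_inv}
print(kappa_r)                       # {0: 0, 1: 2, 2: 0}
```

As expected, \(\kappa_r\) agrees with the restriction of \(g\) to those vertices whose image also lies in \(K_r\); here the vertex \(3\) drops out of the picture entirely.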
5.3 Interleaving
Interleaving Lemma
Proof
The vertical arrows are replicas of (26), which are justified by \(a+{\varepsilon }\le b\) and \(c \le d{\varepsilon }\). Each horizontal arrow connects objects in \(\mathbf{Pairs}(\mathbf{Top})\) induced by the same selfmap, so we only need \(a \le d\) and \(b \le c\), which we get from the assumptions. Applying the homology functor, we get the same diagram, with spaces and maps replaced by the corresponding vector spaces and linear maps. Indeed, the horizontal maps are clear, and the vertical maps are induced by the inclusions that exist because of the assumed relations between \(a, b, c, d\), and \({\varepsilon }\). Applying now the eigenspace functor, we get the diagram in (27), which is easily seen to commute. The proof for the diagram in (28) is similar and omitted. \(\square \)
5.4 Small Thickenings
ANR Lemma
Let \({{\mathbb X}}\subseteq {{\mathbb R}}^\ell\) be a compact absolute neighborhood retract, let \(f : {{\mathbb X}}\rightarrow {{\mathbb X}}\) be such that the associated distance functions are tame, and let \(r\) be positive but smaller than \(\min \{ {\mathrm{hfs}{({{{\mathbb X}}})}}, {\mathrm{hfs}{({{G{f}}})}} \}\). Writing \(\mu _r, \nu _r\) for the maps induced in homology by the restrictions of \(p, q\) to \({G{f}}_r\) and \({{\mathbb X}}_r\), the eigenspace \(E_t (\mu _r, \nu _r)\) is isomorphic to \(E_t (\phi )\), for every eigenvalue \(t\).
5.5 Convergence
Inference Theorem
Proof
The Inference Theorem may be interpreted as a statement of convergence of our algorithm: if the sampling is fine enough, then we are guaranteed to get the dimensions of the eigenspaces as dimensions of persistent homology groups.
5.6 Stability
Stability Theorem
Proof
According to [4, Thm. 4.9], we only need to verify the \({\varepsilon }\)-strong interleaving of the two towers for \({\varepsilon }\) equal to the Hausdorff distance between \(G{h}\) and \(G{k}\), but this is guaranteed by the Interleaving Lemma. \(\square \)
Letting \(h\) and \(k\) be finite samples of \(f: {{\mathbb X}}\rightarrow {{\mathbb X}}\), the Stability Theorem implies that the information they convey about the given function cannot be arbitrarily different. Setting \(h = f\) and \(k = g\), the theorem quantifies the extent to which the persistence diagram for the sampled points can deviate from that of the original selfmap.
6 Experiments
In this section, we present the results of a small number of computational experiments based on the implementation of the algorithm described in Sect. 4. We begin with a brief review of the algorithm and a discussion of the design decisions used in writing the software.
6.1 Implementation
 Step 1

compute the bases of the towers \({{\mathcal X}}= (X_i, \xi _i)\) and \({{\mathcal Y}}= (Y_i, \eta _i)\) defined by the filtrations of the domains and complexes. We use the original persistence algorithm with a sparse-matrix representation in which the nonzero elements of each column are stored in the data structure referred to as a vector in C++.
 Step 2

compute the matrix representations of the morphisms \(\varphi , \psi : {{\mathcal X}}\rightarrow {{\mathcal Y}}\). Similar to Step 1, the algorithm works by incremental reduction of two matrices. The result is a compact representation of the maps \(\epsilon _{t,i}\) in the eigenspace tower.
 Step 3

construct the sequence of eigenspaces and compute the corresponding persistence diagram. Since the matrices representing the maps \(\epsilon _{t,i}\) tend to be small and dense, we use their full representations and the algorithm in the proof of the Basis Lemma.
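Step 3 hinges on one linear-algebra computation: the dimension of the eigenspace at \(t\) is the nullity of \(\Phi - t \Psi \), reduced by the dimension of \(\mathsf{ker\,}\varphi \cap \mathsf{ker\,}\psi \), which is divided out. The following is a minimal sketch of this computation; it works over the real numbers with floating-point rank estimates, whereas the actual implementation uses exact arithmetic over a finite field, and the function names are ours, not the software's:

```python
import numpy as np

def nullity(M, tol=1e-10):
    """Dimension of the nullspace of M, estimated from singular values."""
    if min(M.shape) == 0:
        return M.shape[1]
    s = np.linalg.svd(M, compute_uv=False)
    return M.shape[1] - int(np.sum(s > tol))

def eigenspace_dim(Phi, Psi, t):
    """dim E_t(phi, psi): the nullity of Phi - t*Psi, minus the dimension
    of ker Phi intersected with ker Psi (the common kernel is divided out)."""
    raw = nullity(Phi - t * Psi)              # dim of the raw eigenspace
    common = nullity(np.vstack([Phi, Psi]))   # dim of ker Phi ∩ ker Psi
    return raw - common
```

For instance, with \(\Phi = \mathrm{diag}(2,1)\) and \(\Psi \) the identity, the eigenspace at \(t = 2\) is one-dimensional and the eigenspace at \(t = 3\) is trivial.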
Time in seconds for constructing the \(2\)-skeleton of the Vietoris–Rips complex and executing the steps of our algorithm for one eigenvalue:

No. of points   Skeleton   Step 1   Step 2   Step 3
     40            0.14      0.39     0.00     0.04
     60            0.67      2.12     0.00     0.15
     80            2.21      6.84     0.01     0.39
    100            5.18     23.49     0.01     0.86
    120           11.13     63.09     0.01     1.50
    140           19.53    137.86     0.03     2.18
We mention that the running time can be further improved. In particular, the current implementation is generic, working for any field of coefficients, and the code implementing Step 1 has not yet been optimized. Note the dramatic drop in the running time from Step 1 to Step 2. The reason is the surprisingly small number of generators needed in the construction of the matrices \(\Phi \) and \(\Psi \). In the first set of experiments, we get between \(3\) and \(21\) generators for the filtration of \(\mathsf{dom\,}{\kappa }\) and between \(5\) and \(24\) generators for the filtration of \(K\). Compare this with the 10,700 to 457,450 simplices in the \(2\)-skeleta of the Vietoris–Rips complexes, which have to be processed in Step 1. The code for Step 3 takes more time than that for Step 2 because it executes computationally demanding procedures in linear algebra.
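The dominant cost of Step 1 is the reduction of the sparse boundary matrix. To fix ideas, here is a minimal sketch of the standard column-reduction persistence algorithm over \({{\mathbb Z}}/2\) (the actual software is generic over any coefficient field); `boundary[j]` plays the role of the sparse column stored in a C++ vector:

```python
def persistence_pairs(boundary):
    """Standard persistence reduction over Z/2. boundary[j] lists the row
    indices of the nonzero entries of column j of the boundary matrix,
    with columns ordered by the filtration."""
    low_to_col = {}                 # lowest nonzero row -> owning column
    cols = [set(c) for c in boundary]
    pairs = []
    for j, col in enumerate(cols):
        while col:
            low = max(col)          # lowest nonzero entry of column j
            owner = low_to_col.get(low)
            if owner is None:
                break
            col ^= cols[owner]      # add the owning column, mod 2
        if col:                     # column did not reduce to zero:
            low_to_col[max(col)] = j
            pairs.append((max(col), j))   # simplex max(col) dies at j
    return pairs
```

On the filtration of a triangle (vertices 0–2, edges 3–5, the 2-simplex 6), the reduction pairs each edge with the vertex it kills and the triangle with the last edge, leaving one essential vertex unpaired.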
6.2 Expansion
In our first set of computational experiments, we consider the unit circle in the complex plane, and the function \(f: {{\mathbb S}}^1 \rightarrow {{\mathbb S}}^1\) defined by \(f(z) := z^2\). It maps each point on the circle to the point with twice the angle. The \(1\)-dimensional homology of the circle has rank \(1\), with the circle itself being a generator. Under \(f\), the image of this generator is the circle that wraps around \({{\mathbb S}}^1\) twice. We see that the map expands the space, doubling the angle between any two points. Our main interest is to see whether the methods of this paper can detect this simple fact.
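A dataset for this experiment can be generated as follows. The sketch below is our illustration of the setup, not the paper's exact protocol: the function name and the choice of adding independent Gaussian noise to the sample points and to their images are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_expansion(n, sigma):
    """Sample n points of the unit circle with Gaussian noise of width
    sigma, together with their images under f(z) = z^2; noise is added
    to domain points and image points independently."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    noise = lambda: sigma * (rng.normal(size=n) + 1j * rng.normal(size=n))
    points = np.exp(1j * theta) + noise()
    images = np.exp(2j * theta) + noise()
    return points, images
```

With \(\sigma = 0\) the sample lies exactly on \({{\mathbb S}}^1\) and the images agree with \(z \mapsto z^2\); the experiments below vary \(\sigma \) between \(0.00\) and \(0.30\).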
As expected, the persistence of the interval decreases as the noise increases. For \(\sigma = 0.30\), we get a low-persistence interval for every value of \(t\). While we do not observe this all the time, it is a generic phenomenon, and we will shed light on it shortly. For now, we just mention that the occurrence of every field value as an eigenvalue indicates that we do not have sufficient data to see the features of the map.
6.3 Reflection
Instead of showing the individual \(1\)-dimensional persistence diagrams, we superimpose them into one diagram. To further facilitate the comparison with the first set of experiments, we draw the superimposed diagrams side by side in Fig. 11. In both cases, the Gaussian noise varies between \(0.00\) and \(0.30\). We limit the comparison to the eigenvalues \(t = 2\) for the expansion, and \(t = -1\) (the additive inverse of \(1\) in the finite field used) for the reflection. The diagrams clearly show that the persistence interval shrinks with increasing noise. Indeed, the birth-coordinate grows and the death-coordinate shrinks, so that the sum stays approximately constant, with a faint tendency to shrink. We also see that for the larger noise levels, there are sometimes spurious persistence intervals.
6.4 Abundance of Eigenvalues
We wish to shed light on the phenomenon that for some datasets and some complexes in the filtration, every field value is an eigenvalue of the pair of linear maps. While it might be surprising at first, there is an elementary explanation that has to do with computing the eigenvalues for a pair instead of a single map.
Let us now look at the linear algebra of the situation. In Step 2, we compute matrices \(\Phi _r\) and \(\Psi _r\) representing \(\varphi _r, \psi _r: Y_r \rightarrow X_r\), and in Step 3, we compute the nullspace of \(\Phi _r - t \Psi _r\). The entries of this matrix are degree-\(1\) polynomials in \(t\). Let \(t_0\) be a value at which the matrix reaches its maximum rank, which we denote as \(k_0\). Clearly, \(k_0 \le \min \{ \#\mathrm{rows}, \#\mathrm{columns} \}\). Note that \(\Phi _r - t_0 \Psi _r\) has a full-rank minor of size \(k_0\) times \(k_0\). Let \(\Delta (t)\) be the determinant of that minor, but now for arbitrary values \(t\). It is a polynomial of degree at most \(k_0\), and because \(\Delta (t_0) \ne 0\), it is not identically zero and therefore has at most \(k_0\) roots. By choice of \(t_0\), this implies that the matrix has maximum rank for all but at most \(k_0\) values of \(t\). Correspondingly, the nullspace has minimum dimension, \(\#\mathrm{columns} - k_0\), for all but at most \(k_0\) values of \(t\). This is the dimension of \(\bar{E}_t (\varphi _r, \psi _r)\). We still take the quotient by dividing with \(\mathsf{ker\,}{\varphi _r} {\; \cap \;}\mathsf{ker\,}{\psi _r}\), which amounts to reducing the dimension by the dimension of that intersection, which we denote as \(k_1\). The resulting dimension of the nullspace is the same for all but at most \(k_0\) values of \(t\), namely \(\#\mathrm{columns} - k_0 - k_1\). If \(k_1 < \#\mathrm{columns} - k_0\), then \(E_t (\varphi _r, \psi _r)\) has positive rank for every value of \(t\). This is what happens for the expanding datasets generated with width \(\sigma = 0.15\), \(0.24\), and \(0.30\), and for the persistence intervals represented by the dots in the left diagram in Fig. 11. In all other cases, we have \(k_1 = \#\mathrm{columns} - k_0\).
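The argument above can be checked symbolically for small matrices. The following sketch (our illustration, using sympy rather than the paper's implementation) computes the generic rank \(k_0\) of \(\Phi - t \Psi \) and the finitely many exceptional values of \(t\) at which the rank drops, namely the common roots of all \(k_0 \times k_0\) minors:

```python
import itertools
from functools import reduce
import sympy as sp

t = sp.symbols('t')

def exceptional_values(Phi, Psi):
    """Return (roots, k0): the generic rank k0 of Phi - t*Psi, and the
    finitely many t at which the rank drops below k0, computed as the
    common roots of all k0-by-k0 minors."""
    M = sp.Matrix(Phi) - t * sp.Matrix(Psi)
    k0 = M.rank()   # rank over the rational functions in t = generic rank
    if k0 == 0:
        return [], 0
    minors = [M.extract(list(r), list(c)).det()
              for r in itertools.combinations(range(M.rows), k0)
              for c in itertools.combinations(range(M.cols), k0)]
    g = reduce(sp.gcd, minors)  # vanishes exactly where all minors vanish
    return sp.solve(g, t), k0
```

For \(\Phi = \mathrm{diag}(2,3)\) and \(\Psi \) the identity, the generic rank is \(2\) and the exceptional values are \(t = 2\) and \(t = 3\), the ordinary eigenvalues.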
In conclusion, we mention that the extension of the eigenvalue problem to pairs of linear maps for not necessarily square matrices is not well understood. A relevant unpublished manuscript is [5], in which properties of the solution are discussed and a reduction algorithm is given.
6.5 Torus Maps
7 Discussion

Can the persistence of the eigenspace towers of a pair of morphisms be computed directly, for all eigenvalues simultaneously? To answer this question, we may have to study the persistence of generalized eigenspaces and Jordan form representations of endomorphisms.

The category approach to persistence opens the door to a number of derived towers, including generalized kernels and generalized images. How can we use their persistence to enhance our understanding of discretely sampled dynamical systems?
Footnotes
 1.
A field is algebraically closed if every nonconstant polynomial over the field has a root.
 2.
To be more precise, we should call it a weak inverse, because \(\alpha ^{-1}\alpha \) and \(\alpha \alpha ^{-1}\) are identities on the domain and image of \(\alpha \) and not necessarily on \(A\) and \(B\). We simplify language by ignoring this subtlety.
 3.
We note a difference in convention to most of the related literature, in which persistence intervals are defined half-open. In particular, \([k,\ell ]\) in our notation corresponds to \([k,\ell +1)\) in [10]. Reading them as intervals in \({{\mathbb Z}}\), they are the same.
 4.
Part of the theory in this section can be developed for the more general case of finitely generated modules over a principal ideal domain. For reasons of simplicity, and because the crucial connection to matchings relies on stronger algebraic properties, we limit this discussion to vector spaces over a field right from the start.
 5.
With occasionally more elaborate formalism, everything we say can be generalized to \({{\mathbb X}}\) embedded in a general metric space.
 6.
In cases in which the image of a point is not in \(S\), we can snap the image to the nearest point in \(S\), which usually implies only a small perturbation of the map. Similarly, we can relax the assumption that \(g\) be a restriction of \(f\) to allow for errors due to noise, for imprecision of measurement, and for approximations in computation.
 7.
A practically more convenient alternative is the Vietoris–Rips complex that consists of all simplices spanned by the data points whose diameters do not exceed \(2r\). We will use it for the computations discussed in Sect. 6, but for now we stay with the Čech complex, which has the theoretical advantage that its homotopy type agrees with that of the sublevel set for the same \(r\).
 8.
Following [6], we assume that each persistence diagram contains copies of all empty intervals—points of the form \((a,a)\)—which are used to complete a bijection or decrease the maximum distance.
Acknowledgments
This research is partially supported by the Toposys project FP7-ICT-318493-STREP, by ESF under the ACAT Research Network Programme, by the Russian Government under mega project 11.G34.31.0053, and by the Polish National Science Center under Grant No. N201 419639.
References
 1. P. Bubenik and J. A. Scott. Categorification of persistent homology. Discrete Comput. Geom. 51 (2014), 600–627.
 2. G. Carlsson and V. de Silva. Zigzag persistence. Found. Comput. Math. 10 (2010), 367–405.
 3. G. Carlsson, T. Ishkanov, V. de Silva and A. Zomorodian. On the local behavior of spaces of natural images. Internat. J. Comput. Vision 76 (2008), 1–12.
 4. F. Chazal, D. Cohen-Steiner, M. Glisse, L. J. Guibas and S. Y. Oudot. Proximity of persistence modules and their diagrams. In "Proc. 25th Sympos. Comput. Geom., 2009", 237–246.
 5. S. Cohen and C. Tomasi. Systems of bilinear equations. Techn. Rept., Dept. Comput. Sci., Stanford Univ., California, 1997.
 6. D. Cohen-Steiner, H. Edelsbrunner and J. Harer. Stability of persistence diagrams. Discrete Comput. Geom. 37 (2007), 103–120.
 7. D. Cohen-Steiner, H. Edelsbrunner, J. Harer and D. Morozov. Persistent homology for kernels, images, and cokernels. In "Proc. 20th Ann. ACM-SIAM Sympos. Discrete Alg., 2009", 1011–1020.
 8. C. Conley. Isolated Invariant Sets and the Morse Index. Amer. Math. Soc., CBMS Regional Conf. Series Math. 38, 1978.
 9. H. Derksen and J. Weyman. Quiver representations. Notices Amer. Math. Soc. 52 (2005), 200–206.
 10. H. Edelsbrunner and J. L. Harer. Computational Topology. An Introduction. Amer. Math. Soc., Providence, Rhode Island, 2010.
 11. H. Edelsbrunner, D. Letscher and A. Zomorodian. Topological persistence and simplification. Discrete Comput. Geom. 28 (2002), 511–533.
 12. J. Gamble and G. Heo. Exploring uses of persistent homology for statistical analysis of landmark-based shape data. J. Multivar. Analysis 101 (2010), 2184–2199.
 13. A. Gyulassy, V. Natarajan, V. Pascucci, P.-T. Bremer and B. Hamann. A topological approach to simplification of three-dimensional scalar fields. IEEE Trans. Visual. Comput. Graphics (2006), 474–484.
 14. T. Kaczynski, K. Mischaikow and M. Mrozek. Computational Homology. Springer-Verlag, New York, New York, 2004.
 15. S. Kleene. Introduction to Metamathematics. North-Holland, Amsterdam, the Netherlands, 1952.
 16. S. Mac Lane. Categories for the Working Mathematician. Springer-Verlag, New York, New York, 1971.
 17. K. Mischaikow and M. Mrozek. The Conley index theory. In B. Fiedler, G. Iooss and N. Kopell (eds.), Handbook of Dynamical Systems III: Towards Applications, Elsevier, 2002, 393–460.
 18. M. Mrozek. Leray functor and the cohomological Conley index for discrete time dynamical systems. Trans. Amer. Math. Soc. 318 (1990), 149–178.
 19. M. Mrozek. Normal functors and retractors in categories of endomorphisms. Univ. Iagel. Acta Math. 29 (1992), 181–198.
 20. E. Spanier. Algebraic Topology. McGraw-Hill, New York, 1966.
 21. S. Weinberger. What is ... persistent homology? Notices Amer. Math. Soc. 58 (2011), 36–39.
 22. A. Zomorodian. Fast construction of the Vietoris–Rips complex. Computers & Graphics 34 (2010), 263–271.
 23. A. Zomorodian and G. Carlsson. Computing persistent homology. Discrete Comput. Geom. 33 (2005), 249–274.
 24. CAPD: computer assisted proofs in dynamics. http://www.capd.ii.uj.edu.pl.
 25. CAPD::RedHom: reduction algorithms for homology computation. http://www.redhom.ii.uj.edu.pl.
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.