1 Introduction

The science of data is constantly changing. Classic [1] and emerging [2] machine learning tools and network-theoretic descriptions [3, 4] are opening windows on the intimate details of systems as diverse as individual behavioural and mobility patterns [5], human consciousness and language [6], the control and robustness of biological systems [7, 8], and even quantum dynamics [9].

While network approaches can describe the fabric of relations between agents in a system, or molecules in an organism, their descriptive power is constrained to pairwise interactions (i.e. edges), which might not always be justified when focusing on phenomena that involve group dynamics (e.g. scientific collaboration, genetic pathways) or higher-order descriptions (e.g. viral evolution, molecule folding). Similarly, machine learning tools are extremely efficient in classifying and segmenting datasets into consistent patterns or clusters. However, they are at times challenged when asked to produce an organic description of the interactions among a system’s components, and they often suffer from the curse of dimensionality due to their underlying geometric formalization. Over the last decade, a set of new techniques for data analysis, based on a set-theoretic formalism, has been gaining traction. The set-theoretic foundation makes them topological in nature (and hence geometry-independent), and so they have come to be collectively referred to as Topological Data Analysis (TDA). The reason for the growing interest in TDA is its capacity to capture the large- and mesoscale shape of datasets via their algebraic-topological structure [10]: on the one hand, TDA allows one to enrich network descriptions with higher-order ones (i.e. many-body interactions); on the other, it also adds a notion of organization, or shape, to the descriptions obtained from traditional classification techniques. In this note, we aim to introduce TDA, survey some of its successful applications to real data analysis, and explain why we believe it to be important for the data science community as a set of goggles complementary to the existing ones.

2 Why topological data analysis?

The novelty of TDA is that it studies the shape of topological spaces at the mesoscopic scale by going beyond standard measures defined on pairs of data points. This is done by moving from networks to simplicial complexes. The latter are built from elementary objects called simplices (simple polyhedra such as points, line segments, triangles, tetrahedra, and their higher-dimensional analogues), glued together along their faces.
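To make the construction concrete, the following minimal Python sketch (our own illustration, not code from the cited works; the vertex labels are arbitrary) stores a simplicial complex as a set of simplices closed under taking faces, which is precisely the “glued together along their faces” requirement described above.

```python
from itertools import combinations

# Minimal sketch: a simplicial complex stored as a set of simplices, each simplex
# being a frozenset of vertex labels.  A valid complex must be closed under taking
# faces: every non-empty subset of a simplex is itself a simplex of the complex.

def closure(simplices):
    """Return the smallest simplicial complex containing the given simplices."""
    complex_ = set()
    for simplex in simplices:
        s = frozenset(simplex)
        for k in range(1, len(s) + 1):
            complex_.update(frozenset(face) for face in combinations(s, k))
    return complex_

# Two triangles {a, b, c} and {b, c, d} glued along their shared edge {b, c},
# plus a dangling edge {d, e}.
K = closure([("a", "b", "c"), ("b", "c", "d"), ("d", "e")])

# Group simplices by dimension (number of vertices minus one) and print them.
by_dim = {}
for s in K:
    by_dim.setdefault(len(s) - 1, []).append(sorted(s))
for dim in sorted(by_dim):
    print(f"dimension {dim}: {sorted(by_dim[dim])}")
```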

Simplicial complexes were first introduced in 1895 by Poincaré in his seminal work “Analysis Situs” [11] as a simplicial decomposition (triangulation) of a manifold. They have since been used to store in discrete form key information on a topological space and to transform complicated topological problems into more familiar algebraic ones with the introduction of simplicial homology (we refer to Aleksandrov [12] for a beautiful account of the birth of combinatorial topology). As the fundamental method of combinatorial topology [13], simplicial complexes are not new in science; for example, they are the secret behind every 3D rendering and image recognition software [14]. They have however taken on a new life with the emergence of TDA techniques [15, 16].

In this arena, simplicial complexes constitute the representation of choice for many-body interactions in complex systems. In fact, by gluing together simplices of different sizes and composition, one is able to describe varied, heterogeneous and changing interactions. The resulting simplicial complexes efficiently summarise the shape of the underlying datasets and yield mesoscopic information about how simplices coordinate with one another across intermediate and large scales within the complexes. TDA summaries can be read out from the simplicial complexes directly, or by studying the patterns of holes in all dimensions that define their shapes (via the corresponding homology groups) [17]. These summaries are both informative and guaranteed to be robust to perturbations [18]: in particular, they do not vary under changes in coordinates or under deformations of the individual samples, which makes them parsimonious descriptions of arbitrary datasets. The possibility of representing complex interactions of any order, together with robustness to the missing and corrupted data common to many real datasets, is the rationale behind the growing number of applications in biology [19], neuroscience [20, 21], social sciences [22, 23], physics [24, 25], quantum computation [26], and nanotechnologies [27].
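As a hedged, self-contained illustration of how such homological summaries are obtained (again our own sketch, not code from the cited references), the snippet below assembles the signed boundary matrices of a small complex and reads off its Betti numbers over the reals: a hollow triangle has one connected component (beta_0 = 1) and one one-dimensional hole (beta_1 = 1).

```python
import numpy as np

# Minimal sketch: Betti numbers of a small simplicial complex from its boundary
# matrices over the reals, using beta_k = (#k-simplices) - rank(d_k) - rank(d_{k+1}),
# where d_0 is the zero map.  Simplices are given as sorted tuples of vertices.

def boundary_matrix(k_simplices, km1_simplices):
    """Signed boundary matrix mapping k-simplices (columns) to (k-1)-simplices (rows)."""
    index = {s: i for i, s in enumerate(km1_simplices)}
    D = np.zeros((len(km1_simplices), len(k_simplices)))
    for j, simplex in enumerate(k_simplices):
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]   # drop the i-th vertex
            D[index[face], j] = (-1) ** i
    return D

def betti_numbers(simplices_by_dim):
    """simplices_by_dim[k] lists the k-simplices of the complex."""
    max_dim = len(simplices_by_dim) - 1
    ranks = [0] * (max_dim + 2)                    # d_0 and d_{max_dim+1} are empty
    for k in range(1, max_dim + 1):
        D = boundary_matrix(simplices_by_dim[k], simplices_by_dim[k - 1])
        ranks[k] = np.linalg.matrix_rank(D)
    return [len(simplices_by_dim[k]) - ranks[k] - ranks[k + 1]
            for k in range(max_dim + 1)]

# A hollow triangle: three vertices and three edges, but no filled 2-simplex.
vertices = [(0,), (1,), (2,)]
edges = [(0, 1), (0, 2), (1, 2)]
print(betti_numbers([vertices, edges]))            # [1, 1]: one component, one hole
```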

3 TDA in practice

In everyday applications, the TDA tool-kit consists essentially of two main techniques: topological simplification (via the Mapper algorithm [28, 29]), and persistent homology [30, 31].

3.1 Topological simplification

The aim of topological approaches to data is to produce sparser, readable summaries of complex datasets. Mapper, introduced by Singh et al. [29], is the main tool for this type of direct data exploration. It produces a topological skeleton of a dataset (akin to a balls-and-sticks representation) by slicing the data space into overlapping slabs according to case-specific quantities (filter, or lens, functions) and performing local clustering within each slab. The resulting clusters are then linked together to recover a simplified yet (provably) complete picture of the overall topology, which nevertheless provides more structured information than standard techniques, such as PCA, MDS, or clustering alone, could.
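To fix ideas, here is a deliberately simplified, hand-rolled Mapper sketch in Python. It assumes a one-dimensional lens and uses scikit-learn’s DBSCAN for the local clustering step; the window width, overlap and clustering parameters are illustrative choices, and full-featured implementations (e.g. the open-source KeplerMapper) are considerably more general.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper(data, lens, n_intervals=10, overlap=0.5, clusterer=None):
    """Return Mapper nodes (sets of point indices) and edges (clusters sharing points)."""
    clusterer = clusterer or DBSCAN(eps=0.3, min_samples=3)
    lo, hi = lens.min(), lens.max()
    length = (hi - lo) / n_intervals            # width of each overlapping window
    step = length * (1 - overlap)               # shift between consecutive windows
    nodes = []
    start = lo
    while start < hi:
        idx = np.flatnonzero((lens >= start) & (lens <= start + length))
        if len(idx) > 0:
            labels = clusterer.fit_predict(data[idx])
            for lab in set(labels) - {-1}:      # ignore DBSCAN noise points
                nodes.append(set(idx[labels == lab]))
        start += step
    # Link two local clusters whenever they share at least one data point.
    edges = [(i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))
             if nodes[i] & nodes[j]]
    return nodes, edges

# Example: a noisy circle, with the x-coordinate as the lens (filter) function.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 500)
data = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=0.05, size=(500, 2))
clusters, edges = mapper(data, lens=data[:, 0], n_intervals=8, overlap=0.4)
print(len(clusters), "nodes,", len(edges), "edges")   # the skeleton should form a loop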

The fact that the clustering is performed locally in each slab makes this approach computationally convenient, as it can easily be computed in parallel. It is therefore a good tool for the analysis of large-scale datasets and can be used within big data frameworks such as Google’s MapReduce paradigm [32]. Mapper is perhaps best known for the discovery of a new subtype of breast cancer from genetic data [33], but it has also found successful applications in other biomedical studies [34], e.g. the identification of diabetes subtypes [35] and of different pulmonary conditions [36], as well as in industrial [37] and commercial applications (e.g. Ayasdi).

3.2 Persistent homology

Despite its ease of interpretation and computational advantages, Mapper does not explicitly yield quantitative insights that allow for direct comparison within and across datasets. Persistent homology does. Unlike Mapper, it does not compress the data. Rather, it encodes the data in a simplicial filtration, a nested sequence of progressively larger simplicial complexes. This filtration is then analysed to build a multi-scale, low-dimensional summary that tracks the lifespan and evolution of connected components, holes and higher-dimensional voids along the sequence of simplicial complexes. That is, it identifies and quantifies the different kinds of ‘empty space’ embedded in the data, which implicitly make up the dataset’s shape. This makes it possible to obtain insights into unique mesoscale structures otherwise invisible to standard analytical tools, which in turn motivates an ever growing number of applications of persistent homology across fields, such as biology [38, 39], social science [22, 31] and neuroscience [20]. For example, in biology, Chan et al. [40] showed that viral genetic recombination is captured better by homological invariants than by standard phylogenetic trees across a number of diseases. In the social sciences, Bajardi et al. [41] used homological features to characterise the correlations between socio-economic indicators and the spatial structure of migrant communities in Milan. Persistent homology has found its widest application so far in the study of structural and functional brain connectivity, where the range of scales and the complexity of the systems seem to benefit the most from a persistent homology description (e.g. [42–44]).
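As a hedged illustration (not taken from the studies cited above), the snippet below computes the persistence diagrams of a noisy circle via a Vietoris–Rips filtration, assuming the open-source ripser Python package is installed; the single long-lived interval in dimension one is the signature of the circular hole, while short-lived intervals correspond to sampling noise.

```python
import numpy as np
from ripser import ripser   # assumes the `ripser` package is installed

# Minimal sketch: persistent homology of a noisy circle via a Vietoris-Rips filtration.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
points = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=0.05, size=(200, 2))

diagrams = ripser(points, maxdim=1)['dgms']        # [H0 diagram, H1 diagram]
h1 = diagrams[1]                                   # rows are (birth, death) pairs
lifespans = h1[:, 1] - h1[:, 0]
longest = h1[np.argmax(lifespans)]
print(f"most persistent 1D feature: born {longest[0]:.2f}, dies {longest[1]:.2f}")
```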

Unfortunately, these features are somewhat less intuitive than those Mapper provides. However, statistical mechanical methods can make an important contribution to their interpretation. For example, Lord et al. [45] and Verovsek et al. [46] projected the results of persistent homology onto simpler representations, i.e. lower-dimensional scaffolds or skeletons that provide localized information and make topological features amenable to network techniques. Also, Kahle [47] and Courtney et al. [48] have taken the first steps in constructing minimal topological random null models, in order to provide a notion of which topological features should be considered significant and which should be considered noise.

4 Conclusion

TDA is still very much developing as a branch of data science. It provides a new paradigm, based on algebraic topology, for how we think about data, and has obtained its first successes. However, there are still many challenges to be met to fully exploit its potential.

The most pressing one is the computational scalability of persistent homology, which currently prevents large-scale applications: the classic algorithms for topological feature extraction have memory and time complexity that is polynomial in the number of simplices, but the number of simplices itself can grow exponentially with the number of vertices. While the last five years have witnessed significant advances in parallel algorithms [49–51] and simplicial reduction schemes [52–55], new implementations and algorithmic improvements are paramount.

A second issue is that of the localization of homological features within a given dataset. This is crucial if we want to pinpoint the effect of specific topological features (e.g. a certain disconnectivity pattern in brain regions, or a structural hole among scientific or professional collaborators), and to leverage it beyond classification purposes. Steps in this direction have been taken [46, 56], but more work is required to make this information easily accessible and interpretable.

Tightly related to the previous point is the need to strengthen the link between machine learning and TDA. A few initial works have proposed ways to make persistent homology’s outputs directly amenable to standard machine learning techniques (typically via kernel approaches [57–59]). However, there is still a wide gap to be bridged to obtain a productive integration of TDA’s novel perspective within the full machine-learning framework.

Finally, at the community level, it is also necessary to help practitioners coming from outside the TDA community to discover and adopt these techniques. Efforts in this direction are under way (see for example the introduction by [60] and the development of shared TDA R libraries [61, 62]), but there is still ample space for contributions from the machine learning, mathematical, and complex networks communities.