1 Introduction

For conventional digital signals and images sampled on regular lattices, multiscale basis dictionaries, i.e., wavelet packet dictionaries including wavelet bases, local cosine dictionaries, and their variants (see, e.g., [1, Chap. 4, 7], [2, Chap. 6, 7], [3, Chap. 8]), have a proven track record of success: the JPEG 2000 Image Compression Standard [4, Sec. 15.9]; the Modified Discrete Cosine Transform (MDCT) in MP3 [4, Sec. 16.3]; discriminant feature extraction for signal classification [5,6,7], to name a few. Considering the abundance of data measured on graphs and networks and the increasing importance of analyzing such data (see, e.g., [8,9,10,11,12]), it is quite natural to lift/generalize these dictionaries to the graph setting. To date, our group has developed graph versions of the block/local cosine and wavelet packet dictionaries for analyzing graph signals sampled on nodes. These include the Generalized Haar-Walsh Transform (GHWT) [13], the Hierarchical Graph Laplacian Eigen Transform (HGLET) [14], the Natural Graph Wavelet Packets (NGWPs) [15], and their relatives [16,17,18]; see also [19, 20]. Some of these will be briefly reviewed in later sections.

In this article, we propose their generalization for analyzing data recorded on edges, faces (i.e., triangles), or more generally cells (i.e., polytopes) of a class of special graphs called simplicial complexes (e.g., a triangle mesh of a manifold). The key idea is to use the Hodge Laplacians and their variants for hierarchical partitioning of a set of \(\kappa \)-dimensional simplices in a given simplicial complex, and then build localized basis functions on these partitioned subsets. We demonstrate their usefulness for data representation on both illustrative synthetic examples and real-world simplicial complexes generated from a co-authorship/citation dataset and an ocean current/flow dataset.

1.1 Related work

Graph-based methods for analyzing data have been widely adopted in many domains [21,22,23]. Often, these graphs are fully defined by the data (such as a graph of social media “friends”), but they can also be induced through the persistent homology of generic point clouds [24]. In either case, the vast majority of these analytical techniques deal with signals defined on the vertices (or nodes) of a given graph. More recently, there has been a surge of interest in studying signals defined on edges, triangles, and higher-dimensional substructures within the graph [24,25,26,27,28]. The fundamental tool employed for analyzing these signals, the Hodge Laplacian, has been studied in the context of differential geometry for over half a century but has only recently entered the toolbox of applied mathematics. This rise in popularity is largely due to the adaptation of discrete differential geometry [29] to applications in computer vision [30, 31], statistics [32], topological data analysis [28, 33], and network analysis [34].

One of the key challenges to applying wavelets and similar constructions to vertex-based graph signals is that graphs lack a natural translation operator, which prevents the construction of convolutional operators and traditional Littlewood-Paley theory [19, 35, 36]. This challenge is also present for general \(\kappa \)-dimensional simplices. One method for overcoming this difficulty is to perform convolution solely in the “frequency” domain and define wavelet-like bases entirely in the coefficient space of the Laplacian (or in this case Hodge Laplacian) transform. Following this line of research, there have been several approaches to defining wavelets [37] and convolutional neural networks [38] in which the input signal is transformed into a series of coefficients in the eigenspace of the Hodge Laplacian. Unfortunately, the atoms (or basis vectors) generated by these methods are not always locally supported, and it can be difficult to interpret their role in analyzing a given graph signal.

An alternative path to the creation of wavelet-like dictionaries and transforms is to first develop a hierarchical block decomposition of the domain and then use this to develop multiscale transforms [13, 14, 18]. These techniques rely on recursively computing bipartitions of the domain and then generating localized bases on the subsets of the domain. In this work, we propose a simplex analog to the Fiedler vector [39] to solve a relaxed version of the simplex-normalized-cut problem, which we can apply iteratively to develop a hierarchical bipartition of the \(\kappa \)-dimensional simplices in a simplicial complex. From here, we are able to apply the general scheme of [14] and [13] to develop the Hierarchical Graph Laplacian Eigen Transform and the Generalized Haar-Walsh Transform, respectively, for a given collection of simplices of an arbitrarily high order. As a result, we can also generate orthonormal Haar bases, orthonormal Walsh bases, as well as data-adaptive orthonormal bases using the best-basis selection method [40].

The main challenge in lifting these transforms to the simplicial setting lies in the simplex orientations, which cause the resulting Laplacians generally to contain mixed positive and negative off-diagonal elements. We are no longer guaranteed a non-negative Perron vector [41] to use as a DC component, and so must incorporate the orientation information both to develop a Fiedler vector appropriate for partitioning the \(\kappa \)-dimensional simplices of a complex and to interpret the nature of the resulting partition. Further challenges lie in there being multiple ways to define adjacency between simplices, multiple ways to generalize simplex weights to pairs of adjacent simplices, and multiple ways to balance the “upper” and “lower” parts of the Hodge Laplacian.

1.2 Outline

This article is organized as follows: In Sect. 2 we formally describe simplicial complexes and how their geometry leads to notions of adjacency and orientation. This allows us to define discrete differential operators acting on signals defined on the complex, which in turn are constructed from boundary operators that map between the \(\kappa \) and \(\kappa \pm 1\) degree faces of the complex. In Sect. 3 we use these boundary operators to describe the Hodge Laplacian and discuss several different variants, some analogous to different normalizations of the graph Laplacian and some more novel. In Sect. 4 we show how the eigenvectors of the Hodge Laplacian can be used to solve relaxed-cut-like problems to partition a complex. We also develop hierarchical bipartitions, which decompose a given complex roughly in half at each level until we are left with a division into individual elements. In Sect. 5 we use these bipartitions to develop orthonormal Haar bases. In Sect. 6, we create overcomplete dictionaries based on given bipartitions and, as a consequence, are also able to define a canonical orthonormal Walsh basis. At the end of this section we state two theorems which bound the decay rate of the dictionary coefficients and the approximation power of our dictionaries. In Sect. 7, we present numerical experiments on both illustrative synthetic examples and real-world problems in signal approximation, clustering, and supervised classification. Finally, we conclude this article with Sect. 8, where we discuss potential future work.

We have implemented our multiscale simplicial signal transforms in Julia and Python; the code that builds the corresponding basis dictionaries, and that was used to generate the figures in this article, is available at: https://github.com/UCD4IDS/MultiscaleSimplexSignalTransforms.jl.

2 Simplicial complexes

In this section we review concepts from algebraic topology to formally define simplicial complexes and introduce some notions of how two simplices can be “adjacent.” For a more thorough review, see [24, 26]. Given a vertex set \(V = \{v_1, \ldots , v_n\}\), a \(\kappa \)-simplex \(\sigma \) is a \((\kappa +1)\)-subset of V. A face of \(\sigma \) is a \(\kappa \)-subset of \(\sigma \), and so \(\sigma \) has \(\kappa +1\) faces. A co-face of \(\sigma \) is a \((\kappa +1)\)-simplex of which \(\sigma \) is a face.

Suppose \(\sigma = \{v_{i_1}, \ldots , v_{i_{\kappa +1}}\}\), \(i_1< \cdots < i_{\kappa +1}\), and \(\alpha \subset \sigma \) is one of its faces. Then \(\sigma {\setminus } \alpha \) consists of a single vertex; let \(v_{i_{\ell ^*}}\) denote that vertex, where \(1 \le \ell ^* \le \kappa +1\). Then the natural parity of \(\sigma \) with respect to its face \(\alpha \) is defined as

$$\begin{aligned} {{\,\textrm{nat}\,}}(\sigma , \alpha ):= (-1)^{\ell ^*+1}~~. \end{aligned}$$

When \(\alpha \) is not a face of \(\sigma \), \({{\,\textrm{nat}\,}}(\sigma , \alpha ) = 0\). The natural parity of \(\kappa \)-simplices with respect to their faces generalizes the idea of a directed edge having a head vertex and a tail vertex, and is “natural” because it disallows situations analogous to a directed edge with two heads or two tails.
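To make the definition concrete, the natural parity can be computed directly from the sorted vertex list. Here is a minimal sketch in Python (our own illustration; the function name `nat` and the vertex labels are ours, not from the paper's code):

```python
def nat(sigma, alpha):
    """Natural parity nat(sigma, alpha): +/-1 if alpha is a face of sigma, else 0."""
    s = sorted(sigma)
    if len(alpha) != len(s) - 1 or not set(alpha) <= set(s):
        return 0
    missing = (set(s) - set(alpha)).pop()   # the single vertex in sigma \ alpha
    ell = s.index(missing) + 1              # its 1-based position ell*
    return (-1) ** (ell + 1)

# A directed edge (v1, v2): v2 acts as its "head" face, v1 as its "tail" face.
assert nat((1, 2), (2,)) == 1
assert nat((1, 2), (1,)) == -1
# The three faces of a triangle alternate in parity.
assert [nat((1, 2, 3), f) for f in [(2, 3), (1, 3), (1, 2)]] == [1, -1, 1]
assert nat((1, 2), (3,)) == 0               # not a face
```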

A simplicial complex C is a collection of simplices closed under taking subsets: if \(\sigma \in C\) and \(\alpha \subset \sigma \), then \(\alpha \in C\). In particular, if \(\sigma \in C\), so does each face of \(\sigma \). Let \(\kappa _{\textrm{max}}(C):= \max \left\{ \kappa \, \vert \, \sigma \in C \text { is a } \kappa \text {-simplex}\right\} \), and let \(C_\kappa \) denote the set of \(\kappa \)-simplices in C for each \(\kappa = 0,\ldots , \kappa _{\textrm{max}}\). When \(\kappa > \kappa _{\textrm{max}}\), \(C_\kappa = \emptyset \). We also refer to C as a \(\kappa \)-complex to note that \(\kappa _{\textrm{max}}(C) = \kappa \). Let a \(\kappa \)-region of C refer to any non-empty subset of \(C_\kappa \).

Let C be a simplicial complex, and \(\sigma , \tau \in C_\kappa \) for some \(\kappa >0\). When \(\sigma , \tau \) share a face, they are weakly adjacent, denoted by \(\sigma \sim \tau \), and their shared boundary face is denoted by \({{\,\textrm{bd}\,}}(\sigma , \tau )\). When \(\sigma \sim \tau \), they additionally share a co-face, their hull, denoted by \({{\,\textrm{hl}\,}}(\sigma , \tau )\). If \(\sigma , \tau \in C\), \(\sigma \sim \tau \), and \({{\,\textrm{hl}\,}}(\sigma , \tau ) \in C\), then \(\sigma , \tau \) are strongly adjacent, denoted by \(\sigma \simeq \tau \). If \(\sigma \sim \tau \) but \({{\,\textrm{hl}\,}}(\sigma , \tau ) \notin C\), then \(\sigma , \tau \) are \(\kappa \)-adjacent.
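These adjacency notions can be checked mechanically. The following sketch (ours; the toy complex, one triangle plus an extra edge, and the helper names `bd`, `hull`, `classify` are our illustrations, not the paper's API):

```python
def bd(s, t):
    """Shared boundary face of two kappa-simplices, or None if not weakly adjacent."""
    shared = tuple(sorted(set(s) & set(t)))
    return shared if len(shared) == len(s) - 1 else None

def hull(s, t):
    """The potential common co-face of a weakly adjacent pair."""
    return tuple(sorted(set(s) | set(t)))

def classify(s, t, C):
    """'strong' if the hull lies in the complex, else 'kappa-adjacent'."""
    if bd(s, t) is None:
        return None
    return "strong" if hull(s, t) in C else "kappa-adjacent"

# A toy 2-complex: one filled triangle (1,2,3) plus the extra edge (2,4),
# listed together with all faces so that C is closed under taking subsets.
C = [(1,), (2,), (3,), (4,),
     (1, 2), (1, 3), (2, 3), (2, 4),
     (1, 2, 3)]

assert bd((1, 2), (1, 3)) == (1,)
assert classify((1, 2), (1, 3), C) == "strong"           # hull (1,2,3) is in C
assert classify((1, 2), (2, 4), C) == "kappa-adjacent"   # hull (1,2,4) is not
```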

Fig. 1

In this small 2-complex C, \(e_1 \sim e_4\) because they share the face \(v_2\), and \(e_1 \sim e_2\) because they share the face \(v_1\). Further, \(e_1 \simeq e_2\) because their hull \(t_1 \in C\), but \({{\,\textrm{hl}\,}}(e_1, e_4) \notin C\), so that \(e_1, e_4\) are 1-adjacent. We have \(t_1 \sim t_2\) because they share the face \(e_3\), but as \({{\,\textrm{hl}\,}}(t_1, t_2) \notin C\), \(t_1, t_2\) are 2-adjacent

2.1 Oriented simplicial complexes and boundary operators

An oriented simplex \(\sigma \) further has an orientation \(p_\sigma \in \{\pm 1\}\), which indicates whether its parity with its faces is the same as, or opposite to, its natural parity. When \(p_\sigma = +1\), we say \(\sigma \) is in natural orientation. For example, a directed edge \(e = (v_i,v_j)\) for \(i<j\) is in natural orientation, while if \(i>j\), \(p_e = -1\). An oriented simplicial complex contains at most one orientation for any given simplex.

Let \(X_\kappa \) be the space of real-valued functions on \(C_\kappa \) for each \(\kappa \in \{0, 1, \ldots , \kappa _{\textrm{max}}(C)\}\). In the case of graphs, \(X_0\) consists of functions taking values on vertices, or graph signals. \(X_1\) consists of functions on edges, or edge flows. A function in \(X_1\) is positive when the corresponding flow direction agrees with the edge orientation, and negative when the flow disagrees. \(X_2\) consists of functions on oriented triangles.

Given an oriented simplicial complex C, for each \(\kappa \in \{0, 1, \ldots , \kappa _{\textrm{max}} \}\), the boundary operator is a linear operator \({B_\kappa }: X_{\kappa +1} \rightarrow X_\kappa \), where for \(\sigma \in C_{\kappa +1}\), \(\alpha \in C_\kappa \), the corresponding matrix entries are \([{B_\kappa }]_{\alpha \sigma } = p_\sigma p_\alpha {{\,\textrm{nat}\,}}(\sigma , \alpha )\). Note that \(B_{\kappa _{\text {max}}} = \textrm{O}\), i.e., a zero matrix. Likewise, the coboundary operator for each \(\kappa \in \{0, 1, \ldots , \kappa _{\textrm{max}}\}\) is just \({B_\kappa ^{\scriptstyle {{\textsf{T}}}}}: X_\kappa \rightarrow X_{\kappa +1}\), the adjoint to \({B_\kappa }\). The expression of the entries of \({B_\kappa }\) as the relative orientation between simplex and face suggests that these are a natural way to construct functions taking local signed averages, according to adjacency in the simplicial complex.
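To sketch how these matrix entries arise from parities and orientations (our own illustration; here all simplices are taken in natural orientation, i.e., \(p \equiv +1\), so the orientation factors drop out):

```python
def nat(sigma, alpha):
    """Natural parity nat(sigma, alpha): +/-1 if alpha is a face of sigma, else 0."""
    s = sorted(sigma)
    if len(alpha) != len(s) - 1 or not set(alpha) <= set(s):
        return 0
    return (-1) ** (s.index((set(s) - set(alpha)).pop()) + 2)

def boundary(faces, simplices, p_faces=None, p_simplices=None):
    """[B_k]_{alpha,sigma} = p_sigma * p_alpha * nat(sigma, alpha), as a list of rows."""
    p_faces = p_faces or [1] * len(faces)
    p_simplices = p_simplices or [1] * len(simplices)
    return [[p_simplices[j] * p_faces[i] * nat(s, a)
             for j, s in enumerate(simplices)]
            for i, a in enumerate(faces)]

vertices = [(1,), (2,), (3,)]
edges = [(1, 2), (2, 3), (1, 3)]
B0 = boundary(vertices, edges)
# Each column has one +1 (head) and one -1 (tail), like a signed incidence matrix.
assert [row[0] for row in B0] == [-1, 1, 0]
assert all(sum(col) == 0 for col in zip(*B0))
```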

2.2 Data on simplicial complexes

Signal processing on simplicial complexes arises naturally in settings where data carry richer structure than scalar functions and pairwise relationships alone. In this article, we assume the input data is given on an existing simplicial complex.

A simple directed graph \(G = (V, E)\) can always be represented as an oriented 1-complex \({{\tilde{G}}}\), with each directed edge \(e=(v_i, v_j)\) inserted as a 1-simplex having orientation \(p_e = {{\,\textrm{sign}\,}}(j-i)\). With this convention, natural orientation corresponds to the agreement of the edge direction with the global ordering of the vertices.

3 Hodge Laplacian

The boundary operators just introduced represent discrete differential operators encoding the structure of \(\kappa \)-regions in a simplicial complex, and so can serve as building blocks for a spectral analysis of functions on those regions. For analyzing functions on \(\kappa \)-simplices with \(\kappa >0\), we will construct operators based on the Hodge Laplacian, or \(\kappa \)-Laplacian. As in [30], the combinatorial \(\kappa \)-Laplacian is defined for \(\kappa \)-simplices as

$$\begin{aligned} L_\kappa := B_{\kappa -1}^{\scriptstyle {{\textsf{T}}}}B_{\kappa -1} + B_\kappa B_\kappa ^{\scriptstyle {{\textsf{T}}}}~~. \end{aligned}$$

We refer to \(L^{\vee }_\kappa := B_{\kappa -1}^{\scriptstyle {{\textsf{T}}}}B_{\kappa -1}\) and \(L^\wedge _\kappa := B_\kappa B_\kappa ^{\scriptstyle {{\textsf{T}}}}\) as the lower and upper \(\kappa \)-Laplacians, respectively.
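A minimal numerical sketch (ours, using NumPy; the two-triangle complex is a toy example) assembles \(L_1\) from the boundary matrices and checks the identity \(B_{\kappa -1} B_\kappa = \textrm{O}\), i.e., the boundary of a boundary vanishes:

```python
import numpy as np

def nat(sigma, alpha):
    s = sorted(sigma)
    if len(alpha) != len(s) - 1 or not set(alpha) <= set(s):
        return 0
    return (-1) ** (s.index((set(s) - set(alpha)).pop()) + 2)

def boundary(faces, simplices):
    # natural orientations p = +1 throughout
    return np.array([[nat(s, a) for s in simplices] for a in faces], dtype=float)

# Two triangles sharing the edge (2,3), plus all of their faces.
vertices = [(1,), (2,), (3,), (4,)]
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
triangles = [(1, 2, 3), (2, 3, 4)]

B0, B1 = boundary(vertices, edges), boundary(edges, triangles)
L1 = B0.T @ B0 + B1 @ B1.T          # lower + upper 1-Laplacian

assert np.allclose(B0 @ B1, 0)      # boundary of a boundary vanishes
assert np.allclose(L1, L1.T)
# Off-diagonal entries cancel for strongly adjacent pairs, e.g. (1,2) and (1,3):
assert L1[0, 1] == 0
# ...but survive for 1-adjacent pairs, e.g. (1,2) and (2,4):
assert L1[0, 3] == -1
```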

3.1 Simplex consistency

Let C be an oriented simplicial complex, and \(\sigma \sim \tau \in C_\kappa \), with \(\alpha = {{\,\textrm{bd}\,}}(\sigma , \tau )\). Then we may write \(L_\kappa \) as \({{\,\textrm{diag}\,}}(L_\kappa ) - S_\kappa \), where for \(\kappa >0\), \(S_\kappa \) is the signed adjacency matrix with entries

$$\begin{aligned}{}[S_\kappa ]_{\sigma \tau } = {\left\{ \begin{array}{ll} -p_\sigma p_\tau {{\,\textrm{nat}\,}}(\sigma , \alpha ) {{\,\textrm{nat}\,}}(\tau , \alpha ), &{} \text {if } \sigma , \tau \text { are } \kappa \text {-adjacent}; \\ 0, &{} \mathrm {otherwise.} \end{array}\right. } \end{aligned}$$

When \([S_\kappa ]_{\sigma \tau } > 0\), we say \(\sigma , \tau \) are consistent, and otherwise they are inconsistent. A consistent pair of simplices view their shared boundary face in opposite ways; one as a head face, and the other as a tail face. An inconsistent pair of simplices view their shared boundary face identically. In the case of \(\kappa =1\), two directed edges are consistent when they flow into each other at their boundary vertex, and are inconsistent when they collide at the boundary vertex, either both pointing toward it, or both pointing away. Cases for \(\kappa =1, 2\) are demonstrated in Fig. 2.
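The \(\kappa =1\) picture can be sketched numerically (our own illustration; two edges meeting at \(v_2\)): under natural orientations the flow \(v_1 \rightarrow v_2 \rightarrow v_3\) is consistent, and flipping one orientation makes the pair inconsistent:

```python
import numpy as np

def nat(sigma, alpha):
    s = sorted(sigma)
    if len(alpha) != len(s) - 1 or not set(alpha) <= set(s):
        return 0
    return (-1) ** (s.index((set(s) - set(alpha)).pop()) + 2)

def lower_laplacian(faces, edges, p):
    """L_1^lower = B_0^T B_0 for a 1-complex with edge orientations p."""
    B0 = np.array([[p[j] * nat(e, a) for j, e in enumerate(edges)]
                   for a in faces], dtype=float)
    return B0.T @ B0

faces = [(1,), (2,), (3,)]
edges = [(1, 2), (2, 3)]

L = lower_laplacian(faces, edges, p=[1, 1])   # natural: 1 -> 2, 2 -> 3
S = np.diag(np.diag(L)) - L
assert S[0, 1] == 1                           # consistent: flow through v2

L = lower_laplacian(faces, edges, p=[1, -1])  # flipped: 1 -> 2, 3 -> 2
S = np.diag(np.diag(L)) - L
assert S[0, 1] == -1                          # inconsistent: both point at v2
```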

Fig. 2

Pairs of \(\kappa \)-simplices demonstrating consistency at their boundary face, for \(\kappa =1,2\). The mixed-color pairs are consistent, and the same-color pairs are inconsistent

The combinatorial \(\kappa \)-Laplacian represents signed adjacency between \(\kappa \)-adjacent simplices via their consistency. In particular, this means that \(L_\kappa \) depends only on the orientations of simplices in \(C_\kappa \). Naively, constructing the boundary matrices \(B_{\kappa -1}, B_\kappa \) then requires superfluous sign information: the orientation of each member of both \(C_{\kappa -1}\) and \(C_{\kappa +1}\). This situation exactly mirrors that of the graph Laplacian \(L_0\): in order to construct \(L_0\) for an undirected graph via the product \(B_0 B_0^{\scriptstyle {{\textsf{T}}}}\), one must assign an arbitrary direction to each edge, yet the resulting Laplacian is independent of that choice of directions.

3.2 Weighted and normalized Hodge Laplacians

In order to introduce a weighted simplicial complex, consider the symmetrically normalized graph Laplacian

$$\begin{aligned} L_0^{\textrm{sym}}:= D_0^{-1/2}B_0 D_1 B_0^{\scriptstyle {{\textsf{T}}}}D_0^{-1/2} = \left( D_0^{-1/2} B_0 D_1^{1/2}\right) \left( D_0^{-1/2} B_0 D_1^{1/2}\right) ^{\scriptstyle {{\textsf{T}}}}~~, \end{aligned}$$

where \(D_0 = {{\,\textrm{diag}\,}}(\vert B_0 \vert D_1 {\textbf{1}})\), the diagonal matrix of (weighted) vertex degrees, and \(D_1\) is the diagonal matrix of edge weights. Letting \(D_\kappa \) generally refer to a diagonal matrix containing \(\kappa \)-simplex weights, we proceed as in [28] and define the weighted symmetrically normalized \(\kappa \)-Laplacian as

$$\begin{aligned} L_\kappa ^{\textrm{sym}}:= {\mathfrak {B}}_{\kappa -1}^{\scriptstyle {{\textsf{T}}}}\mathfrak B_{\kappa -1} + {\mathfrak {B}}_{\kappa } {\mathfrak {B}}_{\kappa }^{\scriptstyle {{\textsf{T}}}}~~, \end{aligned}$$

where \({\mathfrak {B}}_\kappa := D_\kappa ^{-1/2} B_\kappa D_{\kappa +1}^{1/2}\). Here \(D_\ell = {{\,\textrm{diag}\,}}(\vert B_\ell \vert D_{\ell +1} {\textbf{1}})\) for \(\ell =\kappa -1, \kappa \), and \(D_{\kappa +1}\) is the diagonal matrix of \((\kappa +1)\)-hull weights.

From \(L_\kappa ^{\textrm{sym}}\) we may define the usual weighted unnormalized, and weighted random-walk normalized \(\kappa \)-Laplacians \(L_\kappa ^{\textrm{wt}}\) and \(L_\kappa ^{\textrm{rw}}\), via the formulas:

$$\begin{aligned} L_\kappa ^{\textrm{wt}}:= D_\kappa ^{1/2} L_\kappa ^{\textrm{sym}} D_\kappa ^{1/2} \quad \textrm{ and } \quad L_\kappa ^{\textrm{rw}}:= D_\kappa ^{-1} L_\kappa ^{\textrm{wt}}~~. \end{aligned}$$
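Continuing with a toy two-triangle complex, the three weighted Laplacians can be assembled as follows (a sketch of ours, with unit hull weights \(D_2 = I\); all variable names are our own):

```python
import numpy as np

def nat(sigma, alpha):
    s = sorted(sigma)
    if len(alpha) != len(s) - 1 or not set(alpha) <= set(s):
        return 0
    return (-1) ** (s.index((set(s) - set(alpha)).pop()) + 2)

def boundary(faces, simplices):
    return np.array([[nat(s, a) for s in simplices] for a in faces], dtype=float)

vertices = [(1,), (2,), (3,), (4,)]
edges = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
triangles = [(1, 2, 3), (2, 3, 4)]
B0, B1 = boundary(vertices, edges), boundary(edges, triangles)

w2 = np.ones(len(triangles))       # hull weights: D_2 = I
w1 = np.abs(B1) @ w2               # edge weights:   D_1 = diag(|B_1| D_2 1)
w0 = np.abs(B0) @ w1               # vertex degrees: D_0 = diag(|B_0| D_1 1)

def dpow(w, a):
    return np.diag(w ** a)

frakB0 = dpow(w0, -0.5) @ B0 @ dpow(w1, 0.5)
frakB1 = dpow(w1, -0.5) @ B1 @ dpow(w2, 0.5)
L_sym = frakB0.T @ frakB0 + frakB1 @ frakB1.T
L_wt = dpow(w1, 0.5) @ L_sym @ dpow(w1, 0.5)
L_rw = dpow(w1, -1.0) @ L_wt

assert list(w1) == [1, 1, 2, 1, 1]             # the shared edge has weight 2
assert np.allclose(L_sym, L_sym.T) and np.allclose(L_wt, L_wt.T)
assert np.min(np.linalg.eigvalsh(L_sym)) > -1e-10   # positive semidefinite
```

Note that \(L_1^{\textrm{rw}} = D_1^{-1/2} L_1^{\textrm{sym}} D_1^{1/2}\) is similar to \(L_1^{\textrm{sym}}\), so its spectrum is real and nonnegative even though it is not symmetric.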

While the off-diagonal entries of the combinatorial \(L_\kappa \) vanish for strongly adjacent pairs \(\sigma \simeq \tau \), those of each of the weighted Laplacians may be nonzero whenever \(\sigma \sim \tau \).

The signed weighted adjacency matrices \(S_\kappa ^{\textrm{sym}}\), \(S_\kappa ^{\textrm{wt}}, S_\kappa ^{\textrm{rw}}\) are defined analogously to \(S_\kappa \), as the negative of the off-diagonal parts of their respective Laplacians. Figure 3 demonstrates \(S_\kappa \), \(S_\kappa ^{\textrm{sym}}\), \(S_\kappa ^{\textrm{wt}}\), and \(S_\kappa ^{\textrm{rw}}\) for a simple complex.

Fig. 3

The complex from Fig. 1 on the left, with natural orientation displayed as directed edges and oriented triangles, together with four signed adjacency matrices, the combinatorial \(S_1\), the weighted symmetrically normalized \(S_1^{\textrm{sym}}\), the weighted unnormalized \(S_1^{\textrm{wt}}\), and the weighted random-walk normalized \(S_1^{\textrm{rw}}\), all with \(D_2 = I\). Notice that the weighted variants may have nonzero entries of various sign even for strongly adjacent simplices, unlike \(S_1\)

4 Cuts, Fiedler vectors, and hierarchical bipartitions

4.1 Fiedler vector

Let C be a simplicial complex, such that \(G = (C_0, C_1)\) is a connected graph. For a given \(\kappa \), let \({\varvec{p}}\) be a vector of orientations over \(C_\kappa \), with each \([ {\varvec{p}}]_\sigma = p_\sigma \in \{ \pm 1\}\), and let \(P = {{\,\textrm{diag}\,}}({\varvec{p}})\). Let \(L_\kappa ^{\textrm{wt}}\), \(\, \tilde{L}_\kappa ^{\textrm{wt}}\) denote the weighted \(\kappa \)-Laplacian of \(C_\kappa \) with natural orientations, and with orientations given by \({\varvec{p}}\), respectively. Let \(\lambda _0 \le \cdots \le \lambda _{n-1}\) be the eigenvalues of \(L_\kappa ^{\textrm{wt}}\) and \(\varvec{\phi }_0, \varvec{\phi }_1, \ldots , \varvec{\phi }_{n-1}\) be the corresponding eigenvectors where \(n= \vert C_\kappa \vert \). Then, let \((\, \tilde{\lambda }_i, \, \tilde{\varvec{\phi }}_i)\) be the eigenpairs for \(\, \tilde{L}_\kappa ^{\textrm{wt}}\). Because \(\, \tilde{L}_\kappa ^{\textrm{wt}} = P L_\kappa ^{\textrm{wt}} P\), \(\, \tilde{\lambda }_i = \lambda _i\) and \(\, \tilde{\varvec{\phi }}_i = P\varvec{\phi }_i\) for \(0\le i < n\).

For \(\kappa =0\), with the vertices of G in natural orientation, we have that \(\lambda _0 = 0\), \(\lambda _1 > 0\), and \(\varvec{\phi }_0 = {\textbf{1}}\), which in particular is non-oscillatory, while \(\varvec{\phi }_1\) acts as a single global oscillation, appropriate for partitioning the vertices of G. Considering \(\, \tilde{L}_0^{\textrm{wt}}\) for nontrivial \({\varvec{p}}\ne \pm {\textbf{1}}\), \(\, \tilde{\varvec{\phi }}_0\) is oscillatory, and \(\, \tilde{\varvec{\phi }}_1\) is no longer appropriate for clustering; this is one reason that oriented 0-simplices are always considered to be in natural orientation.

For \(\kappa >0\) however, it is no longer true that \(\varvec{\phi }_0\) will be non-oscillatory. Let \({\varvec{p}}^*\) be a vector of orientations such that where \([\varvec{\phi }_0]_\sigma \ne 0\), \([ {\varvec{p}}^* ]_\sigma = {{\,\textrm{sign}\,}}([\varvec{\phi }_0]_\sigma )\). Then the corresponding \(\, \tilde{\varvec{\phi }}_0\) is non-oscillatory, and acts as a DC component. This motivates taking \({{\,\textrm{sign}\,}}(\varvec{\phi }_0) \odot \varvec{\phi }_1\) (element-wise) as the Fiedler vector of \(L_\kappa ^{\textrm{wt}}\), with which to partition \(C_\kappa \).
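This sign-fixing recipe is short in code. A sketch (ours), using the precomputed combinatorial \(L_1\) of a toy complex of two triangles sharing an edge in place of \(L_\kappa ^{\textrm{wt}}\):

```python
import numpy as np

# Combinatorial L_1 for two triangles sharing an edge (entries precomputed;
# the only 1-adjacent pairs are edges 0,3 and edges 1,4).
L1 = np.diag([3., 3., 4., 3., 3.])
L1[0, 3] = L1[3, 0] = L1[1, 4] = L1[4, 1] = -1.

lam, Phi = np.linalg.eigh(L1)              # eigenpairs, nondecreasing eigenvalues
phi0, phi1 = Phi[:, 0], Phi[:, 1]
p = np.where(phi0 >= 0, 1.0, -1.0)         # orientations p* matching sign(phi_0)
fiedler = p * phi1                         # sign(phi_0) .* phi_1, elementwise

# fiedler = P phi_1 is exactly the phi_1-tilde of the reoriented Laplacian P L P:
P = np.diag(p)
assert np.allclose(P @ L1 @ P @ fiedler, lam[1] * fiedler)
assert abs(np.linalg.norm(fiedler) - 1) < 1e-12
```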

We will aim to bipartition \(\kappa \)-regions by following a standard strategy in spectral clustering: minimizing a relaxation of a combinatorial cut function over possible partitions. Just as a graph cut is typically defined as the volume of edge weight crossing a partition of the vertices, we can define the consistency cut of \(C_\kappa \) into subregions A, B as

$$\begin{aligned} {{\,\textrm{Ccut}\,}}(A, B):= \sum _{\begin{array}{c} \sigma \in A, \tau \in B\\ \sigma \sim \tau \end{array}}[S_\kappa ^{\textrm{wt}}]_{\sigma \tau }~~. \end{aligned}$$

Because of the signs introduced by consistency, we consider \(S_\kappa ^{\textrm{wt}}\) as the signed, weighted adjacency matrix of a signed graph over \(C_\kappa \), and so can utilize the framework of signed Laplacians [42]. Let \([S_\kappa ^+]_{\sigma \tau }:= \max (0, [S_\kappa ^{\textrm{wt}}]_{\sigma \tau })\) and \([S_\kappa ^-]_{\sigma \tau }:= \max (0, -[S_\kappa ^{\textrm{wt}}]_{\sigma \tau })\), i.e., the positive and negative parts of \(S_\kappa ^{\textrm{wt}}\), supported on the consistent and inconsistent pairs, respectively, and let \({{\,\textrm{Ccut}\,}}^\pm \) denote the consistency cut computed with \(S_\kappa ^\pm \) in place of \(S_\kappa ^{\textrm{wt}}\). Then, we can define the consistency volume \({{\,\textrm{Cvol}\,}}^\pm (A):= {{\,\textrm{Ccut}\,}}^\pm (A, A)\) and the signed \(\kappa \)-cut

$$\begin{aligned} \kappa \text {Cut}(A, B):= 2 {{\,\textrm{Ccut}\,}}^+(A, B) + {{\,\textrm{Cvol}\,}}^-(A) + {{\,\textrm{Cvol}\,}}^-(B)~~. \end{aligned}$$

In the \(\kappa =0\) case, with all vertices in natural orientation, \(S_0^{\textrm{wt}}\) is just the usual adjacency matrix, and so \(S_0^- = {\textbf{0}}\); hence \(\kappa \text {Cut}= 2{{\,\textrm{Ccut}\,}}\), yielding the traditional cut objective. For \(\kappa > 0\), \(\kappa \text {Cut}\) increases with the number of consistent pairs of \(\kappa \)-adjacent simplices across the partition, and with the number of inconsistent pairs within each \(\kappa \)-region. Equivalently, minimizing \(\kappa \text {Cut}\) requires maximizing consistent pairs within each \(\kappa \)-region, and maximizing inconsistent pairs across the partition.
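As a small worked example (ours), we can evaluate \(\kappa \text {Cut}\) on a toy signed adjacency matrix with two consistent pairs: splitting both pairs across the partition is penalized, while keeping them together costs nothing. The helper names are our own:

```python
import numpy as np

# Toy signed adjacency over 5 simplices: the only kappa-adjacent pairs are
# (0,3) and (1,4), both consistent (entries +1); no inconsistent pairs.
S = np.zeros((5, 5))
S[0, 3] = S[3, 0] = S[1, 4] = S[4, 1] = 1.

Sp, Sm = np.clip(S, 0, None), np.clip(-S, 0, None)

def ccut(M, A, B):
    return M[np.ix_(A, B)].sum()

def kcut(A, B):
    return 2 * ccut(Sp, A, B) + ccut(Sm, A, A) + ccut(Sm, B, B)

assert kcut([0, 1], [2, 3, 4]) == 4   # both consistent pairs are split apart
assert kcut([0, 3], [1, 2, 4]) == 0   # both consistent pairs stay together
```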

Let \({{\overline{L}}}_\kappa \) be the signed Laplacian with signed adjacency \(S_\kappa ^{\textrm{wt}}\). Let A be a \(\kappa \)-region, \({\varvec{r}}_A:= {\textbf{1}}_{A} - {\textbf{1}}_{C_\kappa {\setminus } A}\), and define \(R_A(L):= {\varvec{r}}_A^{\scriptstyle {{\textsf{T}}}}L {\varvec{r}}_A\). Then because \({{\overline{L}}}_\kappa \) differs from \(L_\kappa ^{\textrm{wt}}\) only on the diagonal, \(R_A({{\overline{L}}}_\kappa )\) differs from \(R_A(L_\kappa ^{\textrm{wt}})\) by a constant independent of A. From [42], we know that \(R_A({{\overline{L}}}_\kappa ) \propto \kappa \text {Cut}(A, C_\kappa \setminus A)\). Hence, \(\min _{A\subset C_\kappa }R_A(L_\kappa ^{\textrm{wt}}) = \min _{A \subset C_\kappa }\kappa \text {Cut}(A, C_\kappa {\setminus } A)\), and we obtain \(\varvec{\phi }_0\) as a relaxed solution to \(\kappa \)-cut minimization.

Now, notice that if the orientations of \(C_\kappa \) were changed according to some \({\varvec{p}}\), this would be equivalent to a different choice of A; namely, if \([ {\varvec{p}}]_\sigma = -1\), then \(\sigma \) moves to the other side of the partition, either into or out of A. As all orientations are available to us, this includes one for which \(\, \tilde{\varvec{\phi }}_0\) is non-oscillatory, so that its sign does not partition \(C_\kappa \). We then instead take \(\, \tilde{\varvec{\phi }}_1\) as our relaxed solution, which we may compute via \({{\,\textrm{sign}\,}}(\varvec{\phi }_0) \odot \varvec{\phi }_1\).

An improved cut objective is the signed Ratio Cut, which encourages more balanced partitions:

$$\begin{aligned} {{\,\textrm{SignedRatioCut}\,}}(A):= \left( \frac{1}{\vert A \vert } + \frac{1}{\vert C_\kappa \setminus A\vert }\right) \kappa \text {Cut}(A, C_\kappa \setminus A)~~. \end{aligned}$$

From [42], we know that with \({\varvec{r}}_A\) above scaled by a factor of \(c_A:= \sqrt{\vert A \vert / \vert C_\kappa {\setminus } A \vert }\), the analogous result holds: the eigenvectors of \({{\overline{L}}}_\kappa \) yield a relaxed solution to \(\min _{A \subset C_\kappa }{{\,\textrm{SignedRatioCut}\,}}(A)\). However, the new dependence on A means the resulting objective is slightly different for \(L_\kappa \), so the relaxation is only approximate.

Finally, the signed Normalized Cut balances the partitions by degree rather than simplex count:

$$\begin{aligned} {{\,\textrm{SignedNormalizedCut}\,}}(A):= \left( \frac{1}{{{\,\textrm{Cvol}\,}}(A)} + \frac{1}{{{\,\textrm{Cvol}\,}}( C_\kappa \setminus A)}\right) \kappa \text {Cut}(A, C_\kappa \setminus A). \end{aligned}$$

Here, the eigenvectors of \({{\,\textrm{diag}\,}}({{\overline{L}}}_\kappa )^{-1} {{\overline{L}}}_\kappa \) yield a relaxed solution to \(\min _{A\subset C_\kappa } {{\,\textrm{SignedNormalizedCut}\,}}(A)\), and an approximate relaxed solution is given by the eigenvectors of \(L_\kappa ^{\textrm{rw}}\). In our numerical experiments, we use the random-walk \(\kappa \)-Laplacian for bipartitioning simplicial complexes, and obtain its eigenvectors from those of \(L_\kappa ^{\textrm{sym}}\).

4.2 Hierarchical bipartitions

The foundation upon which our multiscale transforms on the \(\kappa \)-simplices \(C_\kappa \) of a given simplicial complex C are constructed is a hierarchical bipartition tree (also known as a binary partition tree) of \(C_\kappa \), a set of tree-structured \(\kappa \)-subregions of \(C_\kappa \) constructed by recursively bipartitioning \(C_\kappa \). This bipartitioning operation ideally splits each \(\kappa \)-subregion into two smaller \(\kappa \)-subregions that are roughly equal in size while keeping tightly-connected \(\kappa \)-simplices grouped together. More specifically, let \(C^j_k\) denote the \(k^{th}\) \(\kappa \)-subregion on level j of the binary partition tree of \(C_\kappa \) and \(n^j_k:= \left| C^j_k \right| \), where \(j, k \in {\mathbb {Z}}_{\ge 0}\). Note that \(C^0_0 = C_\kappa \) and \(n^0_0 = n\), i.e., level \(j=0\) represents the root node of this tree. Then the two children of \(C^j_k\) in the tree, \(C^{j+1}_{k'}\) and \(C^{j+1}_{k'+1}\), are obtained by partitioning \(C^j_k\) using the Fiedler vector of \(L^{\textrm{rw}}_\kappa (C^j_k)\). This partitioning is performed recursively until each subregion corresponding to a leaf contains only a single simplex. Note that \(k' = 2k\) if the resulting binary partition tree is a perfect binary tree. Other (non-spectral) partitioning methods could also be used to form the binary partition tree, but in this article, we stick with spectral clustering using the Fiedler vectors. For more details on hierarchical partitioning (specifically for the \(\kappa = 0\) case), see [43, Chap. 3] and [18]. Figure 4 demonstrates such a hierarchical bipartition tree for a simple 2-complex consisting of triangles.
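The recursion can be sketched as follows (our own illustration; for brevity we use a precomputed combinatorial \(L_1\) in place of \(L^{\textrm{rw}}_\kappa \), and fall back to an arbitrary halving when the sign split is degenerate):

```python
import numpy as np

def bipartition_tree(L, region=None):
    """Recursively bipartition a kappa-region by the sign of its Fiedler vector."""
    region = list(range(L.shape[0])) if region is None else region
    if len(region) == 1:
        return region                                  # leaf: a single simplex
    sub = L[np.ix_(region, region)]
    lam, Phi = np.linalg.eigh(sub)
    fiedler = np.where(Phi[:, 0] >= 0, 1.0, -1.0) * Phi[:, 1]
    left = [r for r, f in zip(region, fiedler) if f >= 0]
    right = [r for r, f in zip(region, fiedler) if f < 0]
    if not left or not right:                          # degenerate: halve arbitrarily
        mid = len(region) // 2
        left, right = region[:mid], region[mid:]
    return [bipartition_tree(L, left), bipartition_tree(L, right)]

def leaves(tree):
    return tree if isinstance(tree[0], int) else leaves(tree[0]) + leaves(tree[1])

# Combinatorial L_1 for two triangles sharing an edge (entries precomputed).
L1 = np.diag([3., 3., 4., 3., 3.])
L1[0, 3] = L1[3, 0] = L1[1, 4] = L1[4, 1] = -1.

tree = bipartition_tree(L1)
assert sorted(leaves(tree)) == [0, 1, 2, 3, 4]   # every simplex ends in its own leaf
```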

Fig. 4

One possible hierarchical bipartitioning of a simple 2-complex, from \(j=0\) with no partition on the left, to \(j=5\) on the right, where each of the 21 triangles forms its own subregion. Colors indicate distinct subregions. We have highlighted the subregions that contain more than one element for \(j=4\)

5 Orthonormal \(\kappa \)-Haar bases

The classical Haar basis [44] was introduced in 1909 as a piecewise-constant compactly-supported multiscale orthonormal basis (ONB) for square-integrable functions but has since been recognized as a wavelet family and adapted to many domains. In one dimension, the family of Haar wavelets on the interval [0, 1] can be generated by the following mother and scaling (or father) functions:

$$\begin{aligned} \psi (x)= {\left\{ \begin{array}{ll} 1, &{} 0 \le x< \frac{1}{2}; \\ -1, &{} \frac{1}{2} \le x< 1; \\ 0, &{} \mathrm {otherwise.} \end{array}\right. } \qquad \qquad \qquad \quad \phi (x)= {\left\{ \begin{array}{ll} 1, &{} 0 \le x < 1; \\ 0, &{} \mathrm {otherwise.} \end{array}\right. } \end{aligned}$$

Unfortunately, these definitions do not generalize to non-homogeneous domains due to the lack of appropriate translation and dilation operators [36]. Instead, several methods have been proposed to generate similar bases and overcomplete dictionaries applicable to more abstract domains such as graphs and discretized manifolds [13, 17, 18]. Here, we describe a method to compute similar piecewise-constant, locally supported bases for spaces of functions on \(\kappa \)-simplices, which we call the (orthonormal) \(\kappa \)-Haar bases.

Rather than basing our construction on some kind of translation or transportation scheme, we instead employ the hierarchical bipartition discussed in Sect. 4.2 to divide the domain, i.e., the \(\kappa \)-simplices \(C_\kappa \) of a given simplicial complex C, into appropriate locally-supported \(\kappa \)-regions. For each \(\kappa \)-region in the bipartition tree, if that region has two children in the tree, then we create a vector that is positive on one child, negative on the other, and zero elsewhere. To avoid sign ambiguity, we dictate that the positive portion is on the child whose region index is smaller. See Algorithm 1 for the details.

Several remarks on this basis are in order. First, since the division is not symmetrically dyadic, we need to compute the scaling factor for each region separately. For each basis vector \(\varvec{\xi }\) except the scaling vector, we break it into positive and negative parts \(\varvec{\xi }^+\) and \(\varvec{\xi }^-\) and ensure that \(\sum _{i} ([\varvec{\xi }^+]_i + [\varvec{\xi }^-]_i) = 0\) and \(\Vert \varvec{\xi }\Vert =1\). If the members of a \(\kappa \)-region are weighted, then this sum and norm can be computed with respect to those weights. Finally, we note that different weightings of the Hodge Laplacian may give rise to different hierarchical bipartition schemes, which correspond to bases with different supports. Figure 5 demonstrates a 2-Haar basis based on the partition shown in Fig. 4.
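The construction can be sketched as follows (our own illustration of the unweighted case; the bipartition tree is encoded as nested lists of indices, an encoding of our choosing, and the scaling factors are chosen so each vector is zero-sum and unit-norm):

```python
import numpy as np

def region(tree):
    """All simplex indices under a subtree (leaves are single-index lists)."""
    return tree if isinstance(tree[0], int) else region(tree[0]) + region(tree[1])

def haar_basis(tree, n):
    """Unweighted kappa-Haar basis: one scaling vector, plus one vector per split."""
    vecs = [np.full(n, 1 / np.sqrt(n))]       # global scaling (father) vector
    stack = [tree]
    while stack:
        t = stack.pop()
        if isinstance(t[0], int):
            continue                           # leaf: nothing to add
        A, B = region(t[0]), region(t[1])      # positive part on the first child
        v = np.zeros(n)
        v[A] = np.sqrt(len(B) / (len(A) * (len(A) + len(B))))
        v[B] = -np.sqrt(len(A) / (len(B) * (len(A) + len(B))))
        vecs.append(v)                         # zero-sum and unit-norm by design
        stack += [t[0], t[1]]
    return np.column_stack(vecs)

# A bipartition of 5 simplices: {0..4} -> {0,1} | {2,3,4} -> singletons.
tree = [[[0], [1]], [[[2], [3]], [4]]]
H = haar_basis(tree, 5)
assert H.shape == (5, 5)
assert np.allclose(H.T @ H, np.eye(5))         # an orthonormal basis
assert np.allclose(H[:, 1:].sum(axis=0), 0)    # every non-scaling vector is zero-sum
```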

Fig. 5

The 2-Haar basis vectors for the bipartition shown in Fig. 4. The yellow, dark green, and violet regions in each vector indicate its positive, zero, and negative components, respectively

Algorithm 1

6 Overcomplete dictionaries

In this section, we introduce two overcomplete dictionaries for analyzing real-valued functions defined on \(\kappa \)-simplices in a given simplicial complex: the \(\kappa \)-Hierarchical Graph Laplacian Eigen Transform (\(\kappa \)-HGLET), based on the Hierarchical Graph Laplacian Eigen Transform (HGLET) [14] and the \(\kappa \)-Generalized Haar-Walsh Transform (\(\kappa \)-GHWT), based on the Generalized Haar-Walsh Transform (GHWT) [13] for graph signals.

6.1 \(\kappa \)-Hierarchical Graph Laplacian Eigen Transform (\(\kappa \)-HGLET)

The first overcomplete transform we describe can be viewed as a generalization of the Hierarchical Block Discrete Cosine Transform (HBDCT). The classical HBDCT is generated by creating a hierarchical bipartition of the signal domain and computing the DCT of the local signal supported on each subdomain. We note that a specific version of the HBDCT (i.e., a homogeneous split of an input image into a set of blocks of size \(8 \times 8\) pixels) has been used in the JPEG image compression standard [45]. This process was generalized to the graph case in [14] as the Hierarchical Graph Laplacian Eigen Transform (HGLET), on which we base our algorithm and notation. In turn, our \(\kappa \)-HGLET is a generalization of the HGLET for \(\kappa \)-simplices in a given simplicial complex. We organize this dictionary by grouping the elements into \(j_{\textrm{max}}+1\) orthonormal matrices \(\left\{ \Phi ^j\right\} _{j=0}^{j_{\textrm{max}}}\), where \(\Phi ^j \in {\mathbb {R}}^{n \times n}\) represents the orthonormal basis formed from the jth level of the bipartition. More specifically, let \(\{\varvec{\phi }^j_{k,l} \}\) be the basis vectors in the \(\kappa \)-HGLET, where j denotes the level of the partition (with \(j=0\) being the root), k indicates the partition within the level, and l indexes the elements within each partition in increasing frequency.

To compute the transform, we first compute the complete set of eigenvectors \(\{\varvec{\phi }^0_{0,l} \}_{l=0:n-1}\) of the Hodge Laplacian of the entire set of \(\kappa \)-simplices \(C_\kappa \) of a given simplicial complex C and order them by nondecreasing eigenvalues. We then partition \(C_\kappa \) into two disjoint \(\kappa \)-regions \(C^1_0\) and \(C^1_1\) as described in Sect. 4, and compute the complete set of eigenvectors of the Hodge Laplacian on each of \(C^1_0\) and \(C^1_1\). We again order each set by nondecreasing frequency (i.e., eigenvalue) and label these \(\{\varvec{\phi }^1_{0,l} \}_{l=0:n^1_0-1}\) and \(\{\varvec{\phi }^1_{1,l} \}_{l=0:n^1_1-1}\). Note that \(n^1_0+n^1_1=n^0_0=n\), and that all of the elements in \(\{\varvec{\phi }^1_{0,l} \}\) are orthogonal to those in \(\{\varvec{\phi }^1_{1,l} \}\) since their supports are disjoint. Hence, the set \(\{\varvec{\phi }^1_{0,l} \}_{l=0:n^1_0-1} \cup \{\varvec{\phi }^1_{1,l} \}_{l=0:n^1_1-1}\) forms an orthonormal basis for vectors on \(C_\kappa \) (after extending these vectors beyond their support by zeros). From here, we apply this process recursively, generating an orthonormal basis for each level in the given hierarchical bipartition tree. This process is detailed in Algorithm 2.
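A minimal sketch of this recursion, assuming the Hodge Laplacian is supplied as a dense symmetric matrix and the bipartition as a list of levels, each a list of index arrays partitioning the simplices (the function name and data layout are our own illustrative choices):

```python
import numpy as np

def hglet_level_bases(L, levels):
    """For each level of the bipartition, eigendecompose the Hodge Laplacian
    restricted to each region and zero-extend the eigenvectors to length n.
    Returns one orthonormal n x n matrix per level."""
    n = L.shape[0]
    bases = []
    for regions in levels:
        Phi = np.zeros((n, n))
        col = 0
        for idx in regions:
            Lk = L[np.ix_(idx, idx)]       # restricted Hodge Laplacian
            _, vecs = np.linalg.eigh(Lk)   # eigenvalues in nondecreasing order
            Phi[np.ix_(idx, np.arange(col, col + len(idx)))] = vecs
            col += len(idx)
        bases.append(Phi)
    return bases
```

Because the supports of different regions are disjoint, each returned matrix is orthonormal, matching the construction above.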

If the hierarchical bipartition tree terminates with every region containing only a single \(\kappa \)-simplex, then the final level is simply the standard basis of \({\mathbb {R}}^n\). Each level of the dictionary contains an ONB whose vectors have supports roughly half the size of those in the previous level. There are roughly \((1.5)^n\) possible ONBs formed by selecting different covering sets of regions from the hierarchical bipartition tree; see, e.g., [18, 46] for more about the number of possible ONBs in such a hierarchical bipartition tree. Finally, we note that the computational cost of generating the entire dictionary is \({\mathcal {O}}(n^3)\) and that any valid hierarchical bipartition tree can be used to create a similar dictionary. Figure 6 shows the 2-HGLET constructed on the 2-complex and bipartition shown in Fig. 4.

Fig. 6

2-HGLET dictionary on a 2-complex. Each row represents a different level j, and the subregions within each level are shown with the boxes. Here, the color scale is consistent across each row to better visualize the smoothness of the elements. Note that we can generate an ONB by selecting any subset of boxes such that the union of those boxes contains exactly one element from each column

Algorithm 2

6.2 \(\kappa \)-Generalized Haar-Walsh Transform (\(\kappa \)-GHWT)

The second transform we present here is based on the Generalized Haar-Walsh Transform (GHWT) [13], which can itself be viewed as a generalization of the Walsh-Hadamard transform. This basis is formed by first generating a hierarchical bipartition tree of \(C_\kappa \). We then work in a bottom-up manner, beginning with the finest level in which each region contains only a single element. We call these functions scaling vectors and label them \(\{ \varvec{\psi }^{j_{\textrm{max}}}_{k,0} \}_{k=0:n-1}\). For the next level, we first assign a constant scaling vector supported on each region. Then, for each region that contains two children in the bipartition tree, we form a Haar-like basis element by subtracting the scaling vector associated with the higher-indexed child from that of the lower-indexed child. This procedure forms an ONB \(\{ \varvec{\psi }^{j_{\textrm{max}}-1}_{k,l} \}_{k=0:K^{j_{\textrm{max}}-1}-1, l=0:l(k)-1}\) (where \(K^{j_{\textrm{max}}-1}\) is the number of \(\kappa \)-subregions at level \(j_{\textrm{max}}-1\) and \(l(k) = 1\) or 2 depending on the subregion k) whose vectors have supports of size at most 2. For the next level, we begin by computing the scaling and Haar-like vectors as before. Next, for any region that contains three or more elements, we also compute Walsh-like vectors by adding and subtracting the Haar-like vectors of the children's regions. From here, we form the rest of the dictionary recursively. A full description of this algorithm (for the \(\kappa =0\) case) is given in [13], and we present a generalized version in Algorithm 3. For ease of notation, we present the algorithm for an unnormalized dictionary. A normalized dictionary is preferable in many applications, but the choice of norm can be problem dependent; normalization can be done by simply looping through the elements once the unnormalized dictionary has been created.
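The bottom-up merge at the heart of this construction can be sketched as follows. This is a simplified version with the usual \(1/\sqrt{2}\) normalization so that each level stays orthonormal (the paper's Algorithm 3 keeps the dictionary unnormalized and tracks tags more carefully); `merge_regions` and the column layout are our own illustrative choices.

```python
import numpy as np

def merge_regions(V0, V1):
    """Combine the basis vectors of two child regions (columns of V0 and V1,
    zero-extended to a common length and with disjoint supports) into the
    parent's vectors: each matching pair yields a sum (lower 'sequency') and
    a difference (higher 'sequency') vector; unpaired vectors are copied."""
    m0, m1 = V0.shape[1], V1.shape[1]
    out = []
    for l in range(max(m0, m1)):
        if l < m0 and l < m1:
            out.append((V0[:, l] + V1[:, l]) / np.sqrt(2))
            out.append((V0[:, l] - V1[:, l]) / np.sqrt(2))
        elif l < m0:
            out.append(V0[:, l])
        else:
            out.append(V1[:, l])
    return np.column_stack(out)
```

Starting from the standard basis at the finest level and applying this merge up the bipartition tree reproduces the scaling, Haar-like, and Walsh-like vectors level by level.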

Figure 7 displays the 2-GHWT dictionary on the same 2-complex used in Figs. 5 and 6. We make several observations about this dictionary. First, like the \(\kappa \)-HGLET, each level of the dictionary forms an ONB, and the basis vectors at each level have supports roughly half the size of those in the previous level. These basis vectors also have the same supports as the corresponding \(\kappa \)-HGLET basis vectors (that is, \({{\,\textrm{supp}\,}}(\varvec{\phi }^j_{k,l}) = {{\,\textrm{supp}\,}}(\varvec{\psi }^j_{k,l})\) for all j, k, l). However, the computational cost of computing the \(\kappa \)-GHWT is only \({\mathcal {O}}(n \log n)\) compared to the \({\mathcal {O}}(n^3)\) of the \(\kappa \)-HGLET.

Fig. 7

Coarse-to-Fine (C2F) 2-GHWT dictionary. The yellow, dark green, and violet regions in each vector indicate its positive, zero, and negative components, respectively. Each row represents a different level j and the subregions within each level are shown with the boxes. Similarly to the 2-HGLET dictionary in Fig. 6, we can generate an ONB by selecting any subset of boxes so that the union of those boxes contains exactly one element from each column

Finally, we note that at the coarsest level \((j=0)\) the \(\kappa \)-GHWT dictionary contains globally-supported piecewise-constant basis vectors, which are ordered by increasing oscillation (or “sequency”). This forms an ONB analogous to the classical Walsh Basis. This allows us to define an associated Walsh transform and conduct Walsh analysis on signals defined on simplicial complexes. Although not the primary focus of this article, we conduct some numerical experiments using the Walsh bases explicitly in Sect. 7.

Algorithm 3

6.3 Organizing the dictionaries

For many downstream applications, it is important to organize the order of these bases. In general, the \(\kappa \)-HGLET dictionary is naturally ordered in a Coarse-to-Fine (C2F) fashion: in each region, the basis vectors are ordered by frequency (i.e., eigenvalue). Similarly, the \(\kappa \)-GHWT dictionary is also naturally ordered in a C2F fashion, with increasing "sequency" within each subregion. Another useful way to order the \(\kappa \)-GHWT is the Fine-to-Coarse (F2C) ordering, which approximates "sequency"-domain partitioning. See, e.g., Fig. 8, which shows the F2C 2-GHWT dictionary on the same 2-complex used in Fig. 7. We also note that the F2C ordering is not possible for the \(\kappa \)-HGLET dictionary because the span of the basis vectors on some parent subregions and that of their children subregions combined are not equivalent; see, e.g., [43, Eq. (5.6)] for the details. Other relabeling schemes, such as those proposed in [17, 18], may also be useful but are outside the scope of this article and will be explored further in our future work.

Fig. 8

Fine-to-Coarse (F2C) 2-GHWT dictionary. Note that this dictionary is not generated by simply reversing the row indices of the C2F dictionary, but by also arranging each level (row) by "sequency." That is, each row is first sorted by tag l (shown by the black boxes) and then, within each tag, by region index k. The vectors enclosed by the red boxes form the Haar basis for this 2-complex while the vectors in the bottom row form the Walsh basis

6.4 Basis and frame selection

Once we have established these arrangements of basis vectors, we can efficiently apply the best-basis algorithm [40] to select an ONB that is optimal for the task at hand for a given input signal or class of input signals; see also our previous work on applying the best-basis algorithm in the graph setting [13,14,15, 17,18,19,20]. Given some cost function \({\mathcal {F}}\) and signal \({{\varvec{f}}}\), we traverse the bipartition tree and select the basis that minimizes \({\mathcal {F}}\) restricted to each region. For the C2F dictionary, we initialize the best basis as the finest \((j=j_{\textrm{max}})\) level of the GHWT dictionary. We then proceed upward one level at a time, compute the cost of each subregion at that level, and compare it to the cost of the union of its children subregions. If the former cost is lower, the basis is updated with the current subregion's basis vectors; otherwise, the children subregions (and their basis vectors) are propagated to the current level. This algorithm yields the C2F best basis. The F2C best-basis search proceeds similarly, i.e., we begin with the globally-supported basis \((j=0)\) at the bottom of the rearranged tree and proceed in the same bottom-up direction. As for the HGLET dictionary, it admits only a C2F best basis, as we discussed earlier.
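The bottom-up cost comparison just described can be sketched in an equivalent recursive form, assuming each region's own coefficient-block cost has been precomputed (the dict-based tree encoding is our assumption, not the paper's):

```python
def best_basis(costs, children, root=0):
    """Best-basis selection on a bipartition tree. costs[v] is the cost F of
    region v's own basis block; children[v] is a (child, child) pair or None
    for a leaf. Returns the set of regions whose blocks form the selected
    basis, and the total cost."""
    kids = children[root]
    if kids is None:                  # finest level: keep the leaf block
        return {root}, costs[root]
    picked, total = set(), 0.0
    for c in kids:
        p, t = best_basis(costs, children, c)
        picked |= p
        total += t
    if costs[root] <= total:          # parent block is at least as cheap
        return {root}, costs[root]
    return picked, total              # otherwise propagate the children
```

Because each selected set of regions covers the domain exactly once, the returned blocks always assemble into a complete ONB.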

In some contexts, it is not necessary to generate a complete ONB, but rather some sparse set of vectors in the dictionary (also known as atoms) that most accurately approximates a given signal or class of signals. In this case, we can directly apply the orthogonal matching pursuit of [47] to find the best m-dimensional orthogonal subframe (\(m \le n\)) selected from the dictionary. Additionally, for some downstream tasks, such as sparse approximation or sparse feature selection, generating orthogonal sets of atoms is not critical. In these cases, we can employ a greedy algorithm to generate efficient approximations. This algorithm simply selects the atom in the dictionary with the largest coefficient, removes it, computes the transform of the residual, and proceeds in this manner. This algorithm is quite expensive since it needs to recompute the coefficients after each selection. Therefore, it is only suited for tasks in which the number of elements is small, or where we only need to compute a few features. These algorithms are studied extensively in the subsequent section.
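The greedy selection loop can be sketched as follows, assuming the dictionary is given as a matrix whose unit-norm columns are the atoms (`greedy_select` is our illustrative name):

```python
import numpy as np

def greedy_select(D, f, m):
    """Repeatedly pick the remaining atom (column of D, assumed unit-norm)
    with the largest coefficient against the current residual, remove it
    from the pool, subtract its contribution, and recompute. Returns the
    chosen column indices and their coefficients."""
    residual = f.astype(float).copy()
    available = list(range(D.shape[1]))
    picks, coeffs = [], []
    for _ in range(m):
        c = D[:, available].T @ residual      # recompute all coefficients
        i = int(np.argmax(np.abs(c)))
        j = available.pop(i)
        picks.append(j)
        coeffs.append(c[i])
        residual = residual - c[i] * D[:, j]  # subtract the selected atom
    return picks, coeffs
```

The full recomputation of coefficients in every iteration is what makes this approach expensive, as noted above.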

6.5 Approximation theory

Signal approximation has been one of the most important applications of classical and graph wavelet bases. Theoretical justification of their efficacy often relies on developing approximation bounds and decay rates of the wavelet coefficients for various classes of signals under various norms (or seminorms). A number of these results have been specifically developed for graphs equipped with a hierarchical tree [48,49,50,51]. In general, these results are based on first defining a distance function between vertices of a graph, then defining a Hölder seminorm based on this distance function, and finally computing the decay rates. By defining the distance between any two \(\kappa \)-simplices as the number of elements in the smallest partition in the tree that contains both elements, we can generalize these results. Formally, for singleton \(\kappa \)-elements \(\sigma \) and \(\tau \) of \(C_\kappa \) and signal \({\varvec{f}}\), we define a distance function and the associated Hölder seminorm as:

$$\begin{aligned} d(\sigma , \tau ):= \min \left\{ n^j_k \, \Big \vert \, \sigma , \tau \in C^j_k \right\} , \quad C_H({\varvec{f}}):= \sup _{\sigma \ne \tau } \frac{ \left| [{\varvec{f}}]_\sigma -[{\varvec{f}}]_\tau \right| }{d(\sigma , \tau )^\alpha } \end{aligned}$$

where \(\alpha \) is a constant in (0, 1]. With these definitions, the dictionary coefficient decay and approximation results of [50, 51] for the GHWT and HGLET can be applied to the \(\kappa \)-GHWT and \(\kappa \)-HGLET bases. For the sake of space, we only state these theorems and do not reproduce the proofs since the adaptation is straightforward after substituting the above definitions.
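As a concrete illustration of these definitions, the tree-based distance and the Hölder seminorm can be computed by brute force directly from the list of all tree regions (`holder_seminorm` is our name; this is not an algorithm from the paper):

```python
import numpy as np

def holder_seminorm(f, regions, alpha):
    """regions is the list of all tree regions C^j_k as index lists.
    d(sigma, tau) = size of the smallest region containing both; the
    seminorm is the sup of |f_sigma - f_tau| / d(sigma, tau)^alpha."""
    n = len(f)
    d = np.full((n, n), np.inf)
    for idx in regions:
        m = len(idx)
        for s in idx:
            for t in idx:
                if s != t:
                    d[s, t] = min(d[s, t], m)
    return max(abs(f[s] - f[t]) / d[s, t] ** alpha
               for s in range(n) for t in range(n) if s != t)
```

This quadratic-time sketch is only meant to make the definitions concrete; for large complexes one would exploit the tree structure instead of enumerating all pairs.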

Theorem 1

For a simplicial complex C equipped with a hierarchical bipartition tree, suppose that a signal \({\varvec{f}}\) is Hölder continuous with exponent \(\alpha \) and constant \(C_H({\varvec{f}})\). Then the coefficients with \(l \ge 1\) for the \(\kappa \)-HGLET (\(c^j_{k,l}\)) and \(\kappa \)-GHWT (\(d^j_{k,l}\)) satisfy:

$$\begin{aligned} \left| c^j_{k,l} \right| \le C_H({\varvec{f}}) \left( n^j_k\right) ^{\alpha +\frac{1}{2}}, \quad \left| d^j_{k,l} \right| \le C_H({\varvec{f}}) \left( n^j_k\right) ^{\alpha +\frac{1}{2}} \end{aligned}$$

Proof

See Theorem 3.1 of [51]. \(\square \)

Theorem 2

For a fixed orthonormal basis \(\{\varvec{\phi }_l \}^{n-1}_{l=0}\) and a parameter \(0< \rho < 2\),

$$\begin{aligned} \Vert {\varvec{f}}- P_m {\varvec{f}}\Vert _2 \le \frac{\vert {\varvec{f}}\vert _\rho }{m^\beta } \end{aligned}$$

where \(P_m\) is the best nonlinear m-term approximation operator in the basis \(\{\varvec{\phi }_l \}^{n-1}_{l=0}\), \(\beta =\frac{1}{\rho } - \frac{1}{2}\), and \(\vert {\varvec{f}}\vert _\rho \) is defined as \(\vert {\varvec{f}}\vert _\rho := \left( \sum _{l=0}^{n-1} \vert \langle {\varvec{f}}, \varvec{\phi }_l \rangle \vert ^\rho \right) ^{\frac{1}{\rho }}\).

Proof

See Theorem 3.2 of [51] and Theorem 6.3 of [50]. \(\square \)

7 Numerical experiments

We demonstrate the efficacy of our proposed partitioning techniques and basis constructions by conducting a series of experiments. In Sect. 7.1 we show how our multiscale bases and overcomplete dictionaries can be used to sparsely approximate signals defined on \(\kappa \)-simplices. In Sect. 7.2 we show how these representations can be used in supervised classification and unsupervised clustering problems.

7.1 Approximation and signal compression

We begin with an illustrative example by creating synthetic data on 1- and 2-simplices via triangulation of a digital image. We start with a \(512 \times 512\) "peppers" image and map it to a Cartesian grid on the unit square \([0,1]^2\). We then randomly sample 1024 points within this square (not necessarily on the grid) and create a triangular mesh from these points using Delaunay triangulation. Next, we interpolate the image from the Cartesian grid to the sampled vertices by computing the barycentric coordinates of each vertex with respect to its enclosing cell of the Cartesian grid. Finally, we interpolate the signal to the edges and triangles of the triangulation by averaging the values of the vertices that they contain. The result, for our random seed, is a signal defined on the 3050 edges of the triangulation and another on the 2067 triangles. We now consider the sparse representation of these signals. Figure 9 shows the nonlinear approximation (i.e., using the largest expansion coefficients in magnitude) of the triangle-based signals in the Hodge Laplacian eigenbasis (Fourier), the orthonormal Haar basis, and the orthonormal Walsh basis, as well as the approximations prescribed by applying the best-basis and greedy algorithms to the HGLET and GHWT dictionaries. Figure 10 shows the approximation error vs. the number of terms used for both the edge-based and triangle-based functions.
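The final lifting step, from vertex values to edge and triangle values, can be sketched as follows (the function name and data layout are ours; the image-to-vertex barycentric interpolation is omitted):

```python
import numpy as np

def lift_to_simplices(vertex_vals, triangles):
    """Given values on vertices and a list of triangles (index triples),
    assign each edge the mean of its two endpoint values and each triangle
    the mean of its three vertex values, as in the peppers construction."""
    edges = sorted({tuple(sorted(e)) for t in triangles
                    for e in ((t[0], t[1]), (t[1], t[2]), (t[0], t[2]))})
    edge_vals = {e: 0.5 * (vertex_vals[e[0]] + vertex_vals[e[1]])
                 for e in edges}
    tri_vals = [np.mean([vertex_vals[v] for v in t]) for t in triangles]
    return edge_vals, tri_vals
```

Each edge is stored once with sorted endpoint indices, so shared edges between adjacent triangles are not duplicated.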

A number of observations are in order. First, the multiscale dictionary-based methods consistently outperformed the generic orthonormal bases. The greedy approximation algorithm achieved the best approximation results, but it is also more costly to compute than any of the other methods, and the set of atoms used in the approximation may not be orthogonal, which may be detrimental to downstream tasks. Overall, the GHWT-based methods performed best, with the F2C best basis performing much better than the C2F best basis, which suggests that the fine-scale features of this signal are the most important. Similarly, the Walsh basis achieved much better results than the Haar basis, again emphasizing the necessity of capturing details at the fine scale.

Fig. 9

Nonlinear approximation of the peppers image for \(\kappa =2\)

Fig. 10

Nonlinear approximation errors of the peppers image. Left: \(L_2\) error; Right: \(\log(L_2)\) error, for up to 50% of the terms retained. Top: \(\kappa =1\); Bottom: \(\kappa =2\)

Next, we apply our approach to real-world data containing higher-order simplices with \(\kappa =0, 1, \ldots , 5\). The citation complex [38, 52] is a simplicial complex derived from the co-author/citation complex (CC) [53], which models the interactions between multiple authors of scientific papers. A paper with \(\kappa \) authors is represented by a \((\kappa -1)\)-simplex. We first build a graph whose vertices represent the authors in this CC database. Then, the vertices are connected by edges that represent co-authored papers. Note that if two authors co-authored multiple papers, these two vertices are still connected by a single edge. Next, we assign to each edge, as its weight (or value), the sum of the citation counts of all papers co-authored by the two authors forming that edge. Finally, we assign to each higher-order simplex, as its value, the sum of the values of its lower-order simplices. See [38] for a more thorough description of the construction of this complex. Table 1 reports some basic information about the number of simplices of different dimensions in this citation complex. Figure 11 shows the nonlinear approximation of this signal (i.e., a vector of citation numbers) for \(\kappa =0, 1, \ldots , 5\) with the Delta, Fourier, Haar, HGLET, and GHWT bases. Figure 12 shows the log error. The HGLET and GHWT bases were selected by the best-basis algorithm.

In these experiments, we observe that the best bases (GHWT and HGLET) outperformed the canonical bases, with the GHWT being the most efficient basis for each \(\kappa \). Additionally, for \(\kappa > 0\), the orthonormal Haar basis performed best in the semi-sparse regime (1% and 10% of terms retained). This suggests that the signals on each dimension of the citation complex are similar in that they are all close to being piecewise constant. However, when more terms are considered, the HGLET best basis achieved a lower approximation error than the orthonormal Haar basis.

Table 1 The number of \(\kappa \)-simplices in the citation complex for \(\kappa =0, 1, \ldots , 5\)
Fig. 11

Nonlinear approximation of the Citation Complex for \(\kappa = 0, \ldots , 5\). Here the HGLET and GHWT bases are selected by the best basis algorithm

Fig. 12

Top: Nonlinear approximation of the Citation Complex for \(\kappa = 0, \ldots , 5\). Bottom: Log of the error for up to 50% of the terms retained

7.2 Signal clustering and classification

Since the basis (and dictionary) vectors we present are both multiscale and built from the Hodge Laplacians, which are aware of both topological and geometric properties of the domain [28], they can function as very powerful feature extractors for general data science applications. In this section, we present two downstream applications: 1) a supervised classification problem; and 2) an unsupervised clustering problem. As baselines, we compare our proposed dictionaries with the Fourier and Delta (indicator function) bases, and with the Hodgelets proposed in [37] for the case \(\kappa =1\).

7.2.1 Supervised classification

First, we present our study in supervised classification. We begin by computing edge-valued signals for 1000 handwritten digits from the MNIST dataset [54] by sampling 500 points in the unit square and following the interpolation method presented for the peppers image in Sect. 7.1. We then compute the features of these images using the proposed orthogonal transforms and the best bases from the overcomplete dictionaries. Next, we train a support vector machine (SVM) to classify the digits for each of the transformed representations using the 1000 training examples. Finally, we test these SVMs on the remainder of the MNIST dataset.

We repeat this experiment for the FMNIST dataset [55], again using only 1000 examples for training data. Results are presented in Table 2. We remark that these tests are not meant to achieve state-of-the-art results for image classification but rather to showcase the effectiveness of these representations for downstream tasks. Unsurprisingly, the signal-adaptive dictionary methods outperformed the non-adaptive basis methods. Again, the piecewise-constant methods (GHWT, Haar) achieved better approximations than the smoother methods (Fourier, HGLET, Joint, and Separate Hodgelets). This is likely due to the near-binary nature of images in both datasets.

Table 2 Test Accuracy for SVMs trained on transforms of MNIST signals interpolated to a random triangulation

7.2.2 Unsupervised clustering

A natural setting for studying signals on 1-simplices \(C_1\) is the analysis of trajectories [28, 31, 37]. Of particular interest is the case where the domain has nontrivial topological features. Such is the case of the Global Drifter Program dataset, which tracks the positions of 334 buoys dropped into the ocean at various points around the island of Madagascar [37].

We begin by dividing the dataset into three subsets: train (\(\vert X_{\textrm{tr}} \vert =176\)), test (\(\vert X_{\textrm{te}} \vert =83\)), and validation (\(\vert X_{\textrm{vl}} \vert =84\)). We then use orthogonal matching pursuit (OMP) [47] to compute the m most significant features of the training set. Next, we extract these features for the test set and use them to compute the centroids \(\{{\varvec{c}}_j\}_{j=1}^d\) of each cluster. To evaluate these clusters, we compute the K-score (i.e., the standard k-means objective) on the transformed features of the validation set:

$$\begin{aligned} K\mathrm {-score}:= \dfrac{1}{N} \sum _{i=1}^{N} \min _{1 \le j \le d} \Vert {\varvec{f}}({\varvec{x}}_i) - {\varvec{c}}_j \Vert ^2, \quad {\varvec{x}}_i \in X_{\textrm{vl}}, \end{aligned}$$

where \(N=\vert X_{\textrm{vl}} \vert = 84\) and \({\varvec{f}}(\cdot )\) represents the feature extraction prescribed by applying OMP to the test set. We repeat this experiment for \(m=5,10,15,20,25\) (number of features) and \(d=2, \ldots , 7 \) (number of clusters). Figure 13 summarizes the results of this test, while Table 3 shows the full numerical results. In this experiment, the GHWT outperformed all other bases because the trajectories are roughly constant and locally supported. The orthogonal matching pursuit scheme can select elements with the correct support size, and the piecewise-constant nature of the GHWT atoms can capture the action of the trajectory with very few elements.
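A direct implementation of this objective (the helper name `k_score` is ours; features and centroids are stored as rows of numpy arrays):

```python
import numpy as np

def k_score(features, centroids):
    """K-score: the average squared distance from each validation feature
    vector (row of `features`) to its nearest centroid (row of `centroids`),
    i.e., the standard k-means objective."""
    # pairwise squared distances, shape (num_features, num_centroids)
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()
```

Lower scores indicate that the extracted features concentrate the validation trajectories tightly around the centroids learned from the test set.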

Fig. 13

Extensive results for the buoy cluster test. The leftmost figure shows which method performed best, the second from the left shows the second best, and so on. The x-axis in each subplot indicates the number of coefficients used and the y-axis the number of clusters. Full numerical results are presented in Table 3

8 Conclusions and future work

In this article, we have developed several generalizations of orthonormal bases and overcomplete transforms/dictionaries for signals defined on \(\kappa \)-simplices, and demonstrated their usefulness for data representation on both illustrative synthetic examples and real-world simplicial complexes generated from a co-authorship/citation dataset and an ocean current/flow dataset. However, there are many more tools from harmonic analysis that we have not addressed in this article. From a theoretical standpoint, future work may involve: (1) defining additional families of multiscale transforms such as the extended Generalized Haar-Walsh Transform (eGHWT) [17] and Natural Graph Wavelet Packets (NGWPs) [15]; (2) exploring different best-basis selection criteria tailored for classification and regression problems such as the Local Discriminant Basis [5, 7] and the Local Regression Basis [6] on simplicial complexes; and (3) investigating nonlinear feature extraction techniques such as the Geometric Scattering Transform [56]. From an application standpoint, we look forward to applying the techniques presented here to data science problems in computational chemistry, weather forecasting, and genetic analysis, all of which have elements that are naturally modeled with simplicial complexes.