Multi-order hypergraph convolutional networks integrated with self-supervised learning

Hypergraphs, as a powerful representation of information, naturally and effectively depict complex, non-pair-wise relationships in the real world. Hypergraph representation learning is useful for exploring the complex relationships implicit in hypergraphs. However, most methods focus on 1-order neighborhoods and ignore the higher order neighborhood relationships among data on the hypergraph structure, which often results in underutilization of the hypergraph structure. In this paper, we exploit the potential of higher order neighborhoods in hypergraphs for representation and propose a Multi-Order Hypergraph Convolutional Network Integrated with Self-supervised Learning. We first encode the multi-channel network of the hypergraph with high-order spectral convolution operators that capture multi-order representations of nodes. Then, we introduce an inter-order attention mechanism to preserve low-order neighborhood information. Finally, to extract valid embeddings from the higher order neighborhoods, we incorporate a self-supervised learning strategy based on maximizing mutual information into the multi-order hypergraph convolutional network. Experiments on several hypergraph datasets show that the proposed model is competitive with state-of-the-art baselines, and ablation studies show the effectiveness of the higher order neighborhood development, the inter-order attention mechanism, and the self-supervised learning strategy.


Introduction
Hypergraphs [7] provide a natural way to model complex patterns of component connectivity in the real world. In comparison to graphs, hypergraphs can connect non-pair-wise relations, a pattern that carries more information. With the development of deep network-based learning methods, hypergraphs have been widely applied in many domains, including pose estimation [16,29] and brain state classification [6,32].
Recently, researchers have proposed several hypergraph-based neural network frameworks [2,12,19,41,45,46]. Most of these methods focus on hypergraph expansion or on extending different network structures. In the Hypergraph Neural Network (HGNN) [12], messages propagate through a hypergraph Laplacian operator on a clique-expanded hypergraph, following a node-hyperedge-node propagation strategy. HyperGCN [41] approximates hyperedges as pair-wise edges, converting the hypergraph learning problem into a graph learning problem. Moreover, unified frameworks [19,46] for hypergraphs and graphs have emerged as a recent trend.

Generally, these methods are designed around a message passing process that confines nodes and hyperedges to their 1-order neighborhood in a single propagation step. However, nodes and hyperedges with the same attributes do not exist only in the 1-order neighborhood. For example, a paper co-authored by multiple authors can be considered a node, and the papers containing the same author are connected by a hyperedge. As shown in Fig. 1, paper P2 in hyperedge 1 has two common authors, so hyperedge 2 is a 1-order neighbor of hyperedge 1; through paper P3 in hyperedge 2, hyperedge 3 is a 2-order neighbor of hyperedge 1. Such connections provide a way to reveal patterns of cross-domain collaboration. Hypergraphs therefore serve as a powerful representation for retaining information carried by deeper and more complex connectivity relationships. Furthermore, some works [1,31] on graph learning focus on neighborhood expansion of the adjacency matrix.

Fig. 1 The papers of an author form a hyperedge. Author 2 co-authored paper P2 with author 1, so hyperedge 2 constitutes a 1-order neighbor of hyperedge 1; similarly, by the connection of paper P3, hyperedge 3 is a 2-order neighbor of hyperedge 1
The method of [20] uses powers of the incidence matrix to obtain higher order relationships, but it cannot be adapted to hypergraphs with arbitrary hyperedge sizes. Moreover, a larger receptive field means that nodes may receive more performance-degrading noise. Although the higher order neighborhood encapsulates a rich representation, it also brings challenges: it remains an open problem to effectively extract valuable information from the complex higher order neighborhoods of objects while maintaining lower order neighborhood information.
To address the above challenges, we propose Multi-Order Hypergraph Convolutional Networks Integrated with Self-Supervised Learning (MO-HGCN), where the multi-order representation is maintained by a multi-channel network. We first perform k-order expansions of Chebyshev polynomials for spectral convolution to obtain spectral 2-order and spectral 3-order hypergraph convolution operators. Specifically, the operators are constructed as independent hypergraph convolution layers and modeled as a 2-order channel and a 3-order channel, respectively. In addition, we adaptively adjust the nodes' features in the 1-order hypergraph convolution and utilize it as an enhanced information channel. Then, we propose an inter-order attention mechanism to learn the contrastive information among the different order neighborhoods. By assigning attention scores to the node embeddings of the higher order channels, the low-order neighborhood information is brought into focus. To extract valuable information from the higher order channels, we learn distinct representations in a self-supervised manner by incorporating contrastive learning based on mutual information maximization. Finally, we fuse the node embeddings learned from the 2-order and 3-order channels to represent the complete multi-order embeddings and optimize the weights of the network by joint learning. Compared with existing methods, MO-HGCN is a semi-supervised node classification model that combines self-supervised learning to obtain a multi-order neighborhood representation of nodes. The main contributions of this paper are as follows:

• We propose a Multi-Order Hypergraph Convolutional Network Integrated with Self-Supervised Learning (MO-HGCN) to explicitly capture the complex relationships of higher order neighborhoods by spectral high-order hypergraph convolution operators, and obtain a multi-order representation through a multi-channel network.
• We propose an inter-order attention mechanism to maintain the information of low-order neighborhoods, and learn the distinct representation of higher order neighborhoods by a mutual information maximization strategy in a self-supervised learning manner.
• We conduct extensive experiments on several hypergraph datasets, and the results show the effectiveness of MO-HGCN compared with the state-of-the-art.

Hypergraph neural networks
In recent years, hypergraphs have gained attention among researchers, and representation learning methods based on hypergraphs have developed considerably. Feng et al. [12] propose the Hypergraph Neural Network (HGNN), a general framework that implements the message passing strategy on the hypergraph with a hyperedge convolutional layer. To avoid the limitations of an inherent hypergraph structure, Jiang et al. [22] propose a dynamic hypergraph neural network that updates the hypergraph structure. Bai et al. [4] propose two trainable operators, namely hypergraph convolution and hypergraph attention, that can be extended and migrated within neural networks. Besides, some studies propose new hypergraph representation learning frameworks, such as HNHN [10], Hyper-SAGNN [45], HyperSAGE [2], and HGC-RNN [43].
In the exploration of hypergraph structure, HyperGCN [41] enables hypergraphs to be trained with graph convolutional networks by approximating hyperedges as pair-wise edges. Bandyopadhyay et al. [5] apply graph convolution to the line graph of the hypergraph to adapt to variable-sized hyperedges. Yang et al. [42] treat vertices and hyperedges equally to solve the symmetric information loss problem of data co-occurrence. Various applications [13] based on hypergraphs are also evolving, such as pose estimation [16,29], link prediction [11], recommendation [23,38,39,44], and brain state classification [6,32].
A recent trend combining hypergraphs with graph network methods has emerged as a result of the data modeling advantages brought by non-pair-wise relations in hypergraphs. Huang et al. [19] propose a framework for modeling the message passing process in graph and hypergraph neural networks. Zhang et al. [46] consider hypergraphs with edge-dependent vertex weights, propose the generic hypergraph spectral convolution networks (GHSC), and present various variants of hypergraph neural networks.

Self-supervised learning
Self-supervised learning [21,24,30] is currently receiving considerable attention in deep learning, serving downstream tasks by learning useful information in unlabeled data. Selfsupervised learning has a wide range of applications in computer vision [3,18], natural language processing [28], and graph learning [17,25,34,37].
One popular approach in graph learning is mutual information maximization, i.e., global-local contrast. Hjelm et al. [18] introduce the application of mutual information maximization strategies to images by proposing Deep InfoMax (DIM). DIM is adapted to different downstream tasks through global-local contrast, e.g., local features are suitable for classification tasks. Veličković et al. [37] extend this paradigm to graph learning and propose Deep Graph Infomax (DGI). DGI performs global and local neighborhood comparisons on graphs, enabling nodes to learn global and local structural information. InfoGraph [34] maximizes the mutual information between graph-level representations and substructure representations at different scales to learn a global graph representation; rich representations are also learned from labeled and unlabeled datasets by semi-supervised learning.
The mutual information maximization strategy has been extended to certain tasks on hypergraphs. Xia et al. [39] propose a dual-channel hypergraph convolutional network, which employs self-supervised learning as an auxiliary task to enhance the performance of session recommendation. Yu et al. [44] use the higher order relations of hypergraphs to obtain complex relationships between users and compensate for the information loss due to multi-channel networks with multi-layer mutual information maximization. These works investigate the impact of mutual information maximization for different types of information, while our work explores the implications of the mutual information maximization strategy in higher order neighborhoods.

Method
In this section, we describe in detail the proposed Multi-Order Hypergraph Convolutional Network Integrated with Self-Supervised Learning (MO-HGCN). As shown in Fig. 2, MO-HGCN consists of a 2-order channel, a 3-order channel, and an enhanced information channel, with the 2-order and 3-order channels as the outputs. Specifically, we design the spectral 2-order and 3-order hypergraph convolution operators to obtain higher order information. Considering the importance of node features and the preservation of 1-order neighborhoods, we propose an inter-order attention mechanism in the multi-order hypergraph convolutional network. Our goal is to fuse the multi-order information to obtain a multi-level representation of the nodes. We further introduce self-supervised learning on the hypergraph, i.e., mutual information maximization between different orders, to capture the distinctive higher order information of the nodes.

Preliminaries
Given a hypergraph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathbf{W})$, $\mathcal{V} = \{v_1, v_2, \ldots, v_n\}$ is a vertex set with $n$ nodes, and $\mathcal{E} = \{e_1, e_2, \ldots, e_m\}$ is a hyperedge set with $m$ hyperedges. The hyperedge weights form a diagonal matrix $\mathbf{W}$ whose diagonal is the weight set $\{W_1, W_2, \ldots, W_m\}$. The hypergraph $\mathcal{G}$ can be represented by an incidence matrix $\mathbf{H} \in \mathbb{R}^{|\mathcal{V}| \times |\mathcal{E}|}$ with entries

$$h(v, e) = \begin{cases} 1, & \text{if } v \in e \\ 0, & \text{if } v \notin e \end{cases} \tag{1}$$

The Laplacian [47] of a hypergraph $\mathcal{G}$ is

$$\Delta = \mathbf{I} - \mathbf{D}_v^{-1/2} \mathbf{H} \mathbf{W} \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{D}_v^{-1/2}, \tag{2}$$

where $\mathbf{D}_v$ is the diagonal matrix of vertex degrees and $\mathbf{D}_e$ is the diagonal matrix of hyperedge degrees. The hypergraph Laplacian $\Delta \in \mathbb{R}^{n \times n}$ can be decomposed as $\Delta = \Phi \Lambda \Phi^\top$, with an orthonormal eigenvector matrix $\Phi$ and a non-negative diagonal eigenvalue matrix $\Lambda$. For a hypergraph $\mathcal{G}$ and a signal $x$, the spectral convolution is

$$g \star x = \Phi\, g(\Lambda)\, \Phi^\top x, \tag{3}$$

where $\star$ represents the convolution operator and $g(\Lambda)$ indicates the Fourier coefficients. The function $g(\Lambda)$ is further parameterized as a $K$-order polynomial expressed by the truncated Chebyshev expansion, where the Chebyshev polynomials follow the recurrence $T_k(x) = 2x\,T_{k-1}(x) - T_{k-2}(x)$ with $T_0(x) = 1$ and $T_1(x) = x$. With the truncated Chebyshev expansion, the spectral convolution is approximated as

$$g \star x \approx \sum_{k=0}^{K} \theta_k\, T_k(\tilde{\Delta})\, x, \tag{4}$$

where $T_k(\tilde{\Delta})$ indicates the $k$-order Chebyshev polynomial and $\tilde{\Delta} = \frac{2}{\lambda_{\max}} \Delta - \mathbf{I}$ is the scaled Laplacian. Moreover, $\lambda_{\max} \approx 2$ according to works [12,26], so $\tilde{\Delta} = \Delta - \mathbf{I} = -\hat{\Theta}$ with $\hat{\Theta} = \mathbf{D}_v^{-1/2} \mathbf{H} \mathbf{W} \mathbf{D}_e^{-1} \mathbf{H}^\top \mathbf{D}_v^{-1/2}$. Therefore, the spectral 1-order hypergraph convolution can be defined as

$$X^{(l+1)} = \sigma\big(\hat{\Theta}\, X^{(l)}\, \Theta\big), \tag{5}$$

where $\Theta$ is the learnable filter parameter and $\sigma$ is a nonlinear activation.
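As a concrete illustration of the definitions above, a minimal NumPy sketch of the symmetric normalized propagation matrix $\mathbf{D}_v^{-1/2}\mathbf{H}\mathbf{W}\mathbf{D}_e^{-1}\mathbf{H}^\top\mathbf{D}_v^{-1/2}$ and the hypergraph Laplacian follows; the function name and the toy hypergraph are ours, not from the paper.

```python
import numpy as np

def propagation_matrix(H, w):
    """Symmetric normalized propagation matrix
    Theta_hat = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    the operator inside the hypergraph Laplacian Delta = I - Theta_hat.

    H : (n, m) incidence matrix, w : (m,) hyperedge weights.
    """
    W = np.diag(w)
    dv = H @ w                       # vertex degrees d(v) = sum_e w(e) h(v, e)
    de = H.sum(axis=0)               # hyperedge degrees d(e) = |e|
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    return Dv_inv_sqrt @ H @ W @ De_inv @ H.T @ Dv_inv_sqrt

# toy hypergraph: 4 nodes, 2 hyperedges
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [0, 1]], dtype=float)
w = np.ones(2)
Theta = propagation_matrix(H, w)
L = np.eye(4) - Theta               # hypergraph Laplacian Delta
```

The resulting matrix is symmetric, and the Laplacian built from it is positive semi-definite, which is what allows the spectral decomposition used above.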

Multi-order hypergraph convolutional network
On the basis of the spectral convolution theory of hypergraphs, we further develop operators that allow nodes and hyperedges to interact in higher order neighborhoods; we extend the order of the neighborhoods from the perspective of spectral hypergraph convolution. According to the Chebyshev polynomial in Eq. (4), $k$ is set to 2 to obtain the spectral 2-order hypergraph convolution as follows:

$$g \star x \approx \theta_0 x + \theta_1 \tilde{\Delta} x + \theta_2 \big(2\tilde{\Delta}^2 - \mathbf{I}\big) x, \tag{6}$$

where $\theta_0$, $\theta_1$, and $\theta_2$ denote the parameters of the filter $g$. Following works [12,26] to avoid over-parameterization, we reduce the multiple parameters to a single parameter, which is assumed as

$$\theta = \theta_0 = -\theta_1 = \theta_2. \tag{7}$$

Thus, substituting $\tilde{\Delta} = -\hat{\Theta}$, the spectral 2-order hypergraph convolution operator can be simplified as

$$g \star x \approx \theta\,\big(\hat{\Theta} + 2\hat{\Theta}^2\big)\, x. \tag{8}$$

With the spectral 2-order hypergraph convolution operator, the 2-order hypergraph convolution of a signal $X$ can be defined as

$$X^{(l+1)} = \sigma\Big(\big(\hat{\Theta} + 2\hat{\Theta}^2\big)\, X^{(l)}\, \Theta\Big), \tag{9}$$

where $\Theta \in \mathbb{R}^{C_{in} \times C_{out}}$ represents the learnable parameter. Similarly, the spectral 3-order hypergraph convolution is represented as

$$g \star x \approx \tilde{\theta}_0 x + \tilde{\theta}_1 \tilde{\Delta} x + \tilde{\theta}_2 \big(2\tilde{\Delta}^2 - \mathbf{I}\big) x + \tilde{\theta}_3 \big(4\tilde{\Delta}^3 - 3\tilde{\Delta}\big) x, \tag{10}$$

where $\tilde{\theta}_0$, $\tilde{\theta}_1$, $\tilde{\theta}_2$, and $\tilde{\theta}_3$ denote the parameters of the filter $g$. We again use a single parameter $\tilde{\theta}$ to avoid over-parameterization:

$$\tilde{\theta} = \tilde{\theta}_0 = -\tilde{\theta}_1 = \tilde{\theta}_2 = -\tilde{\theta}_3. \tag{11}$$

With the single parameter, the spectral 3-order hypergraph convolution operator can be simplified as

$$g \star x \approx \tilde{\theta}\,\big(4\hat{\Theta}^3 + 2\hat{\Theta}^2 - 2\hat{\Theta}\big)\, x. \tag{12}$$

Thus, the 3-order hypergraph convolution of a signal $X$ can be defined as

$$X^{(l+1)} = \sigma\Big(\big(4\hat{\Theta}^3 + 2\hat{\Theta}^2 - 2\hat{\Theta}\big)\, X^{(l)}\, \Theta\Big), \tag{13}$$

where $\Theta \in \mathbb{R}^{C_{in} \times C_{out}}$ represents the learnable parameter.
In MO-HGCN, $l$ in Eqs. (9) and (13) is set to 2, so the backbone of MO-HGCN is a multi-channel network with two layers. This design preserves the information of different order neighborhoods and allows the nodes to learn multi-order representations.
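The truncated Chebyshev filter behind the 2-order and 3-order operators can be assembled iteratively from the recurrence $T_k(\tilde{\Delta}) = 2\tilde{\Delta}\,T_{k-1}(\tilde{\Delta}) - T_{k-2}(\tilde{\Delta})$. The helper below is a sketch under our assumptions (in particular the coefficient choices in the usage note), not the paper's code.

```python
import numpy as np

def chebyshev_filter(Theta_hat, thetas):
    """K-order spectral filter sum_k theta_k T_k(L_tilde), where
    L_tilde = Delta - I = -Theta_hat (using lambda_max ~= 2).
    `thetas` holds the K+1 filter coefficients, as in the truncated
    expansion before the single-parameter simplification."""
    n = Theta_hat.shape[0]
    L_tilde = -Theta_hat
    T_prev, T_curr = np.eye(n), L_tilde    # T_0 = I, T_1 = L_tilde
    out = thetas[0] * T_prev
    for k in range(1, len(thetas)):
        out += thetas[k] * T_curr
        # Chebyshev recurrence: T_{k+1} = 2 L_tilde T_k - T_{k-1}
        T_prev, T_curr = T_curr, 2.0 * L_tilde @ T_curr - T_prev
    return out
```

For example, tying the coefficients as $(\theta, -\theta, \theta)$ with $\theta = 1$ collapses the 2-order filter to $\hat{\Theta} + 2\hat{\Theta}^2$, a polynomial in the propagation matrix that mixes 1-order and 2-order neighborhoods.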

Inter-order attention
The spectral high-order hypergraph convolution operators allow each node to aggregate information from distant nodes and hyperedges. Such information may not be directly suitable for learning, which presents a challenge in regulating the involvement of low-order information. Therefore, we propose an inter-order attention mechanism to indicate the similarity between higher order neighborhoods and low-order neighborhoods. Unlike previous attention mechanisms, we focus on the comparison between different orders of the same node, rather than among neighboring nodes. In particular, we design an enhanced information channel based on the 1-order hypergraph convolution that augments each node's own information. The convolution process of this channel is

$$\mathbf{h} = \sigma\big(\big(\hat{\Theta} + \mathrm{diag}(\beta)\big)\, X\, \Theta\big), \tag{14}$$

where $\mathbf{h}$ represents the enhanced node embedding and $\beta \in \mathbb{R}^N$ denotes a learnable parameter that assigns a different self-loop weight to each node. As shown in Fig. 3, we obtain the node embeddings $z_{o2}$ and $z_{o3}$ of the 2-order and 3-order channels after the first layer of convolution. The attention mechanism is then applied between $z_{o2}$ and $\mathbf{h}$, and between $z_{o3}$ and $\mathbf{h}$, respectively.

Fig. 3 The inter-order attention mechanism of MO-HGCN
The attention scores of the node embeddings $z^l_i \in \mathbb{R}^K$ and $z^h_i \in \mathbb{R}^K$ of the low-order channel and the higher order channel are calculated by

$$\alpha_i = \mathrm{MLP}\big(z^l_i \,\|\, z^h_i\big), \tag{15}$$

where $\alpha_{ij}$ scores the $j$-th dimensional feature of node $i$, $\mathrm{MLP}(\cdot): \mathbb{R}^{2 \times K} \rightarrow \mathbb{R}^{K}$ is the feature mapping function, set as a multi-layer perceptron, and $\|$ denotes the concatenation operation. Therefore, the attention score $\alpha_{o2}$ between the 2-order channel and the enhanced information channel is calculated as

$$\alpha_{o2} = \mathrm{MLP}\big(\mathbf{h} \,\|\, z_{o2}\big), \tag{16}$$

and the attention score $\alpha_{o3}$ between the 3-order channel and the enhanced information channel as

$$\alpha_{o3} = \mathrm{MLP}\big(\mathbf{h} \,\|\, z_{o3}\big). \tag{17}$$

The attention scores $\alpha_{o2}$ and $\alpha_{o3}$ are then assigned to the higher order node embeddings as $\hat{z}_{o2} = \alpha_{o2} \odot z_{o2}$ and $\hat{z}_{o3} = \alpha_{o3} \odot z_{o3}$, enhancing the most relevant node representations between channels. To preserve the original higher order message of the node embeddings, we fuse $\hat{z}_{o2}$ and $\hat{z}_{o3}$ with $z_{o2}$ and $z_{o3}$, respectively. The final higher order channel node embeddings $\tilde{z}_{o2}$ and $\tilde{z}_{o3}$ are

$$\tilde{z}_{o2} = \lambda\, \hat{z}_{o2} + (1 - \lambda)\, z_{o2}, \tag{18}$$

$$\tilde{z}_{o3} = \mu\, \hat{z}_{o3} + (1 - \mu)\, z_{o3}, \tag{19}$$

where $\lambda \in \mathbb{R}^1$ and $\mu \in \mathbb{R}^1$ denote learnable parameters restricted to $[0, 1]$, controlling the involvement of the two node embeddings.
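A minimal PyTorch sketch of this inter-order attention follows. The hidden width, the sigmoid on the scores, and keeping the fusion weight in $[0,1]$ via a sigmoid of a free scalar are our assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class InterOrderAttention(nn.Module):
    """Sketch: an MLP maps the concatenation of a node's low-order
    embedding h_i and higher order embedding z_i to per-dimension
    attention scores, which re-weight z before a convex fusion."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim), nn.Sigmoid())
        # one way to restrict the fusion weight to [0, 1]
        self.lam = nn.Parameter(torch.zeros(1))

    def forward(self, h, z):
        alpha = self.mlp(torch.cat([h, z], dim=-1))   # per-dimension scores
        z_hat = alpha * z                             # re-weighted embedding
        lam = torch.sigmoid(self.lam)
        return lam * z_hat + (1.0 - lam) * z          # convex fusion
```

One such module would be instantiated per higher order channel, mirroring the separate $\lambda$ and $\mu$ weights in the text.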

Self-supervised learning auxiliary task
Multi-order hypergraph convolutional networks enable nodes to learn multiple levels of representation, further improving model performance. However, the channels of the multi-channel structure are independent of each other, and the higher order information usually contains varying degrees of redundancy. It is therefore worth considering how to extract the distinctive message from the multi-order hypergraph convolutional network. Inspired by the mutual information maximization that underlies the performance of Deep Graph Infomax (DGI) [37], we extend mutual information maximization to the inter-order setting to guide the model to reduce feature redundancy. Specifically, we construct contrastive learning between the enhanced information channel and each higher order channel. For the 2-order channel, the positive sample pair is $(\mathbf{h}_i, \tilde{z}_{o2,i})$ and the negative sample pair is $(\hat{\mathbf{h}}_i, \tilde{z}_{o2,i})$, where $\hat{\mathbf{h}}_i$ denotes the negative sample obtained by row-wise shuffling. We utilize InfoNCE [18] as the loss function for contrastive learning:

$$\mathcal{L}_{s1} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp\big(S(\mathbf{h}_i, \tilde{z}_{o2,i})\big)}{\exp\big(S(\mathbf{h}_i, \tilde{z}_{o2,i})\big) + \exp\big(S(\hat{\mathbf{h}}_i, \tilde{z}_{o2,i})\big)}, \tag{20}$$

where $S(\cdot, \cdot)$ denotes the discriminator function, implemented as the dot product.

Algorithm 1 The overall process of MO-HGCN
…
6: Learn the node embeddings $\mathbf{h}$, $z_{o2}$, and $z_{o3}$ of the first layer via Eqs. (14), (9), and (13), respectively;
7: Calculate the inter-order attention scores $\alpha_{o2}$ and $\alpha_{o3}$ via Eqs. (16) and (17), respectively;
8: Embed the enhanced node embeddings $\tilde{z}_{o2}$ and $\tilde{z}_{o3}$ via Eqs. (18) and (19), respectively;
9: Establish the contrastive learning of $\mathbf{h}$, $\tilde{z}_{o2}$, and $\tilde{z}_{o3}$ via Eqs. (20) and (21);
10: Learn the node embeddings $X^{(2)}$ and $X^{(3)}$ of the second layer via Eqs. (9) and (13), respectively;
11: Fuse the node embeddings $X^{(2)}$ and $X^{(3)}$ into $\tilde{X}$ via Eq. (24);
12: Minimize $\mathcal{L}$ with a gradient descent optimization algorithm via Eq. (26);
13: end while
For the 3-order channel, the positive sample pair is $(\mathbf{h}_i, \tilde{z}_{o3,i})$ and the negative sample pair is $(\hat{\mathbf{h}}_i, \tilde{z}_{o3,i})$, where $\hat{\mathbf{h}}_i$ denotes the negative sample with row-wise shuffling. The InfoNCE loss function $\mathcal{L}_{s2}$ is defined as

$$\mathcal{L}_{s2} = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{\exp\big(S(\mathbf{h}_i, \tilde{z}_{o3,i})\big)}{\exp\big(S(\mathbf{h}_i, \tilde{z}_{o3,i})\big) + \exp\big(S(\hat{\mathbf{h}}_i, \tilde{z}_{o3,i})\big)}. \tag{21}$$
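The inter-order contrastive objective can be sketched as below: one row-wise-shuffled negative per node and a dot-product discriminator. The temperature parameter is our addition for illustration; this is an assumed variant, not the paper's exact loss.

```python
import torch

def infonce_loss(h, z, tau=1.0):
    """InfoNCE-style inter-order contrastive loss (sketch).

    Positives pair each node's enhanced embedding h_i with its higher
    order embedding z_i; negatives h_hat_i come from row-wise shuffling
    of h. The discriminator S is a dot product; tau is an assumed
    temperature."""
    perm = torch.randperm(h.size(0))
    h_neg = h[perm]                              # row-wise shuffled negatives
    pos = (h * z).sum(dim=-1) / tau              # S(h_i, z_i)
    neg = (h_neg * z).sum(dim=-1) / tau          # S(h_hat_i, z_i)
    logits = torch.stack([pos, neg], dim=-1)
    # -log softmax probability of the positive against the negative
    return -torch.log_softmax(logits, dim=-1)[:, 0].mean()
```

The same function would be applied twice, once per higher order channel, yielding the two self-supervised terms combined in the joint objective.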

Model learning
The node embeddings $\tilde{z}_{o2}$ and $\tilde{z}_{o3}$ are input to the second layer of the multi-order hypergraph convolutional network. The outputs of the second layer are denoted $X^{(2)} \in \mathbb{R}^{N \times q}$ and $X^{(3)} \in \mathbb{R}^{N \times q}$, where $q$ is the number of classes. To conduct node classification, we adopt a summation strategy to fuse the multi-channel information:

$$\tilde{X} = X^{(2)} + X^{(3)}. \tag{24}$$
Then, we adopt the softmax function to predict the label $\tilde{Y}$ from $\tilde{X}$. The cross-entropy loss function for node classification is defined as

$$\mathcal{L}_c = -\sum_{i \in \mathcal{V}_L} \sum_{j=1}^{q} Y_{ij} \ln \tilde{Y}_{ij}, \tag{25}$$

where $Y_{ij}$ denotes the true labels of the labeled node set $\mathcal{V}_L$. Therefore, the joint learning loss function is

$$\mathcal{L} = \mathcal{L}_c + \eta_1 \mathcal{L}_{s1} + \eta_2 \mathcal{L}_{s2}, \tag{26}$$

where $\eta_1$ and $\eta_2$ are hyperparameters that control the participation of self-supervised learning. Algorithm 1 reports the overall process of MO-HGCN.
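The fusion and joint objective described above can be sketched as follows; the function name, the boolean training mask, and the default $\eta$ values (the Cora setting from the experiments section) are our conventions.

```python
import torch
import torch.nn.functional as F

def joint_loss(X2, X3, labels, train_mask, L_s1, L_s2,
               eta1=0.005, eta2=0.005):
    """Sketch of the joint objective: sum-fuse the two channel outputs,
    apply softmax cross-entropy on the labeled nodes only, and add the
    weighted self-supervised terms."""
    X = X2 + X3                                   # channel fusion
    logp = F.log_softmax(X, dim=-1)               # softmax prediction
    L_c = F.nll_loss(logp[train_mask], labels[train_mask])  # cross-entropy
    return L_c + eta1 * L_s1 + eta2 * L_s2        # joint objective
```

Because the self-supervised terms enter additively, they act as regularizers whose influence is tuned entirely through $\eta_1$ and $\eta_2$.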

Experiments
In this section, we conduct experiments and validate our model by answering the following questions.
• Q1: How does MO-HGCN perform on the node classification task?
• Q2: How does high-order spectral hypergraph convolution perform compared to 1-order spectral hypergraph convolution?
• Q3: How does the inter-order attention mechanism contribute to the performance of MO-HGCN?
• Q4: How sensitive is the performance of MO-HGCN to its parameter settings?
• Q5: How does the self-supervised learning component affect the effectiveness of MO-HGCN?

Datasets
For the semi-supervised node classification task on hypergraphs, we use the five hypergraph datasets provided by HyperGCN [41] for validation. These datasets include co-citation networks and co-authorship networks. A summary of the datasets is shown in Table 1, and the details are as follows:

• Co-citation datasets: The original sources of the co-citation hypergraph datasets are Cora, Citeseer, and PubMed. In the hypergraph construction, all documents are created as nodes, and documents cited by the same document are grouped into a hyperedge. Hyperedges containing only one node are removed, and the node feature is the bag-of-words vector of the document.
• Co-authorship datasets: The original sources of the co-authorship hypergraph datasets are DBLP and Cora. In the hypergraph construction, all papers are considered as nodes, and papers authored by an author are grouped into a hyperedge. The nodes are characterized by the bag-of-words vectors of the papers.
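As an illustration of the co-authorship construction described above, the sketch below builds an incidence matrix from an author-to-papers mapping; the input format and helper name are ours, and single-node hyperedges are dropped as in the co-citation construction.

```python
import numpy as np

def coauthorship_incidence(papers_by_author):
    """Build the incidence matrix H of a co-authorship hypergraph:
    papers are nodes, and all papers sharing an author form one
    hyperedge. Hyperedges containing a single node are removed."""
    hyperedges = [ps for ps in papers_by_author.values() if len(ps) > 1]
    n = 1 + max(p for ps in hyperedges for p in ps)
    H = np.zeros((n, len(hyperedges)))
    for e, ps in enumerate(hyperedges):
        for p in ps:
            H[p, e] = 1.0
    return H

# hypothetical example: author "a3" wrote only one paper,
# so its hyperedge is removed
H = coauthorship_incidence({"a1": [0, 1], "a2": [1, 2, 3], "a3": [3]})
```

Paper 1 appears in both remaining hyperedges, which is exactly the kind of shared membership that makes hyperedges 1-order neighbors of each other.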

Baselines
We compare the proposed method with state-of-the-art baselines that include a variety of hypergraph neural networks combined with different neural network models. Details of these approaches are as follows:

• MLP+HLR [2]: A multi-layer perceptron using an explicit hypergraph Laplacian for regularization.
• HyperGCN [41]: HyperGCN approximates the hypergraph learning problem with a graph problem via pair-wise edges and provides a variant, FastHyperGCN, that reduces the training time.
• HyperSAGE [2]: HyperSAGE utilizes a two-level neural message passing strategy to propagate information in the hypergraph and combines the different neighborhood aggregation approaches of GraphSAGE [15].
• HGNN [12]: HGNN introduces the symmetric normalized hypergraph Laplacian [47] operator by means of spectral hypergraph theory and provides a general framework for hypergraph neural networks.
• UniGCN [19]: UniGNN unifies the message passing processes of graphs and hypergraphs into one framework, naturally extending Graph Neural Network designs to hypergraphs.
• UniGAT [19]: The method extends the aggregation process of Graph Attention Networks [36] to hypergraphs, so that nodes learn attention weights over neighboring hyperedges.
• UniGIN [19]: The method uses the mechanism of Graph Isomorphism Networks [40] to enhance expressiveness by letting nodes aggregate the information of neighboring hyperedges.
• UniSAGE [19]: The method is a variant of GraphSAGE [15], which adapts to different tasks by means of different aggregation functions.
• H-ChebNet [46]: A variant of the general hypergraph spectral convolution framework derived from ChebNet [9].
• H-APPNP [46]: H-APPNP is a hypergraph convolutional network with APPNP [27] as the backbone network.
• H-SSGC [46]: The method extends SSGC [48] to the general hypergraph spectral convolution framework.
• H-GCN [46]: H-GCN is a general hypergraph spectral convolution framework with Graph Convolutional Networks [26] as the backbone network.
• H-GCNII [46]: H-GCNII extends GCNII [8], a deep network structure, to the general hypergraph spectral convolution framework.

Experiments settings
For the semi-supervised node classification task, we use ACC (accuracy) to evaluate the performance of the model. In the experimental setup, we utilize the Adam algorithm and train for 2000 epochs. For the Cora (both co-citation and co-authorship) and Citeseer datasets, the learning rate is 0.005 and the L2 regularization is 0.05. For the DBLP and PubMed datasets, the learning rate is 0.05 and the L2 regularization is 0.002. For the hyperparameters η1 and η2, the Cora datasets (both co-citation and co-authorship) are set to 0.005 and 0.005, respectively, while the other datasets are set to 0.001. Each dataset has ten different training-test splits with consistent training-test ratios. We follow the protocol of work [19] to test the datasets. For the baselines, we cite the experimental results reported in the original papers, since the compared datasets and evaluation metric are consistent.

Performance analysis
We report the mean accuracy and standard deviation of the experimental results in Table 2, with the best results in bold and the second best results underlined. The experimental results show the advantage of our model in terms of accuracy compared with the state-of-the-art, with improvements of 2.0%, 4.0%, 3.1%, and 1.0% on the co-citation Cora, Citeseer, PubMed, and co-authorship Cora datasets, respectively. The best-performing method on the co-authorship DBLP dataset is H-GCNII. Compared to the pair-wise edge approximation of HyperGCN, MO-HGCN utilizes clique expansion to approximate the hypergraph structure as HGNN does, and the multi-order approximation of the neighborhood further enlarges the receptive field of nodes and hyperedges. It is for this reason that MO-HGCN performs better than HyperGCN and HGNN. Compared to models combining hypergraphs with other GNN methods, although these models absorb the advantages of different GNN structures and perform well, the results show that a multi-order hypergraph convolutional network combining an inter-order attention mechanism and self-supervised learning can more fully exploit the structural information of the hypergraph. The standard deviation results on the multiple splits also demonstrate that MO-HGCN achieves stability similar to the models that incorporate hypergraph and GNN methods.

Ablation study
We report in Fig. 4 the performance of different channels and different components of MO-HGCN to investigate their contributions. As shown in Fig. 4, the 1-order on the horizontal axis is a 1-order approximation of the hypergraph convolution, which is compared with the higher order channels. The 2-order and 3-order denote hypergraph convolutional networks using only the 2-order channel and the 3-order channel, respectively. The Multi-order represents the MO-HGCN consisting only of the 2-order and 3-order channels. The Inter-order attention denotes a multi-order hypergraph convolutional network that includes inter-order attention. The Self-supervised represents a multi-order hypergraph network consisting of both inter-order attention and self-supervised learning.
As can be observed from Fig. 4, the 2-order channel always performs better than the 3-order channel, while the 3-order channel outperforms the 1-order channel in most cases but is inferior to the 2-order channel. This indicates that the long-range information in higher order neighborhoods is not always directly applicable. The results of the Multi-order show that the fusion of multi-order information allows the model to learn multiple levels of representation, thus further improving performance. The self-supervised learning component delivers a significant boost compared to inter-order attention, suggesting a more prominent role in extracting the distinctive part of the higher order information. The results in Fig. 4 answer question Q2: the channels and components of the model contribute differently, with inter-order attention and self-supervised learning taking full advantage of the natural information brought by multi-order neighborhoods.

Effectiveness of inter-order attention
We use box plots in Fig. 5 to report the distribution of attention scores in the inter-order attention mechanism, addressing question Q3. The 2-order and 3-order in Fig. 5 represent the attention score distributions between the 2-order channel and the enhanced information channel, and between the 3-order channel and the enhanced information channel, respectively. As shown in Fig. 5, the distribution of attention scores generated by the inter-order attention mechanism is concentrated in the lower score region, which indicates a large discrepancy between the node embeddings generated by the higher order channels and those of the enhanced information channel. This also verifies that the nodes are able to receive information from higher order neighborhoods. Higher order channel node embeddings that are more similar to those of the enhanced information channel are assigned higher scores. As a result, information with low similarity to 1-order neighbors (which contain the node's own information) has a lower weight in the fusion of embeddings.

Fig. 7 2D visualization of T-SNE for node embeddings on the co-citation and co-authorship datasets. The first row represents the co-citation Cora dataset, the second row the co-citation Citeseer dataset, and the third row the co-authorship Cora dataset. Note that we name the MO-HGCN without self-supervised learning MO-HGCN (w/o)

Effectiveness of self-supervised learning
We conduct parameter sensitivity experiments on the hyperparameters of the self-supervised learning, i.e., question Q4.
The hyperparameters η1 and η2 are each chosen within a given range. Figure 6 reports the node classification accuracy of MO-HGCN for different η1 and η2 values and shows that the performance of MO-HGCN is stable when η1 and η2 are chosen in a suitable range. Moreover, since self-supervised learning serves as an auxiliary task, η1 and η2 contribute more to the performance of MO-HGCN at smaller values (Fig. 6). To investigate the impact of self-supervised learning on the multi-order hypergraph convolutional network, i.e., question Q5, we visualize the node embeddings generated by HyperGCN, HGNN, MO-HGCN without self-supervised learning, and MO-HGCN, respectively. As shown in Fig. 7, we use T-SNE [35] to reduce the dimension of the node embeddings and project them onto 2D coordinates to draw clusters, with each color representing a different class of nodes. To produce clear distributions, we test on the Cora and Citeseer datasets, which have small numbers of nodes, and report the silhouette coefficients in Table 3.
In Fig. 7, MO-HGCN produces clearer clusters of node embeddings, and it also achieves higher silhouette coefficients in Table 3 than MO-HGCN without self-supervised learning. This shows that self-supervised learning helps the node embeddings capture distinctive information, which improves the separability of the node embeddings.

Conclusions
In this paper, we propose a Multi-Order Hypergraph Convolutional Network incorporating self-supervised learning (MO-HGCN) to explore the potential of hypergraphs in higher order neighborhoods. MO-HGCN consists of a multi-channel network structure, where the higher order channels are composed of the spectral 2-order and spectral 3-order hypergraph convolution operators, respectively. Through inter-order attention, we design an enhanced information channel that preserves low-order neighborhood information. To mine the distinctive information in the higher order channels, we introduce self-supervised learning as an auxiliary task to enhance the performance of MO-HGCN. Experiments show that MO-HGCN is competitive with state-of-the-art baselines and develops the potential of higher order neighborhoods through its inter-order attention and self-supervised learning components. In future work, we would like to explore hypergraphs with heterogeneous nodes to investigate higher order neighborhood problems on heterogeneous hypergraphs.