Background

With the development of sequencing technology, more and more gene expression data have become available. These data contain a wealth of meaningful biological information, and its effective analysis is of great significance to the prevention and treatment of diseases. Meanwhile, multi-view data obtained by integrating data from different sources have gained much attention in the field of machine learning [1]. Gene expression data can be downloaded from The Cancer Genome Atlas (TCGA) platform. We integrated gene expression data from different diseases that share the same genes into multi-view data. Multi-view data provide a new perspective for mining the connections among multiple cancers.

To meet the demand of studying rapidly growing gene expression data, modern biologists are increasingly concerned with clustering and feature selection. Clustering is the process of dividing a series of genes or samples into different subsets such that the genes or samples in the same subset are similar [2]. Generally speaking, feature selection can not only find useful information and eliminate noise, but also reduce computational complexity. In this paper, we perform the selection of com-abnormal genes (genes abnormally expressed in common across multiple cancers) to study the relationship between genes and multiple cancers [3].

As an effective matrix decomposition method, non-negative matrix factorization (NMF) [4] is widely used in bioinformatics [5], image representation [6], and other fields [7]. NMF learns part-based representations of objects, which is consistent with the human brain’s perception mechanism. Several extensions to NMF have been proposed from different perspectives. For example, non-negative local coordinate factorization (NLCF) was presented by imposing a locality coordinate constraint on the original NMF [8]. Kim et al. presented the sparse non-negative matrix factorization (NMFs) method by incorporating sparsity constraints [9]. In practical applications, the data sometimes contain negative values, so semi-non-negative matrix factorization (Semi-NMF) and convex non-negative matrix factorization (Convex-NMF) were derived to handle mixed-sign data [10].

As mentioned above, these methods have enhanced the performance of NMF, but they still have the following limitations: (1) high-dimensional data carry intrinsic geometrical information, yet these methods ignore the nonlinear low-dimensional geometrical structure of the original data; (2) real data always contain noise and outliers, so a robust NMF-based approach is needed to effectively suppress their influence.

For the first issue, graph regularized non-negative matrix factorization (GNMF) was presented to discover the manifold structure of the raw data [11]. However, graph regularization is based on constructing k-nearest neighbors in a simple graph, which explores only the pairwise relationship between two sample points. Zeng et al. introduced hypergraph regularized non-negative matrix factorization (HNMF) to encode the relationship among two or more sample points [12]. Unlike a simple graph, a hyperedge of a hypergraph contains a series of related vertices, so high-order relationships in the data can be found. GNMF and HNMF consider important manifold information, but they are exceptionally sensitive to noise and outliers. The second problem can be effectively alleviated by using the L2, 1-norm to estimate the residual [13].

Inspired by these works, this paper presents a novel NMF model called robust hypergraph regularized non-negative matrix factorization (RHNMF). It adds hypergraph regularization and the L2, 1-norm to the traditional NMF, so it has the advantage of considering the higher-order relationships among samples while controlling the influence of noise and outliers. The main contributions of RHNMF are summarized as follows:

(i) To capture high-order relationships among sample points, hypergraph regularization is applied to the objective function. This makes sense for enhancing the performance of NMF-based methods.

(ii) The L2, 1-norm instead of the Frobenius norm is used to estimate the residual approximation, so that the error term for each data point is no longer in squared form. This greatly suppresses the effects of noise and outliers. Moreover, the L2, 1-norm is suitable for clustering and feature selection because it produces sparse rows.

(iii) Comprehensive experiments are designed on the multi-view datasets to demonstrate the effectiveness of RHNMF, and satisfactory results are achieved.

The rest of the paper is arranged as follows. In the “Methods” section, we introduce the NMF, L2, 1-norm, and hypergraph regularization. The proposed RHNMF method, the solution process, its convergence, and computational complexity analysis are also described in detail. Experimental results are demonstrated in the section “Results and discussion.” The conclusion is drawn in the section “Conclusions.”

Methods

Non-negative matrix factorization

In the field of bioinformatics, gene expression data are usually expressed in the form of a matrix: each column of the matrix represents a sample, and each row represents the expression level of a gene. Given a data matrix X = [x1, x2, …, xn] ∈ Rm × n, the column vector xj is a sample vector. NMF aims at finding two non-negative matrices U = [u1, u2, …, uk] ∈ Rm × k and V = [v1, v2, …, vn] ∈ Rk × n whose product approximates the data matrix X [14]. U represents a basis matrix, and V represents a coefficient matrix. The objective function of NMF to be minimized is as below:

$$ \underset{\mathbf{U},\mathbf{V}}{\min }{\left\Vert \mathbf{X}-\mathbf{UV}\right\Vert}_F^2=\sum \limits_{j=1}^n{\left\Vert {\mathbf{x}}_j-\mathbf{U}{\mathbf{v}}_j\right\Vert}^2\quad s.t.\ \mathbf{U}\ge 0,\ \mathbf{V}\ge 0, $$
(1)

where ‖⋅‖F represents the Frobenius norm of a matrix. Each xj can be seen as a linear combination of the columns of U, weighted by the corresponding column of V [15].
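To make the factorization concrete, the following is a minimal NumPy sketch of the classical multiplicative update rules that minimize Eq. (1) [14]; the function name and the parameters n_iter, eps, and seed are illustrative choices rather than part of the original formulation.

```python
import numpy as np

def nmf(X, k, n_iter=200, eps=1e-10, seed=0):
    """Minimal sketch of standard NMF with multiplicative updates [14].

    X : (m, n) non-negative data matrix (genes x samples).
    k : number of latent factors.
    Returns the basis matrix U (m, k) and the coefficient matrix V (k, n).
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((k, n))
    for _ in range(n_iter):
        # Multiplicative updates keep U and V non-negative;
        # eps guards against division by zero.
        U *= (X @ V.T) / (U @ V @ V.T + eps)
        V *= (U.T @ X) / (U.T @ U @ V + eps)
    return U, V
```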

L2, 1-norm

Given any matrix X ∈ Rm × n, ‖X‖2, 1 first computes the L2-norm of each row to form a column vector, and then computes the L1-norm of that vector [13], i.e.,

$$ {\left\Vert \mathbf{X}\right\Vert}_{2,1}=\sum \limits_{i=1}^m\sqrt{\sum \limits_{j=1}^n{\mathbf{x}}_{i,j}^2}=\sum \limits_{i=1}^m{\left\Vert {\mathbf{x}}^i\right\Vert}_2 $$
(2)

As shown above, the L2, 1-norm induces row sparsity [16]. At the same time, the L2, 1-norm is not susceptible to noise and outliers, so it improves the robustness of the algorithm.
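As a quick illustration (a minimal sketch; the helper name l21_norm is ours), Eq. (2) can be computed in a few lines:

```python
import numpy as np

def l21_norm(X):
    """L2,1-norm of Eq. (2): the L2-norm of each row,
    then the L1-norm (sum) of the resulting row norms."""
    return np.sqrt((X ** 2).sum(axis=1)).sum()

# Row-sparsity intuition: a zero row contributes nothing, so
# minimizing the L2,1-norm drives whole rows toward zero.
A = np.array([[3.0, 4.0],
              [0.0, 0.0]])
print(l21_norm(A))  # 5.0: only the first row contributes
```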

Hypergraph regularization

Inspired by simple graph theory, the hypergraph came into being [17]. In a simple graph, an edge connects exactly two data samples, and its weight encodes only the pairwise relationship between those two sample points [11]. To overcome this limitation, a hypergraph considers the relationships among multiple vertices by constructing hyperedges over them [12].

Let a triple G = (V, E, W) represent a hypergraph, where the vertex set is represented by V, the hyperedge set is E, and W is the diagonal matrix of hyperedge weights. Figure 1a shows an example of a hypergraph with eight vertices and three hyperedges, where the hyperedge set is E = {e1 = {v1, v2, v3}, e2 = {v3, v4, v5}, e3 = {v5, v6, v7, v8}}. In Fig. 1b, H ∈ R|V| × |E| represents the hypergraph’s incidence matrix, which is calculated as below:

$$ \mathbf{H}\left(v,e\right)=\left\{\begin{array}{c}1\kern2.2em if\kern0.1em v\in e,\\ {}0\kern1.25em \mathrm{otherwise}.\end{array}\right. $$
(3)
Fig. 1

Illustration of the hypergraph. a An example of a hypergraph. b Its corresponding incidence matrix

For any hyperedge ei, its weight Wi is denoted as follows:

$$ {\mathbf{W}}_i=\mathbf{W}\left({e}_i\right)=\sum \limits_{v_j\in {e}_i}\exp \left(-\frac{{\left\Vert {v}_i-{v}_j\right\Vert}_2^2}{\delta}\right), $$
(4)

where \( \delta =\frac{1}{k}\sum \limits_{v_j\in {e}_i}{\left\Vert {v}_i-{v}_j\right\Vert}_2^2 \) and k is the number of nearest neighbors used to construct each hyperedge. d(v) represents the degree of vertex v and is expressed as follows:

$$ d(v)=\sum \limits_{e\in E}w(e)\mathbf{H}\left(v,e\right). $$
(5)

And the degree of each hyperedge can be denoted as:

$$ f(e)=\sum \limits_{v\in V}\mathbf{H}\left(v,e\right). $$
(6)

The unnormalized hypergraph Laplacian matrix [17] is defined as:

$$ {\mathbf{L}}_{\mathrm{hyper}}={\mathbf{D}}_v-\mathbf{E}, $$
(7)

where E = HW(De)−1HT and Dv is the diagonal matrix composed of d(v). W denotes the diagonal matrix composed of w(e), and De is the diagonal matrix composed of f(e).

Hypergraph regularization [12] can be defined to minimize the following optimization problem:

$$ {\displaystyle \begin{array}{l}\kern0.5em \underset{\mathbf{V}}{\min}\frac{1}{2}\sum \limits_{e\in E}\sum \limits_{\left(i,j\right)\in e}\frac{w(e)}{f(e)}{\left\Vert {\mathbf{s}}_i-{\mathbf{s}}_j\right\Vert}^2\\ {}=\underset{\mathbf{V}}{\min } Tr\left(\mathbf{V}\left({\mathbf{D}}_v-\mathbf{E}\right){\mathbf{V}}^T\right)\\ {}=\underset{\mathbf{V}}{\min } Tr\left({\mathbf{V}\mathbf{L}}_{\mathrm{hyper}}{\mathbf{V}}^T\right),\end{array}} $$
(8)

where si and sj are low-dimensional representations of the original data points xi and xj.
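Under the construction described above (one hyperedge per vertex, built from its k nearest neighbors), a minimal NumPy sketch of Eqs. (3)–(7) might look as follows; the function name and the small smoothing constant added to delta are our assumptions.

```python
import numpy as np

def hypergraph_laplacian(X, k=5):
    """Sketch of the unnormalized hypergraph Laplacian of Eq. (7).

    Each sample (column of X) spawns one hyperedge containing itself
    and its k nearest neighbors; weights follow Eq. (4).
    """
    n = X.shape[1]
    # Pairwise squared Euclidean distances between samples (columns).
    sq = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    H = np.zeros((n, n))  # incidence matrix: vertices x hyperedges, Eq. (3)
    w = np.zeros(n)       # one weight per hyperedge, Eq. (4)
    for i in range(n):
        nbrs = np.argsort(sq[i])[1:k + 1]   # k nearest neighbors of v_i
        e = np.append(nbrs, i)              # hyperedge e_i = {v_i} U neighbors
        H[e, i] = 1.0
        delta = sq[i, nbrs].mean()          # delta of Eq. (4)
        w[i] = np.exp(-sq[i, e] / (delta + 1e-12)).sum()
    W = np.diag(w)
    Dv = np.diag(H @ w)          # vertex degrees d(v), Eq. (5)
    De = np.diag(H.sum(axis=0))  # hyperedge degrees f(e), Eq. (6)
    E = H @ W @ np.linalg.inv(De) @ H.T
    return Dv - E, Dv, E         # L_hyper = Dv - E, Eq. (7)
```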

The proposed method: robust hypergraph regularized non-negative matrix factorization (RHNMF)

Traditional NMF is a good part-based representation algorithm [4]. However, its objective function is based on squared residuals, so traditional NMF is susceptible to noise and outliers. Moreover, NMF ignores the low-dimensional manifold embedded in high-dimensional data.

To overcome the above limitations, we present a new method called RHNMF. It considers both the robustness of the algorithm and the high-order relationships among the data. In other words, the RHNMF method integrates NMF, the L2, 1-norm, and the hypergraph. The objective function of RHNMF is defined as follows:

$$ \underset{\mathbf{U},\mathbf{V}}{\min }{\left\Vert \mathbf{X}-\mathbf{UV}\right\Vert}_{2,1}+\alpha Tr\left({\mathbf{V}\mathbf{L}}_{\mathrm{hyper}}{\mathbf{V}}^T\right)\kern1em s.t.\mathbf{U}\ge 0,\mathbf{V}\ge 0, $$
(9)

where Tr(⋅) represents the trace of the matrix and α ≥ 0 denotes the weighting parameter to balance two terms.

Solution of RHNMF

By using ‖B‖2, 1 = Tr(BDBT), the objective function in Eq. (9) is expressed as follows:

$$ {\displaystyle \begin{array}{l}\kern0.5em Tr\left(\left(\mathbf{X}-\mathbf{UV}\right)\mathbf{D}{\left(\mathbf{X}-\mathbf{UV}\right)}^T\right)+\alpha\, Tr\left({\mathbf{V}\mathbf{L}}_{\mathrm{hyper}}{\mathbf{V}}^T\right)\\ {}= Tr\left({\mathbf{XDX}}^T\right)-2 Tr\left({\mathbf{XDV}}^T{\mathbf{U}}^T\right)+ Tr\left({\mathbf{U}\mathbf{V}\mathbf{DV}}^T{\mathbf{U}}^T\right)\\ {}\kern0.6em +\alpha\, Tr\left({\mathbf{V}\mathbf{L}}_{\mathrm{hyper}}{\mathbf{V}}^T\right),\end{array}} $$
(10)

where D denotes the diagonal matrix whose jth diagonal element is

$$ {\mathbf{D}}_{jj}=1\Big/\sqrt{\sum \limits_{i=1}^m{\left(\mathbf{X}-\mathbf{UV}\right)}_{i,j}^2+\gamma }=1\Big/\left({\left\Vert {\mathbf{x}}_j-\mathbf{U}{\mathbf{v}}_j\right\Vert}_2+\gamma \right), $$
(11)

where γ is a sufficiently small positive number (infinitely close to 0 but not 0) that prevents division by zero. The multiplicative update rule is used to iteratively update the objective function to minimize the error. The Lagrangian function f can then be expressed as

$$ {\displaystyle \begin{array}{l}f= Tr\left({\mathbf{XDX}}^T\right)-2 Tr\left({\mathbf{XDV}}^T{\mathbf{U}}^T\right)+ Tr\left({\mathbf{U}\mathbf{VDV}}^T{\mathbf{U}}^T\right)\\ {}\kern1.3em +\alpha Tr\left({\mathbf{V}\mathbf{L}}_{\mathrm{hyper}}{\mathbf{V}}^T\right)+ Tr\left({\boldsymbol{\uppsi} \mathbf{U}}^T\right)+ Tr\left({\boldsymbol{\upvarphi} \mathbf{V}}^T\right),\end{array}} $$
(12)

where ψ = [ψik] and φ = [φkj] denote Lagrange multipliers which are constrained to U ≥ 0 and V ≥ 0, respectively.

The partial derivative of f with respect to U and V can be defined as follows:

$$ \frac{\partial f}{\partial \mathbf{U}}=-2{\mathbf{XDV}}^T+2{\mathbf{UVDV}}^T+\boldsymbol{\uppsi}, $$
(13)
$$ \frac{\partial f}{\partial \mathbf{V}}=-2{\mathbf{U}}^T\mathbf{XD}+2{\mathbf{U}}^T\mathbf{UVD}+2\alpha {\mathbf{VL}}_{\mathrm{hyper}}+\boldsymbol{\upvarphi} . $$
(14)

The iterative formulas of the objective function are expressed as follows:

$$ {u}_{ik}\leftarrow {u}_{ik}\frac{{\left({\mathbf{XDV}}^T\right)}_{ik}}{{\left({\mathbf{UVDV}}^T\right)}_{ik}}, $$
(15)
$$ {v}_{kj}\leftarrow {v}_{kj}\frac{{\left({\mathbf{U}}^T\mathbf{XD}+\alpha \mathbf{VE}\right)}_{kj}}{{\left({\mathbf{U}}^T\mathbf{UVD}+\alpha {\mathbf{VD}}_v\right)}_{kj}}. $$
(16)

Then, the corresponding algorithm is given in Algorithm 1.

Algorithm 1 The RHNMF algorithm
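To complement Algorithm 1, the following is a minimal NumPy sketch of the iteration defined by Eqs. (11), (15), and (16). The initialization, the fixed iteration count, and the parameter names (n_iter, gamma, seed) are illustrative assumptions; Dv and E are the hypergraph matrices of Eq. (7), e.g., as returned by the hypergraph_laplacian sketch above.

```python
import numpy as np

def rhnmf(X, k, Dv, E, alpha=1.0, n_iter=200, gamma=1e-8, seed=0):
    """Minimal sketch of RHNMF following Eqs. (11), (15), and (16)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k))
    V = rng.random((k, n))
    for _ in range(n_iter):
        # Diagonal re-weighting matrix D of Eq. (11), one entry per sample.
        R = X - U @ V
        D = np.diag(1.0 / (np.sqrt((R ** 2).sum(axis=0)) + gamma))
        # Eq. (15): multiplicative update of the basis matrix U.
        U *= (X @ D @ V.T) / (U @ V @ D @ V.T + gamma)
        # Eq. (16): multiplicative update of the coefficient matrix V.
        V *= (U.T @ X @ D + alpha * V @ E) / (U.T @ U @ V @ D + alpha * V @ Dv + gamma)
    return U, V
```

In practice, the fixed iteration count would be replaced by the stopping rule described in the “Convergence and complexity analysis” subsection (objective value below a threshold or a maximum number of iterations).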

Finally, we use Fig. 2 to illustrate our model. From Fig. 2, we can see that the original data matrix consists of different types of data. With the L2, 1-norm constraint and hypergraph regularization, the RHNMF method has good robustness. We can perform feature selection on the basis matrix and sample clustering on the coefficient matrix, as sketched after Fig. 2.

Fig. 2

The whole framework of RHNMF
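To make the two downstream uses concrete, here is a hedged sketch: clustering the columns of the coefficient matrix V with K-means, and ranking genes by the L2 row norms of the basis matrix U. The helper names are ours, and the row-norm scoring is only an illustrative assumption; the exact gene selection scheme used in the experiments is the one introduced in [5].

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_samples(V, n_clusters):
    """Sample clustering on the coefficient matrix:
    K-means on the columns of V (one column per sample)."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(V.T)

def select_genes(U, n_genes=100):
    """Illustrative gene ranking on the basis matrix: score each
    gene (row of U) by its L2 row norm and keep the top n_genes."""
    scores = np.sqrt((U ** 2).sum(axis=1))
    return np.argsort(scores)[::-1][:n_genes]
```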

Convergence and complexity analysis

In this subsection, the convergence and computational costs of RHNMF are presented. The usual way to describe computational complexity is to count arithmetic operations. The multiplicative update rules guarantee U ≥ 0 and V ≥ 0, so we can iteratively update U and V until RHNMF’s objective function value is less than a sufficiently small number or the number of iterations exceeds the set maximum; this guarantees the convergence of the algorithm. Based on (15) and (16), we analyze the arithmetic operations of each iteration of the RHNMF method. Assume the original data matrix is X ∈ Rm × n, where m represents the number of genes, n the number of samples, k the number of factors, and g the number of nearest neighbors used when constructing hyperedges in our algorithm. Then (15) and (16) require 2mnk + 2(m + n)k2 + n(g + 3)k additions, 2mnk + 2(m + n)k2 + (m + n)k + n(g + 1)k multiplications, and (m + n)k divisions. The overall cost of each iteration of the RHNMF method is O(mnk).

Results and discussion

In this section, we apply the RHNMF model to cluster samples and to select com-abnormal expression genes. To verify the validity of RHNMF, we compare it with other methods on the multi-view datasets. These comparison methods include K-means, PCA, NMF [14], NMFL2, 1 [13], GNMF [11], HNMF [12], SHNMF [18], and RGNMF [19].

Datasets

The Cancer Genome Atlas (TCGA) program applies high-throughput sequencing technology to understand the mechanisms of the occurrence and development of cancer cells [20]. In this experiment, we evaluate the performance of the RHNMF method on four multi-view datasets involving pancreatic cancer (PAAD_GE), head and neck squamous cell carcinoma (HNSC_GE), esophagus cancer (ESCA_GE), and cholangiocarcinoma (CHOL_GE). The datasets are downloaded from TCGA. Any three of the four gene expression datasets are processed into a multi-view dataset, so a total of four multi-view datasets are formed. The data used in this paper are the gene expression data remaining after normal samples are removed. Table 1 lists the detailed information for the multi-view datasets.

Table 1 Summary of four multi-view datasets

Parameter setting

In our proposed RHNMF method, the balance parameter α affects the experimental results. Because the value of the regularization parameter represents the degree to which high-order relationships among data points are considered, an appropriate value will improve the experimental results. Fivefold cross-validation is therefore used to select the optimal parameter from the range {10^r : r ∈ {−5, −4, −3, …, 3, 4, 5}}. Figure 3 depicts the effect of parameter changes on RHNMF clustering performance. We can see from Fig. 3 that the optimal hypergraph regularization parameters α are 10^5, 10^5, 10^0, and 10^4 on PAAD_HNSC_CHOL_GE, PAAD_ESCA_CHOL_GE, PAAD_HNSC_ESCA_GE, and HNSC_ESCA_CHOL_GE, respectively.

Fig. 3

Performance of the RHNMF set with different values of α

Clustering results

In the experiment, we run each method 50 times. To illustrate the superiority of our algorithm, we compare it with the other methods on the clustering of multi-view data, employing the K-means algorithm on the decomposed coefficient matrix for sample clustering.

Evaluation metrics

In the experiment, we employ two evaluation metrics to evaluate the clustering results [21, 22]. The first evaluation metric is accuracy (AC), which is the percentage of samples that are correctly clustered. The second evaluation metric is normalized mutual information (NMI), which indicates the similarity between the cluster set we obtained and the actual cluster set. Then, the AC is calculated by

$$ AC=\frac{\sum_{i=1}^n\delta \left({s}_i,\mathrm{map}\left({r}_i\right)\right)}{n}\times 100\%, $$
(17)

where si denotes the ground truth label and ri represents the cluster label that is obtained in the clustering experiment. map(ri) denotes the mapping function that maps label ri to the label si using the Kuhn–Munkres algorithm [23]. Then, δ(x, y) denotes a delta function. When x = y, δ(x, y) is 1; otherwise, δ(x, y) is 0. In addition, n represents the number of samples.

NMI represents the degree of similarity between two cluster sets and it has been widely used. For two cluster sets C and C', NMI is expressed as:

$$ \mathrm{NMI}\left(C,{C}^{\prime}\right)=\frac{\mathrm{MI}\left(C,{C}^{\prime}\right)}{\max \left(H(C),H\left({C}^{\prime}\right)\right)}, $$
(18)

where H(C) and H(C') represent the entropies of C and C'. MI(C, C') represents the mutual information between two cluster sets.
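Both metrics can be computed with standard libraries. Below is a hedged sketch (the helper name cluster_accuracy is ours) that uses SciPy’s Hungarian solver for the label mapping of Eq. (17) and scikit-learn’s NMI with max-normalization, which matches the denominator of Eq. (18).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def cluster_accuracy(true_labels, pred_labels):
    """AC of Eq. (17): find the best one-to-one mapping of cluster
    labels to ground-truth labels via the Kuhn-Munkres algorithm [23]."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    classes = np.unique(np.concatenate([true_labels, pred_labels]))
    cost = np.zeros((classes.size, classes.size))
    for i, c in enumerate(classes):
        for j, r in enumerate(classes):
            # Negative count, since linear_sum_assignment minimizes.
            cost[i, j] = -np.sum((true_labels == c) & (pred_labels == r))
    row, col = linear_sum_assignment(cost)
    return -cost[row, col].sum() / true_labels.size

# NMI of Eq. (18), with max(H(C), H(C')) in the denominator:
# normalized_mutual_info_score(true_labels, pred_labels, average_method="max")
```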

Comparison of clustering performance

To illustrate the effectiveness of RHNMF, we perform the clustering experiment on the multi-view datasets. Then, we use AC and NMI to evaluate the clustering result. Finally, the details of clustering results are summarized in Table 2. According to Table 2, we can easily draw the following conclusions:

Table 2 Comparison of clustering performance in multi-view datasets

(i) On these four multi-view datasets, HNMF and SHNMF outperform the GNMF method, and RHNMF likewise outperforms the RGNMF method. This is because graph regularization only considers the intrinsic geometric relationships between pairs of samples, whereas hypergraph regularization and sparse hypergraph regularization consider the manifold structure among more samples. That is, the hypergraph Laplacian is able to find geometric information among multiple samples with similar embeddings. This shows that methods constrained with a hypergraph regularization term achieve higher clustering accuracy.

(ii) According to whether the error function carries an L2, 1-norm constraint, we divide the seven NMF-based methods into three groups for comparison. NMFL2, 1 is approximately 5% and 10% higher than NMF on AC and NMI, respectively. RGNMF exceeds GNMF by 27% and 41% on AC and NMI. RHNMF is higher than HNMF and SHNMF by about 22% and 30% on the means of AC and NMI. We can see that the methods with the L2, 1-norm yield better clustering results. This is because multi-view data contain more samples and therefore more noise and outliers; fortunately, the L2, 1-norm enhances the robustness of the algorithm.

(iii) The NMF clustering results on the PAAD_ESCA_CHOL_GE and PAAD_HNSC_ESCA_GE datasets are not the worst. The reason may be that modifying traditional NMF can discard useful information and thus affect the clustering results. The clustering result of K-means is obtained by directly clustering the original dataset without dimensionality reduction. From Table 2, we can see that its clustering results are acceptable because it uses all the information in the datasets without losing any of it.

(iv) In Table 2, we can observe that our RHNMF method outperforms the other methods: the clustering performance is improved by at least 5% and 6% on AC and NMI, respectively, on all datasets. It is therefore reasonable to conclude that the combination of the hypergraph structure and the L2, 1-norm noticeably improves the clustering performance.

The development of single-cell RNA sequencing (scRNA-seq) technology has enabled the measurement of gene expression in individual cells, giving us an unprecedented opportunity to study biological mechanisms at the cellular level. A central task of single-cell analysis is to study the heterogeneity of cells, that is, to cluster a large number of cells into different groups. Therefore, in this subsection, we perform clustering experiments on a single-cell dataset using the nine methods described above. The single-cell dataset of lung epithelial cells is available in NCBI’s Gene Expression Omnibus (GEO GSE84147) and includes 540 cells (215 from controls, 275 from idiopathic pulmonary fibrosis patients, and 50 from interstitial lung disease patients) [24]. Table 3 lists the experimental results of the different methods on the lung epithelial cell dataset and shows that the RHNMF method gives the best clustering performance. Specifically, our method’s AC and NMI are about 1% and 0.5% better than the second best results. The reason is that our method considers both the robustness of the algorithm and the high-order relationships among the data. This also shows that our approach applies not only to TCGA datasets but also to single-cell datasets.

Table 3 The clustering performance of the nine methods on single-cell dataset

Com-abnormal gene selection results

Cancer is among the most common modern diseases and a serious threat to human life and health. Changes in the genome often lead to cancer [25, 26]. Therefore, we select com-abnormal genes on the PAAD_ESCA_CHOL_GE dataset (to save space, we only list the experimental results on this dataset). To study the connections among multiple cancers, pancreatic cancer (PAAD), esophagus cancer (ESCA), and cholangiocarcinoma (CHOL) are considered.

In the experiment, the gene selection method used is introduced in [5]. We select 100 genes from each method for comparison. The selected genes are analyzed with GeneCards (http://www.genecards.org/), a searchable, comprehensive database that succinctly provides genomic and proteomic information on all known and predicted human genes. Tables 4, 5, and 6 list the detailed experimental results.

Table 4 Performance comparison of com-abnormal gene selection in multi-view datasets
Table 5 Detailed analysis of the com-abnormal genes selected only by the RHNMF method
Table 6 Summary of the same com-abnormal genes discovered by eight methods

In Table 4, N is obtained by matching the differential genes selected by each method against the pathogenic gene pool of each integrated dataset. The RHNMF method gives the largest N. This is because the L2, 1-norm is not sensitive to noise and outliers, and the row sparsity produced by the L2, 1-norm constraint contributes to the selection of com-abnormal genes. Therefore, our method is effective for the selection of com-abnormal genes.

The com-abnormal expressed genes selected by RHNMF but not by the other methods are listed in Table 5. The relevance score refers to the correlation between a gene and a disease: a higher relevance score means that abnormal expression of the gene is more likely to cause malignant tumors. We can see that the relevance scores of KRT18 with PAAD, ESCA, and CHOL are 11.99, 11.76, and 2.61, respectively. KRT18 (Keratin 18) is a protein-coding gene that encodes the type I intermediate filament chain keratin 18, and it has been shown to be associated with the appearance of PAAD, ESCA, and CHOL [27,28,29]. HSPA5 probably plays a role in facilitating the assembly of multimeric protein complexes inside the endoplasmic reticulum. The relevance scores of HSPA5 with PAAD, ESCA, and CHOL are 11.46, 9.13, and 0.88, respectively, and HSPA5 is associated with the occurrence of PAAD, ESCA, and CHOL [30,31,32]. This suggests that biologists should further study KRT18 and HSPA5 to better understand the link among PAAD, ESCA, and CHOL, and it shows that the RHNMF method is useful in selecting com-abnormal genes.

Table 6 lists the same com-abnormal genes discovered by all eight methods; its layout is similar to that of Table 5. As is well known, one gene may be linked to a variety of diseases, and the emergence of a disease is the result of multiple genes acting together. KRT19 has the highest relevance scores for these three diseases. Together with KRT8, it helps to link the contractile apparatus to dystrophin at the costameres of striated muscle. KRT19’s related diseases include lung cancer and thyroid cancer, and some literature has shown that KRT19 is related to PAAD, ESCA, and CHOL [33,34,35]. The com-abnormal genes in Table 6 are found by all eight methods, implying the importance of these genes. Selecting com-abnormal genes by considering the links among different cancers is of great significance to PAAD, ESCA, and CHOL research.

Conclusions

In this paper, we design a novel non-negative matrix factorization model called RHNMF for sample clustering and the selection of com-abnormal genes. On the one hand, considering the low-dimensional manifold information existing in the high-dimensional data, the hypergraph regularization term is applied to the objective function of RHNMF. On the other hand, we use L2, 1-norm on the error function to enhance the robustness of the algorithm. Experimental results on the multi-view datasets demonstrate the superiority of the RHNMF in comparison with other representative methods.

However, the proposed method inevitably has limitations. For example, our method uses a traditional hypergraph, which does not capture high-order discriminant manifold information. For future work, we will introduce sparse hypergraph to solve the above problem.