Background

Detecting protein complexes from PPI networks is a key problem in elucidating protein functions and identifying biochemical, signaling and biological processes in a cell. Like other biological molecules, most proteins do not work in isolation; they cooperate with other proteins to perform particular biological functions. Complexes are molecular aggregations of two or more proteins assembled by PPIs [1]. Accurate determination of complexes in PPI networks is therefore crucial for understanding the principles of cellular organization.

In the past several years, a large number of techniques have been developed for large-scale detection of complexes from PPI networks [2-13]. Heuristic-based algorithms find dense network regions by searching heuristically for potential clusters using an iterative greedy seed-and-extend strategy. One of the seminal efforts is MCODE [14], a density-based clustering approach that picks vertices with large weights as initial clusters and augments them to detect densely connected clusters. Similar to MCODE, Altaf-Ul-Amin et al. proposed an algorithm called DPClus [15] with good accuracy. Li et al. [16] proposed IPCA, which searches for subgraphs of small diameter whose cluster property exceeds an interaction probability threshold. The Restricted Neighborhood Search Clustering (RNSC) algorithm [17] employs a cost-based partitioning strategy. The ClusterONE method [18] detects overlapping clusters in a PPI network using a greedy seed-and-extend heuristic; an advantage of ClusterONE is its ability to find not only overlapping clusters but also clusters contained within other clusters.

One of the most widely used graph clustering algorithms is the Markov Cluster Algorithm (MCL) [19], a fast and robust method that simulates random walks on the graph to produce clusters. Many studies have indicated that MCL tolerates more noise than other clustering algorithms on PPI networks [8]. Algorithms such as R-MCL [20], SR-MCL [21], MCL-CA [22] and RRW [23] were proposed to overcome further weaknesses of MCL. However, SR-MCL still predicts too many complexes, and RRW predicts complexes of a particular size. Motivated by these limitations, we design a novel edge-weighting MCL method to detect complexes on PPI networks, which effectively improves the accuracy of clustering results. In addition, several classic clustering algorithms are available through the ClustEval framework [24], such as DBSCAN [25], Spectral Clustering [26], Transitivity Clustering [27] and fanny [28]; however, our task is not a pure clustering problem, so classical clustering algorithms would need to be combined with post-processing or other modifications.

Complete enumeration algorithms aim to enumerate all subgraphs of G whose density exceeds a specified threshold. Spirin and Mirny [29] proposed three techniques for detecting protein complexes and functional modules from PPI networks. The first approach finds cliques as modules by complete enumeration. The second leverages super-paramagnetic clustering (SPC), which assigns to each vertex a spin with several states. Lastly, they proposed a Monte Carlo optimization-based technique (MC) in which finding highly connected sets of vertices is formulated as an optimization problem. The CFinder method [30] identifies a set of k-clique modules in a PPI network, where k-cliques are k-node complete subgraphs of G with the maximum density of 1. It is based on a deterministic approach called the Clique Percolation Method (CPM) [31], which generates overlapping clusters by finding k-clique percolation communities. Cui et al. [32] showed on the yeast PPI network that near-cliques may yield higher-quality functional modules than overlapping cliques. The Clustering based on Maximal Cliques (CMC) method [33] generates maximal cliques from a weighted PPI network and merges or removes them according to their connectivity and overlap rate. A common theme among complete enumeration algorithms is exhaustive search. Although such search identifies all relevant modules within a PPI network, it is computationally expensive, so these methods are limited to relatively small PPI networks.

Leung et al. [34] developed a core-attachment approach for identifying complexes from PPI networks of a single species and studying the organization of complexes. Ulitsky and Shamir [10] reformulated the problem of finding modules with high-confidence connectivity as finding subgraphs whose minimum cut satisfies a weight threshold. Shi et al. [11] proposed a neural network-based semi-supervised learning method that combines proteomic features of subgraphs in a weighted PPI network with their topological features to generate complexes. Macropol et al. [23] proposed a protein complex prediction algorithm, named RRW, which constructs a cluster of proteins according to the stationary probabilities of a random walk. Maruyama [35] extended RRW by introducing random walks with restarts from a cluster of proteins, each weighted by the sum of the strengths of its direct physical interactions. Maruyama also proposed methods based on random walks, Naive Bayes classifiers and sampling [36-40].

The above methods focus only on static PPI networks. In reality, PPI networks in a cell are not static but dynamic [41-43]; a dynamic PPI network changes over time, with the environment and across stages of the cell cycle [44, 45]. Many methods use dynamic PPI networks to predict complexes more accurately [46, 47]. Li et al. [12] proposed the DPC algorithm to identify complexes from gene expression profiles and PPI networks, based on a statically expressed core present in all molecular cycles and short-lived dynamic attachments. Luo et al. [13] proposed the DCA method to identify more accurate protein complexes in dynamic PPI networks. Srihari et al. [48] incorporated time, in the form of cell-cycle phases, into the analysis of complexes from PPI networks and studied the temporal phenomena of complex assembly and disassembly across phases.

Existing methods construct PPI networks based on the gene expression variance of each protein [49, 50]. Segal et al. [7] introduced a unified probabilistic model to detect functional modules from gene expression, based on the assumption that genes in the same pathway display similar expression profiles and that the products of these genes work together to accomplish a certain task. Maraziotis et al. [9] proposed the DMSP algorithm, which finds functional modules by integrating gene expression and PPI data. In general, if a protein is active at a given time point, the expression level of the corresponding gene is at a peak. Some researchers use the dynamic information in gene expression data to construct time-evolving dynamic protein interaction networks, dividing proteins into active and inactive ones and combining the proteins active at the same time into a new network [12, 31, 42, 43, 49, 50]. We design a new co-expression analysis method to evaluate each protein complex based on differential co-expression information, since different proteins in the same complex show similar trends over gene expression intervals.

We propose a novel method to identify complexes on PPI networks. First, we design an edge-weighting MCL method to compute complexes on PPI networks. Second, we propose a novel co-expression analysis method to evaluate the predicted complexes based on differential co-expression information. To evaluate our method, we test it on two experimental yeast PPI networks. On the DIP network, our method achieves Precision and F-Measure values of 0.6004 and 0.5528; on the MIPS network, it achieves an F-Measure value of 0.3774. Compared with existing methods, our method improves Precision by at least 0.1752, F-Measure by at least 0.0448, and S_n by at least 0.0771.

Methods

We propose a novel method to identify protein complexes on PPI networks. First, we use the Markov Cluster Algorithm with an edge-weighting scheme to compute complexes on PPI networks. Second, we design a novel co-expression analysis method to evaluate each protein complex based on differential co-expression information. Figure 1 shows the overall process of our method and the analysis pipeline for detecting complexes from a PPI network.

Fig. 1 The overall process of our method and the analysis pipeline for detecting complexes from a PPI network

Edge-weighting scheme

A PPI network is formulated as an undirected graph G=(V,E), where v_i ∈ V represents a protein and (v_i, v_j) ∈ E denotes that protein v_i interacts with protein v_j.

Given a graph G, N(v_i) denotes the set of all neighbors of v_i in the PPI network. Let A be a |V|×|V| adjacency matrix, where A(i,j) denotes the confidence weight of edge (v_i, v_j), defined as follows.

$$A(i,j)=\left\{ \begin{array}{lcl} \frac{|N(v_{i}) \cap N(v_{j})|}{|N(v_{i}) \cup N(v_{j})|} & & {if\ (v_{i},v_{j}) \in E}\\ \max_{k \ne j} \{A(i,k)\} & & {if\ v_{i}=v_{j}}\\ 0 & & else \end{array} \right. $$

where |N(v_i) ∩ N(v_j)| is the number of common neighbors of v_i and v_j, and |N(v_i) ∪ N(v_j)| is the size of the union of their neighbor sets.

If two vertices share a larger proportion of common neighbors, a random walker moves between them with a higher probability. The canonical flow matrix M gives the transition probabilities of the random walk, where M(i,j) represents the probability of a transition from v_j to v_i, defined as follows.

$$M(i,j)=\frac{A(i,j)}{\sum_{k=1}^{n}A(k,j)} $$

where n is the number of vertices in the graph, so that each column of M sums to 1.
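As a concrete illustration, the following Python/NumPy sketch builds A and M directly from the two definitions above, assuming the network is given as a list of per-vertex neighbor index sets (the function name and data layout are ours, not part of the original implementation):

```python
import numpy as np

def build_flow_matrix(adj_sets):
    """Build the Jaccard-weighted adjacency A and the column-stochastic
    flow matrix M for a PPI graph given as neighbor sets.
    adj_sets: list of Python sets; adj_sets[i] holds the indices of N(v_i)."""
    n = len(adj_sets)
    A = np.zeros((n, n))
    # off-diagonal entries: Jaccard coefficient of the two neighborhoods, edges only
    for i in range(n):
        for j in adj_sets[i]:
            union = len(adj_sets[i] | adj_sets[j])
            if union > 0:
                A[i, j] = len(adj_sets[i] & adj_sets[j]) / union
    # diagonal entries: the maximum off-diagonal weight in the row, as defined above
    for i in range(n):
        off = np.delete(A[i], i)
        A[i, i] = off.max() if off.size else 0.0
    # column-normalize to obtain the flow matrix M
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # guard against isolated vertices
    M = A / col_sums
    return A, M
```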

Markov cluster algorithm

The Markov Cluster Algorithm, proposed by Stijn van Dongen [19], is an iterative process that applies two operations, Expand and Inflate. These operations are applied alternately to the initial stochastic matrix M until convergence. In addition, a Prune step is performed at the end of each iteration to remove entries with very small values.

Expand and inflate

The Expand operation is simply expressed as M_exp = M × M. We compute M_exp from M and then assign the obtained matrix M_exp to M.

The Inflate operation raises each entry of M to the power r and re-normalizes each column so that it sums to 1. The resulting matrix M_inf is then assigned to M. The Inflate operation, with entries M_inf(i,j), is expressed as follows.

$$M_{inf}(i,j) = \frac{M(i,j)^{r}}{\sum_{k=1}^{n} M(k,j)^{r}} $$

where n is the number of vertices in the graph, and r is the inflation parameter, by default r=2.
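A minimal NumPy sketch of the two operations as defined above (the function names are ours; this is an illustration, not the reference implementation):

```python
import numpy as np

def expand(M):
    """Expand step: square the flow matrix, M_exp = M x M."""
    return M @ M

def inflate(M, r=2.0):
    """Inflate step: raise every entry to the power r and
    re-normalize each column so that it sums to 1."""
    M_inf = np.power(M, r)
    col_sums = M_inf.sum(axis=0)
    col_sums[col_sums == 0] = 1.0   # avoid division by zero for empty columns
    return M_inf / col_sums
```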

Prune

During the iterative process, some entries take very small values, and we set these entries to zero. This speeds up convergence while keeping the key part of the aggregation information.

We use L_j = {k | M(k,j) > 0} to denote the set of vertices with non-zero values in column j; in other words, the set of vertices that receive flow from vertex v_j. We calculate the average value of all elements in L_j as follows.

$$avg(j) = \frac{\sum_{k=1}^{|L_{j}|} M(L_{j}(k),j)}{|L_{j}|} $$

We also calculate a threshold to filter the small entries in column j of M, as follows.

$$thd(j) = avg(j) - w \times \frac{\sum_{k=1}^{|L_{j}|} (M(L_{j}(k),j) - avg(j))^{2}}{|L_{j}|} $$

where w is a parameter to adjust the threshold value, by default w=1.

We set the entries in column j of M that are smaller than thd(j) to zero. After this operation, M is re-normalized so that each column again sums to 1.
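The Prune step can be sketched as follows, operating column by column on a NumPy flow matrix (again an illustrative sketch under the same assumptions; the helper name is ours):

```python
import numpy as np

def prune(M, w=1.0):
    """Prune step: in each column, zero out entries below the threshold
    thd(j) = avg(j) - w * variance over the non-zero entries,
    then re-normalize the column to sum to 1."""
    n = M.shape[1]
    for j in range(n):
        nz = M[:, j] > 0
        if not nz.any():
            continue
        vals = M[nz, j]
        avg = vals.mean()
        thd = avg - w * ((vals - avg) ** 2).mean()   # mean minus weighted variance
        M[M[:, j] < thd, j] = 0.0
        s = M[:, j].sum()
        if s > 0:
            M[:, j] /= s
    return M
```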

Cluster

After many iterations, most vertices flow to a single vertex, and there is typically one non-zero entry per column of the flow matrix M. We assign all vertices that flow to the same vertex to one cluster.
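Putting the pieces together, a simplified driver re-using the expand, inflate and prune helpers sketched above could look like this; the convergence test and iteration cap are our simplifications:

```python
import numpy as np

def mcl(M, r=2.0, w=1.0, max_iter=100, tol=1e-6):
    """Iterate Expand, Inflate and Prune until the flow matrix converges,
    then read clusters off the converged matrix: all vertices whose columns
    put their mass on the same row are assigned to one cluster."""
    for _ in range(max_iter):
        M_prev = M.copy()
        M = prune(inflate(expand(M), r), w)
        if np.abs(M - M_prev).max() < tol:
            break
    clusters = {}
    for j in range(M.shape[1]):
        attractor = int(M[:, j].argmax())   # the vertex that column j flows to
        clusters.setdefault(attractor, []).append(j)
    return list(clusters.values())
```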

Feature analysis

We propose several significant features, based on graph information and gene expression, to filter and modify the complexes predicted by the Markov Cluster Algorithm.

Connection

A direct connection (edge) in the PPI network denotes that one protein interacts with another. We use not only this direct interaction information but also indirect connections: a shortest path of length n between two proteins defines an n-connection.

If there exists an n-connection between v_i and v_j, we define Connect(v_i, v_j, n) = 1; otherwise, Connect(v_i, v_j, n) = 0. Moreover, PathNum(v_i, v_j, n) denotes the total number of n-length shortest paths from v_i to v_j.

Given a protein v_k and a complex C, we calculate the ratio of proteins in C that are n-connected to v_k, defined as follows.

$$ConnectRatio(v_{k},C,n) = \frac{\sum_{i=1}^{|C|} Connect(v_{k},v_{i},n)}{|C|} $$

We also calculate the average number of n-length shortest paths from v_k to the proteins in C, defined as follows.

$$PathRatio(v_{k},C,n) = \frac{\sum_{i=1}^{|C|} PathNum(v_{k},v_{i},n)}{|C|} $$

Here, we calculate these features for 2-connections and 3-connections.
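A possible implementation of ConnectRatio and the PathNum-based PathRatio using networkx, assuming the PPI network is an undirected nx.Graph (the helper name is ours):

```python
import networkx as nx

def connect_features(G, v_k, C, n):
    """Compute ConnectRatio(v_k, C, n) and PathRatio(v_k, C, n): the fraction
    of proteins in C reachable from v_k by a shortest path of length n, and
    the average number of such shortest paths."""
    lengths = nx.single_source_shortest_path_length(G, v_k, cutoff=n)
    connected = 0
    total_paths = 0
    for v_i in C:
        if lengths.get(v_i) == n:                     # an n-connection exists
            connected += 1
            # PathNum: number of distinct shortest paths of length n
            total_paths += sum(1 for _ in nx.all_shortest_paths(G, v_k, v_i))
    return connected / len(C), total_paths / len(C)
```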

Density

We use the density of a complex C to describe how densely its proteins are n-connected, defined as follows.

$$Den(C,n) = \frac{\sum_{v_{i},v_{j} \in C}Connect(v_{i},v_{j},n)}{|C|\times|C|} $$

When a protein v_k is added to complex C, a new complex C′ is formed. We calculate the density difference between these two complexes, as follows.

$$DenDiff(v_{k},C,n) = Den(C') - Den(C) $$

where C′ = C ∪ {v_k}.

Here, we also calculate these features for 2-connections and 3-connections.
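Den(C,n) and DenDiff(v_k,C,n) can be sketched in the same networkx setting (an illustrative sketch under the same assumptions as above):

```python
import networkx as nx

def density(G, C, n):
    """Den(C, n): fraction of ordered protein pairs in C that are n-connected."""
    C = list(C)
    pairs = 0
    for v_i in C:
        lengths = nx.single_source_shortest_path_length(G, v_i, cutoff=n)
        pairs += sum(1 for v_j in C if lengths.get(v_j) == n)
    return pairs / (len(C) * len(C))

def density_diff(G, v_k, C, n):
    """DenDiff(v_k, C, n): change in density when v_k joins C."""
    return density(G, list(C) + [v_k], n) - density(G, C, n)
```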

Co-expression

Gene expression data reflect the behavior of proteins under various conditions during a biological process [43, 51]. Each protein has a numerical expression value at every time point within the period, and the fluctuation range of expression values differs from protein to protein. We therefore normalize each value so that the expression profiles of different proteins can be compared, as follows.

$$T_{i}'(l) = \frac{T_{i}(l)}{\max_{l}\{T_{i}(l)\}} $$

where T_i(l) represents the expression value of protein v_i at time point l.

Figure 2 shows the gene expression data of the eIF3 complex over 36 intervals, and Fig. 3 shows the gene expression data of the Succinate Dehydrogenase complex (complex II). We find that the six proteins in one complex tend to have similar trends of expression values in the fixed time intervals (indicated as gray-shaded intervals).

Fig. 2 The gene expression data of the eIF3 complex on 36 intervals

Fig. 3 The gene expression data of the Succinate Dehydrogenase complex (complex II) on 36 intervals

If two proteins have similar expression levels in the same time interval, their co-expression value is high; if their gene expression profiles are not similar, their co-expression value is low. We therefore calculate the co-expression value E_co(v_i, v_j) of proteins v_i and v_j, defined as follows.

$$E_{co}(v_{i},v_{j}) = \sum_{l=1}^{m} \ln \frac{T_{i}'(l) + T_{j}'(l)}{|T_{i}'(l) - T_{j}'(l)|} $$

where m is the number of expression intervals.
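A sketch of the normalization and the pairwise co-expression score, assuming the profiles are stored in a NumPy array of shape (proteins × intervals); the small eps guard is our addition to keep the logarithm finite when two normalized values coincide:

```python
import numpy as np

def normalize_profiles(T):
    """T'_i(l) = T_i(l) / max_l T_i(l).
    T: array of shape (num_proteins, m) with T[i, l] = T_i(l)."""
    peaks = T.max(axis=1, keepdims=True)
    peaks[peaks == 0] = 1.0            # avoid dividing by zero for flat profiles
    return T / peaks

def coexpression(t_i, t_j, eps=1e-6):
    """E_co(v_i, v_j) summed over the m intervals of two normalized profiles.
    eps keeps the logarithm finite when the two values coincide; it is our
    practical guard, not part of the published formula."""
    return float(np.sum(np.log((t_i + t_j) / (np.abs(t_i - t_j) + eps))))
```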

For a complex C, we measure its co-expression value as follows.

$$E_{co}(C) = \frac{\sum_{v_{i},v_{j} \in C} E_{co}(v_{i},v_{j})}{|C|\times|C|} $$

The co-expression value between a protein v_k and a complex C is defined as follows.

$$E_{co}(v_{k},C) = \frac{\sum_{i=1}^{|C|}E_{co}(v_{k},v_{i})}{|C|} $$

We calculate the average co-expression value over all pairs of proteins in the PPI network, denoted E_co(avg). If E_co(v_i, v_j) > E_co(avg), we set Co(v_i, v_j) = 1; otherwise Co(v_i, v_j) = 0.

We calculate the ratio of co-expressed protein pairs in a given complex C, as follows.

$$CoRatio(C) = \frac{\sum_{v_{i},v_{j} \in C} Co(v_{i},v_{j})}{|C| \times |C|} $$

When a protein v_k is added to complex C, a new complex C′ is formed. We calculate the co-expression difference between these two complexes, as follows.

$$CoDiff(v_{k},C) = E_{co}(C') - E_{co}(C) $$

where C′ = C ∪ {v_k}.

For a protein v_k, we calculate the number of co-expressed proteins in a complex C, as follows.

$$CoProNum(v_{k},C) = \sum_{i=1}^{|C|} Co(v_{k},v_{i}) $$

We also calculate the ratio of co-expressed proteins for protein v_k in a complex C, as follows.

$$CoProRatio(v_{k},C) = \frac{CoProNum(v_{k},C)}{|C|}. $$
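The complex-level quantities above can be computed from a precomputed pairwise matrix of E_co values; the sketch below assumes such a matrix indexed by protein ids and, as a practical choice of ours, skips self-pairs in the sums while keeping the |C|×|C| denominators of the definitions:

```python
from itertools import permutations

def complex_coexpression_features(E, avg, C, v_k):
    """Complex-level co-expression features from a precomputed pairwise matrix
    E[i, j] = E_co(v_i, v_j) and the network-wide average avg = E_co(avg).
    Returns E_co(C), CoRatio(C), CoDiff(v_k, C), CoProNum(v_k, C), CoProRatio(v_k, C)."""
    C = list(C)
    size = len(C)

    def e_of(members):
        # ordered pairs without self-pairs; denominator follows the |C| x |C| form
        return sum(E[i, j] for i, j in permutations(members, 2)) / (len(members) ** 2)

    e_c = e_of(C)
    co_ratio = sum(1 for i, j in permutations(C, 2) if E[i, j] > avg) / (size ** 2)
    co_diff = e_of(C + [v_k]) - e_c
    co_num = sum(1 for i in C if E[v_k, i] > avg)
    return e_c, co_ratio, co_diff, co_num, co_num / size
```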

Complex detection

We set two thresholds, a lower bound and an upper bound, for each of three features used to filter the predicted complexes: Den(C,n), E_co(C) and CoRatio(C). We keep complexes with high values, discard complexes with low values, and modify the intermediate complexes. Algorithm 1 shows the overall filtering procedure.

We use a linear function of seven features to determine how to modify a given complex: ConnectRatio(v_k,C,n), PathRatio(v_k,C,n), DenDiff(v_k,C,n), E_co(v_k,C), CoDiff(v_k,C), CoProNum(v_k,C) and CoProRatio(v_k,C). We delete low-quality proteins from a complex and add neighboring proteins with high values to the complex. Algorithm 2 shows the overall modification procedure.
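Algorithms 1 and 2 are not reproduced in the text; the sketch below shows one plausible reading of the filtering rule described above, in which each complex is kept, discarded, or passed on to the modification step:

```python
def filter_decision(den, e_co, co_ratio, bounds):
    """One plausible reading of the filtering rule (not the paper's exact
    pseudocode): keep a complex whose features all reach their upper bounds,
    discard one with any feature below its lower bound, and send the remaining
    'intermediate' complexes to the modification step.
    bounds maps a feature name to a (lower, upper) pair."""
    feats = {"Den": den, "Eco": e_co, "CoRatio": co_ratio}
    if all(value >= bounds[name][1] for name, value in feats.items()):
        return "keep"
    if any(value <= bounds[name][0] for name, value in feats.items()):
        return "discard"
    return "modify"
```

With the thresholds reported later in the Results section, bounds could for example be {"Eco": (50, 80), "CoRatio": (0.20, 0.75), "Den": (0.06, 0.18)}.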

Results and discussion

Experiments show that our method achieves better results than several state-of-the-art methods for identifying protein complexes on PPI networks, improving prediction quality across many evaluation criteria.

Data set

Our method is applied to two experimental yeast PPI networks. One is retrieved from the Database of Interacting Proteins (DIP) [52], which was used in COACH [34]. The other is downloaded from the Munich Information Center for Protein Sequences (MIPS) database [53]. We remove self-interactions and duplicate interactions. The DIP network includes 4930 yeast proteins and 17,201 interactions, and the MIPS network contains 12,319 interactions among 4546 yeast proteins.

All predicted complexes are compared with the benchmark data CYC2008 [54], which contains 408 manually annotated complexes and is considered the gold standard.

We analyze the gene expression data set GSE3431 [55], downloaded from the Gene Expression Omnibus (GEO) and entitled "Logic of the yeast metabolic cycle". This data set includes 6,777 gene products, covering more than 95% of the proteins in the PPI networks.

Assessment

Two sets of measures are widely used in the literature to evaluate the performance of complex detection methods [14, 56].

Sensitivity, Positive Predictive Value, Accuracy

Sensitivity (S_n), Positive Predictive Value (PPV) and geometric Accuracy (Acc) have been proposed to evaluate the quality of protein complex prediction [56]. Given n benchmark complexes and m predicted clusters, let T_{i,j} denote the number of common proteins between the i-th benchmark complex and the j-th predicted cluster. Then S_n, PPV and Acc are defined as follows.

$$\begin{array}{*{20}l} S_{n} &= \frac{\sum_{i=1}^{n} \max_{j}\{T_{i,j}\}}{\sum_{i=1}^{n}|C_{i}|}\\ PPV &= \frac{\sum_{j=1}^{m} \max_{i}\{T_{i,j}\}}{\sum_{i=1}^{n} \sum_{j=1}^{m} T_{i,j}}\\ Acc &= \sqrt{S_{n} \times PPV} \end{array} $$

Generally, S_n indicates how well the predicted complexes cover the proteins in the benchmark complexes, and PPV indicates how likely the predicted complexes are to be true positives. The geometric accuracy (Acc) reflects the tradeoff between S_n and PPV and is computed as their geometric mean.
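These three quantities follow directly from the overlap counts T_{i,j}; a small Python sketch, with benchmark complexes and predicted clusters given as sets of protein ids:

```python
import numpy as np

def sn_ppv_acc(benchmark, predicted):
    """S_n, PPV and Acc from the overlap counts T[i, j] between benchmark
    complexes and predicted clusters, both given as sets of protein ids."""
    T = np.array([[len(b & p) for p in predicted] for b in benchmark])
    sn = T.max(axis=1).sum() / sum(len(b) for b in benchmark)
    ppv = T.max(axis=0).sum() / max(T.sum(), 1)   # guard against an all-zero T
    return sn, ppv, float(np.sqrt(sn * ppv))
```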

Precision, Recall, F-measure

The overlapping score O(C_p, C_b) is used to assess how well a predicted complex C_p matches a benchmark complex C_b [14], defined as follows.

$$O(C_{p}, C_{b}) = \frac{|C_{p} \cap C_{b}|^{2}}{|C_{p}| \times |C_{b}|} $$

where |C_p| is the number of proteins in the predicted complex and |C_b| is the number of proteins in the benchmark complex. If a predicted complex C_p has no proteins in common with a benchmark complex C_b, then O(C_p, C_b) = 0.

Usually, a predicted complex and a benchmark complex are considered a match if their overlapping score is no less than a threshold value [14]. Let P be the set of complexes predicted by a computational method and B be the set of benchmark complexes in the PPI network. The number of predicted complexes matching at least one real complex is N_cp = |{C_p | C_p ∈ P, ∃ C_b ∈ B, O(C_p, C_b) ≥ ω}|, and the corresponding number in B is N_cb = |{C_b | C_b ∈ B, ∃ C_p ∈ P, O(C_p, C_b) ≥ ω}|, with ω = 0.2 by default.

Based on the above definitions of N_cp and N_cb, Precision and Recall are defined as follows.

$$\begin{array}{*{20}l} Precision &= \frac{N_{cp}}{| P |}\\ Recall &= \frac{N_{cb}}{| B |} \end{array} $$

And, F-measure is their harmonic mean, defined as follows.

$$F-Measure = \frac{2 \times Precision \times Recall} {Precision + Recall} $$
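A compact sketch of the overlapping score and the resulting Precision, Recall and F-Measure (complexes as sets of protein ids; ω = 0.2 by default as above):

```python
def overlap_score(Cp, Cb):
    """O(C_p, C_b) = |C_p ∩ C_b|^2 / (|C_p| * |C_b|); complexes are sets."""
    inter = len(Cp & Cb)
    return inter * inter / (len(Cp) * len(Cb))

def precision_recall_f(predicted, benchmark, omega=0.2):
    """Precision, Recall and F-Measure at overlap threshold omega."""
    n_cp = sum(1 for Cp in predicted
               if any(overlap_score(Cp, Cb) >= omega for Cb in benchmark))
    n_cb = sum(1 for Cb in benchmark
               if any(overlap_score(Cp, Cb) >= omega for Cp in predicted))
    precision = n_cp / len(predicted)
    recall = n_cb / len(benchmark)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f
```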

Filtering threshold

We consider three features, collectively denoted Vector(C), to filter the predicted complexes: the co-expression values E_co(C) and CoRatio(C), and the graph-based density Den(C,n). Based on these features, each preliminary cluster is either kept, removed, or further modified. Two thresholds, an upper bound and a lower bound, are set for each of the three features. Tables 1, 2 and 3 show how much the result improves when filtering by each of the three features individually, under different thresholds. Both gene expression and graph information are effective, improving Precision by 0.04 to 0.10.

Table 1 Validity of our filtering threshold parameters (max, min) of E_co(C)
Table 2 Validity of our filtering threshold parameters (max, min) of CoRatio(C)
Table 3 Validity of our filtering threshold parameters (max, min) of Den(C,2)

In addition, the optimized threshold parameters are obtained from these three tables. The minimum value of E_co(C) can be set to 50; that is, our method removes complexes with low co-expression values, so N_cb and Recall decrease slightly while Precision increases substantially. The maximum value of E_co(C) can be set to 80; that is, our method keeps complexes with high co-expression values, and Precision and F-Measure increase slightly. Moreover, the thresholds of CoRatio(C) can be set to (0.75, 0.20), and the thresholds of Den(C,n) to (0.18, 0.06).

Modifying analysis

We use a linear function of seven features to evaluate the probability of adding a protein to a complex, as follows.

$$\begin{array}{*{20}l} L(v_{k},C) &= \sum_{i=1}^{7} w_{i} \times Feature_{i}(v_{k},C)\\ P(v_{k},C) &= \frac{1}{1 + e^{-L(v_{k},C)}} \end{array} $$

We examine the effectiveness of this linear function by fitting it to randomly generated positive and negative samples. First, we randomly select a protein in the network and choose a size for the generated complex. Then we repeatedly add a randomly chosen protein from the neighbor set until a complex of that size is produced. Finally, we compute the neighbor set of the generated complex. If adding a neighboring protein makes the new complex better than the old one, the sample is labeled positive; otherwise it is negative. Similarly, we traverse all proteins in the complex: if deleting an internal protein makes the new complex better than the old one, the sample is labeled positive; otherwise negative. We use 1894 positive samples and 9309 negative samples to obtain the optimal parameters w = {0.01, 0.02, 0.01, 0.24, 0.36, 0.03, 0.33}. The gene expression features CoProRatio(v_k,C), CoDiff(v_k,C) and E_co(v_k,C) contribute the most.
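One way to obtain the weights w_i from such labelled samples is ordinary logistic regression without an intercept, which matches the bias-free form of L(v_k,C); the paper does not state the exact fitting procedure, so the sketch below is only one reasonable choice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_modify_weights(X, y):
    """Fit the weights w_i of the linear function from labelled samples.
    X: (num_samples, 7) matrix of the seven features; y: 1 = positive, 0 = negative.
    Logistic regression without an intercept is used so the model matches the
    bias-free form of L(v_k, C); this is one reasonable choice, not necessarily
    the procedure used in the paper."""
    model = LogisticRegression(fit_intercept=False, max_iter=1000)
    model.fit(X, y)
    return model.coef_.ravel()          # the seven w_i

def modify_probability(features, weights):
    """P(v_k, C): logistic of the weighted feature sum L(v_k, C)."""
    return 1.0 / (1.0 + np.exp(-float(np.dot(weights, features))))
```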

Deleting

For CYC2008, we filter out complexes with fewer than three proteins, keeping 236 complexes with an average size of 6.68 proteins. For each complex, we randomly insert a protein from the PPI network to form a new complex. When deleting a protein at random, the probability of a correct deletion is \(P_{random} = \frac {1}{1 + 6.68} \approx 0.130\), whereas the accuracy of our deletion method is 0.48. When two or three proteins are inserted at random, the accuracy of our deletion method is 0.63 and 0.70, respectively. The analysis of the deletion method is shown in Table 4.

Table 4 Validity of our deleting method

Adding

On the DIP network, we generate 1507 complexes and use P(v_k, C) to add neighboring proteins. When adding a protein at random, the probability of a correct addition is \(P_{random} = \frac {6189}{105553} \approx 0.0586\). In contrast, the accuracy of our adding method is 0.0765 when P>0.5 and 0.3333 when P>0.8. The analysis of the adding method is shown in Table 5.

Table 5 Validity of our adding method

Comparison to existing methods

We compare the performance of our method with three existing methods: COACH, ClusterONE and MCL. COACH is a core-attachment method that detects complexes in two stages [34]: it detects complex cores and then adds attachments to these cores to form biologically meaningful structures. ClusterONE detects potentially overlapping complexes from PPI data by clustering with overlapping neighborhood expansion [18]. MCL is a graph clustering algorithm based on stochastic flow simulation [19] that is effective for clustering biological networks. To evaluate our method, we test it on two experimental yeast PPI networks.

On the DIP network, the results of our method and the three existing methods are shown in Table 6. Our method achieves Precision and F-Measure values of 0.6004 and 0.5528. COACH achieves Precision and F-Measure values of 0.2896 and 0.3211, ClusterONE achieves 0.4252 and 0.3675, and MCL achieves 0.2353 and 0.2941. Compared with the existing methods, our method improves Precision by at least 0.1752 and F-Measure by at least 0.1853.

Table 6 Results by our method and three existing methods on DIP network

On the MIPS network, the results of our method and the three existing methods are shown in Table 7. Our method achieves an F-Measure value of 0.3774, while COACH, ClusterONE and MCL achieve 0.2497, 0.3326 and 0.3158, respectively. Compared with the existing methods, our method improves F-Measure by at least 0.0448 and S_n by at least 0.0771. Although our method does not achieve the best Recall, methods with high Recall values, such as DPClus and RRW, inevitably have poor Precision: their high Recall comes from producing a very large number of clusters, which weakens Precision. In contrast, our method maintains a relatively high Recall while achieving the best Precision, reflecting the overall effectiveness of our model.

Table 7 Results by our method and three existing methods on MIPS network

An implementation of COACH is available at http://www.comp.nus.edu.sg/~lixl/, and a fast, free implementation of ClusterONE is available at http://www.paccanarolab.org/cluster-one/. We also compare our method with many other complex detection methods in Table 8; the results of these methods are taken from the literature [10, 14, 30, 33].

Table 8 Results by our method and other complex detection methods

Running time

Our experiments are conducted on a PC with an Intel(R) Xeon(R) E5-1620 CPU at 3.7 GHz and 12.0 GB of RAM. We compare the running times of the different methods on the PPI network with 4930 nodes and 17,201 edges. Our method completes complex detection within 736 s, as shown in Table 9.

Table 9 Running time of different methods on the PPI network

Conclusions

We propose a novel method to identify complexes on PPI networks. First, we design an edge-weighting MCL method to compute complexes on PPI networks. Second, we propose several significant features, based on graph information and gene expression, to filter and modify the complexes predicted by the Markov Cluster Algorithm.

Experiments show that our method achieves better results than several state-of-the-art methods for identifying complexes on PPI networks. We evaluate our method on two experimental yeast PPI networks. On the DIP network, our method achieves Precision and F-Measure values of 0.6004 and 0.5528, improvements of at least 0.1752 and 0.1853. On the MIPS network, our method achieves an F-Measure value of 0.3774, an improvement of at least 0.0448.