Background

Proteins that are predicted to be expressed from an open reading frame, but for which there is no experimental evidence of translation, are known as hypothetical proteins (HPs). Approximately 2% of the genome codes for proteins, while the remainder is non-coding or still functionally unknown [1]. These known-unknown regions for which no functional links have been discovered, i.e. those with no biochemical properties or obvious relatives in protein and nucleic acid databases, are known as orphan genes, and their end products are called HPs [2]. These proteins are of great importance, as many of them may be associated with human diseases and thus fall into functional families. Despite their lack of functional characterization, they play an important role in understanding biochemical and physiological pathways; for example, in finding new structures and functions [3], markers and pharmacological targets [4], and in early detection and other benefits for proteomic and genomic research [5]. In recent years, many efficient approaches have been developed, and publicly available tools exist to predict the function of HPs. One widely used technique is protein-protein interaction (PPI) analysis, which is considered valuable in interpreting the function of HPs [6]. While many proteins interact with other proteins to carry out their functions, challenges remain that concern not only their function but also their regulation [7]. Therefore, characterizing the uncharacterized proteins helps us to understand the biological architecture of the cell [8]. While high-throughput experimental methods such as the yeast two-hybrid (Y2H) system and mass spectrometry are available to discern the function of proteins, the datasets generated by these methods tend to be incomplete and to contain false positives [9]. 
Along with PPIs, there are other methods to identify the essentiality of proteins, such as antisense RNA [10], RNA interference [11], single-gene deletions [12] and transposon mutagenesis [13]. However, all these approaches are tedious, expensive and laborious; therefore, computational approaches combined with high-throughput experimental datasets are required to identify the function of proteins [9, 14]. Various computational methods have been designed for estimating protein function based on information generated from sequence similarity, subcellular localization, phylogenetic profiles, mRNA expression profiles, homology modelling, etc. [15]. Recently, Lei et al. predicted essential proteins based on RNA-Seq, subcellular localization and GO annotation datasets [16, 17]. Furthermore, tools such as LOCALIZER [18], which predicts the subcellular localization of both plant and effector proteins in the plant cell, and lncLocator [19], which predicts the subcellular localization of long non-coding RNAs using stacked ensemble classifiers, have proven useful. On the other hand, combined analysis of all these methods or datasets is considered more predictive, as it integrates heterogeneous biological datasets [9]. Genome-wide expression analysis, machine learning, data mining, deep learning and Markov random fields are other widely employed prediction methods [20, 21], whereas Support Vector Machines (SVMs) [22], neural networks [23], Bayesian networks [24, 25], probabilistic decision trees [26], Rosetta Stone [14, 27], and gene clustering and network neighbourhood analyses [28] have been used to combine different biological data sources to interpret biological relationships. Although these methods have been shown to be successful in predicting protein function, annotation based on feature selection for inferring the function of HPs is still wanting. 
Nevertheless, there has been a steady increase in the use of machine learning and information-theoretic features for developing efficient frameworks to predict interactions between proteins [28,29,30].

In this paper, we present a machine learning based approach to predict whether or not a given HP is functional. This method is not based on homology comparison to experimentally verified essential genes, but depends on sequence-, topology- and structure-based features that correlate with protein essentiality at the gene level. Features are the observable quantities that are given as input to a machine learning algorithm; the data across each feature are used by the learning algorithm to predict the output variables. Therefore, selecting relevant features that can predict the desired outputs is important. Various features define the essentiality of proteins. In our previous study [31], we selected six such features (orthology mapping, back-to-back orthology, domain analysis, sorting signals and sub-cellular localization, functional linkages, and protein interactions) that are potentially viable for predicting the function of HPs. Although the prediction performance of the selected features was shown to be acceptable, in the present study we added data on pseudogenes, non-coding RNA and homology modelling to increase the predictability of the functionality of these known-unknowns. The additional features are intended to capture the possibility of pseudogenes being linked to HPs, proteins that are essentially structural ‘mers’ of the candidate proteins, and the presence of non-coding RNA signatures. We discuss the performance of the newly introduced classification features from a machine learning perspective to validate the function of HPs.

Results

We report improved classification efficiency when three additional features were introduced (Table 1) into our earlier proposed six-point classification scoring schema. When we analysed the data through 10-fold cross-validation using the WEKA machine learning package, decision trees (J48) yielded an accuracy of 97%, with SVM (SMO) performing well: 98%, 93% and 96% for the Poly, RBF and NPolyK kernels, respectively; the multilayer perceptron (MLP) neural network achieved 97.67% and Naïve Bayes multinomial 98.33% (Table 2). Among the classifiers that we evaluated using WEKA, neural networks yielded the best performance, with a steady change in the performance of the model. In addition, one-way ANOVA with a significance level (α) of 0.05 was performed to ascertain the statistical significance of the mean differences across the columns or groups based on the p-value. The results were found to be statistically significant and in agreement with p-value heuristics (positive and negative p-values of 3.166E-290 and 0, respectively). To check the similarity and diversity of the samples, the Jaccard similarity coefficient was plotted, providing values ranging from perfect similarity (value 1) down to a low-similarity threshold. This was further augmented when we compared the HPs using the underlying similarity/distance matrix scores for evaluation. Furthermore, the Jaccard index statistics revealed that the annotated HPs are inferential with the first six classifiers, but the newly introduced classifiers tend to fall apart with the introduction of non-coding elements (more details in Additional file 1: Figure S2). Secondly, the negative dataset, which we call a discrete dataset, is in principle a list of all known proteins from GenBank falling under important types of HPs. The 194 proteins are probably scaled to only these types, generating bias with respect to the rest of the features. Thus, we argue that the negative dataset was largely discrete and would make for a more stringent heuristic learning set. 
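The Jaccard comparison described above can be sketched for two binary nine-feature annotation vectors; the vector encoding and the convention of scoring two empty annotations as perfectly similar are illustrative assumptions, not details from the study:

```python
def jaccard_index(a, b):
    """Jaccard similarity of two binary (0/1) annotation vectors:
    |intersection| / |union| of the positively scored features."""
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    union = sum(1 for x, y in zip(a, b) if x == 1 or y == 1)
    return inter / union if union else 1.0  # convention: two empty sets are identical

# Two hypothetical HP annotations sharing one of three positive features
jaccard_index([1, 1, 0, 0, 0, 0, 0, 0, 0],
              [1, 0, 1, 0, 0, 0, 0, 0, 0])  # 1/3
```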
To further check for redundancy, the pocket variant of the perceptron algorithm was used with a unit step activation function, starting with a random weight vector w′ of length 9, eta (a positive scale factor) of 0.2 and n of 1000. Invariably, the perceptron gave better validation across all classifiers. For example, with a random 66% split for the training and testing sets, after 1000 iterations we obtained an average accuracy of 94.04%, with a maximum of 97.97% and a minimum of 60.60%. The splits were random across all iterations, with no data point from the learning set being used in the testing set. While the SVM yielded an average accuracy of 97.36%, with a maximum of 100% and a minimum of 88.13%, Naïve Bayes gave an average accuracy of 96.62%, with a maximum of 100% and a minimum of 88.13%.
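The pocket run described above can be sketched as follows; the use of NumPy, the bias handling (which extends the weight vector by one entry) and the choice of a random misclassified point per update are illustrative assumptions:

```python
import numpy as np

def pocket_perceptron(X, y, eta=0.2, n_iter=1000, seed=42):
    """Pocket perceptron with a unit step activation: keep the best
    weight vector seen so far rather than the last one."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = rng.normal(size=Xb.shape[1])           # random initial weights
    best_w, best_err = w.copy(), len(y) + 1
    for _ in range(n_iter):
        preds = (Xb @ w >= 0).astype(int)      # unit step activation
        wrong = np.flatnonzero(preds != y)
        if wrong.size < best_err:              # pocket the best solution so far
            best_w, best_err = w.copy(), int(wrong.size)
        if wrong.size == 0:
            break
        i = rng.choice(wrong)                  # update on one misclassified point
        w = w + eta * (y[i] - preds[i]) * Xb[i]
    return best_w

def perceptron_predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w >= 0).astype(int)
```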

Table 1 Description of annotation for the three newly introduced features
Table 2 Comparison of accuracies across all features using multiple learning algorithms in WEKA (ver. 3.8), with the three additional features increasing the accuracy of the model

Discussion

The statistical evaluation suggests that, among the newly introduced classifiers, the non-coding RNA and pseudogene features have considerable impact, indicating that most of the HPs are either products of pseudogenes or linked to ncRNAs (Table 3). Among the original six features, functional linkages, Pfam and orthology are highly significant, indicating that annotating HPs across these features would predict their probable function (Table 3). Feature selection algorithms such as Correlation-based Feature Selection (CFS) and Principal Component Analysis (PCA) also showed improved accuracy, whereas the accuracies on the entire data (ALL) are the highest among the three methods, indicating the importance of all nine features for model generation (Table 4). In addition, we derived the best data subsets from the nine features by selecting the top scores from all combinations, with the ALL-subset combination “1 2 4 6 7 9” by functions_mlp (98.33) and the PCA-selected data subset “1 2 3 4 5 6 7 8” by functions_smo_npolyk (97.00) and trees_j48 (97.00) giving the best accuracies (Table 5).

Table 3 Ranking showing the impact of each feature (Rank 1: high impact; Rank 9: low impact)
Table 4 Derived accuracies by learning algorithms with default parameters set by WEKA are listed above. Column 1 lists different algorithms
Table 5 Subset evaluation. Accuracies by learning algorithms with default parameters set by WEKA and best data subset by combination (Column 3) and Feature selection method (column 5) are listed above

Overall, the combined feature selection methods provided ample evidence that all nine features are essential for model generation. Correlation analysis further allowed us to improve our classification feature selection pairs, which tend to be positive for Pfam and orthology (1 & 2), sub-cellular location and functional linkages (5 & 6), and functional linkages and homology modelling (6 & 8) (detailed in Additional file 2). In addition, the two-tailed p-values for the above-mentioned combinations (1 & 2; 5 & 6; 6 & 8) were much smaller than the correlation (R) values, indicating that the association between those variables is statistically significant. We further analysed the performance of our model using various evaluation metrics, which showed improved performance for the nine-point schema (Table 6, Additional file 3).

Table 6 Individual nine-point schema data are subjected through learning algorithms and scoring metrics are derived, averaged and tabulated. Values are compared with the six-point performance metrics

Methods

Construction of datasets

Two datasets were prepared for this study, viz. positive and negative datasets, with the former constituting the HPs and the latter representing known functional proteins. The final dataset consisted of 106 positive instances (HPs) and 194 negative instances (functional proteins). These proteins were retrieved from GenBank using the keyword search “Homo sapiens” AND “Hypothetical Proteins” and further filtered by annotation across the tools (Additional file 4). The negative dataset was used to override false positives, thereby obtaining improved precision. The algorithms learn the characteristics underlying known functional proteins from the given negative dataset; the negative dataset is also used to validate the predicted results by comparison with known functional proteins. Finally, the scores from all nine classifiers were summed to give a total reliability score (TRS; Fig. 1).
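The TRS computation can be sketched as a simple sum of binary classifier scores; the dictionary keys below are hypothetical labels for the nine features, not identifiers from the study:

```python
# Hypothetical labels for the nine features; the paper's features are
# Pfam score, orthology, back-to-back orthology, subcellular location,
# functional linkages, protein associations, pseudogene link,
# homology modelling and non-coding RNA link.
FEATURES = ["pfam", "orthology", "back_to_back_orthology",
            "subcellular_location", "functional_linkages",
            "protein_associations", "pseudogene_link",
            "homology_model", "ncRNA_link"]

def total_reliability_score(annotation):
    """Sum the nine binary (0/1) classifier scores into the TRS."""
    return sum(int(annotation.get(f, 0)) for f in FEATURES)

# Example: an HP scoring positive for four of the nine classifiers
total_reliability_score({"pfam": 1, "orthology": 1,
                         "functional_linkages": 1, "homology_model": 1})  # 4
```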

Fig. 1

Methodology adopted to generate the classification model

Significance of the features

The six features from our earlier proposed six-point classification scoring schema are Pfam score, orthology inference, functional linkages, back-to-back orthology, subcellular location and protein associations, taken from known databases and visualizers [31]. Conservation is one of the important characteristics of essential proteins: studies have shown that essential proteins evolve more slowly and are more evolutionarily conserved than non-essential proteins [32]. While we used sequence-based features such as orthology, back-to-back orthology and domain analysis to describe the essentiality of proteins from the perspective of evolutionary conservation [33], proteins also interact with each other to accomplish the biological functions of cells [34]. Apart from this, functional linkages [35] and subcellular localization [36] have been popular in predicting the essentiality of what we call the known-unknowns of proteins. The three new features considered in this model are HPs linked to pseudogenes, homology modelling, and HPs linked to non-coding RNAs. Pseudogenes are functionally defunct sequences present in the genome of an organism; these disabled copies of genes are the products of gene duplication or retrotransposition of functional genes [37]. It is generally believed that the majority of HPs are products of pseudogenes [38]. This feature is employed to check whether an HP is actually a pseudogene by performing tBLASTn, a variant of BLAST which takes a protein query and searches it against a translated nucleotide database. The homology modelling feature was introduced to predict the essentiality of a protein based on the model generated. As a protein's three-dimensional (3D) structure leads to its function, it is possible to assign biological function to proteins if one can generate a model and find interacting domains through structural bioinformatics-based approaches [39]. Most of the HPs from GenBank lack protein-coding capacity. 
Similarly, non-coding RNAs by definition do not encode proteins, which suggests that some HPs may themselves be non-coding RNAs [40]. With this feature, we checked whether HPs are associated with non-coding RNAs and are influenced by regulatory regions (detailed in Table 1).
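As an illustration of the pseudogene check, a tBLASTn run can be post-processed from its tabular output (BLAST+ `-outfmt 6`, whose 11th column is the e-value); the e-value cutoff and the sample output lines here are illustrative assumptions, not values stated in the paper:

```python
def pseudogene_hit(tabular_lines, evalue_cutoff=1e-5):
    """Return True if any tBLASTn hit has an e-value at or below the
    cutoff, suggesting the HP query matches a genomic (possibly
    pseudogene) locus. Expects BLAST+ tabular output (-outfmt 6)."""
    for line in tabular_lines:
        fields = line.rstrip("\n").split("\t")
        evalue = float(fields[10])  # column 11 of -outfmt 6 is the e-value
        if evalue <= evalue_cutoff:
            return True
    return False

# Hypothetical output lines: one strong hit and one weak hit
strong = "HP_0001\tchr7\t92.3\t120\t9\t0\t1\t120\t5000\t5359\t4e-30\t230"
weak   = "HP_0002\tchrX\t35.0\t40\t26\t1\t1\t40\t100\t219\t2.0\t28"
```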

Classifier design and training

Prediction of the function of HPs can be cast as a binary classification problem. Each protein from both datasets was annotated across the nine selected features and assigned a score of 1 if the protein met the criteria or 0 if it did not (Fig. 2). The criteria followed for scoring are shown in Additional file 5: Figure S1. The classifier was trained across the nine features according to the scores assigned to the members of each dataset. We used four major classifiers to train and test the model: (i) SVM, (ii) Naïve Bayes, (iii) decision trees and (iv) perceptron. For non-separable learning sets, a variant of the perceptron called the pocket algorithm [41] was used, which arbitrarily minimizes the error on the non-separable learning set [42]. It works by storing and using the best solution seen so far rather than relying on the last solution; the solutions it visits appear purely stochastic. 80% of the dataset was used for training and the rest for testing. We performed 1000 independent iterations of the SVM, Naïve Bayes and perceptron algorithms. Instead of a single k-fold cross-validation, we averaged the results of 1000 independent iterations so as to avoid over-fitting, as choosing an appropriate k for such a problem is beyond the scope of this work. Further, we analysed the data using the Waikato Environment for Knowledge Analysis (WEKA) software package (version 3.8) [43], where 37 other learning algorithms were used along with the four major algorithms mentioned above. WEKA was used for classifier design, training and evaluation. Finally, Jaccard indices, followed by training the datasets using machine learning algorithms, were used to infer heuristics.
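The repeated 80/20 split with averaging can be sketched as follows, with a hand-rolled Bernoulli Naive Bayes over the 0/1 feature scores standing in for the WEKA implementations; the smoothing constant and the reduced iteration count in the sketch are illustrative assumptions:

```python
import math
import random

def train_bernoulli_nb(X, y, alpha=1.0):
    """Bernoulli Naive Bayes with Laplace smoothing over 0/1 features,
    a minimal stand-in for the Naive Bayes classifiers run via WEKA."""
    n_feat = len(X[0])
    model = {}
    for c in (0, 1):
        rows = [x for x, lab in zip(X, y) if lab == c]
        log_prior = math.log(len(rows) / len(X))
        # P(feature j = 1 | class c), smoothed so no probability is 0 or 1
        p = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
             for j in range(n_feat)]
        model[c] = (log_prior, p)
    return model

def predict_nb(model, x):
    def log_posterior(c):
        log_prior, p = model[c]
        return log_prior + sum(math.log(p[j]) if x[j] else math.log(1 - p[j])
                               for j in range(len(x)))
    return 1 if log_posterior(1) > log_posterior(0) else 0

def averaged_accuracy(X, y, n_iter=200, test_frac=0.2, seed=0):
    """Average test accuracy over repeated random 80/20 splits
    (the study averaged 1000 independent iterations)."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(n_iter):
        rng.shuffle(idx)
        cut = int(len(idx) * (1 - test_frac))
        train, test = idx[:cut], idx[cut:]
        model = train_bernoulli_nb([X[i] for i in train], [y[i] for i in train])
        hits = sum(predict_nb(model, X[i]) == y[i] for i in test)
        accs.append(hits / len(test))
    return sum(accs) / len(accs)
```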

Fig. 2

Workflow to annotate HPs across each classifier (Details in Additional file 2: Figure S1)

Performance evaluation

Evaluating the performance of learning algorithms is a central aspect of machine learning. Several measures, including cross-validation as a standard method [44], specifically 10-fold cross-validation using WEKA, were applied to test the performance of the predictive model. To mitigate over-fitting, the following measures were used to evaluate the performance of the classifiers: accuracy, sensitivity, specificity, F1 score and Matthew’s Correlation Coefficient (MCC) [45, 46]. Specificity, precision, sensitivity and MCC values of 1 indicate perfect prediction accuracy [47].

The measures are defined as follows:

Accuracy = (TP + TN) / (TP + FN + FP + TN).

Sensitivity (Recall) = TP / (TP + FN).

Specificity = TN / (TN + FP).

Precision = TP / (TP + FP).

F1 Score = 2(Precision * Recall) / (Precision + Recall).

Matthews Correlation Coefficient (MCC)

= ((TP x TN) - (FP x FN)) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)).

where TP: True Positives (positive samples classified correctly as positive), TN: True Negatives (negative samples classified correctly as negative), FP: False Positives (negative samples predicted wrongly as positive) and FN: False Negatives (positive samples predicted wrongly as negative).
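The definitions above translate directly to code (note the square root in the MCC denominator); the counts in the usage example are illustrative:

```python
import math

def metrics(tp, tn, fp, fn):
    """Compute the evaluation measures defined above from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = ((tp * tn - fp * fn)
           / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f1": f1, "mcc": mcc}

metrics(50, 40, 10, 0)["mcc"]  # ≈ 0.816
```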

Conclusion

We have proposed a nine-point classification scoring schema to help functionally annotate HPs. While a large number of heuristics were interpreted in framing this problem, there is a strong need to ensure that the HPs in question are assigned a function in silico, and we have attempted to close the gap by providing functional linkages to HPs. The additional classification features may serve as a valuable resource for analysing data and for understanding the known-unknown regions. The potential regulatory function of HPs could be determined given larger curated datasets; however, this also depends on how the HPs interact with each other, as next-generation sequencing brings a new set of dimensions to the scientific community.