Background

The tertiary structures of proteins are important for understanding their functions and have many biomedical applications, such as drug discovery [1]. With the wide application of next-generation sequencing technologies, millions of protein sequences have been generated, creating a huge gap between the number of known protein sequences and the number of solved protein structures [2, 3]. Computational structure prediction methods have the potential to fill this gap, since they are much faster and cheaper than experimental techniques, and they can also be applied to proteins whose structures are hard to determine by experimental techniques such as X-ray crystallography [1].

There are generally two major challenges in protein structure prediction [4]. The first challenge is how to sample protein structural models from a protein sequence, the so-called structure sampling problem. Two kinds of methods have been used for model sampling. The first is template-based modeling [5-11], which uses the known structures of homologous proteins as templates to build protein structural models; examples include I-TASSER [12], FALCON [10, 11], MUFOLD [13], RaptorX [14], and MTMG [15]. The second is ab initio modeling [16-21], which builds structures from scratch without using existing template structures. The second challenge is how to select good models from a pool of generated models, the so-called model ranking problem. It is essential for protein structure prediction, for example when selecting models generated by ab initio methods. There are two main types of model ranking methods. The first is consensus methods [22-25], which calculate the average structural similarity of a model against the other models in a pool as its quality score; for example, ModFOLDclust2 [24] compares 3D models of proteins by the Q measure. These methods assume that models in a pool that are more similar to other models have better quality. They showed good performance in previous Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiments [26], a worldwide experiment held every 2 years to blindly test protein structure prediction methods; in previous CASP experiments, consensus QA methods that assess model quality by pairwise comparison usually performed better than single-model QA methods that evaluate a model's quality without using information from other models. However, the accuracy of consensus methods depends on the input data, such as the proportion of good models in a pool and the similarity between low-quality models. It has been shown that this kind of method does not work well when a large portion of models are of low quality [27]. Moreover, most consensus methods have O(n²) time complexity (n: the total number of models), making them too slow to assess the quality of a large number of models. These problems highlight the importance of developing another kind of protein model quality assessment (QA) method - the single-model QA method [5, 18, 27-33], which predicts model quality based on the information from a single model itself. Single-model QA methods only require a single model as input, so their performance does not depend on the distribution of high- and low-quality models in a pool. In this paper, we focus on developing a new single-model QA method that uses deep learning in conjunction with a number of useful features relevant to protein model quality.

Currently, most single-model QA methods predict model quality from sequence evolutionary information [34], residue environment compatibility [35], structural features, and physics-based knowledge [29-32, 36-39]. One such single-model QA method, ProQ2 [40], showed relatively good performance in the CASP11 experiment; it uses Support Vector Machines with a number of features derived from a model and its sequence to predict the model's quality. ProQ3 [41] is an updated version of ProQ2 that replaces some features with energy terms calculated by Rosetta and shows superior performance over ProQ2. Another single-model QA method, RFMQA [39], applies Random Forests to structural features and knowledge-based potential energy terms and achieves good performance on CASP10 targets. In addition, ResQ [42] is a protein model quality assessment method for estimating B-factors and residue-level quality in protein structure prediction, based on local variations of modeling simulations and the uncertainty of homologous alignments.

Here, we propose a novel single-model quality assessment method based on the deep belief network, a kind of deep learning method that has shown great promise in image processing [43-45] and bioinformatics [46]. We benchmark the performance of this method on large QA datasets, including the CASP datasets, four datasets from the recently released 3DRobot decoys [47], and a dataset generated by our in-house ab initio modeling method UniCon3D. The good performance of our method, DeepQA, on these datasets demonstrates the potential of applying deep learning techniques to protein model quality assessment.

The paper is organized as follows. In the Methods section, we describe the datasets and features used by the deep learning method, and how we implement, train, and evaluate our method. In the Results and Discussion section, we compare the performance of the deep learning technique with two other QA methods based on support vector machines and neural networks, and summarize the results. In the Conclusions section, we conclude the paper with our findings and future work.

Methods

Datasets

We collect models from three previous CASP experiments (CASP8, CASP9, and CASP10) from the CASP website (http://predictioncenter.org/download_area/), 3DRobot decoys [47], and 3113 native protein structures from the PISCES database [48] as the training datasets. We use CASP11 models, which were not used in training, as the test dataset, and UniCon3D ab initio CASP11 decoys as the validation dataset.

The 3DRobot decoys consist of four sets: 200 non-homologous (48 α, 40 β, and 112 α/β) single-domain proteins, each with 300 structural decoys; 58 proteins used in a Rosetta benchmark [49], each with 100 structural decoys; 20 proteins in a Modeller benchmark [50], each with 200 structural decoys; and 56 proteins in an I-TASSER benchmark, each with 400 structural decoys. Two sets (stage one and stage two) of CASP11 targets are used to test the performance of DeepQA. Each target at stage one contains 20 server models spanning the whole range of structural quality, and each target at stage two contains the 150 top server models selected by the Davis-QAconsensus method. In total, 803 proteins with 216,875 structural decoys covering a wide range of qualities are collected for training and testing DeepQA. All of these data and the calculated quality scores are available at http://cactus.rnet.missouri.edu/DeepQA/. The quality score of a model is the GDT-TS score [51] in the range [0, 1], which measures the similarity between the model and its corresponding native structure. The LGA package [52] is used to calculate GDT-TS scores, and models and native structures split by domains are downloaded from the official CASP website. In addition, we validate the performance of our QA methods on a dataset produced by our ab initio modeling tool UniCon3D, which includes 24 targets and 20,030 models in total. The average first-ranked GDT-TS score (GDT_TS1) over the 84 targets of stage one and stage two is 0.54 and 0.58, respectively. For the ab initio dataset, the average first-ranked GDT-TS score is 0.20.

Input features for DeepQA

In total, 16 features are used as input for our method DeepQA; they describe the structural, physico-chemical, and energy properties of a protein model. Nine of these features are available top-performing energy and knowledge-based potential scores: the ModelEvaluator score [31], Dope score [32], RWplus score [30], RF_CB_SRS_OD score [29], Qprob score [33], GOAP score [53], OPUS score [54], ProQ2 score [40], and DFIRE2 score [55].

The remaining seven input features are generated from the physico-chemical properties of a protein model. They are calculated from a structural model and its protein sequence [37] and include: the secondary structure similarity (SS) score, solvent accessibility similarity (SA) score, secondary structure penalty (SP) score, Euclidean compact (EC) score, surface (SU) score, exposed mass (EM) score, and exposed surface (ES) score. All 16 scores are converted into the range between zero and one for training the deep learning networks, and the following formula is used to normalize the DFIRE2, RWplus, and RF_CB_SRS_OD scores:

$$ \left\{\begin{array}{l} Norm\_S_{DFIRE2}=\dfrac{-P_{DFIRE2}}{1.971\times L}\\[6pt] Norm\_S_{RWplus}=\dfrac{-P_{RWplus}}{232.6\times L}\\[6pt] Norm\_S_{RF\_CB\_SRS\_OD}=\dfrac{700-P_{RF\_CB\_SRS\_OD}}{1000+0.4823\times L}\end{array}\right. $$

Here, $L$ is the sequence length, $P_{DFIRE2}$ is the raw DFIRE2 score, $P_{RWplus}$ is the raw RWplus score, and $P_{RF\_CB\_SRS\_OD}$ is the raw RF_CB_SRS_OD score. The normalized score is set to zero when the calculated result is less than zero, and to one when it is larger than one. If a feature cannot be calculated for a model due to the failure of a tool, its value is set to 0.5.
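As an illustration of this normalization, the following is a minimal Python sketch (the function name and the use of None for missing scores are our own assumptions; the constants come from the formula above):

```python
# Hypothetical helper (not the authors' code): normalize the three raw potential
# scores into [0, 1] following the formula above, clamp out-of-range values, and
# fall back to 0.5 when a score could not be computed (None).
def normalize_potentials(dfire2, rwplus, rf_cb_srs_od, L):
    def clamp(x):
        return min(max(x, 0.0), 1.0)
    norm = {}
    norm["DFIRE2"] = clamp(-dfire2 / (1.971 * L)) if dfire2 is not None else 0.5
    norm["RWplus"] = clamp(-rwplus / (232.6 * L)) if rwplus is not None else 0.5
    norm["RF_CB_SRS_OD"] = (clamp((700 - rf_cb_srs_od) / (1000 + 0.4823 * L))
                            if rf_cb_srs_od is not None else 0.5)
    return norm
```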

A summary table of all features and their descriptions is given in Table 1.

Table 1 16 features for benchmarking DeepQA

Deep belief network architectures and training procedure

Our in-house deep belief network framework [46] is used to train deep learning models for protein model quality assessment. As shown in Fig. 1, in this framework two layers of Restricted Boltzmann Machines (RBMs) form the hidden layers of the network, and a logistic regression node is added at the top to output a real value between 0 and 1 as the predicted quality score. The weights of the RBMs are initialized by unsupervised learning, called pre-training, which is carried out with the 'contrastive divergence' algorithm to adjust the weights of the RBM networks [56]. The mean squared error is used as the cost function in the standard error back-propagation process. The final deep belief architecture is fine-tuned and optimized with Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization [57]. We divide the training data equally into five sets, and five-fold cross-validation is used to train and validate DeepQA. Five parameters of DeepQA are adjusted during the training procedure: the total number of nodes in the first hidden layer (N1), the total number of nodes in the second hidden layer (N2), the learning rate Ɛ (default 0.001), the weight cost ω (default 0.07), and the momentum ν (default from 0.5 to 0.9). The last three parameters are used for training the RBMs. The average Mean Absolute Error (MAE), the average absolute difference between predicted and real quality scores, is calculated in each round of five-fold cross-validation to estimate model accuracy.

Fig. 1 The Deep Belief Network architecture for DeepQA
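For readers who prefer a concrete view of the training pipeline, the following is a minimal Python sketch under our own assumptions, not the DeepQA implementation itself: two stacked RBMs are pre-trained layer-wise with contrastive divergence using scikit-learn's BernoulliRBM, and a sigmoid output node is then fitted by minimizing mean squared error with L-BFGS-B. For brevity, only the output layer is fine-tuned here, whereas DeepQA fine-tunes the whole network with BFGS; the layer sizes and learning rate follow the defaults quoted above, and X (16 normalized features per model) and y (GDT-TS scores) are placeholders.

```python
# Illustrative sketch only (not the DeepQA source): greedy RBM pre-training
# followed by fitting a sigmoid output unit with L-BFGS-B on mean squared error.
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import BernoulliRBM

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_dbn_like_qa(X, y, hidden=(20, 10), lr=1e-4, n_iter=100):
    # Greedy layer-wise pre-training with contrastive divergence (CD-1).
    rbms, H = [], X
    for n_hidden in hidden:
        rbm = BernoulliRBM(n_components=n_hidden, learning_rate=lr,
                           n_iter=n_iter, random_state=0)
        H = rbm.fit_transform(H)              # hidden activations feed the next RBM
        rbms.append(rbm)

    # Fit the top sigmoid (logistic) output node by minimizing MSE with L-BFGS-B.
    def mse(theta):
        w, b = theta[:-1], theta[-1]
        return np.mean((sigmoid(H @ w + b) - y) ** 2)

    result = minimize(mse, np.zeros(H.shape[1] + 1), method="L-BFGS-B")

    def predict(X_new):
        Z = X_new
        for rbm in rbms:
            Z = rbm.transform(Z)
        w, b = result.x[:-1], result.x[-1]
        return sigmoid(Z @ w + b)             # predicted quality score in [0, 1]

    return predict

# Usage with stand-in data (1000 models, 16 normalized features each):
# X, y = np.random.rand(1000, 16), np.random.rand(1000)
# predict = train_dbn_like_qa(X, y)
```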

Model accuracy evaluation metrics

We evaluate the accuracy of DeepQA on 84 protein targets, using both stage one and stage two models of the 11th community-wide experiment on the Critical Assessment of Techniques for Protein Structure Prediction (CASP11), which are available on the official CASP website (http://www.predictioncenter.org/casp11/index.cgi).

First, the real GDT-TS score of each protein model is calculated against the native structure with the TM-score program [51]. Second, all feature scores are calculated for each protein model. Finally, the trained DeepQA is used to predict the quality score of a model from its input feature scores.

To evaluate the performance of a QA method, we use the following metrics: the average per-target loss, which is the difference between the GDT-TS score of the best model in the model pool and that of the top one model selected by the QA method; the average per-target correlation, which is the Pearson correlation between the real GDT-TS scores of all models of a target and their predicted scores; the summation of the real TM-scores and RMSDs of the top one models selected by the QA method; and the summation of the real TM-scores and RMSDs of the best of the top five models selected by the QA method.
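For clarity, the two main per-target metrics can be sketched as follows (our own illustration; the input arrays of real and predicted GDT-TS scores for one target are assumed):

```python
# Assumed inputs: parallel numpy arrays of real and predicted GDT-TS scores for
# all models of one target. Per-target values are averaged over targets elsewhere.
import numpy as np
from scipy.stats import pearsonr

def per_target_correlation_and_loss(real_gdtts, pred_gdtts):
    corr, _ = pearsonr(pred_gdtts, real_gdtts)            # per-target correlation
    top1 = int(np.argmax(pred_gdtts))                     # model ranked first by the QA method
    loss = float(np.max(real_gdtts) - real_gdtts[top1])   # per-target loss
    return corr, loss
```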

To evaluate the performance of QA methods on ab initio models, we calculate the average per-target TM-score and RMSD of the top one model, and of the best of the top five models, selected by each QA method.

Results and discussion

Comparison of Deep learning with support vector machines and neural networks

We train the deep belief network and two other widely used machine learning techniques (Support Vector Machines and Neural Networks) separately on our training datasets and compare their performance using a five-fold cross-validation protocol. SVMlight [7] is used to train the support vector machine, and Weka [58] is used to train the neural networks. The RBF kernel is used for the support vector machine, and the following three parameters are adjusted: C, the trade-off between training error and margin; Ɛ, the width of the epsilon tube for regression; and gamma, the RBF kernel parameter. To speed up training, we randomly select 7,500 data points from the whole dataset to form a small dataset for estimating the support vector machine parameters. Based on the cross-validation results on this small dataset, C is set to 60, Ɛ to 0.19, and gamma to 0.95. For the neural network, we adjust the following three parameters: the number of hidden nodes in the first layer (from 5 to 40), the number of hidden nodes in the second layer (from 5 to 40), and the learning rate (from 0.01 to 0.4). Based on the cross-validation results on the entire dataset, the number of hidden nodes is set to 40 and 30 for the first and second layer respectively, and the learning rate is set to 0.3. For the deep belief network, we test the number of hidden nodes in the first and second RBM layer from 5 to 40, the learning rate Ɛ from 0.0001 to 0.01, the weight cost ω from 0.001 to 0.7, and the momentum ν from 0.5 to 0.9. Based on the MAE of the cross-validation results, we find the following parameters give good performance: 20 and 10 hidden nodes in the first and second RBM layer respectively, a learning rate of 0.0001, a weight cost of 0.007, and a momentum increasing from 0.5 to 0.9. After these three machine learning methods are trained, they are evaluated on the test datasets.
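To illustrate the cross-validation protocol, the sketch below computes the average MAE of one hyperparameter setting over five folds; scikit-learn's MLPRegressor stands in for the actual models, and the grid values loosely mirror the ranges quoted above (this is an assumption, not the original tuning code):

```python
# Five-fold cross-validation returning the average MAE for one hyperparameter
# setting; the best setting is the one with the lowest MAE over the grid.
import numpy as np
from itertools import product
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor

def cv_mae(X, y, n1, n2, lr, n_splits=5):
    maes = []
    for train_idx, val_idx in KFold(n_splits, shuffle=True, random_state=0).split(X):
        model = MLPRegressor(hidden_layer_sizes=(n1, n2), learning_rate_init=lr,
                             max_iter=500, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        pred = np.clip(model.predict(X[val_idx]), 0.0, 1.0)   # quality scores lie in [0, 1]
        maes.append(np.mean(np.abs(pred - y[val_idx])))
    return float(np.mean(maes))

# Example grid search over layer sizes and learning rates:
# grid = product([5, 10, 20, 40], [5, 10, 20, 40], [1e-4, 1e-3, 1e-2])
# best_n1, best_n2, best_lr = min(grid, key=lambda p: cv_mae(X, y, *p))
```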

The correlations and losses on both stage one and stage two models of the CASP11 datasets are calculated for these three methods, and the results are shown in Table 2. The deep belief network has the best average per-target correlation on both stage one and stage two. The loss of DeepQA is also lower than or equal to that of the other two methods. The results of the Wilcoxon signed rank test between the deep belief network and the other two methods are also included in Table 2. These results suggest that the deep belief network is a good choice for the protein model quality assessment problem.

Table 2 The accuracy of the Deep Belief Network, Support Vector Machines, and Neural Networks in terms of Mean Absolute Error (MAE) based on cross-validation of the training datasets with 16 features, and the average per-target correlation and loss on stage one and stage two of the CASP11 datasets for the three techniques. P-values measure the significance of the differences between the DBN and the other two methods

Comparison of DeepQA with other single-model QA methods on CASP11

In order to reduce model complexity and improve accuracy, we perform a further analysis to select good features out of the 16 features for DeepQA. First, we fix a set of parameters that performs well with all 16 features (e.g., the number of nodes in the first and second hidden layer is set to 20 and 10, respectively), and then train the Deep Belief Network on different combinations of the 16 features. Based on the MAE of these models on the training datasets, we select the following features, which give relatively good performance with low model complexity, as the final features of DeepQA: the surface score, Dope score, GOAP score, OPUS score, RWplus score, ModelEvaluator score, secondary structure penalty score, Euclidean compact score, and Qprob score. After DeepQA is trained with this subset of features on the training data, it is blindly tested on the test datasets.

We evaluate DeepQA on the CASP11 datasets and compare it with other single-model QA methods that participated in CASP11. We use the standard evaluation metrics, the average per-target correlation and the average per-target loss based on GDT-TS score, to evaluate the performance of each method (see the results in Table 3). On stage one of CASP11, the average per-target correlation of DeepQA is 0.64, the same as ProQ2, the top single-model quality assessment method in the CASP11 experiment, and better than Qprob. The average per-target loss of DeepQA is 0.09, the same as ProQ2 and ProQ2-refine and better than the other single-model QA methods. On stage two models of CASP11, DeepQA has the highest average per-target correlation. Its average per-target loss is the same as ProQ2 and better than all other QA methods. The results of the Wilcoxon signed rank test between DeepQA and the other methods are also included in Table 3. Overall, the results demonstrate that DeepQA achieves state-of-the-art performance.

Table 3 Average per-target correlation and loss for DeepQA and other top-performing single-model QA methods on CASP11. The table is ranked by the average per-target loss on stage two of CASP11. P-values of the Wilcoxon signed rank test* between DeepQA and the other methods are also included

In order to evaluate how DeepQA aids protein tertiary structure prediction methods in model selection, we apply DeepQA to select models from the stage two dataset of CASP11 submitted by top-performing protein tertiary structure prediction methods. In most cases, DeepQA helps the prediction methods improve the quality of the selected top model. For example, DeepQA improves the overall Z-score of Zhang-Server by 6.39, BAKER-ROSETTASERVER by 16.34, and RaptorX by 6.66. The results of applying DeepQA to the 10 top-performing protein tertiary structure prediction methods are shown in Additional file 1: Table S1.

Case study of DeepQA on ab initio datasets

In order to assess the ability of DeepQA to evaluate ab initio models, we test it on 24 ab initio targets with more than 20,000 models generated by UniCon3D. Table 4 shows the average per-target TM-score and RMSD of the top one model and the best of the top five models selected by DeepQA, ProQ2, and two energy scores (Dope and RWplus). The results show that DeepQA achieves good performance in terms of TM-score and RMSD compared with ProQ2 and the two top-performing energy scores. The TM-score difference between DeepQA and ProQ2 for the best of the top five models is significant. Z-score is also widely used to gauge the significance of QA methods for model selection, so the summation of Z-scores based on TM-score and RMSD for each QA method is also included in Table 4. The results demonstrate that DeepQA achieves the best performance among these methods in terms of Z-score. Additional files 2 and 3: Tables S2 and S3 show the per-target TM-score and RMSD of DeepQA and ProQ2 on this ab initio dataset, along with the Z-scores of the top one model and the best of the top five models for DeepQA.
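The paper does not spell out the exact Z-score formula, so the sketch below uses one common per-target definition as an assumption: the real scores of a target's decoy pool are standardized, the Z-score of the model ranked first by a QA method is recorded, and these values are then summed over targets.

```python
# Assumed Z-score definition (not confirmed by the source): Z-score of the
# QA-selected model relative to its target's decoy pool of real scores.
import numpy as np

def selected_model_zscore(pool_scores, selected_index):
    mu, sigma = np.mean(pool_scores), np.std(pool_scores)
    return float((pool_scores[selected_index] - mu) / sigma) if sigma > 0 else 0.0
```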

Table 4 Model selection ability of DeepQA, ProQ2, Dope, and RWplus scores on the ab initio dataset based on TM-score and RMSD, and their summed Z-scores

Comparison of DeepQA with individual features on CASP11

In order to examine the improvement DeepQA achieves by integrating multiple features for protein quality assessment, and specifically its improvement over its nine input training features, we perform a Wilcoxon signed rank test on the per-target correlation and loss metrics between each input feature and the DeepQA predictions. The correlation, loss, and significance on stage one and stage two for DeepQA and the nine input training features are shown in Table 5. DeepQA achieves the best correlation on stage one against all nine features, and the P-value of the comparison between DeepQA and most features (except Qprob) is less than 0.05. On stage two, the P-value is less than 0.05 for DeepQA against all nine input features. For the loss metric, DeepQA achieves the best performance against all nine input features, but the P-values show that the improvement is not always significant. In summary, compared with all nine input features, DeepQA shows improvement in both correlation and loss on the CASP11 datasets. The improvement in correlation is significant relative to most input features (except Qprob) according to the Wilcoxon signed rank test, whereas the improvement in loss is not significant relative to most input features, especially on stage two of the CASP11 datasets.

Table 5 Average per-target correlation and loss on stage one and stage two of CASP11 for DeepQA and its training features. The significance of the difference between DeepQA and each individual feature was assessed by the Wilcoxon signed rank test*, and its P-value is included to show the improvement of DeepQA over its input features

Conclusions

In this paper, we develop a new single-model QA method, DeepQA, based on the deep belief network. It performs better than support vector machines and neural networks, and achieves state-of-the-art performance in comparison with other established QA methods. DeepQA is also useful for ranking ab initio protein models. It could be further improved by incorporating more relevant features and training on larger datasets.