Assessing the Quality of Modelled 3D Protein Structures Using the ModFOLD Server

  • Daniel Barry Roche
  • Maria Teresa Buenavista
  • Liam James McGuffin
Protocol
Part of the Methods in Molecular Biology book series (MIMB, volume 1137)

Abstract

Model quality assessment programs (MQAPs) aim to assess the quality of modelled 3D protein structures. The provision of quality scores describing both global and local (per-residue) accuracy is extremely important, as without them we are unable to determine the usefulness of a 3D model for further computational and experimental wet lab studies.

Here, we briefly discuss protein tertiary structure prediction, along with the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition and its key role in driving the development of model quality assessment methods (MQAPs). We also briefly discuss the top MQAPs from previous CASP competitions. Additionally, we describe our downloadable and webserver-based model quality assessment methods: ModFOLD3, ModFOLDclust, ModFOLDclustQ, ModFOLDclust2, and IntFOLD-QA. We provide a practical step-by-step guide to using our downloadable and webserver-based tools and include examples of their application for improving tertiary structure prediction, ligand binding site residue prediction, and oligomer prediction.

Keywords

Model quality assessment · Protein tertiary structure prediction · Critical Assessment of Techniques for Protein Structure Prediction (CASP) · Web servers · Single-model quality assessment methods · Consensus-based (clustering) model quality assessment methods · Per-residue error · Fold recognition · Ligand binding site residue prediction · Oligomer prediction

1 Introduction

Proteins are essential molecules in all living cells with numerous key functional and structural roles, both within and between cells [1, 2]. Since the advent of the CASP competition, a large number of template-based and template-free tertiary structure prediction methods have been developed, with the aim of producing 3D models of proteins from sequence. Routinely, these methods generate numerous 3D models with alternative conformations, but determining the most accurate conformation is challenging. Model quality assessment programs (MQAPs) are utilized for protein 3D model quality prediction to help determine the most accurate 3D structural conformation. Hence, MQAPs have become a critical component in many of the leading protein tertiary structure prediction pipelines. Firstly, both the global and per-residue scores help in the estimation of how close a model might be to the native structure. Secondly, these scores provide details in regards to the potential errors within the model. Thirdly, the model quality scores give us a guide as to how useful the models will be in further computational and wet lab studies, such as improving tertiary structure prediction, ligand binding site residue prediction, oligomer predictions, molecular replacement, and mutagenesis studies [1, 2, 3, 4].

In order to fully understand the necessity for and application of MQAPs, protein tertiary structure prediction methods are briefly discussed along with the CASP competition that has driven method development in this area. This is followed by a brief history of MQAPs, the various categories of methods including single-model and consensus-based, and a brief introduction to the practical use of our ModFOLD servers [5, 6, 7].

1.1 A Brief Introduction to Tertiary Structure Prediction

Protein tertiary structure prediction methods can be divided into two major subcategories, the purely template-based modelling (TBM) methods and those that are able to carry out template-free modelling (FM). Basically, if a structural template can be located in the PDB [8], then TBM methods such as homology modelling and fold recognition are utilized. However, if a structural template is unavailable then template-free modelling algorithms, which include physics-based methods and knowledge-based methods need to be utilized [2].

TBM is based on three key concepts: (1) similar sequences fold into similar structures; (2) many unrelated sequences also fold into similar structures; and (3) there are only a relatively small number of unique folds when compared with the number of proteins found in nature; most of the fold space has been structurally annotated and few new folds are being solved [2, 9] (additionally see Notes 1–4).

Template-free modelling is also known as ab initio modelling, modelling from first principles, or de novo modelling. Template-free modelling is the prediction of protein tertiary structure from sequence, without utilizing a template protein structure. Template-free modelling involves the undertaking of conformational searches with the use of a designed energy function and results in the construction of several structural decoys based on potential conformations that will be utilized to select the final model. Template-free modelling energy functions are usually subcategorized into physics-based energy functions and knowledge-based energy functions, which are dependent on the utilization of statistics from experimentally solved protein structures [2, 10].

1.2 A Brief History of Model Quality Assessment

Since structural biologists first built theoretical protein models, algorithms have been developed to assess their quality. Early model quality methods were based on two broad concepts: assessing stereochemistry and predicting the free energy of the model. Popular stereochemistry methods include PROCHECK [11], WHAT-CHECK [12], and the more recent MolProbity [13]. These methods are mainly utilized to give a basic reality check of the constructed protein model. Nevertheless, when multiple models are determined to be stereochemically correct, these methods are unable to discriminate among them. Additionally, stereochemical methods may discard models that have accurate backbone topology but stereochemically incorrect placement of side chains. Furthermore, stereochemical methods produce various alternative scores, which cannot easily be combined into a single score describing overall model quality. Thus, these methods cannot truly be considered MQAPs in themselves; however, some of the single-model MQAPs do contain several of these basic checks [3].

In addition, several methods have been developed for model quality assessment that provide statistically determined energy functions, including ANOLEA [14, 15] and DFIRE [16]. Alternatively, CHARMM [17] and AMBER [18] utilize numerous physics-based energy functions dependent on molecular force fields. Despite numerous attempts, the construction of a realistic energy function has remained a major challenge [2, 3].

1.3 Critical Assessment of Techniques for Protein Structure Prediction in Relation to Model Quality

The continuous development of more advanced protein structure prediction and model quality assessment tools is driven by the Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition. CASP is a biennial competition whose main goal is the advancement of methods for the prediction of protein 3D structures from sequence. This is accomplished by providing objective testing of the methods via blind prediction. CASP is currently divided into numerous prediction categories, including: tertiary structure prediction—template-based and template-free modelling, disorder prediction, contact prediction, quality assessment, binding site prediction, and homo-oligomer prediction [2, 4, 19].

The quality assessment (QA) category was officially introduced in CASP7 (2006), with 28 methods participating [20]. The number of competing methods had risen to 46 in CASP9 (2010) [21], and 37 server methods took part in the most recent CASP10 (2012) competition. The increasing number of independent quality assessment groups competing in CASP shows heightened interest in MQAPs, which boosts competition and innovation in the field. Additionally, it demonstrates the critical role MQAPs now play in the 3D modelling of proteins.

The CASP competition initially introduced two QA categories in CASP7: QMODE1 for global model quality prediction and QMODE2 for per-residue quality prediction [20]. In CASP8 and CASP9 another assessment category emerged, QMODE3, in which the per-residue errors from QMODE2 predictions are integrated into the B-factor column of the 3D models [4, 21]. One of the top QMODE3 prediction methods from the CASP9 competition was IntFOLD-QA/IntFOLD-TS [4, 22] (see Note 3).

1.4 Cutting-Edge Model Quality Assessment Methods

MQAPs are historically divided into two main categories: single-model-based methods, which consider each model in isolation, and clustering (or consensus)-based methods, which compare multiple models for the same target. Single-model-based methods are comparable with consensus-based methods when a relatively small number of models are available. However, single-model methods currently lack accuracy when a wide range of models are available [5, 7]; thus several groups have focused on their improvement [4, 23, 24]. In addition, there are methods and servers that blur the line between single-model methods and clustering approaches, which have recently been defined as quasi-single-model methods [21]. Such quasi-single-model approaches are able to provide accurate assessments of model quality given only a single model. They work by generating a number of alternative possible model conformations based on the target sequence, which are then compared with the target model using a clustering-based approach.
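As a minimal illustration of the clustering principle used by these approaches, the sketch below scores each model by its mean pairwise similarity to the other models in the set. The similarity function here is a hypothetical stand-in for a real structural comparison score such as the TM-score; it is not part of any published method.

```python
def consensus_scores(models, similarity):
    """Score each model by its mean pairwise similarity to all the
    other models for the same target (the core clustering idea)."""
    scores = {}
    for m in models:
        others = [similarity(m, n) for n in models if n is not m]
        scores[m] = sum(others) / len(others) if others else 0.0
    return scores

def toy_similarity(a, b):
    """Hypothetical similarity in (0, 1]; NOT a real TM-score."""
    mean_diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return 1.0 / (1.0 + mean_diff)

# Three "models" as coordinate tuples: two similar, one outlier.
models = [(1.0, 2.0, 3.0), (1.1, 2.1, 3.1), (5.0, 5.0, 5.0)]
scores = consensus_scores(models, toy_similarity)
best = max(scores, key=scores.get)
```

Models that agree with many others receive high scores, which is why consensus methods need a reasonably large and diverse model set to perform well.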

In contrast to single-model-based methods, consensus-based methods are often CPU intensive [7]. However, the previous two CASP experiments [21, 25] have shown that clustering numerous server models for the same target results in the most accurate model quality predictions, both globally and on a per-residue basis. A list of the top MQAPs from CASP9 (2010) with publicly available implementations can be found in Table 1.
Table 1

Publicly available model quality assessment methods that participated in CASP9

Method        | Single/consensus     | Global/local scores     | References               | Web link
IntFOLD-QA    | Single and consensus | Global and local scores | Roche et al. [22]        | http://www.reading.ac.uk/bioinf/IntFOLD/
MetaMQAP      | Single               | Global and local scores | Pawlowski et al. [48]    | http://genesilico.pl/toolkit/mqap
ModFOLDclust2 | Consensus            | Global and local scores | McGuffin and Roche [7]   | http://www.reading.ac.uk/bioinf/ModFOLD/
ModFOLDclustQ | Consensus            | Global and local scores | McGuffin and Roche [7]   | http://www.reading.ac.uk/bioinf/ModFOLD/
MUFOLD_WQA    | Consensus            | Global                  | Wang et al. [49]         |
MULTICOM      | Consensus            | Global and local scores | Cheng et al. [50]        | http://sysbio.rnet.missouri.edu/multicom_toolbox/
ProQ          | Single               | Global and local scores | Larsson et al. [51]      | http://www.sbc.su.se/~bjornw/ProQ/ProQ.cgi
QMEAN         | Single               | Global and local scores | Benkert et al. [52, 53, 54] | http://swissmodel.expasy.org/qmean/cgi/index.cgi
QMEANclust    | Consensus            | Global and local scores | Benkert et al. [52, 53, 54] | http://swissmodel.expasy.org/qmean/cgi/index.cgi

1.4.1 The ModFOLD 3.0 Server

The ModFOLD 3.0 server is a quasi-single-model-based server that implements IntFOLD-TS and ModFOLDclust2, thereby undertaking both single- and consensus-based model quality assessment. Our ModFOLDclust2 [7] (IntFOLD-QA [22]) consensus method (along with its predecessor, ModFOLDclust) was amongst the top MQAPs in the previous two CASP experiments (CASP8 and CASP9). The ModFOLDclust2 global quality score is a simple linear combination of the output scores from two methods, ModFOLDclust and ModFOLDclustQ. ModFOLDclust utilizes the TM-score structural-alignment scoring method [26] to compare a given model against several alternative models constructed for the same protein target [7]. The ModFOLDclustQ method, in contrast, is a rapid, structural-alignment-free algorithm that utilizes an implementation of the Q-score [27] for multiple model comparison, rather than time-consuming structural-alignment scores [7] (see Subheading 3). Furthermore, ModFOLDclust2, ModFOLDclust, and ModFOLDclustQ all produce both global (QMODE1) and local/per-residue (QMODE2) quality scores. The per-residue errors produced by ModFOLDclust2 are amongst the most accurate and have subsequently been included in the B-factor column (QMODE3) of IntFOLD-TS 3D models [4] as part of the IntFOLD prediction pipeline [22] (see Table 2 for a comparison of the methods and Notes 1–6).
Table 2

Comparison of all of the ModFOLD server versions in terms of relative speed, upload options, output format, and method types

Method                         | Relative speed | Upload options         | Output modes           | Method type
ModFOLD v 1.1                  | Fast           | Single/multiple models | QMODE1                 | Pure single-model
ModFOLD v 2.0                  | Slow           | Single/multiple models | QMODE1, QMODE2, QMODE3 | Quasi-single model
ModFOLD v 3.0 (default mode)   | Slow           | Single/multiple models | QMODE1, QMODE2, QMODE3 | Quasi-single model
ModFOLD v 3.0 (ModFOLDclustQ)  | Fast           | Multiple models only   | QMODE1, QMODE2, QMODE3 | Pure clustering
ModFOLD v 3.0 (ModFOLDclust2)  | Slow           | Multiple models only   | QMODE1, QMODE2, QMODE3 | Pure clustering
ModFOLD v 4.0 beta             | Slow           | Single/multiple models | QMODE1, QMODE2, QMODE3 | Quasi-single model

The ModFOLD 3.0 server takes as input an amino acid sequence, a set of 3D models (or a single model) for a given protein, a short name for the query sequence, and optionally an email address for the return of results. Figure 1 shows a screen capture of the ModFOLD 3.0 submission form. Figure 2 shows the results of the ModFOLD 3.0 server run in consensus mode (ModFOLDclust2) for an example CASP9 target (T0515). The machine-readable results can also be downloaded via the download link at the top of the main results page (Fig. 2). Additionally, the ModFOLD 3.0 server produces a model quality score (between 0 and 1, from bad to good) and a p-value relating to the confidence of the prediction, as can be seen in Fig. 2. The p-value confidence scores range from P < 0.001 ("certain," colored blue) through P < 0.01 ("high," colored green) to P > 0.1 ("poor" confidence, colored red) (see Fig. 2). The models are also colored by per-residue error using the same color scheme, from blue to red (good to bad) (Fig. 2). Furthermore, the per-residue error plot (Fig. 3) highlights the residues of the predicted model with low confidence; this plot can additionally be downloaded as a PostScript file. Finally, clicking on a model in the main results page (Fig. 2) brings the user to a results page similar to Fig. 4 for the CASP9 target T0515. From the model results page (Fig. 4) the user can download the PDB file of the model with the per-residue errors in the B-factor column. Optionally, the Jmol Java applet may be deployed to display the model in 3D space. Version 4.0 of the ModFOLD server is also currently available for open beta testing.
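The p-value bands can be expressed as a small lookup, sketched below in Python. Only the three thresholds quoted above are documented here, so the intermediate band between P < 0.01 and P < 0.1 is an assumed placeholder.

```python
def confidence_band(p_value):
    """Map a ModFOLD p-value to a confidence label and display colour.

    Bands follow the thresholds quoted in the text; the band between
    0.01 and 0.1 is a hypothetical placeholder for illustration only.
    """
    if p_value < 0.001:
        return "certain", "blue"
    if p_value < 0.01:
        return "high", "green"
    if p_value <= 0.1:
        return "medium", "yellow"  # assumed intermediate band
    return "poor", "red"
```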
Fig. 1

Screenshot showing the ModFOLD 3.0 submission form. The web interface gives the user the opportunity to upload either single or multiple models for quality assessment

Fig. 2

Screenshot highlighting the ModFOLD 3.0 results page (in consensus mode—ModFOLDclust2) for the CASP9 target T0515. Machine-readable results can be downloaded from a link at the top of the page. Confidence p-values, model quality scores, and per-residue errors are provided for each model

Fig. 3

Screenshot of the per-residue error plot results page for the top model from Fig. 2 (MULTICOM-REFINE) for the CASP9 target T0515. Additionally, the plot can be downloaded in PostScript format by clicking on the link at the bottom of the results page. This results page is accessed by clicking on the per-residue error plots in the main results page (Fig. 2)

Fig. 4

Screenshot showing the results page for the top model from Fig. 2 (MULTICOM-REFINE) for the CASP9 target T0515. This page shows a large graphical representation of the model. A Jmol application allows users to examine the model in 3D space. Users can optionally download the PDB file of the model with the predicted per-residue errors in the B-factor column. Clicking on the models in the main results page (Fig. 2) brings users to this results page

If a user does not possess a set of models for analysis, then the IntFOLD server can be utilized (see Notes 2–4), which also integrates the ModFOLDclust2 method into a structure and function prediction pipeline [22]. The ModFOLD standalone software is available as downloadable Java executables (see Subheadings 2 and 3).

2 Materials and Systems Requirements

2.1 Web Server Requirements

For the model quality servers, such as ModFOLD 3.0, internet access, a web browser, a protein model (or a set of models), and the protein sequence are required. The servers are freely accessible at: http://www.reading.ac.uk/bioinf/ModFOLD/. Version 3.0 of the server additionally has the option of exclusively running either ModFOLDclustQ or ModFOLDclust2. See Table 2 for a list of available program versions and options, and Note 7 for common problems encountered.

2.2 Requirements for the Downloadable Executables

Downloadable executable versions of the ModFOLD component methods are available as executable JAR files which can be run locally. These executables have several dependencies and system requirements which are briefly described below for: ModFOLDclust, ModFOLDclustQ, and ModFOLDclust2. The executables along with extensive README files and example input and output data can be downloaded from the following location: http://www.reading.ac.uk/bioinf/downloads/

2.2.1 ModFOLDclust

The system requirements are as follows:
  1. A recent version of Java (java.com/getjava/).

  2. Please ensure your system environment is set to English, as using other languages may cause problems with the ModFOLDclust calculations: export LC_ALL=en_US.utf-8.

2.2.2 ModFOLDclustQ

As ModFOLDclustQ is an alignment-free score, the only system requirement is a recent version of Java (java.com/getjava/).

2.2.3 ModFOLDclust2

The system requirements are as follows:
  1. A recent version of Java (java.com/getjava/).

  2. Please ensure your system is running in English, as using other languages may cause problems with the ModFOLDclust calculations: export LC_ALL=en_US.utf-8.

3 Methods

The following is a step-by-step guide to generate model quality predictions using the latest web server implementations of ModFOLD.

3.1 Requisite Data for Servers

3.1.1 Sequence Data

Paste the full single-letter format amino acid sequence of the target protein into the appropriate text box (see Fig. 1). (Note, the sequence needs to be in FASTA format for use with the ModFOLDclust2 downloadable executables).

Sample sequence of CASP9 target T0515 (input in single-letter code):

MIETPYYLIDKAKLTRNMERIAHVREKSGAKALLALKCFATWSVFDLMRDYMDGTTSSSLFEVRLGRERFGKETHAYSVAYGDNEIDEVVSHADKIIFNSISQLERFADKAAGIARGLRLNPQVSSSSFDLADPARPFSRLGEWDVPKVERVMDRINGFMIHNNCENKDFGLFDRMLGEIEERFGALIARVDWVSLGGGIHFTGDDYPVDAFSARLRAFSDRYGVQIYLEPGEASITKSTTLEVTVLDTLYNGKNLAIVDSSIEAHMLDLLIYRETAKVLPNEGSHSYMICGKSCLAGDVFGEFRFAEELKVGDRISFQDAAGYTMVKKNWFNGVKMPAIAIRELDGSVRTVREFTYADYEQSLS

Ensure that the order of the residues in the query sequence corresponds to the sequence of residue coordinates in the model file. The server automatically renumbers the ATOM records in each model to match the residue position in the sequence. In cases where residues in the model file are not contained in the provided sequence, the quality prediction for the model will not be completed.
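The renumbering behaviour can be illustrated with a short sketch. This is a simplification: the real server matches model residues to positions in the query sequence, whereas this version simply renumbers residues consecutively and ignores chains, insertion codes, and HETATM records.

```python
def renumber_residues(pdb_lines):
    """Renumber residues in ATOM records sequentially from 1, in the
    order they appear (simplified sketch of the server's renumbering;
    columns 23-26 hold the residue sequence number in PDB format)."""
    new_lines, mapping, next_num = [], {}, 0
    for line in pdb_lines:
        if line.startswith("ATOM"):
            old = line[22:26]  # residue sequence number field
            if old not in mapping:
                next_num += 1
                mapping[old] = next_num
            line = line[:22] + f"{mapping[old]:4d}" + line[26:]
        new_lines.append(line)
    return new_lines

# Minimal fabricated ATOM records (fixed-column PDB format)
demo = [
    "ATOM      1  N   MET A  42      0.000   0.000   0.000",
    "ATOM      2  CA  MET A  42      1.458   0.000   0.000",
    "ATOM      3  N   ALA A  43      3.800   0.000   0.000",
]
renumbered = renumber_residues(demo)
```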

3.1.2 Model Data

Use the file selector to upload a PDB file of a model or multiple PDB files of models. Ensure that the coordinates for each alternative model are contained within separate PDB files; single PDB files containing multiple alternative models will not be accepted. Multiple PDB files should be uploaded as a tarred and gzipped archive file.

Steps to produce a tarball file for your own 3D models:

Linux/MacOS/Irix/Solaris/other Unix users
  1. Tar up the directory containing your PDB files, e.g., type the following at the command line: tar cvf my_models.tar my_models/

  2. Gzip the tar file, e.g., gzip my_models.tar

  3. Upload the gzipped tar file (e.g., my_models.tar.gz) to the ModFOLD server.

Windows users

Use a free application such as 7-zip to tar and gzip the models.
  1. Download, install, and run 7-zip.

  2. Select the directory (folder) of model files to add to the .tar file and click "Add". Select the "tar" option as the "Archive format:" and save the file as something memorable, e.g., my_models.tar

  3. Select the tar file and click "Add". Then select the "GZip" option as the "Archive format:"; the file should then be saved as my_models.tar.gz

  4. Upload the gzipped tar file (e.g., my_models.tar.gz) to the ModFOLD server.

3.2 Choosing the Quality Assessment Program

Three program selectors are provided in ModFOLD version 3.0. In the ModFOLD 4.0 beta version only one default program option is currently provided (see Note 8 on how to choose the best program for your requirements).

3.2.1 ModFOLD3

ModFOLD3 is the default method used for either single or multiple models. The method compares uploaded 3D models with those obtained from the IntFOLD-TS fold recognition method using the ModFOLDclust2 model quality assessment method (see Subheading 1.4.1).

3.2.2 ModFOLDclust2

ModFOLDclust2 requires multiple models. This slow but accurate clustering-based algorithm compares multiple models using structural alignments, enabling improved selection of target-template alignments. The method has been used successfully in multiple-template modelling and is currently integrated into the IntFOLD2 server. This program option is best if there are multiple models for the target sequence, especially if the models are built from alternative target-template alignments using several different methods.

The ModFOLDclust2 method can also work outside the web environment. It is provided in the form of an executable jar file (ModFOLDclust2.jar) and has been developed to run on Linux-based operating systems. This version of the program has been tested on recent versions of Ubuntu and CentOS, but it should work on most versions of Linux that have bash installed as long as the system requirements are met (see Subheading 2.2.3).

To run the program, edit the paths in the shell script (ModFOLDclust2.sh) and run. For example: ./ModFOLDclust2.sh T0515 /home/liam/T0515.fasta /home/liam/T0515_example_models/

Or follow the steps below.
  1. Optionally, set the environment variable for Java, if Java has not been installed system-wide, e.g.:

     export JAVA_HOME=/home/liam/jdk1.6.0/

  2. Run ModFOLDclust2 with the target name, the sequence file (note that the sequence file should be in FASTA format, i.e., the header line should start with the > symbol, with the single-letter amino acid sequence on the subsequent line(s)), and the models directory (note that multiple models of the target under analysis are required to produce model quality scores) included in the command. For example, if the target is "T0515", the sequence file is "/home/liam/T0515.fasta", and the models directory is "/home/liam/T0515_example_models/", then enter the following:

     $JAVA_HOME/bin/java -jar ModFOLDclust2.jar T0515 /home/liam/T0515.fasta /home/liam/T0515_example_models/

     Otherwise, if you have Java installed system-wide, enter:

     java -jar ModFOLDclust2.jar T0515 /home/liam/T0515.fasta /home/liam/T0515_example_models/

     Ensure that the models are provided as separate files in PDB format and the sequence file in FASTA format. Note that FULL PATHS for your input file and models directory are required and that the models directory ends with a "/" (see Note 7).
A number of output files are produced in the models directory (e.g., "/home/liam/T0515_example_models/") and a log of the progress is printed to the screen as standard output. A description of the output files is as follows:

  1. The QMODE2 output file—this file will consist of the target name plus "_ModFOLDclust2.out", e.g., "T0515_ModFOLDclust2.out". This file conforms to the CASP QA QMODE2 data format (http://predictioncenter.org/casp10/index.cgi?page=format#QA).

  2. The sorted data file—this file will consist of the target name plus "_ModFOLDclust2.sort", e.g., "T0515_ModFOLDclust2.sort". This file contains the same data as the QMODE2 file but without the headers and in a more convenient machine-readable format.

  3. B-factor files—these have the extension "*.bfact", e.g., "nFOLD3_TS1.bfact". These files contain your original model with the predicted per-residue error entered into the B-factor column. If you open these files using PyMOL or RasMol you can color your models according to the predicted errors with the B-factor/temperature coloring options.

  4. Gnuplot files—these have the extension "*.gnuplot", e.g., "nFOLD3_TS1.gnuplot". These files contain per-residue error data for each model, which can be plotted using gnuplot. The following is an example script:

     set terminal postscript color
     set output "nFOLD3_TS1.ps"
     set boxwidth 1
     set style fill solid 0.25 border
     set ylabel "Predicted residue error (Angstroms)"
     set xlabel "Residue number"
     set yrange [0:15]
     set yzeroaxis
     unset key
     set datafile missing "NaN"
     plot "nFOLD3_TS1.gnuplot" using 1:2 with boxes,\
          "nFOLD3_TS1.gnuplot" using 1:3 with points
     quit
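The machine-readable sorted data file can then be consumed by downstream scripts. The parser below is a sketch based on an assumed layout (model name, global score, then per-residue errors, whitespace-separated, with a placeholder such as "X" for missing residues); check it against your own output before relying on it.

```python
def parse_sorted_scores(lines):
    """Parse lines from a ModFOLDclust2 ".sort"-style file.

    Assumed layout (hypothetical): each line holds the model name,
    the global quality score, and then per-residue errors, with "X"
    or "NaN" marking residues that could not be scored.
    """
    results = []
    for line in lines:
        fields = line.split()
        if len(fields) < 2:
            continue
        name, global_score = fields[0], float(fields[1])
        per_res = [None if f in ("X", "NaN") else float(f)
                   for f in fields[2:]]
        results.append((name, global_score, per_res))
    # Rank models with the highest global score first
    return sorted(results, key=lambda r: r[1], reverse=True)

demo = [
    "nFOLD3_TS1 0.71 2.1 1.8 X 3.5",
    "other_TS1 0.44 5.0 4.2 6.1 7.3",
]
ranked = parse_sorted_scores(demo)
```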

3.2.3 ModFOLDclustQ

ModFOLDclustQ is the quickest server option if there are several hundred models to compare. It uses a rapid clustering-based algorithm that does not require CPU-intensive structural alignments. The program is also provided as an executable JAR file and can be run in a similar way to the ModFOLDclust2 method described above.
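To give a flavour of how an alignment-free comparison can work, the toy function below compares the intra-molecular CA-CA distance matrices of two equal-length models. The Gaussian weighting and sigma value are illustrative choices only, not the published Q-score parameterization [27].

```python
import math
from itertools import combinations

def q_like_score(coords_a, coords_b, sigma=2.0):
    """Toy alignment-free similarity in the spirit of a Q-score:
    agreement between the internal pairwise distances of two models
    (identical models score exactly 1.0). `sigma` is illustrative."""
    pairs = list(combinations(range(len(coords_a)), 2))
    total = 0.0
    for i, j in pairs:
        d_a = math.dist(coords_a[i], coords_a[j])
        d_b = math.dist(coords_b[i], coords_b[j])
        total += math.exp(-((d_a - d_b) ** 2) / (2 * sigma ** 2))
    return total / len(pairs)

# Two three-residue "models": identical except one displaced atom.
model_a = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
model_b = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 4.0, 0.0)]
```

Because only internal distances are compared, no structural superposition is needed, which is what makes this family of scores fast.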

3.3 Optional ModFOLD Server Inputs

Two optional text boxes can be filled in. One is for an email address to send links to the graphical or machine-readable results once the server has finished processing the data. The other is for assigning a preferred (memorable) short name for the prediction job that will enable the user to differentiate results returned by the server.

Acceptable job codes or names are restricted to the following set of characters: letters A-Z (either case), the numbers 0-9, and the following other characters: ., ~, _, - (excluding the commas). The job code or name specified is included in the subject line of the emailed link to the results.
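The stated character set can be checked locally with a one-line regular expression (a sketch; the server's actual validation and any length limit are not documented here).

```python
import re

# Letters, digits, and the characters . ~ _ - as listed in the text
JOB_NAME_RE = re.compile(r"^[A-Za-z0-9.~_-]+$")

def is_valid_job_name(name):
    """Return True if `name` uses only the characters the server
    accepts for job codes or names (sketch of the stated rule)."""
    return bool(JOB_NAME_RE.match(name))
```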

3.4 Server Fair Usage Policy

Standard web users may submit one job at a time per IP address. Once the submitted job is completed, notification of results is sent via email, if an email address was provided. Alternatively, users can bookmark the job page where the results will appear upon completion. Once the job is completed, the IP address is unlocked and the server is ready for a new job request. The results of a completed job are retained on the server for 1 month.

3.5 Case Studies

The ModFOLD 3 server and the ModFOLDclust2 method have been used in several studies which have led to interesting biological findings. Such studies are numerous, with several examples centering on EfeO-cupredoxins [42], the Schizosaccharomyces pombe protein Translin [43], and Toll-like receptors [44, 45]. In addition, two recent studies have been undertaken in direct collaboration with the McGuffin group on areas of increased global research activity, namely cardiovascular disease [46] and food security [47].

The first case study focuses on food security in relation to Blumeria graminis, a plant-pathogenic fungus. This study combined proteogenomics with in silico structural and functional annotation to investigate the proteome of the pathogen [47]. Genome-wide fold recognition was carried out using the IntFOLD server [22]. The quality of the models produced was then assessed using ModFOLD 3 [7], which led to several interesting conclusions regarding the structural diversity of the genome. Firstly, the model quality assessment analysis showed that a large number of the models had low model quality and thus probably represent novel folds or folds very distantly related to known structures. Secondly, six of the protein models had reasonably good model quality scores (greater than 0.4) and could be confidently assigned a putative function—glycosyl hydrolase activity—which had also been observed experimentally in previous wet lab studies. In conclusion, the model quality assessment helped to determine which protein models could confidently be assigned functions in this study and further highlighted the diversity of folds encoded by the Blumeria graminis genome [47].

The second case study is a more focused study of a specific protein kinase, MST3, which has a putative role in cardiovascular development [46]. This study was experimentally based but used structure prediction (IntFOLD [22]) and model quality assessment (ModFOLDclust2 [7]) to help interpret the laboratory results. The laboratory results determined that the protein could not be entirely globular, but were unable to determine why. Modelling of the protein, followed by quality assessment and disorder prediction, indicated that the enzyme has a large disordered domain, which was thought to be crucial to its function. The inclusion of the modelling and quality assessment in this study helped to explain the laboratory results and was crucial in proposing a new hypothesis of how this kinase-based pathway functions [46].

4 Notes

  1.

    Model quality assessment methods such as ModFOLDclust2 [7] play an integral role in tertiary structure prediction and thus have been integrated into several prediction pipelines, including the IntFOLD server [22] for the prediction of protein structure, disorder, domain boundaries, and function from sequence. Furthermore, the integration of per-residue errors from ModFOLDclust2 [7] into the B-factor column of IntFOLD-TS [4] models presents the user with a guide to which parts of the model they can trust and which parts they cannot. Without such quality estimations it is difficult for a user to determine the usefulness of a generated 3D model [1, 4].

    Several of our recent tools for the prediction of structure and function from sequence have made extensive use of model quality prediction scores. Recently, the per-residue errors produced by ModFOLDclust2 [7] have been successfully utilized to guide multiple-template selection for improvement of our IntFOLD server models [34]. Furthermore, we recently developed FunFOLDQA [1], a novel quality assessment tool for protein–ligand binding site residue predictions. Finally, we are also testing a homo-multimeric (oligomer) prediction method, which makes use of ModFOLDclust2 scores in its prediction protocol. The above examples are not exhaustive, but highlight the integral and ubiquitous role model quality can play in structural bioinformatics. These example methods are briefly described below, followed by a discussion on common problems encountered in using MQAPs and reasons for choosing to use the web servers or the downloadable executables.

     
  2.

    The IntFOLD server integrates numerous cutting-edge algorithms to predict protein structure and function from sequence. The IntFOLD server firstly utilizes numerous profile–profile alignment tools to produce 3D models for a target sequence. The ModFOLDclust2 quality assessment method is then used to rank the 3D models, producing both global and per-residue quality scores. The top five models in accordance with the global model quality scores become the output of IntFOLD-TS [4, 34]—the tertiary structure prediction component of the pipeline. Additionally, the per-residue errors from ModFOLDclust2 are added to the B-factor column of each model (see Note 1 for details of the IntFOLD2-TS algorithm). The ModFOLDclust2 per-residue errors are also utilized by DISOclust 2.0 [35] to predict regions of disorder/high variability occurring in the protein. DISOclust 2.0 was one of the top disorder prediction methods in the CASP9 experiment [36]. In addition, the domain boundary prediction component of the IntFOLD pipeline, DomFOLD [22], utilizes the PDP method [37] to identify domain boundaries for the top IntFOLD-TS model ranked using ModFOLDclust2. Finally, FunFOLD [38], the ligand binding site residue prediction method, performs model-to-template superpositions of the top ranked 3D models (according to ModFOLDclust2) and related templates containing bound biologically relevant ligands, to identify potential binding site residues [38]. The FunFOLD method was one of the top 10 methods in the CASP9 FN prediction category [39].

     
  3.

    The per-residue errors from ModFOLDclust2 [7] are included in the B-factor column of the IntFOLD-TS [4] 3D model files. These per-residue predictions allow users to identify which regions of a model can be trusted and which are less accurate; models generated without per-residue errors are arguably less useful for further study. The per-residue errors provided by ModFOLDclust2 and integrated into the IntFOLD-TS models were found to be amongst the most accurate by the CASP9 assessors of the QA (QMODE3) category [21].
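    Because the predicted per-residue errors are stored in the standard B-factor column of the PDB ATOM records, they can be recovered with a few lines of code. The following minimal Python sketch (not part of the ModFOLD distribution; the function name and the one-value-per-CA convention are illustrative assumptions) reads one predicted error value per residue from a model file:

```python
from collections import OrderedDict

def per_residue_errors(pdb_path):
    """Read predicted per-residue errors (in Angstroms) from the
    B-factor column of a model PDB file, taking one value per
    residue from its CA atom (illustrative sketch)."""
    errors = OrderedDict()
    with open(pdb_path) as fh:
        for line in fh:
            # PDB fixed columns: atom name 13-16, chain 22,
            # residue number 23-26, B-factor (here: error) 61-66
            if line.startswith("ATOM") and line[12:16].strip() == "CA":
                chain = line[21]
                resnum = int(line[22:26])
                errors[(chain, resnum)] = float(line[60:66])
    return errors
```

    The returned mapping of (chain, residue number) to predicted error can then be used, for example, to colour the model by local reliability in a molecular viewer.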

     
  4.

    Following on from our performance in the CASP9 QA per-residue error category [21] (see Note 2), our new template-based modelling method utilizes the ModFOLDclust2 [7] per-residue errors predicted for single-template models to construct improved models from multiple templates [34]. This method has subsequently been integrated into the IntFOLD annotation pipeline, now IntFOLD2, which is in its open beta testing phase. Additionally, the method was blind tested in the recent CASP10 prediction experiment.

     
  5.

    The FunFOLDQA method [1] is a quality assessment tool for protein–ligand binding site residue predictions, which borrows many ideas from 3D model quality assessment methods. Few methods exist for assessing the quality of ligand binding site residue predictions before experimental data become available. Once experimental data are available, predictions are usually assessed using both the MCC [40] and the BDT [41] scores, as in the official CASP9 assessment [39]; however, this requires experimentally solved 3D structures with bound ligands. Thus, FunFOLDQA was developed to assess ligand binding site quality prior to the availability of experimental data. FunFOLDQA utilizes protein feature analysis in its assessment of quality, including both structure-based and ligand-based feature scores. When the FunFOLDQA algorithm was used to re-rank the FunFOLD predictions, a statistically significant improvement was obtained [1].

     
  6.

    A manual homo-multimeric prediction protocol was also tested in the recent CASP10 experiment. The multimeric prediction protocol made use of the 3D server models selected by ModFOLDclust2 [7], their per-residue errors, and the lists of templates generated by our IntFOLD2 server [22, 34] with its multi-template modelling protocol [34] (see Note 3). The semiautomated protocol utilized in the CASP10 experiment will subsequently be automated and integrated into future versions of the IntFOLD server.

    Numerous examples of the roles that model quality prediction can play are outlined in Notes 1–6: not only in 3D structure prediction per se but also in oligomer prediction, ligand binding site prediction, domain prediction, and disorder prediction. Model quality assessment now plays a large and critical role in the field of structural bioinformatics and should therefore be considered an essential component of any 3D structure prediction algorithm and pipeline.

     
  7.

    When using model quality assessment servers, several problems may be encountered; these mainly involve, but are not limited to, the use of incorrect input files. Each PDB file should include the coordinates for only one model, rather than a single PDB file containing the coordinates for multiple alternative models. For multiple models, the coordinates should be uploaded as a tarred and gzipped directory of separate files.

    The file format must also be correct: all files should be uploaded as PDB files containing correctly formatted ATOM records. In addition, all PDB structures in a single submission should have been built for the same target, using the same target amino acid sequence.

    For models uploaded to the ModFOLD3 server in particular, the amino acid sequence submitted should correspond exactly to the amino acid sequence used to build the model(s); failing to ensure this is a common error. See Subheading 3 for more details on how to correctly use the servers and downloadable Java executables.

     
  8.

    Table 2 (Subheading 1.4.1) shows that the ModFOLD server versions vary in speed, input and output options, and sensitivity. As a general rule for quasi-single model and clustering methods, the more models (from alternative templates/alignments) you submit, the better the prediction results; submitting 40 or more alternative models should allow good cluster analysis. Alternatively, a sequence can be submitted to the IntFOLD2 server (see Note 1), which will build up to 90 alternative models and automatically predict their global and per-residue model quality.

    When using the ModFOLD3 server, you need to consider the quality of the results you would like to obtain, the speed with which they are obtained, the output format of the results, and the required input. For example, if you use the ModFOLDclustQ option, your results will be returned quickly (up to 150 times faster), whereas the ModFOLDclust2 option has a much slower response time but produces more sensitive results [7]. Thus, the user needs to balance response time against the quality of the results obtained when choosing which algorithm to utilize.

    Another consideration is the use of the web servers versus the downloadable Java applications. The ModFOLD web servers permit users to submit only one job at a time due to server load balancing. If users would like to use the MQAPs frequently or for many models, for example analyzing many thousands of models, then we would recommend downloading and installing the MQAPs for local execution. This gives the user freedom in the number of models that can be analyzed, provided they have adequate CPU capacity.

    For light users of MQAPs (several predictions per week, 300 or fewer models per target), server submission is adequate, whereas for heavy users (20 or more predictions per week, more than 300 models per target) the downloadable applications will be most useful. Extensive help pages are available for the ModFOLD3 web server, and README files are available to help install and run the downloadable Java applications.

     


Acknowledgments

This work was supported by a University of Reading Faculty Studentship, MRC Harwell, and the Diamond Light Source Ltd. (to M.T.B.). The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007–2013) under grant agreement no. 246556 (to D.B.R.).

References

  1. Roche DB, Buenavista MT, McGuffin LJ (2012) FunFOLDQA: a quality assessment tool for protein-ligand binding site residue predictions. PLoS One 7(5):e38219. doi:10.1371/journal.pone.0038219
  2. Roche DB, Buenavista MT, McGuffin LJ (2012) Predicting protein structures and structural annotation of proteomes. In: Roberts GCK (ed) Encyclopedia of biophysics, vol 1. Springer, Berlin
  3. McGuffin LJ (2010) Model quality prediction. In: Rangwala H, Karypis G (eds) Protein structure prediction: methods and algorithms. Wiley, New York, pp 323–342
  4. McGuffin LJ, Roche DB (2011) Automated tertiary structure prediction with accurate local model quality assessment using the IntFOLD-TS method. Proteins 79 Suppl 10:137–146. doi:10.1002/prot.23120
  5. McGuffin LJ (2007) Benchmarking consensus model quality assessment for protein fold recognition. BMC Bioinformatics 8:345. doi:10.1186/1471-2105-8-345
  6. McGuffin LJ (2008) The ModFOLD server for the quality assessment of protein structural models. Bioinformatics 24(4):586–587. doi:10.1093/bioinformatics/btn014
  7. McGuffin LJ, Roche DB (2010) Rapid model quality assessment for protein structure predictions using the comparison of multiple models without structural alignments. Bioinformatics 26(2):182–188. doi:10.1093/bioinformatics/btp629
  8. Berman HM, Westbrook J, Feng Z, Gilliland G, Bhat TN, Weissig H, Shindyalov IN, Bourne PE (2000) The Protein Data Bank. Nucleic Acids Res 28(1):235–242
  9. McGuffin LJ (2008) Protein fold recognition and threading. In: Schwede T, Peitsch MC (eds) Computational structural biology. World Scientific, London, pp 37–60
  10. Lee J, Wu S, Zhang Y (2009) Ab initio protein structure prediction. In: Rigden DJ (ed) From protein structure to function with bioinformatics. Springer, London, pp 1–26
  11. Laskowski RA, Moss DS, Thornton JM (1993) Main-chain bond lengths and bond angles in protein structures. J Mol Biol 231(4):1049–1067. doi:10.1006/jmbi.1993.1351
  12. Hooft RW, Vriend G, Sander C, Abola EE (1996) Errors in protein structures. Nature 381(6580):272. doi:10.1038/381272a0
  13. Davis IW, Murray LW, Richardson JS, Richardson DC (2004) MOLPROBITY: structure validation and all-atom contact analysis for nucleic acids and their complexes. Nucleic Acids Res 32(Web Server issue):W615–W619. doi:10.1093/nar/gkh398
  14. Melo F, Devos D, Depiereux E, Feytmans E (1997) ANOLEA: a www server to assess protein structures. Proc Int Conf Intell Syst Mol Biol 5:187–190
  15. Melo F, Feytmans E (1997) Novel knowledge-based mean force potential at atomic level. J Mol Biol 267(1):207–222. doi:10.1006/jmbi.1996.0868
  16. Zhou H, Zhou Y (2002) Distance-scaled, finite ideal-gas reference state improves structure-derived potentials of mean force for structure selection and stability prediction. Protein Sci 11(11):2714–2726. doi:10.1110/ps.0217002
  17. Brooks BR, Bruccoleri RE, Olafson BD, States DJ, Swaminathan S, Karplus M (1983) CHARMM: a program for macromolecular energy, minimization, and dynamics calculations. J Comput Chem 4(2):187–217. doi:10.1002/jcc.540040211
  18. Weiner SJ, Kollman PA, Case DA, Singh UC, Ghio C, Alagona G, Profeta S, Weiner P (1984) A new force field for molecular mechanical simulation of nucleic acids and proteins. J Am Chem Soc 106(3):765–784. doi:10.1021/ja00315a051
  19. Moult J, Fidelis K, Kryshtafovych A, Rost B, Tramontano A (2009) Critical assessment of methods of protein structure prediction—round VIII. Proteins 77 Suppl 9:1–4. doi:10.1002/prot.22589
  20. Cozzetto D, Kryshtafovych A, Ceriani M, Tramontano A (2007) Assessment of predictions in the model quality assessment category. Proteins 69 Suppl 8:175–183. doi:10.1002/prot.21669
  21. Kryshtafovych A, Fidelis K, Tramontano A (2011) Evaluation of model quality predictions in CASP9. Proteins 79 Suppl 10:96–106. doi:10.1002/prot.23180
  22. Roche DB, Buenavista MT, Tetchner SJ, McGuffin LJ (2011) The IntFOLD server: an integrated web resource for protein fold recognition, 3D model quality assessment, intrinsic disorder prediction, domain prediction and ligand binding site prediction. Nucleic Acids Res 39(Web Server issue):W171–W176. doi:10.1093/nar/gkr184
  23. Benkert P, Biasini M, Schwede T (2011) Toward the estimation of the absolute quality of individual protein structure models. Bioinformatics 27(3):343–350. doi:10.1093/bioinformatics/btq662
  24. Kalman M, Ben-Tal N (2010) Quality assessment of protein model-structures using evolutionary conservation. Bioinformatics 26(10):1299–1307. doi:10.1093/bioinformatics/btq114
  25. Cozzetto D, Kryshtafovych A, Tramontano A (2009) Evaluation of CASP8 model quality predictions. Proteins 77 Suppl 9:157–166. doi:10.1002/prot.22534
  26. Zhang Y, Skolnick J (2004) Scoring function for automated assessment of protein structure template quality. Proteins 57(4):702–710. doi:10.1002/prot.20264
  27. Ben-David M, Noivirt-Brik O, Paz A, Prilusky J, Sussman JL, Levy Y (2009) Assessment of CASP8 structure predictions for template free targets. Proteins 77 Suppl 9:50–65. doi:10.1002/prot.22591
  28. Altschul SF, Madden TL, Schaffer AA, Zhang J, Zhang Z, Miller W, Lipman DJ (1997) Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res 25(17):3389–3402
  29. Jones DT, Swindells MB (2002) Getting the most from PSI-BLAST. Trends Biochem Sci 27(3):161–164
  30. McGuffin LJ, Bryson K, Jones DT (2000) The PSIPRED protein structure prediction server. Bioinformatics 16(4):404–405
  31. Soding J (2005) Protein homology detection by HMM-HMM comparison. Bioinformatics 21(7):951–960. doi:10.1093/bioinformatics/bti125
  32. Remmert M, Biegert A, Hauser A, Soding J (2012) HHblits: lightning-fast iterative protein sequence searching by HMM-HMM alignment. Nat Methods 9(2):173–175. doi:10.1038/nmeth.1818
  33. Eswar N, Webb B, Marti-Renom MA, Madhusudhan MS, Eramian D, Shen MY, Pieper U, Sali A (2006) Comparative protein structure modeling using Modeller. Curr Protoc Bioinformatics Chapter 5:Unit 5.6. doi:10.1002/0471250953.bi0506s15
  34. Buenavista MT, Roche DB, McGuffin LJ (2012) Improvement of 3D protein models using multiple templates guided by single-template model quality assessment. Bioinformatics 28(14):1851–1857. doi:10.1093/bioinformatics/bts292
  35. McGuffin LJ (2008) Intrinsic disorder prediction from the analysis of multiple protein fold recognition models. Bioinformatics 24(16):1798–1804. doi:10.1093/bioinformatics/btn326
  36. Monastyrskyy B, Fidelis K, Moult J, Tramontano A, Kryshtafovych A (2011) Evaluation of disorder predictions in CASP9. Proteins 79 Suppl 10:107–118. doi:10.1002/prot.23161
  37. Alexandrov N, Shindyalov I (2003) PDP: protein domain parser. Bioinformatics 19(3):429–430
  38. Roche DB, Tetchner SJ, McGuffin LJ (2011) FunFOLD: an improved automated method for the prediction of ligand binding residues using 3D models of proteins. BMC Bioinformatics 12:160. doi:10.1186/1471-2105-12-160
  39. Schmidt T, Haas J, Gallo Cassarino T, Schwede T (2011) Assessment of ligand-binding residue predictions in CASP9. Proteins 79 Suppl 10:126–136. doi:10.1002/prot.23174
  40. Matthews BW (1975) Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim Biophys Acta 405(2):442–451
  41. Roche DB, Tetchner SJ, McGuffin LJ (2010) The binding site distance test score: a robust method for the assessment of predicted protein binding sites. Bioinformatics 26(22):2920–2921. doi:10.1093/bioinformatics/btq543
  42. Rajasekaran MB, Nilapwar S, Andrews SC, Watson KA (2010) EfeO-cupredoxins: major new members of the cupredoxin superfamily with roles in bacterial iron transport. Biometals 23(1):1–17. doi:10.1007/s10534-009-9262-z
  43. Eliahoo E, Ben Yosef R, Perez-Cano L, Fernandez-Recio J, Glaser F, Manor H (2010) Mapping of interaction sites of the Schizosaccharomyces pombe protein Translin with nucleic acids and proteins: a combined molecular genetics and bioinformatics study. Nucleic Acids Res 38(9):2975–2989. doi:10.1093/nar/gkp1230
  44. Wei T, Gong J, Jamitzky F, Heckl WM, Stark RW, Rossle SC (2009) Homology modeling of human Toll-like receptors TLR7, 8, and 9 ligand-binding domains. Protein Sci 18(8):1684–1691. doi:10.1002/pro.186
  45. Gong J, Wei T, Stark RW, Jamitzky F, Heckl WM, Anders HJ, Lech M, Rossle SC (2010) Inhibition of Toll-like receptors TLR4 and 7 signaling pathways by SIGIRR: a computational approach. J Struct Biol 169(3):323–330. doi:10.1016/j.jsb.2009.12.007
  46. Fuller SJ, McGuffin LJ, Marshall AK, Giraldo A, Pikkarainen S, Clerk A, Sugden PH (2012) A novel non-canonical mechanism of regulation of MST3 (mammalian Sterile20-related kinase 3). Biochem J 442(3):595–610. doi:10.1042/BJ20112000
  47. Bindschedler LV, McGuffin LJ, Burgis TA, Spanu PD, Cramer R (2011) Proteogenomics and in silico structural and functional annotation of the barley powdery mildew Blumeria graminis f. sp. hordei. Methods 54(4):432–441. doi:10.1016/j.ymeth.2011.03.006
  48. Pawlowski M, Gajda MJ, Matlak R, Bujnicki JM (2008) MetaMQAP: a meta-server for the quality assessment of protein models. BMC Bioinformatics 9:403. doi:10.1186/1471-2105-9-403
  49. Wang Q, Vantasin K, Xu D, Shang Y (2011) MUFOLD-WQA: a new selective consensus method for quality assessment in protein structure prediction. Proteins 79 Suppl 10:185–195. doi:10.1002/prot.23185
  50. Cheng J, Li J, Wang Z, Eickholt J, Deng X (2012) The MULTICOM toolbox for protein structure prediction. BMC Bioinformatics 13:65. doi:10.1186/1471-2105-13-65
  51. Larsson P, Skwark MJ, Wallner B, Elofsson A (2009) Assessment of global and local model quality in CASP8 using Pcons and ProQ. Proteins 77 Suppl 9:167–172. doi:10.1002/prot.22476
  52. Benkert P, Kunzli M, Schwede T (2009) QMEAN server for protein model quality estimation. Nucleic Acids Res 37(Web Server issue):W510–W514. doi:10.1093/nar/gkp322
  53. Benkert P, Schwede T, Tosatto SC (2009) QMEANclust: estimation of protein model quality by combining a composite scoring function with structural density information. BMC Struct Biol 9:35. doi:10.1186/1472-6807-9-35
  54. Benkert P, Tosatto SC, Schomburg D (2008) QMEAN: a comprehensive scoring function for model quality assessment. Proteins 71(1):261–277. doi:10.1002/prot.21715

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Daniel Barry Roche (1, 2, 3, 4)
  • Maria Teresa Buenavista (5, 6, 7)
  • Liam James McGuffin (5)

  1. Genoscope, Institut de Génomique, Commissariat à l'Energie Atomique et aux Energies Alternatives, Evry, France
  2. Centre National de la Recherche Scientifique, UMR, Evry, France
  3. Université d'Evry-Val-d'Essonne, Evry, France
  4. PRES UniverSud Paris, Les Algorithmes, Bâtiment Euripide, Saint-Aubin, France
  5. School of Biological Sciences, University of Reading, Reading, UK
  6. BioComputing Section, Medical Research Council Harwell, Harwell Oxford, Oxfordshire, UK
  7. Diamond Light Source, Didcot, UK
