Introduction

The results observed by Muskal and Kim [1] suggested that the structural class of a protein depends essentially on its amino acid composition. Many efforts [2,3,4,5,6,7,8,9,10,11,12,13,14] have been made to predict the structural class of a protein based on its amino acid composition. The physical mechanism underlying this kind of correlation has been discussed by Bahar et al. [14] and Chou [15]. For a systematic description of this area, see the comprehensive review by Chou and Zhang [16] and an updated review [17]. In this paper, we apply Vapnik's Support Vector Machine [18] to this problem. The Support Vector Machine was trained and tested on the data sets constructed by Zhou [19] based on SCOP [20]; the reasons why these data sets are more reasonable have also been addressed in ref. [19]. As a result, high rates of self-consistency and jackknife testing were obtained. This further confirms that the structural class of a protein is considerably correlated with its amino acid composition.

Results and Discussion

Success rate of self-consistency of SVMs

In this research, the self-consistency of the SVM method was examined. The following two data sets from Zhou [19] were used. One consists of 277 domains: 70 all-α domains, 61 all-β domains, 81 α/β domains, and 65 α+β domains. The other consists of 498 domains: 107 all-α domains, 126 all-β domains, 136 α/β domains, and 129 α+β domains. The rates of correct prediction for the four structural classes reach 100% for both data sets. These rates are "training" accuracies, indicating that after being trained, the SVM model has grasped the complicated relationship between amino acid composition and protein structure.
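As a point of reference, the self-consistency (resubstitution) test can be sketched as follows. This is a minimal illustration with placeholder data: the composition vectors, labels, and parameter values are hypothetical, and scikit-learn's SVC stands in for the SVM implementation actually used in this work.

```python
# Minimal sketch of the self-consistency (resubstitution) test: train an SVM on
# amino acid composition vectors and evaluate it on the same data.
# `X` and `y` are random placeholders, not the real 277-domain data set.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((277, 20))              # hypothetical 20-D composition vectors
X /= X.sum(axis=1, keepdims=True)      # each composition vector sums to 1
y = rng.integers(0, 4, size=277)       # 0=all-alpha, 1=all-beta, 2=alpha/beta, 3=alpha+beta

model = SVC(kernel="rbf", C=100).fit(X, y)
self_consistency = model.score(X, y)   # fraction of training samples recovered
print(f"self-consistency rate: {self_consistency:.3f}")
```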

Success rate of jackknife test of SVMs

The jackknife test was used for cross-validation. Cross-validation by jackknifing is considered the most objective and rigorous approach in comparison with the sub-sampling test or the independent-dataset test [16,21,22]. During the jackknife analysis, the datasets are actually open: each protein is in turn singled out as the test sample and predicted by a model trained on the remaining proteins. As a result, the overall rate of correct prediction for the four structural classes of the 277 domains (the 1st set) was 220/277 = 79.4%, while that for the four structural classes of the 498 domains (the 2nd set) was 464/498 = 93.2%.
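A hedged sketch of the jackknife procedure is given below: each domain is left out in turn, the SVM is trained on the rest, and the withheld domain is then predicted. Here `X` and `y` are assumed to be composition vectors and class labels prepared as in the previous sketch, and scikit-learn again stands in for the implementation actually used.

```python
# Jackknife (leave-one-out) test: every domain is withheld once, the classifier
# is retrained on the remaining domains, and the withheld domain is predicted.
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

def jackknife_rate(X, y):
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = SVC(kernel="rbf", C=100).fit(X[train_idx], y[train_idx])
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)   # overall rate of correct prediction
```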

Comparison to neural network method and elegant component-coupled algorithm

Zhou [19] applied the elegant component-coupled algorithm developed by Chou et al. [11,12,13] to protein structural class prediction. Later, Cai and Zhou [23] applied a neural network method to the same problem. The comparison of their results with those of the SVM method is given in Table 1 (self-consistency test) and Table 2 (jackknife test).

Table 1 Results of Self-Consistency Test
Table 2 Results of Jackknife Test

The comparison should be focused on the jackknife rates (Table 2), because they represent rates obtained by following a more objective test procedure [21,22]. From Table 2 we can see that the rates of both the SVM and the component-coupled algorithm are higher than those of the neural network. Although the rates obtained here by the SVM are slightly higher than those of the component-coupled algorithm, this does not mean that the results predicted by the SVM are always better; in some cases, the results obtained by the latter might be better than those obtained by the former. Accordingly, it is expected that the SVM method and the component-coupled algorithm, complementing each other, will provide a powerful tool for predicting protein structural class.

Conclusion

The current study has further supported, from the approach of SVMs, the conclusion drawn by Chou and his co-workers [11,12,13] and Zhou [19] that if the coupling effect among different amino acid components can be properly taken into account, the prediction quality of protein structural classes can be significantly improved.

Materials and Methods

Support Vector Machine (SVM)

The Support Vector Machine (SVM) is a learning machine based on statistical learning theory. The basic idea of applying SVMs to pattern classification can be stated briefly as follows. First, map the input vectors into a feature space (possibly of higher dimension), either linearly or non-linearly, as determined by the choice of the kernel function. Then, within this feature space, seek an optimized linear division, i.e. construct a hyperplane that separates the two classes (this can be extended to multi-class problems). SVM training always seeks a globally optimized solution and avoids over-fitting, so it has the ability to deal with a large number of features. A complete description of the theory of SVMs for pattern recognition is given in Vapnik's book [24].
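In symbols (a standard formulation rather than one taken from this paper), the mapping $\Phi$ sends an input vector $\mathbf{x}$ into the feature space, the kernel supplies the inner products there, and the hyperplane found in that space yields a decision function of the form

$$f(\mathbf{x}) = \mathrm{sgn}\bigl(\mathbf{w} \cdot \Phi(\mathbf{x}) + b\bigr), \qquad K(\mathbf{x}_i, \mathbf{x}_j) = \Phi(\mathbf{x}_i) \cdot \Phi(\mathbf{x}_j),$$

so that the mapping itself never needs to be computed explicitly.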

SVMs have been used in a wide range of problems including drug design [25], image recognition and text classification [26], microarray gene expression data analysis [27], and protein fold recognition [28].

In this paper, we apply Vapnik's Support Vector Machine [18] to the prediction of the structural classes of proteins. We downloaded SVMlight, an implementation (in the C language) of SVM for pattern recognition problems. The optimization algorithm used in SVMlight can be found in [29,30]. The code has been used in text classification and image recognition [26], microarray gene expression data analysis [27], and protein fold recognition [28].

Suppose we are given a set of samples, i.e. a series of input vectors

$$\mathbf{x}_i \in \mathbb{R}^d, \quad i = 1, 2, \ldots, N,$$

with corresponding labels

$$y_i \in \{-1, +1\}, \quad i = 1, 2, \ldots, N,$$

where -1 and +1 are used to stand for the two classes, respectively. The goal here is to construct a binary classifier, or derive a decision function, from the available samples that has a small probability of misclassifying a future sample. Both the basic linearly separable case and the linearly non-separable case, which covers most real-life problems, are considered here:

The linear separable case

In this case, there exists a separating hyperplane whose function is

$$\mathbf{w} \cdot \mathbf{x} + b = 0,$$

which implies:

$$y_i (\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1, \quad i = 1, 2, \ldots, N.$$

By minimizing

$$\frac{1}{2} \|\mathbf{w}\|^2$$

subject to this constraint, the SVM approach tries to find a unique separating hyperplane. Here $\|\mathbf{w}\|$ is the Euclidean norm of $\mathbf{w}$. This hyperplane, called the Optimal Separating Hyperplane (OSH) [31], maximizes the distance between itself and the nearest data points of each class, and the resulting classifier is called the largest margin classifier. By introducing Lagrange multipliers $\alpha_i$, the SVM training procedure amounts to solving a convex quadratic programming (QP) problem. The solution, which is a unique and globally optimized result, can be shown to have the following expansion:

$$\mathbf{w} = \sum_{i=1}^{N} y_i \alpha_i \mathbf{x}_i.$$

Only the samples with $\alpha_i > 0$ contribute to this expansion; they are called support vectors. Once the SVM has been trained, the decision function can be written as:

$$f(\mathbf{x}) = \mathrm{sgn}\!\left( \sum_{i=1}^{N} y_i \alpha_i (\mathbf{x} \cdot \mathbf{x}_i) + b \right),$$

where sgn(·) in the above formula is the sign function.
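For completeness, the convex QP problem referred to above can be written in its standard dual form (a textbook statement rather than a formula specific to this work): maximize

$$W(\alpha) = \sum_{i=1}^{N} \alpha_i - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j (\mathbf{x}_i \cdot \mathbf{x}_j)$$

subject to $\alpha_i \geq 0$ and $\sum_{i=1}^{N} \alpha_i y_i = 0$.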

The linear non-separable case

(i) The "soft margin" technique

In order to allow for training errors, ref. [31] introduced slack variables:

$$\xi_i \geq 0, \quad i = 1, \ldots, N,$$

and the relaxed separation constraint is given as:

$$y_i (\mathbf{w} \cdot \mathbf{x}_i + b) \geq 1 - \xi_i, \quad i = 1, \ldots, N.$$

The OSH can then be found by minimizing

$$\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^{N} \xi_i,$$

where C is a regularization parameter used to decide a trade-off between the training error and the margin.
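In the standard dual formulation (again a textbook fact rather than a result of this paper), the only effect of the slack variables is to bound the Lagrange multipliers: the constraint $\alpha_i \geq 0$ of the separable case becomes

$$0 \leq \alpha_i \leq C, \quad i = 1, \ldots, N,$$

while the objective function remains unchanged.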

(ii) The "kernel substitution" technique

The SVM performs a nonlinear mapping of the input vector $\mathbf{x}$ from the input space into a higher dimensional Hilbert space $H$, where the mapping is determined by the kernel function. Then, as in case (i), it finds the OSH in the space $H$, which corresponds to a non-linear decision boundary in the input space. Two typical kernel functions are the polynomial kernel and the Gaussian radial basis function (RBF) kernel:

$$K(\mathbf{x}_i, \mathbf{x}_j) = (\mathbf{x}_i \cdot \mathbf{x}_j + 1)^p, \qquad K(\mathbf{x}_i, \mathbf{x}_j) = \exp\!\left( -\gamma \|\mathbf{x}_i - \mathbf{x}_j\|^2 \right),$$

and the form of the decision function is

$$f(\mathbf{x}) = \mathrm{sgn}\!\left( \sum_{i=1}^{N} y_i \alpha_i K(\mathbf{x}, \mathbf{x}_i) + b \right).$$

For a given data set, only the kernel function and the regularity parameter C must be selected to specify one SVM.
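As a minimal illustration of kernel substitution (an illustrative sketch, not part of the original software), the Gaussian RBF kernel matrix can be computed directly from the input vectors; the optimizer only ever needs these pairwise values, so the mapping into the Hilbert space is never formed explicitly.

```python
# Illustrative computation of the Gaussian RBF kernel matrix
# K[i, j] = exp(-gamma * ||x_i - x_j||^2); `gamma` is an assumed example value.
import numpy as np

def rbf_kernel_matrix(X, gamma=1.0):
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(sq_dists, 0.0))
```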

The Training and Prediction of Protein Structural Class

According to the SCOP database, the protein domains generally fall into one of the following four classes: (1) all-α, (2) all-β, (3) α/β, (4) α+β.

According to its amino acid composition, a protein domain can be represented by a point or a vector in a 20-D space. However, of the 20 amino acid composition components, only 19 are independent due to the normalisation condition [11]. Accordingly, strictly speaking, if based on amino acid composition, a protein should be represented by a point or a vector in a 19-D space rather than a 20-D space as conventionally defined. Furthermore, according to Chou's invariance theorem, the final predicted result will remain the same regardless of which one of the 20 components is left out when forming the 19-D space. It is extremely important to realize this, particularly when the calculations involve a covariance matrix, as in the case of refs. [11-14]. For the current study, the amino acid composition was used as the input of the SVM.
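A hedged sketch of this feature construction is shown below; the helper function and the choice of which residue to drop are illustrative, not taken from the original work.

```python
# Compute the amino acid composition of a sequence as a 19-D vector: fractions
# of the 20 standard residues, with one component dropped because the
# composition sums to 1 (by Chou's invariance theorem, which component is
# dropped does not affect the prediction).  The dropped residue is arbitrary.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition_vector(sequence, drop="Y"):
    sequence = sequence.upper()
    counts = {aa: sequence.count(aa) for aa in AMINO_ACIDS}
    total = sum(counts.values()) or 1          # avoid division by zero
    return [counts[aa] / total for aa in AMINO_ACIDS if aa != drop]
```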

The SVM method applies to two-class problems. In this paper, for the four-class problem, we use a simple and effective "one-against-others" method [27,28] to reduce it to a series of two-class problems.
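A minimal sketch of the one-against-others scheme follows; scikit-learn's SVC stands in for SVMlight, and the class names and parameter values are illustrative.

```python
# One-against-others: one binary SVM per structural class (that class vs. all
# the rest); a query domain is assigned to the class whose SVM gives the
# largest decision value.
import numpy as np
from sklearn.svm import SVC

CLASSES = ["all-alpha", "all-beta", "alpha/beta", "alpha+beta"]

def train_one_against_others(X, y):
    # y holds integer labels 0..3 matching the order of CLASSES
    return {c: SVC(kernel="rbf", C=100).fit(X, np.where(y == i, 1, -1))
            for i, c in enumerate(CLASSES)}

def predict_class(models, x):
    scores = {c: m.decision_function(x.reshape(1, -1))[0] for c, m in models.items()}
    return max(scores, key=scores.get)
```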

The computations were carried out on a Silicon Graphics IRIS Indigo work station (Elan 4000).

In this research, the width of the Gaussian RBFs used by the SVM was selected as the value that minimized an estimate of the VC-dimension. The parameter C, which controls the error-margin trade-off, was set to 100. After training, the hyperplane output by the SVM was obtained. This trained model, i.e. the hyperplane output, encodes the relevant information and can therefore be used to identify protein structural classes.
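An illustrative sketch of selecting the RBF width is given below. Note the swap: the original work chose the width by minimizing an estimate of the VC-dimension, whereas this sketch uses plain cross-validation as a simple stand-in, keeping the error-margin parameter fixed at C = 100; the candidate gamma values are hypothetical.

```python
# Scan a few candidate RBF widths (gamma values) by 5-fold cross-validation
# with C fixed at 100; cross-validation here replaces the VC-dimension
# estimate used in the original study.
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def select_rbf_width(X, y, gammas=(0.1, 1.0, 10.0)):
    scores = {g: cross_val_score(SVC(kernel="rbf", C=100, gamma=g), X, y, cv=5).mean()
              for g in gammas}
    return max(scores, key=scores.get)
```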

We first tested the self-consistency of the method and then tested it by cross-validation (jackknife test). As reported above, the rates of both self-consistency and cross-validation were quite high.