Background

Biomarker discovery and prognosis prediction are essential for improved personalized cancer treatment. Microarray technology is a significant tool for gene expression analysis and cancer diagnosis. Typically, microarray data sets are used for class discovery [1, 2] and prediction [3, 4]. The high dimensionality of the input feature space in comparison with the relatively small number of subjects is a widespread concern; hence some form of dimensionality reduction is often applied. Feature selection and feature transformation are two commonly used dimensionality reduction techniques. The key difference between them is that feature selection retains only a subset of the original features, whereas feature transformation generates new features.

In this genomic era, several classification and dimensionality reduction methods are available for analyzing and classifying microarray data. Prediction Analysis of Microarray (PAM) [5] is a statistical technique for class prediction from gene expression data using the Nearest Shrunken Centroid (NSC). PAM identifies subsets of genes that best characterize each class. LS-SVM is a promising method for classification because of its solid mathematical foundations, which convey several salient properties that other methods hardly provide. A commonly used technique for feature selection, the t-test, assumes that the feature values from two different classes follow normal distributions. Several studies, especially in microarray analysis, have used the t-test and LS-SVM together to improve the prediction performance by selecting key features [6, 7]. The Least Absolute Shrinkage and Selection Operator (Lasso) [8] is often used for gene selection and parameter estimation in high-dimensional microarray data [9]. The Lasso shrinks some of the coefficients to zero, and the extent of shrinkage is determined by a tuning parameter, often chosen by cross-validation.

Inductive learning systems have been successfully applied in a number of medical domains, e.g. in localization of primary tumors, prognosis of recurring breast cancer, diagnosis of thyroid diseases, and rheumatology [10]. An induction algorithm is used to learn a classifier, which maps the space of feature values into the set of class values. This classifier is later used to classify new instances whose class labels are unknown. Researchers and practitioners realize that the effective use of these inductive learning systems requires data preprocessing before a learning algorithm can be applied [11]. Due to the instability of feature selection techniques, it might be difficult or even impossible to remove irrelevant and/or redundant features from a data set. Feature transformation techniques, such as KPCA, discover a new feature space with fewer dimensions through a functional mapping, while preserving as much information as possible from the data set.

KPCA, a nonlinear generalization of PCA, is a dimensionality reduction technique that has proven to be a powerful preprocessing step for classification algorithms. It has been studied intensively in recent years in the field of machine learning and has claimed success in many applications [12]. KPCA was proposed by Schölkopf and Smola [14], who mapped the feature set to a high-dimensional (possibly infinite-dimensional) feature space and applied Mercer’s theorem. An algorithm for classification using KPCA was developed by Liu et al. [13]. Suykens et al. [15, 16] proposed a simple and straightforward primal-dual support vector machine formulation of the PCA problem.

To perform KPCA, the user first transforms the input data $x$ from the original input space $F_0$ into a higher-dimensional feature space $F_1$ with a nonlinear map $x \mapsto \Phi(x)$, where $\Phi$ is a nonlinear function. Then a kernel matrix $K$ is formed using the inner products of the new feature vectors. Finally, PCA is performed on the centralized $K$, which is an estimate of the covariance matrix of the new feature vectors in $F_1$. One of the most commonly used kernel functions is the radial basis function (RBF) kernel $K(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2h^2}\right)$ with bandwidth $h$. Traditionally the optimal parameters of the RBF kernel (bandwidth and number of principal components) are selected in a trial-and-error fashion.
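As a minimal sketch of these steps (our own illustration in Python with numpy, assuming an RBF kernel; not the Matlab/LS-SVMlab code used later in the paper), one can form the kernel matrix, center it, and extract the leading components as follows:

    import numpy as np

    def rbf_kernel_matrix(X, h):
        # Pairwise squared Euclidean distances between the rows of X (N x d)
        sq = np.sum(X**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-d2 / (2.0 * h**2))

    def kpca_scores(X, h, k):
        N = X.shape[0]
        K = rbf_kernel_matrix(X, h)
        # Center the kernel matrix in feature space
        J = np.eye(N) - np.ones((N, N)) / N
        Kc = J @ K @ J
        # Eigendecomposition; keep the k leading components
        eigval, eigvec = np.linalg.eigh(Kc)
        idx = np.argsort(eigval)[::-1][:k]
        return Kc @ eigvec[:, idx]      # score variables of the training samples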

Pochet et al. [17] proposed an optimization algorithm for KPCA with an RBF kernel followed by Fisher Discriminant Analysis (FDA) to find the parameters of KPCA. In this case, the parameter selection is coupled with the corresponding classifier. This means that the performance of the final procedure depends on the chosen classifier. Such a procedure could produce inaccurate results in the case of weak classifiers. In addition, tuning the parameters of KPCA in this way is time consuming.

Most classification methods have an inherent problem with the high dimensionality of microarray data and hence require dimensionality reduction. The ultimate goal of our work is to design a powerful preprocessing step, decoupled from the classification method, for high-dimensional data sets. In this paper, we first explain an SVM approach to PCA and an LS-SVM approach to KPCA. Next, following the idea of least squares cross-validation in kernel density estimation, we propose a new data-driven bandwidth selection criterion for KPCA. The tuned LS-SVM formulation of KPCA is applied to several data sets and serves as a dimensionality reduction technique for a final classification task. In addition, we compared the proposed strategy with an existing optimization algorithm for KPCA, as well as with other preprocessing steps. Finally, for the sake of comparison, we applied LS-SVM on the whole data sets, PCA+LS-SVM, t-test + LS-SVM, PAM and Lasso. Randomizations on all data sets are carried out in order to get a more reliable idea of the expected performance.

Data sets

In our analysis, we collected 11 publicly available binary class data sets (diseased vs. normal). The data sets are: colon cancer data [18, 19], breast cancer data [20], pancreatic cancer premalignant data [21, 22], cervical cancer data [23], acute myeloid leukemia data [24], ovarian cancer data [21], head & neck squamous cell carcinoma data [25], early-early stage Duchenne muscular dystrophy (EDMD) data [26], HIV encephalitis data [27], high grade glioma data [28], and breast cancer data [29]. In the breast cancer data [29] and the high grade glioma data [28], all samples have already been assigned to a training set or a test set. The breast cancer data in [29] contains missing values; those values were imputed using the nearest neighbor method.

An overview of the characteristics of all the data sets can be found in Table 1. In all cases, 2/3 of the data samples of each class are assigned randomly to the training set and the rest to the test set. These randomizations are the same for all numerical experiments on all data sets. The split was stratified to ensure that the relative proportion of outcomes sampled in both training and test sets was similar to the original proportion in the full data set. In all these cases, the data were standardized to zero mean and unit variance.

Table 1 Summary of the 11 binary disease data sets
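A minimal sketch of this splitting and standardization step (assuming scikit-learn, purely as an illustration; the experiments themselves were run in Matlab, and here we standardize with training-set statistics as one reasonable choice):

    import numpy as np
    from sklearn.model_selection import train_test_split

    def split_and_standardize(X, y, seed):
        # Stratified 2/3 - 1/3 split, preserving the class proportions
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=1/3, stratify=y, random_state=seed)
        # Standardize to zero mean and unit variance (training statistics)
        mu, sd = X_tr.mean(axis=0), X_tr.std(axis=0)
        sd[sd == 0] = 1.0
        return (X_tr - mu) / sd, (X_te - mu) / sd, y_tr, y_te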

Methods

The methods used to set up the case studies can be subdivided into two categories: dimensionality reduction using the proposed criterion and subsequent classification.

SVM formulation to linear PCA

Given a training set $\{x_i\}_{i=1}^{N}$ of $N$ data points with $x_i \in \mathbb{R}^{d}$ ($d$-dimensional data), one aims at finding projected variables $v^{T}x_i$ with maximal variance. The SVM formulation of the PCA problem is given in [30] as follows:

$$\max_{v} \sum_{i=1}^{N} \left( 0 - v^{T} x_{i} \right)^{2}$$

where zero is considered as a single target value. This interpretation of the problem leads to the following primal optimization problem

$$\max_{v, e}\; J_{P}(v, e) = \gamma \frac{1}{2} \sum_{i=1}^{N} e_{i}^{2} - \frac{1}{2} v^{T} v$$

such that

$$e_{i} = v^{T} x_{i}, \quad i = 1, \ldots, N.$$

This formulation states that one considers the difference between $v^{T}x_i$ (the projection of the data points onto the target space) and the value 0 as error variables. The projected variables correspond to what are called score variables. These error variables are maximized for the given $N$ data points while the norm of $v$ is kept small by the regularization term. The value $\gamma$ is a positive real constant. The Lagrangian becomes

$$\mathcal{L}(v, e; \alpha) = \gamma \frac{1}{2} \sum_{k=1}^{N} e_{k}^{2} - \frac{1}{2} v^{T} v - \sum_{k=1}^{N} \alpha_{k} \left( e_{k} - v^{T} x_{k} \right)$$

with conditions for optimality

$$\begin{aligned}
\frac{\partial \mathcal{L}}{\partial v} = 0 &\;\Rightarrow\; v = \sum_{k=1}^{N} \alpha_{k} x_{k}, \\
\frac{\partial \mathcal{L}}{\partial e_{k}} = 0 &\;\Rightarrow\; \alpha_{k} = \gamma e_{k}, \quad k = 1, \ldots, N, \\
\frac{\partial \mathcal{L}}{\partial \alpha_{k}} = 0 &\;\Rightarrow\; e_{k} - v^{T} x_{k} = 0, \quad k = 1, \ldots, N.
\end{aligned}$$

By elimination of the variables e, v one obtains the following symmetric eigenvalue problem:

$$\begin{bmatrix} x_{1}^{T} x_{1} & \cdots & x_{1}^{T} x_{N} \\ \vdots & \ddots & \vdots \\ x_{N}^{T} x_{1} & \cdots & x_{N}^{T} x_{N} \end{bmatrix} \begin{bmatrix} \alpha_{1} \\ \vdots \\ \alpha_{N} \end{bmatrix} = \lambda \begin{bmatrix} \alpha_{1} \\ \vdots \\ \alpha_{N} \end{bmatrix}$$

The vector of dual variables $\alpha = [\alpha_{1}; \ldots; \alpha_{N}]$ is an eigenvector of the Gram matrix and $\lambda = \frac{1}{\gamma}$ is the corresponding eigenvalue. The score variable $z_{n}^{\text{pca}}(x)$ of a sample $x$ on the $n$th eigenvector $\alpha^{(n)}$ becomes

$$z_{n}^{\text{pca}}(x) = v^{T} x = \sum_{i=1}^{N} \alpha_{i}^{(n)} x_{i}^{T} x$$
(1)
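A minimal numpy sketch of this dual linear PCA formulation (our own illustration, assuming zero-mean data): the eigenvectors of the Gram matrix yield the dual variables $\alpha$, and Equation 1 projects a new sample onto the $n$th component.

    import numpy as np

    def svm_pca_score(X_train, x_new, n):
        # Gram matrix of the (assumed zero-mean) training data: entries x_i^T x_j
        G = X_train @ X_train.T
        eigval, alpha = np.linalg.eigh(G)
        order = np.argsort(eigval)[::-1]        # sort components by eigenvalue
        a_n = alpha[:, order[n]]                # dual variables of the n-th component
        # Score variable of Eq. (1): z_n(x) = sum_i alpha_i^(n) x_i^T x
        return np.sum(a_n * (X_train @ x_new))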

LS-SVM approach to KPCA

The PCA problem is interpreted as a one-class modeling problem with a target value equal to zero, around which the variance is maximized. This results in a sum-of-squared-error cost function with regularization. The score variables are taken as additional error variables. We now follow the usual SVM methodology of mapping the $d$-dimensional data from the input space to a high-dimensional feature space, $\phi: \mathbb{R}^{d} \rightarrow \mathbb{R}^{n_h}$, where $n_h$ can be infinite, and apply Mercer’s theorem [31].

Our objective is the following

$$\max_{v} \sum_{k=1}^{N} \left( 0 - v^{T} \left( \phi(x_{k}) - \hat{\mu}_{\phi} \right) \right)^{2}$$
(2)

with $\hat{\mu}_{\phi} = (1/N) \sum_{k=1}^{N} \phi(x_{k})$ and $v$ the eigenvector in the primal space with maximum variance. This formulation states that one considers the difference between $v^{T}\left(\phi(x_{k}) - \hat{\mu}_{\phi}\right)$ (the projection of the data points onto the target space) and the value 0 as error variables. The projected variables correspond to what are called score variables. These error variables are maximized for the given $N$ data points. Next, by adding a regularization term, we also keep the norm of $v$ small. The following optimization problem is then formulated in the primal weight space:

$$\max_{v, e}\; J_{P}(v, e) = \gamma \frac{1}{2} \sum_{k=1}^{N} e_{k}^{2} - \frac{1}{2} v^{T} v$$
(3)

such that

$$e_{k} = v^{T} \left( \phi(x_{k}) - \hat{\mu}_{\phi} \right), \quad k = 1, \ldots, N.$$

The Lagrangian yields

$$\mathcal{L}(v, e; \alpha) = \gamma \frac{1}{2} \sum_{k=1}^{N} e_{k}^{2} - \frac{1}{2} v^{T} v - \sum_{k=1}^{N} \alpha_{k} \left( e_{k} - v^{T} \left( \phi(x_{k}) - \hat{\mu}_{\phi} \right) \right)$$

with conditions for optimality

$$\begin{aligned}
\frac{\partial \mathcal{L}}{\partial v} = 0 &\;\Rightarrow\; v = \sum_{k=1}^{N} \alpha_{k} \left( \phi(x_{k}) - \hat{\mu}_{\phi} \right), \\
\frac{\partial \mathcal{L}}{\partial e_{k}} = 0 &\;\Rightarrow\; \alpha_{k} = \gamma e_{k}, \quad k = 1, \ldots, N, \\
\frac{\partial \mathcal{L}}{\partial \alpha_{k}} = 0 &\;\Rightarrow\; e_{k} - v^{T} \left( \phi(x_{k}) - \hat{\mu}_{\phi} \right) = 0, \quad k = 1, \ldots, N.
\end{aligned}$$

By elimination of variables e and v, one obtains

$$\frac{1}{\gamma} \alpha_{k} - \sum_{l=1}^{N} \alpha_{l} \left( \phi(x_{l}) - \hat{\mu}_{\phi} \right)^{T} \left( \phi(x_{k}) - \hat{\mu}_{\phi} \right) = 0, \quad k = 1, \ldots, N.$$

Defining λ= 1 γ , one obtains the following dual problem

$$\Omega_{c}\, \alpha = \lambda \alpha$$

where $\Omega_{c}$ denotes the centered kernel matrix with $ij$th entry $\Omega_{c,ij} = K(x_{i}, x_{j}) - \frac{1}{N} \sum_{r=1}^{N} K(x_{i}, x_{r}) - \frac{1}{N} \sum_{r=1}^{N} K(x_{j}, x_{r}) + \frac{1}{N^{2}} \sum_{r=1}^{N} \sum_{s=1}^{N} K(x_{r}, x_{s})$.
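The entrywise centering above amounts to subtracting the row and column means of the kernel matrix and adding its grand mean; a small numpy sketch (our own illustration) is:

    import numpy as np

    def centered_kernel(K):
        # Omega_c: K(x_i,x_j) minus row mean, minus column mean, plus grand mean,
        # exactly as in the entrywise formula above (K is assumed symmetric)
        return (K - K.mean(axis=1, keepdims=True)
                  - K.mean(axis=0, keepdims=True) + K.mean())

    # Dual KPCA problem Omega_c alpha = lambda alpha: the eigenvectors of the
    # centered kernel matrix give the dual variables alpha, e.g.
    # eigval, alpha = np.linalg.eigh(centered_kernel(K))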

Data-driven bandwidth selection for KPCA

Model selection is a prominent issue in all learning tasks, especially in KPCA. Since KPCA is an unsupervised technique, formulating a data-driven bandwidth selection criterion is not trivial. Until now, no such data-driven criterion was available to tune the bandwidth ($h$) and number of components ($k$) of KPCA. Typically these parameters are selected by trial and error. Analogously to least squares cross-validation [32, 33] in kernel density estimation, we propose a new data-driven selection criterion for KPCA. Let

$$z_{n}(x) = \sum_{i=1}^{N} \alpha_{i}^{(n)} K(x_{i}, x)$$

where $K(x_{i}, x_{j}) = \exp\left(-\frac{\|x_{i} - x_{j}\|^{2}}{2h^{2}}\right)$ is the RBF kernel with bandwidth $h$; the target is set equal to 0 and $z_{n}(x)$ denotes the score variable of sample $x$ on the $n$th eigenvector $\alpha^{(n)}$. Here, the score variables are expressed in terms of kernel expansions to which every training point contributes. These expansions are typically dense (nonsparse). In Equation 3, KPCA uses an $L_2$ loss function. Here we choose an $L_1$ loss function to induce sparseness in KPCA. By extending the formulation in Equation 3 to the $L_1$ loss function, the following problem is formulated for kernel PCA [34]:

$$\max_{v, e}\; J_{P}(v, e) = \gamma \frac{1}{2} \sum_{k=1}^{N} L_{1}(e_{k}) - \frac{1}{2} v^{T} v$$

such that

$$e_{k} = v^{T} \left( \phi(x_{k}) - \hat{\mu}_{\phi} \right), \quad k = 1, \ldots, N.$$

We propose the following tuning criterion for the bandwidth $h$, which maximizes the $L_1$ loss function of KPCA:

$$J(h) = \operatorname*{argmax}_{h \in \mathbb{R}_{0}^{+}} \int E\left| z_{n}(x) \right| dx,$$
(4)

where $E$ denotes the expectation operator. Maximizing Eq. 4 directly would lead to overfitting, since all the training data are used in the criterion. Instead, we work with a leave-one-out cross-validation (LOOCV) estimate of $z_{n}(x)$ to obtain the optimal bandwidth $h$ of KPCA, which gives projected variables with maximal variance. A finite approximation to Eq. 4 is given by

$$J(h) = \operatorname*{argmax}_{h \in \mathbb{R}_{0}^{+}} \frac{1}{N} \sum_{j=1}^{N} \int \left| z_{n}^{(j)}(x) \right| dx$$
(5)

where $N$ is the number of samples and $z_{n}^{(j)}$ denotes the score variable with the $j$th observation left out. In case the leave-one-out approach is computationally expensive, one could replace it with a leave-$v$-groups-out strategy ($v$-fold cross-validation). The integration can be performed by means of any numerical technique; in our case, we used the trapezoidal rule. The final model with the optimal bandwidth is constructed as follows:

$$\Omega_{c, \hat{h}_{\max}}\, \alpha = \lambda \alpha,$$

where $\hat{h}_{\max} = \operatorname*{argmax}_{h \in \mathbb{R}_{0}^{+}} \frac{1}{N} \sum_{j=1}^{N} \int \left| z_{n}^{(j)}(x) \right| dx$. Figure 1 shows the bandwidth selection for the cervical and colon cancer data sets for a fixed number of components. To also retain the optimal number of components of KPCA, we modify Eq. 5 as follows:

$$J(h, k) = \operatorname*{argmax}_{h \in \mathbb{R}_{0}^{+},\, k \in \mathbb{N}} \frac{1}{N} \sum_{n=1}^{k} \sum_{j=1}^{N} \int \left| z_{n}^{(j)}(x) \right| dx$$
(6)
Figure 1. Bandwidth selection of KPCA for a fixed number of components: retaining (a) 5 components for the cervical cancer data set and (b) 15 components for the colon cancer data set.

where $k = 1, \ldots, N$. Figure 2 illustrates the proposed model. Figure 3 shows the surface plot of Eq. 6 for various values of $h$ and $k$.

Figure 2. Data-driven bandwidth selection for KPCA: leave-one-out cross-validation (LOOCV) for KPCA.

Figure 3. Model selection for KPCA (optimal bandwidth and number of components). (a) Cervical cancer; (b) colon cancer.

Thus, the proposed data-driven model can obtain the optimal bandwidth for KPCA while retaining the minimum number of eigenvectors that capture the majority of the variance in the data. Figure 4 shows a slice of the surface plots. The values of the proposed criterion were re-scaled to a maximum of 1. The parameters that maximize Eq. 6 are $h = 70.71$ and $k = 5$ for the cervical cancer data and $h = 43.59$ and $k = 15$ for the colon cancer data.

Figure 4. Slice plot of the model selection criterion for KPCA for the optimal bandwidth. (a) Cervical cancer; (b) colon cancer.
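The following Python sketch outlines one possible implementation of the leave-one-out criterion of Eq. 6. The discretization of the integral over $x$ is not fully specified above, so we assume a user-supplied set of evaluation points and apply the trapezoidal rule over them; this is an illustration of the loop structure under that assumption, not a reproduction of the authors' Matlab implementation.

    import numpy as np

    def rbf_kernel(A, B, h):
        # K(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 h^2))
        d2 = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2.0 * A @ B.T)
        return np.exp(-d2 / (2.0 * h**2))

    def loo_criterion(X, h, k, eval_points):
        # Approximate J(h, k) of Eq. 6; eval_points is an assumed
        # discretization of the integral over x.
        N = X.shape[0]
        total = 0.0
        for j in range(N):
            Xj = np.delete(X, j, axis=0)                # leave sample j out
            K = rbf_kernel(Xj, Xj, h)
            Kc = (K - K.mean(1, keepdims=True)
                    - K.mean(0, keepdims=True) + K.mean())
            eigval, alpha = np.linalg.eigh(Kc)          # dual eigenvectors
            order = np.argsort(eigval)[::-1][:k]        # k leading components
            Ke = rbf_kernel(Xj, eval_points, h)         # kernel at eval points
            for n in order:
                z = alpha[:, n] @ Ke                    # z_n^{(j)} at eval points
                total += np.trapz(np.abs(z))            # trapezoidal rule
        return total / N

    # The bandwidth h and number of components k maximizing loo_criterion
    # can then be selected, e.g. by a simple grid search.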

Classification models

The constrained optimization problem for an LS-SVM [16, 35] for classification has the following form:

$$\min_{w, b, e}\; \frac{1}{2} w^{T} w + \gamma \frac{1}{2} \sum_{k=1}^{N} e_{k}^{2}$$

subject to:

$$y_{k} \left( w^{T} \phi(x_{k}) + b \right) = 1 - e_{k}, \quad k = 1, \ldots, N$$

where $\phi(\cdot): \mathbb{R}^{d} \rightarrow \mathbb{R}^{d_h}$ is a nonlinear function which maps the $d$-dimensional input vector $x$ from the input space to the $d_h$-dimensional (possibly infinite-dimensional) feature space. In the dual space the solution is given by

$$\begin{bmatrix} 0 & y^{T} \\ y & \Omega + \frac{I}{\gamma} \end{bmatrix} \begin{bmatrix} b \\ \beta \end{bmatrix} = \begin{bmatrix} 0 \\ 1_{N} \end{bmatrix}$$

with $y = [y_{1}, \ldots, y_{N}]^{T}$, $1_{N} = [1, \ldots, 1]^{T}$, $e = [e_{1}, \ldots, e_{N}]^{T}$, $\beta = [\beta_{1}, \ldots, \beta_{N}]^{T}$ and $\Omega_{ij} = y_{i} y_{j} K(x_{i}, x_{j})$, where $K(x_{i}, x_{j})$ is the kernel function. The classifier in the dual space takes the form

$$y(x) = \operatorname{sign}\left( \sum_{k=1}^{N} \beta_{k} y_{k} K(x, x_{k}) + b \right)$$
(7)

where $\beta_{k}$ are the Lagrange multipliers.
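A small numpy sketch of the dual solution (our own illustration, not the LS-SVMlab toolbox used in the experiments): the linear system above is solved for $b$ and $\beta$, and Eq. 7 is then evaluated on new samples.

    import numpy as np

    def lssvm_train(K, y, gamma):
        # Solve [[0, y^T], [y, Omega + I/gamma]] [b; beta] = [0; 1_N]
        N = len(y)
        Omega = (y[:, None] * y[None, :]) * K       # Omega_ij = y_i y_j K(x_i, x_j)
        A = np.zeros((N + 1, N + 1))
        A[0, 1:] = y
        A[1:, 0] = y
        A[1:, 1:] = Omega + np.eye(N) / gamma
        rhs = np.concatenate(([0.0], np.ones(N)))
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]                      # bias b and dual variables beta

    def lssvm_predict(K_new, y, b, beta):
        # K_new[k, m] = K(x_k, x_m_new); classifier of Eq. 7
        return np.sign((beta * y) @ K_new + b)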

Results

First we considered nine of the data sets described in Table 1. We chose the RBF kernel $K(x_{i}, x_{j}) = \exp\left(-\frac{\|x_{i} - x_{j}\|^{2}}{2h^{2}}\right)$ for KPCA. In this section all the steps were implemented using Matlab R2012b and the LS-SVMlab v1.8 toolbox [36]. Next, we compared the performance of the proposed method with classical PCA and with an existing tuning algorithm for RBF-KPCA developed by Pochet et al. [17]. Later, with the intention to comprehensively compare PCA+LS-SVM and KPCA+LS-SVM with other classification methods, we applied four widely used classifiers to the microarray data, namely LS-SVM on the whole data sets, t-test + LS-SVM, PAM and Lasso. To fairly compare kernel functions of the LS-SVM classifier, linear, RBF and polynomial kernel functions are used (referred to as linear/poly/RBF in Table 2). The average test accuracies and execution times for all these methods when applied to the 9 case studies are shown in Table 2 and Table 3, respectively. Statistical significance test results (two-sided signed rank test) comparing the performance of KPCA with the other classifiers are given in Table 4. For all these methods, training on 2/3 of the samples and testing on 1/3 of the samples was repeated 30 times.

Table 2 Comparison of classifiers: Mean AUC(std) of 30 iterations
Table 3 Summary of averaged execution time of classifiers over 30 iterations in seconds
Table 4 Statistical significance test which compares KPCA with other classifiers: whole data, PCA, t-test, PAM and Lasso

Comparison between the proposed criterion and PCA

For each data set, the proposed methodology is applied. This methodology consists of two steps. First, Eq. 6 is maximized in order to obtain an optimal bandwidth $h$ and the corresponding number of components $k$. Second, the reduced data set is used to perform a classification task with LS-SVM. We retained 5 and 15 components for the cervical and colon cancer data sets, respectively. For PCA, the optimal number of components was selected by slightly modifying Equation 6, i.e., optimizing only over the number of components $k$, as follows:

$$J(k) = \operatorname*{argmax}_{k \in \mathbb{N}} \frac{1}{N} \sum_{n=1}^{k} \sum_{j=1}^{N} \int \left| z_{n}^{\text{pca}(j)}(x) \right| dx$$
(8)

where $z_{n}^{\text{pca}}(x)$ is the score of sample $x$ in the PCA problem (see Equation 1).

Figure 5 shows the plots for the selection of the optimal number of components for PCA. Thus we retained 13 and 15 components for the cervical and colon cancer data sets, respectively, for PCA. Similarly, we obtained the number of components for PCA and the number of components with the corresponding bandwidth for KPCA for the remaining data sets.

Figure 5. Selection of the optimal number of components for PCA. (a) Cervical cancer; (b) colon cancer.

The score variables (projections of the samples onto the directions of the selected principal components) are used to develop an LS-SVM classification model. The averaged test AUC values over the 30 random repetitions are reported.

The main goal of PCA is the reduction of dimensionality, that is, focusing on a few principal components (PCs) rather than many variables. Several criteria have been proposed for determining how many PCs should be retained and how many should be ignored. One common criterion is to include all PCs up to a predetermined total percentage of explained variance, such as 95%. Figure 6 depicts the prediction performance on the colon cancer data with PCA+LS-SVM (RBF) at different fractions of explained total variance. It shows that the results vary with the selected components: the number of retained components depends on the chosen fraction of explained total variance. The proposed approach offers a data-driven selection criterion for the PCA problem, instead of the traditional trial-and-error PC selection.

Figure 6. Prediction performance on the colon cancer data with PCA+LS-SVM (RBF). The number of selected components depends on the chosen fraction of explained total variance.
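For contrast with the data-driven criterion proposed above, a small numpy sketch of this conventional explained-variance rule (our own illustration) is:

    import numpy as np

    def n_components_for_variance(X, fraction=0.95):
        Xc = X - X.mean(axis=0)
        # Singular values give the variance explained by each PC
        s = np.linalg.svd(Xc, compute_uv=False)
        explained = (s**2) / np.sum(s**2)
        # Smallest number of PCs whose cumulative explained variance
        # reaches the chosen fraction
        return int(np.searchsorted(np.cumsum(explained), fraction) + 1)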

Comparison between the proposed criterion and an existing optimization algorithm for RBF-KPCA

We selected two experiments from Pochet et al. [17] (the last two data sets in Table 1), namely the high-grade glioma and breast cancer II data sets. We repeated the same experiments as reported in Pochet et al. [17] and compared them with the proposed strategy. The results are shown in Table 5. The three-dimensional surface plot of the LOOCV performance of the method proposed by [17] for the high-grade glioma data set is shown in Figure 7, with optimal parameters $h = 114.018$ and $k = 12$. For the same data set, the proposed strategy (see Eq. 6) obtains $h = 94.868$ and $k = 10$.

Table 5 Comparison of performance of proposed criterion with the method proposed by Pochet et al. [[17]]: Averaged test AUC(std) over 30 iterations and execution time in minutes
Figure 7. LOOCV performance of the optimization algorithm of [17] on the high-grade glioma data set.

Looking at the test AUC in Table 5, in both case studies the proposed strategy performs better than the method proposed by Pochet et al. [17], with less variability. In addition, the tuning method of Pochet et al. [17] appears to be quite time consuming, whereas the proposed model enjoys a distinctive advantage with its low time complexity for carrying out the same process.

Comparison between the proposed criterion and other classifiers

In Table 4, we have highlighted the comparisons in which the proposed method was significantly better. Looking specifically at the performance of each of the discussed methods, we note that LS-SVM performance was slightly lower with PCA. On data sets IV, VI and VII the proposed approach performs better than LS-SVM with the RBF kernel and LS-SVM with the linear kernel. The proposed approach is outperformed by t-test + LS-SVM on data sets V and VI, and by both PAM and Lasso on most of the data sets.

Discussions

The test AUC values obtained by the different classifiers on nine data sets do not lead to a common conclusion that one method outperforms the others. Instead, they show that each of these methods has its own advantages in classification tasks. When considering classification problems without dimensionality reduction, the regularized LS-SVM classifier shows a good performance on 50% of the data sets. Up till now, most microarray data sets are small in terms of the number of features and samples, but it is expected that these data sets might become larger or perhaps represent more complex classification problems in the future. In that situation, dimensionality reduction (feature selection and feature transformation) is an essential step for building stable, robust and interpretable classifiers on this kind of data.

The features selected by feature selection methods such as the t-test, PAM and Lasso vary widely across random iterations. Moreover, the classification performance of these methods in each iteration depends on the number of features selected. Table 6 shows the range, i.e. the minimum and maximum number of features selected over 30 iterations. Although PAM is a user-friendly toolbox for gene selection and classification tasks, its performance depends heavily on the selected features. In addition, it is interesting that the Lasso selected only very small subsets of the actual data sets. However, in the Lasso the amount of shrinkage varies with the value of the tuning parameter, which is often determined by cross-validation [37]. The number of genes selected as outcome-predictive genes generally decreases as the value of the tuning parameter increases. The optimal value of the tuning parameter, which maximizes the prediction accuracy, is determined; however, the set of genes identified using this optimal value contains non-outcome-predictive genes (i.e., false positive genes) in many cases [9].

Table 6 Summary of the range (minimum to maximum) of features selected over 30 iterations
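As an illustration of this dependence on the tuning parameter (a sketch using scikit-learn's Lasso on the binary labels, purely for illustration; not the implementation used in the experiments), one can count the selected genes for a range of tuning parameter values:

    import numpy as np
    from sklearn.linear_model import Lasso

    def n_selected_genes(X, y, alphas):
        # Number of non-zero coefficients (selected genes) as a function of
        # the tuning parameter alpha; counts typically shrink as alpha grows
        counts = []
        for a in alphas:
            model = Lasso(alpha=a, max_iter=10000).fit(X, y)
            counts.append(int(np.sum(model.coef_ != 0)))
        return counts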

The test AUC on all nine case studies shows that KPCA performs better than classical PCA, but the parameters of KPCA need to be optimized. Here we used a LOOCV approach for the parameter selection (bandwidth and number of components) of KPCA. In the optimization algorithm proposed by Pochet et al. [17], the combination of KPCA with an RBF kernel followed by FDA tends to result in overfitting. The proposed parameter selection criterion for KPCA with an RBF kernel often results in test set performances (see Table 4) that are better than those of KPCA with a linear kernel as reported in Pochet et al. This means that LOOCV in the proposed parameter selection criterion does not lead to overfitting for KPCA with an RBF kernel function. In addition, the optimization algorithm proposed by Pochet et al. is completely coupled with the subsequent classifier and thus appears to be very time-consuming.

In combination with classification methods, microarray data analysis can be useful to guide clinical management in cancer studies. In this study, several mathematical and statistical techniques were evaluated and compared in order to optimize the performance of clinical predictions based on microarray data. Considering the possibility of increasing size and complexity of microarray data sets in the future, dimensionality reduction and nonlinear techniques have their own significance. In many specific application contexts the best feature set is still important (e.g. drug discovery). Considering the stability and performance (both accuracy and execution time) of the classifiers, the proposed methodology has its own importance for predicting the classes of future samples of known disease cases.

Finally, this work could be extended further to uncover key features from biological data sets. In several studies, KPCA has been used to obtain biologically relevant features such as genes [38, 39] or to detect the association between multiple SNPs and disease [40]. In all these cases, one needs to address the parameter optimization of KPCA. The available bandwidth selection techniques for KPCA are time-consuming with a high computational burden. This could be resolved with the proposed data-driven bandwidth selection criterion for KPCA.

Conclusion

The objective of class prediction with microarray data is an accurate classification of cancerous samples, which allows directed and more successful therapies. In this paper, we proposed a new data-driven bandwidth selection criterion for KPCA (a well-defined preprocessing technique). In particular, we optimize the bandwidth and the number of components by maximizing the projected variance of KPCA. In addition, we compared several data preprocessing techniques prior to classification. In all the case studies, most of these preprocessing steps performed well on classification, with approximately similar performance. We observed that with feature selection methods the selected features vary widely in each iteration. Hence it is difficult, if not impossible, to design a stable class predictor for future samples with these methods. Experiments on nine data sets show that the proposed strategy provides a stable preprocessing algorithm for the classification of high-dimensional data with good performance on test data.

The advantages of the proposed KPCA+LS-SVM classifier were presented in four aspects. First, we proposed a data-driven bandwidth selection criterion for KPCA, tuning the optimal bandwidth and the number of principal components. Second, we illustrated that the performance of the proposed strategy is significantly better than that of an existing optimization algorithm for KPCA. Third, its classification performance is not sensitive to the number of selected genes, so the proposed method is more stable than others proposed in the literature. Fourth, it reduces the dimensionality of the data while keeping as much information as possible of the original data. This leads to computationally less expensive and more stable results for massive microarray classification.