Application of the Stochastic Gradient Method in the Construction of the Principal Components of PCA in the Task of Diagnosing Multiple Sclerosis in Children
Many medical problems are characterized by high-dimensional feature spaces, which makes the task of pattern recognition troublesome. This is the well-known phenomenon called the curse of dimensionality. Such problems motivate various methods of dimensionality reduction, based on either selection or extraction of features. The method most commonly used in the literature for the latter is principal component analysis (pca). A natural limitation of this method is that it applies only to linear spaces. It is therefore a natural problem to extend the pca concept to nonlinear feature spaces, to optimize the selection of features for the principal components, and to include class labels in the task of supervised learning. From the perspective of machine learning, an important problem is not only the reduction of features and attributes but also the separation of classes. The developed method was tested in two computer experiments using real data on multiple sclerosis in children. The discussed problem, by the very nature of the data, is important because it can contribute to practical implementations in medical diagnostics. The purpose of the research is to develop a feature extraction method that applies the stochastic gradient method to the task of diagnosing multiple sclerosis in children. This solution could increase the quality of classification and may thus become the basis for building systems that support medical diagnostics in the recognition of multiple sclerosis in children.
Keywords: Principal component analysis · Stochastic gradient · Pattern recognition · Multiple sclerosis
Nowadays machine learning techniques are used in ever more fields, such as broadly understood medicine, neuroimaging, image classification and detection of network attacks. These fields produce huge amounts of data with many attributes. Paradoxically, such a large dose of information does not improve the quality of algorithms, and the data itself is expensive to acquire and store. This created the need for methods that reduce the size of the data without degrading (or even while improving) the quality of classifiers. The reason why more information does not mean better classification is the so-called curse of dimensionality, described for the first time by Richard Bellman. As dimensions are added, the distances between points keep increasing, and so does the number of objects needed for proper generalization. It is estimated that for linear classifiers this number increases linearly with dimensionality, and quadratically for quadratic algorithms. Even worse is the case of non-parametric classifiers, such as neural networks or those using radial basis functions, where the number of objects needed for proper generalization increases exponentially. This problem is sometimes called "small n, large p".
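The growth of distances described above is easy to observe numerically. The following sketch (illustrative only; `distance_stats` is a helper name introduced here, and the points are uniformly random) shows that as the dimension grows, the mean pairwise distance between random points increases while the relative spread of the distances shrinks, so all points become roughly equidistant:

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

def distance_stats(d, n=200):
    """Mean and std of pairwise Euclidean distances between n random points in [0, 1]^d."""
    pts = rng.random((n, d))
    dists = pdist(pts)  # all n*(n-1)/2 unique pairwise distances
    return dists.mean(), dists.std()

for d in (2, 10, 100, 1000):
    mean, std = distance_stats(d)
    print(f"d={d:4d}  mean distance={mean:6.2f}  relative spread={std / mean:.3f}")
```

The shrinking relative spread is exactly what hurts distance-based classifiers such as k-nn: nearest and farthest neighbours become almost indistinguishable.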
The curse of dimensionality results in the Hughes phenomenon. For a fixed number of samples, recognition accuracy may first increase as attributes are added, but it decreases once the number of attributes exceeds a certain optimal value. Besides the growing distances between samples, this is also caused by noise in the data and insignificant features. Feature selection and feature extraction (reduction) are used to reduce the dimensionality of the data. Feature selection chooses a subset of the features used for classification, while feature extraction transforms (e.g., linearly) the feature space.
Principal Component Analysis belongs to the projection methods. The goal of projection methods is to find a mapping from the original space with d dimensions to a new space with \(k \le d\) dimensions that minimizes the loss of information.
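For the linear case, this projection can be sketched in a few lines (plain NumPy; `pca` here is an illustrative helper, not the implementation used in the paper): center the data, take the k eigenvectors of the covariance matrix with the largest eigenvalues, and project onto them.

```python
import numpy as np

def pca(X, k):
    """Project X (n samples x d features) onto its k leading principal components."""
    Xc = X - X.mean(axis=0)                  # center the data
    cov = np.cov(Xc, rowvar=False)           # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # sort descending by explained variance
    components = eigvecs[:, order[:k]]       # d x k projection matrix
    return Xc @ components, components

X = np.random.default_rng(1).normal(size=(100, 5))
Z, W = pca(X, 2)
print(Z.shape)  # (100, 2)
```

The columns of `W` are orthonormal directions of maximal variance; keeping the top k of them is the projection that minimizes mean-squared reconstruction error.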
One of the disadvantages of pca is that it uses a linear transformation, which makes it unsuitable for more complex spaces. The solution to this problem may be to develop a basic algorithm with the so-called kernel trick, getting kpca (Kernel Principal Component Analysis).
In order to solve a non-linear problem, one would first have to transform the input space X into a certain high-dimensional space F using a function \(\phi (x)\), and then, e.g., calculate the scalar product \(\langle \phi (x), \phi (x')\rangle \). However, this would be computationally expensive. Therefore, one chooses a kernel function \(k(x, x') = \langle \phi (x), \phi (x')\rangle \) for some transformation \(\phi \), which avoids evaluating \(\phi \) explicitly. One of the models using this trick is, e.g., the svm classifier.
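The kernel trick applied to pca can be sketched as follows (an illustrative NumPy version with an RBF kernel; the kernel choice is an assumption for the example). Note that every step operates only on the Gram matrix K, never on \(\phi (x)\) itself:

```python
import numpy as np

def rbf_kernel_pca(X, k, gamma=1.0):
    """Kernel PCA with the RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-gamma * sq)                               # Gram (kernel) matrix
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one            # center data in feature space
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:k]                   # top-k eigenpairs
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas                                    # projections of the samples
```

Because the centered Gram matrix is double-centered, the projected components automatically have zero mean, just as in ordinary centered pca.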
Another idea for developing pca is, for example, using class labels as in the development of the Karhunen-Loève transform, or carrying out feature selection in the space obtained by pca. In addition to the standard pca, new versions are often created to suit specific problems. One such variation is SuperPCA, used in the classification problem of hyperspectral imaging. The method combines pca with a superpixel segmentation algorithm.
Another interesting development of pca is the dipca (Dynamic Inner PCA) method, also used in process monitoring, but focusing on the aspect of data dynamics. Its goal is to maximize the covariance between components and their earlier values. It accomplishes this by extracting a model of dynamic hidden variables on which standard pca is then performed.
When it comes to supervised methods, lda is still widely used. An example of the use of linear discriminant analysis is the already mentioned feature extraction for the task of cancer recognition based on microscopic tissue images. A team from India used a different approach to diagnose lung cancer, with computed tomography images as input. In that study, lda was used to reduce the dimensionality of the data before classification with an Optimal Deep Neural Network. The results showed an improvement in quality compared to previously used classifiers.
Another proposed method is the factor-rotation-modified ccpca analysis, in which the authors proposed factor rotation with respect to decision-making centroids. The method was used to assess the risk of lymphocytic leukaemia.
The article presents a new gpca concept for building the principal components in the pca method. For this purpose, the stochastic-gradient optimization method was used.
The developed gpca method can be used in non-linear feature spaces; other kernel functions may be proposed depending on the class of the problem. In this article we consider the linear case.
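Since the exact gpca update rule is not spelled out here, the following is a hedged sketch of stochastic-gradient estimation of principal components in the linear case. It uses an Oja-style update, a standard stochastic-gradient scheme for pca, as a stand-in for the authors' gpca objective; `sgd_pca` and all its parameters are illustrative names:

```python
import numpy as np

def sgd_pca(X, k, lr=0.01, epochs=50, seed=0):
    """Estimate k leading principal components with a stochastic-gradient (Oja-style) update."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    d = Xc.shape[1]
    W, _ = np.linalg.qr(rng.normal(size=(d, k)))   # random orthonormal start
    for _ in range(epochs):
        for x in rng.permutation(Xc):              # one sample at a time
            y = W.T @ x                            # project the sample
            W += lr * (np.outer(x, y) - W @ np.outer(y, y))  # Oja update step
        W, _ = np.linalg.qr(W)                     # re-orthonormalize for stability
    return W
```

Each stochastic step pulls the columns of W toward directions of high variance while the correction term keeps them from collapsing onto each other; the periodic QR step guards against numerical drift.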
3 Experimental Set-Up
The aim of the research is to build a feature extraction method that will allow more accurate classification of children with multiple sclerosis. The problem is important because predicting the development of the disease is an extremely difficult process. Often, only appropriately selected variables allow for accurate classification of children into certain risk groups. The developed method gives a chance to build a tool that will support the physician in diagnostics, and thus can contribute to the correct diagnosis and treatment of children. Because multiple sclerosis initially does not give clear-cut symptoms, well-chosen variables and risk groups can improve the quality of classification. This goal became the most important reason for undertaking research on the construction of the extraction model that will form the basis for classification using known algorithms. Similar studies have already been conducted, and the developed ccpca method has found real application in the classification of people with lymphocytic leukaemia. Particular attention was paid to the newly developed gpca concept, which focuses on optimizing the factor rotation axes using the gradient method.
A real-world dataset was used in our research. The data relate to the prognosis of multiple sclerosis in children and contain two classes: poor prognosis and good prognosis. The classes contain equal numbers of respondents, so the data are balanced.
In the experiments, several feature extraction methods known from the literature were compared, including: pca (Principal Component Analysis), kpca (Kernel Principal Component Analysis), ccpca (Centroid Class Principal Component Analysis), fa (Factor Analysis), ica (Independent Component Analysis), and gpca (Gradient Component Analysis), which is the proprietary method proposed in this article.
Two experiments were performed, in which the accuracy score was verified in succession for three classifiers: svm (Support Vector Machine), rf (Random Forest) and k-nn (k-Nearest Neighbours).
The accuracy score metric was used to assess the quality of classification. The Wilcoxon signed-rank test at the statistical significance level \(\alpha = 0.05\) was used to assess the differences in accuracy between methods and algorithms. Five-fold stratified cross-validation was used in all experiments.
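This evaluation protocol can be illustrated as follows (a sketch on synthetic data, since the clinical dataset is not public; `make_classification` stands in for the real data, and plain pca stands in for the compared extraction methods):

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # five-fold stratified CV

def cv_scores(model):
    """Accuracy score on each of the five stratified folds."""
    scores = []
    for train, test in skf.split(X, y):
        model.fit(X[train], y[train])
        scores.append(model.score(X[test], y[test]))
    return np.array(scores)

raw = cv_scores(make_pipeline(StandardScaler(), SVC()))
reduced = cv_scores(make_pipeline(StandardScaler(), PCA(n_components=5), SVC()))
print(f"no extraction: {raw.mean():.3f}   pca: {reduced.mean():.3f}")
if np.any(raw != reduced):  # Wilcoxon requires at least one non-zero difference
    print(f"Wilcoxon signed-rank p-value: {wilcoxon(raw, reduced).pvalue:.3f}")
```

Pairing the per-fold accuracies of two pipelines in the signed-rank test, as above, is what the significance claims in the experiments rest on.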
4 Experimental Evaluation
The conducted research was divided into two experiments, where the results of the second experiment depend on the first. In the first experiment, the number of principal components that explains a set threshold of total variance was determined experimentally for the pca, ccpca and gpca methods. Thanks to this approach, we control the selection of principal components, and thus the number of features that will form the basis of the classification. The thresholds for which the best classifications were obtained were carried over to the second experiment.
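The threshold selection described above can be sketched as follows (an illustrative helper on synthetic data): take the smallest number of components whose cumulative explained-variance ratio reaches the threshold.

```python
import numpy as np

def components_for_threshold(X, threshold):
    """Smallest number of principal components whose cumulative explained-variance ratio reaches threshold."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # variances, descending
    ratio = np.cumsum(eigvals) / eigvals.sum()
    return int(np.argmax(ratio >= threshold - 1e-9) + 1)  # first index reaching the threshold

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 10)) @ rng.normal(size=(10, 10))  # correlated features
for t in (0.01, 0.7, 1.0):
    print(f"threshold {t}: {components_for_threshold(X, t)} components")
```

A threshold of 1 keeps all the variance (all components), while a very small threshold keeps only the single leading component, matching the extreme cases discussed in Experiment 1.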
4.1 Experiment 1 - Determining the Quality of the Classification Depending on the Threshold of Total Explained Variance
The results of Experiment 1 show that for each of the pca, ccpca and gpca methods there is a threshold of total variance at which the quality of all classifiers is the highest. These thresholds are consistent, and the best rates of correct classification for each pca method and classification algorithm are within 68–72%. It should be noted that for threshold 1 all features are taken for classification. In the case of 0.01, there is only one principal component, which combines one to three attributes. For the 0.7 threshold, there are 3 principal components. Also note that there is a slight drift in the results for different but nearby thresholds. Nevertheless, the matching of attributes to principal components keeps improving, which leads to the interesting conclusion that as the total variance threshold increases, the quality of matching attributes to the principal components increases. Figure 2 shows which features were assigned to a given principal component. The basis for assigning features to principal components was a factor loading value of \(\lambda > 0.6\). The results indicate that decision class 1 of the problem is fitted better by component 1, while class 2 is classified better by the set of features in components 2 and 3. Based on the gpca method, the features Z7, Z8, Z10, Z12, Z14 and Z18, which do not make a significant contribution to explaining the object classes, were rejected.
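The assignment of features to components by the \(\lambda > 0.6\) loading rule can be sketched as follows (`assign_features` is an illustrative helper using correlation-style loadings, not the paper's exact procedure):

```python
import numpy as np

def assign_features(X, n_components, threshold=0.6):
    """Assign each feature Z1..Zd to the components where its |loading| exceeds threshold."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]
    # loading = correlation of feature with component = eigvector * sqrt(eigenvalue) / feature std
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order]) / Xc.std(axis=0, ddof=1)[:, None]
    return {f"component_{c + 1}": [f"Z{f + 1}" for f in range(X.shape[1])
                                   if abs(loadings[f, c]) > threshold]
            for c in range(n_components)}

rng = np.random.default_rng(5)
shared = rng.normal(size=(200, 1))                       # latent factor driving Z1 and Z2
X = np.hstack([shared + 0.1 * rng.normal(size=(200, 1)),
               shared + 0.1 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1))])               # Z3 independent of the factor
print(assign_features(X, 2))
```

On this toy data, the two correlated features load strongly on the first component while the independent feature does not, mirroring how weakly loading features like Z7 or Z8 end up rejected.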
4.2 Experiment 2. Determining the Quality of Classification for Various Methods of Feature Extraction
The results of the experiments for the binary case are reported with the accuracy-score metric. The columns present the algorithms, where no denotes the lack of extraction of an object's features.
The first significant conclusion from the research is that after extraction with any of the methods, the quality of classification with each of the three algorithms increased statistically significantly (\(p<0.05\)). In the task of feature extraction, the best results are obtained with the gpca and ccpca methods, whose classification qualities were statistically comparable. The kpca and fa methods do not differ significantly from each other. For the rf and k-nn algorithms, the ica method also gave improved results.
The purpose of the work was to develop a feature extraction method based on updating the eigenvalue and eigenvector matrices. In this task, the stochastic gradient method was used, with a regression function as the objective. The study was conducted on a balanced dataset describing the prognosis of children with multiple sclerosis. During the analysis, it was possible to create a model that gives promising results for this task. Two experiments were carried out. The first covered the estimation of the gpca model parameters, i.e. the threshold of total explained variance giving the best quality of classification and the assignment of variables to the principal components. In Experiment 2, the quality of svm, rf and k-nn classification was tested for various feature extraction methods. The obtained results showed that the best extraction methods are gpca and ccpca. The stochastic gradient method, used to minimize the error in estimating the matrix of eigenvectors, proved to be a good approach. The estimation of gpca components was also carried out separately for each decision class. Although the same sets of features were obtained for each class in each component, the attributes of the training set were matched differently, which in turn contributed to improving the quality of classification. The gpca algorithm proved comparable to the ccpca method, which was based on Varimax rotation normalized with respect to decision-making centroids. The elaborated method was, as already mentioned, tested on real data on ms disease in children; however, it can be applied to other datasets as well. In further research, the developed method will be tested on other learning sets, which will verify its ability to handle various types of data. The biggest practical difficulty in using the stochastic gradient approach is the choice of the algorithm's step size.
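The step-size difficulty noted above is commonly handled with a decaying (Robbins-Monro style) schedule rather than a fixed step. A minimal sketch, with hypothetical constants:

```python
def decayed_lr(lr0, decay, t):
    """1/t-style step size: the steps sum to infinity but their squares do not,
    which is the classical condition for stochastic-gradient convergence."""
    return lr0 / (1.0 + decay * t)

# step size shrinks smoothly as the iteration count t grows
print([round(decayed_lr(0.1, 0.01, t), 4) for t in (0, 100, 1000, 10000)])
# → [0.1, 0.05, 0.0091, 0.001]
```

Early iterations take large exploratory steps; later ones take small steps so the eigenvector estimates settle instead of oscillating around the optimum.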
- 1. Bellman, R.E.: Adaptive Control Processes: A Guided Tour, vol. 2045. Princeton University Press, Princeton (2015)
- 7. Schölkopf, B.: The kernel trick for distances. In: Advances in Neural Information Processing Systems, pp. 301–307 (2001)
- 8. Mao, K.Z.: Identifying critical variables of principal components for unsupervised feature selection. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 35(2), 339–344 (2005)
- 9. Jain, P.M., Shandliya, V.K.: A survey paper on comparative study between principal component analysis (PCA) and exploratory factor analysis (EFA). Int. J. Comput. Sci. Appl. 6(2), 373–375 (2013)
- 15. Topolski, M., Topolska, K.: Algorithm for constructing a classifier team using a modified PCA (Principal Component Analysis) in the task of diagnosis of acute lymphocytic leukaemia type B-CLL. In: Pérez García, H., Sánchez González, L., Castejón Limas, M., Quintián Pardo, H., Corchado Rodríguez, E. (eds.) HAIS 2019. LNCS (LNAI), vol. 11734, pp. 614–624. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29859-3_52
- 16. Bottou, L.: Large-scale machine learning with stochastic gradient descent. In: Proceedings of COMPSTAT'2010, pp. 177–186 (2010)
- 17. Krawczyk, B., Ksieniewicz, P., Woźniak, M.: Hyperspectral image analysis based on color channels and ensemble classifier. In: Polycarpou, M., de Carvalho, A.C.P.L.F., Pan, J.-S., Woźniak, M., Quintian, H., Corchado, E. (eds.) HAIS 2014. LNCS (LNAI), vol. 8480, pp. 274–284. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-07617-1_25