Cluster Computing, Volume 22, Supplement 1, pp 189–196

Performance evaluation of support vector machine classification approaches in data mining

  • S. Chidambaram
  • K. G. Srinivasagan


Abstract

At present, knowledge extraction from a given data set plays a significant role in every field of our society. Feature selection is used to choose a small set of relevant features in order to achieve better classification performance. Existing feature selection algorithms treat the task as a single-objective problem. Attribute selection is carried out by combining an attribute evaluator with a search method in the WEKA machine learning tool. The proposed method proceeds in three phases. In the first phase, support vector classifiers are implemented with four different kernel functions, namely the linear, polynomial, radial basis function and sigmoid kernels, to classify the data items. In the second phase, classifier subset evaluation is applied for feature selection, together with the SVM classifier, to optimize the feature vectors and obtain the maximum accuracy. In the third phase, a new kernel approach is introduced that yields higher classification accuracy than the other four kernel methods. The experimental analysis shows that the SVM with the proposed kernel produces the highest accuracy among the kernel methods considered.


Keywords: Data mining · Kernel methods · Classification · Database · Optimization
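The abstract describes a three-phase workflow implemented in the WEKA tool. As a rough illustration only, the following Python/scikit-learn sketch mirrors that workflow under stated assumptions: the Iris data set stands in for the paper's data, SequentialFeatureSelector stands in for WEKA's classifier subset evaluation with a search method, and the mixed_kernel function is a hypothetical placeholder for the proposed kernel, whose actual form is not given in this excerpt.

```python
# Sketch only: the paper uses the WEKA tool; this is a scikit-learn analogue
# of the described three-phase workflow, not the authors' implementation.
# Dataset, kernel parameters, and the "mixed" custom kernel are illustrative.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Phase 1: compare the four standard SVM kernels by cross-validated accuracy.
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{kernel:8s} accuracy: {acc:.3f}")

# Phase 2: wrapper-style feature selection (analogous to WEKA's classifier
# subset evaluation plus a search method): keep the attribute subset that
# maximises the SVM's cross-validated accuracy.
X_std = StandardScaler().fit_transform(X)
selector = SequentialFeatureSelector(SVC(kernel="rbf"),
                                     n_features_to_select=2, cv=5)
X_sel = selector.fit_transform(X_std, y)

# Phase 3: a custom kernel passed as a callable, standing in for the paper's
# proposed kernel (its true form is not stated here; this simply mixes RBF
# and polynomial terms for illustration).
def mixed_kernel(A, B, gamma=0.5, degree=2):
    # Gram matrix of shape (len(A), len(B)), as scikit-learn expects.
    rbf = np.exp(-gamma * np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1))
    poly = (A @ B.T + 1.0) ** degree
    return rbf + 0.01 * poly

acc = cross_val_score(SVC(kernel=mixed_kernel), X_sel, y, cv=10).mean()
print(f"custom kernel accuracy: {acc:.3f}")
```

A callable kernel in scikit-learn receives the two sample matrices and must return the Gram matrix, which is how any new kernel, including the one proposed in the paper, would be plugged into the same cross-validation loop.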



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of IT, National Engineering College, Kovilpatti, India
