
Coordinating Discernibility and Independence Scores of Variables in a 2D Space for Efficient and Accurate Feature Selection

  • Juanying Xie (Email author)
  • Mingzhao Wang
  • Ying Zhou
  • Jinyan Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9773)

Abstract

Feature selection removes redundant and irrelevant features from the original feature set of exemplars, so that a sparse and representative feature subset can be found for building a more efficient and accurate classifier. This paper presents novel definitions of the discernibility and independence scores of a feature, and constructs a two-dimensional (2D) space, with a feature’s independence as the y-axis and its discernibility as the x-axis, to rank the importance of features. The new method is named FSDI (Feature Selection based on Discernibility and Independence of a feature). The discernibility score of a feature measures its ability to distinguish instances from different classes, while the independence score measures the redundancy of a feature. All features are plotted in the 2D space according to their discernibility and independence coordinates, and the area of the rectangle spanned by a feature’s discernibility and independence is used as the criterion to rank feature importance. The top-k features whose importance is much higher than that of the remaining ones are selected to form the sparse and representative feature subset for building an efficient and accurate classifier. Experimental results on 5 classical gene expression datasets demonstrate that the proposed FSDI algorithm selects gene subsets efficiently and achieves the best classification performance. Our method provides a good solution to the bottleneck caused by the high time complexity of existing gene subset selection algorithms.
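The abstract describes the FSDI ranking scheme only in outline, so the Python sketch below illustrates the idea under stated assumptions. Because the exact score formulas appear only in the full paper, the discernibility score is approximated here by a Fisher-style class-separation ratio and the independence score by one minus a feature’s strongest absolute Pearson correlation with any other feature; the function names (fsdi_rank, discernibility_scores, independence_scores) are illustrative, not the authors’.

```python
# Illustrative sketch of the FSDI ranking idea (assumed score proxies,
# not the paper's exact formulas).
import numpy as np

def discernibility_scores(X, y):
    """Fisher-style class-separation ratio per feature (assumed proxy
    for the paper's discernibility score)."""
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    overall_mean = X.mean(axis=0)
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def independence_scores(X):
    """One minus the strongest absolute correlation with any other
    feature (assumed proxy for the paper's independence score)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    return 1.0 - corr.max(axis=0)

def fsdi_rank(X, y, k):
    """Rank features by the area of the rectangle spanned by their
    (discernibility, independence) coordinates; return top-k indices."""
    d = discernibility_scores(X, y)
    i = independence_scores(X)
    # Normalize both axes to [0, 1] so the rectangle areas are comparable.
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)
    i = (i - i.min()) / (i.max() - i.min() + 1e-12)
    return np.argsort(d * i)[::-1][:k]
```

For a gene expression matrix X of shape (samples, genes) with class labels y, fsdi_rank(X, y, k=50) would return the indices of the 50 genes with the largest rectangle areas in the 2D space.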

Keywords

Discernibility · Independence · Feature selection · Gene subset selection

Acknowledgements

We are much obliged to those who shared the gene expression datasets with us. This work is supported in part by the National Natural Science Foundation of China under Grant No. 31372250, by the Key Science and Technology Program of Shaanxi Province of China under Grant No. 2013K12-03-24, by the Fundamental Research Funds for the Central Universities under Grant No. GK201503067, and by the Innovation Funds of Graduate Programs at Shaanxi Normal University under Grant No. 2015CXS028.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Juanying Xie¹ (Email author)
  • Mingzhao Wang¹
  • Ying Zhou¹
  • Jinyan Li²

  1. School of Computer Science, Shaanxi Normal University, Xiʼan, People’s Republic of China
  2. Faculty of Engineering and Information Technology, University of Technology Sydney, Broadway, Australia
