Feature Selection in Gene Expression Data Using Principal Component Analysis and Rough Set Theory

  • Debahuti Mishra
  • Rajashree Dash
  • Amiya Kumar Rath
  • Milu Acharya
Part of the Advances in Experimental Medicine and Biology book series (AEMB, volume 696)


In many fields, such as data mining, machine learning, pattern recognition, and signal processing, data sets with very large numbers of features are common. Feature selection is therefore an essential preprocessing step for high-dimensional data classification tasks. Traditional dimensionality reduction approaches fall into two categories: feature extraction (FE) and feature selection (FS). Principal component analysis (PCA) is an unsupervised linear FE method that projects high-dimensional data into a low-dimensional space with minimal loss of information; it discovers the directions of maximal variance in the data. The rough set approach to feature selection discovers data dependencies and reduces the number of attributes in a data set using the data alone, requiring no additional information. To select discriminative features from the principal components, rough set theory can be applied jointly with PCA, which helps ensure that the selected principal components are well suited for classification. We call this method Rough PCA. The proposed method is successfully applied to choose the principal features and then to compute the upper and lower approximations, yielding a reduced feature set from gene expression data.
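The two ingredients of the approach described above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: the function names, the discretization of principal-component scores into equivalence classes, and the toy data are all assumptions made for the example.

```python
import numpy as np

def pca_project(X, k):
    """Project data onto the top-k principal components, i.e. the
    directions of maximal variance in the (mean-centered) data."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # eigh returns ascending order
    order = np.argsort(vals)[::-1]            # sort eigenvalues descending
    return Xc @ vecs[:, order[:k]]

def approximations(attr, labels, target_class):
    """Rough-set lower and upper approximations of a class with respect
    to a single discrete attribute. Objects with the same attribute value
    form an equivalence class (block)."""
    lower, upper = set(), set()
    for v in np.unique(attr):
        block = set(np.flatnonzero(attr == v))
        members = {i for i in block if labels[i] == target_class}
        if members == block:   # block lies entirely inside the class
            lower |= block
        if members:            # block overlaps the class at all
            upper |= block
    return lower, upper

# Toy usage: one discretized attribute over five objects.
attr = np.array([0, 0, 1, 1, 2])
labels = np.array(['a', 'a', 'a', 'b', 'b'])
lo, up = approximations(attr, labels, 'a')
# lo == {0, 1}: objects certainly in class 'a'
# up == {0, 1, 2, 3}: objects possibly in class 'a'
```

In the Rough PCA setting, the PCA scores would first be discretized and the approximations used to judge which components discriminate the classes well; components whose equivalence classes give a tight boundary region (upper minus lower approximation) are kept.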


Data preprocessing · Feature selection · Principal component analysis · Rough sets · Lower approximation · Upper approximation



Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Debahuti Mishra (1)
  • Rajashree Dash
  • Amiya Kumar Rath
  • Milu Acharya

  1. Department of Computer Science & Engineering, Institute of Technical Education & Research, Siksha O Anusandhan University, Bhubaneswar, India
