Attribute Selection Based on Correlation Analysis

  • Jatin Bedi
  • Durga Toshniwal
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 645)

Abstract

Feature selection is one of the significant research areas in data mining, pattern recognition, and machine learning. One effective method of feature selection is to determine the distinctive capability of each individual feature: the greater this capability, the more interesting the feature. In addition, the dependency between different features must be considered, since highly dependent features lead to inaccurate analysis or results. To address this problem, we present an approach for attribute selection based on correlation analysis (ASCA) between the features. The algorithm works by iteratively removing the features that are highly dependent on each other. First, we define the concept of multi-collinearity and its application to feature selection. Then, we present a new method for selecting attributes based on correlation analysis. Finally, the proposed approach is tested on benchmark datasets, and the experimental results show that it works better than other existing feature selection algorithms in terms of both accuracy and computational overhead.
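The abstract does not give pseudocode, so the following is a minimal sketch of the general idea it describes: iteratively remove features that are highly correlated with other features. It assumes absolute Pearson correlation as the dependency measure and a user-chosen threshold; the function name `asca_select` and the tie-breaking rule (drop the feature with the higher mean correlation to the remaining ones) are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import pandas as pd

def asca_select(X: pd.DataFrame, threshold: float = 0.9) -> list:
    """Greedy correlation-based attribute selection (illustrative sketch).

    Repeatedly drops one feature from the most strongly correlated pair
    until no absolute pairwise correlation exceeds `threshold`.
    """
    features = list(X.columns)
    while len(features) > 1:
        # Absolute pairwise Pearson correlations of the surviving features.
        corr = X[features].corr().abs()
        mat = corr.to_numpy()
        np.fill_diagonal(mat, 0.0)          # ignore self-correlation
        if mat.max() <= threshold:
            break
        # Locate the most strongly correlated pair.
        i, j = np.unravel_index(mat.argmax(), mat.shape)
        f_i, f_j = features[i], features[j]
        # Drop whichever of the pair is, on average, more correlated with
        # the remaining features (i.e., the more redundant one) -- an
        # assumed tie-breaking heuristic, not specified in the abstract.
        drop = f_i if mat[i].mean() >= mat[j].mean() else f_j
        features.remove(drop)
    return features
```

The threshold controls how aggressively multi-collinear attributes are pruned: a lower value removes more features, at the risk of discarding weakly redundant but still informative ones.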

Keywords

Feature selection · Correlation analysis · Multi-collinearity · Attribute subset

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Department of Computer Science & Engineering, Indian Institute of Technology Roorkee, Roorkee, India
