
Redundant features removal for unsupervised spectral feature selection algorithms: an empirical study based on nonparametric sparse feature graph

  • Pengfei Xu
  • Shuchu Han
  • Hao Huang
  • Hong Qin
Regular Paper

Abstract

For existing unsupervised spectral feature selection algorithms, performance hinges on the quality of the eigenvectors computed from the Laplacian matrix of a similarity graph built over the samples. When these algorithms are applied to high-dimensional data, we face an awkward chicken-and-egg problem: the success of feature selection depends on the quality of the indication vectors, which reflect the structure of the data, yet the purpose of feature selection is to recover a more accurate data structure. To alleviate this problem, we propose a graph-based approach that reduces the dimensionality of the data by automatically searching for and removing redundant features. A sparse graph is generated on the feature side and used to learn the redundancy relationships among features. We name this novel graph the sparse feature graph (SFG). To avoid relying on inaccurate distance information among high-dimensional vectors, the construction of the SFG does not use pairwise relationships among samples, i.e., the structural information of the data is not used. The proposed algorithm is also nonparametric, as it makes no assumption about the data distribution. We treat this redundant feature removal algorithm as a data preprocessing step for popular unsupervised spectral feature selection algorithms such as multi-cluster feature selection (MCFS), which requires accurate cluster structure information estimated from the samples. Experimental results on benchmark datasets show that the proposed SFG and redundant feature removal algorithm consistently improve the performance of these unsupervised spectral feature selection algorithms.
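To make the SFG idea concrete, the following Python sketch illustrates one plausible reading of the approach; it is not the authors' implementation. The sparse representation is computed with orthogonal matching pursuit (in the spirit of the OMP-based sparse graph construction cited in the references), and the parameters n_nonzero_coefs and tol, as well as the greedy remove_redundant helper that stands in for the dense-subgraph step, are illustrative assumptions. Each feature column is reconstructed from the remaining columns; the coefficients define a directed sparse feature graph, and features that are almost perfectly reconstructed by others are dropped before a spectral selector such as MCFS is applied.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.preprocessing import normalize

def sparse_feature_graph(X, n_nonzero_coefs=5):
    # For every feature column, reconstruct it from the remaining columns with OMP.
    # W[j, i] is the coefficient of feature j in the sparse representation of feature i;
    # resid[i] is the reconstruction residual of feature i.
    Xn = normalize(X, axis=0)                      # unit-norm feature columns
    d = Xn.shape[1]
    W = np.zeros((d, d))
    resid = np.zeros(d)
    for i in range(d):
        others = np.delete(np.arange(d), i)
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs, fit_intercept=False)
        omp.fit(Xn[:, others], Xn[:, i])
        W[others, i] = omp.coef_
        resid[i] = np.linalg.norm(Xn[:, i] - omp.predict(Xn[:, others]))
    return W, resid

def remove_redundant(W, resid, tol=0.1):
    # Greedy stand-in for the dense-subgraph step described in the paper: a feature
    # that is almost perfectly reconstructed by others (small residual) is dropped,
    # and the features that reconstruct it are protected so a representative survives.
    status = {}
    for i in np.argsort(resid):                    # most reconstructible features first
        if i in status:
            continue
        if resid[i] < tol:
            status[i] = "drop"
            for j in np.nonzero(np.abs(W[:, i]) > 1e-6)[0]:
                status.setdefault(j, "keep")
        else:
            status[i] = "keep"
    return sorted(int(i) for i, s in status.items() if s == "keep")

# Toy usage: 15 independent features plus 5 near-duplicate (redundant) copies.
rng = np.random.default_rng(0)
base = rng.standard_normal((100, 15))
X = np.hstack([base, base[:, :5] + 0.01 * rng.standard_normal((100, 5))])
W, resid = sparse_feature_graph(X)
print("kept feature indices:", remove_redundant(W, resid))

The kept feature indices would then be passed to an unsupervised spectral selector such as MCFS; only the redundancy-removal preprocessing is sketched here.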

Keywords

Sparse graph representation · Unsupervised spectral feature selection · Dense subgraph

Notes

Acknowledgements

This work is supported by NSF grant IIS-1715985.

References

  1. Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: International Conference on Database Theory, pp. 420–434. Springer, Berlin (2001)
  2. Beyer, K., Goldstein, J., Ramakrishnan, R., Shaft, U.: When is nearest neighbor meaningful? In: International Conference on Database Theory, pp. 217–235. Springer, Berlin (1999)
  3. Cai, D., Zhang, C., He, X.: Unsupervised feature selection for multi-cluster data. In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 333–342. ACM, New York (2010)
  4. Dash, M., Liu, H.: Feature selection for classification. Intell. Data Anal. 1(3), 131–156 (1997)
  5. Du, L., Shen, Y.D.: Unsupervised feature selection with adaptive structure learning. In: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 209–218. ACM, New York (2015)
  6. Dy, J.G., Brodley, C.E.: Feature selection for unsupervised learning. J. Mach. Learn. Res. 5, 845–889 (2004)
  7. Elhamifar, E., Vidal, R.: Sparse subspace clustering: algorithm, theory, and applications. IEEE Trans. Pattern Anal. Mach. Intell. 35(11), 2765–2781 (2013)
  8. Han, S., Qin, H.: A greedy algorithm to construct sparse graph by using ranked dictionary. Int. J. Data Sci. Anal. 2(3), 131–143 (2016). https://doi.org/10.1007/s41060-016-0020-3
  9. He, X., Ji, M., Zhang, C., Bao, H.: A variance minimization criterion to feature selection using Laplacian regularization. IEEE Trans. Pattern Anal. Mach. Intell. 33(10), 2013–2025 (2011)
  10. Hou, C., Nie, F., Li, X., Yi, D., Wu, Y.: Joint embedding learning and sparse regression: a framework for unsupervised feature selection. IEEE Trans. Cybern. 44(6), 793–804 (2014)
  11. Li, J., Cheng, K., Wang, S., Morstatter, F., Trevino, R.P., Tang, J., Liu, H.: Feature selection: a data perspective (2016). http://featureselection.asu.edu/
  12. Koller, D.: Toward optimal feature selection. In: Proceedings of the 13th International Conference on Machine Learning, pp. 284–292. Morgan Kaufmann, Los Altos (1996)
  13. Lee, V.E., Ruan, N., Jin, R., Aggarwal, C.: A survey of algorithms for dense subgraph discovery. In: Managing and Mining Graph Data, pp. 303–336. Springer, Berlin (2010)
  14. Li, Z., Yi, Y., Liu, J., Zhou, X., Lu, H.: Unsupervised feature selection using nonnegative spectral analysis. In: AAAI (2012)
  15. Liu, X., Wang, L., Zhang, J., Yin, J., Liu, H.: Global and local structure preservation for feature selection. IEEE Trans. Neural Netw. Learn. Syst. 25(6), 1083–1095 (2014)
  16. Mairal, J., Yu, B.: Supervised feature selection in graphs with path coding penalties and network flows. J. Mach. Learn. Res. 14(1), 2449–2485 (2013)
  17. Moujahid, A., Dornaika, F.: Feature selection for spatially enhanced LBP: application to face recognition. Int. J. Data Sci. Anal. 5(1), 11–18 (2018). https://doi.org/10.1007/s41060-017-0083-9
  18. Ng, A.Y., Jordan, M.I., Weiss, Y.: On spectral clustering: analysis and an algorithm. In: NIPS, vol. 14, pp. 849–856 (2001)
  19. Peng, H., Long, F., Ding, C.: Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. Pattern Anal. Mach. Intell. 27(8), 1226–1238 (2005)
  20. Robnik-Šikonja, M., Kononenko, I.: Theoretical and empirical analysis of ReliefF and RReliefF. Mach. Learn. 53(1–2), 23–69 (2003)
  21. Song, Q., Ni, J., Wang, G.: A fast clustering-based feature subset selection algorithm for high-dimensional data. IEEE Trans. Knowl. Data Eng. 25(1), 1–14 (2013)
  22. Sturm, B.L., Christensen, M.G.: Comparison of orthogonal matching pursuit implementations. In: Proceedings of the 20th European Signal Processing Conference (EUSIPCO), pp. 220–224. IEEE (2012)
  23. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B (Methodol.) 58, 267–288 (1996)
  24. Tsourakakis, C., Bonchi, F., Gionis, A., Gullo, F., Tsiarli, M.: Denser than the densest subgraph: extracting optimal quasi-cliques with quality guarantees. In: Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 104–112. ACM, New York (2013)
  25. Wang, X., McCallum, A., Wei, X.: Feature selection with integrated relevance and redundancy optimization. In: 2015 IEEE International Conference on Data Mining (ICDM), pp. 697–702. IEEE (2015)
  26. Wang, D., Nie, F., Huang, H.: Feature selection via global redundancy minimization. IEEE Trans. Knowl. Data Eng. 27(10), 2743–2755 (2015)
  27. Weber, R., Schek, H.J., Blott, S.: A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In: VLDB, vol. 98, pp. 194–205 (1998)
  28. Yang, Y., Shen, H.T., Ma, Z., Huang, Z., Zhou, X.: L2,1-norm regularized discriminative feature selection for unsupervised learning. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), vol. 22, p. 1589 (2011)
  29. You, C., Robinson, D.P., Vidal, R.: Scalable sparse subspace clustering by orthogonal matching pursuit. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3918–3927 (2016)
  30. Yu, L., Liu, H.: Efficient feature selection via analysis of relevance and redundancy. J. Mach. Learn. Res. 5, 1205–1224 (2004)
  31. Zhai, H., Haraguchi, M., Okubo, Y., Tomita, E.: A fast and complete algorithm for enumerating pseudo-cliques in large graphs. Int. J. Data Sci. Anal. 2(3), 145–158 (2016). https://doi.org/10.1007/s41060-016-0022-1
  32. Zhao, Z., Wang, L., Liu, H.: Efficient spectral feature selection with minimum redundancy. In: AAAI (2010)
  33. Zhao, Z., Wang, L., Liu, H., Ye, J.: On similarity preserving feature selection. IEEE Trans. Knowl. Data Eng. 25(3), 619–632 (2013)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. College of Information Science and Technology, Beijing Normal University, Beijing, China
  2. Department of Computer Science, Stony Brook University, Stony Brook, USA
  3. Machine Learning Laboratory, General Electric Global Research, San Ramon, USA
