
Feature Weighting by RELIEF Based on Local Hyperplane Approximation

  • Hongmin Cai
  • Michael Ng
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7302)

Abstract

In this paper, we propose a new feature weighting algorithm within the classical RELIEF framework. The key idea is to estimate the feature weights through local approximation rather than the global measurement used in previous methods. The weights obtained by our method are more robust to degradation from noisy features, even when the number of dimensions is huge. To demonstrate the performance of our method, we conduct classification experiments that combine the hyperplane KNN model (HKNN) with the proposed feature weighting scheme. Empirical studies on both synthetic and real-world data sets demonstrate the superior performance of the method for feature selection in supervised learning and the effectiveness of our algorithm.
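The abstract describes the method only at a high level. As a rough illustration, the sketch below assumes a classical RELIEF-style weight update in which the usual nearest hit and nearest miss are replaced by the query point's projections onto local hyperplanes spanned by its k nearest same-class and other-class neighbours (in the spirit of the HKNN model); the function names, the choice of k, and the exact update and normalisation rules are illustrative assumptions, not the authors' published algorithm.

```python
# Illustrative sketch only: RELIEF-style feature weighting where the
# nearest hit/miss are replaced by projections onto local hyperplanes.
# The update rule, k, and normalisation are assumptions, not the
# paper's exact algorithm.
import numpy as np

def local_hyperplane_point(x, neighbors):
    """Closest point to x on the affine hull of `neighbors` (rows = points).

    The hull is parameterised as n0 + V a, with V holding the directions
    n_i - n0; least squares gives the projection coefficients a.
    """
    n0 = neighbors[0]
    V = (neighbors[1:] - n0).T                    # shape (d, k-1)
    a, *_ = np.linalg.lstsq(V, x - n0, rcond=None)
    return n0 + V @ a

def relief_hyperplane_weights(X, y, k=5):
    """RELIEF-style weights with local-hyperplane hits and misses."""
    X, y = np.asarray(X, float), np.asarray(y)
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        x, label = X[i], y[i]
        same = X[(y == label) & (np.arange(n) != i)]
        diff = X[y != label]
        # k nearest same-class and other-class points (Euclidean)
        hit_nb = same[np.argsort(np.linalg.norm(same - x, axis=1))[:k]]
        miss_nb = diff[np.argsort(np.linalg.norm(diff - x, axis=1))[:k]]
        hit = local_hyperplane_point(x, hit_nb)
        miss = local_hyperplane_point(x, miss_nb)
        # Classic RELIEF update, feature by feature: reward separation
        # from the miss hyperplane, penalise distance to the hit one.
        w += np.abs(x - miss) - np.abs(x - hit)
    w = np.maximum(w, 0.0)
    return w / w.sum() if w.sum() > 0 else w
```

Features with larger weights can then be kept, or used to rescale distances in an HKNN-style classifier. Because the hit and miss are local hyperplane projections rather than single samples, the update is less sensitive to any individual noisy neighbour, which is the intuition the abstract appeals to.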

Keywords

Feature weighting · Local hyperplane · RELIEF · Classification · KNN



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Hongmin Cai (1)
  • Michael Ng (2)
  1. South China University of Technology, Guangdong, P.R. China
  2. Department of Mathematics, Hong Kong Baptist University, Hong Kong
