Combining SVMs with Various Feature Selection Strategies

  • Yi-Wei Chen
  • Chih-Jen Lin
Part of the Studies in Fuzziness and Soft Computing book series (STUDFUZZ, volume 207)

Abstract

This article investigates the performance of combining support vector machines (SVMs) with various feature selection strategies. Some are filter-type approaches: general feature selection methods independent of the SVM. Others are wrapper-type methods: modifications of the SVM itself that can be used to select features. We applied these strategies while participating in the NIPS 2003 Feature Selection Challenge, in which our group ranked third.
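To make the filter/wrapper distinction concrete, here is a minimal, hypothetical sketch of a filter-type pipeline: features are ranked by an ANOVA F-score independently of the classifier, the top k are kept, and an RBF-kernel SVM is trained on the reduced data. This uses scikit-learn (with synthetic data) purely for illustration; the scoring function, k, and SVM software used in the challenge differ.

```python
# Filter-type feature selection + SVM sketch (illustrative, not the
# authors' exact procedure). The filter step ignores the SVM entirely.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic data: 100 features, only 10 of which are informative.
X, y = make_classification(n_samples=300, n_features=100,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Filter step: rank features by ANOVA F-score, keep the top 10.
selector = SelectKBest(f_classif, k=10).fit(X_tr, y_tr)

# Train an RBF-kernel SVM on the selected features only.
clf = SVC(kernel="rbf", C=1.0).fit(selector.transform(X_tr), y_tr)
acc = clf.score(selector.transform(X_te), y_te)
print(f"test accuracy with top-10 features: {acc:.2f}")
```

A wrapper-type method would instead use the SVM itself to judge feature subsets, e.g. by retraining and scoring the model for each candidate subset, which is more expensive but tailored to the classifier.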

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Yi-Wei Chen (1)
  • Chih-Jen Lin (1)
  1. Department of Computer Science, National Taiwan University, Taipei, Taiwan