Embedded variable selection method using signomial classification
We propose two variable selection methods using signomial classification. We attempt to select, from the set of input variables, those variables that lead to the best performance of the classifier. One method repeatedly removes variables via backward selection, whereas the other directly selects a subset of variables by solving an optimization problem. Both methods perform variable selection while accounting for nonlinear interactions among variables and yield a signomial classifier built on the selected variables. Computational results show that, compared with existing methods, the proposed methods select variables that are more effective for predicting the output and provide classifiers with better or comparable test error rates. A minimal sketch of the backward-selection variant follows the abstract.
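The abstract does not spell out the algorithms, so the following is only a rough illustrative sketch of the backward-selection variant, not the paper's method. It assumes a generic scikit-learn-style classifier (plain logistic regression stands in for the signomial classifier, whose decision function is, in general, the sign of a signomial f(x) = Σ_j c_j ∏_i x_i^{a_ij} with real exponents) and uses cross-validated accuracy as the removal criterion; both choices are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def backward_selection(X, y, make_classifier, min_features=1, cv=5):
    """Greedy backward elimination over the columns of X.

    At each step, tentatively drop each remaining variable and keep the
    removal that yields the best cross-validated accuracy; stop when no
    single removal improves on the current score.
    """
    selected = list(range(X.shape[1]))
    best_score = cross_val_score(make_classifier(), X[:, selected], y, cv=cv).mean()
    while len(selected) > min_features:
        trials = []
        for j in selected:
            subset = [k for k in selected if k != j]
            score = cross_val_score(make_classifier(), X[:, subset], y, cv=cv).mean()
            trials.append((score, j))
        score, j = max(trials)  # removal that leaves the best score
        if score < best_score:
            break                # every single removal hurts; stop
        best_score = score
        selected.remove(j)
    return selected, best_score

# Toy usage: 8 variables, output driven by an interaction of the first two.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))
y = (X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(200) > 0).astype(int)
keep, acc = backward_selection(X, y, lambda: LogisticRegression(max_iter=1000))
print("selected variables:", keep, "cv accuracy: %.3f" % acc)
```

In the paper's setting, the surrogate classifier and scoring step would be replaced by the signomial classifier itself, which is what allows the selection to reflect nonlinear interactions among variables rather than only individual linear effects.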
Keywords: Classification problems · Variable selection · Embedded method · Signomial classification
This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (2013-025297).
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest. Funding information is provided in the acknowledgements above.
- Bache, K., & Lichman, M. (2013). University of california, irvine (UCI) machine learning repository. http://archive.ics.uci.edu/ml.
- Bay, S. D. (1998). Combining nearest neighbor classifiers through multiple feature subsets. In Proceedings of the 15th international conference on machine learning (ICML ’98, pp. 37–45). Madison, WI: Morgan Kaufmann Publishers.Google Scholar
- Bertsimas, D., & Tsitsiklis, J. N. (1997). Introduction to linear optimization. No. 6 in Athena scientific series in optimization and neural computation. Belmont: Athena Scientific, MAMSC.Google Scholar
- Bi, J., Bennett, K., Embrechts, M., Breneman, C., & Song, M. (2003). Dimensionality reduction via support vector machines. Journal of Machine Learning Research, 3, 1229–1243.Google Scholar
- Biesiada, J., & Duch, W. (2007). Feature selection for high-dimensional data—a Pearson redundancy based filter. In Computer recognition systems 2, advances in soft computing (Vol. 45, pp. 242–249). NewYork: Springer.Google Scholar
- Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees. Belmont, CA: Wadsworth International Group.Google Scholar
- Canu, S., Grandvalet, Y., Guigue, V., & Rakotomamonjy, A. (2005). SVM and kernel methods matlab toolbox. INSA de Rouen, Rouen: Perception Systemes et Information.Google Scholar
- Cun, Y. L., Denker, J. S., & Solla, S. A. (1989). Optimal brain damage. In Proceedings of the 2nd annual conference on neural information processing systems (NIPS ’89, pp. 598–605). Morgan Kaufmann Publishers: Denver, CO.Google Scholar
- Dash, M., Choi, K., Scheuermann, P., & Liu, H. (2002). Feature selection for clustering—a filter solution. In Proceedings of the 2nd international conference on data mining (ICDM ’02, pp. 115–122). Maebashi: IEEE Computer Society.Google Scholar
- Garey, M. R., & Johnson, D. S. (1979). Computers and intractability: A guide to the theory of NP-completeness., A series of books in the mathematical sciences New York, NY: W. H. Freeman and Company.Google Scholar
- Grandvalet, Y., & Canu, S. (2002). Adaptive scaling for feature selection in SVMs. In Proceedings of the 15th annual conference on neural information processing systems (NIPS ’02, pp. 553–560). Vancouver, BC: MIT Press.Google Scholar
- Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3, 1157–1182.Google Scholar
- Hermes, L., & Buhmann, J. M. (2000). Feature selection for support vector machines. In Proceedings of the 15th international conference on pattern recognition (ICPR ’00, Vol. 2, pp. 716–719). Barcelona: IEEE Computer Society .Google Scholar
- Hosmer, D., & Lemeshow, S. (2005). Applied logistic regression (2nd ed.)., Wiley series in probability and statistics New York, NY: Wiley.Google Scholar
- Hsu, C. W., Chang, C. C, & Lin, C. J. (2003). A practical guide to support vector classification. Technical report, Department of Computer Science, National Taiwan University, Taipei 106, Taiwan.Google Scholar
- Jebara, T., & Jaakkola, T. (2000). Feature selection and dualities in maximum entropy discrimination. In Proceedings of the 16th conference on uncertainty in artificial intelligence (UAI ’00, pp. 291–300). Stanford, CA: Morgan Kaufmann Publishers.Google Scholar
- Kohavi, R., & Sommerfield, D. (1995). Feature subset selection using the wrapper method: Overfitting and dynamic search space topology. In Proceedings of the 1st international conference on knowledge discovery and data mining (KDD ’95, pp. 192–197). Montreal, QC: AAAI Press.Google Scholar
- Lal, T. N., Chapelle, O., Weston, J., & Elisseeff, A. (2006). Feature extraction: Foundations and applications (Studies in Fuzziness and Soft Computing), chap 5. Embedded methods (Vol. 207, pp. 137–165). Berlin: Springer.Google Scholar
- MATLAB (2010). version 7.10.0 (R2010a). The MathWorks Inc., Natick, MA, USA.Google Scholar
- Perkins, S., Lacker, K., & Theiler, J. (2003). Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3, 1333–1356.Google Scholar
- Rakotomamonjy, A. (2003). Variable selection using SVM based criteria. Journal of Machine Learning Research, 3, 1357–1370.Google Scholar
- Rivals, I., & Personnaz, L. (2003). MLPs (mono layer polynomials and multi layer perceptrons) for nonlinear modeling. Journal of Machine Learning Research, 3, 1383–1398.Google Scholar
- Stoppiglia, H., Dreyfus, G., Dubois, R., & Oussar, Y. (2003). Ranking a random feature for variable and feature selection. Journal of Machine Learning Research, 3, 1399–1414.Google Scholar
- Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society Series B (Methodological), 58(1), 267–288.Google Scholar
- Tipping, M. E. (2001). Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1, 211–244.Google Scholar
- Torkkola, K. (2003). Feature extraction by non-parametric mutual information maximization. Journal of Machine Learning Research, 3, 1415–1438.Google Scholar
- Weston, J., Mukherjee, S., Chapelle, O., Pontil, M., Poggio, T., & Vapnik, V. (2000). Feature selection for SVMs. In Proceedings of the 13th annual conference on neural information processing systems (NIPS ’00, pp. 563–532). Denver, CO: MIT PressGoogle Scholar
- Weston, J., Elisseeff, A., Schölkopf, B., & Tipping, M. (2003). Use of the zero-norm with linear models and kernel methods. Journal of Machine Learning Research, 3, 1439–1461.Google Scholar
- Weston, J., Elisseeff, A., BakIr, G., & Sinz, F. (2006). Spider toolbox. http://people.kyb.tuebingen.mpg.de/spider.
- Xpress (2016). Xpress 7.9. http://www.fico.com/en.
- Yu, L., & Liu, H. (2003). Feature selection for high-dimensional data: A fast correlation-based filter solution. In Proceedings of the 20th international conference on machine learning (ICML ’03, pp. 56–63). Washington, DC: AAAI Press.Google Scholar