Less Biased Measurement of Feature Selection Benefits
In feature selection, classification accuracy typically needs to be estimated in order to guide the search towards useful subsets. It has been shown earlier that such estimates should not be used directly to determine the optimal subset size, or the benefit of choosing the optimal set, because of overfitting: the estimates tend to be optimistically biased. An outer loop of cross-validation has previously been suggested to combat this problem. However, this paper points out that a straightforward implementation of such an approach still yields biased estimates of the increase in accuracy obtainable by selecting the best-performing subset. In addition, two methods are suggested that circumvent this problem and give virtually unbiased results while adding almost no computational overhead.
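The bias described above can be illustrated with a minimal simulation (this is an illustrative sketch, not the paper's proposed method): when many candidate subsets all have the same true accuracy, the cross-validation score of the *winning* subset is inflated simply because the maximum over many noisy estimates is taken, whereas re-estimating the winner on independent data (as an outer loop does) removes that inflation. The constants `TRUE_ACC`, `N_SUBSETS`, and `N_RUNS` are assumptions chosen for the demonstration.

```python
import random

random.seed(0)

TRUE_ACC = 0.70   # assumed true accuracy shared by every candidate subset
N_SUBSETS = 50    # number of feature subsets examined by the search
N_RUNS = 1000     # independent repetitions of the whole experiment

def noisy_estimate(true_acc, n_samples=100):
    # Model a cross-validation estimate as the mean of n_samples
    # Bernoulli(true_acc) outcomes, one per validation instance.
    hits = sum(random.random() < true_acc for _ in range(n_samples))
    return hits / n_samples

inner_best = []   # score of the winning subset, reported directly (biased)
outer = []        # fresh estimate of the winner on independent data (unbiased)
for _ in range(N_RUNS):
    scores = [noisy_estimate(TRUE_ACC) for _ in range(N_SUBSETS)]
    best = max(range(N_SUBSETS), key=scores.__getitem__)
    inner_best.append(scores[best])       # max over noisy estimates
    outer.append(noisy_estimate(TRUE_ACC))  # independent re-evaluation

print(round(sum(inner_best) / N_RUNS, 3))  # optimistic, clearly above 0.70
print(round(sum(outer) / N_RUNS, 3))       # close to the true 0.70
```

Note that the outer-loop estimate is unbiased for the accuracy of the *selected* subset, but, as the paper argues, estimating the *gain* from selection needs further care.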
Keywords: Feature Selection · Outer Loop · Feature Subset · Feature Selection Algorithm · Subset Size