Rough Set Feature Selection Methods for Case-Based Categorization of Text Documents
Textual case bases can contain thousands of features in the form of tokens or words, which can inhibit classification performance. Recent developments in rough set theory and its applications to feature selection offer promising approaches for reducing the number of features. We adapt two rough set feature selection methods for use on n-ary class text categorization problems, and we introduce a new method that computes the union of the features selected from randomly partitioned training subsets. A comparative evaluation against a conventional method on the Reuters-21578 data set shows that our method, aided by its randomized training-set partitions, dramatically decreases training time without compromising classification accuracy.
Keywords: Feature Selection · Training Time · Information Gain · Feature Selection Method · Conditional Attribute
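The partition-and-union selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes binary term-occurrence features, uses information gain as the per-subset scoring criterion, and the function names (`information_gain`, `union_of_subset_selections`) and parameters (`n_parts`, `top_m`) are hypothetical.

```python
import random
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, f):
    """IG of binary feature f (term present/absent) w.r.t. the labels."""
    present = [labels[i] for i, r in enumerate(rows) if r[f]]
    absent = [labels[i] for i, r in enumerate(rows) if not r[f]]
    n = len(labels)
    conditional = sum(len(part) / n * entropy(part)
                      for part in (present, absent) if part)
    return entropy(labels) - conditional

def union_of_subset_selections(rows, labels, n_parts=3, top_m=10, seed=0):
    """Randomly partition the training cases into n_parts subsets,
    select the top_m features (by IG) within each subset, and return
    the union of the per-subset selections."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)          # randomized partition
    parts = [idx[i::n_parts] for i in range(n_parts)]
    n_features = len(rows[0])
    selected = set()
    for part in parts:
        sub_rows = [rows[i] for i in part]
        sub_labels = [labels[i] for i in part]
        gains = sorted(((information_gain(sub_rows, sub_labels, f), f)
                        for f in range(n_features)), reverse=True)
        selected.update(f for _, f in gains[:top_m])
    return selected
```

Because each subset is a fraction of the full training set, per-subset feature scoring is much cheaper than scoring over all cases at once, which is the source of the training-time reduction the abstract reports.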