Rough Set Feature Selection Algorithms for Textual Case-Based Classification
Feature selection algorithms can reduce the high dimensionality of textual cases and increase case-based task performance. However, conventional algorithms (e.g., information gain) are computationally expensive. We previously showed that, on one dataset, a rough set feature selection algorithm can reduce computational complexity without sacrificing task performance. Here we test the generality of our findings on additional feature selection algorithms, add a second dataset, and improve our empirical methodology. We observed that features of textual cases vary in their contribution to task performance based on their part of speech, and adapted the algorithms to include a part-of-speech bias as background knowledge. Our evaluation shows that injecting this bias significantly increases task performance for rough set algorithms, and that one of them attained significantly higher classification accuracies than information gain. We also confirmed that, under some conditions, randomized training partitions can dramatically reduce training times for rough set algorithms without compromising task performance.
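The core rough set machinery behind this line of work can be illustrated with a toy example. The sketch below builds a discernibility matrix for a small decision table and approximates a reduct with Johnson's greedy covering heuristic; the data, function names, and heuristic choice are illustrative assumptions, not the paper's actual algorithms or datasets.

```python
# Hedged sketch: rough set feature selection via a discernibility matrix,
# with Johnson's greedy heuristic to approximate a reduct. Toy decision
# table; not the paper's algorithm or data.

def discernibility_matrix(rows, labels):
    """For each pair of objects with different class labels, record the
    set of feature indices on which the two objects differ."""
    entries = []
    for i in range(len(rows)):
        for j in range(i + 1, len(rows)):
            if labels[i] != labels[j]:
                diff = {f for f in range(len(rows[i]))
                        if rows[i][f] != rows[j][f]}
                if diff:
                    entries.append(diff)
    return entries

def greedy_reduct(entries):
    """Johnson's heuristic: repeatedly pick the feature that covers the
    most remaining discernibility entries until all are covered."""
    reduct = set()
    remaining = list(entries)
    while remaining:
        counts = {}
        for entry in remaining:
            for f in entry:
                counts[f] = counts.get(f, 0) + 1
        best = max(counts, key=counts.get)
        reduct.add(best)
        remaining = [e for e in remaining if best not in e]
    return reduct

# Toy decision table: 4 objects, 3 binary features, binary class.
rows = [(1, 0, 1), (1, 1, 1), (0, 0, 0), (0, 1, 1)]
labels = [1, 1, 0, 0]
print(sorted(greedy_reduct(discernibility_matrix(rows, labels))))  # → [0]
```

Here feature 0 alone discerns every pair of objects with different labels, so the greedy heuristic selects just that feature. A part-of-speech bias of the kind the abstract describes could be injected by restricting the candidate features (e.g., to terms tagged as nouns) before the matrix is built.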
Keywords: Feature Selection · Information Gain · Feature Selection Algorithm · Discernibility Matrix · Reduce Training Time