On the Behavior of SVM and Some Older Algorithms in Binary Text Classification Tasks
Document classification has been widely studied: some studies compare feature selection techniques or feature space transformations, while others compare the performance of different algorithms. Recently, following the rising interest in the Support Vector Machine (SVM), several studies have shown that SVM outperforms other classification algorithms. Should we therefore stop considering other classification algorithms and always opt for SVM?
We investigate this question by comparing SVM to kNN and naive Bayes on binary classification tasks. An important requirement is to compare optimized versions of these algorithms, which is what we have done. Our results show that all three classifiers achieve comparable performance on most problems. One surprising result is that SVM is not a clear winner, despite quite good overall performance. When suitable preprocessing is used with kNN, this algorithm continues to achieve very good results and scales up well with the number of documents, which is not the case for SVM. Naive Bayes likewise achieves good performance.
Keywords: Support Vector Machine · Feature Space · Text Categorization · Inverse Document Frequency · Optimal Hyperplane
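The comparison described in the abstract (TF-IDF preprocessing followed by SVM, kNN, and naive Bayes on a binary text classification task) can be sketched as follows. This is a minimal illustrative sketch using scikit-learn; the toy corpus, classifier parameters, and library choice are assumptions for demonstration, not the paper's actual experimental setup.

```python
# Illustrative sketch: TF-IDF preprocessing, then three classifiers
# (SVM, kNN, naive Bayes) on a toy binary text classification task.
# The corpus and parameters below are hypothetical, not from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny two-class corpus: label 1 = sports, label 0 = politics.
docs = [
    "the team won the match", "a great goal in the final game",
    "the striker scored twice", "fans cheered the home team",
    "parliament passed the new bill", "the senator proposed a law",
    "voters went to the polls", "the minister gave a speech",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

classifiers = {
    "SVM": LinearSVC(),
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "naive Bayes": MultinomialNB(),
}

# Each classifier is trained on the same TF-IDF representation,
# so differences in predictions reflect the algorithms themselves.
for name, clf in classifiers.items():
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(docs, labels)
    print(name, model.predict(["the team scored a goal"])[0])
```

Training all three models on an identical TF-IDF feature space mirrors the paper's premise that a fair comparison requires the same (suitably optimized) preprocessing for every algorithm.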