Discrimination-Based Feature Selection for Multinomial Naïve Bayes Text Classification

  • Jingbo Zhu
  • Huizhen Wang
  • Xijuan Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4285)


In this paper we address the problem of class discrimination to improve the performance of text classification, and study a discrimination-based feature selection technique in which features are selected by the criterion of enlarging the separation among competing classes, referred to as discrimination capability. The proposed approach discards features with small discrimination capability, measured by Gaussian divergence, so as to enhance the robustness and discrimination power of the text classification system. To evaluate its performance, comparative experiments with a multinomial naïve Bayes classifier are conducted on the Newsgroups and Reuters-21578 data collections. Experimental results show that on the Newsgroups data set the divergence measure outperforms the MI (mutual information) measure and performs slightly better than the DF (document frequency) measure, and that it outperforms both measures on the Reuters-21578 data set. This indicates that discrimination-based feature selection contributes substantially to enhancing the discrimination power of a text classification model.
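The core idea of the abstract can be sketched as follows: score each feature by how far apart its class-conditional distributions lie, then keep only the highest-scoring features. As an illustrative assumption (the paper's exact "Gaussian divergence" formula is not given here), this sketch fits a univariate Gaussian to each feature per class for a binary problem and ranks features by the symmetric KL divergence between the two Gaussians; `divergence_scores` and `select_features` are hypothetical helper names.

```python
import numpy as np

def gaussian_kl(mu1, var1, mu2, var2):
    # KL( N(mu1, var1) || N(mu2, var2) ) for univariate Gaussians
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def divergence_scores(X, y):
    """Symmetric Gaussian divergence of each feature between two classes (0 and 1)."""
    eps = 1e-9  # guard against zero variance
    X0, X1 = X[y == 0], X[y == 1]
    mu0, var0 = X0.mean(axis=0), X0.var(axis=0) + eps
    mu1, var1 = X1.mean(axis=0), X1.var(axis=0) + eps
    # symmetrize: KL(p||q) + KL(q||p), one score per feature column
    return gaussian_kl(mu0, var0, mu1, var1) + gaussian_kl(mu1, var1, mu0, var0)

def select_features(X, y, k):
    """Indices of the k features with the largest divergence (discrimination capability)."""
    scores = divergence_scores(X, y)
    return np.argsort(scores)[::-1][:k]
```

A feature whose distribution is nearly identical across classes gets a score near zero and is discarded, which is the "discard features with small discrimination capability" step described above; the multi-class case would aggregate pairwise divergences over competing classes.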


Keywords: Feature Selection · Mutual Information · Feature Subset · Feature Selection Method · Text Classification




  1. Lewis, D., Schapire, R., Callan, J., Papka, R.: Training Algorithms for Linear Text Classifiers. In: Proceedings of ACM SIGIR, pp. 298–306 (1996)
  2. Joachims, T.: Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In: Nédellec, C., Rouveirol, C. (eds.) ECML 1998. LNCS, vol. 1398, pp. 137–142. Springer, Heidelberg (1998)
  3. Lewis, D.: A Comparison of Two Learning Algorithms for Text Categorization. In: Symposium on Document Analysis and IR (1994)
  4. Nigam, K., Lafferty, J., McCallum, A.: Using Maximum Entropy for Text Classification. In: IJCAI 1999 Workshop on Machine Learning for Information Filtering, pp. 61–67 (1999)
  5. McCallum, A., Nigam, K.: A Comparison of Event Models for Naive Bayes Text Classification. In: AAAI 1998 Workshop on Learning for Text Categorization (1998)
  6. Yang, Y., Pedersen, J.O.: A Comparative Study on Feature Selection in Text Categorization. In: 14th International Conference on Machine Learning, pp. 412–420 (1997)
  7. Jain, A., Zongker, D.: Feature Selection: Evaluation, Application, and Small Sample Performance. IEEE Transactions on Pattern Analysis and Machine Intelligence 19(2), 153–158 (1997)
  8. Su, K.Y., Lee, C.H.: Speech Recognition Using Weighted HMM and Subspace Projection Approach. IEEE Transactions on Speech and Audio Processing 2(1), 69–79 (1994)
  9. Tou, J.T., Gonzalez, R.C.: Pattern Recognition Principles. Addison-Wesley, Reading (1974)
  10. Bressan, M., Vitria, J.: On the Selection and Classification of Independent Features. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(10), 1312–1317 (2003)
  11. Schneider, K.-M.: A New Feature Selection Score for Multinomial Naïve Bayes Text Classification Based on KL-Divergence. In: 42nd Annual Meeting of the Association for Computational Linguistics (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Jingbo Zhu (1)
  • Huizhen Wang (1)
  • Xijuan Zhang (1)

  1. Natural Language Processing Laboratory, Institute of Computer Software and Theory, Northeastern University, Shenyang, P.R. China
