Eliminating High-Degree Biased Character Bigrams for Dimensionality Reduction in Chinese Text Categorization

  • Dejun Xue
  • Maosong Sun
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2997)

Abstract

High dimensionality of the feature space is a major obstacle for Text Categorization (TC). In a candidate feature set consisting of Chinese character bigrams, there exist a number of bigrams that are high-degree biased with respect to character frequencies. These bigrams are likely to survive the feature selection process because of their strength in discriminating documents. However, most of them are useless for document categorization because they are weak at representing document content. This paper first defines a criterion for identifying high-degree biased Chinese bigrams. Then, two schemes, called σ-BR1 and σ-BR2, are proposed to deal with these bigrams: the former directly eliminates them from the feature set, whereas the latter replaces them with the corresponding significant characters involved. Experimental results show that the high-degree biased bigrams should be eliminated from the feature set, and that the σ-BR1 scheme is quite effective for further dimensionality reduction in Chinese text categorization after a feature selection process with a CHI-CIG score function.
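
As a rough illustration of the two schemes, the following Python sketch filters a candidate bigram feature set after feature selection. The abstract does not give the bias criterion or say which character counts as "significant", so the frequency-ratio threshold σ and the choice of the more frequent character below are placeholder assumptions, not the paper's actual definitions.

    from collections import Counter

    # Hypothetical sketch, not the paper's formulation: the bias criterion
    # (a frequency-ratio threshold sigma) and the choice of the more frequent
    # character as the "significant" one are assumptions for illustration.

    def char_frequencies(documents):
        """Count character occurrences over a corpus of Chinese documents."""
        freq = Counter()
        for doc in documents:
            freq.update(doc)
        return freq

    def is_biased(bigram, freq, sigma=10.0):
        """Assumed criterion: one character of the bigram occurs at least
        sigma times more often than the other in the corpus."""
        fa, fb = freq[bigram[0]], freq[bigram[1]]
        if min(fa, fb) == 0:
            return True
        return max(fa, fb) / min(fa, fb) >= sigma

    def sigma_br1(features, freq, sigma=10.0):
        """sigma-BR1: eliminate high-degree biased bigrams from the feature set."""
        return [f for f in features
                if len(f) != 2 or not is_biased(f, freq, sigma)]

    def sigma_br2(features, freq, sigma=10.0):
        """sigma-BR2: replace each biased bigram with its (assumed) significant
        character, i.e. the more frequent of the two; duplicates are merged."""
        reduced = set()
        for f in features:
            if len(f) == 2 and is_biased(f, freq, sigma):
                a, b = f[0], f[1]
                reduced.add(a if freq[a] >= freq[b] else b)
            else:
                reduced.add(f)
        return sorted(reduced)

In this reading, σ-BR1 simply drops the biased bigrams, while σ-BR2 keeps the dominant character as a unigram feature, mirroring the trade-off the abstract describes between discrimination strength and content representation.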



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Dejun Xue¹
  • Maosong Sun¹

  1. National Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing, China
