Raising High-Degree Overlapped Character Bigrams into Trigrams for Dimensionality Reduction in Chinese Text Categorization

  • Dejun Xue
  • Maosong Sun
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2945)

Abstract

The high dimensionality of the feature space is a crucial obstacle to Automated Text Categorization. Drawing on the characteristics of Chinese character N-grams, this paper reveals a kind of redundancy in the feature space that arises from feature overlapping. Focusing on Chinese character bigrams, the paper introduces the concept of δ-overlapping between two bigrams and proposes a new dimensionality reduction method, called δ-Overlapped Raising (δOR), which raises δ-overlapped bigrams into their corresponding trigrams. Moreover, the paper designs a two-stage dimensionality reduction strategy for Chinese bigrams that integrates a filtering method based on a Chi-CIG score function with the δOR method. Experimental results on a large-scale Chinese document collection indicate that, building on the first-stage reduction, δOR in the second stage can significantly reduce the dimensionality of the feature space without sacrificing categorization effectiveness. We believe that the above methodology would be language-independent.
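
The abstract gives only the idea of δ-Overlapped Raising, not its formal definition, so the following is a minimal sketch under stated assumptions: two bigrams that share a character (e.g. "AB" and "BC") are treated as δ-overlapped when the joining trigram "ABC" covers at least a fraction δ of the occurrences of the rarer bigram, and the pair is then raised into that single trigram feature. The function names and the overlap measure are illustrative assumptions, not the paper's actual formulation; the Chi-CIG filtering stage is omitted.

```python
from collections import Counter

def extract_ngrams(text, n):
    """Collect character n-gram frequencies from a text string."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def delta_overlap_raising(text, delta=0.8):
    """Illustrative sketch of delta-Overlapped Raising (assumed overlap measure).

    Bigrams "AB" and "BC" overlap in the shared character "B". Their overlap
    degree is approximated here as freq("ABC") / min(freq("AB"), freq("BC"));
    when it reaches delta, both bigrams are removed from the feature set and
    replaced by the trigram "ABC".
    """
    bigrams = extract_ngrams(text, 2)
    trigrams = extract_ngrams(text, 3)
    features = dict(bigrams)                      # initial bigram feature set

    for tri, tri_freq in trigrams.items():
        left, right = tri[:2], tri[1:]            # the two overlapped bigrams
        if left in features and right in features:
            degree = tri_freq / min(bigrams[left], bigrams[right])
            if degree >= delta:
                # raise the overlapped bigram pair into one trigram feature
                features.pop(left, None)
                features.pop(right, None)
                features[tri] = tri_freq

    return features

if __name__ == "__main__":
    # "经济发展" occurs twice, so bigrams "经济" and "济发" always co-occur as
    # the trigram "经济发" and are raised into it (two features become one).
    sample = "经济发展迅速，经济发展带动就业"
    print(delta_overlap_raising(sample, delta=0.8))
```

Each successful raising collapses two overlapped bigram features into a single trigram feature, which is the source of the dimensionality saving claimed in the abstract.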



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Dejun Xue¹
  • Maosong Sun¹

  1. National Key Laboratory of Intelligent Technology and Systems, Department of Computer Science and Technology, Tsinghua University, Beijing, China
