Heterogeneous Information Integration in Hierarchical Text Classification

  • Huai-Yuan Yang
  • Tie-Yan Liu
  • Li Gao
  • Wei-Ying Ma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3918)

Abstract

Previous work has shown that considering the category distance in the taxonomy tree can improve the performance of text classifiers. In this paper, we propose a new approach that integrates further categorical information from the text corpus using the principle of multi-objective programming (MOP). That is, we consider not only the distance between categories defined by the branching of the taxonomy tree, but also the similarity between categories defined by the document/term distributions in the feature space. We then obtain a refined category distance by using MOP to combine these two kinds of information. Experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed algorithm in hierarchical text classification.
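This excerpt does not give the paper's actual MOP formulation, so the following is only a minimal sketch of the general idea: measure category distance once from the taxonomy tree and once from feature-space similarity, then combine the two objectives. The weighted-sum scalarization with parameter `alpha`, the centroid-based similarity, and all function names are illustrative assumptions, not the authors' method.

```python
import numpy as np

def tree_distance(path_a, path_b):
    """Number of tree edges between two categories, given their
    root-to-node paths in the taxonomy, e.g. ("root", "sports", "soccer")."""
    common = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        common += 1
    return (len(path_a) - common) + (len(path_b) - common)

def cosine_similarity(u, v):
    """Cosine similarity between two category representatives
    (e.g. centroids of the categories' document vectors)."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def refined_distance(path_a, path_b, centroid_a, centroid_b, alpha=0.5):
    """Illustrative weighted-sum scalarization of the two objectives:
    taxonomy-tree distance and feature-space dissimilarity."""
    d_tree = tree_distance(path_a, path_b)
    d_feat = 1.0 - cosine_similarity(centroid_a, centroid_b)
    return alpha * d_tree + (1.0 - alpha) * d_feat
```

Here `alpha` trades off the two information sources; a scalarized weighted sum is only one way to resolve a multi-objective problem, chosen for brevity.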

Keywords

Singular Value Decomposition, Classification Performance, Synthetic Dataset, Taxonomy Tree, Text Corpus

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Huai-Yuan Yang (1, 2)
  • Tie-Yan Liu (1)
  • Li Gao (2)
  • Wei-Ying Ma (1)
  1. 5F Sigma Center, Microsoft Research Asia, Beijing, P.R. China
  2. Department of Scientific & Engineering Computing, School of Mathematical Sciences, Peking University, Beijing, P.R. China