A Probabilistic Clustering-Projection Model for Discrete Data

  • Shipeng Yu
  • Kai Yu
  • Volker Tresp
  • Hans-Peter Kriegel
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3721)

Abstract

For discrete co-occurrence data such as documents and words, computing optimal projections and clustering are two different but related tasks. The goal of projection is to find a low-dimensional latent space for words, while clustering aims at grouping documents based on their feature representations. Projection and clustering are usually studied independently, but both reflect the intrinsic structure of the data and should reinforce each other. In this paper we introduce a probabilistic clustering-projection (PCP) model for discrete data that represents both tasks in a unified framework: clustering is performed in the projected space, and the projection explicitly takes the clustering structure into account. Iterating the two operations turns out to be exactly the variational EM algorithm for Bayesian model inference and is therefore guaranteed to improve the data likelihood. The model is evaluated on two text data sets, both showing very encouraging results.
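
The alternating scheme described above lends itself to a compact illustration: infer document representations in the projected topic space, softly cluster the documents there, then re-estimate the projection using the resulting structure, and repeat. The following Python sketch shows such a loop under simple assumptions; the function name `pcp_sketch`, the inner fixed-point updates, and the specific update rules are illustrative stand-ins, not the paper's actual variational EM equations.

```python
import numpy as np

def pcp_sketch(X, n_topics=10, n_clusters=5, n_iters=50, seed=0):
    """Illustrative alternating clustering-projection loop.

    X: (n_docs, n_words) count matrix. The updates below are simple
    EM-style stand-ins for the PCP model's variational updates.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = X.shape
    # projection: topic-word distributions (rows sum to 1)
    beta = rng.dirichlet(np.ones(n_words), size=n_topics)
    # cluster centers live in the projected (topic) space
    centers = rng.dirichlet(np.ones(n_topics), size=n_clusters)
    pi = np.full(n_clusters, 1.0 / n_clusters)

    for _ in range(n_iters):
        # --- projection step: infer per-document topic proportions ---
        theta = np.full((n_docs, n_topics), 1.0 / n_topics)
        for _ in range(5):  # a few inner fixed-point updates
            # p(topic | doc, word) is proportional to theta[d,k] * beta[k,w]
            phi = theta[:, :, None] * beta[None, :, :]          # (D, K, W)
            phi /= phi.sum(axis=1, keepdims=True) + 1e-12
            theta = (X[:, None, :] * phi).sum(axis=2)           # expected counts
            theta /= theta.sum(axis=1, keepdims=True) + 1e-12

        # --- clustering step: soft-assign documents in topic space ---
        logp = theta @ np.log(centers.T + 1e-12) + np.log(pi + 1e-12)
        r = np.exp(logp - logp.max(axis=1, keepdims=True))      # (D, C)
        r /= r.sum(axis=1, keepdims=True)

        # --- M-steps: update clusters, then the projection itself ---
        pi = r.mean(axis=0)
        centers = (r.T @ theta) / (r.sum(axis=0)[:, None] + 1e-12)
        beta = (phi * X[:, None, :]).sum(axis=0)                # (K, W)
        beta /= beta.sum(axis=1, keepdims=True) + 1e-12

    return theta, r, beta
```

On a small count matrix, e.g. `X = np.random.default_rng(0).poisson(0.5, size=(100, 200))`, the loop returns per-document topic proportions, soft cluster assignments, and topic-word distributions, mirroring the abstract's claim that clustering happens in the projected space while the projection is refit to the clustering structure.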

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Shipeng Yu (1, 2)
  • Kai Yu (2)
  • Volker Tresp (2)
  • Hans-Peter Kriegel (1)

  1. Institute for Computer Science, University of Munich, Germany
  2. Siemens Corporate Technology, Munich, Germany
