
Topic Optimization Method Based on Pointwise Mutual Information

  • Yuxin Ding
  • Shengli Yan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9491)

Abstract

The Latent Dirichlet Allocation (LDA) model is biased toward drawing high-frequency words to describe topics, which reduces the accuracy of topic representation. To address this issue, we use pointwise mutual information (PMI) to estimate the internal correlation between words and documents and propose a PMI-based LDA model, which draws the words of a topic according to this mutual information. We also propose three measures for evaluating topic quality: readability, topic consistency, and topic similarity. Experimental results show that the topics generated by the proposed model are of higher quality than those generated by the standard LDA model.
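The abstract does not reproduce the modified sampling procedure, so the following is only a minimal sketch of the word-document PMI quantity the model relies on (the function name pmi_matrix, the count-matrix input, and the smoothing constant eps are illustrative assumptions, not the authors' code):

```python
import numpy as np

def pmi_matrix(counts, eps=1e-12):
    """Pointwise mutual information between every (word, document) pair.

    counts : (V, D) array where counts[w, d] is the number of times
             word w occurs in document d.
    Returns a (V, D) array with PMI(w, d) = log(p(w, d) / (p(w) * p(d))).
    """
    counts = np.asarray(counts, dtype=float)
    p_wd = counts / counts.sum()           # joint probability p(w, d)
    p_w = p_wd.sum(axis=1, keepdims=True)  # word marginal p(w)
    p_d = p_wd.sum(axis=0, keepdims=True)  # document marginal p(d)
    # eps guards against log(0) for words absent from a document
    return np.log((p_wd + eps) / (p_w * p_d + eps))

# Example: word 0 is concentrated in document 0, so PMI(0, 0) is high
# even though word 1 is more frequent across the whole corpus.
counts = np.array([[3, 0, 0],
                   [2, 4, 4]])
print(pmi_matrix(counts))
```

A sampler biased by such scores would favor words strongly associated with a topic's documents over words that are merely frequent corpus-wide, which is the bias the abstract describes correcting.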

Keywords

Latent Dirichlet allocation · Mutual information · Topic model

Notes

Acknowledgments

This work was partially supported by the Scientific Research Foundation of Shenzhen (Grant No. JCYJ20140627163809422), the Scientific Research Innovation Foundation of Harbin Institute of Technology (Project No. HIT.NSRIF2010123), the State Key Laboratory of Computer Architecture, Chinese Academy of Sciences, and the Key Laboratory of Network Oriented Intelligent Computation (Shenzhen).


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Computer Science and Technology, Key Laboratory of Network Oriented Intelligent Computation, Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China
