On Finding the Natural Number of Topics with Latent Dirichlet Allocation: Some Observations

  • Conference paper

Part of the Lecture Notes in Computer Science book series (LNAI, volume 6118)

Abstract

It is important to identify the "correct" number of topics in mechanisms like Latent Dirichlet Allocation (LDA), as it determines the quality of the features that are presented to classifiers like SVM. In this work we propose a measure to identify the correct number of topics and offer empirical evidence in its favor, in terms of both classification accuracy and the number of topics that are naturally present in the corpus. We show the merit of the measure by applying it to real-world as well as synthetic data sets (both text and images). In proposing this measure, we view LDA as a matrix factorization mechanism, wherein a given corpus C is split into two matrix factors M1 and M2, as given by C_{d×w} = M1_{d×t} × M2_{t×w}, where d is the number of documents in the corpus and w is the size of the vocabulary. The quality of the split depends on t, the number of topics chosen. The measure is computed in terms of the symmetric KL-divergence of salient distributions derived from these matrix factors. We observe that the divergence values are higher for non-optimal numbers of topics; the right value of t is indicated by a 'dip' in the divergence.
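As a rough illustration of the idea in the abstract (this is not the authors' code; the exact choice of "salient distributions" is an assumption based on the notation above, and all function names are our own), the measure could be sketched in Python as: compare the normalized singular values of the topic-word factor M2 against the length-weighted topic proportions obtained from the document-topic factor M1, and scan t for the dip.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric KL divergence KL(p||q) + KL(q||p) between two
    discrete distributions (smoothed and renormalized)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def natural_topic_divergence(m1, m2, doc_lengths):
    """Divergence score for one candidate t (hypothetical sketch).

    m1 : (d, t) document-topic factor
    m2 : (t, w) topic-word factor, assuming t <= w
    doc_lengths : (d,) word counts per document
    """
    # Distribution 1: singular values of the topic-word factor,
    # already returned in descending order by numpy.
    cm1 = np.linalg.svd(m2, compute_uv=False)
    cm1 = cm1 / cm1.sum()
    # Distribution 2: topic mass accumulated over the corpus,
    # weighting each document's topic mixture by its length.
    cm2 = np.asarray(doc_lengths, dtype=float) @ m1
    cm2 = np.sort(cm2)[::-1]
    cm2 = cm2 / cm2.sum()
    return symmetric_kl(cm1, cm2)
```

In use, one would fit LDA for a range of candidate t values, evaluate `natural_topic_divergence` on each pair of factors, and pick the t where the curve dips.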

Keywords

  • LDA
  • Topic
  • SVD
  • KL-Divergence

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Arun, R., Suresh, V., Veni Madhavan, C.E., Narasimha Murthy, M.N. (2010). On Finding the Natural Number of Topics with Latent Dirichlet Allocation: Some Observations. In: Zaki, M.J., Yu, J.X., Ravindran, B., Pudi, V. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2010. Lecture Notes in Computer Science, vol. 6118. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13657-3_43

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-13656-6

  • Online ISBN: 978-3-642-13657-3
