On Finding the Natural Number of Topics with Latent Dirichlet Allocation: Some Observations
Abstract
It is important to identify the “correct” number of topics in mechanisms like Latent Dirichlet Allocation (LDA), as it determines the quality of the features presented to classifiers like SVM. In this work we propose a measure to identify the correct number of topics and offer empirical evidence in its favor in terms of classification accuracy and the number of topics that are naturally present in the corpus. We show the merit of the measure by applying it to real-world as well as synthetic data sets (both text and images). In proposing this measure, we view LDA as a matrix factorization mechanism, wherein a given corpus $C$ is split into two matrix factors $M1$ and $M2$ as given by $C_{d \times w} = M1_{d \times t} \times M2_{t \times w}$, where $d$ is the number of documents in the corpus and $w$ is the size of the vocabulary. The quality of the split depends on $t$, the number of topics chosen. The measure is computed in terms of the symmetric KL-divergence of salient distributions derived from these matrix factors. We observe that the divergence values are higher for non-optimal numbers of topics; this appears as a ‘dip’ at the right value of $t$.
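The abstract does not spell out the “salient distributions”; one reading consistent with the paper’s construction takes them to be the (normalized, sorted) singular-value spectrum of the topic-word factor $M2$ and the document-length-weighted topic proportions derived from the document-topic factor $M1$. The sketch below follows that assumption; the use of scikit-learn’s `LatentDirichletAllocation` and the helper name `symmetric_kl_for_t` are illustrative choices, not the authors’ implementation.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.decomposition import LatentDirichletAllocation


def symmetric_kl_for_t(C, t, random_state=0):
    """Fit LDA with t topics and score the factorization C ~ M1 x M2.

    C is a (d x w) document-term count matrix. The assumed salient
    distributions are (1) the singular values of the topic-word factor
    and (2) the document-length-weighted topic proportions.
    """
    lda = LatentDirichletAllocation(n_components=t, random_state=random_state)
    M1 = lda.fit_transform(C)   # (d x t) document-topic proportions
    M2 = lda.components_        # (t x w) topic-word weights

    # Distribution 1: singular values of M2, normalized and sorted.
    sv = np.linalg.svd(M2, compute_uv=False)
    cm1 = np.sort(sv / sv.sum())[::-1]

    # Distribution 2: topic proportions weighted by document lengths.
    lengths = np.asarray(C.sum(axis=1)).ravel()         # per-document lengths
    cm2 = lengths @ (M1 / M1.sum(axis=1, keepdims=True))
    cm2 = np.sort(cm2 / cm2.sum())[::-1]

    # Symmetric KL-divergence: KL(cm1 || cm2) + KL(cm2 || cm1).
    return entropy(cm1, cm2) + entropy(cm2, cm1)


# Sweep candidate topic counts; the 'natural' t shows up as a dip.
# scores = {t: symmetric_kl_for_t(C, t) for t in range(5, 101, 5)}
```

Plotting the score against $t$ and locating the minimum reproduces the ‘dip’ behavior the abstract describes, under the assumptions stated above.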
Keywords
LDA, Topic, SVD, KL-Divergence