Information Retrieval

Volume 14, Issue 2, pp 178–203

Investigating task performance of probabilistic topic models: an empirical study of PLSA and LDA

Authors

  • Yue Lu
    • Department of Computer Science, University of Illinois at Urbana-Champaign
  • Qiaozhu Mei
    • School of Information, University of Michigan
  • ChengXiang Zhai
    • Department of Computer Science, University of Illinois at Urbana-Champaign

DOI: 10.1007/s10791-010-9141-9

Cite this article as:
Lu, Y., Mei, Q. & Zhai, C. Inf Retrieval (2011) 14: 178. doi:10.1007/s10791-010-9141-9

Abstract

Probabilistic topic models have recently attracted much attention because of their successful applications in many text mining tasks such as retrieval, summarization, categorization, and clustering. Although many existing studies have reported promising performance of these topic models, none of this work has systematically investigated their task performance; as a result, several critical questions that affect all applications of topic models remain largely unanswered: how to choose between competing models, how multiple local maxima affect task performance, and how to set parameters in topic models. In this paper, we address these questions through a systematic investigation of two representative probabilistic topic models, probabilistic latent semantic analysis (PLSA) and latent Dirichlet allocation (LDA), on three representative text mining tasks: document clustering, text categorization, and ad hoc retrieval. The analysis of our experimental results provides a deeper understanding of topic models and many useful insights into how to optimize their performance for these typical tasks. The task-based evaluation framework generalizes to other topic models in the PLSA or LDA family.
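
For orientation, the essential modeling difference between the two compared models can be summarized as follows (these are the standard textbook formulations, stated here as background rather than reproduced from the paper): PLSA treats each document's topic mixture p(z|d) as a set of free parameters, so the word likelihood for a document d over K topics is

    p(w \mid d) = \sum_{z=1}^{K} p(w \mid z)\, p(z \mid d),

whereas LDA places a Dirichlet prior on the per-document topic proportions \theta_d and integrates them out:

    \theta_d \sim \mathrm{Dirichlet}(\alpha), \qquad
    p(w \mid d) = \int \Big( \sum_{z=1}^{K} p(w \mid z)\, \theta_{d,z} \Big)\, p(\theta_d \mid \alpha)\, d\theta_d.

The Dirichlet prior is what makes LDA a fully generative model for unseen documents, while PLSA's document-specific parameters must be re-estimated (e.g., by "folding in") at test time.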

Keywords

Evaluation · Topic models · LDA · PLSA · Experimentation · Performance

Copyright information

© Springer Science+Business Media, LLC 2010