
Boost Multi-class sLDA Model for Text Classification

  • Maciej Jankowski
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10841)

Abstract

Text classification is an important problem in Natural Language Processing. It differs from many other classification tasks in the large number of features that must be handled during training. One solution for reducing the dimensionality of the feature space is Latent Dirichlet Allocation (LDA); after this step, the smaller problem can be solved with standard classifiers. In [11], the authors propose a combination of LDA and a softmax classifier, called Multi-class sLDA, that performs both tasks simultaneously. However, to use the method, we have to choose the number of topics, a hyperparameter of the model, and this choice requires analysis and human supervision. In this paper, we propose the Boost Multi-class sLDA model, based on an ensemble of many Multi-class sLDA models, which does not require choosing the number of topics. Moreover, our model achieves significantly better classification accuracy than Multi-class sLDA for any number of topics.
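The ensemble idea described in the abstract can be sketched as a SAMME-style multi-class AdaBoost [2] whose weak learners each pair topic-based dimensionality reduction with a softmax classifier, cycling through different topic counts so that no single number of topics has to be fixed in advance. The Python sketch below is an illustration of that idea only, not the paper's algorithm: it substitutes scikit-learn's LatentDirichletAllocation followed by logistic regression for the jointly trained Multi-class sLDA base model, and the names boost_lda_softmax, ensemble_predict, topic_grid, and rounds are assumptions introduced here.

# Minimal sketch, assuming a SAMME multi-class AdaBoost over weak learners
# that pair LDA with logistic regression. This separately fitted pipeline is
# a stand-in for Multi-class sLDA, which trains topics and classifier jointly.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def boost_lda_softmax(X, y, topic_grid=(5, 10, 20, 40), rounds=8, seed=0):
    """Fit a SAMME ensemble, cycling through topic counts so that no
    single number of topics has to be chosen in advance."""
    rng = np.random.default_rng(seed)
    n, n_classes = X.shape[0], len(np.unique(y))
    w = np.full(n, 1.0 / n)                  # per-document boosting weights
    models, alphas = [], []
    for r in range(rounds):
        k = topic_grid[r % len(topic_grid)]  # topic count for this round
        # LDA has no sample_weight support, so resample documents by weight
        idx = rng.choice(n, size=n, p=w)
        clf = make_pipeline(
            LatentDirichletAllocation(n_components=k, random_state=seed + r),
            LogisticRegression(max_iter=1000),
        ).fit(X[idx], y[idx])
        miss = clf.predict(X) != y
        err = float(np.clip(w[miss].sum(), 1e-10, 1 - 1e-10))
        if err >= 1.0 - 1.0 / n_classes:     # no better than chance: skip
            continue
        alpha = np.log((1.0 - err) / err) + np.log(n_classes - 1.0)
        w *= np.exp(alpha * miss)            # up-weight misclassified docs
        w /= w.sum()
        models.append(clf)
        alphas.append(alpha)
    return models, alphas

def ensemble_predict(models, alphas, X, classes):
    """Weighted vote of the weak learners (classes must be sorted)."""
    votes = np.zeros((X.shape[0], len(classes)))
    for m, a in zip(models, alphas):
        votes[np.arange(X.shape[0]), np.searchsorted(classes, m.predict(X))] += a
    return classes[votes.argmax(axis=1)]

Given a bag-of-words count matrix X_train and labels y_train, models, alphas = boost_lda_softmax(X_train, y_train) trains the ensemble, and ensemble_predict(models, alphas, X_test, np.unique(y_train)) returns its weighted vote; when n_classes equals two, the SAMME weight reduces to the standard AdaBoost learner weight.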

References

  1. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 39(1), 1–38 (1977)
  2. Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55, 119–139 (1997)
  3. Rubin, D.: Bayesianly justifiable and relevant frequency calculations for the applied statistician. Ann. Stat. 12(4), 1151–1172 (1984)
  4. Hastie, T., Tibshirani, R., Friedman, J.: The Elements of Statistical Learning. Springer, New York (2001). https://doi.org/10.1007/978-0-387-84858-7
  5. Blei, D., Ng, A., Jordan, M.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)
  6. Griffiths, T., Steyvers, M.: Finding scientific topics. Proc. Natl. Acad. Sci. 101, 5228–5235 (2004). https://doi.org/10.1073/pnas.0307752101
  7. Bishop, C.M.: Pattern Recognition and Machine Learning (Information Science and Statistics). Springer, New York (2006)
  8. Teh, Y.W., Jordan, M.I., Beal, M.J., Blei, D.M.: Hierarchical Dirichlet processes. J. Am. Stat. Assoc. 101(476), 1566–1581 (2006)
  9. Mcauliffe, J.D., Blei, D.M.: Supervised topic models. In: Advances in Neural Information Processing Systems (2008)
  10. Cao, J., Xia, T., Li, J., Zhang, Y., Tang, S.: A density-based method for adaptive LDA model selection. Neurocomputing 72(7–9), 1775–1781 (2008). 16th European Symposium on Artificial Neural Networks
  11. Wang, C., Blei, D., Fei-Fei, L.: Simultaneous image classification and annotation. In: Computer Vision and Pattern Recognition (2009)
  12. Arun, R., Suresh, V., Veni Madhavan, C.E., Narasimha Murthy, M.N.: On finding the natural number of topics with latent Dirichlet allocation: some observations. In: Zaki, M.J., Yu, J.X., Ravindran, B., Pudi, V. (eds.) PAKDD 2010. LNCS (LNAI), vol. 6118, pp. 391–402. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-13657-3_43
  13. Mimno, D., Blei, D.: Bayesian checking for topic models. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics (2011)
  14. Almeida, T.A., Gomez Hidalgo, J.M., Yamakami, A.: Contributions to the study of SMS spam filtering: new collection and results. In: Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG 2011), Mountain View, CA, USA (2011)
  15. Deveaud, R., Sanjuan, E., Bellot, P.: Accurate and effective latent concept modeling for ad hoc information retrieval. Revue des Sciences et Technologies de l'Information - Série Document Numérique, Lavoisier, 61–84 (2014)
  16. Chang, J.: lda: Collapsed Gibbs Sampling Methods for Topic Models. R package version 1.4.2 (2015). https://CRAN.R-project.org/package=lda
  17. Blei, D., Kucukelbir, A., McAuliffe, J.: Variational inference: a review for statisticians. J. Am. Stat. Assoc. 112(518), 859–877 (2017)

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Faculty of Cybernetics, Military University of Technology in Warsaw, Warsaw, Poland
