
Sequential Monte Carlo Inference Based on Activities for Overlapping Community Models

  • Shohei Sakamoto
  • Koji Eguchi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11406)

Abstract

Various kinds of data, such as social media data, can be represented as networks or graphs. Latent variable models estimated with Bayesian statistical inference are powerful tools for representing such networks. One such latent variable network model is the Mixed Membership Stochastic Blockmodel (MMSB), which can discover overlapping communities in a network and has high predictive power. Previous inference methods estimate the latent variables and unknown parameters of the MMSB from the whole observed network, so dynamic changes in network structure over time are hard to track. We therefore first present an incremental Gibbs sampler for online sequential estimation of the MMSB that, based on node activities, uses only the observations within a fixed term length. We further present a particle filter, also based on node activities, in which the particles use different term lengths: in an e-mail communication network, for instance, each particle considers only the e-mail accounts that sent or received a message within its own term length, which may differ from the term lengths of the other particles. Experiments on two link prediction datasets show that the proposed methods achieve both high prediction performance and computational efficiency.
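Since the abstract describes the proposed particle filter only at a high level, the following is a minimal, schematic Python sketch of the general idea: each particle keeps MMSB-style sufficient statistics only for nodes that were active within its own term length, updates an importance weight from the predictive probability of each newly observed edge, and resamples when the effective sample size degenerates. This is not the authors' implementation; the class `Particle`, the function `particle_filter`, the diagonal-block link probability, the hyperparameters, and the resampling threshold are all illustrative assumptions.

```python
import copy
import numpy as np

rng = np.random.default_rng(0)


class Particle:
    """One particle: MMSB-like statistics restricted to its own activity window (illustrative)."""

    def __init__(self, term_length, n_communities, n_nodes, alpha=0.1):
        self.term_length = term_length            # activity window, in time steps (assumed)
        self.K = n_communities
        self.alpha = alpha                        # symmetric Dirichlet prior on memberships
        self.last_active = np.full(n_nodes, -np.inf)
        self.counts = np.full((n_nodes, n_communities), alpha)  # per-node community counts
        self.weight = 1.0

    def membership(self, node):
        c = self.counts[node]
        return c / c.sum()

    def link_prob(self, src, dst, eps=1e-3):
        # Crude predictive probability of a link, assuming a diagonal block matrix:
        # nodes connect mainly when they choose the same community.
        return float(self.membership(src) @ self.membership(dst) * (1.0 - eps) + eps)

    def update(self, src, dst, t):
        # Sequential importance weight: predictive likelihood of the newly observed edge.
        self.weight *= self.link_prob(src, dst)
        # Forget statistics of nodes that have been inactive longer than this
        # particle's term length (the activity-based restriction).
        stale = (t - self.last_active) > self.term_length
        self.counts[stale] = self.alpha
        self.last_active[[src, dst]] = t
        # Incremental Gibbs-style draw of one community indicator per endpoint.
        self.counts[src, rng.choice(self.K, p=self.membership(src))] += 1
        self.counts[dst, rng.choice(self.K, p=self.membership(dst))] += 1


def particle_filter(edges, n_nodes, term_lengths, n_communities=5):
    """Process a stream of (src, dst) edges; each particle uses its own term length."""
    particles = [Particle(L, n_communities, n_nodes) for L in term_lengths]
    for t, (src, dst) in enumerate(edges):
        for p in particles:
            p.update(src, dst, t)
        w = np.array([p.weight for p in particles])
        w /= w.sum()
        for p, wi in zip(particles, w):
            p.weight = wi                          # keep weights normalized
        # Resample when the effective sample size degenerates (assumed threshold N/2).
        if 1.0 / np.sum(w ** 2) < len(particles) / 2:
            idx = rng.choice(len(particles), size=len(particles), p=w)
            particles = [copy.deepcopy(particles[i]) for i in idx]
            for p in particles:
                p.weight = 1.0 / len(particles)
    return particles


if __name__ == "__main__":
    # Tiny synthetic stream: two communities of 5 nodes each, links only within.
    edges = [(rng.integers(0, 5), rng.integers(0, 5)) for _ in range(200)]
    edges += [(5 + rng.integers(0, 5), 5 + rng.integers(0, 5)) for _ in range(200)]
    rng.shuffle(edges)
    particles = particle_filter(edges, n_nodes=10, term_lengths=[50, 100, 200, 400])
    print([p.term_length for p in particles])
```

The toy run at the end only illustrates the interface: printing the surviving term lengths after resampling hints at which activity windows explain the stream best under these assumed settings.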

Acknowledgments

This work was supported in part by the Grant-in-Aid for Scientific Research (#15H02703) from JSPS, Japan.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Kobe University, Kobe, Japan
  2. Hiroshima University, Higashi-Hiroshima, Japan
