AAANE: Attention-Based Adversarial Autoencoder for Multi-scale Network Embedding

  • Lei Sang
  • Min Xu
  • Shengsheng Qian
  • Xindong Wu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11441)


Network embedding represents nodes in a continuous vector space while preserving structural information from a network. Existing methods usually adopt a "one-size-fits-all" approach to multi-scale structure information, such as first- and second-order proximity of nodes, ignoring the fact that different scales play different roles in embedding learning. In this paper, we propose an Attention-based Adversarial Autoencoder Network Embedding (AAANE) framework, which promotes collaboration among different scales and lets them vote for robust representations. The proposed AAANE consists of two components: (1) an attention-based autoencoder that effectively captures the highly non-linear network structure and de-emphasizes irrelevant scales during training, and (2) an adversarial regularization that guides the autoencoder in learning robust representations by matching the posterior distribution of the latent embeddings to a given prior distribution. Experimental results on real-world networks show that the proposed approach outperforms strong baselines.


Keywords: Network embedding · Multi-scale · Attention · Adversarial autoencoder
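The attention mechanism described in the abstract weights per-scale node embeddings so that more informative proximity scales dominate the combined representation. A minimal sketch of such attention-weighted aggregation is shown below; the context vector `query` and the function `attention_combine` are hypothetical illustrations, not the paper's actual parameterization:

```python
import numpy as np

def attention_combine(scale_embeddings, query):
    """Combine per-scale embeddings with softmax attention weights.

    scale_embeddings: (k, d) array, one d-dim embedding per proximity scale.
    query: (d,) context vector (in the paper this would be learned).
    Returns the (d,) combined embedding and the (k,) attention weights.
    """
    scores = scale_embeddings @ query        # relevance score per scale
    scores -= scores.max()                   # shift for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over scales
    combined = weights @ scale_embeddings    # weighted sum of scale embeddings
    return combined, weights

# Toy example: 3 proximity scales, 4-dimensional embeddings.
rng = np.random.default_rng(0)
E = rng.normal(size=(3, 4))
q = rng.normal(size=4)
z, w = attention_combine(E, q)
assert np.isclose(w.sum(), 1.0) and z.shape == (4,)
```

Scales receiving low attention weights contribute little to the combined embedding, which matches the abstract's goal of de-emphasizing irrelevant scales; the adversarial component would then regularize the latent `z` toward a chosen prior.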



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Lei Sang (1, 2)
  • Min Xu (2, corresponding author)
  • Shengsheng Qian (3)
  • Xindong Wu (1)
  1. School of Computer Science and Information Technology, Hefei University of Technology, Hefei, China
  2. Faculty of Engineering and IT, University of Technology Sydney, Ultimo, Australia
  3. Institute of Automation, Chinese Academy of Sciences, Beijing, China
