A Graph Representation Learning Algorithm Based on Attention Mechanism and Node Similarity
Graph representation learning has recently attracted considerable attention from researchers; it aims to capture and preserve graph structure by encoding it into low-dimensional vectors. The attention mechanism has become a research hotspot in graph representation learning. In this paper, a graph representation learning algorithm based on the Attention Mechanism and Node Similarity (AMNS for short) is proposed. First, a similarity neighborhood is generated for each node in the graph. Second, an attention mechanism is used to learn a weight coefficient for each node in a node's similarity neighborhood. Third, each node's vector is generated by aggregating its similarity neighborhood with the learned weight coefficients. Finally, the node vectors are applied to downstream tasks such as node classification and clustering. Experiments on real-world network datasets show that the AMNS algorithm achieves excellent results.
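The three steps above can be sketched in a minimal form. This is an illustrative assumption, not the paper's exact method: it assumes the similarity neighborhood is built with the Jaccard coefficient over neighbor sets, and that attention weights come from a softmax over dot-product scores; the function names (`jaccard`, `similarity_neighborhood`, `aggregate`) and the top-`k` cutoff are hypothetical.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard coefficient of two neighbor sets (assumed similarity measure)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity_neighborhood(adj, node, k=3):
    """Step 1 (sketch): top-k nodes most similar to `node` by Jaccard."""
    n = len(adj)
    nbrs = {i: {j for j in range(n) if adj[i][j]} for i in range(n)}
    scores = [(jaccard(nbrs[node], nbrs[v]), v) for v in range(n) if v != node]
    return [v for s, v in sorted(scores, reverse=True)[:k] if s > 0]

def aggregate(features, node, neighborhood):
    """Steps 2-3 (sketch): softmax attention weights over dot-product
    scores, then a weighted sum of the similarity neighborhood's features."""
    h = features[node]
    scores = np.array([h @ features[v] for v in neighborhood])
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return sum(wi * features[v] for wi, v in zip(w, neighborhood))
```

For example, on a 4-node graph, `similarity_neighborhood` returns the nodes whose neighbor sets overlap most with the target's, and `aggregate` returns a convex combination of their feature vectors.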
Keywords: Graph representation learning · Attention mechanism · Node similarity
This work is partly supported by the National Natural Science Foundation of China under Grant No. 61300104, No. 61300103 and No. 61672159, the Fujian Province High School Science Fund for Distinguished Young Scholars under Grant No. JA12016, the Fujian Natural Science Funds for Distinguished Young Scholar under Grant No. 2015J06014, the Fujian Industry-Academy Cooperation Project under Grant No. 2017H6008 and No. 2018H6010, and Haixi Government Big Data Application Cooperative Innovation Center.