Abstract
Graph contrastive learning has emerged as a pivotal approach to graph representation learning, with the primary objective of maximizing mutual information between augmented graph pairs that share similar semantics. However, existing unsupervised graph contrastive learning approaches face a notable limitation in capturing both structural and semantic global information. This issue poses a substantial challenge, as nodes in close topological proximity do not consistently possess similar features. To tackle this issue, this study introduces a simple framework for Distillation Node and Prototype Graph Contrastive Learning (DNPGCL). The framework enables contrastive learning by harnessing similarity-based knowledge distillation to capture more informative structural and semantic global signals. Experimental results demonstrate that DNPGCL outperforms existing unsupervised learning methods across a range of diverse graph datasets.
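The abstract combines two ingredients: a mutual-information-maximizing contrastive objective over augmented views, and a distillation signal that injects global structural and semantic knowledge via prototypes. The sketch below is a minimal, hypothetical illustration of how such a combined objective is commonly assembled in PyTorch; the function names, the InfoNCE form, the prototype-assignment distillation term, and all hyperparameters (`tau`, the number of prototypes) are assumptions for exposition, not the paper's actual DNPGCL implementation.

```python
import torch
import torch.nn.functional as F

def node_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """Generic InfoNCE between two augmented views: node i in view 1
    treats node i in view 2 as its positive and all other nodes as negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # (N, N) cosine-similarity logits
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def prototype_distillation_loss(z: torch.Tensor, teacher_probs: torch.Tensor,
                                prototypes: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """KL-style distillation: push the student's soft prototype assignments
    toward the (detached) teacher's assignments, carrying global semantics."""
    z = F.normalize(z, dim=1)
    p = F.normalize(prototypes, dim=1)
    student_log_probs = F.log_softmax(z @ p.t() / tau, dim=1)   # (N, K)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

# Toy usage with random stand-ins for GNN embeddings and teacher outputs.
N, D, K = 8, 16, 4                              # nodes, embedding dim, prototypes
z1, z2 = torch.randn(N, D), torch.randn(N, D)   # two augmented views of the same nodes
prototypes = torch.randn(K, D)                  # learnable cluster centers in practice
teacher_probs = F.softmax(torch.randn(N, K), dim=1).detach()
loss = node_contrastive_loss(z1, z2) + prototype_distillation_loss(z1, teacher_probs, prototypes)
print(loss.item())
```

In a full pipeline the teacher would typically be a momentum (EMA) copy of the encoder and the prototypes would be learned jointly with the encoder, but those design choices are beyond what the abstract specifies.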
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grant No. U1936213, the Program of Shanghai Academic Research Leader No. 21XD1421500, and Shanghai Science and Technology Commission Project No. 20020500600.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Wen, M., Wang, H., Xue, Y., Wu, Y., Wen, H. (2024). Improving Structural and Semantic Global Knowledge in Graph Contrastive Learning with Distillation. In: Yang, D.-N., Xie, X., Tseng, V.S., Pei, J., Huang, J.-W., Lin, J.C.-W. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2024. Lecture Notes in Computer Science, vol 14646. Springer, Singapore. https://doi.org/10.1007/978-981-97-2253-2_29
DOI: https://doi.org/10.1007/978-981-97-2253-2_29
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-2252-5
Online ISBN: 978-981-97-2253-2
eBook Packages: Computer Science, Computer Science (R0)