
Neighborhood-enhanced contrast for pre-training graph neural networks

  • Original Article
  • Neural Computing and Applications

Abstract

Pre-training graph neural networks (GNNs) has been proposed to improve graph-related downstream tasks such as link prediction and node classification. Most existing works employ contrastive learning to explore graph characteristics, enforcing positive sample pairs to be close and negative sample pairs to be distant after performing data augmentation on the input graph. However, these methods apply random operations during data augmentation and sample-pair construction, which neglects central nodes and the neighborhood relationships between nodes. To address this problem, we propose a novel framework for pre-training GNNs, named Neighborhood-Enhanced Contrast for Pre-Training Graph Neural Networks (NECPT). Specifically, we propose a data augmentation strategy based on node centrality that preserves central nodes and their incident edges, and use it to generate two semantically similar views of the input graph. Notably, NECPT constructs sample pairs by integrating potential node neighbors in both the graph structure and the semantic space to capture general graph regularities. After node representations are generated with GNN encoders and multilayer perceptrons, contrastive sample pairs are selected from the different kinds of node neighbors, incorporating diverse neighborhood relations into contrastive learning. Finally, the node representations obtained from the model are used to predict the attributes of nodes and edges, which extracts deep semantic connections between attribute and structure information. Extensive experiments on benchmark datasets in biology and chemistry demonstrate the effectiveness of our proposed approach.
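To make the centrality-based augmentation idea concrete, the snippet below is a minimal, hypothetical sketch rather than the authors' implementation: it uses degree centrality as a stand-in importance score, protects the top-ranked nodes and their incident edges, and randomly drops the remaining edges to produce two semantically similar views of the same input graph. The function name `centrality_preserving_view` and the parameters `keep_ratio` and `drop_prob` are illustrative assumptions, not names from the paper.

```python
# Hypothetical sketch of centrality-aware view generation (NOT the authors' code).
# It only illustrates preserving central nodes and their edges while randomly
# dropping other edges when building two augmented views for contrastive pre-training.
import random
import networkx as nx

def centrality_preserving_view(g: nx.Graph, keep_ratio: float = 0.2,
                               drop_prob: float = 0.3) -> nx.Graph:
    """Return an augmented copy of `g` that keeps edges incident to
    high-centrality nodes and randomly drops the remaining edges."""
    centrality = nx.degree_centrality(g)  # cheap proxy for node importance
    k = max(1, int(keep_ratio * g.number_of_nodes()))
    protected = set(sorted(centrality, key=centrality.get, reverse=True)[:k])

    view = g.copy()
    for u, v in list(view.edges()):
        # Never remove an edge touching a protected (central) node;
        # drop the others with probability `drop_prob`.
        if u not in protected and v not in protected and random.random() < drop_prob:
            view.remove_edge(u, v)
    return view

# Two semantically similar views of the same graph, as consumed by a
# contrastive objective that pulls matched nodes together across views.
g = nx.karate_club_graph()
view_a = centrality_preserving_view(g)
view_b = centrality_preserving_view(g)
```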


Data availability

These data were derived from the following resources available in the public domain: https://openreview.net/forum?id=HJlWWJSFDH.


Acknowledgements

This work was supported by the Natural Science Foundation of Guangdong Province, China (No. 2022A1515010148) and the National Natural Science Foundation of China (No. 62177015).

Author information

Corresponding author

Correspondence to Jin Huang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, Y., Huang, J., Yu, W. et al. Neighborhood-enhanced contrast for pre-training graph neural networks. Neural Comput & Applic 36, 4195–4205 (2024). https://doi.org/10.1007/s00521-023-09274-6
