
Incorporating self-attentions into robust spatial-temporal graph representation learning against dynamic graph perturbations

Special Issue Article · Published in Computing (2023)

Abstract

This paper proposes a Robust Spatial-Temporal Graph Neural Network (RSTGNN), which overcomes the limitations of graph-based models under dynamic graph perturbations by using robust spatial-temporal self-attention to learn dynamic graph embeddings. During training, a selective spatial self-attention technique aggregates neighboring information based on projected node similarity: it reduces the attention weights of low-similarity edges, enabling better information aggregation while preventing the model from ignoring useful spatial-temporal information. The temporal self-attention layer strengthens temporal patterns using time-span-limited temporal attention weights. In addition, the model uses a spatial-temporal loss function that penalizes the nodes and edges most likely to have been perturbed, alleviating the influence of dynamic graph perturbations. Specifically, the spatial loss targets attention weights associated with high-degree, potentially attacked nodes, while the temporal loss targets the attention weights of nodes whose centrality varies strongly over time, preventing nodes from experiencing excessive centrality changes. To verify the effectiveness of our approach, we compare RSTGNN with other graph-based models under different node-based and edge-based perturbation rates. Results demonstrate that RSTGNN remains highly effective in dynamic node classification and link prediction on five real-world dynamic graph datasets.
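The paper's full formulation is not reproduced on this page, but the mechanisms the abstract names can be illustrated with a short sketch. The PyTorch code below is a minimal, assumption-laden reading of three ideas: selective spatial self-attention that suppresses low-similarity edges, a time-span-limited temporal attention mask, and a spatial loss term weighted toward high-degree (hence potentially attacked) nodes. The function names, the use of cosine similarity, and the sim_threshold parameter are illustrative choices, not the authors' implementation; hard masking is used here as the simplest way to express "reduces attention weights of edges with less similarity", whereas the paper may use a softer reweighting.

```python
# Minimal sketch of the mechanisms described in the RSTGNN abstract.
# All names (selective_spatial_attention, time_span_mask, spatial_penalty,
# sim_threshold) are illustrative, not the authors' API, and the exact
# formulas in the paper may differ.
import torch
import torch.nn.functional as F


def selective_spatial_attention(x, adj, w_q, w_k, sim_threshold=0.3):
    """Aggregate neighbor features, suppressing low-similarity edges."""
    q, k = x @ w_q, x @ w_k                        # projected node features
    logits = (q @ k.T) / k.shape[1] ** 0.5         # (N, N) attention logits
    # Cosine similarity between projected node features, pairwise.
    sim = F.cosine_similarity(q.unsqueeze(1), k.unsqueeze(0), dim=-1)
    # Attend only along existing edges whose projected similarity is high.
    keep = (adj > 0) & (sim > sim_threshold)
    attn = torch.softmax(logits.masked_fill(~keep, float("-inf")), dim=-1)
    attn = torch.nan_to_num(attn)                  # rows with no kept edges
    return attn, attn @ x


def time_span_mask(num_steps, span):
    """Boolean mask letting each snapshot attend only to the previous
    `span` snapshots (time-span-limited temporal attention)."""
    t = torch.arange(num_steps)
    i, j = t.unsqueeze(1), t.unsqueeze(0)
    return (j <= i) & (i - j < span)


def spatial_penalty(attn, adj):
    """Loss term penalizing attention mass placed on high-degree nodes,
    which the abstract flags as likely attack targets."""
    degree = adj.sum(dim=0)
    weight = degree / degree.max().clamp(min=1.0)  # normalize to [0, 1]
    return (attn * weight.unsqueeze(0)).sum()


# Toy usage on a random 5-node graph with 8-dimensional features.
torch.manual_seed(0)
x = torch.randn(5, 8)
adj = (torch.rand(5, 5) > 0.5).float().fill_diagonal_(1.0)
w_q, w_k = torch.randn(8, 8), torch.randn(8, 8)
attn, out = selective_spatial_attention(x, adj, w_q, w_k)
reg = spatial_penalty(attn, adj)
print(out.shape, time_span_mask(6, span=3).shape, reg.item())
```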


Availability of data and materials

DBLP3, DBLP5, Reddit, Brain, and Epinion are publicly available real-world datasets, and all references used are available.


Acknowledgements

This work was supported by the Chongqing Technology Innovation & Application Development Key Project (cstc2020jscx-dxwtBX0055; cstb2022tiad-kpx0148) and the Fundamental Research Funds for the Central Universities (No. 2022CDJYGRH-001).

Funding

Chongqing Technology Innovation & Application Development Key Project (cstc2020jscx-dxwtBX0055; cstb2022tiad-kpx0148) and the Fundamental Research Funds for the Central Universities (No. 2022CDJYGRH-001).

Author information


Contributions

Zhuo Zeng wrote the paper and conducted the experiments; Chengliang Wang guided the research; Fei Ma improved some of the experiment programs; Xusheng Li and Xinrun Chen provided theoretical support.

Corresponding author

Correspondence to Chengliang Wang.

Ethics declarations

Conflicts of interest

The authors declare no conflict of interest.

Code availability

The RSTGNN model code is publicly available at https://github.com/blackzz133/RSTGNN/tree/master for researchers exploring robust dynamic-graph-based models.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zeng, Z., Wang, C., Ma, F. et al. Incorporating self-attentions into robust spatial-temporal graph representation learning against dynamic graph perturbations. Computing (2023). https://doi.org/10.1007/s00607-023-01235-0

