A Framework for Accelerating Graph Convolutional Networks on Massive Datasets

Conference paper in Computational Data and Social Networks (CSoNet 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13116)

Abstract

In recent years, there has been much interest in Graph Convolutional Networks (GCNs). Training GCNs poses several challenges. In particular, because of the massive scale of the graphs, there is not only a large computation time, but also the need to partition and load data multiple times. This paper presents a framework in which existing GCN methods can be accelerated for execution on large graphs. Building on ideas from meta-learning, we present an optimization strategy. This strategy is applied to three existing frameworks, resulting in new methods that we refer to as GraphSage++, ClusterGCN++, and GraphSaint++. Using graphs with on the order of 100 million edges, we demonstrate that we reduce the overall training time by up to 30%, without a noticeable reduction in F1 scores in most cases.
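This page does not include the paper's algorithmic details. As a rough illustration of the meta-learning idea the abstract builds on (a Reptile-style outer update, in the spirit of Nichol and Schulman's algorithm, applied across graph partitions), here is a hedged sketch; the toy linear model, the partition setup, and all function names are hypothetical stand-ins for an actual GCN and its minibatch training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model standing in for a GCN; the paper's actual
# architecture and loss are not shown on this page.
def loss_grad(w, X, y):
    """Gradient of mean squared error for predictions X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def inner_sgd(w, X, y, steps=20, lr=0.05):
    """Plain gradient descent on one graph partition's data."""
    w = w.copy()
    for _ in range(steps):
        w -= lr * loss_grad(w, X, y)
    return w

def reptile_step(w, partitions, eps=0.5):
    """Reptile-style outer update: adapt to each partition in turn,
    then move the shared weights toward the adapted weights."""
    for X, y in partitions:
        adapted = inner_sgd(w, X, y)
        w = w + eps * (adapted - w)
    return w

# Synthetic "partitions": shared true weights, different node samples.
true_w = np.array([1.0, -2.0, 0.5])
partitions = []
for _ in range(4):
    X = rng.normal(size=(64, 3))
    partitions.append((X, X @ true_w))

w = np.zeros(3)
for _ in range(30):
    w = reptile_step(w, partitions)
print(np.round(w, 2))  # values close to [1.0, -2.0, 0.5]
```

The key design point of the Reptile-style update is that the outer step interpolates between the current weights and the partition-adapted weights, so the shared model improves without ever reloading all partitions into one training pass.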


References

  1. Awan, A.A., Hamidouche, K., Hashmi, J.M., Panda, D.K.: S-caffe: co-designing MPI runtimes and Caffe for scalable deep learning on modern GPU clusters. In: Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, SIGPLAN Notices, vol. 52, no. 8, pp. 193–205, January 2017

  2. Cai, H., Zheng, V.W., Chang, K.C.C.: A comprehensive survey of graph embedding: Problems, techniques and applications. CoRR, abs/1709.07604 (2017)

  3. Ohio Supercomputer Center: Ohio Supercomputer Center (1987)

  4. Chen, J., Zhu, J., Song, L.: Stochastic training of graph convolutional networks with variance reduction. In: ICML, pp. 941–949 (2018)

  5. Chen, J., Ma, T., Xiao, C.: FastGCN: fast learning with graph convolutional networks via importance sampling. In: International Conference on Learning Representations (ICLR) (2018)

  6. Chiang, W.L., Liu, X., Si, S., Li, Y., Bengio, S., Hsieh, C.J.: Cluster-GCN: an efficient algorithm for training deep and large graph convolutional networks. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), July 2019

  7. Choi, D., Passos, A., Shallue, C.J., Dahl, G.E.: Faster neural network training with data echoing. CoRR, abs/1907.05550 (2019)

  8. Fischetti, M., Mandatelli, I., Salvagnin, D.: Faster SGD training by minibatch persistency. CoRR, abs/1806.07353 (2018)

  9. Hamilton, W., Ying, Z., Leskovec, J.: Inductive representation learning on large graphs. In: Advances in Neural Information Processing Systems, vol. 30, pp. 1024–1034 (2017)

  10. Huang, W., Zhang, T., Rong, Y., Huang, J.: Adaptive sampling towards fast graph representation learning. In: Advances in Neural Information Processing Systems, pp. 4558–4567 (2018)

  11. Karypis, G., Kumar, V.: A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput. 20(1), 359–392 (1998)

  12. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: 3rd International Conference for Learning Representations (2015)

  13. Kipf, T.N., Welling, M.: Semi-supervised classification with graph convolutional networks. In: International Conference on Learning Representations (ICLR), abs/1609.02907 (2017)

  14. Nichol, A., Schulman, J.: Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999 (2018)

  15. Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. CoRR, abs/1901.00596 (2019)

  16. Ying, R., He, R., Chen, K., Eksombatchai, P., Hamilton, W.L., Leskovec, J.: Graph convolutional neural networks for web-scale recommender systems. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2018 (2018)

  17. Zeng, H., Zhou, H., Srivastava, A., Kannan, R., Prasanna, V.: Accurate, efficient and scalable graph embedding. In: 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS), May 2019

  18. Zeng, H., Zhou, H., Srivastava, A., Kannan, R., Prasanna, V.: GraphSAINT: graph sampling based inductive learning method. In: International Conference on Learning Representations (ICLR), abs/1907.04931 (2020)


Author information

Correspondence to Xiang Li, Ruoming Jin, Rajiv Ramnath, or Gagan Agrawal.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Li, X., Jin, R., Ramnath, R., Agrawal, G. (2021). A Framework for Accelerating Graph Convolutional Networks on Massive Datasets. In: Mohaisen, D., Jin, R. (eds) Computational Data and Social Networks. CSoNet 2021. Lecture Notes in Computer Science(), vol 13116. Springer, Cham. https://doi.org/10.1007/978-3-030-91434-9_8

  • DOI: https://doi.org/10.1007/978-3-030-91434-9_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-91433-2

  • Online ISBN: 978-3-030-91434-9

  • eBook Packages: Computer Science (R0)
