Accelerating GNN Training on CPU+Multi-FPGA Heterogeneous Platform

  • Conference paper in High Performance Computing (CARLA 2022)

Abstract

Training Graph Neural Networks (GNNs) has become time-consuming as graphs grow larger, and many works have therefore been proposed to accelerate GNN training on multi-GPU platforms. Although GPUs offer high compute power, GNN training on GPUs suffers from low resource utilization. We propose to accelerate GNN training on a CPU+Multi-FPGA heterogeneous platform. Using the customizable hardware resources of the FPGAs, we instantiate multiple hardware kernels with optimized data access patterns and memory organization. These kernels access graph-structured data efficiently and thus achieve high training performance. However, training GNNs on multiple FPGAs also introduces high FPGA-to-FPGA communication overhead and workload imbalance. We develop optimized graph partitioning techniques to minimize FPGA-to-FPGA data communication and a task scheduler to balance the workload among the FPGAs. Compared with a state-of-the-art GNN training implementation on a multi-GPU platform, our design achieves up to 24.7× higher bandwidth efficiency; this efficiency enables up to 3.88× speedup and 7.18× higher energy efficiency while using far less compute power and memory bandwidth than the GPUs.
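The abstract names two system-level optimizations: graph partitioning to minimize FPGA-to-FPGA data communication and a task scheduler to balance the workload across FPGAs. The sketch below only illustrates those two ideas and is not the authors' implementation: the paper's footnote points to METIS for partitioning, whereas this dependency-free Python example substitutes a naive greedy partitioner and a longest-processing-time-first (LPT) scheduler; every function name, toy graph, and workload figure here is hypothetical.

```python
# Illustrative sketch only: a toy stand-in for two ideas from the abstract,
# (1) partition the graph so few edges cross partitions (less FPGA-to-FPGA
#     communication), and
# (2) assign per-partition work to FPGAs so the load is balanced.
# The paper uses METIS for partitioning; a naive greedy heuristic is used here
# so the example stays self-contained. All names and numbers are hypothetical.

from collections import defaultdict


def greedy_partition(edges, num_vertices, num_parts):
    """Visit vertices in index order and place each one in the partition that
    already holds most of its neighbors, subject to a balance cap on size."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    cap = (num_vertices + num_parts - 1) // num_parts  # balanced size limit
    part = [-1] * num_vertices
    sizes = [0] * num_parts
    for v in range(num_vertices):
        # count already-placed neighbors per partition
        gain = [0] * num_parts
        for u in adj[v]:
            if part[u] != -1:
                gain[part[u]] += 1
        # pick the partition with the most neighbors that still has room
        best = max(
            (p for p in range(num_parts) if sizes[p] < cap),
            key=lambda p: gain[p],
        )
        part[v] = best
        sizes[best] += 1
    return part


def cut_edges(edges, part):
    """Edges whose endpoints land on different FPGAs, i.e. communication cost."""
    return sum(1 for u, v in edges if part[u] != part[v])


def schedule_lpt(workloads, num_fpgas):
    """Longest-processing-time-first: assign each partition's estimated work
    to the currently least-loaded FPGA, largest tasks first."""
    loads = [0.0] * num_fpgas
    assignment = {}
    for task, work in sorted(workloads.items(), key=lambda kv: -kv[1]):
        fpga = min(range(num_fpgas), key=lambda f: loads[f])
        assignment[task] = fpga
        loads[fpga] += work
    return assignment, loads


if __name__ == "__main__":
    # toy graph: two triangles joined by a single bridge edge (2, 3)
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
    part = greedy_partition(edges, num_vertices=6, num_parts=2)
    print("partition:", part, "cut edges:", cut_edges(edges, part))

    # hypothetical per-partition work estimates, balanced over 2 FPGAs
    workloads = {"p0": 8.0, "p1": 5.0, "p2": 4.0, "p3": 3.0}
    assignment, loads = schedule_lpt(workloads, num_fpgas=2)
    print("assignment:", assignment, "per-FPGA load:", loads)
```

LPT is a standard greedy heuristic for makespan balancing and stands in for the paper's scheduler only for illustration; the actual partitioning and scheduling techniques are described in the full text.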

Notes

  1. https://github.com/KarypisLab/METIS.
  2. https://docs.python.org/3/library/time.html.

Acknowledgement

This work has been supported by the U.S. National Science Foundation under grant number OAC-2209563.

Author information

Corresponding author

Correspondence to Yi-Chien Lin.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Lin, YC., Zhang, B., Prasanna, V. (2022). Accelerating GNN Training on CPU+Multi-FPGA Heterogeneous Platform. In: Navaux, P., Barrios H., C.J., Osthoff, C., Guerrero, G. (eds) High Performance Computing. CARLA 2022. Communications in Computer and Information Science, vol 1660. Springer, Cham. https://doi.org/10.1007/978-3-031-23821-5_2

  • DOI: https://doi.org/10.1007/978-3-031-23821-5_2

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-23820-8

  • Online ISBN: 978-3-031-23821-5

  • eBook Packages: Computer Science, Computer Science (R0)
