Abstract
In this paper, we investigate the benefits of hardware-aware quantization in the gFADES hardware accelerator targeting Graph Convolutional Networks (GCNs). GCNs are a type of Graph Neural Network (GNN) that combines sparse and dense compute requirements, which are challenging to meet in resource-constrained embedded hardware. The gFADES architecture is optimized for the pruned data representations of the graph structure and features typically present in graph neural networks. It is described in High-Level Synthesis (HLS), which enables efficient design-space exploration of mixed-precision hardware configurations. In this work, the mixed-precision design is embedded in the forward pass of the PyTorch back-propagation training loop to enable run-time hardware-aware training. It uses different data types to represent adjacency, feature, weight, internal, and output values, which allows for fine-grained optimization at the tensor level. The hardware configuration resulting from training reduces precision to a 4-bit data type for all inputs, with little to no degradation in classification accuracy on the Planetoid datasets compared to the original 32-bit floating-point baseline. The optimized hardware design running on an AMD/Xilinx Zynq UltraScale+ FPGA device achieves over \(600\times\) speedup compared to the optimized PyTorch software implementation running on the multi-core ARM CPU of the processing system.
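As a rough illustration of the hardware-aware training idea described above (a quantized forward pass embedded inside the PyTorch back-propagation loop, with an independent bit width per tensor), the sketch below fake-quantizes the adjacency, feature, and weight tensors of a single GCN layer and uses a straight-through estimator so gradients flow through the non-differentiable rounding step. The names `FakeQuant` and `QuantGCNLayer` and the symmetric uniform quantizer are illustrative assumptions, not the authors' gFADES implementation, which offloads the quantized forward pass to the FPGA accelerator.

```python
import torch
import torch.nn as nn

class FakeQuant(torch.autograd.Function):
    """Symmetric uniform fake-quantization with a straight-through estimator."""

    @staticmethod
    def forward(ctx, x, bits):
        # Quantize to signed `bits`-bit integer levels, then rescale to float.
        qmax = 2 ** (bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass gradients through the rounding.
        return grad_output, None

class QuantGCNLayer(nn.Module):
    """One GCN layer, A_hat @ X @ W, with a separate bit width per tensor."""

    def __init__(self, in_dim, out_dim, a_bits=4, x_bits=4, w_bits=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)
        self.a_bits, self.x_bits, self.w_bits = a_bits, x_bits, w_bits

    def forward(self, a_hat, x):
        a_q = FakeQuant.apply(a_hat, self.a_bits)        # adjacency tensor
        x_q = FakeQuant.apply(x, self.x_bits)            # feature tensor
        w_q = FakeQuant.apply(self.weight, self.w_bits)  # weight tensor
        return a_q @ (x_q @ w_q)

# Example: a 4-bit layer sized for Cora (1433 input features, 16 hidden units).
layer = QuantGCNLayer(in_dim=1433, out_dim=16, a_bits=4, x_bits=4, w_bits=4)
```

Training with such a layer leaves the optimizer and loss untouched; only the forward pass sees quantized values, which is what lets the accelerator (or, here, a software emulation of it) stand in for full-precision arithmetic during training.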
Acknowledgments
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Hansson, O., Grailoo, M., Gustafsson, O., Nunez-Yanez, J. (2024). Deep Quantization of Graph Neural Networks with Run-Time Hardware-Aware Training. In: Skliarova, I., Brox Jiménez, P., Véstias, M., Diniz, P.C. (eds) Applied Reconfigurable Computing. Architectures, Tools, and Applications. ARC 2024. Lecture Notes in Computer Science, vol 14553. Springer, Cham. https://doi.org/10.1007/978-3-031-55673-9_3
DOI: https://doi.org/10.1007/978-3-031-55673-9_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-55672-2
Online ISBN: 978-3-031-55673-9
eBook Packages: Computer Science, Computer Science (R0)