
Multi-clusters: An Efficient Design Paradigm of NN Accelerator Architecture Based on FPGA

  • Conference paper
Network and Parallel Computing (NPC 2022)

Abstract

With the continuous development of neural network models, choosing a suitable platform for these compute-intensive applications is essential. The Field-Programmable Gate Array (FPGA) is gradually becoming an acceleration platform that balances power and performance. FPGA-based neural network accelerator architectures generally fall into two categories, stream and single-engine, and each has advantages and disadvantages. Stream designs achieve high performance more easily because they are customized to the model, but they have low kernel compatibility; single-engine designs are more flexible but incur more scheduling overhead. Therefore, this work proposes a new design paradigm for FPGA-based neural network accelerators, called Multi-clusters (MC), which combines the characteristics of the two categories above. We partition the original network model according to its computational characteristics, then design different cores that each map one partition for efficient execution. A fine-grained pipeline runs inside each core, while software scheduling coordinates the multiple cores at coarse granularity, thereby improving overall computing performance. The experimental results show that an accelerator built in the MC style achieves a 39.7× performance improvement and a 7.9× energy-efficiency improvement compared with CPU and GPU baselines, and ultimately reaches a peak computing performance of nearly 680.3 GOP/s.
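To make the paradigm concrete, here is a minimal, purely illustrative Python sketch of the scheduling idea the abstract describes: the model is partitioned by compute characteristics, each partition is bound to a core customized for that characteristic, and a software scheduler drives the cores at coarse granularity while each core pipelines its work internally. The names `Partition`, `Core`, and `schedule` are our assumptions for illustration, not the paper's implementation or API.

```python
# Illustrative sketch only: these classes model the MC idea at a high level;
# the paper's actual cores are hardware pipelines on the FPGA fabric.
from dataclasses import dataclass


@dataclass
class Partition:
    name: str   # e.g. "conv_block1"
    kind: str   # compute characteristic, e.g. "conv" or "fc"


@dataclass
class Core:
    kind: str   # the layer type this core is customized for

    def run(self, part: Partition) -> str:
        # In hardware, the layers of `part` would execute as a
        # fine-grained pipeline inside this core.
        assert part.kind == self.kind, "partition mapped to the wrong core"
        return f"{part.name} executed on {self.kind}-core"


def schedule(partitions, cores):
    """Coarse-grained software schedule: dispatch each partition, in model
    order, to the core specialized for its compute characteristic."""
    by_kind = {core.kind: core for core in cores}
    return [by_kind[part.kind].run(part) for part in partitions]


if __name__ == "__main__":
    model = [Partition("conv1", "conv"),
             Partition("conv2", "conv"),
             Partition("fc1", "fc")]
    for msg in schedule(model, [Core("conv"), Core("fc")]):
        print(msg)
```

In the real design, consecutive partitions would presumably also overlap in time across cores; that overlap is where a coarse-grained schedule can recover the throughput a single engine loses to per-layer scheduling.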


Acknowledgments

This work was supported in part by the National Key R&D Program of China under Grants 2017YFA0700900 and 2017YFA0700903, in part by the National Natural Science Foundation of China under Grants 62102383, 61976200, and 62172380, in part by the Jiangsu Provincial Natural Science Foundation under Grant BK20210123, in part by the Youth Innovation Promotion Association CAS under Grant Y2021121, and in part by the USTC Research Funds of the Double First-Class Initiative under Grant YD2150002005.

Author information

Corresponding author: Lei Gong.



Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper

Cite this paper

Wang, T., Gong, L., Wang, C., Yang, Y., Gao, Y. (2022). Multi-clusters: An Efficient Design Paradigm of NN Accelerator Architecture Based on FPGA. In: Liu, S., Wei, X. (eds) Network and Parallel Computing. NPC 2022. Lecture Notes in Computer Science, vol 13615. Springer, Cham. https://doi.org/10.1007/978-3-031-21395-3_14

  • DOI: https://doi.org/10.1007/978-3-031-21395-3_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21394-6

  • Online ISBN: 978-3-031-21395-3

  • eBook Packages: Computer Science, Computer Science (R0)
