
Improving Performance Estimation for FPGA-Based Accelerators for Convolutional Neural Networks

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12083)

Abstract

Field-programmable gate array (FPGA) based accelerators are widely used to accelerate convolutional neural networks (CNNs) because of their potential for high performance and their reconfigurability for specific application instances. Determining the optimal configuration of an FPGA-based accelerator requires exploring the design space, and accurate performance prediction plays an important role during this exploration. This work introduces a novel method for fast and accurate latency estimation based on a Gaussian process whose mean function is parametrised by an analytic approximation and which is coupled with runtime data. Experiments with three different CNNs on an FPGA-based accelerator implemented on an Intel Arria 10 GX 1150 demonstrated a 30.7% improvement in mean absolute error over a standard analytic method under leave-one-out cross-validation.
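
The core idea can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example, not the authors' implementation: the roofline-style analytic_latency model, the design-point encoding, and all measurements are made-up placeholders, and scikit-learn's GaussianProcessRegressor stands in for whatever GP library the paper uses. Fitting the GP to the log-space residuals between measured latency and the analytic estimate is equivalent to a GP whose prior mean is the analytic model, corrected by runtime data.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def analytic_latency(cfg):
        """Hypothetical roofline-style estimate for one convolutional layer:
        total MACs divided by the MACs issued per cycle."""
        h, w, c_in, c_out, k, pe = cfg  # feature-map size, channels, kernel, PEs
        return (h * w * c_in * c_out * k * k) / pe

    # Toy design points: (H, W, C_in, C_out, K, parallel multipliers).
    X = np.array([
        [56, 56,  64,  64, 3,  256],
        [28, 28, 128, 128, 3,  512],
        [14, 14, 256, 256, 3,  512],
        [ 7,  7, 512, 512, 3, 1024],
    ], dtype=float)
    y = np.array([4.6e6, 2.4e6, 1.3e6, 6.2e5])  # measured cycles (made up)

    # GP over the residual between measurement and analytic estimate;
    # equivalent to a GP whose prior mean is the analytic model.
    mean_prior = np.array([analytic_latency(x) for x in X])
    kernel = RBF(length_scale=np.ones(X.shape[1])) + WhiteKernel(noise_level=1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(np.log(X), np.log(y) - np.log(mean_prior))

    def predict_latency(cfg):
        """Analytic estimate corrected by the learned residual."""
        x = np.log(np.array(cfg, dtype=float)).reshape(1, -1)
        resid, std = gp.predict(x, return_std=True)
        return analytic_latency(cfg) * np.exp(resid[0]), std[0]

    print(predict_latency([28, 28, 128, 128, 3, 512]))

In this formulation the prediction falls back to the analytic estimate in unexplored regions of the design space, where the learned residual reverts to zero, while measured configurations pull it towards the observed latency; the returned standard deviation can also indicate where further runtime samples would be most informative during design-space exploration.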

Keywords

Field-programmable gate array · Deep learning · Convolutional neural network · Performance estimation · Gaussian process

Notes

Acknowledgments

We thank Yann Herklotz, Alexander Montgomerie-Corcoran and ARC’20 reviewers for insightful suggestions.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Electronic and Electrical Engineering, University College London, London, UK
  2. Department of Computing, Imperial College London, London, UK
  3. Department of Computer Science, University College London, London, UK
  4. Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland
