Boosting Convolutional Neural Networks Performance Based on FPGA Accelerator

  • Omran Al-Shamma
  • Mohammed Abdulraheem Fadhel (Email author)
  • Rabab Alaa Hameed
  • Laith Alzubaidi
  • Jinglan Zhang
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 940)

Abstract

Convolutional Neural Networks (CNNs) have been used extensively for image recognition because of their high accuracy, which is achieved by emulating the behaviour of the optic nerve in living beings. The rapid progress of applications built on deep learning algorithms has further stimulated research and development. In particular, several deep CNN accelerators have been proposed on FPGA-based platforms, owing to their short development cycle, reconfigurability, and high performance. An FPGA is considerably faster than a CPU because it operates in parallel, and it also consumes very little energy. This paper employs an FPGA to implement the VGG16 CNN architecture. The FPGA carries out the convolutional computations, reducing the computation time by 11% without losing much fidelity when a 16-bit fixed-point data format is used instead of the 32-bit floating-point format.
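The fidelity trade-off mentioned above comes from the quantisation step: 32-bit floating-point weights and activations are rounded to 16-bit fixed-point values before the multiply-accumulate loop runs. The sketch below (not from the paper) illustrates this on a toy 3x3 convolution in NumPy; the Q8.8 split, the random data, and the helper names are illustrative assumptions rather than the authors' VGG16 configuration.

```python
# Minimal sketch: quantise a 3x3 convolution to signed 16-bit fixed point
# (Q8.8 assumed here) and compare it against the 32-bit floating-point
# reference. Data and parameters are illustrative, not the paper's setup.
import numpy as np

FRAC_BITS = 8                  # assumed number of fractional bits (Q8.8)
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    """Round a float array to signed 16-bit fixed point."""
    return np.clip(np.round(x * SCALE), -32768, 32767).astype(np.int16)

def conv2d_fixed(img_q, ker_q):
    """Naive valid convolution on fixed-point operands (int32 accumulator)."""
    H, W = img_q.shape
    k = ker_q.shape[0]
    out = np.zeros((H - k + 1, W - k + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img_q[i:i+k, j:j+k].astype(np.int32) *
                               ker_q.astype(np.int32))
    # A single rescale per output pixel brings the Q16.16 accumulator back to Q8.8.
    return out >> FRAC_BITS

rng = np.random.default_rng(0)
img = rng.random((8, 8)).astype(np.float32)
ker = rng.standard_normal((3, 3)).astype(np.float32)

ref = np.array([[np.sum(img[i:i+3, j:j+3] * ker) for j in range(6)]
                for i in range(6)], dtype=np.float32)
fxd = conv2d_fixed(to_fixed(img), to_fixed(ker)).astype(np.float32) / SCALE

print("max abs error:", np.abs(ref - fxd).max())   # small fidelity loss
```

On hardware, the same idea lets each multiply-accumulate use integer DSP blocks instead of floating-point units, which is where the time and energy savings reported in the abstract come from.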

Keywords

CNN · HPS · FPGA · VGG16 · Floating point · Fixed point

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Omran Al-Shamma (1)
  • Mohammed Abdulraheem Fadhel (1) (Email author)
  • Rabab Alaa Hameed (2)
  • Laith Alzubaidi (1, 3)
  • Jinglan Zhang (3)
  1. University of Information Technology and Communications, Baghdad, Iraq
  2. University of Baghdad, Baghdad, Iraq
  3. Faculty of Science and Engineering, Queensland University of Technology, Brisbane, Australia