
Optimum Network/Framework Selection from High-Level Specifications in Embedded Deep Learning Vision Applications

  • Delia Velasco-Montero
  • Jorge Fernández-Berni
  • Ricardo Carmona-Galán
  • Ángel Rodríguez-Vázquez
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11182)

Abstract

This paper benchmarks 16 combinations of popular Deep Neural Networks for 1000-category image recognition and Deep Learning frameworks on an embedded platform. A Figure of Merit based on high-level specifications is introduced. By sweeping the relative weights of accuracy, throughput, and power consumption in this global performance metric, we demonstrate that only a reduced subset of the analyzed combinations actually needs to be considered for real deployment. We also report the optimum network/framework selection for every application scenario defined in those terms, i.e., for every weighted balance of the aforementioned parameters. Our approach can be extended to other networks, frameworks, and performance parameters, thus supporting system-level design decisions in the ever-changing ecosystem of Deep Learning technology.
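The weight-sweeping idea in the abstract can be illustrated with a minimal sketch. The weighted-product form of the Figure of Merit below is an assumption for illustration only (the paper's exact formula may differ), and every benchmark number in it is a made-up placeholder, not measured data:

```python
# Hypothetical sketch of a weighted Figure of Merit (FoM) for ranking
# network/framework combinations. The weighted-product form and all
# benchmark values are illustrative assumptions, not the paper's data.

def fom(accuracy, throughput, power, w_acc, w_thr, w_pow):
    """Weighted product: higher accuracy, higher throughput, and lower
    power consumption all increase the FoM. Weights should sum to 1."""
    return (accuracy ** w_acc) * (throughput ** w_thr) * ((1.0 / power) ** w_pow)

# Made-up measurements: (top-1 accuracy, frames per second, watts).
combos = {
    ("MobileNet", "TensorFlow"): (0.70, 5.0, 3.2),
    ("SqueezeNet", "Caffe"):     (0.57, 7.5, 2.9),
    ("GoogLeNet", "Caffe2"):     (0.69, 2.0, 3.5),
}

# Sweep the relative weight given to accuracy, splitting the remainder
# evenly between throughput and power, and report the best combination
# for each weighting scenario.
for w_acc in (0.2, 0.5, 0.8):
    w_thr = w_pow = (1.0 - w_acc) / 2.0
    best = max(combos, key=lambda k: fom(*combos[k], w_acc, w_thr, w_pow))
    print(f"w_acc={w_acc:.1f}: best = {best[0]}/{best[1]}")
```

With a sweep like this, different weightings single out different winners, which matches the abstract's observation that only a reduced subset of combinations is ever optimal.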

Keywords

Deep learning · Convolutional neural networks · Embedded vision · Performance · High-level specifications


Acknowledgments

This work was supported by Spanish Government MINECO (European Region Development Fund, ERDF/FEDER) through Project TEC2015-66878-C3-1-R, by Junta de Andalucía CEICE through Project TIC 2338-2013 and by EU H2020 MSCA ACHIEVE-ITN, Grant No. 765866.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Delia Velasco-Montero (1)
  • Jorge Fernández-Berni (1)
  • Ricardo Carmona-Galán (1)
  • Ángel Rodríguez-Vázquez (1)

  1. Instituto de Microelectrónica de Sevilla, Universidad de Sevilla-CSIC, Sevilla, Spain
