
POWER: A Parallel-Optimization-Based Framework Towards Edge Intelligent Image Recognition and a Case Study

  • Yingyi Yang
  • Xiaoming Mai
  • Hao Wu
  • Ming Nie
  • Hui Wu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11334)

Abstract

To improve the intelligent image recognition capabilities of edge devices, this paper introduces a parallel-optimization-based framework called POWER. With an FPGA (Field-Programmable Gate Array) as its hardware module, POWER offers good extensibility and flexible customization for developing intelligent firmware suited to different types of edge devices in various scenarios. Through a case study, we design and implement a firmware prototype following the POWER specification and explore its performance improvement through parallel optimization. Our experimental results show that the prototype performs well and is applicable to substation inspection robots, which also indirectly validates the effectiveness of the POWER framework for designing edge-intelligent firmware modules.
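The paper's abstract describes parallel optimization only at a high level, so the sketch below is purely illustrative rather than the authors' implementation: a minimal two-stage producer/consumer pipeline, assuming a hypothetical fpga_infer() call standing in for the FPGA inference module, to show the kind of pipeline-level parallelism that lets preprocessing on the CPU overlap with accelerator inference on an edge device.

```cpp
// Illustrative sketch only (not from the paper): overlap CPU-side frame
// preparation with accelerator inference using a small two-stage pipeline.
// fpga_infer() is a hypothetical stand-in for the FPGA hardware module.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { int id; std::vector<float> pixels; };

// Thread-safe queue used to hand frames from the preprocessing stage
// to the inference stage.
template <typename T>
class BlockingQueue {
 public:
  void push(T item) {
    { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(item)); }
    cv_.notify_one();
  }
  bool pop(T& out) {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [&] { return !q_.empty() || closed_; });
    if (q_.empty()) return false;            // queue closed and drained
    out = std::move(q_.front()); q_.pop();
    return true;
  }
  void close() {
    { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
    cv_.notify_all();
  }
 private:
  std::queue<T> q_; std::mutex m_; std::condition_variable cv_;
  bool closed_ = false;
};

// Hypothetical accelerator call; here it just computes a dummy "class" so the
// example runs anywhere. Real firmware would drive the FPGA at this point.
int fpga_infer(const Frame& f) {
  float s = 0.f;
  for (float p : f.pixels) s += p;
  return static_cast<int>(s) % 10;
}

int main() {
  BlockingQueue<Frame> ready;

  // Stage 1: simulated capture + preprocessing thread.
  std::thread producer([&] {
    for (int i = 0; i < 8; ++i)
      ready.push(Frame{i, std::vector<float>(64, static_cast<float>(i))});
    ready.close();
  });

  // Stage 2: inference thread, running concurrently with stage 1.
  std::thread consumer([&] {
    Frame f;
    while (ready.pop(f))
      std::cout << "frame " << f.id << " -> class " << fpga_infer(f) << "\n";
  });

  producer.join();
  consumer.join();
  return 0;
}
```

Under these assumptions, the throughput gain comes from keeping the CPU preprocessing stage and the accelerator inference stage busy at the same time rather than running them strictly in sequence.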

Keywords

Edge intelligence · Image recognition · Framework · Parallel optimization · Substation inspection robot

Notes

Acknowledgement

This work is funded by the Guangdong Power Grid Co., Ltd. Science and Technology Program under Grant No. GDKJXM20161136.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Electric Power Research Institute of Guangdong Power Grid Co., Ltd., Guangzhou, China
