Video Restoration Using Convolutional Neural Networks for Low-Level FPGAs

  • Kwok-Wai Hung
  • Chaoming Qiu
  • Jianmin Jiang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11062)

Abstract

Deep convolutional neural networks (CNNs) have attracted wide attention for video restoration in recent years. Due to the enormous computational complexity of deep CNNs, implementations on high-level FPGAs have been proposed to achieve power-efficient solutions. However, low-end devices, such as mobile devices and low-level FPGAs, have very limited processing capabilities, including limited logic gates and memory bandwidth. In this paper, we propose a power-efficient design of CNNs for implementation on low-level FPGAs for near real-time video frame restoration. Specifically, our video restoration method reduces the number of model parameters by analyzing the network hyper-parameters. Fixed-point quantization is adopted during the training process to improve the processing frame rate while retaining the PSNR quality. Hence, the computational requirement of the proposed CNNs is reduced sufficiently for implementation using the OpenCL framework on a low-level FPGA with only 85K logic gates. Experimental results show that the proposed FPGA platform consumes more than 8 times less power than CPU and GPU implementations.
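The paper does not reproduce its exact quantization scheme here, but the core idea of fixed-point quantization can be illustrated with a minimal sketch. The function below (a hypothetical helper, not from the paper) maps floating-point CNN weights onto a signed fixed-point grid defined by an assumed split of integer and fractional bits, rounding to the nearest representable step and clipping to the representable range:

```python
import numpy as np

def quantize_fixed_point(x, int_bits=2, frac_bits=6):
    """Quantize an array to signed fixed-point with the given
    integer/fractional bit split (word length = 1 sign bit
    + int_bits + frac_bits). Values are rounded to the nearest
    representable step and clipped to the representable range."""
    step = 2.0 ** -frac_bits          # smallest representable increment
    max_val = 2.0 ** int_bits - step  # largest positive value
    min_val = -2.0 ** int_bits        # most negative value (two's complement)
    return np.clip(np.round(x / step) * step, min_val, max_val)

# Example: quantizing a small set of weights; out-of-range values saturate
w = np.array([0.7071, -1.3333, 3.9, -5.0])
w_q = quantize_fixed_point(w)
```

During quantization-aware training, such a rounding step is typically applied in the forward pass while gradients flow through unchanged (the straight-through estimator), so the network learns weights that remain accurate under the reduced precision available on the FPGA.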

Keywords

FPGA · Video restoration · Real-time applications

Notes

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (No. 61602312, 61620106008) and the Shenzhen Emerging Industries of the Strategic Basic Research Project (No. JCYJ20160226191842793).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
