
Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays

  • Conference paper
  • In: Computer Vision – ECCV 2020 (ECCV 2020)
  • Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12374)

Abstract

We present a novel method of CNN inference for pixel processor array (PPA) vision sensors, designed to take advantage of their massive parallelism and analog compute capabilities. PPA sensors consist of an array of processing elements (PEs), each capable of light capture, data storage, and computation, allowing various computer vision processes to be executed directly upon the sensor device. The key idea behind our approach is storing network weights "in-pixel" within the PEs of the PPA sensor itself, allowing various computations, such as multiple different image convolutions, to be carried out in parallel. Our approach can perform convolutional layers, max pooling, ReLU, and a final fully connected layer entirely upon the PPA sensor, while leaving no untapped computational resources. This is in contrast to previous works that use sensor-level processing only to sequentially compute image convolutions, and must transfer data to an external digital processor to complete the computation. We demonstrate our approach on the SCAMP-5 vision system, performing inference with an MNIST digit classification network at over 3000 frames per second and over 93% classification accuracy. This is the first work demonstrating CNN inference conducted entirely upon a PPA vision sensor, requiring no external processing.
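The pipeline the abstract describes (low-precision convolutions, ReLU, max pooling, and a final fully connected layer) can be sketched in NumPy. This is an illustrative emulation only, not the authors' SCAMP-5 implementation: on the PPA the kernels would be stored in-pixel and evaluated in parallel across the PE array, whereas here they are applied serially, and all shapes, names, and the ternary weight choice are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_ternary(image, kernels):
    """Valid 2-D convolution with ternary {-1, 0, +1} weights.

    On a PPA, each kernel would be evaluated in parallel across the
    processor-element array; here we emulate that with serial loops.
    """
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    out = np.zeros((kernels.shape[0], h - kh + 1, w - kw + 1))
    for n, k in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[n, i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

def maxpool2(x):
    """Non-overlapping 2x2 max pooling over each feature map."""
    c, h, w = x.shape
    return x[:, :h // 2 * 2, :w // 2 * 2] \
        .reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def infer(image, kernels, fc_weights):
    """conv -> ReLU -> maxpool -> fully connected -> predicted class."""
    feat = np.maximum(conv2d_ternary(image, kernels), 0.0)
    feat = maxpool2(feat)
    return int(np.argmax(fc_weights @ feat.ravel()))

# Hypothetical shapes: one 28x28 input, 4 ternary 3x3 kernels, 10 classes.
image = rng.random((28, 28))
kernels = rng.integers(-1, 2, size=(4, 3, 3)).astype(float)
fc = rng.standard_normal((10, 4 * 13 * 13))
print(infer(image, kernels, fc))
```

Ternary weights are chosen here because sums and differences of image registers map naturally onto the analog add/subtract operations of a PPA; a trained network would supply the actual kernel and fully connected weights.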



References

  1. Aimar, A., et al.: NullHop: a flexible convolutional neural network accelerator based on sparse representations of feature maps. IEEE Trans. Neural Netw. Learn. Syst. 99, 1–13 (2018)


  2. Bose, L., Chen, J., Carey, S.J., Dudek, P., Mayol-Cuevas, W.: Visual odometry for pixel processor arrays. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4604–4612 (2017)


  3. Bose, L., Chen, J., Carey, S.J., Dudek, P., Mayol-Cuevas, W.: A camera that CNNs: towards embedded neural networks on pixel processor arrays. arXiv preprint arXiv:1909.05647 (2019). (Published at ICCV 2019)

  4. Carey, S.J., Lopich, A., Barr, D.R., Wang, B., Dudek, P.: A 100,000 fps vision sensor with embedded 535 GOPS/W 256 × 256 SIMD processor array. In: 2013 Symposium on VLSI Circuits, pp. C182–C183. IEEE (2013)


  5. Carey, S.J., Zarándy, Á., Dudek, P.: Characterization of processing errors on analog fully-programmable cellular sensor-processor arrays. In: 2014 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1580–1583. IEEE (2014)


  6. Chen, J., Carey, S.J., Dudek, P.: SCAMP5d vision system and development framework. In: Proceedings of the 12th International Conference on Distributed Smart Cameras, p. 23. ACM (2018)


  7. Chen, Y.H., Emer, J., Sze, V.: Eyeriss: a spatial architecture for energy-efficient dataflow for convolutional neural networks. In: ACM SIGARCH Computer Architecture News, vol. 44, pp. 367–379. IEEE Press (2016)


  8. Courbariaux, M., Bengio, Y., David, J.P.: BinaryConnect: training deep neural networks with binary weights during propagations. In: Advances in Neural Information Processing Systems, pp. 3123–3131 (2015)


  9. Du, Z., et al.: ShiDianNao: shifting vision processing closer to the sensor. In: ACM SIGARCH Computer Architecture News, vol. 43, pp. 92–104. ACM (2015)


  10. Guillard, B.: Optimising convolutional neural networks for super fast inference on focal-plane sensor-processor arrays. Master’s thesis, Imperial College London (2019)


  11. Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., Bengio, Y.: Quantized neural networks: training neural networks with low precision weights and activations. J. Mach. Learn. Res. 18(1), 6869–6898 (2017)


  12. Komuro, T., Kagami, S., Ishikawa, M.: A dynamically reconfigurable SIMD processor for a vision chip. IEEE J. Solid-State Circuits 39(1), 265–268 (2004)


  13. Liang, S., Yin, S., Liu, S., Luk, W., Wei, S.: FP-BNN: binarized neural network on FPGA. Neurocomputing 275, 1072–1086 (2018)


  14. Rodriguez-Vazquez, A., Fernández-Berni, J., Leñero-Bardallo, J.A., Vornicu, I., Carmona-Galán, R.: CMOS vision sensors: embedding computer vision at imaging front-ends. IEEE Circuits Syst. Mag. 18(2), 90–107 (2018)


  15. Sim, J., Park, J.S., Kim, M., Bae, D., Choi, Y., Kim, L.S.: A 1.42 TOPS/W deep convolutional neural network recognition processor for intelligent IoE systems. In: 2016 IEEE International Solid-State Circuits Conference (ISSCC), pp. 264–265. IEEE (2016)


  16. Wong, M.: Analog vision - neural network inference acceleration using analog SIMD computation in the focal plane. M.Sc. dissertation, Imperial College London (2018)


  17. Zhao, R., et al.: Accelerating binarized convolutional neural networks with software-programmable FPGAs. In: Proceedings of the ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (FPGA 2017), pp. 15–24. ACM (2017)


  18. Zhou, A., Yao, A., Guo, Y., Xu, L., Chen, Y.: Incremental network quantization: towards lossless CNNs with low-precision weights. arXiv preprint arXiv:1702.03044 (2017)

  19. Zhu, C., Han, S., Mao, H., Dally, W.J.: Trained ternary quantization. arXiv preprint arXiv:1612.01064 (2016)


Author information

Correspondence to Laurie Bose.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 29294 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Bose, L., Dudek, P., Chen, J., Carey, S.J., Mayol-Cuevas, W.W. (2020). Fully Embedding Fast Convolutional Networks on Pixel Processor Arrays. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12374. Springer, Cham. https://doi.org/10.1007/978-3-030-58526-6_29


  • DOI: https://doi.org/10.1007/978-3-030-58526-6_29

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58525-9

  • Online ISBN: 978-3-030-58526-6

  • eBook Packages: Computer Science (R0)
