Neuromorphic vision chips

Abstract

This paper reviews the progress of neuromorphic vision chip research over the past decades. It focuses on two kinds of neuromorphic vision chips: frame-driven (FD) and event-driven (ED) vision chips. FD and ED vision chips differ greatly in system architecture, image sensing, image information coding, image processing algorithms, and design methodology. Vision chips can overcome the serial data-transmission and processing bottlenecks of traditional image processing systems, and can perform high-speed image capture and real-time image processing. This paper selects one typical chip of each kind and introduces its architecture, image sensing scheme, image processing processors, and system operation. The FD neuromorphic reconfigurable vision chip comprises a high-speed image sensor, a processing element (PE) array, and a self-organizing map (SOM) neural network; it has advantages in image resolution, static object detection, time-multiplexed image processing, and chip area. The ED neuromorphic vision chip system is based on an address-event-representation (AER) image sensor and an event-driven multi-kernel convolution network; it has advantages in fast sensing, low communication bandwidth, brain-like processing, and high energy efficiency. Finally, this paper discusses the architecture and challenges of future neuromorphic vision chips and indicates that a reconfigurable vision chip integrating left- and right-brain functions in three-dimensional (3D) large-scale integration (LSI) technology is becoming a trend in vision chip research.
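The contrast the abstract draws between frame-driven and event-driven processing can be illustrated with a minimal sketch of event-driven convolution: each incoming AER event, rather than a full frame, stamps a kernel-shaped contribution onto a membrane-potential map, and any pixel that crosses threshold immediately emits an output event and resets. This is an illustrative software analogy under assumed names and parameters, not the circuit of the chips reviewed here.

```python
import numpy as np

def event_driven_convolution(events, kernel, shape, threshold=1.0):
    """Sketch of event-driven convolution over an AER event stream.

    events: iterable of (x, y, polarity) address events
    kernel: 2-D convolution kernel (odd-sized)
    shape:  (height, width) of the pixel array
    """
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    # Padded membrane-potential map so kernel stamping never leaves bounds.
    vmem = np.zeros((shape[0] + 2 * pad_h, shape[1] + 2 * pad_w))
    out_events = []
    for (x, y, polarity) in events:
        # Each event adds a kernel-shaped contribution centred on its address.
        vmem[y:y + kh, x:x + kw] += polarity * kernel
        # Fire-and-reset: pixels crossing threshold emit an output event.
        for (fy, fx) in np.argwhere(vmem >= threshold):
            out_events.append((fx - pad_w, fy - pad_h))
            vmem[fy, fx] = 0.0
    return out_events
```

Because output events are produced as soon as potentials cross threshold, latency is per-event rather than per-frame, which is the source of the low-latency, low-bandwidth behaviour the abstract attributes to ED chips.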


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61234003, 61434004, 61504141), the Brain Project of Beijing (Grant No. Z161100000216129), and the CAS Interdisciplinary Project (Grant No. KJZD-EW-L11-04). The author thanks all members of the research group for their collaboration.

Author information

Correspondence to Nanjian Wu.


Cite this article

Wu, N. Neuromorphic vision chips. Sci. China Inf. Sci. 61, 060421 (2018). https://doi.org/10.1007/s11432-017-9303-0

Keywords

  • neuromorphic
  • vision chip
  • frame-driven
  • address-event-representation (AER)
  • event-driven
  • convolutional neural network
  • image sensor
  • image processing