
Microsaccades for Neuromorphic Stereo Vision

  • Jacques Kaiser
  • Jakob Weinland
  • Philip Keller
  • Lea Steffen
  • J. Camilo Vasquez Tieck (corresponding author)
  • Daniel Reichard
  • Arne Roennau
  • Jörg Conradt
  • Rüdiger Dillmann
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11139)

Abstract

Depth perception through stereo vision is an important feature of biological and artificial vision systems. While biological systems compute disparities effortlessly, doing so remains computationally intensive for artificial vision systems. The complexity resides in solving the correspondence problem: finding matching pairs of points between the two eyes. Inspired by the retina, event-based vision sensors provide a new constraint for solving the correspondence problem: time. Relying on precise spike timing, spiking neural networks can exploit this constraint. However, disparities can only be computed in dynamic environments, since event-based vision sensors report only local changes in light intensity. In this paper, we show how microsaccadic eye movements can be used to compute disparities in static environments. To this end, we built a robotic head supporting two Dynamic Vision Sensors (DVS) capable of independent panning and simultaneous tilting. We evaluate the method on both static and dynamic scenes perceived through microsaccades. This paper demonstrates the complementarity of event-based vision sensors and active perception, leading to more biologically inspired robots.
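To illustrate the timing constraint described above, the following is a minimal, non-spiking Python sketch of temporal-coincidence stereo matching on two event streams. It is not the spiking cooperative network used in the paper; the event format, coincidence window, and disparity range are assumptions chosen only for illustration.

"""Sketch (not the authors' implementation) of temporal-coincidence stereo
matching on event streams, assuming rectified sensors and events given as
(t, x, y, polarity) tuples with timestamps in microseconds."""

from collections import namedtuple

Event = namedtuple("Event", ["t", "x", "y", "pol"])

COINCIDENCE_US = 1000   # assumed matching window of 1 ms
MAX_DISPARITY = 40      # assumed disparity search range in pixels


def match_events(left_events, right_events):
    """Pair left/right events that share row and polarity and are nearly
    simultaneous; return (x_left, y, disparity) triples."""
    left_events = sorted(left_events)
    right_events = sorted(right_events)
    matches = []
    j = 0
    for ev_l in left_events:
        # Advance the right-stream pointer past events that are too old.
        while j < len(right_events) and right_events[j].t < ev_l.t - COINCIDENCE_US:
            j += 1
        # Scan right-stream events inside the coincidence window.
        k = j
        while k < len(right_events) and right_events[k].t <= ev_l.t + COINCIDENCE_US:
            ev_r = right_events[k]
            disparity = ev_l.x - ev_r.x
            if (ev_r.y == ev_l.y and ev_r.pol == ev_l.pol
                    and 0 <= disparity <= MAX_DISPARITY):
                matches.append((ev_l.x, ev_l.y, disparity))
                break  # greedy: take the first plausible partner
            k += 1
    return matches


if __name__ == "__main__":
    # Toy example: one edge seen 5 px further left in the right sensor.
    left = [Event(t=1000, x=60, y=32, pol=1), Event(t=1010, x=61, y=33, pol=1)]
    right = [Event(t=1002, x=55, y=32, pol=1), Event(t=1012, x=56, y=33, pol=1)]
    print(match_events(left, right))  # -> [(60, 32, 5), (61, 33, 5)]

On a static scene such a matcher would produce nothing, since neither sensor emits events; the microsaccadic movements of the robotic head are what generate the event streams that make this timing-based matching possible.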

Keywords

Spiking neural networks · Event-based stereo vision · Eye movements

Notes

Acknowledgments

This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 720270 (Human Brain Project SGA1) and No. 785907 (Human Brain Project SGA2).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jacques Kaiser (1)
  • Jakob Weinland (1)
  • Philip Keller (1)
  • Lea Steffen (1)
  • J. Camilo Vasquez Tieck (1) (corresponding author)
  • Daniel Reichard (1)
  • Arne Roennau (1)
  • Jörg Conradt (1)
  • Rüdiger Dillmann (1)

  1. FZI Research Center for Information Technology, Karlsruhe, Germany
