Retina Color-Opponency Based Pursuit Implemented Through Spiking Neural Networks in the Neurorobotics Platform
Abstract
The ‘red-green’ pathway of the retina is classically recognized as one of the retinal mechanisms by which humans extract color information from light, combining signals from L-cones and M-cones in an opponent way. The precise retinal circuitry underlying this opponency is still uncertain, but it is known that L-cones and M-cones, despite widely overlapping spectral responses, contribute with opposite signs. In this paper, we simulate the red-green opponency process with a retina model based on linear-nonlinear analysis to characterize context adaptation, using an image-processing approach to simulate the neural responses needed to track a moving target. We then integrate this model into a visual pursuit controller, implemented as a spiking neural network, that drives eye movements in a humanoid robot. Tests conducted in the Neurorobotics Platform confirm the effectiveness of the whole model. This work is a first step towards a bio-inspired smooth pursuit model that embeds a retina model within spiking neural networks.
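To make the opponency computation concrete, the sketch below shows a minimal linear-nonlinear (LN) red-green opponency map. It is illustrative only, not the paper's implementation: it assumes the R and G channels of a camera image as rough proxies for L- and M-cone responses, and the function and parameter names (red_green_opponency, sigma_center, sigma_surround, gain) are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def red_green_opponency(rgb, sigma_center=1.0, sigma_surround=3.0, gain=4.0):
    """Illustrative LN red-green opponency map (L-center ON, M-surround OFF).

    The R and G channels stand in for L- and M-cone responses; real cone
    spectral sensitivities overlap widely, so this is only a rough proxy.
    """
    rgb = rgb.astype(np.float64) / 255.0
    l_approx, m_approx = rgb[..., 0], rgb[..., 1]

    # Linear stage: center-surround receptive field with opposite-sign
    # cone contributions, modeled as a difference of Gaussian blurs.
    center = gaussian_filter(l_approx, sigma_center)
    surround = gaussian_filter(m_approx, sigma_surround)
    linear = center - surround

    # Nonlinear stage: a static sigmoid squashing, standing in for the
    # context-adaptation nonlinearity that the paper characterizes.
    return 1.0 / (1.0 + np.exp(-gain * linear))
```

In a pursuit setting, the peak of such an opponent map (e.g., np.unravel_index(np.argmax(activity), activity.shape)) could provide the image-plane position of a red target against a green background, from which the retinal slip fed to the pursuit controller can be estimated.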
Keywords
Receptive Field · Humanoid Robot · Smooth Pursuit · Spiking Neural Network · Retinal Slip
Acknowledgements
The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project). The authors would like to thank the Italian Ministry of Foreign Affairs, General Directorate for the Promotion of the “Country System”, Bilateral and Multilateral Scientific and Technological Cooperation Unit, for the support through the Joint Laboratory on Biorobotics Engineering project.