Sparse coding predicts optic flow specificities of zebrafish pretectal neurons
Zebrafish pretectal neurons exhibit specificities for large-field optic flow patterns associated with rotatory or translatory body motion. We investigate the hypothesis that these specificities reflect the input statistics of natural optic flow. Realistic motion sequences were generated using computer graphics simulating self-motion in an underwater scene. Local retinal motion was estimated with a motion detector and encoded in four populations of directionally tuned retinal ganglion cells, represented as two signed input variables. This activity was then used as input into one of three learning networks: a sparse coding network (competitive learning), PCA whitening with subsequent sparse coding, and a backpropagation network (supervised learning). All simulations developed specificities for optic flow comparable to those found in a neurophysiological study (Kubo et al. in Neuron 81(6):1344–1359, 2014. https://doi.org/10.1016/j.neuron.2014.02.043), but the relative frequencies of the various neuronal responses were best modeled by the sparse coding approach without whitening. We conclude that the optic flow neurons in the zebrafish pretectum reflect the statistics of natural optic flow. The predicted vectorial receptive fields show not only typical optic flow fields but also “Gabor” and dipole-shaped patterns that likely reflect difference fields needed for reconstruction by linear superposition.
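The sparse-coding-by-competition idea described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's actual model: synthetic large-field flow fields (a translatory and a rotatory template, hypothetical stand-ins for the rendered underwater sequences) are mixed with sparse coefficients, and a small winner-take-all network with Oja-style updates learns unit-norm "receptive field" vectors from them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x8 retinal grid; each flow field is a flattened array of
# (vx, vy) components, giving 128-dimensional input vectors.
n = 8
y, x = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
trans = np.stack([np.ones_like(x), np.zeros_like(x)], axis=-1)  # uniform rightward flow
rot = np.stack([-y, x], axis=-1)                                # rotation about the center
templates = np.stack([t.ravel() / np.linalg.norm(t) for t in (trans, rot)])

def sample(n_samples):
    # Sparse (Laplacian) mixing of the two templates plus isotropic noise.
    coeffs = rng.laplace(size=(n_samples, 2))
    return coeffs @ templates + 0.02 * rng.standard_normal((n_samples, templates.shape[1]))

def train(data, n_units=4, lr=0.1, epochs=20):
    # Winner-take-all competitive learning: the unit whose unit-norm weight
    # vector best matches the input (in absolute value, i.e. sign-invariant)
    # is nudged toward the input and renormalized.
    W = rng.standard_normal((n_units, data.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    for _ in range(epochs):
        for v in rng.permutation(data):
            k = np.argmax(np.abs(W @ v))          # winning unit
            d = W[k] @ v
            W[k] += lr * np.sign(d) * (v - d * W[k])
            W[k] /= np.linalg.norm(W[k])
    return W

W = train(sample(500))
# Similarity of the best-matching learned unit to each generative template.
match = np.abs(W @ templates.T).max(axis=0)
print(match)
```

With sparse mixing coefficients, the data concentrates along the two template directions, so individual units tend to align with the translatory or rotatory patterns rather than arbitrary mixtures; this is the qualitative effect the abstract attributes to sparse coding of natural flow statistics.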
Keywords: Optic flow · Sparse coding · Optimality · Pretectum · Egomotion detection
This work was carried out at the Department of Biology of the Eberhard-Karls-University, Tübingen, Germany. TW received additional support from the Deutsche Forschungsgemeinschaft within the Werner Reichardt Center for Integrative Neuroscience (CIN), Tübingen.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.