
Biologically inspired visual computing: the state of the art

  • Review Article
  • Published in Frontiers of Computer Science

Abstract

Visual information is critical to the evolutionary success of almost all animals, and it is likewise central to many computing tasks: over the last 60 years or so, visual computing has achieved tremendous success in numerous applications. Throughout that time, its development has repeatedly drawn inspiration from biological mechanisms. In particular, deep neural networks were inspired by the hierarchical processing mechanisms found in the visual cortex of primate brains (including ours), and have achieved major breakthroughs in many domain-specific visual tasks. To better understand biologically inspired visual computing, we present a survey of current work and hope to offer new avenues for rethinking visual computing and designing novel neural network architectures.
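The hierarchical "filter, then pool, then stack" motif that the abstract attributes to the visual cortex can be illustrated with a minimal sketch. This is our own illustration, not code from the survey: the functions `conv1d`, `max_pool`, and `receptive_field`, the edge-detector kernel, and the example signal are all assumptions chosen for clarity. The point of the sketch is that convolution acts like simple-cell feature detection, pooling like complex-cell invariance, and stacking the two makes units in deeper stages respond to ever-larger regions of the input.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: simple-cell-like feature detection."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(signal, size):
    """Non-overlapping max pooling: complex-cell-like local invariance."""
    return [max(signal[i:i + size])
            for i in range(0, len(signal) - size + 1, size)]

def receptive_field(layers):
    """Input-window size seen by one top unit after stacked stages.

    Each stage is (kernel_size, pool_size); convolutions use stride 1,
    pooling is non-overlapping (stride == pool_size).
    """
    rf, stride = 1, 1
    for k, p in layers:
        rf += (k - 1) * stride   # convolution widens the field
        rf += (p - 1) * stride   # pooling widens it further
        stride *= p              # pooling multiplies the effective stride
    return rf

# Two stacked stages over a toy signal with two "bumps".
x = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 2.0, 5.0, 2.0, 0.0]
edge = [-1.0, 2.0, -1.0]            # crude peak/edge detector
h1 = max_pool(conv1d(x, edge), 2)   # stage 1: 4 units, each sees 4 inputs
h2 = max_pool(conv1d(h1, edge), 2)  # stage 2: 1 unit, sees 10 inputs
print(h1, h2, receptive_field([(3, 2), (3, 2)]))
```

Deep convolutional networks repeat exactly this growth pattern: after two conv(3)/pool(2) stages the top unit already integrates a 10-sample window, which is why deeper layers can represent progressively more global structure, echoing the hierarchy from V1 to higher visual areas.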



Acknowledgements

This work was supported in part by the National Key R&D Program of China (2018YFB1004600), the National Natural Science Foundation of China (Grant Nos. 61761146004, 61773375), the Beijing Municipal Natural Science Foundation (Z181100008918010), and Chinese Academy of Sciences (153D31KYSB20160282).

Author information

Corresponding author

Correspondence to Zhaoxiang Zhang.

Additional information

Wangli Hao is currently a PhD candidate in the Center for Research on Intelligent Perception and Computing, Institute of Automation, Chinese Academy of Sciences (CAS), China. She received her bachelor’s degree from Shanxi Agricultural University, China in 2011, and her master’s degree from the Beijing Institute of Technology, China in 2014.

Ian Max Andolina is the Director of the Cognition & Behaviour research platform and an Associate Investigator at the Institute of Neuroscience, Chinese Academy of Sciences, China. He obtained a first-class degree in Neuroscience at the University of Sussex and a PhD in Visual Neuroscience at University College London (UCL), UK. At UCL he specialised in the neurophysiology of the recurrent circuitry between cortical areas, and between the visual cortex and thalamus, during visual processing of form and motion. He moved to Shanghai to join the Institute of Neuroscience, CAS, China in 2016.

Wei Wang is a Senior Investigator and the Head of the Laboratory of Visual Perceptual Mechanisms at the Institute of Neuroscience, CAS, China. He received his MSc and PhD from the University of Science & Technology of China (USTC), China in 1993 and 1998, respectively. From 1998 to 2006 he was a research fellow and later a senior research fellow at University College London (UCL), the University of Manchester Institute of Science & Technology (UMIST), and the University of Manchester (UM), UK.

Zhaoxiang Zhang is a professor in the Center for Research on Intelligent Perception and Computing, Institute of Automation, CAS, China. He received his bachelor’s degree in Circuits and Systems from USTC, China in 2004. He then pursued his PhD under the supervision of Professor Tieniu Tan in the National Laboratory of Pattern Recognition, CASIA, receiving the degree in 2009. He joined the School of Computer Science and Engineering, Beihang University, China, serving as an Assistant Professor (2009–2011), an Associate Professor (2012–2015), and Vice-Director of the Department of Computer Application Technology (2014–2015). In July 2015, he returned to the Institute of Automation, CAS, China.



About this article

Cite this article

Hao, W., Andolina, I.M., Wang, W. et al. Biologically inspired visual computing: the state of the art. Front. Comput. Sci. 15, 151304 (2021). https://doi.org/10.1007/s11704-020-9001-8

