
Much of early vision research drew inspiration from what was then known about biological vision. Progress was slow, however, and over time machine learning combined with features selected on more pragmatic grounds took over. Increasingly impressive results seem to justify that approach. The advent of affordable depth cameras further moved the field (certainly in robotics) away from biological considerations: why bother with how the human brain arrives at 3D scene interpretations when 3D data is readily available? Not all problems simply vanish, however, by throwing novel sensors and heavy machine learning at them. 3D sensors really only give 2.5D data: the back sides of objects, as well as gaps caused by sensor artefacts stemming from physical limitations, still need to be filled in. And how meaning can be attached to visual percepts, 2D or 3D, cannot simply be explained by learning from large hand-labelled databases.

So there is still a lot to learn from biological vision systems. How can a sufficiently clear (whatever that means in detail) interpretation of a scene be arrived at from several patchy cues? How can vision be tightly coupled to other aspects of a cognitive system? What is the right level of abstraction for representations? In this issue we present current work on bio-inspired vision systems and explore the possibilities offered by new findings about biological vision as well as the latest developments in machine vision.

The survey article by Krüger et al. starts with a short history of biologically motivated methods in computer vision and lists several open problems in current computer vision research. This is contrasted with key findings from the primate’s visual system, and the article goes on to argue for rethinking the potential impact of biologically motivated methods in the light of these new findings.

One of these findings is the importance of shared intermediate-level representations. Rodríguez-Sánchez et al. discuss in their article the role such intermediate-level representations play in achieving higher-level object abstraction and present recent developments in the neural computational modeling of intermediate-level shape processing. Another important finding is that biological vision seems to maintain several, possibly conflicting (think: Necker cube), interpretations of a visual scene, avoiding early commitment to a single solution. This is explored in the article by Sabatini in the context of stereopsis and active vergence control.

The primate’s visual system does not come completely pre-programmed, but partly matures during the individual’s development. The article by Atil and Kalkan argues for such a developmental view of a vision system within a cognitive agent. Learning in a very specific application context is presented in the article by Rudzits and Pugeault, where an agent learns autonomous driving from pre-attentive visual cues, i.e. scene gist, in a weakly supervised student-teacher framework.

Primate vision is also known to make heavy use of attentional mechanisms. In their article, García et al. present results of the DFG-funded project “Situated Vision to Perceive Object Shape and Affordances” related to attentional scene exploration and object discovery in 3D data. Models of how an agent actively moving around a three-dimensional world updates its spatial references are explored in the European research project “Spatial Cognition”, presented in the article by Hamker. Staying in 3D, the dissertation summary of Richtsfeld et al. shows how well-known principles of perceptual grouping in 2D can be applied to 3D input data to segment object candidates from cluttered tabletop scenes.

Finally the interview with Prof. Christoph von der Malsburg again stresses the importance of revisiting and rethinking “old” ideas to achieve a paradigm shift in modern vision research and tackle fundamental problems.

Biologically inspired vision is a research area drawing from two already large research fields, so a single journal issue can only provide a small glimpse. We hope nonetheless that you enjoy our collection of articles and that we can kindle your interest in further exploration of this fascinating topic.

Michael Zillich and Norbert Krüger

1 Content

1.1 Technical Contributions

  • Krüger, Zillich, Janssen, Buch: What we can learn from the primate’s visual system

  • Rodríguez-Sánchez, Neumann, Piater: Beyond simple and complex neurons

  • Sabatini: Deep Representation Hierarchies for 3D Active Vision

  • Atil, Kalkan: Towards an Embodied Developing Vision System

  • Rudzits, Pugeault: Efficient Learning of Pre-attentive Steering in a Driving School Framework

1.2 Research Projects

  • García, Werner, Frintrop: Attentional Scene-Exploration and Object Discovery in Image and RGB-D Data

  • Hamker: Spatial Cognition of Humans and brain-inspired artificial Agents

1.3 Interview

  • Krüger: Interview with Prof. Christoph von der Malsburg

1.4 Dissertation Summary

  • Richtsfeld, Zillich, Vincze: Object Detection for Robotic Applications Using Perceptual Organization in 3D

2 Service

2.1 Conferences

2.2 Workshops

2.3 Organizations

2.4 Journals

  • International Journal of Computer Vision

  • IEEE Transactions on Pattern Analysis and Machine Intelligence

  • Journal of Vision

  • Vision Research

  • Computer Vision and Image Understanding

2.5 Books

  • Brian A. Wandell: Foundations of Vision, Sinauer Associates Inc, 1995

  • Irvin Rock, Stephen E. Palmer: Indirect Perception, A Bradford Book, 1997

  • Stephen E. Palmer: Vision Science: Photons to Phenomenology, MIT Press, 1999

  • Zygmunt Pizlo, Yunfeng Li, Tadamasa Sawada, and Robert M. Steinman: Making a Machine That Sees Like Us, Oxford University Press, 2014

2.6 Summer Schools