Volume 46, Issue 2, pp 121-138
Date: 30 Dec 2005

Analog Integrated 2-D Optical Flow Sensor


Abstract

I present a new focal-plane analog very-large-scale-integrated (aVLSI) sensor that estimates optical flow in two visual dimensions. Its computational architecture consists of a two-layer network of locally connected motion units that collectively estimate the optimal optical flow field. The applied gradient-based optical flow model assumes visual motion to be translational and smooth, and is formulated as a convex optimization problem. The model also guarantees that the estimation problem is well-posed regardless of the visual input by imposing a bias towards a preferred motion under ambiguous or noisy visual conditions. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. The non-linear network conductances improve the resulting optical flow estimate because they reduce spatial smoothing across large velocity differences and minimize the bias for reliable stimuli. Extended characterization and recorded optical flow fields from a 30 × 30 array prototype sensor demonstrate the validity of the optical flow model and the robustness and functionality of the computational architecture and its implementation.
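The abstract describes a gradient-based optical flow model formulated as a convex optimization problem, combining a brightness-constancy data term, a smoothness prior, and a bias toward a preferred motion that keeps the problem well-posed under ambiguous input. A minimal software sketch of such an energy and its gradient-descent solution is shown below; the function name, parameters (`alpha` for smoothness strength, `beta` for bias strength), and the plain gradient-descent scheme are illustrative assumptions, not the sensor's actual network implementation.

```python
import numpy as np

def optical_flow(I1, I2, alpha=1.0, beta=0.01, u0=0.0, v0=0.0,
                 n_iter=500, step=0.05):
    """Estimate a dense flow field (u, v) between frames I1 and I2 by
    gradient descent on a Horn-Schunck-style convex energy with an
    added bias term pulling the estimate toward (u0, v0).

    Hypothetical sketch: parameter names and the descent scheme are
    assumptions for illustration, not the chip's resistive network.
    """
    # spatial and temporal brightness derivatives
    Ex = np.gradient(I1, axis=1)
    Ey = np.gradient(I1, axis=0)
    Et = I2 - I1
    # initialize at the preferred motion (the bias point)
    u = np.full_like(I1, u0, dtype=float)
    v = np.full_like(I1, v0, dtype=float)
    for _ in range(n_iter):
        # residual of the brightness-constancy constraint
        r = Ex * u + Ey * v + Et
        # discrete Laplacian implements the smoothness prior
        lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                 + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
        # gradient of the energy: data term + smoothness + bias
        u -= step * (Ex * r - alpha * lap_u + beta * (u - u0))
        v -= step * (Ey * r - alpha * lap_v + beta * (v - v0))
    return u, v
```

Because `beta > 0` makes the energy strictly convex, the estimate stays well-defined even where the image gradient vanishes, mirroring the well-posedness property the abstract attributes to the bias term; raising `alpha` smooths the field toward a single global motion, matching the described spectrum from normal to global flow.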

Alan A. Stocker received the MS degree in Material Sciences and Biomedical Engineering and the PhD degree in Physics from the Swiss Federal Institute of Technology in Zürich, Switzerland, in 1995 and 2001, respectively. In 1995, he was with the Institute for Biomedical Engineering at the Helsinki University of Technology, Finland, where he completed his master's thesis. At the end of 1995, he joined the Institute of Neuroinformatics in Zürich, where he graduated with a dissertation on the analysis and design of analog network architectures for visual motion perception. Since 2003, he has been a postdoctoral fellow at the Center for Neural Sciences of New York University. His research interests are in computational neuroscience, with a focus on models of the primate visual motion system. He is also interested in the analysis and development of neurally inspired, computationally efficient hardware models of visual perception and their applications in robotics.