Bio-inspired Motion-Based Object Segmentation

  • Sonia Mota
  • Eduardo Ros
  • Javier Díaz
  • Rodrigo Agis
  • Francisco de Toro
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4141)


Although motion extraction demands high computational resources and typically produces very noisy patterns in real sequences, it provides useful cues for efficiently segmenting independently moving objects. Our goal is to apply basic knowledge about biological vision systems to this problem. We use Reichardt motion detectors as the first extraction primitive to characterize the motion in the scene. Because the resulting saliency map is noisy, we employ a neural structure that takes full advantage of neural population coding and extracts the structure of motion by means of local competition. This scheme is used to segment independent moving objects efficiently. To evaluate the model, we apply it to a real-life case: an automatic watch-out system for car-overtaking situations seen from the rear-view mirror. We describe how a simple, competitive, neural processing scheme can take full advantage of this motion structure to segment overtaking cars.
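The elementary Reichardt detector mentioned above correlates the delayed signal from one photoreceptor with the undelayed signal from its neighbour, and subtracts the mirror-symmetric term to obtain a signed, direction-selective response. The following is a minimal sketch of that correlation scheme, not the authors' implementation: the one-frame delay, the 1-D pixel row, and the drifting-bar stimulus are illustrative assumptions.

```python
import numpy as np

def reichardt_detector(frames, delay=1):
    """Elementary Reichardt (correlation) detector along the x axis.

    frames: array of shape (T, W) -- one image row sampled over time.
    Each detector multiplies the delayed signal at pixel x with the
    undelayed signal at pixel x+1, then subtracts the mirror-symmetric
    subunit, so rightward motion yields positive responses and
    leftward motion negative ones.
    """
    frames = np.asarray(frames, dtype=float)
    a = frames[:-delay, :-1]   # subunit 1: pixel x, delayed (earlier frame)
    b = frames[delay:, 1:]     # subunit 1: pixel x+1, undelayed (later frame)
    c = frames[:-delay, 1:]    # mirror subunit: pixel x+1, delayed
    d = frames[delay:, :-1]    # mirror subunit: pixel x, undelayed
    return a * b - c * d

# Illustrative stimulus: a bright bar drifting rightward one pixel per frame.
T, W = 8, 16
frames = np.zeros((T, W))
for t in range(T):
    frames[t, t + 2] = 1.0

response = reichardt_detector(frames)
print(response.sum())  # net positive -> rightward motion detected
```

In the full system, a retinotopic array of such detectors produces the noisy saliency map that the competitive neural layer then cleans up.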


Keywords: Receptive Field, Motion Detection, Rigid Body Motion, Intelligent Vehicle, Velocity Channel
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Sonia Mota (1)
  • Eduardo Ros (2)
  • Javier Díaz (2)
  • Rodrigo Agis (2)
  • Francisco de Toro (3)
  1. Departamento de Informática y Análisis Numérico, Universidad de Córdoba, Córdoba, Spain
  2. Departamento de Arquitectura y Tecnología de Computadores, Universidad de Granada, Granada, Spain
  3. Departamento de Teoría de la Señal, Telemática y Comunicaciones, Universidad de Granada, Granada, Spain