
Bimodal Active Stereo Vision

  • Andrew Dankers
  • Nick Barnes
  • Alex Zelinsky
Part of the Springer Tracts in Advanced Robotics book series (STAR, volume 25)

Summary

We present a biologically inspired active vision system that incorporates two modes of perception. A peripheral mode provides a broad, coarse perception of where mass lies in the scene around the current fixation point, and how that mass is moving; it fuses actively acquired depth data into a 3D occupancy grid. A foveal mode then maintains coordinated stereo fixation upon mass/objects in the scene and extracts the fixated mass/object using a maximum a posteriori (MAP) probability zero disparity filter. Foveal processing is restricted to the vicinity of the camera optical centres. Results are presented for each mode individually and for both modes operating in parallel. The regime runs at approximately 15 Hz on a 3 GHz single-processor PC.

Keywords

Active Stereo Vision, Road-scene, Fovea, Periphery



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Andrew Dankers (1, 2)
  • Nick Barnes (1, 2)
  • Alex Zelinsky (3)

  1. National ICT Australia, Canberra, Australia
  2. Australian National University, Acton, Australia
  3. CSIRO ICT Centre, Canberra, Australia
