
Spatiotemporal representations for visual navigation

  • LoongFah Cheong
  • Cornelia Fermüller
  • Yiannis Aloimonos
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1064)

Abstract

The study of visual navigation problems requires the integration of visual processes with motor control. Most essential to this integration is the study of appropriate spatiotemporal representations which the system computes from the imagery and which serve as interfaces to all motor activities. Since representations resulting from exact quantitative reconstruction have turned out to be very hard to obtain, we argue here for the necessity of representations that can be computed easily, reliably, and in real time, and that recover only the information about the 3D world actually needed to solve the navigational problems at hand. In this paper we introduce a number of such representations, capturing aspects of 3D motion and scene structure, which are used for the solution of navigational problems implemented in visual servo systems. In particular, the following three problems are addressed: (a) changing the robot's direction of motion towards a fixed direction, (b) pursuing a moving target while keeping a certain distance from it, and (c) following a wall-like perimeter. The importance of the introduced representations lies in the following:
  • They can be extracted using minimal visual information, in particular the sign of flow measurements or the first-order spatiotemporal derivatives of the image intensity function (see the sketch following this list). In that sense they are direct representations, requiring no intermediate level of computation such as correspondence.

  • They are global, in the sense that three-dimensional information is encoded in them over the whole visual field rather than at isolated points. They are therefore robust representations, since local errors do not affect them.

  • Usually, three-dimensional quantities such as motion and shape are first computed from image sequences and then used as input to control processes. The representations discussed here are fed directly to the control procedures, thus yielding a real-time solution.
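To make the first bullet concrete, here is a minimal sketch, in Python, of how the sign of the normal flow (the flow component along the image gradient) can be read directly off the first-order spatiotemporal derivatives of the image intensity function E(x, y, t). Under the brightness-constancy constraint E_x·u + E_y·v + E_t = 0, the normal flow is u_n = −E_t/|∇E|, so its sign is the sign of −E_t wherever the gradient is strong enough to define a direction. This is an illustrative sketch under those standard assumptions, not the authors' implementation; the function name and the threshold eps are our own choices.

```python
import numpy as np

def normal_flow_sign(prev_frame, next_frame, eps=1e-3):
    """Per-pixel sign of the normal flow from first-order derivatives.

    Brightness constancy gives E_x*u + E_y*v + E_t = 0, so the flow
    component along the gradient is u_n = -E_t / |grad E|; its sign is
    sign(-E_t) wherever the gradient defines a direction.
    """
    E0 = prev_frame.astype(np.float64)
    E1 = next_frame.astype(np.float64)

    # First-order spatial derivatives (rows = y, columns = x)
    # and a forward temporal difference.
    Ey, Ex = np.gradient(E0)
    Et = E1 - E0

    grad_mag = np.hypot(Ex, Ey)
    valid = grad_mag > eps  # gradient strong enough to trust

    sign = np.zeros(E0.shape, dtype=np.int8)
    sign[valid] = np.sign(-Et[valid]).astype(np.int8)
    return sign, valid
```

Note that only the sign pattern over the image is retained: no flow magnitudes and no point correspondences are ever computed, which is what makes such representations direct in the sense used above.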

Keywords

Mobile Platform · Servo System · Forward Translation · Visual Navigation · Inverse Depth



Copyright information

© Springer-Verlag Berlin Heidelberg 1996

Authors and Affiliations

  • LoongFah Cheong¹
  • Cornelia Fermüller¹
  • Yiannis Aloimonos¹

  1. Computer Vision Laboratory, Center for Automation Research, University of Maryland, College Park
