Spatiotemporal representations for visual navigation
They can be extracted from minimal visual information, in particular the sign of flow measurements or the first-order spatiotemporal derivatives of the image intensity function. In that sense they are direct representations, requiring no intermediate level of computation such as correspondence.
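The first-order spatiotemporal derivatives mentioned above suffice to recover the sign of the normal flow, the component of image motion along the intensity gradient. A minimal sketch of this computation, assuming two consecutive grayscale frames and an illustrative gradient-reliability threshold of my own choosing:

```python
import numpy as np

def normal_flow_sign(frame0, frame1, grad_thresh=0.5):
    """Sign of the normal flow from two consecutive grayscale frames,
    using only first-order spatiotemporal derivatives (a sketch; the
    function name and threshold are assumptions, not from the paper)."""
    I0 = frame0.astype(float)
    I1 = frame1.astype(float)
    # First-order spatial derivatives via finite differences
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    # Temporal derivative: simple frame difference
    It = I1 - I0
    # Normal flow along the gradient direction: u_n = -I_t / |grad I|
    grad_mag = np.hypot(Ix, Iy)
    u_n = -It / (grad_mag + 1e-6)
    # Keep the sign only where the gradient is strong enough to trust
    valid = grad_mag > grad_thresh
    return np.sign(u_n) * valid
```

For a pattern translating rightward, the returned sign is positive wherever the gradient is reliable; no flow vectors or correspondences are ever computed, which is the sense in which the representation is "direct."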
They are global in the sense that they encode three-dimensional information over the whole visual field rather than at individual image points. They are therefore robust representations, since local errors do not corrupt them.
Usually, three-dimensional quantities such as motion and shape are computed from sequences of images and then used as input to control processes. The representations discussed here are instead given directly as input to the control procedures, resulting in a real-time solution.
Keywords: Mobile Platform, Servo System, Forward Translation, Visual Navigation, Inverse Depth