
New Visual Invariants for Terrain Navigation Without 3D Reconstruction

  • Published in: International Journal of Computer Vision

Abstract

For autonomous vehicles to achieve terrain navigation, obstacles must be discriminated from terrain before any path planning or obstacle avoidance is undertaken. In this paper, we develop a novel approach to obstacle detection. The method finds obstacles in the 2D image space, as opposed to 3D reconstructed space, using optical flow. Our method assumes that both obstacle-free terrain regions and regions containing obstacles are visible in the imagery; the goal is therefore to discriminate between terrain regions with obstacles and terrain regions without them. The method uses new visual linear invariants based on optical flow. Exploiting the linear invariance property, obstacles can be detected directly using reference flow lines obtained from measured optical flow. The main features of this approach are: (1) 2D visual information (i.e., optical flow) is used directly to detect obstacles; no range, 3D motion, or 3D scene geometry is recovered; (2) knowledge of the camera-to-ground coordinate transformation is not required; (3) knowledge of vehicle (or camera) motion is not required; (4) the method is valid for a vehicle (or camera) undergoing general six-degree-of-freedom motion; (5) the error sources involved are reduced to a minimum, because the only information required is one component of optical flow. Numerous experiments using both synthetic and real image data are presented, and the method is demonstrated in both ground and air vehicle scenarios.
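The core idea of the abstract can be illustrated with a minimal sketch. This is not the authors' formulation: the flow values are synthetic, the function names and the deviation threshold are invented for illustration, and an ordinary least-squares fit stands in for however the reference flow lines are obtained in the paper. The sketch only shows the shape of the argument: over planar terrain one flow component varies linearly along an image scan line, so pixels whose measured flow deviates from a fitted reference line can be flagged as obstacles directly in 2D, with no range or 3D motion recovered.

```python
def fit_reference_line(ys, flows):
    """Least-squares fit flow = a*y + b along one image scan line."""
    n = len(ys)
    my = sum(ys) / n
    mf = sum(flows) / n
    num = sum((y - my) * (f - mf) for y, f in zip(ys, flows))
    den = sum((y - my) ** 2 for y in ys)
    a = num / den
    b = mf - a * my
    return a, b

def detect_obstacles(ys, flows, threshold=0.5):
    """Flag pixels whose measured flow deviates from the reference flow line."""
    a, b = fit_reference_line(ys, flows)
    return [y for y, f in zip(ys, flows) if abs(f - (a * y + b)) > threshold]

# Synthetic scan line: planar terrain gives flow = 0.1*y + 2.0; an object
# protruding from the terrain plane at rows 40-45 perturbs the flow there.
ys = list(range(100))
flows = [0.1 * y + 2.0 for y in ys]
for y in range(40, 46):
    flows[y] += 3.0  # obstacle: flow departs from the planar prediction

print(detect_obstacles(ys, flows))  # prints [40, 41, 42, 43, 44, 45]
```

Note that only a single flow component along each scan line is used, mirroring feature (5) of the abstract; a real implementation would also need a flow estimator robust enough that the fitted reference line is not dominated by the obstacle pixels themselves.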




Cite this article

Young, GS., Herman, M., Hong, TH. et al. New Visual Invariants for Terrain Navigation Without 3D Reconstruction. International Journal of Computer Vision 28, 45–71 (1998). https://doi.org/10.1023/A:1008002714698
