
Reactive Visual Navigation Based on Omnidirectional Sensing – Path Following and Collision Avoidance

Published in: Journal of Intelligent and Robotic Systems

Abstract

Described here is a visual navigation method for guiding a mobile robot along a man-made route such as a corridor or a street. We have proposed an image sensor, named HyperOmni Vision, with a hyperboloidal mirror for vision-based navigation of the mobile robot. This sensing system acquires an omnidirectional view around the robot in real time. On a man-made route, the boundaries between the ground plane and the walls appear as a closed curve in the image. By exploiting this optical characteristic, the robot can avoid obstacles and move along the corridor by tracking this closed curve with an active contour model. Experiments conducted in a real environment are described.
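The curve tracking mentioned above is based on the active contour ("snake") model of Kass et al. As a rough illustration only, and not the authors' implementation, the following Python sketch shows one greedy update step of a closed snake: each contour point moves to the neighbouring pixel that minimises a weighted sum of continuity, curvature, and (negative) edge-strength energies. The function name `snake_step` and the weights `alpha`, `beta`, `gamma` are hypothetical names introduced here for illustration.

```python
import numpy as np

def snake_step(contour, edge_map, alpha=1.0, beta=1.0, gamma=1.0, window=1):
    """One greedy update of a closed active contour (snake).

    contour:  (N, 2) integer array of (row, col) points on a closed curve,
              e.g. an estimate of the ground/wall boundary in the image.
    edge_map: 2-D array; larger values mark stronger edges.
    Each point moves to the neighbour (within +/- window pixels) that
    minimises  alpha * continuity + beta * curvature - gamma * edge strength.
    """
    n = len(contour)
    # Mean spacing between consecutive points (closed curve, so wrap around).
    mean_d = np.mean(np.linalg.norm(
        np.diff(contour, axis=0, append=contour[:1]), axis=1))
    new = contour.copy()
    for i in range(n):
        prev_pt = new[(i - 1) % n]       # already-updated predecessor
        next_pt = contour[(i + 1) % n]   # not-yet-updated successor
        best, best_e = contour[i], np.inf
        for dr in range(-window, window + 1):
            for dc in range(-window, window + 1):
                cand = contour[i] + np.array([dr, dc])
                r, c = cand
                if not (0 <= r < edge_map.shape[0] and 0 <= c < edge_map.shape[1]):
                    continue
                # Continuity: keep points evenly spaced along the curve.
                e_cont = (np.linalg.norm(cand - prev_pt) - mean_d) ** 2
                # Curvature: penalise sharp bends (second difference).
                e_curv = np.sum((prev_pt - 2 * cand + next_pt) ** 2)
                # Edge attraction: strong edges lower the energy.
                e = alpha * e_cont + beta * e_curv - gamma * edge_map[int(r), int(c)]
                if e < best_e:
                    best_e, best = e, cand
        new[i] = best
    return new
```

In practice the step would be iterated until the contour stops moving, with the edge map recomputed from each new omnidirectional frame; this sketch omits those details.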




Cite this article

Yagi, Y., Nagai, H., Yamazawa, K. et al. Reactive Visual Navigation Based on Omnidirectional Sensing – Path Following and Collision Avoidance. Journal of Intelligent and Robotic Systems 31, 379–395 (2001). https://doi.org/10.1023/A:1012047708277
