
International Journal of Computer Vision, Volume 33, Issue 2, pp. 117–137

Visual Homing: Surfing on the Epipoles

  • Ronen Basri
  • Ehud Rivlin
  • Ilan Shimshoni

Abstract

We introduce a novel method for visual homing. Using this method, a robot can be sent to desired positions and orientations in 3D space that are specified by single images taken from those positions. Our method is based on recovering the epipolar geometry relating the current image taken by the robot to the target image. From the epipolar geometry, most of the parameters that specify the difference in position and orientation of the camera between the two images are recovered. However, since not all of the parameters can be recovered from two images, we have developed specific methods to bypass the missing parameters and resolve the remaining ambiguities. We present two homing algorithms, one for each of two standard projection models: weak and full perspective.
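
As a concrete illustration of the full-perspective step, the fragment below is a minimal sketch using OpenCV (not the authors' implementation): given matched points between the current and the target image and an assumed intrinsics matrix K, it recovers the rotation and the translation direction between the two views. The quantity two views cannot supply is exactly the magnitude of the translation, which is why only a unit direction is returned.

```python
import numpy as np
import cv2

def relative_pose(pts_cur, pts_tgt, K):
    """Pose of the target view relative to the current view.

    pts_cur, pts_tgt -- Nx2 arrays of matched image points (assumed given).
    K                -- 3x3 camera intrinsics matrix (assumed known).

    Two views determine the rotation R but only the *direction* of the
    translation t (||t|| = 1); its magnitude is precisely the missing
    parameter that a homing strategy must work around.
    """
    pts_cur = np.asarray(pts_cur, dtype=np.float64)
    pts_tgt = np.asarray(pts_tgt, dtype=np.float64)
    # Robustly estimate the essential matrix from the correspondences.
    E, inliers = cv2.findEssentialMat(pts_cur, pts_tgt, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # Decompose E into R, t; the cheirality check inside recoverPose
    # selects the physically valid one of the four decompositions.
    _, R, t, _ = cv2.recoverPose(E, pts_cur, pts_tgt, K, mask=inliers)
    return R, t
```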

Our method determines the path of the robot on-line, places few constraints on the robot's starting position, and requires no 3D model of the environment. The method is almost entirely memoryless, in the sense that at every step the path to the target position is determined independently of the previous path taken by the robot. Because of this property, the robot can perform auxiliary tasks or avoid obstacles while moving toward the target without impairing its ability to eventually reach the target position. We have performed simulations and real experiments that demonstrate the robustness of the method and show that the algorithms always converge to the target pose.
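
The memoryless property can be made concrete as a control loop: every iteration re-estimates the pose difference from the current image alone and issues a small motion toward the target. The sketch below is schematic and rests on stated assumptions: `capture_image`, `match_features`, and `move` are hypothetical sensing/actuation callbacks, not part of the paper, and `relative_pose` is the estimator from the previous sketch.

```python
import numpy as np
import cv2

def home_to_target(target_image, K, capture_image, match_features, move,
                   step=0.05, done_px=2.0):
    """Schematic memoryless homing loop (hypothetical interfaces).

    Each iteration starts from scratch, so detours taken between
    iterations leave no stale state behind.
    """
    while True:
        current = capture_image()
        pts_cur, pts_tgt = match_features(current, target_image)
        # Stop when the current and target images (nearly) coincide.
        if np.linalg.norm(pts_cur - pts_tgt, axis=1).mean() < done_px:
            return
        R, t = relative_pose(pts_cur, pts_tgt, K)
        rvec, _ = cv2.Rodrigues(R)  # residual rotation in axis-angle form
        # Only the direction of t is known, so advance a fixed small step
        # along it and correct part of the rotation; everything is
        # re-estimated from the next image alone.
        move(translation=step * t.ravel(), rotation=0.5 * rvec.ravel())
```

Because no odometry or path history enters the loop, an obstacle-avoidance routine can preempt `move` at any iteration without affecting eventual convergence.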

Keywords: visual navigation, camera motion computation, robot navigation, visual servoing



Copyright information

© Kluwer Academic Publishers 1999

Authors and Affiliations

  • Ronen Basri (1)
  • Ehud Rivlin (2)
  • Ilan Shimshoni (3)

  1. Department of Applied Math, The Weizmann Institute of Science, Rehovot, Israel
  2. Department of Computer Science, The Technion, Haifa, Israel
  3. Department of Industrial Engineering and Management, The Technion, Haifa, Israel
