ECCV 1998: Computer Vision — ECCV'98, pp. 796–808

Optimal robot self-localization and reliability evaluation

  • Kenichi Kanatani
  • Naoya Ohta
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1407)

Abstract

We discuss optimal estimation of the current location of a robot by matching an image of the scene taken by the robot against a model of the environment. We first derive a theoretical accuracy bound and then give a method that attains it; the method can be viewed as describing the probability distribution of the current location. Using real images, we demonstrate that our method is superior to the naive least-squares method, and we confirm the theoretical predictions by applying the bootstrap procedure.
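The bootstrap procedure mentioned above resamples the observed data with replacement and re-runs the estimator on each resample, so that the spread of the resampled estimates serves as an empirical measure of the estimator's uncertainty. The following is a minimal sketch of that idea only, not the paper's actual algorithm: it assumes a hypothetical 2D localization problem with made-up landmark positions and noise levels, and uses a plain least-squares position estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D setup: known landmark positions in the environment
# model, and noisy measurements of each landmark's displacement
# relative to the (unknown) robot position.
landmarks = np.array([[0.0, 5.0], [4.0, 3.0], [-3.0, 2.0], [1.0, -4.0]])
true_pos = np.array([1.0, 1.0])
measured = (landmarks - true_pos) + rng.normal(scale=0.05, size=landmarks.shape)

def estimate(idx):
    """Least-squares position estimate from the observations indexed by idx."""
    return (landmarks[idx] - measured[idx]).mean(axis=0)

n = len(landmarks)
point = estimate(np.arange(n))  # estimate from all observations

# Bootstrap: resample the observation indices with replacement,
# re-estimate, and summarize the scatter of the resampled estimates
# by a 2x2 empirical covariance matrix.
B = 1000
boot = np.array([estimate(rng.integers(0, n, n)) for _ in range(B)])
cov = np.cov(boot.T)
```

In the paper's setting, such an empirical covariance is compared against the theoretically predicted accuracy bound to confirm that the optimal estimator indeed attains it.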

Keywords

Mobile Robot · Feature Point · Motion Parameter · Current Location · Real Image



Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Kenichi Kanatani (1)
  • Naoya Ohta (1)
  1. Department of Computer Science, Gunma University, Gunma, Japan
