Robot Navigation for Automatic Model Construction using Safe Regions

  • Héctor González-Baños
  • Jean-Claude Latombe
Conference paper
Part of the Lecture Notes in Control and Information Sciences book series (LNCIS, volume 271)

Abstract

Automatic model construction is a core problem in mobile robotics. To solve this task efficiently, we need a motion strategy that guides a robot equipped with a range sensor through a sequence of “good” observations. Such a strategy is generated by an algorithm that repeatedly computes the location where the robot should perform the next sensing operation; this is known as the next-best-view problem. In practice, however, several other considerations must be taken into account, of which two stand out as decisive. One is the problem of navigating safely given incomplete knowledge of the robot’s surroundings. The other is guaranteeing the alignment of multiple views, a problem closely related to robot self-localization. The concept of a safe region proposed in this paper makes it possible to address both problems simultaneously.
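The sense-then-move loop described above can be sketched in a few lines. The following is a minimal illustration, not the authors' algorithm: it uses a 2D occupancy grid, an idealized square-footprint sensor, and a greedy scoring rule (count of unknown cells a candidate view would reveal) as stand-ins for the paper's range sensor, safe regions, and next-best-view computation. All names and the scoring rule are hypothetical.

```python
# Toy next-best-view loop on a 2D occupancy grid (illustrative only).
# "Safe" navigation is approximated by restricting candidate views to
# cells already observed as FREE; the real paper uses geometric safe
# regions computed from range data.
UNKNOWN, FREE = 0, 1

def sense(grid, pos, rng):
    """Mark all cells within Chebyshev distance `rng` of `pos` as FREE."""
    r, c = pos
    for i in range(max(0, r - rng), min(len(grid), r + rng + 1)):
        for j in range(max(0, c - rng), min(len(grid[0]), c + rng + 1)):
            grid[i][j] = FREE

def score(grid, pos, rng):
    """Count the UNKNOWN cells that sensing at `pos` would reveal."""
    r, c = pos
    return sum(
        grid[i][j] == UNKNOWN
        for i in range(max(0, r - rng), min(len(grid), r + rng + 1))
        for j in range(max(0, c - rng), min(len(grid[0]), c + rng + 1))
    )

def next_best_view(grid, rng):
    """Greedily pick the known-FREE (safely reachable) cell with max gain."""
    candidates = [(i, j) for i in range(len(grid))
                  for j in range(len(grid[0])) if grid[i][j] == FREE]
    return max(candidates, key=lambda p: score(grid, p, rng))

def explore(rows, cols, start, rng):
    """Alternate sensing and next-best-view selection until fully mapped."""
    grid = [[UNKNOWN] * cols for _ in range(rows)]
    pos, views = start, [start]
    while True:
        sense(grid, pos, rng)
        if not any(UNKNOWN in row for row in grid):
            return views
        pos = next_best_view(grid, rng)
        views.append(pos)
```

Because each selected view lies inside the already-observed free space, the robot never commits to traversing terrain it has not sensed, which is the intuition behind planning within a safe region.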



Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Héctor González-Baños¹
  • Jean-Claude Latombe¹

  1. Department of Computer Science, Stanford University, Stanford, USA
