
CAD-Vision-Range-Based Self-Localization for Mobile Robot Using One Landmark


Abstract

Localization is the process of determining a robot's pose within its environment, that is, its current position and heading direction (orientation). It is of utmost importance for the autonomous navigation of a service robot. This paper describes a new localization method for service robots operating inside a building, based on a CAD model of the indoor environment in reasonable detail. Only one specific landmark, pasted within a specific region on a wall, is needed. A camera with pan/tilt/zoom functions mounted on the robot first searches for this identification landmark, and a laser rangefinder then takes measurements of the wall. From the polar coordinates of a few measurement points on the wall and an accurate local CAD model, the exact position and orientation of the robot can be determined. The method has five distinctive advantages. First, the landmark does not need to be positioned precisely. Second, each localization exercise is independent: no history of the robot's trajectory is required, yet the computation remains fast. Third, the method is robust and fault-tolerant because it relies on the Hough transform. Fourth, the resolution adjusts automatically, since the panning resolution of the camera is set from the first effective measurement, which indicates the distance of the robot from the landmark. Fifth, only the local CAD model of the room in the vicinity of the landmark needs high precision, because only this model is used for localization; the system does not demand a highly accurate CAD model of the entire built environment, and CAD models of other places serve navigation and path planning only.
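The core of the abstracted procedure, converting a handful of polar rangefinder readings into Cartesian points, extracting the wall line with a Hough transform, and comparing that line against the local CAD model, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration rather than the authors' implementation: the function names (hough_line_fit, localize_against_wall) and the resolution parameters are assumptions, and it recovers only the heading plus the position component along the wall's normal.

```python
import numpy as np

def hough_line_fit(points, rho_res=0.02, theta_res=np.deg2rad(1.0)):
    """Fit the dominant line to 2-D points by voting in (rho, theta) space.

    points : (N, 2) array of Cartesian (x, y) coordinates in metres,
             converted from the rangefinder's polar readings.
    Returns (rho, theta) of the normal form x*cos(theta) + y*sin(theta) = rho.
    """
    thetas = np.arange(0.0, np.pi, theta_res)
    rho_max = np.max(np.hypot(points[:, 0], points[:, 1])) + rho_res
    rhos = np.arange(-rho_max, rho_max, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=int)

    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)    # rho of this point for every theta
        idx = np.digitize(r, rhos) - 1                  # bin each rho value
        acc[idx, np.arange(len(thetas))] += 1           # one vote per theta column

    i, j = np.unravel_index(np.argmax(acc), acc.shape)  # strongest cell wins
    return rhos[i], thetas[j]


def localize_against_wall(scan_polar, wall_rho_cad, wall_theta_cad):
    """Recover heading and the position component along the wall normal.

    scan_polar     : iterable of (range, bearing) wall readings in the robot frame.
    wall_rho_cad,
    wall_theta_cad : normal form of the same wall taken from the local CAD model,
                     expressed in the world (CAD) frame.
    """
    pts = np.array([(r * np.cos(b), r * np.sin(b)) for r, b in scan_polar])
    rho_robot, theta_robot = hough_line_fit(pts)        # wall as seen by the robot

    # Rotating the robot frame into the world frame rotates the wall normal by
    # the robot's heading, and translating it shifts the normal distance by the
    # projection of the robot's position onto that normal (sign/pi ambiguities
    # of the normal form are ignored here for brevity).
    heading = wall_theta_cad - theta_robot
    position_along_normal = wall_rho_cad - rho_robot
    return heading, position_along_normal
```

A single wall line constrains only the heading and one translational coordinate; the full (x, y, heading) estimate described in the abstract presumably comes from matching further measured points against the detailed local CAD geometry, for example adjacent walls or corners near the landmark.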




Cite this article

Li, X.J., So, A.T.P. & Tso, S.K. CAD-Vision-Range-Based Self-Localization for Mobile Robot Using One Landmark. Journal of Intelligent and Robotic Systems 35, 61–81 (2002). https://doi.org/10.1023/A:1020240026070
