A Neural Network-Based Camera Calibration Method for Mobile Robot Localization Problems

  • Anmin Zou
  • Zengguang Hou
  • Lejie Zhang
  • Min Tan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3498)

Abstract

To navigate reliably in indoor environments, a mobile robot has to know where it is. Methods for pose (position and orientation) estimation can be roughly divided into two classes: methods for keeping track of the robot's pose and methods for global pose estimation [1]. In this paper, a neural network-based camera calibration method is presented for the global localization of mobile robots with monocular vision. In order to localize and navigate the robot using vision information, the camera first has to be calibrated. We calibrate the camera using a neural network-based method, which simplifies the tedious calibration process and does not require specialized knowledge of 3D geometry or computer vision. Monocular vision is used to initialize and recalibrate the robot's pose, and an extended Kalman filter is adopted to keep track of the pose.
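To make the calibration idea concrete, the following is a minimal sketch (not the authors' implementation) of a small feed-forward network that learns the mapping from image pixel coordinates (u, v) to ground-plane world coordinates (X, Y) from known point correspondences. The network size, learning rate, and the toy projection used to generate training pairs here are illustrative assumptions; in practice the pairs would come from a measured calibration pattern viewed by the robot's camera.

    # Minimal sketch of neural network-based camera calibration: a small MLP maps
    # image pixel coordinates (u, v) to ground-plane world coordinates (X, Y).
    # Layer sizes, learning rate, and the synthetic "camera" below are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic correspondences standing in for measured calibration points.
    world = rng.uniform(-2.0, 2.0, size=(200, 2))          # (X, Y) in metres
    u = 320 + 250 * world[:, 0] / (world[:, 1] + 4.0)      # toy projection for u
    v = 240 + 180 / (world[:, 1] + 4.0)                    # toy projection for v
    image = np.stack([u, v], axis=1)

    # Normalise inputs and outputs so the tanh hidden layer trains stably.
    img_mean, img_std = image.mean(0), image.std(0)
    wld_mean, wld_std = world.mean(0), world.std(0)
    X_in = (image - img_mean) / img_std
    Y_out = (world - wld_mean) / wld_std

    # Two-layer MLP: 2 inputs -> 16 tanh units -> 2 linear outputs.
    H = 16
    W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)
    lr = 0.05

    for epoch in range(3000):
        h = np.tanh(X_in @ W1 + b1)          # hidden activations
        pred = h @ W2 + b2                   # predicted (normalised) world coords
        err = pred - Y_out                   # residual
        # Backpropagation of the mean-squared error, full-batch gradient descent.
        grad_W2 = h.T @ err / len(X_in)
        grad_b2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)
        grad_W1 = X_in.T @ dh / len(X_in)
        grad_b1 = dh.mean(0)
        W1 -= lr * grad_W1; b1 -= lr * grad_b1
        W2 -= lr * grad_W2; b2 -= lr * grad_b2

    def pixel_to_world(uv):
        """Map a pixel coordinate to ground-plane (X, Y) with the trained MLP."""
        x = (np.asarray(uv) - img_mean) / img_std
        y = np.tanh(x @ W1 + b1) @ W2 + b2
        return y * wld_std + wld_mean

    # Prints the recovered position of the first calibration point next to its
    # ground truth as a quick sanity check.
    print(pixel_to_world(image[0]), "vs true", world[0])

Once trained, a routine like pixel_to_world converts observed landmark pixels into ground-plane coordinates, which could then serve as measurement input to the extended Kalman filter that tracks the robot's pose.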

Keywords

Mobile Robot · Extended Kalman Filter · Camera Calibration · Monocular Vision · Mobile Robot Localization

References

  1. Borenstein, J., Everett, H.R., Feng, L.: Navigating Mobile Robots: Systems and Techniques. A.K. Peters, Ltd., Wellesley (1996)
  2. Gutmann, J.-S., Burgard, W., Fox, D., Konolige, K.: An Experimental Comparison of Localization Methods. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 736–743 (1998)
  3. Burgard, W., Derr, A., Fox, D., Cremers, A.B.: Integrating Global Position Estimation and Position Tracking for Mobile Robots: the Dynamic Markov Localization Approach. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 2, pp. 730–735 (1998)
  4. Dellaert, F., Fox, D., Burgard, W., Thrun, S.: Monte Carlo Localization for Mobile Robots. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 1322–1328 (1999)
  5. Gutmann, J.-S., Fox, D.: An Experimental Comparison of Localization Methods Continued. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 454–459 (2002)
  6. Hu, H.S., Gu, D.B.: Landmark-Based Navigation of Mobile Robot in Manufacturing. In: Proceedings of the IEEE International Conference on Emerging Technologies and Factory Automation, pp. 114–121 (1999)
  7. Sugihara, K.: Some Location Problems for Robot Navigation Using a Single Camera. Computer Vision, Graphics, and Image Processing 42, 112–129 (1988)
  8. Krotkov, E.: Mobile Robot Localization Using a Single Image. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 2, pp. 978–983 (1989)
  9. Munoz, A.J., Gonzalez, J.: Two-Dimensional Landmark-Based Position Estimation from a Single Image. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3709–3714 (1998)
  10. Feddema, J.T., Lee, C.S.G., Mitchell, O.R.: Weighted Selection of Image Features for Resolved Rate Visual Feedback Control. IEEE Transactions on Robotics and Automation 7, 31–47 (1991)
  11. Georgiev, A., Allen, P.K.: Vision for Mobile Robot Localization in Urban Environments. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, pp. 472–477 (2002)
  12. Meng, M., Kak, A.C.: Fast Vision-Guided Mobile Robot Navigation Using Neural Networks. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 1, pp. 111–116 (1992)
  13. Nayar, S.K., Murase, H., Nene, S.A.: Learning, Positioning, and Tracking Visual Appearance. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 4, pp. 3237–3244 (1994)
  14. Matsumoto, Y., Inaba, M., Inoue, H.: Visual Navigation Using View-Sequenced Route Representation. In: Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1, pp. 83–88 (1996)
  15. Haykin, S.: Neural Networks: A Comprehensive Foundation, 2nd edn. Prentice Hall, Upper Saddle River (1999)
  16. Ahmed, M., Farag, A.: Locked, Unlocked and Semi-locked Network Weights for Four Different Camera Calibration Problems. In: Proceedings of the International Joint Conference on Neural Networks, vol. 4, pp. 2826–2831 (2001)
  17. Jun, J., Kim, C.: Robust Camera Calibration Using Neural Network. In: Proceedings of the 1999 IEEE Region 10 Conference (TENCON), vol. 1, pp. 694–697 (1999)
  18. Lippmann, R.P.: An Introduction to Computing with Neural Nets. IEEE ASSP Magazine 4, 4–22 (1987)
  19. Simpson, P.K.: Artificial Neural Systems. Pergamon Press, New York (1990)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Anmin Zou (1)
  • Zengguang Hou (1)
  • Lejie Zhang (1)
  • Min Tan (1)
  1. Laboratory of Complex Systems and Intelligence Science, Institute of Automation, Chinese Academy of Sciences, Beijing, China
