Applying High-Level Understanding to Visual Localisation for Mapping

  • Trevor Taylor
Part of the Studies in Computational Intelligence book series (SCI, volume 76)

Digital cameras are now common on robots, but they typically have a relatively small field of view. Consequently, the camera is usually tilted downwards so that the floor immediately in front of the robot is visible for obstacle avoidance. With the camera tilted, vertical edges in the world no longer appear vertical in the image. This can be turned to advantage: the predictable slant helps to discriminate amongst the straight-line edges extracted from an image when searching for landmarks, and it can also be used to estimate angles of rotation and distances moved between successive images in order to assist with localisation. Horizontal edges in the real world rarely appear horizontal in the image because of perspective. By mapping such edges back to real-world coordinates, their locations in two successive images can be used to measure rotations or translations of the robot.
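As a concrete illustration of the geometry described in the abstract, the sketch below back-projects an image pixel onto the floor plane for a downward-tilted camera, and then estimates a rotation from one floor edge matched across two images. This is a minimal sketch under assumed pinhole-camera parameters (focal length f in pixels, principal point (cx, cy), camera height h, tilt angle), not the chapter's actual implementation; the function names, parameters, and the pure-rotation assumption in rotation_between_views are illustrative only.

```python
import math

def pixel_to_floor(u, v, f, cx, cy, h, tilt):
    """Back-project image pixel (u, v) onto the floor plane.

    Assumes a pinhole camera mounted at height h above the floor,
    tilted downwards by `tilt` radians, with focal length f (pixels)
    and principal point (cx, cy).  Returns (x, z): lateral offset and
    forward distance on the floor, in the same units as h.
    """
    # Ray through the pixel in camera coordinates (x right, y down, z forward).
    dx = (u - cx) / f
    dy = (v - cy) / f
    # Rotate the ray into world coordinates (pitch about the camera x-axis).
    wy = math.cos(tilt) * dy + math.sin(tilt)    # downward component
    wz = -math.sin(tilt) * dy + math.cos(tilt)   # forward component
    if wy <= 0:
        raise ValueError("pixel at or above the horizon; no floor intersection")
    s = h / wy               # scale at which the ray meets the floor plane
    return dx * s, wz * s

def rotation_between_views(edge_a, edge_b):
    """Estimate robot rotation from one floor edge seen in two images.

    edge_a / edge_b: the (x, z) floor-plane endpoints of the same
    real-world edge, back-projected from each image with
    pixel_to_floor.  Assumes the robot purely rotated between views,
    so the edge direction rotates by the same angle.
    """
    (ax1, az1), (ax2, az2) = edge_a
    (bx1, bz1), (bx2, bz2) = edge_b
    ang_a = math.atan2(az2 - az1, ax2 - ax1)
    ang_b = math.atan2(bz2 - bz1, bx2 - bx1)
    return ang_a - ang_b     # rotation taking view B back to view A
```

As a sanity check on the geometry: for a camera 0.3 m above the floor tilted down by 30 degrees, the pixel at the principal point maps to a point 0.3/tan(30°) ≈ 0.52 m directly ahead of the robot, which is where the optical axis intersects the floor.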

Keywords

Vertical Edge, Horizontal Edge, Visual Localisation, Structure From Motion, Visual Odometry

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Trevor Taylor
    1. Faculty of IT, Queensland University of Technology, Brisbane, Australia
