Fast Monocular Visual Compass for a Computationally Limited Robot

  • Peter Anderson
  • Bernhard Hengst
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8371)

Abstract

This paper introduces a computationally inexpensive method for estimating monocular, feature-based, heading-only visual odometry: a visual compass. The method is shown to reduce the odometric uncertainty of an uncalibrated humanoid robot by 73%, while remaining robust to the presence of independently moving objects. High efficiency is achieved by exploiting the planar motion assumption in both the feature extraction process and the pose estimation problem. On the relatively low-powered Intel Atom processor this visual compass takes only 6.5 ms per camera frame, and it was used effectively to assist localisation in the UNSW Standard Platform League entry at RoboCup 2012.
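The core idea behind a heading-only visual compass can be illustrated with a minimal sketch (this is an assumption-laden illustration, not the paper's implementation): under the planar motion assumption, a pure rotation shifts distant features by a common horizontal angle, so the yaw change between frames can be recovered as a robust average of per-feature horizontal pixel shifts. The function below assumes a pinhole small-angle approximation and uses a median so that features on independently moving objects contribute only outlier shifts.

```python
import math

def heading_change(prev_x, curr_x, image_width, hfov_rad):
    """Estimate yaw change between two frames from the horizontal pixel
    displacement of matched features.

    prev_x, curr_x : x-coordinates (pixels) of the same features in the
                     previous and current frame, in matched order.
    image_width    : image width in pixels.
    hfov_rad       : horizontal field of view of the camera, in radians.

    Returns the estimated heading change in radians.  The median makes the
    estimate robust to features on independently moving objects, which
    produce outlier displacements."""
    # Angle subtended per pixel (small-angle pinhole approximation).
    rad_per_px = hfov_rad / image_width
    shifts = sorted((c - p) * rad_per_px for p, c in zip(prev_x, curr_x))
    n = len(shifts)
    mid = n // 2
    return shifts[mid] if n % 2 else 0.5 * (shifts[mid - 1] + shifts[mid])
```

For example, if four background features all shift right by 10 pixels while one feature on a moving object shifts left by 310 pixels, the median discards the outlier and the estimate corresponds to a 10-pixel rotation.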


References

  1. Scaramuzza, D., Fraundorfer, F.: Visual odometry: Part I - the first 30 years and fundamentals. IEEE Robotics and Automation Magazine 18 (2011)
  2. Nistér, D., Naroditsky, O., Bergen, J.: Visual odometry for ground vehicle applications. Journal of Field Robotics 23 (2006)
  3. Liang, B., Pears, N.: Visual navigation using planar homographies. In: IEEE International Conference on Robotics and Automation (ICRA 2002), pp. 205–210 (2002)
  4. Scaramuzza, D., Fraundorfer, F., Siegwart, R.: Real-time monocular visual odometry for on-road vehicles with 1-point RANSAC. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (2009)
  5. Scaramuzza, D.: 1-point-RANSAC structure from motion for vehicle-mounted cameras by exploiting non-holonomic constraints. International Journal of Computer Vision 95, 75–85 (2011)
  6. Choi, S., Joung, J.H., Yu, W., Cho, J.I.: What does ground tell us? Monocular visual odometry under planar motion constraint. In: 11th International Conference on Control, Automation and Systems (ICCAS), pp. 1480–1485 (2011)
  7. Hoiem, D., Efros, A.A., Hebert, M.: Recovering surface layout from an image. International Journal of Computer Vision 75 (2007)
  8. Juan, L., Gwun, O.: A comparison of SIFT, PCA-SIFT and SURF. International Journal of Image Processing (IJIP) 3(4), 143–152 (2009)
  9. Kitt, B., Moosmann, F., Stiller, C.: Moving on to dynamic environments: Visual odometry using feature classification. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (2010)
  10. Anderson, P., Yusmanthia, Y., Hengst, B., Sowmya, A.: Robot localisation using natural landmarks. In: Chen, X., Stone, P., Sucar, L.E., van der Zant, T. (eds.) RoboCup 2012. LNCS (LNAI), vol. 7500, pp. 118–129. Springer, Heidelberg (2013)
  11. Bay, H., Ess, A., Tuytelaars, T., Van Gool, L.: Speeded-up robust features (SURF). Computer Vision and Image Understanding 110(3), 346–359 (2008)
  12. Borenstein, J., Feng, L.: UMBmark: A benchmark test for measuring odometry errors in mobile robots. In: SPIE Conference on Mobile Robots (1995)
  13. Harris, S., Anderson, P., Teh, B., Hunter, Y., Liu, R., Hengst, B., Roy, R., Li, S., Chatfield, C.: RoboCup Standard Platform League - rUNSWift 2012 innovations. In: Proceedings of the 2012 Australasian Conference on Robotics and Automation (ACRA 2012) (2012)

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Peter Anderson (1)
  • Bernhard Hengst (1)

  1. School of Computer Science and Engineering, University of New South Wales, Australia
