Fast 6D Odometry Based on Visual Features and Depth

  • Chapter
Frontiers of Intelligent Autonomous Systems

Part of the book series: Studies in Computational Intelligence (SCI, volume 466)

Abstract

The availability of affordable RGB-D cameras that provide color and depth data at high rates, such as the Microsoft Kinect, poses a challenge to the limited computational resources onboard autonomous robots. Estimating the sensor trajectory, for example, is a key ingredient for robot localization and SLAM (Simultaneous Localization And Mapping), but current onboard computers can hardly handle the stream of measurements. In this paper, we propose an efficient and reliable method to estimate the 6D movement (three linear translations and three rotation angles) of a moving RGB-D camera. Our approach is based on visual features that are mapped to three-dimensional Cartesian coordinates using the measured depth. The features of consecutive frames are associated in 3D, and the sensor pose increments are obtained by solving the resulting linear least-squares minimization problem. The main contribution of our approach is a filter setup that selects the most reliable features, allowing the sensor pose to be tracked with a limited number of feature points. We systematically evaluate our approach using ground truth from an external measurement system.
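The pose-increment step described above, aligning associated 3D feature points of consecutive frames in a least-squares sense, can be sketched with the standard closed-form SVD (Kabsch) solution for rigid alignment. This is a generic illustration under the assumption of already-matched, outlier-free point pairs, not the authors' implementation; the function name and interface are hypothetical:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ≈ R @ src + t.

    src, dst: (N, 3) arrays of associated 3D feature points from two
    consecutive RGB-D frames (illustrative helper, not from the paper).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is negative
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Concatenating such per-frame increments yields the sensor trajectory; in practice the quality of the result hinges on rejecting unreliable associations beforehand, which is the role of the filter setup the paper contributes.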





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Domínguez, S., Zalama, E., García-Bermejo, J.G., Worst, R., Behnke, S. (2013). Fast 6D Odometry Based on Visual Features and Depth. In: Lee, S., Yoon, KJ., Lee, J. (eds) Frontiers of Intelligent Autonomous Systems. Studies in Computational Intelligence, vol 466. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-35485-4_1

  • DOI: https://doi.org/10.1007/978-3-642-35485-4_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-35484-7

  • Online ISBN: 978-3-642-35485-4

  • eBook Packages: Engineering (R0)
