Stereo Visual Odometry for Urban Vehicles Using Ground Features

  • Arturo de la Escalera
  • Ebroul Izquierdo
  • David Martín
  • Fernando García
  • José María Armingol
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 417)

Abstract

Autonomous vehicles rely on accurate estimates of their pose, speed and direction of travel to perform basic navigation tasks. Although GPS is very useful, it has drawbacks in urban environments. Visual odometry is an alternative or complementary method: it uses a sensor already available in many vehicles for other tasks and provides the ego-motion of the vehicle with sufficient accuracy. In this paper, a new method is proposed that detects and tracks features on the ground surface, arising from the texture of the road or street and from road markings. This ensures that only static points are taken into account when computing the relative motion between images. A Kalman filter that incorporates the Ackermann steering constraints is then applied. Results in real urban environments are presented to demonstrate the good performance of the algorithm.
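The core step sketched in the abstract, recovering the vehicle's relative motion from tracked static ground points, can be illustrated with a minimal example. The paper's own implementation is not reproduced here; the snippet below assumes matched ground-plane features from two consecutive frames (projected onto the road plane) and fits the 2D rigid motion between them with a standard least-squares (Kabsch/Procrustes) solution. All function and variable names are illustrative.

```python
import numpy as np

def estimate_planar_motion(pts_prev, pts_curr):
    """Illustrative sketch (not the authors' implementation).

    Estimate the 2D rigid motion (rotation R, translation t) that best maps
    pts_prev onto pts_curr in the least-squares sense, i.e. pts_curr ~ R @ p + t.
    Both inputs are (N, 2) arrays of matched ground-plane points.
    """
    # Center both point sets on their centroids.
    c_prev = pts_prev.mean(axis=0)
    c_curr = pts_curr.mean(axis=0)
    p = pts_prev - c_prev
    q = pts_curr - c_curr
    # SVD of the cross-covariance matrix yields the optimal rotation (Kabsch).
    u, _, vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against a reflection solution
    rot = vt.T @ np.diag([1.0, d]) @ u.T
    # Translation follows from the centroids.
    t = c_curr - rot @ c_prev
    return rot, t
```

In practice such a closed-form fit would be wrapped in an outlier-rejection scheme (e.g. RANSAC) and, as the abstract notes, the resulting frame-to-frame increments would be smoothed by a Kalman filter constrained to Ackermann steering kinematics.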

Keywords

Autonomous vehicles · Visual odometry

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Arturo de la Escalera (1)
  • Ebroul Izquierdo (2)
  • David Martín (1)
  • Fernando García (1)
  • José María Armingol (1)
  1. Universidad Carlos III de Madrid, Leganés, Spain
  2. Queen Mary University of London, London, UK