Continuous-Time Stereo Visual Odometry Based on Dynamics Model

  • Xin Wang
  • Fei Xue
  • Zike Yan
  • Wei Dong
  • Qiuyuan Wang
  • Hongbin Zha
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11366)

Abstract

We propose a dynamics model that represents the camera trajectory as a continuous function of time and applied forces. Equipped with this representation, we recast the classical visual odometry problem as estimating the forces acting on the camera. In contrast to classical discrete-time estimation strategies, the framework inherently captures the continuous nature of camera motion, and the motion within each time interval can be modeled with only a few parameters. The dynamics model guarantees continuous velocity and hence a smooth trajectory, which is robust against noise and avoids pose vibration. Evaluations on real-world benchmark datasets show that our method outperforms other continuous-time methods.
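The abstract does not give the concrete parameterization, but the core idea can be illustrated with a minimal sketch: assume each time interval is governed by a constant force (hence constant acceleration) and chain intervals so that each starts from the end state of the previous one. The class name `ConstantForceSegment`, the unit mass, and all numeric values below are hypothetical and not taken from the paper.

```python
import numpy as np

class ConstantForceSegment:
    """Translational trajectory segment starting at t0, parameterized by a
    constant force (a few parameters per time interval, as in the abstract)."""

    def __init__(self, t0, p0, v0, force, mass=1.0):
        self.t0 = t0
        self.p0 = np.asarray(p0, dtype=float)            # position at t0
        self.v0 = np.asarray(v0, dtype=float)            # velocity at t0
        self.a = np.asarray(force, dtype=float) / mass   # constant acceleration

    def velocity(self, t):
        # v(t) = v0 + a * (t - t0): linear in time, continuous within the segment
        dt = t - self.t0
        return self.v0 + self.a * dt

    def position(self, t):
        # p(t) = p0 + v0 * dt + 0.5 * a * dt^2: smooth (C1) in time
        dt = t - self.t0
        return self.p0 + self.v0 * dt + 0.5 * self.a * dt ** 2


# Chain two segments: the second inherits the end state of the first,
# which enforces velocity continuity at the interval boundary t = 1.0.
seg1 = ConstantForceSegment(t0=0.0, p0=[0, 0, 0], v0=[1, 0, 0], force=[0, 0.5, 0])
seg2 = ConstantForceSegment(t0=1.0, p0=seg1.position(1.0),
                            v0=seg1.velocity(1.0), force=[-0.2, 0, 0])
print(seg2.position(1.5), seg2.velocity(1.5))
```

Because each segment starts from the end position and velocity of its predecessor, the concatenated trajectory has continuous velocity across interval boundaries, which is the smoothness property the abstract attributes to the dynamics model.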

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, Beijing, China
  2. Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, China
  3. Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
