
CubemapSLAM: A Piecewise-Pinhole Monocular Fisheye SLAM System

  • Yahui Wang
  • Shaojun Cai
  • Shi-Jie Li
  • Yun Liu
  • Yangyan Guo
  • Tao Li
  • Ming-Ming Cheng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11366)

Abstract

We present a real-time feature-based SLAM (Simultaneous Localization and Mapping) system for fisheye cameras featuring a large field of view (FoV). Large-FoV cameras benefit large-scale outdoor SLAM applications because they increase the visual overlap between consecutive frames and capture more pixels belonging to the static parts of the environment. However, current feature-based SLAM systems such as PTAM and ORB-SLAM restrict their camera model to the pinhole model. To fill this gap, we propose a novel SLAM system based on a cubemap model that exploits the full FoV without introducing the distortion of the fisheye lens, which greatly benefits the feature-matching pipeline. In the initialization and point-triangulation stages, we adopt a unified vector-based representation to handle matches across multiple cube faces efficiently, and on top of this representation we propose and analyze a novel inlier-checking metric. In the optimization stage, we design and test a novel multi-pinhole reprojection-error metric that outperforms other metrics by a large margin. We evaluate our system comprehensively on a public dataset as well as on a self-collected dataset containing challenging real-world sequences. The results suggest that our system is more robust and accurate than other feature-based fisheye SLAM approaches. The CubemapSLAM system has been released to the public.
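To make the piecewise-pinhole idea concrete, the sketch below maps a unit bearing vector to one cube face and its pixel coordinates, treating each face as a virtual pinhole camera with a 90-degree FoV (focal length equal to half the face width, principal point at the face centre). This is a minimal illustration only: the function name, the face labels, the axis conventions, and the face_size parameter are assumptions made here and need not match the released CubemapSLAM code.

    import numpy as np

    def bearing_to_cubemap(v, face_size=800):
        """Map a unit bearing vector to a cubemap face and pixel coordinates.

        Each face acts as a virtual pinhole camera with a 90-degree FoV:
        focal length = half the face width, principal point = face centre.
        Camera frame: x right, y down, z forward.  Returns (face, u, v)
        in pixels, or None if the ray exits through the unused rear face.
        """
        x, y, z = v / np.linalg.norm(v)
        f = c = face_size / 2.0
        ax, ay, az = abs(x), abs(y), abs(z)

        if az >= ax and az >= ay:
            if z <= 0:
                return None                  # rear face: outside the fisheye FoV
            face, a, b, d = "front", x, y, z
        elif ax >= ay:
            # the right face looks along +x, the left face along -x
            face, a, b, d = ("right", -z, y, x) if x > 0 else ("left", z, y, -x)
        else:
            # with y pointing down, rays with positive y land on the "down" face
            face, a, b, d = ("down", x, -z, y) if y > 0 else ("up", x, z, -y)

        return face, f * a / d + c, f * b / d + c

    # A ray along the optical axis hits the centre of the front face:
    print(bearing_to_cubemap(np.array([0.0, 0.0, 1.0])))  # ('front', 400.0, 400.0)

A per-face residual built on such a projection (observed keypoint minus the projection onto the face its ray selects) mirrors, in spirit, the multi-pinhole reprojection error described above.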

Keywords

Omnidirectional vision · Fisheye SLAM · Cubemap

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yahui Wang (1)
  • Shaojun Cai (2), corresponding author
  • Shi-Jie Li (1)
  • Yun Liu (1)
  • Yangyan Guo (3)
  • Tao Li (1)
  • Ming-Ming Cheng (1)

  1. College of Computer Science, Nankai University, Tianjin, China
  2. UISEE Technology (Beijing) Co., Ltd., Beijing, China
  3. University of Chinese Academy of Sciences, Beijing, China
