
Sharing Heterogeneous Spatial Knowledge: Map Fusion Between Asynchronous Monocular Vision and Lidar or Other Prior Inputs

  • Yan Lu
  • Joseph Lee
  • Shu-Hao Yeh
  • Hsin-Min Cheng
  • Baifan Chen
  • Dezhen Song
Conference paper
Part of the Springer Proceedings in Advanced Robotics book series (SPAR, volume 10)

Abstract

To enable low-cost mobile devices and robots equipped with monocular cameras to obtain accurate position information in GPS-denied environments, we propose to use pre-collected lidar or other prior data to rectify imprecise visual simultaneous localization and mapping (SLAM) results. This leads to a novel and nontrivial problem of fusing vision and prior/lidar data acquired from different perspectives and at different times. In fact, the lidar inputs can be replaced by any other prior mapping inputs from which vertical planes can be extracted; hence we refer to them collectively as prior/lidar data. We exploit the planar structure extracted from both the vision and the prior/lidar data and use it as the anchoring information to fuse the heterogeneous maps. We formulate a constrained global bundle adjustment with coplanarity constraints and solve it using a penalty-barrier approach. Through error analysis we prove that the coplanarity constraints help reduce the estimation uncertainties. We have implemented the system and tested it with real data. Initial results show that our algorithm reduces the absolute trajectory error of visual SLAM by as much as 68.3%.
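The core machinery described above is a bundle adjustment in which map points believed to lie on a lidar-derived vertical plane are pulled onto that plane by a penalty-barrier scheme. The sketch below illustrates only the quadratic-penalty half of that idea and is not the paper's implementation: it holds the camera pose fixed, assumes a single camera and a single prior plane, and every identifier (project, coplanar_penalty_ba, on_plane, and so on) is invented here for illustration.

```python
# Minimal sketch (not the paper's system) of coplanarity-constrained
# structure refinement via a quadratic-penalty method. Hypothetical
# simplifications: one fixed camera pose, one prior plane n.x + d = 0
# extracted from lidar, and a soft point-to-plane penalty weighted by
# sqrt(mu); the paper instead solves a constrained global bundle
# adjustment over poses and points with a penalty-barrier approach.
import numpy as np
from scipy.optimize import least_squares

def project(K, R, t, X):
    """Pinhole projection of an Nx3 point array X to Nx2 pixels."""
    x = (K @ (R @ X.T + t[:, None])).T
    return x[:, :2] / x[:, 2:3]

def residuals(params, K, R, t, obs, n, d, on_plane, mu):
    X = params.reshape(-1, 3)
    # Reprojection residuals against the observed pixel coordinates.
    r_proj = (project(K, R, t, X) - obs).ravel()
    # Soft coplanarity term: signed distances of the flagged points to
    # the prior plane, scaled so the squared cost is mu * distance^2.
    r_plane = np.sqrt(mu) * (X[on_plane] @ n + d)
    return np.concatenate([r_proj, r_plane])

def coplanar_penalty_ba(X0, K, R, t, obs, n, d, on_plane,
                        mu0=1.0, growth=10.0, outer_iters=5):
    """Classic penalty loop: re-solve the least-squares problem with an
    increasing weight mu so the constraint is met ever more tightly."""
    X, mu = X0.ravel().copy(), mu0
    for _ in range(outer_iters):
        X = least_squares(residuals, X,
                          args=(K, R, t, obs, n, d, on_plane, mu)).x
        mu *= growth
    return X.reshape(-1, 3)
```

In a full penalty-barrier formulation of the kind the abstract names, barrier terms would additionally keep the iterates away from infeasible regions, keyframe poses would be optimized jointly with the points, and points would be associated with multiple vertical planes; it is those shared planes that anchor the drift-prone monocular map to the metric prior.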

Acknowledgements

This work was supported in part by the National Science Foundation under Grants NRI-1426752, NRI-1526200, and NRI-1748161, and in part by the National Natural Science Foundation of China under Grant 61403423.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Yan Lu (1)
  • Joseph Lee (2)
  • Shu-Hao Yeh (3)
  • Hsin-Min Cheng (3)
  • Baifan Chen (4)
  • Dezhen Song (3)

  1. Honda Research Institute USA, Mountain View, USA
  2. U.S. Army TARDEC, Warren, USA
  3. Texas A&M University, College Station, USA
  4. Central South University, Hunan, China
