Map Slammer: Densifying Scattered KSLAM 3D Maps with Estimated Depth

  • Jose Miguel Torres-Camara
  • Felix Escalona
  • Francisco Gomez-Donoso
  • Miguel Cazorla
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1093)

Abstract

There is a range of small-sized robots that cannot afford to mount a three-dimensional sensor due to energy, size, or power limitations. However, the best localization and mapping algorithms and object recognition methods rely on a three-dimensional representation of the environment to provide enhanced capabilities. Thus, in this work we propose a method to create a dense three-dimensional representation of the environment by fusing the output of a KSLAM algorithm with predicted point clouds. We demonstrate the advantages of our method with quantitative and qualitative results, focusing on three different measures: localization accuracy, densification capability, and the accuracy of the resultant three-dimensional map.
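The fusion the abstract describes — registering a dense, per-image predicted point cloud against the sparse map produced by a KSLAM system and merging the two — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names (`umeyama_alignment`, `densify`), the use of NumPy, and the assumption that point correspondences between the predicted cloud and the sparse map landmarks are already known are all assumptions for the example. It uses the standard Umeyama least-squares similarity transform (scale, rotation, translation), one of the classic closed-form rigid-alignment methods.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Least-squares similarity transform mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding 3D points.
    Returns (s, R, t) such that dst ~ s * (R @ src_i) + t.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance between the centred point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against a reflection: force det(R) = +1.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

def densify(sparse_map, predicted_cloud, correspondences):
    """Align a predicted dense cloud to the sparse SLAM map and merge.

    correspondences: (idx_pred, idx_map) index arrays pairing points in
    predicted_cloud with landmarks in sparse_map.
    """
    idx_pred, idx_map = correspondences
    s, R, t = umeyama_alignment(predicted_cloud[idx_pred],
                                sparse_map[idx_map])
    aligned = s * predicted_cloud @ R.T + t
    return np.vstack([sparse_map, aligned])
```

The scale term matters here because monocular depth predictions and monocular SLAM maps each live in their own arbitrary scale, so a rigid transform alone would not bring them into agreement.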

Keywords

SLAM · 3D maps · Point clouds · Depth perception · Depth estimation · Sensor fusion

Acknowledgements

This work has been supported by the Spanish Government grant TIN2016-76515R, supported with FEDER funds, and by Spanish Government grant ID 998142 for cooperating in research tasks. It has also been supported by the Spanish grants for PhD studies ACIF/2017/243 and FPU16/00887. Thanks to Nvidia for the generous donation of a Titan Xp and a Quadro P6000.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Jose Miguel Torres-Camara¹
  • Felix Escalona¹
  • Francisco Gomez-Donoso¹
  • Miguel Cazorla¹

  1. Institute for Computer Research, University of Alicante, Alicante, Spain
