PSDF Fusion: Probabilistic Signed Distance Function for On-the-fly 3D Data Fusion and Scene Reconstruction

  • Wei Dong
  • Qiuyuan Wang
  • Xin Wang
  • Hongbin Zha
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11213)

Abstract

We propose a novel 3D spatial representation for data fusion and scene reconstruction. The Probabilistic Signed Distance Function (Probabilistic SDF, PSDF) captures uncertainty in 3D space: it is modeled as a joint distribution over the SDF value and its inlier probability, reflecting both input data quality and surface geometry. A hybrid data structure combining voxels, surfels, and meshes is designed to exploit the complementary advantages of these prevalent 3D representations; linked by PSDF, the components cooperate within a consistent framework. Given sequential depth measurements, PSDF is incrementally refined through a less ad hoc, parametric Bayesian update. Supported by PSDF and the efficient 3D data representation, high-quality surfaces can be extracted on-the-fly and, in turn, their geometry makes subsequent data fusion more reliable. Experiments demonstrate that our system reconstructs scenes with higher model quality and lower redundancy, and runs faster than existing online mesh generation systems.
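A joint distribution over an SDF value and its inlier probability admits a compact parametric approximation. As a rough illustration only, and not the authors' implementation, the sketch below maintains a Gaussian over the per-voxel SDF value and a Beta distribution over its inlier ratio, refreshed with the moment-matching Bayesian update of Vogiatzis and Hernández; the PSDFVoxel fields, the uniform outlier range sdf_range, and all function names are assumptions made for this example.

```python
import math
from dataclasses import dataclass

@dataclass
class PSDFVoxel:
    """Hypothetical per-voxel state: Gaussian over the SDF value,
    Beta over the inlier ratio."""
    mu: float = 0.0      # mean SDF value
    sigma2: float = 1.0  # SDF variance
    a: float = 1.0       # Beta parameter: pseudo-count of inlier observations
    b: float = 1.0       # Beta parameter: pseudo-count of outlier observations

    @property
    def inlier_prob(self) -> float:
        return self.a / (self.a + self.b)

def gauss_pdf(x: float, mean: float, var: float) -> float:
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

def update_psdf(v: PSDFVoxel, sdf_obs: float, tau2: float,
                sdf_range: float = 0.2) -> None:
    """One parametric Bayesian update of the Gaussian x Beta state given a
    new SDF observation sdf_obs with measurement variance tau2.
    Outliers are assumed uniform on [-sdf_range, sdf_range]."""
    # Posterior Gaussian under the inlier hypothesis.
    s2 = 1.0 / (1.0 / v.sigma2 + 1.0 / tau2)
    m = s2 * (v.mu / v.sigma2 + sdf_obs / tau2)
    # Responsibilities of the inlier / outlier hypotheses.
    c1 = v.a / (v.a + v.b) * gauss_pdf(sdf_obs, v.mu, v.sigma2 + tau2)
    c2 = v.b / (v.a + v.b) * (1.0 / (2.0 * sdf_range))
    norm = c1 + c2
    c1, c2 = c1 / norm, c2 / norm
    # First and second moments of the inlier-ratio posterior.
    f = c1 * (v.a + 1) / (v.a + v.b + 1) + c2 * v.a / (v.a + v.b + 1)
    e = (c1 * (v.a + 1) * (v.a + 2) / ((v.a + v.b + 1) * (v.a + v.b + 2))
         + c2 * v.a * (v.a + 1) / ((v.a + v.b + 1) * (v.a + v.b + 2)))
    # Moment-match back to a single Gaussian and a single Beta.
    mu_new = c1 * m + c2 * v.mu
    v.sigma2 = c1 * (s2 + m * m) + c2 * (v.sigma2 + v.mu * v.mu) - mu_new ** 2
    v.mu = mu_new
    v.a = (e - f) / (f - e / f)
    v.b = v.a * (1.0 - f) / f
```

Under this update, the inlier probability a/(a+b) rises for voxels whose measurements agree and falls for noisy or conflicting ones; a PSDF-style pipeline could use it to decide which voxels are reliable enough to contribute surfels and mesh triangles.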

Keywords

Signed Distance Function · Bayesian updating

Acknowledgments

This work is supported by the National Natural Science Foundation of China (61632003, 61771026), and National Key Research and Development Program of China (2017YFB1002601).

Supplementary material

Supplementary material 1 (mp4 10051 KB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, Beijing, China
  2. Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai, China
