Quantitative Analysis of Surface Reconstruction Accuracy Achievable with the TSDF Representation

  • Diana Werner
  • Philipp Werner
  • Ayoub Al-Hamadi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9163)

Abstract

In recent years, KinectFusion and related algorithms have enabled significant advances in real-time simultaneous localization and mapping (SLAM) with depth-sensing cameras. Nearly all of these algorithms represent the observed scene with a truncated signed distance function (TSDF). The reconstruction accuracy achievable with this representation is crucial for camera pose estimation and object reconstruction. We therefore evaluate this reconstruction accuracy in an optimal context, i.e., assuming error-free camera pose estimation and depth measurements. For this purpose, we use a synthetic dataset of depth image sequences with corresponding camera pose ground truth and compare the reconstructed point clouds against the ground truth meshes. We investigate several influencing factors, especially the TSDF resolution, and show that the TSDF is a very powerful representation even at low resolutions.
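
For context, the sketch below shows how a single depth frame is typically fused into a TSDF voxel grid via the weighted running average introduced by Curless and Levoy, which KinectFusion-style systems build on. It is a minimal illustration, not the authors' implementation: the function name, array layout, and the fixed per-observation weight are assumptions, and the pose T_wc is taken as error-free to mirror the paper's optimal-context evaluation setting.

    import numpy as np

    def integrate_depth_frame(tsdf, weights, depth, K, T_wc, voxel_size, trunc):
        # Fuse one depth frame into a TSDF voxel grid using a weighted
        # running average (Curless & Levoy style). `tsdf` and `weights` are
        # C-contiguous float arrays of shape (X, Y, Z); `depth` is an (H, W)
        # depth image in metres; `K` is the 3x3 intrinsic matrix; `T_wc` is
        # the 4x4 world-to-camera transform, assumed error-free here.
        X, Y, Z = tsdf.shape
        H, W = depth.shape

        # World coordinates of all voxel centres.
        ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                                 indexing="ij")
        pts_w = np.stack([ix, iy, iz], axis=-1).reshape(-1, 3) * voxel_size

        # Transform voxel centres into the camera frame and project them.
        pts_c = pts_w @ T_wc[:3, :3].T + T_wc[:3, 3]
        z = pts_c[:, 2]
        safe_z = np.where(z > 0, z, 1.0)  # avoid division by zero
        u = np.round(K[0, 0] * pts_c[:, 0] / safe_z + K[0, 2]).astype(int)
        v = np.round(K[1, 1] * pts_c[:, 1] / safe_z + K[1, 2]).astype(int)

        # Keep voxels that fall inside the image with a valid depth reading.
        valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        d = np.where(valid, depth[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
        valid &= d > 0

        # Projective signed distance to the surface, truncated to
        # [-trunc, trunc]; voxels far behind the surface stay untouched.
        sdf = np.clip(d - z, -trunc, trunc)
        update = valid & (d - z >= -trunc)

        # Per-voxel weighted running average with observation weight 1.
        # The flattened views share memory with the input arrays.
        flat_tsdf, flat_w = tsdf.reshape(-1), weights.reshape(-1)
        new_w = flat_w[update] + 1.0
        flat_tsdf[update] = (flat_tsdf[update] * flat_w[update]
                             + sdf[update]) / new_w
        flat_w[update] = new_w

A surface point cloud or mesh is then extracted at the zero crossing of the fused field (e.g., by ray casting or marching cubes); this extracted surface is what the paper's evaluation compares against the ground truth meshes.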

Keywords

TSDF · Reconstruction accuracy · KinectFusion

Acknowledgments

This work was supported by Transregional Collaborative Research Centre SFB/TRR 62 (Companion-Technology for Cognitive Technical Systems) funded by the German Research Foundation (DFG).

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. University of Magdeburg, Magdeburg, Germany
