Exploring Distance-Aware Weighting Strategies for Accurate Reconstruction of Voxel-Based 3D Synthetic Models

  • Hani Javan Hemmat
  • Egor Bondarev
  • Peter H. N. de With
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8325)

Abstract

In this paper, we propose and evaluate various distance-aware weighting strategies to improve the reconstruction accuracy of voxel-based 3D models built with the Truncated Signed Distance Function (TSDF) from data obtained by low-cost depth sensors. We explore two strategy directions: (a) weight-definition strategies that prioritize the sensed data according to its accuracy, and (b) model-updating strategies that define how strongly new data influences the existing 3D model. In particular, we introduce the Distance-Aware (DA) and Distance-Aware Slow-Saturation (DASS) updating methods to intelligently integrate depth data into the synthetic 3D model, based on the distance-sensitivity characteristics of a low-cost depth sensor. By quantitative and qualitative comparison of the resulting synthetic 3D models with the corresponding ground-truth models, we identify the most promising strategies, which reduce the model error by 10–35%.
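
To make the weighting idea concrete, the Python sketch below shows a classical weighted TSDF voxel update (in the spirit of Curless and Levoy's volumetric fusion) extended with a hypothetical distance-dependent weight and a capped, slowly saturating accumulated weight. The noise-model constants, the function distance_aware_weight, and the cap W_MAX are illustrative assumptions and do not reproduce the paper's exact DA/DASS formulations.

W_MAX = 64.0  # saturation cap on the accumulated weight (assumed value)

def distance_aware_weight(depth_m, sigma0=0.0012, k=0.0019, z_ref=0.8):
    """Hypothetical distance-aware weight: measurements taken further away
    are noisier and therefore receive a smaller weight. The depth noise
    sigma(z) is modelled as growing quadratically with depth; the constants
    are illustrative only, not the paper's calibrated values."""
    sigma_ref = sigma0 + k * z_ref ** 2
    sigma = sigma0 + k * depth_m ** 2
    return (sigma_ref / sigma) ** 2

def update_voxel(tsdf, weight, sdf_meas, depth_m):
    """Fuse one truncated signed-distance measurement into a voxel using
    the classical weighted running average, with a capped total weight."""
    w_new = distance_aware_weight(depth_m)
    tsdf_out = (weight * tsdf + w_new * sdf_meas) / (weight + w_new)
    weight_out = min(weight + w_new, W_MAX)  # cap gives slow saturation
    return tsdf_out, weight_out

# Example: a voxel observed first at 1.0 m and later at 3.5 m. The distant,
# noisier observation barely changes the stored TSDF value.
tsdf, w = update_voxel(0.0, 0.0, 0.02, 1.0)
tsdf, w = update_voxel(tsdf, w, -0.05, 3.5)
print(tsdf, w)

Capping the accumulated weight is one simple way to keep a model from becoming immutable: even a heavily observed voxel can still be corrected by later, closer (and thus more heavily weighted) measurements.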

Keywords

3D reconstruction · Voxel models · Weighting strategy · Truncated Signed Distance Function (TSDF) · Low-cost depth sensor

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Hani Javan Hemmat (1)
  • Egor Bondarev (1)
  • Peter H. N. de With (1)
  1. Eindhoven University of Technology, Eindhoven, Netherlands
