Robust and Practical Depth Map Fusion for Time-of-Flight Cameras

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10269)

Abstract

Fusion of overlapping depth maps is an important part of many 3D reconstruction pipelines. Ideally, fusion produces an accurate and nonredundant point cloud robustly, even from noisy and partially misregistered depth maps. In this paper, we improve an existing fusion algorithm towards this ideal. Our method builds a nonredundant point cloud from a sequence of depth maps: each new measurement is either added to the existing point cloud, if it falls in an area not yet covered, or used to refine an existing point. The method is robust to outliers and erroneous depth measurements, as well as to small depth map registration errors caused by inaccurate camera poses. The results show that the method outperforms its predecessor in both accuracy and robustness.
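The add-or-refine step described above can be sketched in a few lines. This is only an illustrative simplification, not the authors' implementation: the function name, the fixed coverage radius, the brute-force nearest-neighbor search, and the weighted-average refinement rule are all assumptions made for the sake of a self-contained example.

```python
def fuse_measurement(cloud, point, weight=1.0, radius=0.02):
    """Fuse one depth measurement into a nonredundant point cloud.

    cloud:  list of [x, y, z, w] entries, where w is the accumulated weight
    point:  (x, y, z) measurement back-projected from the current depth map
    radius: coverage radius (meters); an illustrative stand-in for the
            paper's coverage test
    """
    # Brute-force search for the nearest existing point within the radius.
    best_i, best_d2 = None, radius * radius
    for i, (x, y, z, _) in enumerate(cloud):
        d2 = (x - point[0]) ** 2 + (y - point[1]) ** 2 + (z - point[2]) ** 2
        if d2 < best_d2:
            best_i, best_d2 = i, d2

    if best_i is None:
        # Uncovered area: add the measurement as a new point.
        cloud.append([point[0], point[1], point[2], weight])
    else:
        # Covered area: refine the existing point by a weighted average
        # (a simplified stand-in for the paper's refinement rule).
        x, y, z, w = cloud[best_i]
        tw = w + weight
        cloud[best_i] = [(x * w + point[0] * weight) / tw,
                         (y * w + point[1] * weight) / tw,
                         (z * w + point[2] * weight) / tw,
                         tw]
    return cloud
```

A real pipeline would replace the linear scan with a spatial index (e.g. a voxel grid or k-d tree) and derive the weight from the sensor's depth-dependent noise model, but the branch structure, add when uncovered, refine when covered, is the essence of the method.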

Keywords

Depth map merging · RGB-D reconstruction

References

  1. Choi, S., Zhou, Q.Y., Koltun, V.: Robust reconstruction of indoor scenes. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5556–5565 (2015)
  2. Córdova-Esparza, D.M., Terven, J.R., Jiménez-Hernández, H., Herrera-Navarro, A.M.: A multiple camera calibration and point cloud fusion tool for Kinect v2. Sci. Comput. Program. (2017, in press)
  3. Fuhrmann, S., Goesele, M.: Fusion of depth maps with multiple scales. In: Proceedings of the 2011 SIGGRAPH Asia Conference, pp. 148:1–148:8. ACM (2011)
  4. Goesele, M., Curless, B., Seitz, S.M.: Multi-view stereo revisited. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2006)
  5. Herrera C., D., Kannala, J., Heikkilä, J.: Joint depth and color camera calibration with distortion correction. IEEE Trans. Pattern Anal. Mach. Intell. (TPAMI) 34(10), 2058–2064 (2012)
  6. Kazhdan, M., Bolitho, M., Hoppe, H.: Poisson surface reconstruction. In: Eurographics Symposium on Geometry Processing (2006)
  7. Kyöstilä, T., Herrera C., D., Kannala, J., Heikkilä, J.: Merging overlapping depth maps into a nonredundant point cloud. In: Kämäräinen, J.-K., Koskela, M. (eds.) SCIA 2013. LNCS, vol. 7944, pp. 567–578. Springer, Heidelberg (2013). doi:10.1007/978-3-642-38886-6_53
  8. Labatut, P., Pons, J.P., Keriven, R.: Robust and efficient surface reconstruction from range data. Comput. Graph. Forum (CGF) 28(8), 2275–2290 (2009)
  9. Li, J., Li, E., Chen, Y., Xu, L., Zhang, Y.: Bundled depth-map merging for multi-view stereo. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2010)
  10. Mendel, J.: Lessons in Estimation Theory for Signal Processing, Communications and Control. Prentice Hall, Englewood Cliffs (1995)
  11. Merrell, P., et al.: Real-time visibility-based fusion of depth maps. In: IEEE International Conference on Computer Vision (ICCV) (2007)
  12. Mur-Artal, R., Montiel, J.M.M., Tardós, J.D.: ORB-SLAM: a versatile and accurate monocular SLAM system. IEEE Trans. Robot. 31(5), 1147–1163 (2015)
  13. Naik, N., Kadambi, A., Rhemann, C., Izadi, S., Raskar, R., Kang, S.B.: A light transport model for mitigating multipath interference in time-of-flight sensors. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 73–81 (2015)
  14. Nießner, M., Zollhöfer, M., Izadi, S., Stamminger, M.: Real-time 3D reconstruction at scale using voxel hashing. ACM Trans. Graph. (TOG) 32(6), 169 (2013)
  15. Pagliari, D., Pinto, L.: Calibration of Kinect for Xbox One and comparison between the two generations of Microsoft sensors. Sensors 15(11), 27569–27589 (2015)
  16. Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: real-time dense surface mapping and tracking. In: IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 127–136 (2011)
  17. Roth, H., Vona, M.: Moving volume KinectFusion. In: British Machine Vision Conference (BMVC) (2012)
  18. Tola, E., Strecha, C., Fua, P.: Efficient large-scale multi-view stereo for ultra high-resolution image sets. Mach. Vis. Appl. 23(5), 903–920 (2012)
  19. Whelan, T., Kaess, M., Fallon, M., Johannsson, H., Leonard, J., McDonald, J.: Kintinuous: spatially extended KinectFusion. Technical report (2012)
  20. Ylimäki, M., Kannala, J., Heikkilä, J.: Optimizing the accuracy and compactness of multi-view reconstructions, pp. 171–183, September 2015
  21. Zach, C., Pock, T., Bischof, H.: A globally optimal algorithm for robust TV-\(L^1\) range image integration. In: IEEE International Conference on Computer Vision (ICCV) (2007)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Markus Ylimäki (1)
  • Juho Kannala (2)
  • Janne Heikkilä (1)
  1. Center for Machine Vision Research, University of Oulu, Oulu, Finland
  2. Department of Computer Science, Aalto University, Espoo, Finland