3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

  • Ali K. Thabet
  • Jean Lahoud
  • Daniel Asmar
  • Bernard Ghanem
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9004)

Abstract

RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements and discard unreliable ones. This paper studies how reliable depth values can be used to correct unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e. infer depth at pixels with unknown depth values), given a prior model of the 3D scene. We consider piecewise planar environments, since many indoor scenes containing man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g. floor and ceiling) and iteratively complete the depth map where possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values compatible, in 3D, with the piecewise planar assumption. Extensive experiments on a new large-scale and challenging dataset show that our approach produces more accurate depth maps (with 20% more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D-aware depth. The generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
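As a rough illustration of the noise-adaptive plane fitting described above, the sketch below fits a single plane to a point cloud with RANSAC, where the inlier threshold scales with a depth-dependent noise model. The quadratic noise constants and the fitting procedure are assumptions for illustration only, not the authors' actual sensor profile or implementation.

```python
import numpy as np

def axial_noise_std(z):
    # Hypothetical structured-light axial noise model: standard deviation
    # grows quadratically with depth z (meters). A stand-in for the
    # sensor noise profile used in the paper, not the paper's own model.
    return 0.0012 + 0.0019 * (z - 0.4) ** 2

def ransac_plane(points, iters=500, seed=0):
    # Fit one plane n.p + d = 0 to an Nx3 cloud with RANSAC. A point
    # counts as an inlier when its distance to the plane is within
    # 3 sigma of the modeled sensor noise at that point's depth.
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        s = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(s[1] - s[0], s[2] - s[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:  # degenerate (near-collinear) sample
            continue
        n = n / norm
        d = -n @ s[0]
        dist = np.abs(points @ n + d)
        inliers = dist < 3.0 * axial_noise_std(points[:, 2])
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Synthetic scene: 200 points on a floor-like plane z = 1.5 m
# (1 mm noise) mixed with 100 random clutter points.
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(0, 2, 200),
                         rng.uniform(0, 2, 200),
                         1.5 + rng.normal(0, 0.001, 200)])
clutter = np.column_stack([rng.uniform(0, 2, 100),
                           rng.uniform(0, 2, 100),
                           rng.uniform(0.5, 3.0, 100)])
points = np.vstack([plane, clutter])
inlier_mask = ransac_plane(points)
```

Because the threshold adapts to depth, far-away surfaces (noisier measurements) are not rejected by a fixed tolerance tuned for near surfaces; the paper applies this idea iteratively to extract plane segments before MRF-based completion.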

Keywords

Markov Random Field · Indoor Scene · Visual SLAM · Joint Bilateral Filter · Unknown Depth

Notes

Acknowledgement

Research reported in this publication was supported by competitive research funding from King Abdullah University of Science and Technology (KAUST).

Supplementary material

336656_1_En_16_MOESM1_ESM.zip (zip, 18,404 KB)


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Ali K. Thabet (1)
  • Jean Lahoud (1)
  • Daniel Asmar (2)
  • Bernard Ghanem (1)
  1. King Abdullah University of Science and Technology (KAUST), Thuwal, Saudi Arabia
  2. American University of Beirut (AUB), Beirut, Lebanon