
Texture-guided depth upsampling using Bregman split: a clustering graph-based approach

  • Original Article
  • The Visual Computer

Abstract

Recently, RGB-D sensors have gained significant popularity due to their affordable cost. Compared with their associated high-resolution (HR) color images, the captured depth maps typically have much lower resolution. In addition, the quality of these maps is often inadequate for further applications because of holes, noise and artifacts. In this paper, we propose a clustering graph-based framework for depth map super-resolution. The framework uses an HR textured-intensity layer as guidance to support and enforce high-frequency details during depth map recovery. This textured layer is extracted from the consolidated HR intensity image through a texture–structure separation process based on a new relative total variation technique. Furthermore, instead of the standard sparse representation, which does not exploit local structural information effectively, we propose a novel clustered-graph sparse representation with a low-rank prior. With this joint representation, signals can be coded effectively: the low-rank property captures global structural information, while intrinsic local information is preserved by a novel multiclass incoherence self-learning between classes. At the same time, grouped coherence within each class dictionary is preserved. We optimize the resulting joint objective function with the split Bregman algorithm. Experimental results on the Middlebury 2005, 2007 and 2014 datasets and on real-world data demonstrate that the proposed algorithm is efficient and outperforms state-of-the-art approaches in terms of both objective and subjective quality.
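
Because the method hinges on the split Bregman algorithm to optimize its joint objective, the sketch below illustrates the basic split Bregman mechanics on a standard ℓ1-regularized sparse-coding (lasso) subproblem: the non-smooth ℓ1 term is decoupled through an auxiliary variable and a Bregman variable, leaving a quadratic update and an element-wise shrinkage. This is a generic illustration only; the function name `split_bregman_lasso`, the parameters `lam` and `mu`, and the plain lasso objective are assumptions made for exposition and do not reproduce the paper's full clustered-graph, low-rank formulation.

```python
import numpy as np


def soft_threshold(v, tau):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)


def split_bregman_lasso(D, x, lam=0.1, mu=1.0, n_iter=200, tol=1e-6):
    """Minimize 0.5*||x - D a||_2^2 + lam*||a||_1 via split Bregman.

    An auxiliary variable d (constrained to equal a) absorbs the L1 term,
    and a Bregman variable b enforces that constraint across iterations.
    Generic sketch, not the paper's joint objective.
    """
    n_atoms = D.shape[1]
    a = np.zeros(n_atoms)
    d = np.zeros(n_atoms)
    b = np.zeros(n_atoms)
    # The a-update solves a fixed ridge-type linear system, so precompute it.
    A = D.T @ D + mu * np.eye(n_atoms)
    Dtx = D.T @ x
    for _ in range(n_iter):
        a_prev = a
        a = np.linalg.solve(A, Dtx + mu * (d - b))  # quadratic subproblem
        d = soft_threshold(a + b, lam / mu)         # L1 subproblem (shrinkage)
        b = b + (a - d)                             # Bregman update
        if np.linalg.norm(a - a_prev) <= tol * (np.linalg.norm(a_prev) + 1e-12):
            break
    return a


# Hypothetical usage: recover a sparse code for one patch against a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))               # dictionary atoms (columns)
a_true = np.zeros(256)
a_true[[3, 50, 120]] = [1.0, -2.0, 0.5]          # sparse ground-truth code
x = D @ a_true + 0.01 * rng.standard_normal(64)  # noisy observation
a_hat = split_bregman_lasso(D, x, lam=0.1, mu=1.0)
```

The same splitting pattern extends to richer objectives: each additional non-smooth or structured term (e.g., a graph or low-rank penalty) receives its own auxiliary variable and proximal update, which is what makes split Bregman attractive for joint formulations like the one described above.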

Author information

Corresponding author

Correspondence to Sherif S. Kishk.

Ethics declarations

Conflict of interest

The authors declare that they have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers' bureaus; membership, employment, consultancies, stock ownership or other equity interest; and expert testimony or patent-licensing arrangements) or non-financial interest (such as personal or professional relationships, affiliations, knowledge or beliefs) in the subject matter or materials discussed in this manuscript. In addition, no animals were involved in this research.

About this article

Cite this article

Altantawy, D.A., Saleh, A.I. & Kishk, S.S. Texture-guided depth upsampling using Bregman split: a clustering graph-based approach. Vis Comput 36, 333–359 (2020). https://doi.org/10.1007/s00371-018-1611-x
