Edge Orientation Driven Depth Super-Resolution for View Synthesis

  • Chao Yao
  • Jimin Xiao
  • Jian Jin
  • Xiaojuan Ban
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11903)


The limited resolution of depth images constrains most practical computer vision applications. To address this problem, we present a novel learning-based depth super-resolution method. The proposed method incorporates an edge-orientation-based depth patch clustering scheme, which classifies patches into several categories according to gradient strength and direction. For each patch category, a linear mapping between low-resolution (LR) and high-resolution (HR) patch pairs is learned by minimizing the synthesized view distortion. Since depth maps are not viewed directly but are used to generate virtual views, our method adopts synthesized view distortion as its optimization criterion. Experimental results show that the proposed approach outperforms other depth super-resolution approaches in both depth super-resolution quality and view synthesis quality.
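The clustering-and-mapping idea in the abstract can be sketched roughly as follows. This is a minimal illustration using NumPy: the function names, the flat-patch threshold, the number of orientation bins, and the ridge-regression objective are all our own assumptions, and the toy training target here is plain HR patch reconstruction rather than the synthesized-view distortion the paper actually minimizes.

```python
import numpy as np

def classify_patch(patch, n_orient=4, flat_thresh=5.0):
    """Assign a depth patch to a category by gradient strength and direction.

    Patches with weak gradients form a single 'flat' class (label 0); the
    rest are binned by the dominant gradient orientation in [0, pi).
    """
    gy, gx = np.gradient(patch.astype(np.float64))
    if np.hypot(gx, gy).mean() < flat_thresh:
        return 0
    theta = np.arctan2(gy.sum(), gx.sum()) % np.pi
    return 1 + int(theta // (np.pi / n_orient)) % n_orient

def learn_linear_maps(lr_patches, hr_patches, n_orient=4, lam=1e-3):
    """Learn one ridge-regularized linear map M_c per patch class so that
    hr_vec ~= lr_vec @ M_c for LR/HR patch pairs assigned to class c."""
    labels = np.array([classify_patch(p, n_orient) for p in lr_patches])
    X = np.array([p.ravel() for p in lr_patches])   # n x d_lr
    Y = np.array([p.ravel() for p in hr_patches])   # n x d_hr
    maps = {}
    for c in range(n_orient + 1):
        idx = labels == c
        if not idx.any():
            continue
        Xc, Yc = X[idx], Y[idx]
        # closed-form ridge regression per class
        maps[c] = np.linalg.solve(Xc.T @ Xc + lam * np.eye(Xc.shape[1]),
                                  Xc.T @ Yc)
    return maps
```

At test time, an LR patch would be classified with `classify_patch` and upsampled by multiplying its vectorized form with the map of its class; swapping the reconstruction loss for a synthesized-view distortion term would recover the spirit of the paper's objective.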


Keywords: View synthesis · Depth-image-based rendering · Linear mapping · Edge orientation



This research was supported in part by the National Key Research and Development Program of China (2016YFB0700502) and the National Natural Science Foundation of China (61873299, 61702036, 61572075).



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Beijing Advanced Innovation Center for Materials Genome Engineering, School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
  2. Department of Electrical and Electronic Engineering, Xi’an Jiaotong-Liverpool University, Suzhou, China
  3. Institute of Information Science, Beijing Jiaotong University, Beijing, China
