
Depth-guided learning light field angular super-resolution with edge-aware inpainting

  • Original article
  • Published in The Visual Computer

Abstract

High angular resolution light fields (LFs) enable exciting applications such as depth estimation, virtual reality, and augmented reality. Although many light field angular super-resolution methods have been proposed, reconstructing LFs with a wide baseline remains far from solved. In this paper, we propose an end-to-end learning-based approach for angular super-resolution of wide-baseline light fields. Our model consists of three components. First, we train a convolutional neural network to predict a depth map for each sub-aperture view. The estimated depth maps are then used to warp the input views. In the final component, a convolutional neural network fuses the initially warped light fields, and an edge-aware inpainting network corrects inaccurate pixels in near-edge regions. To this end, we design an EdgePyramid structure containing multi-scale edges to guide the inpainting of near-edge pixels. Moreover, we introduce a novel loss function that reduces artifacts and better measures similarity in near-edge regions. Experimental results on various light field datasets, including large-baseline light field images, show that our method outperforms state-of-the-art light field angular super-resolution methods, especially in terms of visual quality near edges.
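The depth-guided warping step in the pipeline above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a per-pixel disparity map (disparity and depth are interchangeable up to calibration), nearest-neighbor resampling instead of the learned fusion, and a hypothetical `warp_view` function with an angular offset `(du, dv)` between the input and target sub-aperture positions.

```python
import numpy as np

def warp_view(view, disparity, du, dv):
    """Warp one sub-aperture view toward a novel angular position.

    view      : (H, W) grayscale sub-aperture image
    disparity : (H, W) per-pixel disparity, in pixels per unit angular shift
    du, dv    : horizontal/vertical angular offset to the target view
    """
    h, w = view.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    # Shift each target pixel by its disparity scaled by the angular offset,
    # then sample the source view at the nearest shifted coordinate.
    src_y = np.clip(np.rint(ys + dv * disparity), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + du * disparity), 0, w - 1).astype(int)
    return view[src_y, src_x]

# Zero disparity leaves the view unchanged; unit disparity with du=1
# shifts the sampling grid one pixel horizontally.
view = np.arange(16.0).reshape(4, 4)
identity = warp_view(view, np.zeros((4, 4)), 1.0, 0.0)
shifted = warp_view(view, np.ones((4, 4)), 1.0, 0.0)
```

In the paper's full pipeline, such warped views from several input positions are then fused by a CNN, and the edge-aware inpainting network repairs the near-edge pixels where warping from a single depth hypothesis is least reliable.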



Author information


Corresponding author

Correspondence to Minghui Wang.

Ethics declarations

Funding

This work was supported in part by the Science and Technology Plan Project of Sichuan Province under Grant 2021YFG0350, in part by the National Key Research and Development Program of China under Grant 2016YFB0700802, and in part by the Innovative Youth Fund Program of the State Oceanic Administration of China under Grant 2015001.

Conflict of interest

The authors have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.



Cite this article

Liu, X., Wang, M., Wang, A. et al. Depth-guided learning light field angular super-resolution with edge-aware inpainting. Vis Comput 38, 2839–2851 (2022). https://doi.org/10.1007/s00371-021-02159-6
