Deep Spatial-Angular Regularization for Compressive Light Field Reconstruction over Coded Apertures

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12347)

Included in the conference series: Computer Vision – ECCV 2020 (ECCV 2020)

Abstract

Coded aperture imaging is a promising approach for capturing the 4-D light field (LF), in which the 4-D data are compressively modulated into 2-D coded measurements that are subsequently decoded by reconstruction algorithms. These reconstruction algorithms are the bottleneck, and existing ones yield rather limited reconstruction quality. To tackle this challenge, we propose a novel learning-based framework for reconstructing high-quality LFs from acquisitions made with learned coded apertures. The proposed method elegantly incorporates the measurement observation into the deep learning framework, so that LF reconstruction does not rely entirely on data-driven priors. Specifically, we first formulate compressive LF reconstruction as an inverse problem with an implicit regularization term. We then construct the regularization term with an efficient deep spatial-angular convolutional sub-network that comprehensively explores the signal distribution, free from the limited representation ability and inefficiency of deterministic mathematical modeling. Experimental results show that the reconstructed LFs not only achieve much higher PSNR/SSIM but also preserve the LF parallax structure better than state-of-the-art methods on both real and synthetic LF benchmarks. In addition, experiments show that our method is efficient and robust to noise, an essential advantage for a real camera system. The code is publicly available at https://github.com/angmt2008/LFCA.
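
To make the formulation above concrete, the following is a minimal PyTorch sketch of the two ingredients described in the abstract: a coded-aperture measurement model that modulates the angular views with learnable mask weights and sums them into 2-D coded measurements, and one unrolled update that combines a data-consistency gradient with a small spatial-angular convolutional sub-network acting as the learned regularizer. The class and function names (CodedApertureForward, SpatialAngularRegularizer, unrolled_step), the mask parameterization, and the layer sizes are assumptions for illustration, not the released implementation linked above.

```python
# Minimal sketch (hypothetical, for illustration only): coded-aperture
# measurement model + one unrolled reconstruction step with a learned
# spatial-angular regularizer. Names and layer sizes are assumptions,
# not the authors' released implementation.
import torch
import torch.nn as nn


class CodedApertureForward(nn.Module):
    """Modulates the A angular views of a 4-D light field with learnable
    per-view aperture weights and sums them into S 2-D coded measurements."""

    def __init__(self, num_views: int, num_shots: int):
        super().__init__()
        # One transmittance weight per (shot, view); kept in [0, 1] via sigmoid.
        self.mask_logits = nn.Parameter(torch.randn(num_shots, num_views))

    def masks(self) -> torch.Tensor:
        return torch.sigmoid(self.mask_logits)  # (S, A)

    def forward(self, lf: torch.Tensor) -> torch.Tensor:
        # lf: (B, A, H, W), one channel per angular view (sub-aperture image).
        return torch.einsum('sa,bahw->bshw', self.masks(), lf)  # (B, S, H, W)


class SpatialAngularRegularizer(nn.Module):
    """Stand-in for the deep spatial-angular sub-network: 3x3 convolutions act
    spatially on each feature map, 1x1 convolutions mix information across the
    angular (view) dimension carried in the channels."""

    def __init__(self, num_views: int, feat: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_views, feat, 3, padding=1), nn.ReLU(inplace=True),  # spatial
            nn.Conv2d(feat, feat, 1), nn.ReLU(inplace=True),                  # angular mixing
            nn.Conv2d(feat, num_views, 3, padding=1),                         # back to views
        )

    def forward(self, lf: torch.Tensor) -> torch.Tensor:
        return self.net(lf)  # learned correction term, same shape as lf


def unrolled_step(lf, meas, forward_op, regularizer, step=0.5):
    """One data-consistency + learned-regularization update, in the spirit of
    unrolled optimization for the inverse problem y = Phi(x)."""
    residual = forward_op(lf) - meas                                          # (B, S, H, W)
    grad_data = torch.einsum('sa,bshw->bahw', forward_op.masks(), residual)   # adjoint of Phi
    return lf - step * grad_data + regularizer(lf)


if __name__ == '__main__':
    lf = torch.rand(1, 25, 64, 64)                  # 5x5 views, 64x64 pixels
    acquire = CodedApertureForward(num_views=25, num_shots=2)
    reg = SpatialAngularRegularizer(num_views=25)
    meas = acquire(lf)                              # simulated 2-D coded measurements
    lf_hat = unrolled_step(torch.zeros_like(lf), meas, acquire, reg)
    print(lf_hat.shape)                             # torch.Size([1, 25, 64, 64])
```

In practice several such stages would be cascaded and trained end to end together with the aperture weights, and the 1x1 angular mixing would be replaced by more expressive spatial-angular separable convolutions; the sketch only illustrates how the measurement model keeps the reconstruction from relying entirely on data-driven priors.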

Acknowledgements

This work was supported in part by the Hong Kong RGC under Grant 9048123 (CityU 21211518), and in part by the Basic Research General Program of Shenzhen Municipality under Grant JCYJ20190808183003968.

Author information

Corresponding author

Correspondence to Junhui Hou.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 150 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Guo, M., Hou, J., Jin, J., Chen, J., Chau, L.P. (2020). Deep Spatial-Angular Regularization for Compressive Light Field Reconstruction over Coded Apertures. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol. 12347. Springer, Cham. https://doi.org/10.1007/978-3-030-58536-5_17

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-58536-5_17

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58535-8

  • Online ISBN: 978-3-030-58536-5

  • eBook Packages: Computer Science, Computer Science (R0)
