
Deep Spatial-Angular Regularization for Compressive Light Field Reconstruction over Coded Apertures

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12347)

Abstract

Coded aperture is a promising approach for capturing the 4-D light field (LF), in which the 4-D data are compressively modulated into 2-D coded measurements that are then decoded by reconstruction algorithms. The bottleneck lies in the reconstruction algorithm, which limits reconstruction quality. To tackle this challenge, we propose a novel learning-based framework for reconstructing high-quality LFs from acquisitions made via learned coded apertures. The proposed method elegantly incorporates the measurement observation model into the deep learning framework, avoiding sole reliance on data-driven priors for LF reconstruction. Specifically, we first formulate compressive LF reconstruction as an inverse problem with an implicit regularization term. We then construct the regularization term with an efficient deep spatial-angular convolutional sub-network that comprehensively explores the signal distribution, free from the limited representation ability and inefficiency of deterministic mathematical modeling. Experimental results show that the reconstructed LFs not only achieve much higher PSNR/SSIM but also better preserve the LF parallax structure, compared with state-of-the-art methods on both real and synthetic LF benchmarks. In addition, experiments show that our method is efficient and robust to noise, an essential advantage for a real camera system. The code is publicly available at https://github.com/angmt2008/LFCA.
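The observation model sketched in the abstract can be illustrated as follows. This is a minimal sketch, not the authors' implementation, and all variable names are hypothetical: K sub-aperture views of the LF are modulated by a per-view aperture transmittance (learned, in the paper) and summed into a single 2-D coded measurement, and reconstruction alternates a data-fidelity step with a learned spatial-angular prior (omitted here).

```python
# Minimal sketch (assumed setup, not the authors' code) of coded-aperture
# light-field acquisition: K = S*S sub-aperture views are weighted by an
# aperture transmittance a_k and summed into one 2-D coded measurement.
import numpy as np

rng = np.random.default_rng(0)

S = 5                       # angular resolution: S x S sub-aperture views
H, W = 32, 32               # spatial resolution of each view
K = S * S

x = rng.random((K, H, W))   # ground-truth light field, views stacked
a = rng.random(K)           # aperture transmittance per view
a /= a.sum()                # keep the coded measurement in range

# Forward (observation) model: y = sum_k a_k * x_k
y = np.tensordot(a, x, axes=1)          # shape (H, W)

# Gradient of the data-fidelity term 0.5 * ||A(x) - y||^2, i.e. A^T(A(x) - y);
# an unrolled deep-regularized reconstruction would alternate this step with
# the learned spatial-angular prior.
def data_fidelity_grad(x_est, a, y):
    residual = np.tensordot(a, x_est, axes=1) - y    # A(x_est) - y
    return a[:, None, None] * residual[None, :, :]   # adjoint A^T

# For this rank-one operator, step = 1 / ||a||^2 zeroes the residual in one
# gradient step. The data term alone cannot resolve the null space (K views
# from one measurement); that ambiguity is what the regularizer must resolve.
step = 1.0 / (a @ a)
x_est = np.zeros_like(x)
x_est -= step * data_fidelity_grad(x_est, a, y)
assert np.allclose(np.tensordot(a, x_est, axes=1), y)
```

The data-consistent estimate `x_est` is far from the true `x`, which is exactly why a purely model-based decoder is insufficient and a learned regularization term is introduced.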

Keywords

Light field · Coded aperture · Deep learning · Regularization · Observation model


Acknowledgements

This work was supported in part by the Hong Kong RGC under Grant 9048123 (CityU 21211518), and in part by the Basic Research General Program of Shenzhen Municipality under Grant JCYJ20190808183003968.

Supplementary material

504434_1_En_17_MOESM1_ESM.pdf — Supplementary material 1 (PDF, 150 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Science, City University of Hong Kong, Hong Kong, China
  2. Department of Computer Science, Hong Kong Baptist University, Hong Kong, China
  3. School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore
