
Dense Light Field Reconstruction from Sparse Sampling Using Residual Network

  • Mantang Guo
  • Hao Zhu
  • Guoqing Zhou
  • Qing Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11366)

Abstract

A light field records numerous light rays emanating from a real-world scene. However, capturing a dense light field with existing devices is a time-consuming process. Moreover, reconstructing a large number of light rays, equivalent to multiple light fields, from sparse sampling poses a severe challenge for existing methods. In this paper, we present a learning-based method that reconstructs multiple novel light fields between two mutually independent input light fields. We show that light rays distributed across different light fields obey the same consistency constraints under a certain condition; the most significant of these is a depth-related correlation between the angular and spatial dimensions. Our method avoids explicitly computing this error-sensitive constraint by employing a deep neural network that predicts per-pixel residual values on epipolar plane images (EPIs) to reconstruct the novel light fields. The method can reconstruct two to four novel light fields between two mutually independent input light fields. Comparisons with a number of alternative methods from the literature show that our reconstructed light fields achieve better structural similarity and occlusion handling.
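The core idea summarized above — coarsely upsampling an EPI along its angular axis and letting a CNN predict only the residual correction — can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' released model: the class name EPIResidualNet, the layer count, the channel width, and the bilinear pre-upsampling step are all assumptions made for this sketch.

```python
# Minimal sketch of EPI residual prediction, assuming a plain CNN body
# with a global skip connection. Hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EPIResidualNet(nn.Module):
    def __init__(self, channels: int = 64, num_layers: int = 8):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, sparse_epi: torch.Tensor, angular_scale: int) -> torch.Tensor:
        # Coarsely upsample the sparse EPI along the angular axis (rows),
        # then add the predicted per-pixel residual (global skip connection).
        coarse = F.interpolate(sparse_epi,
                               scale_factor=(angular_scale, 1),
                               mode="bilinear", align_corners=False)
        return coarse + self.body(coarse)

# Usage: a batch of grayscale EPIs with 2 angular rows, densified to 8.
epi = torch.rand(4, 1, 2, 256)      # (batch, channel, angular, spatial)
dense = EPIResidualNet()(epi, angular_scale=4)
print(dense.shape)                  # torch.Size([4, 1, 8, 256])
```

The global skip connection means the network only has to learn the typically small difference between the coarse interpolation and the true dense EPI, which is the standard motivation for residual learning in reconstruction tasks of this kind.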

Keywords

Dense light field reconstruction · Sparse sampling · Epipolar plane image · Residual network

Supplementary material

Supplementary material 1: 484523_1_En_4_MOESM1_ESM.avi (AVI, 26.8 MB)
Supplementary material 2: 484523_1_En_4_MOESM2_ESM.txt (TXT, 1 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computer Science, Northwestern Polytechnical University, Xi’an, China
