Acquiring Dynamic Light Fields Through Coded Aperture Camera

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12364)


We investigate the problem of compressive acquisition of a dynamic light field. A promising approach to compressive light field acquisition is a coded aperture camera, with which an entire light field can be computationally reconstructed from several images captured through differently coded aperture patterns. Previous work with this method assumed that the scene remained static throughout the acquisition process, which restricted real-world applications. In this study, by contrast, we allow the target scene to change over time and propose a method for acquiring a dynamic light field (a moving scene) using a coded aperture camera and a convolutional neural network (CNN). To handle scene motion, we develop a new image-observation configuration, called V-shape observation, and train the CNN on a dynamic-light-field dataset with pseudo motions. Our method is validated through experiments on both a computer-generated scene and a real camera.
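The observation model behind coded aperture acquisition can be illustrated with a small sketch: each angular view of the light field is weighted by the aperture's transmittance at the corresponding position and summed onto the sensor, so one exposure compresses the whole 4-D light field into a single 2-D image. The code below is a minimal simulation under assumed shapes (a 5×5 angular grid, 32×32 spatial resolution, random codes); it is not the authors' implementation, and the reconstruction step (the CNN) is omitted.

```python
import numpy as np

# Hypothetical resolutions: 5x5 angular views, 32x32 pixels per view.
rng = np.random.default_rng(0)
U, V, H, W = 5, 5, 32, 32
light_field = rng.random((U, V, H, W))  # L(u, v, x, y)

def observe(lf, aperture):
    """One coded-aperture exposure: sum of all views weighted by the code.

    aperture[u, v] is the transmittance of the aperture at angular
    position (u, v); the result is a single 2-D sensor image.
    """
    return np.tensordot(aperture, lf, axes=([0, 1], [0, 1]))

# Several differently coded exposures, as in multi-shot compressive
# acquisition; a CNN would then invert this many-to-one mapping.
codes = [rng.random((U, V)) for _ in range(2)]
observations = [observe(light_field, a) for a in codes]
print(observations[0].shape)  # each exposure is one (32, 32) image
```

This makes the compression ratio explicit: two 32×32 observations stand in for 25 such views, and recovering the light field is only possible because natural light fields are highly redundant.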


Keywords: Light field · CNN · Coded aperture camera

Supplementary material

Supplementary material 1: 504475_1_En_22_MOESM1_ESM.pdf (PDF, 21 KB)



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Graduate School of Engineering, Nagoya University, Nagoya, Japan
  2. Institute for Datability Science, Osaka University, Suita, Japan
