
Learning to Capture Light Fields Through a Coded Aperture Camera

  • Yasutaka Inagaki
  • Yuto Kobayashi
  • Keita Takahashi
  • Toshiaki Fujii
  • Hajime Nagahara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11211)

Abstract

We propose a learning-based framework for acquiring a light field through a coded aperture camera. Acquiring a light field is a challenging task because of the sheer volume of data involved. To make the acquisition process efficient, coded aperture cameras have been successfully adopted: with these cameras, a light field is computationally reconstructed from several images acquired with different aperture patterns. However, it is still difficult to reconstruct a high-quality light field from only a few acquired images. To tackle this limitation, we formulate the entire pipeline of light field acquisition from the perspective of an auto-encoder. This auto-encoder is implemented as a stack of fully convolutional layers and is trained end-to-end on a collection of training samples. We experimentally show that our method learns good image-acquisition and reconstruction strategies. With our method, light fields consisting of 5 × 5 or 8 × 8 images can be successfully reconstructed from only a few acquired images. Moreover, our method achieves superior performance over several state-of-the-art methods. We also applied our method to a real prototype camera to show that it is capable of capturing a real 3-D scene.
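The abstract describes the auto-encoder formulation only at a high level. As a rough illustration of the idea, the sketch below models the acquisition step as a learnable 1×1 convolution that mixes the N light-field views into M observed images (one learned aperture pattern per shot, with weights clamped to [0, 1] so they stay physically realizable transmittances), followed by a fully convolutional decoder that reconstructs all views. This is a minimal sketch, assuming PyTorch; the class name, layer sizes, and clamping scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a coded-aperture auto-encoder.
# Encoder: simulate acquisition as a weighted sum of light-field views.
# Decoder: plain convolutional stack that recovers all views.
import torch
import torch.nn as nn

class CodedApertureAutoEncoder(nn.Module):
    def __init__(self, n_views=25, n_shots=2):
        super().__init__()
        # A 1x1 conv over the view (channel) axis models the optical
        # weighted sum that one coded-aperture shot performs.
        self.acquire = nn.Conv2d(n_views, n_shots, kernel_size=1, bias=False)
        # Fully convolutional reconstruction network.
        self.decode = nn.Sequential(
            nn.Conv2d(n_shots, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_views, 3, padding=1),
        )

    def forward(self, light_field):  # light_field: (B, n_views, H, W)
        # Keep aperture weights physically valid (transmittance in [0, 1]).
        with torch.no_grad():
            self.acquire.weight.clamp_(0.0, 1.0)
        observed = self.acquire(light_field)   # (B, n_shots, H, W)
        return self.decode(observed)           # (B, n_views, H, W)

model = CodedApertureAutoEncoder()
lf = torch.rand(1, 25, 64, 64)                 # toy monochrome 5x5 light field
loss = nn.functional.mse_loss(model(lf), lf)   # train end-to-end as an auto-encoder
loss.backward()
```

Because the reconstruction loss is back-propagated through the acquisition layer as well, the aperture patterns and the decoder are optimized jointly, which is the sense in which the method "learns" both an image-acquisition and a reconstruction strategy.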

Keywords

Light field · CNN · Coded aperture


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Graduate School of Engineering, Nagoya University, Nagoya, Japan
  2. Institute for Datability Science, Osaka University, Suita, Japan