
Hybrid light field imaging for improved spatial resolution and depth range

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

Light field imaging involves capturing both the angular and spatial distribution of light; it enables new capabilities such as post-capture digital refocusing, aperture adjustment, perspective shift, and depth estimation. Micro-lens array (MLA)-based light field cameras provide a cost-effective approach to light field imaging. They have two main limitations: low spatial resolution and narrow baseline. While low spatial resolution limits the general-purpose use and applicability of light field cameras, narrow baseline limits the depth estimation range and accuracy. In this paper, we present a hybrid stereo imaging system that combines a light field camera with a regular camera. The hybrid system addresses both the spatial resolution and narrow baseline issues of MLA-based light field cameras while preserving light field imaging capabilities.
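To make the narrow-baseline limitation concrete, the sketch below uses the standard pinhole stereo triangulation relation Z = fB/d (not the paper's reconstruction pipeline) with hypothetical focal length and baseline values; it shows how a single pixel of disparity error grows into a much larger depth error when the baseline is on the order of an MLA-based camera's sub-aperture spacing than when a second, regular camera supplies a wider baseline.

```python
# Minimal sketch (not the paper's method): pinhole stereo triangulation,
# Z = f * B / d, illustrating why a narrow baseline limits depth range and
# accuracy. The focal length and baseline values below are hypothetical.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth in meters from disparity in pixels under the pinhole stereo model."""
    return f_px * baseline_m / disparity_px

f_px = 2000.0  # assumed focal length in pixels
baselines = {
    "narrow (MLA sub-aperture, ~1 mm)": 0.001,
    "wide (regular second camera, 10 cm)": 0.10,
}

for label, B in baselines.items():
    d_true = f_px * B / 1.0                              # disparity of a point at Z = 1 m
    z_err = depth_from_disparity(f_px, B, d_true + 1.0)  # same point with 1 px disparity error
    print(f"{label}: disparity at 1 m = {d_true:.1f} px, "
          f"a 1 px error yields Z = {z_err:.3f} m")
```

With the narrow baseline, a one-pixel disparity error moves the estimate from 1 m to roughly 0.67 m, whereas the wider baseline keeps the error within about 5 mm; this is the motivation for pairing the light field camera with a regular camera.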


Notes

  1. A preliminary version of this work was presented as a conference paper [19]. In this paper, we provide additional experiments, a detailed explanation of the algorithm, and further analysis.

References

  1. Gershun, A.: The light field. J. Math. Phys. 18(1), 51–151 (1939)

  2. Levoy, M., Hanrahan, P.: Light field rendering. In: ACM International Conference on Computer Graphics and Interactive Techniques, pp. 31–42 (1996)

  3. Lippmann, G.: Épreuves réversibles donnant la sensation du relief. J. Phys. Theor. Appl. 7(1), 821–825 (1908)

  4. Adelson, E.H., Bergen, J.R.: The plenoptic function and the elements of early vision. In: Landy, M., Movshon, J. A. (eds.) Computational Models of Visual Processing, pp. 3–20. MIT Press, Cambridge (1991)

  5. Gortler, S.J., Grzeszczuk, R., Szeliski, R., Cohen, M.F.: The lumigraph. In: ACM Conference on Computer Graphics and Interactive Techniques, pp. 43–54 (1996)

  6. Wilburn, B., Joshi, N., Vaish, V., Talvala, E.V., Antunez, E., Barth, A., Adams, A., Horowitz, M., Levoy, M.: High performance imaging using large camera arrays. ACM Trans. Graph. 24(3), 765–776 (2005)

  7. Yang, J.C., Everett, M., Buehler, C., McMillan, L.: A real-time distributed light field camera. In: Eurographics Workshop on Rendering, pp. 77–86 (2002)

  8. Lumsdaine, A., Georgiev, T.: The focused plenoptic camera. In: IEEE International Conference on Computational Photography, pp. 1–8 (2009)

  9. Ng, R.: Fourier slice photography. ACM Trans. Graph. 24(3), 735–744 (2005)

  10. Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J.: Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing. ACM Trans. Graph. 26(3), 1–12 (2007)

  11. Georgiev, T., Zheng, K.C., Curless, B., Salesin, D., Nayar, S., Intwala, C.: Spatio-angular resolution tradeoffs in integral photography. In: Eurographics Conference on Rendering Techniques, pp. 263–272 (2006)

  12. Unger, J., Wenger, A., Hawkins, T., Gardner, A., Debevec, P.: Capturing and rendering with incident light fields. In: Eurographics Workshop on Rendering, pp. 141–149 (2003)

  13. Lytro, Inc. https://www.lytro.com/

  14. Raytrix, GmbH. https://www.raytrix.de/

  15. Dansereau, D.G., Pizarro, O., Williams, S.B.: Decoding, calibration and rectification for lenselet-based plenoptic cameras. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1027–1034 (2013)

  16. Jeon, H.G., Park, J., Choe, G., Park, J.: Accurate depth map estimation from a lenslet light field camera. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1547–1555 (2015)

  17. Tao, M., Hadap, S., Malik, J., Ramamoorthi, R.: Depth from combining defocus and correspondence using light-field cameras. In: IEEE International Conference on Computer Vision, pp. 673–680 (2013)

  18. Yu, Z., Guo, X., Ling, H., Lumsdaine, A., Yu, J.: Line assisted light field triangulation and stereo matching. In: IEEE International Conference on Computer Vision, pp. 2792–2799 (2013)

  19. Alam, M.Z., Gunturk, B.K.: Hybrid stereo imaging including a light field and a regular camera. In: IEEE Signal Processing and Communication Applications Conference, pp. 1293–1296 (2016)

  20. Boominathan, V., Mitra, K., Veeraraghavan, A.: Improving resolution and depth-of-field of light field cameras using a hybrid imaging system. In: IEEE International Conference on Computational Photography, pp. 1–10 (2014)

  21. Wang, X., Li, L., Hou, G.: High-resolution light field reconstruction using a hybrid imaging system. Appl. Opt. 55(10), 2580–2593 (2016)

  22. Wu, J., Wang, H., Wang, X., Zhang, Y.: A novel light field super-resolution framework based on hybrid imaging system. In: Visual Communications and Image Processing, pp. 1–4 (2015)

  23. Bishop, T.E., Zanetti, S., Favaro, P.: Light field superresolution. In: IEEE International Conference on Computational Photography, pp. 1–9 (2009)

  24. Mitra, K., Veeraraghavan, A.: Light field denoising, light field superresolution and stereo camera based refocussing using a GMM light field patch prior. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 22–28 (2012)

  25. Wanner, S., Goldluecke, B.: Spatial and angular variational super-resolution of 4D light fields. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 901–908 (2012)

  26. Cho, D., Lee, M., Kim, S., Tai, Y.W.: Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction. In: IEEE International Conference on Computer Vision, pp. 3280–3287 (2013)

  27. Kalantari, N.K., Wang, T.C., Ramamoorthi, R.: Learning-based view synthesis for light field cameras. ACM Trans. Graph. 35(6), 193 (2016)

  28. Yoon, Y., Jeon, H.G., Yoo, D., Lee, J.Y., Kweon, I.S.: Learning a deep convolutional network for light-field image super-resolution. In: IEEE International Conference Computer Vision Workshop, pp. 24–32 (2015)

  29. Perez, F., Perez, A., Rodriguez, M., Magdaleno, E.: Fourier slice super-resolution in plenoptic cameras. In: IEEE International Conference on Computational Photography, pp. 1–11 (2012)

  30. Shi, L., Hassanieh, H., Davis, A., Katabi, D., Durand, F.: Light field reconstruction using sparsity in the continuous Fourier domain. ACM Trans. Graph. 34(1), 12 (2014)

  31. Broxton, M., Grosenick, L., Yang, S., Cohen, N., Andalman, A., Deisseroth, K., Levoy, M.: Wave optics theory and 3D deconvolution for the light field microscope. Opt. Express 21(25), 25418–25439 (2013)

  32. Junker, A., Stenau, T., Brenner, K.H.: Scalar wave-optical reconstruction of plenoptic camera images. Appl. Opt. 53(25), 5784–5790 (2014)

  33. Shroff, S.A., Berkner, K.: Image formation analysis and high resolution image reconstruction for plenoptic imaging systems. Appl. Opt. 52(10), 22–31 (2013)

  34. Trujillo-Sevilla, J.M., Rodriguez-Ramos, L.F., Montilla, I., Rodriguez-Ramos, J.M.: High resolution imaging and wavefront aberration correction in plenoptic systems. Opt. Lett. 39(17), 5030–5033 (2014)

  35. Georgiev, T.: New results on the plenoptic 2.0 camera. In: Asilomar Conference on Signals, Systems and Computers, pp. 1243–1247 (2009)

  36. Gallup, D., Frahm, J.M., Mordohai, P., Pollefeys, M.: Variable baseline/resolution stereo. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)

  37. Wanner, S., Goldluecke, B.: Globally consistent depth labeling of 4D light fields. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 41–48 (2012)

  38. Grossberg, M.D., Nayar, S.K.: Determining the camera response from images: what is knowable? IEEE Trans. Pattern Anal. Mach. Intell. 25(11), 1455–1467 (2003)

  39. Liu, C.: Beyond Pixels: Exploring New Representations and Applications for Motion Analysis. MIT Press, Cambridge (2009)

  40. Mitchell, H.B.: Image Fusion: Theories, Techniques and Applications. Springer, Berlin (2010)

  41. Zeeuw, P.M.: Wavelet and image fusion. In: CWI (1998)

  42. Calderon, F.C., Parra, C.A., Niño, C.L.: Depth map estimation in light fields using an stereo-like taxonomy. In: IEEE Symposium on Image, Signal Processing and Artificial Vision, pp. 1–5 (2014)

Author information

Corresponding author

Correspondence to Bahadir K. Gunturk.

Additional information

This work is supported by TUBITAK Grant 114E095.

About this article

Cite this article

Alam, M.Z., Gunturk, B.K. Hybrid light field imaging for improved spatial resolution and depth range. Machine Vision and Applications 29, 11–22 (2018). https://doi.org/10.1007/s00138-017-0862-2
