
Light field imaging: models, calibrations, reconstructions, and applications

  • Review
  • Published in Frontiers of Information Technology & Electronic Engineering

Abstract

Light field imaging is an emerging technology in computational photography. Based on innovative designs of the imaging model and the optical path, light field cameras not only record the spatial intensity of three-dimensional (3D) objects, but also capture the angular information of the physical world, which provides new ways to address various problems in computer vision, such as 3D reconstruction, saliency detection, and object recognition. In this paper, three key aspects of light field cameras, i.e., model, calibration, and reconstruction, are reviewed extensively. Furthermore, light-field-based applications in informatics, physics, medicine, and biology are presented. Finally, open issues in light field imaging and long-term application prospects in other natural sciences are discussed.
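For readers unfamiliar with the representation behind "spatial plus angular" capture, the minimal Python sketch below (illustrative only, not taken from the article; the array layout, the refocus name, and the integer-pixel shifts are assumptions) stores a light field as a four-dimensional array L(u, v, s, t), where (u, v) index the angular samples and (s, t) the spatial samples, and applies standard shift-and-sum digital refocusing over the angular dimensions.

    import numpy as np

    def refocus(lf, alpha):
        """Shift-and-sum refocusing of a 4D light field.

        lf    : array of shape (U, V, S, T); (u, v) are angular and
                (s, t) spatial coordinates of a sub-aperture stack.
        alpha : relative focal depth (1.0 keeps the original focal plane).
        """
        U, V, S, T = lf.shape
        out = np.zeros((S, T))
        cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
        for u in range(U):
            for v in range(V):
                # Shift each sub-aperture view in proportion to its
                # angular offset, then average the shifted views.
                du = (1.0 - 1.0 / alpha) * (u - cu)
                dv = (1.0 - 1.0 / alpha) * (v - cv)
                out += np.roll(lf[u, v],
                               (int(round(du)), int(round(dv))),
                               axis=(0, 1))
        return out / (U * V)

    # Toy example: a random 5x5 grid of 64x64 sub-aperture views.
    lf = np.random.rand(5, 5, 64, 64)
    refocused = refocus(lf, alpha=1.2)
    print(refocused.shape)  # (64, 64)

Averaging views shifted in proportion to their angular offset simulates a synthetic focal plane; fractional shifts with interpolation would give smoother results than the integer rolls used here.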



Author information

Corresponding author

Correspondence to Qing Wang.

Additional information

Project supported by the National Natural Science Foundation of China (Nos. 61531014 and 61272287)

About this article

Cite this article

Zhu, H., Wang, Q. & Yu, J. Light field imaging: models, calibrations, reconstructions, and applications. Frontiers Inf Technol Electronic Eng 18, 1236–1249 (2017). https://doi.org/10.1631/FITEE.1601727
