
Ultra-fast Lensless Computational Imaging through 5D Frequency Analysis of Time-resolved Light Transport

Abstract

Light transport has been analyzed extensively, in both the primal domain and the frequency domain. Frequency analyses often provide intuition regarding effects introduced by light propagation and interaction with optical elements; such analyses encourage optimal designs of computational cameras that efficiently capture tailored visual information. However, previous analyses have relied on instantaneous propagation of light, so that the measurement of the time dynamics of light–scene interaction, and any resulting information transfer, is precluded. In this paper, we relax the common assumption that the speed of light is infinite. We analyze free space light propagation in the frequency domain considering spatial, temporal, and angular light variation. Using this analysis, we derive analytic expressions for information transfer between these dimensions and show how this transfer can be exploited for designing a new lensless imaging system. With our frequency analysis, we also derive performance bounds for the proposed computational camera architecture and provide a mathematical framework that will also be useful for future ultra-fast computational imaging systems.
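The abstract's central idea — that finite-speed light propagation transfers spatial and angular scene information into the temporal dimension, which a lensless sensor can then exploit — can be illustrated with a toy 1D simulation. The sketch below is an illustrative back-projection example under assumed geometry (two point sources at unit depth, normalized speed of light), not the paper's actual reconstruction algorithm or performance analysis.

```python
import numpy as np

# Toy 1D sketch: a bare (lensless) time-resolved sensor line records, at each
# pixel, intensity versus arrival time. Each scene point traces a distinct
# time-of-arrival curve across the pixels, so spatial information is carried
# by the temporal dimension and can be recovered computationally.

c = 1.0                                   # speed of light (normalized units)
sensor_x = np.linspace(-1.0, 1.0, 64)     # bare sensor pixel positions
t_bins = np.linspace(0.9, 3.0, 200)       # time-of-arrival bins
dt = t_bins[1] - t_bins[0]

# Two hidden point sources in the plane z = 1 (illustrative assumption)
sources = [(-0.4, 1.0), (0.5, 1.0)]

# Forward model: accumulate a "streak image" (pixel x time-of-arrival)
streak = np.zeros((len(sensor_x), len(t_bins)))
for xs, zs in sources:
    delay = np.sqrt((sensor_x - xs) ** 2 + zs ** 2) / c   # per-pixel time of flight
    idx = np.clip(np.round((delay - t_bins[0]) / dt).astype(int),
                  0, len(t_bins) - 1)
    streak[np.arange(len(sensor_x)), idx] += 1.0

# Reconstruction by back-projection: an event at (pixel x, time t) constrains
# the source to distance c*t from that pixel; at depth z = 1 this leaves two
# candidate lateral positions, and true source locations accumulate votes.
grid_x = np.linspace(-1.0, 1.0, 81)
recon = np.zeros(len(grid_x))
for i, x in enumerate(sensor_x):
    for j, t in enumerate(t_bins):
        if streak[i, j] == 0.0:
            continue
        r2 = (c * t) ** 2 - 1.0           # squared lateral offset at depth 1
        if r2 < 0.0:
            continue
        for sign in (-1.0, 1.0):
            xc = x + sign * np.sqrt(r2)
            if -1.0 <= xc <= 1.0:
                recon[np.argmin(np.abs(grid_x - xc))] += streak[i, j]

print("strongest response near x =", grid_x[np.argmax(recon)])
```

With purely spatial (time-integrated) measurements, the same bare sensor would record a nearly featureless intensity profile; it is the temporal resolution that makes the lensless localization well posed, which is the information transfer the paper formalizes in the frequency domain.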



Notes

  1. The concept of light cones is commonly used in space–time physics; see e.g. www.phy.syr.edu/courses/modules/LIGHTCONE/minkowski.html.
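As background for the note above (standard special-relativity material, not taken from the paper): the forward light cone of an event at the space–time origin is the set of points reachable by a light signal,

```latex
\left\{ (x, y, z, t) \;:\; c^2 t^2 \ge x^2 + y^2 + z^2,\; t \ge 0 \right\},
```

with the cone surface given by equality. In time-resolved imaging this is simply the constraint that a pulse emitted at a scene point $\mathbf{p}$ can reach a sensor position $\mathbf{q}$ no earlier than $t = \lVert \mathbf{q} - \mathbf{p} \rVert / c$.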



Acknowledgments

The work of the MIT affiliated coauthors was funded by the Media Lab Consortium Members. Gordon Wetzstein was supported by an NSERC Postdoctoral Fellowship. Tsinghua University affiliated coauthors were supported by China National Basic Research Project (No. 2010CB731800) and the Key Project of NSFC (Nos. 61120106003 and 61035002).

Author information


Corresponding author

Correspondence to Gordon Wetzstein.

Electronic supplementary material


Supplementary material 1 (pdf 117 KB)



Cite this article

Wu, D., Wetzstein, G., Barsi, C. et al. Ultra-fast Lensless Computational Imaging through 5D Frequency Analysis of Time-resolved Light Transport. Int J Comput Vis 110, 128–140 (2014). https://doi.org/10.1007/s11263-013-0686-0


Keywords

  • Computational photography
  • Light transport
  • Frequency analysis
  • Lensless imaging