
A Combined Theory of Defocused Illumination and Global Light Transport


Abstract

Projectors are increasingly being used as light sources in computer vision applications. In several applications, they are modeled as point light sources, thus ignoring the effects of illumination defocus. In addition, most active vision techniques assume that a scene point is illuminated only directly by the light source, thus ignoring global light transport effects. Since both defocus and global illumination co-occur in virtually all scenes illuminated by projectors, ignoring them can result in strong, systematic biases in the recovered scene properties. To make computer vision techniques work for general real-world scenes, it is thus important to account for both of these effects.

In this paper, we study the interplay between defocused illumination and global light transport. We show that both of these seemingly disparate effects can be expressed as low-pass filters on the incident illumination. Using this observation, we derive an invariant between the two effects, which can be used to separate them. This is directly useful in scenarios where limited depth-of-field devices (such as projectors) are used to illuminate scenes with global light transport and significant depth variations. We show applications in two scenarios: (a) accurate depth recovery in the presence of global light transport, and (b) factoring out the effects of illumination defocus for correct direct-global component separation. We demonstrate our approach using scenes with complex shapes, reflectance properties, textures and translucencies.
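As a concrete illustration of this filtering view, the following is a minimal 1D numerical sketch (an illustrative model, not the algorithm developed in the paper): both projector defocus and global transport are represented as Gaussian low-pass filters of different widths acting on a high-frequency stripe pattern, and a naive max/min direct-global separation is then seen to be biased by the defocus blur. All kernel widths, component strengths, and the helper function observe are hypothetical choices made only for this sketch.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    n = 512
    pattern = ((np.arange(n) // 8) % 2).astype(float)  # period-16 binary stripes

    defocus_sigma = 3.0    # narrow blur: projector defocus (depth dependent); assumed value
    global_sigma = 25.0    # broad blur: stand-in for interreflections/scattering; assumed value

    direct_albedo = 0.7    # strength of the direct component (assumed)
    global_strength = 0.3  # strength of the global component (assumed)

    def observe(p):
        """Image under illumination pattern p: both the direct and the global
        term are low-pass filtered versions of the incident pattern."""
        direct = direct_albedo * gaussian_filter1d(p, defocus_sigma)
        glob = global_strength * gaussian_filter1d(p, global_sigma)
        return direct + glob

    image_lit = observe(pattern)         # stripe pattern
    image_comp = observe(1.0 - pattern)  # complementary (shifted) pattern

    # Naive high-frequency direct/global separation (max/min style). Because the
    # defocus blur also low-passes the pattern, energy leaks into the "min" image,
    # so the recovered direct component is systematically underestimated.
    est_direct = np.maximum(image_lit, image_comp) - np.minimum(image_lit, image_comp)
    est_global = 2.0 * np.minimum(image_lit, image_comp)

    print("true direct:", direct_albedo, " estimated:", est_direct.mean())
    print("true global:", global_strength, " estimated:", est_global.mean())

In this toy model the bias grows as the defocus blur width approaches the stripe period; factoring out exactly this kind of coupling between defocus and global transport is what the invariant derived in the paper is intended for.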



Author information

Correspondence to Mohit Gupta.

Additional information

Project page: http://graphics.cs.cmu.edu/projects/DefocusGlobal/


Cite this article

Gupta, M., Tian, Y., Narasimhan, S.G. et al. A Combined Theory of Defocused Illumination and Global Light Transport. Int J Comput Vis 98, 146–167 (2012). https://doi.org/10.1007/s11263-011-0500-9
