International Journal of Computer Vision, Volume 98, Issue 2, pp 146–167

A Combined Theory of Defocused Illumination and Global Light Transport

  • Mohit Gupta
  • Yuandong Tian
  • Srinivasa G. Narasimhan
  • Li Zhang

Abstract

Projectors are increasingly being used as light sources in computer vision applications. In several applications, they are modeled as point light sources, thus ignoring the effects of illumination defocus. In addition, most active vision techniques assume that a scene point is illuminated only directly by the light source, thus ignoring global light transport effects. Since defocus and global illumination co-occur in virtually all scenes illuminated by projectors, ignoring them can result in strong, systematic biases in the recovered scene properties. To make computer vision techniques work for general real-world scenes, it is therefore important to account for both effects.

In this paper, we study the interplay between defocused illumination and global light transport. We show that both of these seemingly disparate effects can be expressed as low-pass filters on the incident illumination. Using this observation, we derive an invariant between the two effects that can be used to separate them. This is directly useful in scenarios where limited depth-of-field devices (such as projectors) illuminate scenes with global light transport and significant depth variations. We show applications in two scenarios: (a) accurate depth recovery in the presence of global light transport, and (b) factoring out the effects of illumination defocus for correct direct-global component separation. We demonstrate our approach on scenes with complex shapes, reflectance properties, textures and translucencies.
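To make the invariant concrete, the sketch below illustrates the idea in code. It is our own illustration, not the authors' published algorithm: the function name, the array shapes, and the simplified image-formation model L_max(f) = (0.5 + 0.5·c(f))·D + 0.5·G, L_min(f) = (0.5 − 0.5·c(f))·D + 0.5·G (where c(f) is the pattern contrast surviving projector defocus at focus setting f, D the direct component, and G the global component) are all assumptions made for exposition. Under this model, the sum L_max + L_min equals D + G at every focus setting, i.e., it is invariant to defocus, while the difference c(f)·D varies with focus and peaks at the projector's focal plane:

```python
import numpy as np

def defocus_invariant_separation(L_max, L_min):
    """Illustrative sketch (assumed model, not the paper's exact algorithm).

    L_max, L_min : float arrays of shape (F, H, W)
        Per-pixel max and min images over shifted high-frequency
        illumination patterns, one pair per projector focus setting.
    """
    # The sum is D + G at every focus setting: the defocus-dependent
    # contrast c(f) cancels, which is the invariant in play.
    total = (L_max + L_min).mean(axis=0)          # (H, W)

    # The difference is c(f) * D: only the direct component is
    # attenuated by illumination defocus.
    contrast = L_max - L_min                      # (F, H, W)

    # Depth from projector defocus: the focus setting at which the
    # projected pattern is sharpest at each pixel.
    depth_idx = np.argmax(contrast, axis=0)       # (H, W)

    # At best focus c is close to 1, so the difference there
    # approximates the direct component D.
    rows, cols = np.indices(depth_idx.shape)
    direct = contrast[depth_idx, rows, cols]      # (H, W)

    # The global component is what remains of the invariant total,
    # with illumination defocus factored out.
    global_ = total - direct
    return depth_idx, direct, global_
```

For example, with F = 8 focus settings and 640×480 images, L_max and L_min would be (8, 480, 640) stacks; depth_idx then indexes into the known focal-plane depths, giving both a depth map and a defocus-corrected direct-global separation from the same captured data.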

Keywords

Defocused illumination · Global light transport · Depth recovery · Direct-global component separation · Projectors · Physics-based vision

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Mohit Gupta (1)
  • Yuandong Tian (1)
  • Srinivasa G. Narasimhan (1)
  • Li Zhang (2)

  1. Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
  2. Computer Science Department, University of Wisconsin, Madison, USA
