Multi Focus Structured Light for Recovering Scene Shape and Global Illumination

  • Supreeth Achar
  • Srinivasa G. Narasimhan
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8689)


Illumination defocus and global illumination effects are major challenges for active illumination scene recovery algorithms. Illumination defocus limits the working volume of projector-camera systems and global illumination can induce large errors in shape estimates. In this paper, we develop an algorithm for scene recovery in the presence of both defocus and global light transport effects such as interreflections and sub-surface scattering. Our method extends the working volume by using structured light patterns at multiple projector focus settings. A careful characterization of projector blur allows us to decode even partially out-of-focus patterns. This enables our algorithm to recover scene shape and the direct and global illumination components over a large depth of field while still using a relatively small number of images (typically 25-30). We demonstrate the effectiveness of our approach by recovering high quality depth maps of scenes containing objects made of optically challenging materials such as wax, marble, soap, colored glass and translucent plastic.
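The direct/global decomposition the abstract refers to can be illustrated with the classic high-frequency-pattern separation of Nayar et al. [12]: a pixel lit by the pattern receives direct plus roughly half the global light, while an unlit pixel receives only global light. The sketch below is a simplified, hypothetical illustration of that idea, assuming ideal 50% duty-cycle binary patterns; it does not implement the paper's defocus-aware, multi-focus pipeline.

```python
import numpy as np

def separate_direct_global(images):
    """Estimate per-pixel direct and global illumination from a stack of
    images captured under shifted high-frequency binary patterns.

    Simplified Nayar-style separation (assumes ideal, in-focus 50%
    duty-cycle patterns); a sketch, not the paper's full method.
    """
    stack = np.stack(images, axis=0).astype(np.float64)
    l_max = stack.max(axis=0)   # pixel lit by the pattern at some shift
    l_min = stack.min(axis=0)   # pixel never lit: sees only global light
    direct = l_max - l_min      # direct component
    glob = 2.0 * l_min          # global component (50% of scene lit at a time)
    return direct, glob
```

In practice, projector defocus blurs the binary patterns so that "lit" and "unlit" pixels are no longer cleanly separated, which is precisely the regime the paper's blur characterization is designed to handle.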


Keywords: Structured Light · Depth from Focus/Defocus · Global Light Transport



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Supreeth Achar (1)
  • Srinivasa G. Narasimhan (1)
  1. Robotics Institute, Carnegie Mellon University, USA
