Shading through Defocus

  • José R. A. Torreão
  • João L. Fernandes
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5359)

Abstract

Traditional shape from defocus has been based on modeling the defocusing process through a normalized point spread function (PSF). Here we show that, in the general case, the normalization factor will depend on the depth map, which precludes shape estimation. If the camera is focused at far distances, however, such dependence can be neglected and an unnormalized PSF can be employed. We thus reformulate Pentland’s shape-from-defocus approach using unnormalized Gaussians, and prove that, under certain assumptions, such a model allows the estimation of a dense depth map from a single input image. Moreover, by using unnormalized Gabor functions as a generalization of the unnormalized-Gaussian PSF, we are able to approximate any signal as the result of a series of local, frequency-dependent defocusing processes, to which the modified Pentland approach also applies. Such an approximation proves suitable for shading images, and has allowed us to obtain good shape-from-shading estimates essentially through a shape-from-defocus approach, without resorting to the reflectance-map concept.
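
As a rough illustrative sketch (not the paper’s algorithm), a Pentland-style frequency-domain blur estimate can be demonstrated in 1D with an unnormalized Gaussian PSF. The test signal, frequency bins, and blur width below are illustrative assumptions; the point shown is that the depth-dependent multiplicative factor of the unnormalized PSF cancels when magnitude spectra are compared across two frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
x = np.arange(N) - N // 2

# Sharp "scene" signal: white noise, so its expected spectrum is flat.
s = rng.standard_normal(N)

sigma_true = 3.0  # illustrative blur width (proxy for depth)
# Unnormalized Gaussian PSF: no 1/(sigma*sqrt(2*pi)) factor, so the
# blur also scales the signal by a sigma-dependent multiplicative factor.
psf = np.exp(-x**2 / (2 * sigma_true**2))
blurred = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(np.fft.ifftshift(psf))))

# Pentland-style estimate: the log-ratio of blurred-to-sharp magnitude
# spectra at two frequencies isolates sigma^2, because the unnormalized
# PSF's amplitude factor sigma*sqrt(2*pi) cancels between the two bins.
S = np.abs(np.fft.fft(s))
B = np.abs(np.fft.fft(blurred))
k1, k2 = 20, 60                    # two low-frequency bins (assumed choice)
w1 = 2 * np.pi * k1 / N
w2 = 2 * np.pi * k2 / N
log_ratio = np.log(B[k1] / B[k2]) - np.log(S[k1] / S[k2])
sigma_est = np.sqrt(2 * log_ratio / (w2**2 - w1**2))
print(sigma_est)  # close to sigma_true
```

The single-image setting of the paper is harder, since the sharp spectrum `S` is not available there; this sketch only shows the frequency-domain mechanics that the reformulated approach builds on.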

Keywords

Input Image, Point Spread Function, Multiplicative Factor, Intensity Error, Depth Error

References

  1. Pentland, A.: A new sense for depth of field. IEEE Trans. on Pattern Analysis and Machine Intelligence 9, 523–531 (1987)
  2. Xiong, Y., Shafer, S.: Depth from focusing and defocusing. In: Proceedings of the IEEE Intl. Conf. on Computer Vision and Pattern Recognition, pp. 67–73 (1993)
  3. Subbarao, M., Surya, G.: Depth from defocus: a spatial domain approach. Int. J. Computer Vision 13, 271–294 (1994)
  4. Ziou, D., Deschenes, F.: Depth from defocus in spatial domain. Comp. Vision and Image Understanding 81, 143–165 (2001)
  5. Zhang, R., Tsai, P.S., Cryer, J., Shah, M.: Shape from shading: a survey. IEEE Trans. on Pattern Analysis and Machine Intelligence 21, 690–706 (1999)
  6. Samaras, D., Metaxas, D.: Incorporating illumination constraints in deformable models for shape from shading and light direction estimation. IEEE Trans. on Pattern Analysis and Machine Intelligence 25, 247–264 (2003)
  7. Fanany, M., Kumazawa, I.: A neural network for recovering 3D shape from erroneous and few depth maps of shaded surfaces. Patt. Recognition Letters 25, 377–389 (2004)
  8. Torreão, J., Fernandes, J.: Single-image shape from defocus. In: Proceedings of the 18th Brazilian Symposium on Computer Graphics and Image Processing, pp. 241–246 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • José R. A. Torreão (1)
  • João L. Fernandes (1)
  1. Instituto de Computação, Universidade Federal Fluminense, Niterói, RJ, Brazil