Real Time Monocular Depth from Defocus

  • Jean-Vincent Leroy
  • Thierry Simon
  • François Deschenes
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5099)

Abstract

The method proposed in this paper uses two blurred images, acquired from the same point of view with different focus settings, to estimate depth in real time. It belongs to the family of "depth from defocus" methods. Blur is modelled as the convolution of a Gaussian point spread function (PSF) with the theoretical sharp image. We compute the gradients and Laplacians of the two blurred images and, depending on the type of contour (step, ramp, roof or line), take the differences of the images or of their derivatives up to order two to obtain the difference in blur, that is, the difference of the variances of the two Gaussians. This difference is then related to depth by taking the parameters of the optical system into account. We present a set of results on real images showing the performance of the method and its limits. The computing times we measured make it possible to run our algorithm at video rate.
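The abstract gives only the outline of the estimator, so the following is a minimal Python sketch of the spatial-domain idea it describes (differences of the two images and of their second derivatives yielding the difference of the Gaussian blur variances), in the spirit of the classical spatial-domain approach of Subbarao and Surya rather than the authors' exact algorithm; the function name, the pre-smoothing value and the synthetic example are illustrative assumptions.

# Minimal sketch (assumed, not the authors' code) of the spatial-domain relation
# between two differently focused images and the difference of their Gaussian
# blur variances. With g_i = f convolved with a Gaussian of per-axis std sigma_i,
# a second-order expansion gives
#   g_i ~= f + (sigma_i**2 / 2) * laplacian(f),
# hence
#   sigma1**2 - sigma2**2 ~= 2 * (g1 - g2) / laplacian((g1 + g2) / 2),
# which is the kind of image/derivative difference the abstract refers to.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def blur_variance_difference(g1, g2, presmooth=1.0, eps=1e-3):
    """Estimate sigma1^2 - sigma2^2 at pixels where the Laplacian is reliable."""
    # Smoothing both images with the same kernel adds the same variance to each
    # blur, so the *difference* of variances is unchanged while noise is reduced.
    g1 = gaussian_filter(g1.astype(np.float64), presmooth)
    g2 = gaussian_filter(g2.astype(np.float64), presmooth)
    lap_mean = laplace(0.5 * (g1 + g2))
    diff = np.full_like(g1, np.nan)
    mask = np.abs(lap_mean) > eps          # keep only contour pixels
    diff[mask] = 2.0 * (g1[mask] - g2[mask]) / lap_mean[mask]
    return diff, mask

if __name__ == "__main__":
    # Synthetic check: a vertical step edge blurred with sigma = 2 and sigma = 3.
    edge = np.zeros((128, 128))
    edge[:, 64:] = 1.0
    g1 = gaussian_filter(edge, 2.0)
    g2 = gaussian_filter(edge, 3.0)
    diff, mask = blur_variance_difference(g1, g2)
    print(np.nanmedian(diff))              # expected to be roughly 2**2 - 3**2 = -5

In the full method, this variance difference would then be converted to depth using the optical parameters of the two focus settings (focal length, aperture and sensor positions); that conversion, and the contour-type-dependent choice between image and derivative differences, is not reproduced here.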

Keywords

Depth from defocus · Image processing



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Jean-Vincent Leroy (1)
  • Thierry Simon (2)
  • François Deschenes (1, 3)
  1. Centre de MOIVRE, Université de Sherbrooke, Sherbrooke, Canada
  2. UTM-IUT de Figeac; LRP-mip (Laboratoire de recherche pluridisciplinaire du nord-est de Midi-Pyrénées), Université de Toulouse, Figeac, France
  3. Université du Québec en Outaouais, Gatineau, Canada
