
International Journal of Computer Vision, Volume 27, Issue 3, pp. 203–225

Rational Filters for Passive Depth from Defocus

  • Masahiro Watanabe
  • Shree K. Nayar

Abstract

A fundamental problem in depth from defocus is the measurement of relative defocus between images. The performance of previously proposed focus operators is inevitably sensitive to the frequency spectra of local scene textures. As a result, focus operators such as the Laplacian of Gaussian yield poor depth estimates. An alternative is to use large filter banks that densely sample the frequency space. Though this approach can improve depth accuracy, it sacrifices the computational efficiency that depth from defocus offers over stereo and structure from motion. We propose a class of broadband operators that, used together, provide invariance to scene texture and produce accurate and dense depth maps. Since the operators are broadband, a small number of them suffice for depth estimation in scenes with complex textural properties. In addition, a depth confidence measure is derived that can be computed from the outputs of the operators; this measure permits further refinement of the computed depth maps. Experiments are conducted on both synthetic and real scenes to evaluate the performance of the proposed operators. The depth detection gain error is less than 1%, irrespective of texture frequency, and depth accuracy is found to be 0.5 to 1.2% of the distance of the object from the imaging optics.

Keywords: passive depth from defocus, blur function, scene textures, normalized image ratio, broadband rational operators, texture invariance, depth confidence measure, depth estimation, real-time performance
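
The keywords refer to a normalized image ratio, which in a depth from defocus setting is formed from two images of the same scene captured with different focus settings. The Python sketch below is only a minimal illustration of that ratio idea, not the rational-filter design proposed in the paper; the function name, smoothing window, and the simple energy-based confidence proxy are assumptions for illustration, and the two input images are assumed to be registered and acquired at constant magnification.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalized_defocus_ratio(i_near, i_far, window=7, eps=1e-6):
    """Per-pixel normalized ratio (I1 - I2) / (I1 + I2) of two
    differently focused images (hypothetical helper, for illustration only)."""
    i1 = np.asarray(i_near, dtype=np.float64)
    i2 = np.asarray(i_far, dtype=np.float64)
    # Smooth the numerator and denominator over a local window so the ratio
    # reflects local defocus rather than per-pixel noise.
    num = uniform_filter(i1 - i2, size=window)
    den = uniform_filter(i1 + i2, size=window)
    ratio = num / np.maximum(den, eps)
    # Crude confidence proxy: local image energy. Low-texture regions give
    # unreliable ratios (this is not the paper's confidence measure).
    confidence = den
    return ratio, confidence
```

In the paper itself, texture invariance is obtained through the broadband rational operators rather than simple smoothing, and the depth confidence measure is derived from the operator outputs.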

Copyright information

© Kluwer Academic Publishers 1998

Authors and Affiliations

  • Masahiro Watanabe (1)
  • Shree K. Nayar (2)
  1. Production Engineering Research Lab., Hitachi Ltd., Totsuka, Yokohama, Japan
  2. Department of Computer Science, Columbia University, New York, NY, USA
