International Journal of Computer Vision, Volume 13, Issue 3, pp 271-294

Depth from defocus: A spatial domain approach


Abstract

A new method named STM is described for determining the distance of objects and for rapid autofocusing of camera systems. STM uses image defocus information and is based on a new Spatial-Domain Convolution/Deconvolution Transform. The method requires only two images taken with different camera parameters such as lens position, focal length, and aperture diameter. Both images can be arbitrarily blurred, and neither needs to be in focus. Therefore STM is very fast in comparison with Depth-from-Focus methods, which search for the lens position or focal length of best focus. The method involves simple local operations and can easily be implemented in parallel to obtain the depth-map of a scene. STM has been implemented on an actual camera system named SPARCS. Experiments evaluating the performance of STM on real-world planar objects are presented. The results indicate that the accuracy of STM compares well with that of Depth-from-Focus methods and is sufficient for practical applications. The utility of the method is demonstrated for rapid autofocusing of electronic cameras.
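The core idea of the abstract can be illustrated with a small sketch. A spatial-domain deconvolution of the form f ≈ g − (σ²/4)∇²g implies that, for two differently defocused copies g1 and g2 of the same focused image f, the blur difference σ1² − σ2² can be recovered locally from the image difference and the Laplacian of the mean image. The code below is a minimal illustrative sketch, not the authors' implementation: the function names, the least-squares window estimator, and the use of a 5-point discrete Laplacian are assumptions for demonstration.

```python
import numpy as np

def laplacian(img):
    # 5-point discrete Laplacian with wrap-around boundaries.
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)

def blur_difference(g1, g2, eps=1e-12):
    """Estimate sigma1^2 - sigma2^2 from two defocused images (sketch).

    From f ~ g - (sigma^2/4) * Lap(g) for each image, subtracting and
    approximating both Laplacians by that of the mean image gives
        sigma1^2 - sigma2^2 ~ 4 * (g1 - g2) / Lap((g1 + g2) / 2).
    A least-squares fit over the whole window is used instead of a
    pointwise division, to avoid dividing by near-zero Laplacians.
    """
    lap = laplacian(0.5 * (g1 + g2))
    num = np.sum(4.0 * (g1 - g2) * lap)
    den = np.sum(lap * lap) + eps
    return num / den
```

In a full depth-from-defocus pipeline this blur difference, combined with the known camera settings for the two exposures, would be mapped to object distance through the lens equation; only the blur-estimation step is sketched here.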