Shading through Defocus
Traditional shape from defocus is based on modeling the defocusing process through a normalized point spread function (PSF). Here we show that, in the general case, the normalization factor depends on the depth map, which precludes shape estimation. If the camera is focused at far distances, however, this dependence can be neglected and an unnormalized PSF can be employed. We thus reformulate Pentland's shape-from-defocus approach using unnormalized Gaussians, and prove that, under certain assumptions, such a model allows the estimation of a dense depth map from a single input image. Moreover, by using unnormalized Gabor functions as a generalization of the unnormalized-Gaussian PSF, we are able to approximate any signal as the result of a series of local, frequency-dependent defocusing processes, to which the modified Pentland approach also applies. This approximation proves suitable for shading images, and has allowed us to obtain good shape-from-shading estimates essentially through a shape-from-defocus approach, without resorting to the reflectance-map concept.
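To make the role of the unnormalized PSF concrete, the sketch below (illustrative only; the test signal, frequency, and blur width are hypothetical choices, not the paper's algorithm or data) blurs a sinusoid with an unnormalized Gaussian and recovers the blur width from the amplitude gain alone. Because the Fourier transform of the unnormalized Gaussian, G(w) = sigma * sqrt(2*pi) * exp(-sigma^2 * w^2 / 2), retains a sigma-dependent multiplicative factor that a normalized PSF would cancel, the blur width can in principle be read off a single image:

```python
import numpy as np

def unnormalized_gaussian(x, sigma):
    # Unnormalized Gaussian PSF: exp(-x^2 / (2 sigma^2)), no 1/(sigma sqrt(2 pi)).
    return np.exp(-x**2 / (2.0 * sigma**2))

sigma_true = 2.0          # hypothetical blur width (pixels)
w = 0.25                  # spatial frequency of the test signal
x = np.arange(-512, 512, dtype=float)
signal = np.cos(w * x)

# Discrete convolution with the unnormalized PSF (unit sample spacing).
kernel_x = np.arange(-20, 21, dtype=float)
kernel = unnormalized_gaussian(kernel_x, sigma_true)
blurred = np.convolve(signal, kernel, mode='same')

# Amplitude gain at x = 0 versus the analytic transform value.
gain = blurred[len(x) // 2] / signal[len(x) // 2]
predicted = sigma_true * np.sqrt(2.0 * np.pi) * np.exp(-sigma_true**2 * w**2 / 2.0)

# Invert the gain to recover sigma (1-D search; the gain curve is
# monotonic in sigma below its peak at sigma = 1/w, so we search there).
sigmas = np.linspace(0.5, 3.9, 3401)
gains = sigmas * np.sqrt(2.0 * np.pi) * np.exp(-sigmas**2 * w**2 / 2.0)
sigma_est = sigmas[np.argmin(np.abs(gains - gain))]
```

With a normalized PSF, the DC gain is 1 regardless of sigma, so a single blurred measurement carries no such amplitude cue; the sigma-dependent factor of the unnormalized model is what the abstract's single-image formulation exploits.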
Keywords: Input Image, Point Spread Function, Multiplicative Factor, Intensity Error, Depth Error