Coded Aperture Pairs for Depth from Defocus and Defocus Deblurring

Abstract

The classical approach to depth from defocus (DFD) uses lenses with circular apertures for image capture. We show in this paper that the use of a circular aperture severely restricts the accuracy of DFD. We derive a criterion for evaluating a pair of apertures with respect to the precision of depth recovery. This criterion is optimized using a genetic algorithm and gradient descent search to arrive at a pair of high-resolution apertures. These two coded apertures are found to complement each other in the scene frequencies they preserve. This property enables them not only to recover depth with greater fidelity but also to obtain a high-quality all-focused image from the two captured images. Extensive simulations as well as experiments on a variety of real scenes demonstrate the benefits of using the coded apertures over conventional circular apertures.
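
The abstract describes the method only at a high level; the sketch below is a rough, hypothetical illustration of the two ingredients it names, not the paper's actual derivation. It scores an aperture pair by the expected frequency-domain error of joint Wiener deconvolution of the two captured images, assuming a 1/f²-like natural-image power spectrum and a fixed noise level, and searches for a low-scoring pair of binary patterns with a toy genetic algorithm. All sizes and constants (N, SIGMA2, the blur scales, the mutation rate) are illustrative assumptions, and the gradient-descent refinement stage mentioned in the abstract is omitted.

```python
import numpy as np

RNG = np.random.default_rng(0)
N = 64          # working resolution for the frequency-domain score (assumed)
SIGMA2 = 1e-3   # assumed sensor-noise variance, illustrative only

# Assumed 1/f^2-like power spectrum of natural images, used as the prior A(f).
fx = np.fft.fftfreq(N)[:, None]
fy = np.fft.fftfreq(N)[None, :]
A = 1.0 / np.maximum(fx**2 + fy**2, 1.0 / N**2)

def aperture_otf(aperture, scale):
    """OTF of a binary aperture pattern, magnified by an integer factor to
    model the defocus blur size at one hypothesized depth (geometric optics)."""
    psf = np.kron(aperture, np.ones((scale, scale)))
    psf /= max(psf.sum(), 1e-12)          # unit-energy blur kernel
    canvas = np.zeros((N, N))
    s = psf.shape[0]
    canvas[:s, :s] = psf
    return np.fft.fft2(canvas)

def pair_deblur_error(a1, a2, scale):
    """Expected per-depth error of joint (two-image) Wiener deconvolution:
    small only where at least one OTF keeps each frequency alive."""
    K1, K2 = aperture_otf(a1, scale), aperture_otf(a2, scale)
    power = np.abs(K1)**2 + np.abs(K2)**2
    return float(np.sum(SIGMA2 / (power + SIGMA2 / A)))

def pair_score(a1, a2, scales=(2, 3, 4)):
    """Score to minimize: total deblurring error over candidate blur scales.
    (The paper's criterion also accounts for depth discriminability; this
    sketch keeps only the reconstruction-error term.)"""
    return sum(pair_deblur_error(a1, a2, s) for s in scales)

def genetic_search(k=7, pop=40, gens=30, mut=0.05):
    """Toy genetic algorithm over pairs of k-by-k binary apertures:
    truncation selection plus bit-flip mutation. The paper additionally
    refines the GA result with gradient descent, omitted here."""
    popn = [(RNG.integers(0, 2, (k, k)), RNG.integers(0, 2, (k, k)))
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda p: pair_score(*p))
        elite = popn[:pop // 4]           # keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            a1, a2 = elite[RNG.integers(len(elite))]
            m1 = RNG.random((k, k)) < mut  # bit-flip mutation masks
            m2 = RNG.random((k, k)) < mut
            children.append((np.where(m1, 1 - a1, a1),
                             np.where(m2, 1 - a2, a2)))
        popn = elite + children
    return min(popn, key=lambda p: pair_score(*p))

best_pair = genetic_search()
print("criterion value of best pair:", pair_score(*best_pair))
```

Note that a pair scores well only when, at every blur scale, frequencies suppressed by one aperture are preserved by the other, which is the complementarity property the abstract attributes to the optimized pair.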

Keywords

Depth from defocus, Coded aperture, Defocus deblurring, Deconvolution

Supplementary material

11263_2010_409_MOESM1_ESM.pdf (PDF, 16.7 MB)

Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Department of Computer Science, Columbia University, New York, USA
  2. Microsoft Research Asia, Beijing, China
