The Visual Computer, Volume 31, Issue 12, pp 1697–1708

Fast depth from defocus from focal stacks

  • Stephen W. Bailey
  • Jose I. Echevarria
  • Bobby Bodenheimer
  • Diego Gutierrez
Original Article

Abstract

We present a new depth from defocus method based on the assumption that a per-pixel blur estimate (related to the circle of confusion), while ambiguous for a single image, behaves consistently when computed over a focal stack of two or more images. This allows us to fit a simple analytical model of the circle of confusion to the different per-pixel measures, yielding approximate depth values up to a scale factor. Our results are comparable to previous work while offering a faster and more flexible pipeline.
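The core idea above can be sketched as follows: for each pixel, a thin-lens model predicts how the circle-of-confusion radius varies with the focus distance of each image in the stack, and the depth is the value whose predicted blur curve best matches the measured per-image blur, allowing a free scale factor. This is a minimal illustrative sketch, not the authors' implementation; the lens parameters, the grid search, and the synthetic measurements are all assumptions chosen for the example.

```python
import numpy as np

def coc_radius(depth, focus_dist, focal_len=0.05, aperture=0.02):
    """Thin-lens circle-of-confusion radius (up to a sensor-scale factor)
    for a point at `depth` when the lens is focused at `focus_dist`.
    Distances in meters; focal_len and aperture are illustrative values."""
    return aperture * np.abs(depth - focus_dist) / depth \
        * focal_len / (focus_dist - focal_len)

def fit_depth(blur_estimates, focus_dists, candidates):
    """For one pixel, pick the candidate depth whose predicted blur curve
    over the stack best matches (least squares) the measured blur values,
    with a free scale factor since blur is only known up to scale."""
    best_d, best_err = None, np.inf
    for d in candidates:
        pred = coc_radius(d, focus_dists)  # model curve across the stack
        # Closed-form optimal scale for min ||s*pred - measured||^2
        scale = pred @ blur_estimates / max(pred @ pred, 1e-12)
        err = np.sum((scale * pred - blur_estimates) ** 2)
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# Synthetic check: blur measured at three focus settings for a point at 2 m,
# scaled by an arbitrary unknown factor to mimic the up-to-scale ambiguity.
focus = np.array([1.0, 2.0, 4.0])
measured = 1.3 * coc_radius(2.0, focus)
cands = np.linspace(0.5, 8.0, 400)
print(fit_depth(measured, focus, cands))  # close to 2.0
```

With a single image, one blur measurement is consistent with a depth either in front of or behind the focal plane; sampling the curve at two or more focus distances, as above, disambiguates it.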

Keywords

Depth from defocus · Shape from defocus


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Stephen W. Bailey — University of California at Berkeley, Berkeley, USA
  • Jose I. Echevarria — Universidad de Zaragoza, Zaragoza, Spain
  • Bobby Bodenheimer — Vanderbilt University, Nashville, USA
  • Diego Gutierrez — Universidad de Zaragoza, Zaragoza, Spain
