
Depth from Defocus and Coded Apertures for 3D Scene Sensing

  • Chapter
  • First Online:
Connected Media in the Future Internet Era

Abstract

Depth from defocus (DfD) is a popular passive depth sensing approach in computer vision. It typically exploits the defocus blur depth cue encoded in images captured by a single photographic camera. Since the defocus blur is characterized by the shape of the camera aperture, the blur can be deliberately structured by inserting a coded mask at the aperture plane. By employing optimized masks in such coded aperture cameras, the performance of DfD approaches can therefore be improved. DfD and coded aperture techniques constitute the main theme of this chapter. Since stereopsis is the most widely used depth cue in computer vision, the joint use of the defocus blur cue and the stereopsis cue is another topic of interest and is also included in the discussion.
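To make the idea concrete, the sketch below illustrates one common deconvolution-based formulation of coded-aperture DfD found in the literature: the observed image patch is deconvolved with the aperture point spread function (PSF) scaled to each candidate depth, and the scale whose latent estimate best explains the observation with the least ringing is selected. This is a minimal sketch assuming NumPy/SciPy; the function names, the Wiener-style deconvolution, the scoring terms, and the example mask are illustrative assumptions, not the chapter's specific algorithms.

```python
# Illustrative scale-selection DfD with a coded aperture (hedged sketch).
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import zoom


def scaled_psf(mask, radius):
    """Resample a square binary aperture mask so its support has the given
    pixel radius; the radius plays the role of the depth-dependent blur size.
    Returns a kernel normalized to unit sum."""
    size = 2 * radius + 1
    psf = np.clip(zoom(mask.astype(float), size / mask.shape[0], order=1), 0, None)
    return psf / psf.sum()


def wiener_deconv(img, psf, snr=1e-2):
    """Simple frequency-domain Wiener deconvolution with a flat SNR prior."""
    H = fft2(psf, s=img.shape)
    return np.real(ifft2(np.conj(H) * fft2(img) / (np.abs(H) ** 2 + snr)))


def estimate_blur_radius(patch, mask, radii, lam=0.1):
    """For each candidate blur radius, deconvolve the observed patch with the
    correspondingly scaled aperture PSF, then score the hypothesis by
    (i) how well re-blurring the latent estimate explains the observation and
    (ii) how little ringing (gradient energy) the latent estimate contains.
    The radius with the lowest score is returned; mapping it to metric depth
    via the lens geometry is omitted here."""
    scores = []
    for r in radii:
        psf = scaled_psf(mask, r)
        latent = wiener_deconv(patch, psf)
        reblur = np.real(ifft2(fft2(latent) * fft2(psf, s=patch.shape)))
        fidelity = np.mean((reblur - patch) ** 2)
        gy, gx = np.gradient(latent)
        ringing = np.mean(np.abs(gx)) + np.mean(np.abs(gy))
        scores.append(fidelity + lam * ringing)
    return radii[int(np.argmin(scores))]


if __name__ == "__main__":
    # Hypothetical 7x7 coded mask and a synthetic texture patch blurred with
    # the radius-4 PSF; the estimator prints its recovered radius.
    rng = np.random.default_rng(0)
    mask = (rng.random((7, 7)) > 0.5).astype(float)
    patch = rng.random((64, 64))
    observed = np.real(ifft2(fft2(patch) * fft2(scaled_psf(mask, 4), s=patch.shape)))
    print(estimate_blur_radius(observed, mask, radii=list(range(1, 9))))
```

In practice, the estimated per-patch blur size is converted to depth through the thin-lens geometry, and the coded mask is designed so that PSFs at different scales remain easy to distinguish in the frequency domain, which is what motivates the aperture optimization discussed in the chapter.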



Author information


Corresponding author

Correspondence to Erdem Sahin.



Copyright information

© 2017 Springer Science+Business Media New York

About this chapter

Cite this chapter

Sahin, E., Wang, C., Gotchev, A. (2017). Depth from Defocus and Coded Apertures for 3D Scene Sensing. In: Kondoz, A., Dagiuklas, T. (eds) Connected Media in the Future Internet Era. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-4026-4_5


  • DOI: https://doi.org/10.1007/978-1-4939-4026-4_5

  • Published:

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4939-4024-0

  • Online ISBN: 978-1-4939-4026-4

  • eBook Packages: Engineering, Engineering (R0)
