Dense Depth-Map Estimation and Geometry Inference from Light Fields via Global Optimization

  • Conference paper

Computer Vision – ACCV 2016 (ACCV 2016)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 10113)

Abstract

A light field camera captures abundant and dense angular samples in a single shot. The surface camera (SCam) is an image that gathers the angular sample rays passing through a 3D point. By analyzing the statistics of the SCam, a consistency-depth measure is evaluated for depth estimation. However, local depth estimation still has limitations. This paper presents a global method with pixel-wise plane labels. Inferring a plane model at each pixel recovers not only depth but also the local geometry of the scene, which is well suited to light fields with floating-point disparities and continuous view variation. A second-order surface smoothness term is enforced to allow locally curved surfaces. We use a random strategy to generate candidate plane parameters and refine the plane labels to avoid falling into local minima. We cast the selection among the defined labels as fusion moves with sequential proposals. The proposals are carefully constructed to satisfy the submodularity condition under the second-order smoothness regularizer, so that the minimization can be solved efficiently by graph cuts (GC). Our method is evaluated on public light field datasets and achieves state-of-the-art accuracy.
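To make the pixel-wise plane-label idea concrete, the following NumPy sketch illustrates it under stated assumptions; it is not the authors' implementation. All function names (plane_to_disparity, scam_cost, random_plane_proposal, greedy_fuse) are hypothetical, the SCam consistency measure is simplified to the variance of warped angular samples, and the fusion step is a greedy per-pixel choice, whereas the paper solves each fusion jointly with the second-order smoothness term via graph cuts.

```python
import numpy as np

# Illustrative sketch only; names and sampling scheme are assumptions.

def plane_to_disparity(planes, h, w):
    """A plane label (a, b, c) at each pixel induces the disparity
    d(x, y) = a*x + b*y + c, so one label encodes both depth and the
    local surface orientation."""
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    return planes[..., 0] * xs + planes[..., 1] * ys + planes[..., 2]

def scam_cost(center, side_views, view_offsets, disparity):
    """Toy surface-camera (SCam) consistency: warp each side view by the
    disparity hypothesis and take the variance of the gathered angular
    samples (the paper uses richer SCam statistics than plain variance)."""
    h, w = center.shape
    ys, xs = np.mgrid[0:h, 0:w]
    samples = [center]
    for (du, dv), view in zip(view_offsets, side_views):
        sx = np.clip(np.rint(xs + du * disparity).astype(int), 0, w - 1)
        sy = np.clip(np.rint(ys + dv * disparity).astype(int), 0, h - 1)
        samples.append(view[sy, sx])
    return np.var(np.stack(samples, axis=0), axis=0)

def random_plane_proposal(h, w, d_range=(0.0, 4.0), slope=0.02, rng=None):
    """One random candidate: a single slanted plane shared by all pixels
    (a simplified, assumed sampling scheme)."""
    rng = np.random.default_rng() if rng is None else rng
    a, b = rng.uniform(-slope, slope, size=2)
    c = rng.uniform(*d_range)
    return np.broadcast_to(np.array([a, b, c]), (h, w, 3)).copy()

def greedy_fuse(planes, proposal, data_cost, h, w):
    """Per-pixel stand-in for one fusion move: keep whichever label has the
    lower data cost. The paper instead solves this binary choice jointly
    with a second-order smoothness term via graph cuts."""
    cur = data_cost(plane_to_disparity(planes, h, w))
    new = data_cost(plane_to_disparity(proposal, h, w))
    better = new < cur
    planes[better] = proposal[better]
    return planes
```

A driver loop would repeatedly draw candidates with random_plane_proposal and fold them into the current labelling with greedy_fuse, using scam_cost over the light field views as the data term; in the paper, each such fusion is instead posed as a binary graph-cut problem whose second-order smoothness term remains submodular by construction of the proposals.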

Acknowledgement

The work in this paper is supported by NSFC funds (61272287, 61531014).

Author information

Corresponding author

Correspondence to Qing Wang.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Si, L., Wang, Q. (2017). Dense Depth-Map Estimation and Geometry Inference from Light Fields via Global Optimization. In: Lai, S.H., Lepetit, V., Nishino, K., Sato, Y. (eds) Computer Vision – ACCV 2016. Lecture Notes in Computer Science, vol 10113. Springer, Cham. https://doi.org/10.1007/978-3-319-54187-7_6

  • DOI: https://doi.org/10.1007/978-3-319-54187-7_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-54186-0

  • Online ISBN: 978-3-319-54187-7
