
Depth Estimation within a Multi-Line-Scan Light-Field Framework

  • D. Soukup
  • R. Huber-Mörk
  • S. Štolc
  • B. Holländer
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8888)

Abstract

We present algorithms for depth estimation from light-field data acquired by a multi-line-scan image acquisition system. During image acquisition, a 3-D light field is generated over time, consisting of multiple views of the object observed from different viewing angles. This allows for the construction of so-called epipolar plane images (EPIs) and subsequent EPI-based depth estimation. We compare several approaches based on testing various slope hypotheses in the EPI domain, where slope can be directly related to depth. The matching measures used in hypothesis assessment, all belonging to the broader class of block-matching algorithms, are the modified sum of absolute differences (MSAD), normalized cross correlation (NCC), census transform (CT), and modified census transform (MCT). The methods are compared with respect to their qualitative depth-estimation results on both artificial and real-world data.
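
To make the slope-hypothesis idea concrete, the following is a minimal sketch (Python/NumPy), not the authors' implementation: for each pixel of the central view, a set of candidate EPI slopes is tested by shearing the EPI so that the hypothesized slope becomes vertical and scoring the sheared views against the central view with NCC; the best-scoring slope is kept and can be converted to depth given the acquisition geometry. The shear-and-score formulation, the window size, the slope range, and the synthetic test data are assumptions made for this illustration; the paper additionally considers MSAD, CT, and MCT matching costs.

```python
# Illustrative sketch of slope-hypothesis testing in the EPI domain with an
# NCC matching cost. Function names, parameters, and the synthetic data are
# assumptions for this example, not the authors' implementation.

import numpy as np


def ncc(a, b, eps=1e-8):
    """Normalized cross correlation between two equally sized 1-D patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + eps))


def epi_depth_estimate(epi, slopes, half_win=3):
    """Estimate one slope (per-view disparity) per pixel of the central EPI row.

    epi      -- 2-D array of shape (n_views, width); each row is the same
                scanline observed from a different viewing angle.
    slopes   -- candidate slopes in pixels of shift per view; each corresponds
                to one depth hypothesis.
    half_win -- half width of the 1-D matching window.
    """
    n_views, width = epi.shape
    center = n_views // 2
    x = np.arange(width)
    best_slope = np.zeros(width)
    best_score = np.full(width, -np.inf)

    for s in slopes:
        # Shear the EPI so that features following slope `s` become vertical:
        # view v is resampled with a shift of s * (v - center) pixels.
        sheared = np.stack([
            np.interp(x + s * (v - center), x, epi[v]) for v in range(n_views)
        ])
        # Score each pixel by the mean NCC between the central-view window
        # and the corresponding window in every other (sheared) view.
        for px in range(half_win, width - half_win):
            ref = sheared[center, px - half_win:px + half_win + 1]
            score = np.mean([
                ncc(ref, sheared[v, px - half_win:px + half_win + 1])
                for v in range(n_views) if v != center
            ])
            if score > best_score[px]:
                best_score[px] = score
                best_slope[px] = s

    # The winning slope is the per-view disparity, which can be converted to
    # depth once the acquisition geometry is known.
    return best_slope


if __name__ == "__main__":
    # Synthetic EPI: a random texture shifted by a constant slope of 1.5
    # pixels per view; the estimator should recover a slope close to 1.5.
    rng = np.random.default_rng(0)
    line = rng.random(200)
    true_slope, views = 1.5, 9
    x = np.arange(200)
    epi = np.stack([np.interp(x - true_slope * (v - views // 2), x, line)
                    for v in range(views)])
    est = epi_depth_estimate(epi, slopes=np.linspace(0.0, 3.0, 13))
    print("median estimated slope:", np.median(est[20:-20]))
```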

Keywords

Image patch · Object point · Depth estimation · Viewing angle · Normalized cross correlation

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • D. Soukup (1)
  • R. Huber-Mörk (1)
  • S. Štolc (1)
  • B. Holländer (1)

  1. Intelligent Vision Systems, Safety & Security Department, AIT Austrian Institute of Technology GmbH, Vienna, Austria
