The Visual Computer, Volume 30, Issue 6–8, pp 897–907

Hand-held 3D light field photography and applications

  • Daniel Donatsch
  • Siavash Arjomand Bigdeli
  • Philippe Robert
  • Matthias Zwicker
Original Article

Abstract

We propose a method to acquire 3D light fields using a hand-held camera, and describe several computational photography applications facilitated by our approach. As our input we take an image sequence from a camera translating along an approximately linear path with limited camera rotations. Users can acquire such data easily in a few seconds by moving a hand-held camera. We include a novel approach to resample the input into regularly sampled 3D light fields by aligning them in the spatio-temporal domain, and a technique for high-quality disparity estimation from light fields. We show applications including digital refocusing and synthetic aperture blur, foreground removal, selective colorization, and others.
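Digital refocusing from a regularly resampled 3D light field is commonly done by shift-and-add over the views: each view is shifted in proportion to its camera offset and the chosen focal disparity, then the shifted views are averaged. The sketch below illustrates that general idea only; it is not the authors' implementation, and the `refocus` function, its sign convention, and integer-pixel shifts are simplifying assumptions.

```python
import numpy as np

def refocus(light_field, disparity):
    """Shift-and-add synthetic-aperture refocusing for a 3D light field.

    light_field: array of shape (S, H, W) holding S views captured by a
        camera translating evenly along a horizontal line.
    disparity: horizontal parallax (pixels per camera step) of the scene
        plane to bring into focus; other planes are averaged out of
        alignment and appear blurred.
    """
    num_views = light_field.shape[0]
    center = (num_views - 1) / 2.0
    acc = np.zeros(light_field.shape[1:], dtype=np.float64)
    for s, view in enumerate(light_field):
        # Shift each view so features at the focal disparity align.
        shift = -int(round((s - center) * disparity))
        acc += np.roll(view, shift, axis=1)
    return acc / num_views
```

For example, a vertical stripe that moves 2 pixels per camera step stays sharp under `refocus(lf, 2.0)` (all copies align), while `refocus(lf, 0.0)` spreads it across five columns, i.e. synthetic aperture blur.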

Keywords

3D light fields · Computational photography · Disparity estimation · Digital refocusing

Supplementary material

Supplementary material 1: 371_2014_979_MOESM1_ESM.zip (33.4 MB)


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Daniel Donatsch (1)
  • Siavash Arjomand Bigdeli (2, 3)
  • Philippe Robert (3)
  • Matthias Zwicker (1)

  1. University of Bern, Bern, Switzerland
  2. University of Neuchâtel, Neuchâtel, Switzerland
  3. 3D Impact Media, Sarnen, Switzerland
