Light Field from Smartphone-Based Dual Video

  • Bernd Krolla
  • Maximilian Diebold
  • Didier Stricker
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8926)

Abstract

In this work, we introduce a light field acquisition approach for standard smartphones. The smartphone is manually translated along a horizontal rail while recording synchronized video with its front and rear cameras. The front camera captures a control pattern mounted parallel to the direction of translation, which is used to determine the smartphone's current position. During a postprocessing step, this information serves to identify an equally spaced subset of the frames recorded by the rear camera, which captures the actual scene. From this data we assemble a light field representation of the scene. For subsequent disparity estimation, we apply a structure tensor approach to the epipolar plane images.

We evaluate our method by comparing the light fields obtained from manual translation of the smartphone against those recorded with a translation stage moving at constant speed.
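
As a rough illustration of the pipeline described in the abstract, the following Python sketch shows how per-frame rail positions (a hypothetical `positions` array recovered from tracking the front-camera control pattern) could be used to select an equally spaced subset of rear-camera frames, and how a structure tensor on an epipolar plane image (EPI) yields a disparity estimate. This is a minimal sketch under assumed inputs, not the authors' implementation; `positions`, `frames`, `num_views`, and `sigma` are illustrative names.

```python
# Hedged sketch of the abstract's pipeline: equally spaced view selection from
# measured rail positions, light field assembly, and structure tensor disparity
# on one EPI. Assumed inputs: `positions` (one scalar per frame, along the rail)
# and `frames` (grayscale rear-camera images of identical size).
import numpy as np
from scipy.ndimage import gaussian_filter


def select_equally_spaced(positions, num_views):
    """Indices of frames whose rail positions best match an even spacing."""
    positions = np.asarray(positions, dtype=float)
    targets = np.linspace(positions.min(), positions.max(), num_views)
    return [int(np.argmin(np.abs(positions - t))) for t in targets]


def build_light_field(frames, indices):
    """Stack the selected grayscale frames into an (s, v, u) volume."""
    return np.stack([np.asarray(frames[i], dtype=float) for i in indices], axis=0)


def epi_disparity(epi, sigma=1.0, eps=1e-9):
    """Per-pixel disparity slope du/ds for an EPI of shape (views, width).

    The smallest-eigenvalue eigenvector of the smoothed structure tensor
    points along the EPI lines; its slope is the disparity estimate.
    """
    e_s, e_u = np.gradient(epi)               # derivatives along view (s) and column (u)
    j_uu = gaussian_filter(e_u * e_u, sigma)   # smoothed structure tensor entries
    j_us = gaussian_filter(e_u * e_s, sigma)
    j_ss = gaussian_filter(e_s * e_s, sigma)
    root = np.sqrt((j_uu - j_ss) ** 2 + 4.0 * j_us ** 2)
    denom = j_ss - j_uu - root - eps           # always negative, so never zero
    return 2.0 * j_us / denom


# Example usage with hypothetical data:
# indices = select_equally_spaced(positions, num_views=33)
# lf = build_light_field(frames, indices)         # shape (33, height, width)
# disparity_row = epi_disparity(lf[:, 240, :])    # EPI for one image row v = 240
```

Repeating the last call for every image row would give a dense disparity map for the selected views, in the spirit of the structure tensor analysis the abstract refers to.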

Keywords

Computer vision · Light field imaging · Video processing

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Bernd Krolla (1)
  • Maximilian Diebold (2)
  • Didier Stricker (1)
  1. German Research Center for Artificial Intelligence, Kaiserslautern, Germany
  2. Heidelberg Collaboratory for Image Processing, Heidelberg, Germany
