International Journal of Computer Vision, Volume 101, Issue 2, pp 384–400

On Plenoptic Multiplexing and Reconstruction

  • Gordon Wetzstein
  • Ivo Ihrke
  • Wolfgang Heidrich

Abstract

Photography has been striving to capture an ever increasing amount of visual information in a single image. Digital sensors, however, are limited to recording a small subset of the desired information at each pixel. A common approach to overcoming the limitations of sensing hardware is the optical multiplexing of high-dimensional data into a photograph. While this is a well-studied topic for imaging with color filter arrays, we develop a mathematical framework that generalizes multiplexed imaging to all dimensions of the plenoptic function. This framework unifies a wide variety of existing approaches to analyze and reconstruct multiplexed data in either the spatial or the frequency domain. We demonstrate many practical applications of our framework including high-quality light field reconstruction, the first comparative noise analysis of light field attenuation masks, and an analysis of aliasing in multiplexing applications.
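
To make the spatial-domain view of multiplexed imaging concrete, the following is a minimal, hypothetical sketch (not the authors' implementation): a Bayer-like color filter array is modeled as a per-super-pixel linear operator M mapping latent RGB values to sensor measurements, and reconstruction applies its pseudo-inverse. The 2×2 pattern, the matrix M, and the functions multiplex/reconstruct are illustrative assumptions chosen only to show the "multiplex, then linearly invert" structure.

```python
# Illustrative sketch of spatially multiplexed sensing as a linear system i = M p,
# demonstrated with a Bayer-like 2x2 color filter array. Each 2x2 super-pixel
# records 4 filtered samples of a 3-channel (RGB) quantity; reconstruction inverts
# the 4x3 multiplexing matrix in the least-squares sense. Not the paper's code.
import numpy as np

# Multiplexing matrix M: rows = sensor pixels in the super-pixel (R, G, G, B),
# columns = latent channels (R, G, B). Each row is one pixel's color filter.
M = np.array([[1.0, 0.0, 0.0],   # top-left pixel: red filter
              [0.0, 1.0, 0.0],   # top-right pixel: green filter
              [0.0, 1.0, 0.0],   # bottom-left pixel: green filter
              [0.0, 0.0, 1.0]])  # bottom-right pixel: blue filter

M_pinv = np.linalg.pinv(M)       # least-squares reconstruction operator

def multiplex(rgb_image):
    """Simulate the sensor: keep one filtered sample per pixel of each 2x2 tile."""
    h, w, _ = rgb_image.shape
    sensor = np.zeros((h, w))
    for dy in range(0, h, 2):
        for dx in range(0, w, 2):
            tile = rgb_image[dy:dy+2, dx:dx+2].reshape(4, 3)  # 4 pixels x 3 channels
            sensor[dy:dy+2, dx:dx+2] = (M * tile).sum(axis=1).reshape(2, 2)
    return sensor

def reconstruct(sensor):
    """Recover one RGB value per super-pixel by applying the pseudo-inverse of M."""
    h, w = sensor.shape
    out = np.zeros((h // 2, w // 2, 3))
    for dy in range(0, h, 2):
        for dx in range(0, w, 2):
            measurements = sensor[dy:dy+2, dx:dx+2].reshape(4)
            out[dy // 2, dx // 2] = M_pinv @ measurements
    return out

# Toy usage: a constant-color image is recovered exactly under this model.
img = np.tile(np.array([0.8, 0.5, 0.2]), (4, 4, 1))
print(np.allclose(reconstruct(multiplex(img)), img[::2, ::2]))  # True
```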

Keywords

Computational photography · Optical multiplexing · Plenoptic function · Light fields

Notes

Acknowledgments

We thank Dolby Canada for their support and the anonymous reviewers for their insightful feedback. Gordon Wetzstein was supported by a UBC Four Year Fellowship, an NSERC Postdoctoral Fellowship, and the DARPA SCENICC program. Wolfgang Heidrich was supported under the Dolby Research Chair in Computer Science at UBC. Ivo Ihrke was supported by a Feodor-Lynen Fellowship of the Humboldt Foundation, Germany.

Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

  • Gordon Wetzstein, MIT Media Lab, Cambridge, USA
  • Ivo Ihrke, Universität des Saarlandes, Saarbrücken, Germany
  • Wolfgang Heidrich, The University of British Columbia, Vancouver, Canada