
An overview of computational photography

  • Review
  • Published in: Science China Information Sciences

Abstract

Computational photography is an emerging multidisciplinary field. Over the last two decades, it has integrated studies from computer vision, computer graphics, signal processing, applied optics, and related disciplines, as researchers explore new ways to overcome the limitations of traditional digital imaging for the benefit of photographers, vision and graphics researchers, and image processing programmers. Drawing on the substantial effort invested across these associated fields, this paper describes and discusses the wide variety of issues raised by these new methods of photography. To give the reader a full picture of the voluminous literature on computational photography, it briefly reviews the broad range of topics in this new field, covering several aspects: (i) the elements of computational imaging systems and new sampling and reconstruction mechanisms; (ii) the image properties that benefit from computational photography, e.g., depth of field and dynamic range; and (iii) the sampling subspaces of visual scenes in the real world. Based on this systematic review of previous and ongoing work in the field, we also discuss open issues and potential new directions in computational photography. This paper aims to help the reader become acquainted with this new field, including its history, ultimate goals, hot topics, research methodologies, and future directions, and thus to build a foundation for further research and development.
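As a concrete illustration of item (ii) above (extending dynamic range through computation), the sketch below merges several differently exposed photographs of a static, aligned scene into a single radiance estimate, in the spirit of the exposure-bracketing work surveyed in this paper. It is a minimal sketch only: the function name, the hat-shaped pixel weighting, and the assumption of a linear, radiometrically calibrated sensor are illustrative choices of ours, not the method of any specific reference.

    import numpy as np

    def merge_exposures(images, exposure_times):
        """Merge differently exposed images of a static scene into one
        high-dynamic-range radiance map.

        Assumes the images are aligned, linear (radiometrically calibrated)
        float arrays in [0, 1]; exposure_times are in seconds.
        A minimal sketch, not the method of any particular reference.
        """
        eps = 1e-6
        numerator = np.zeros_like(images[0], dtype=np.float64)
        denominator = np.zeros_like(images[0], dtype=np.float64)
        for img, t in zip(images, exposure_times):
            # Hat-shaped weight: trust mid-tone pixels, distrust pixels that
            # are nearly under- or over-exposed in this particular shot.
            w = 1.0 - np.abs(2.0 * img - 1.0)
            # Each shot estimates scene radiance as pixel value / exposure time.
            numerator += w * (img / t)
            denominator += w
        return numerator / (denominator + eps)

    # Example: three simulated exposures of the same (random) radiance map.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        radiance = rng.uniform(0.0, 4.0, size=(4, 4))
        times = [1 / 30, 1 / 8, 1 / 2]
        shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
        print(merge_exposures(shots, times))

Pixels that saturate in the long exposure are recovered from the short exposures, and vice versa, which is the basic principle behind the high-dynamic-range capture techniques reviewed in the paper.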



Author information


Corresponding author

Correspondence to QiongHai Dai.


About this article

Cite this article

Suo, J., Ji, X. & Dai, Q. An overview of computational photography. Sci. China Inf. Sci. 55, 1229–1248 (2012). https://doi.org/10.1007/s11432-012-4587-6
