Flexible Voxels for Motion-Aware Videography

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6311)


Abstract

The goal of this work is to build video cameras whose spatial and temporal resolutions can be changed post-capture, depending on the scene. Building such cameras is difficult for two reasons. First, current video cameras impose the same spatial resolution and frame rate on the entire captured spatio-temporal volume. Second, both of these parameters are fixed before the scene is captured. We propose the components of a video camera design — a sampling scheme, processing of the captured data, and hardware — that together offer post-capture variable spatial and temporal resolutions, independently at each image location. Using the motion information in the captured data, the correct resolution for each location is chosen automatically. Our techniques make it possible to capture fast-moving objects without motion blur, while simultaneously preserving high spatial resolution for static scene parts within the same video sequence. Our sampling scheme requires a fast per-pixel shutter on the sensor array, which we have implemented using a co-located camera-projector system.
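The core idea — a per-pixel shutter whose samples can be regrouped post-capture into either high spatial or high temporal resolution — can be sketched in a few lines. The NumPy code below is not the authors' implementation; it is a minimal illustration assuming a simple staggered k x k exposure code (in the style of temporal pixel multiplexing), where each pixel of a k x k tile is exposed during a different one of k*k sub-frame intervals. The same coded frame then yields a full-resolution image for static regions, or k*k low-resolution sub-frames for moving ones.

```python
import numpy as np

def make_shutter_code(h, w, k):
    # Staggered per-pixel shutter: pixel (i, j) is exposed only during
    # sub-frame slot (i % k) * k + (j % k) of every block of k*k sub-frames.
    return (np.arange(h)[:, None] % k) * k + (np.arange(w)[None, :] % k)

def capture(volume, k):
    # volume: (T, h, w) high-speed scene; one coded frame per k*k sub-frames.
    T, h, w = volume.shape
    phase = make_shutter_code(h, w, k)
    rows, cols = np.arange(h)[:, None], np.arange(w)[None, :]
    n = T // (k * k)
    frames = np.empty((n, h, w))
    for f in range(n):
        block = volume[f * k * k:(f + 1) * k * k]        # (k*k, h, w)
        frames[f] = block[phase, rows, cols]             # each pixel: its own slot
    return frames

def decode_high_temporal(frame, k):
    # Regroup each k x k tile of one coded frame into k*k sub-frames,
    # each at 1/k the spatial resolution, ordered by exposure slot.
    h, w = frame.shape
    tiles = frame.reshape(h // k, k, w // k, k).transpose(1, 3, 0, 2)
    return tiles.reshape(k * k, h // k, w // k)
```

For a static region the coded frame itself is already a full-spatial-resolution image (every pixel sees the same scene regardless of its slot), while `decode_high_temporal` trades spatial for temporal resolution in moving regions — the per-location choice that the paper makes automatically from the motion information.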


Keywords: Motion Information, Successive Frame, Motion Blur, Global Illumination, High Dynamic Range Imaging
These keywords were added by machine and not by the authors. This process is experimental, and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  1. Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
  2. Mitsubishi Electric Research Laboratories, Cambridge, USA
