International Journal of Computer Vision, Volume 96, Issue 1, pp. 125–144

Exploiting DLP Illumination Dithering for Reconstruction and Photography of High-Speed Scenes

  • Sanjeev J. Koppal
  • Shuntaro Yamazaki
  • Srinivasa G. Narasimhan


In this work, we recover fast-moving scenes by exploiting the high-speed illumination “dithering” of cheap, readily available digital light processing (DLP) projectors. We first show how to reverse-engineer the temporal dithering of off-the-shelf projectors using a high-speed camera. DLP dithering can produce the temporal patterns commonly used in active vision techniques, and since the dithering occurs at a very high frame rate, such illumination-based methods can be sped up for fast scenes. We demonstrate this with three applications, each of which requires only a single slide to be displayed by the DLP projector; the quality of the result is determined by the camera frame rate available to the user. Pairing a high-speed camera with a DLP projector, we demonstrate structured light reconstruction at 100 Hz. With the same camera and three or more DLP projectors, we show photometric stereo and demultiplexing applications at 300 Hz. Finally, with a real-time (60 Hz) or still camera, we show that DLP illumination acts as a very fast flash, enabling strobe photography of high-speed scenes. We discuss in depth some characteristics of the temporal dithering through a case study of a particular projector, and conclude by describing limitations, trade-offs, and other issues relating to this work.
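One of the applications above, photometric stereo, reduces at each pixel to a small least-squares solve once per-frame lighting directions are known. The sketch below shows the classical Lambertian formulation (Woodham, 1980) that the paper accelerates to 300 Hz with multiple DLP projectors; the function and variable names are illustrative, and the paper's actual pipeline additionally recovers each projector's dithering pattern from high-speed video.

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Recover per-pixel surface normals and albedo under a Lambertian model.

    intensities: (k, n) array -- k images, n pixels each.
    light_dirs:  (k, 3) array of unit lighting directions (assumed known).
    """
    # Lambertian model: I = L @ (albedo * normal). Solve for g = albedo * normal
    # in the least-squares sense (k >= 3 lights needed).
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # shape (3, n)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-12)  # guard against zero albedo
    return normals.T, albedo

# Synthetic check: one pixel with known normal [0, 0, 1] and unit albedo,
# lit from three non-coplanar directions.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
n_true = np.array([0.0, 0.0, 1.0])
I = (L @ n_true).reshape(3, 1)
normals, albedo = photometric_stereo(I, L)
```

Here `normals[0]` recovers `[0, 0, 1]` and `albedo[0]` recovers `1.0`; in the paper's setting, each of the three lighting directions corresponds to one DLP projector, and the dithering supplies the rapid per-frame intensity variation.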


Keywords: Active vision · DMD · DLP · High-speed camera · Temporal dithering





Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Sanjeev J. Koppal (1)
  • Shuntaro Yamazaki (2)
  • Srinivasa G. Narasimhan (1)
  1. The Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
  2. National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
