Dual Structured Light 3D Using a 1D Sensor

  • Jian Wang
  • Aswin C. Sankaranarayanan
  • Mohit Gupta
  • Srinivasa G. Narasimhan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9910)


Abstract

Structured light-based 3D reconstruction methods often illuminate a scene using patterns with 1D translational symmetry, such as stripes, Gray codes or sinusoidal phase shifting patterns. These patterns are decoded using images captured by a traditional 2D sensor. In this work, we present a novel structured light approach that uses a 1D sensor with simple optics and no moving parts to reconstruct scenes with the same acquisition speed as a traditional 2D sensor. While traditional methods compute correspondences between columns of the projector and 2D camera pixels, our ‘dual’ approach computes correspondences between columns of the 1D camera and 2D projector pixels. The use of a 1D sensor provides significant advantages in many applications that operate in the short-wave infrared range (0.9–2.5 microns) or require dynamic vision sensors (DVS), where a 2D sensor is prohibitively expensive and difficult to manufacture. We analyze the proposed design, explore hardware alternatives and discuss the performance in the presence of ambient light and global illumination.
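The decoding step the abstract refers to — projecting binary Gray-code stripe patterns and recovering a per-pixel stripe (projector-column) index — can be sketched as follows. This is an illustrative sketch, not the authors' implementation; the function names and the pattern/inverse-pattern capture scheme (a common trick that makes thresholding robust to albedo and ambient light) are assumptions.

```python
import numpy as np

def gray_to_binary(bit_planes):
    """Convert stacked Gray-code bit planes (MSB first) into integer codes.

    bit_planes: boolean array of shape (n_bits, H, W).
    Returns an (H, W) integer map of decoded stripe indices.
    """
    binary = bit_planes[0].astype(np.int64)  # first Gray bit equals first binary bit
    codes = binary.copy()
    for plane in bit_planes[1:]:
        binary = binary ^ plane              # b_i = b_{i-1} XOR g_i
        codes = (codes << 1) | binary
    return codes

def decode_gray_code(pattern_images, inverse_images):
    """Binarize each pattern against its inverse, then decode stripe indices.

    pattern_images / inverse_images: lists of (H, W) intensity images captured
    under each Gray-code pattern and its complement.
    """
    bits = np.stack([img > inv for img, inv in zip(pattern_images, inverse_images)])
    return gray_to_binary(bits)
```

In the paper's dual configuration, the same decoding would instead attach a correspondence to 2D projector pixels for each 1D camera column; the Gray-code arithmetic itself is unchanged.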


Keywords: Structured light · Dual photography



Acknowledgments

We thank Ms. Chia-Yin Tsai for her help with MeshLab processing. Jian Wang, Aswin C. Sankaranarayanan and Srinivasa G. Narasimhan were supported in part by the DARPA REVEAL grant (#HR0011-16-2-0021). Srinivasa G. Narasimhan was also supported in part by NASA (#15-15ESI-0085), ONR (#N00014-15-1-2358), and NSF (#CNS-1446601) grants.

Supplementary material

Supplementary material 1 (mp4 24131 KB)



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Jian Wang (1)
  • Aswin C. Sankaranarayanan (1)
  • Mohit Gupta (2)
  • Srinivasa G. Narasimhan (1)
  1. Carnegie Mellon University, Pittsburgh, USA
  2. University of Wisconsin-Madison, Madison, USA
