Programmable Triangulation Light Curtains

  • Jian Wang
  • Joseph Bartels
  • William Whittaker
  • Aswin C. Sankaranarayanan
  • Srinivasa G. Narasimhan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11207)

Abstract

A vehicle on a road or a robot in the field does not need a full-featured 3D depth sensor to detect potential collisions or monitor its blind spot. Instead, it needs only to monitor whether any object comes within its near proximity, which is an easier task than full depth scanning. We introduce a novel device that monitors the presence of objects on a virtual shell near the device, which we refer to as a light curtain. Light curtains offer a lightweight, resource-efficient, and programmable approach to proximity awareness for obstacle avoidance and navigation. They also improve visibility in fog and offer flexibility in handling light fall-off. Our prototype generates light curtains by rapidly rotating a line sensor and a line laser in synchrony. The device is capable of generating light curtains of various shapes with a range of 20–30 m in sunlight (40 m under cloudy skies and 50 m indoors) and adapts dynamically to the demands of the task. We analyze the properties of light curtains and explore approaches to optimizing their thickness as well as their power requirements. We showcase the potential of light curtains in a range of real-world scenarios.
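
The triangulation geometry underlying this design can be illustrated in a few lines of code. Below is a minimal sketch, not the authors' implementation, of how one might compute the synchronized rotation angles of the line laser and line sensor so that their planes intersect along a desired curtain profile; the baseline value and coordinate conventions are assumptions for illustration only.

    import math

    def curtain_angles(profile, baseline=0.3):
        """Compute synchronized laser and sensor rotation angles for a curtain.

        profile  -- list of (x, z) points in metres tracing the desired
                    curtain in the plane swept by the two rotation axes
        baseline -- assumed laser-to-sensor separation in metres

        The laser is placed at (-baseline/2, 0) and the line sensor at
        (+baseline/2, 0), rotating about parallel axes. Steering the light
        plane and the sensing plane to intersect at (x, z) places that point
        on the curtain: only an object there is both illuminated and imaged.
        """
        angles = []
        for x, z in profile:
            theta_laser = math.atan2(z, x + baseline / 2)   # laser plane direction
            theta_sensor = math.atan2(z, x - baseline / 2)  # sensor plane direction
            angles.append((theta_laser, theta_sensor))
        return angles

    # Example: a flat "safety wall" curtain 5 m ahead, spanning +/- 2 m
    wall = [(x / 10.0, 5.0) for x in range(-20, 21)]
    for theta_l, theta_s in curtain_angles(wall)[:3]:
        print(f"laser {math.degrees(theta_l):6.1f} deg, "
              f"sensor {math.degrees(theta_s):6.1f} deg")

Sweeping through these angle pairs in lockstep traces out the curtain; replacing the profile list reprograms its shape, which is what makes the curtain programmable.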

Keywords

Computational imaging · Proximity sensors

Acknowledgments

This research was supported in parts by an ONR grant N00014-15-1-2358, an ONR DURIP Award N00014-16-1-2906, and DARPA REVEAL Co-operative Agreement HR0011-16-2-0021. A. C. Sankaranarayanan was supported in part by the NSF CAREER grant CCF-1652569. J. Bartels was supported by NASA fellowship NNX14AM53H.

Supplementary material

474178_1_En_2_MOESM1_ESM.pdf — Supplementary material 1 (PDF, 1.9 MB)


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jian Wang¹
  • Joseph Bartels¹
  • William Whittaker¹
  • Aswin C. Sankaranarayanan¹
  • Srinivasa G. Narasimhan¹

  1. Carnegie Mellon University, Pittsburgh, USA
