Diffraction Line Imaging

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12347)

Abstract

We present a novel computational imaging principle that combines diffractive optics with line (1D) sensing. When light passes through a diffraction grating, it disperses as a function of wavelength. We exploit this principle to recover 2D and even 3D positions from only line images. We derive a detailed image formation model and a learning-based algorithm for 2D position estimation. We show several extensions of our system that improve 2D positioning accuracy and expand the effective field of view. We demonstrate our approach in two applications: (a) fast passive imaging of sparse light sources, such as street lamps, headlights at night, and LED-based motion capture, and (b) structured light 3D scanning with line illumination and line sensing. Line imaging has several advantages over 2D sensors: high frame rate, high dynamic range, high fill-factor with additional on-chip computation, low cost beyond the visible spectrum, and high energy efficiency when used with line illumination. Thus, our system is able to achieve high-speed and high-accuracy 2D positioning of light sources and 3D scanning of scenes.
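The dispersion the abstract relies on is governed by the standard grating equation, sin θ_m = sin θ_i + mλ/d, which maps each wavelength to a distinct diffraction angle. The sketch below illustrates this mapping only; the groove density and wavelengths are illustrative values, not the paper's prototype parameters.

```python
import numpy as np

def diffraction_angle(theta_i, wavelength, d, m=1):
    """Diffraction angle (radians) from the grating equation
    sin(theta_m) = sin(theta_i) + m * wavelength / d,
    for groove spacing d (meters) and diffraction order m."""
    s = np.sin(theta_i) + m * wavelength / d
    return np.arcsin(np.clip(s, -1.0, 1.0))

# Illustrative grating: 1200 lines/mm, so groove spacing d in meters.
d = 1e-3 / 1200

# Different wavelengths from the same source diffract to distinct
# angles, so a 1D sensor along the dispersion axis records a spectral
# streak; the streak's position and extent encode the source direction.
for wl in [450e-9, 550e-9, 650e-9]:
    theta = diffraction_angle(theta_i=0.0, wavelength=wl, d=d)
    print(f"{wl * 1e9:.0f} nm -> {np.degrees(theta):.1f} deg")
```

Because the wavelength-to-angle mapping is monotonic within a diffraction order, the 1D streak measurement can be inverted to constrain a source's 2D position, which is the principle the paper builds on.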

Keywords

Line sensor · Diffraction grating · 3D sensing · Motion capture · Computational imaging

Notes

Acknowledgments

We thank A. Sankaranarayanan and V. Saragadam for help with building the hardware prototype and S. Panev and F. Moreno for neural network-related advice. We were supported in parts by NSF Grants IIS-1900821 and CCF-1730147 and DARPA REVEAL Contract HR0011-16-C-0025.

Supplementary material

Supplementary material 1 (PDF, 2577 KB)

Supplementary material 2 (MP4, 87691 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Carnegie Mellon University, Pittsburgh, USA
