
Asynchronous, Photometric Feature Tracking Using Events and Frames

  • Daniel Gehrig
  • Henri Rebecq
  • Guillermo Gallego
  • Davide Scaramuzza
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11216)

Abstract

We present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low latency. Event cameras are novel sensors that output pixel-level brightness changes, called “events”. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction, and the events provide low-latency updates. In contrast to previous works, which rely on heuristics, ours is the first principled method to use raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are both more accurate (subpixel accuracy) and longer than the state of the art, across a wide variety of scenes.
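
To make the generative-model idea concrete, the following minimal Python sketch (not the authors' implementation; the contrast threshold C, the patch, and the candidate-flow search are illustrative assumptions) compares the brightness change accumulated from events against the change predicted for a frame patch moving with a candidate optic flow, and selects the flow that best explains the events. The paper instead jointly optimizes a full warp and flow of the feature patch within a maximum-likelihood framework.

    import numpy as np

    C = 0.2  # event-camera contrast threshold (assumed value)

    def accumulate_events(events, shape):
        # Sum signed polarities per pixel: under the generative event model,
        # this approximates the log-intensity change Delta L(x) over the window.
        dL = np.zeros(shape)
        for x, y, pol in events:  # (col, row, +/-1) in patch coordinates
            dL[y, x] += C * pol
        return dL

    def predicted_change(log_patch, v):
        # Linearized generative model: a patch translating with optic flow v
        # produces a brightness change Delta L_hat(x) ~ -grad L(x) . v.
        gy, gx = np.gradient(log_patch)
        return -(gx * v[0] + gy * v[1])

    def alignment_cost(v, log_patch, event_dL):
        # Compare normalized predicted and measured changes; normalizing
        # removes the unknown scale, in the spirit of the paper's
        # maximum-likelihood objective.
        pred = predicted_change(log_patch, v)
        pn = pred / (np.linalg.norm(pred) + 1e-9)
        mn = event_dL / (np.linalg.norm(event_dL) + 1e-9)
        return float(np.sum((pn - mn) ** 2))

    # Toy patch: a smooth vertical edge, dark on the left, bright on the
    # right. A pattern moving LEFT brightens the pixels near the edge,
    # which under the model yields positive-polarity events there.
    xs = np.arange(25)
    log_patch = np.tile(1.0 / (1.0 + np.exp(-(xs - 12.0) / 2.0)), (25, 1))
    events = [(x, y, +1) for y in range(25) for x in range(10, 15)]
    event_dL = accumulate_events(events, log_patch.shape)

    for name, v in {"left": (-1.0, 0.0), "right": (1.0, 0.0),
                    "down": (0.0, 1.0)}.items():
        cost = alignment_cost(np.array(v), log_patch, event_dL)
        print(f"{name:>5}: cost = {cost:.3f}")
    # "left" attains the lowest cost: it is the flow whose predicted
    # brightness change best matches the observed event pattern.

Because the same edge produces opposite-polarity events for opposite motion directions, matching events against the motion-independent frame gradient (rather than against earlier events) is what allows correspondences to be established regardless of how the feature moves.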

Acknowledgment

This work was supported by the DARPA FLA program, the Swiss National Centre of Competence in Research (NCCR) Robotics through the Swiss National Science Foundation, and the SNSF-ERC Starting Grant.

Supplementary material

Supplementary material 1 (PDF, 2,493 KB)

Supplementary material 2 (MP4, 74,492 KB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Daniel Gehrig¹
  • Henri Rebecq¹
  • Guillermo Gallego¹
  • Davide Scaramuzza¹

  1. Departments of Informatics and Neuroinformatics, University of Zurich and ETH Zurich, Zürich, Switzerland
