
Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation

Published in: International Journal of Computer Vision

Abstract

Event cameras, or neuromorphic cameras, mimic the human perception system in that they measure per-pixel intensity changes rather than absolute intensity levels. In contrast to traditional cameras, such cameras capture new information about the scene at MHz frequency in the form of sparse events. This high temporal resolution comes at the cost of losing the familiar per-pixel intensity information. In this work we propose a variational model that accurately models the behaviour of event cameras, enabling reconstruction of intensity images with arbitrary frame rate in real time. Our method is formulated on a per-event basis, where we explicitly incorporate information about the asynchronous nature of events via an event manifold induced by the relative timestamps of events. In our experiments we verify that solving the variational model on the manifold produces high-quality images without explicitly estimating optical flow. This paper is an extended version of our previous work (Reinbacher et al. in British machine vision conference (BMVC), 2016) and contains additional details of the variational model, an investigation of different data terms, and a quantitative evaluation of our method against competing methods as well as against synthetic ground-truth data.
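
The abstract describes the method only at a high level; the full variational model is developed in the paper, and the authors' C++/CUDA implementation is linked in Note 1 below. Purely as an illustrative sketch of the two ingredients the abstract names, a per-event update of a log-intensity estimate and a regularisation step whose strength is modulated by the per-pixel timestamp surface (the "event manifold"), consider the following Python fragment. All constants, thresholds and function names here are assumptions, and the quadratic smoothing used is only a crude stand-in for the paper's manifold-regularised solver.

```python
import numpy as np

# A minimal sketch, assuming a DVS128-style 128x128 sensor. This is NOT the
# authors' implementation (their C++/CUDA code is linked in Note 1 below);
# thresholds, step sizes and function names are illustrative assumptions.

H, W = 128, 128                      # sensor resolution
LOG_C_ON, LOG_C_OFF = 0.15, -0.15    # assumed log-contrast steps for ON/OFF events
ALPHA = 0.2                          # assumed explicit smoothing step size

log_intensity = np.zeros((H, W))     # running estimate of per-pixel log intensity
last_event_time = np.zeros((H, W))   # timestamp of the most recent event per pixel


def handle_event(x, y, t, polarity):
    """Apply the brightness-change model of a single event and record its time."""
    log_intensity[y, x] += LOG_C_ON if polarity > 0 else LOG_C_OFF
    last_event_time[y, x] = t


def weighted_smooth(u, t_map, iters=5, beta=50.0):
    """A few explicit steps of time-weighted quadratic smoothing.

    Neighbouring pixels whose last events are far apart in time get a small
    weight, so smoothing roughly respects the structure encoded by the
    timestamp surface. This is only a quadratic stand-in for the
    manifold-regularised variational solve described in the paper.
    """
    for _ in range(iters):
        dx = np.roll(u, -1, axis=1) - u          # forward differences in x
        dy = np.roll(u, -1, axis=0) - u          # forward differences in y
        wx = np.exp(-beta * np.abs(np.roll(t_map, -1, axis=1) - t_map))
        wy = np.exp(-beta * np.abs(np.roll(t_map, -1, axis=0) - t_map))
        px, py = wx * dx, wy * dy
        # divergence of the weighted gradient field (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + ALPHA * div
    return u


# Usage: feed a few synthetic events, then reconstruct a smoothed image.
for x, y, t, p in [(10, 10, 0.001, +1), (11, 10, 0.002, +1), (60, 60, 0.010, -1)]:
    handle_event(x, y, t, p)
image = np.exp(weighted_smooth(log_intensity, last_event_time))
```

The paper itself solves the variational problem with first-order primal-dual optimisation on the GPU (see the Chambolle and Pock reference and Note 3 below), which is what makes the per-event formulation real-time capable.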


Notes

  1. https://github.com/VLOGroup/dvs-reconstruction.

  2. https://www.youtube.com/watch?v=rvB2URrGT94.

  3. We note that the small image size of \(128 \times 128\) is not enough to fully load the GPU, so we measured almost the same wall-clock time on an NVIDIA GTX 780 Ti.

References

  • Bardow, P., Davison, A., & Leutenegger, S. (2016). Simultaneous optical flow and intensity estimation from an event camera. In CVPR.

  • Barua, S., Miyatani, Y., & Veeraraghavan, A. (2016). Direct face detection and video reconstruction from event cameras. In 2016 IEEE winter conference on applications of computer vision (WACV) (pp. 1–9). https://doi.org/10.1109/WACV.2016.7477561.

  • Benosman, R., Clercq, C., Lagorce, X., Ieng, S. H., & Bartolozzi, C. (2014). Event-based visual flow. IEEE Transactions on Neural Networks and Learning Systems, 25(2), 407–417.

  • Chambolle, A. (2004). An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision, 20(1–2), 89–97.

  • Chambolle, A., & Pock, T. (2011). A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1), 120–145.

  • Cheng, L. T., Burchard, P., Merriman, B., & Osher, S. (2000). Motion of curves constrained on surfaces using a level set approach. Journal of Computational Physics, 175, 2002.

  • Cook, M., Gugelmann, L., Jug, F., Krautz, C., & Steger, A. (2011). Interacting maps for fast visual interpretation. In The 2011 international joint conference on neural networks (IJCNN) (pp. 770–776). https://doi.org/10.1109/IJCNN.2011.6033299

  • Delbruck, T., & Lichtsteiner, P. (2007). Fast sensory motor control based on event-based hybrid neuromorphic-procedural system. In International symposium on circuits and systems.

  • Gallego, G., Forster, C., Mueggler, E., & Scaramuzza, D. (2015). Event-based camera pose tracking using a generative event model. CoRR arXiv:1510.01972.

  • Graber, G., Balzer, J., Soatto, S., & Pock, T. (2015). Efficient minimal-surface regularization of perspective depth maps in variational stereo. In CVPR.

  • Hartmann, J., Klüssendorff, J. H., & Maehle, E. (2013). A comparison of feature descriptors for visual SLAM. In European conference on mobile robots.

  • Kim, H., Handa, A., Benosman, R., Ieng, S. H., & Davison, A. (2014). Simultaneous mosaicing and tracking with an event camera. In BMVC.

  • Kim, H., Leutenegger, S., & Davison, A. (2016). Real-time 3D reconstruction and 6-DoF tracking with an event camera. In Proceedings of European conference on computer vision.

  • Krueger, M., Delmas, P., & Gimel’farb, G. L. (2008). Active contour based segmentation of 3D surfaces. In ECCV.

  • Lai, R., & Chan, T. F. (2011). A framework for intrinsic image processing on surfaces. Computer Vision and Image Understanding, 115(12), 1647–1661. Special issue on Optimization for Vision, Graphics and Medical Imaging: Theory and Applications.

  • Le, T., Chartrand, R., & Asaki, T. J. (2007). A variational approach to reconstructing images corrupted by Poisson noise. Journal of Mathematical Imaging and Vision, 27, 257–263.

  • Lee, J. M. (1997). Riemannian manifolds: An introduction to curvature. Graduate Texts in Mathematics. New York: Springer.

  • Lichtsteiner, P., Posch, C., & Delbruck, T. (2008). A \(128\times 128\) 120 dB 15 \(\mu \)s latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43(2), 566–576.

  • Lui, L. M., Gu, X., Chan, T. F., & Yau, S. T. (2008). Variational method on Riemann surfaces using conformal parameterization and its applications to image processing. Methods and Applications of Analysis, 15(4), 513–538.

  • Milford, M., Kim, H., Leutenegger, S., & Davison, A. (2015). Towards visual SLAM with event-based cameras. In The problem of mobile sensors workshop in conjunction with RSS.

  • Mittal, A., Moorthy, A. K., & Bovik, A. C. (2012). No-reference image quality assessment in the spatial domain. IEEE Transactions on Image Processing, 21(12), 4695–4708. https://doi.org/10.1109/TIP.2012.2214050.

  • Mueggler, E., Gallego, G., & Scaramuzza, D. (2015). Continuous-time trajectory estimation for event-based vision sensors. In Robotics: science and systems.

  • Mueggler, E., Huber, B., & Scaramuzza, D. (2014). Event-based, 6-DoF pose tracking for high-speed maneuvers. In International conference on intelligent robots and systems.

  • Mueggler, E., Rebecq, H., Gallego, G., Delbruck, T., & Scaramuzza, D. (2016). The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. arXiv:1610.08336.

  • Ratner, N., & Schechner, Y. Y. (2007). Illumination multiplexing within fundamental limits. In CVPR.

  • Rebecq, H., Gallego, G., & Scaramuzza, D. (2016). EMVS: Event-based multi-view stereo. In Proceedings of the British machine vision conference (BMVC).

  • Rebecq, H., Horstschaefer, T., Gallego, G., & Scaramuzza, D. (2017). EVO: A geometric approach to event-based 6-DoF parallel tracking and mapping in real time. IEEE Robotics and Automation Letters, 2(2), 593–600.

  • Reinbacher, C., Graber, G., & Pock, T. (2016). Real-time intensity-image reconstruction for event cameras using manifold regularisation. In British machine vision conference (BMVC).

  • Rudin, L. I., Osher, S., & Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1), 259–268.

  • Stam, J. (2003). Flows on surfaces of arbitrary topology. ACM Transactions on Graphics, 22(3), 724–731.

  • Steidl, G., & Teuber, T. (2010). Removing multiplicative noise by Douglas-Rachford splitting methods. Journal of Mathematical Imaging and Vision, 36(2), 168–184.

  • Weikersdorfer, D., Hoffmann, R., & Conradt, J. (2013). Simultaneous localization and mapping for event-based vision systems. In International conference on computer vision systems.

  • Wiesmann, G., Schraml, S., Litzenberger, M., Belbachir, A. N., Hofstätter, M., & Bartolozzi, C. (2012). Event-driven embodied system for feature extraction and object recognition in robotic applications. In CVPR workshops.

  • Zhang, L., Zhang, L., Mou, X., & Zhang, D. (2011). FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8), 2378–2386. https://doi.org/10.1109/TIP.2011.2109730.

Acknowledgements

This work was supported by the research initiative Mobile Vision with funding from the AIT and the Austrian Federal Ministry of Science, Research and Economy HRSM Programme (BGBl. II Nr. 292/2012).

Author information

Corresponding author

Correspondence to Gottfried Munda.

Additional information

Communicated by Xiaoou Tang.

About this article

Cite this article

Munda, G., Reinbacher, C. & Pock, T. Real-Time Intensity-Image Reconstruction for Event Cameras Using Manifold Regularisation. Int J Comput Vis 126, 1381–1393 (2018). https://doi.org/10.1007/s11263-018-1106-2