
Entropy Minimisation Framework for Event-Based Vision Model Estimation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12350)

Abstract

We propose a novel Entropy Minimisation (EMin) framework for event-based vision model estimation. The framework extends previous event-based motion compensation algorithms to handle models whose outputs have arbitrary dimensions. The main motivation comes from estimating motion from events directly in 3D space (e.g. events augmented with depth), without projecting them onto an image plane. This is achieved by modelling the event alignment according to candidate parameters and minimising the resultant dispersion. We provide a family of suitable entropy loss functions and an efficient approximation whose complexity is only linear in the number of events (i.e. the complexity does not depend on the number of image pixels). The framework is evaluated on several motion estimation problems, including optical flow and rotational motion. As proof of concept, we also test our framework on 6-DOF estimation by performing the optimisation directly in 3D space.
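
As a rough illustration of the idea described above (not the authors' implementation), the following Python sketch estimates a constant optical-flow model by warping events with candidate parameters and minimising a dispersion measure of the warped events. The Gaussian-kernel pairwise potential used as the loss is an illustrative stand-in for the entropy loss family provided in the paper, and the O(N^2) pairwise sum is kept for clarity rather than the linear-time approximation the paper proposes; the variable names and synthetic data are assumptions made for this sketch.

import numpy as np
from scipy.optimize import minimize

def warp_events(theta, xy, t):
    # Constant optical-flow model (assumed for this sketch): shift each event
    # back to the start of the time window along the candidate flow (vx, vy).
    return xy - np.outer(t - t[0], theta)

def dispersion_loss(theta, xy, t, sigma=1.0):
    # Pairwise Gaussian potential of the warped events; its negative is small
    # when the warped events are tightly clustered (i.e. well aligned).
    w = warp_events(theta, xy, t)
    d2 = np.sum((w[:, None, :] - w[None, :, :]) ** 2, axis=-1)
    return -np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))

# Synthetic usage: 500 events from a small blob translating with a known flow.
rng = np.random.default_rng(0)
true_flow = np.array([30.0, -10.0])          # pixels per second
t = np.sort(rng.uniform(0.0, 0.05, 500))     # timestamps within a 50 ms window
xy = np.array([10.0, 10.0]) + 0.5 * rng.standard_normal((500, 2)) + np.outer(t, true_flow)
res = minimize(dispersion_loss, np.zeros(2), args=(xy, t), method="Nelder-Mead")
print("estimated flow (px/s):", res.x)       # expected to land near (30, -10)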

Keywords

Event-based vision · Optimisation framework · Model estimation · Entropy Minimisation

Notes

Acknowledgements

Urbano Miguel Nunes was supported by the Portuguese Foundation for Science and Technology under Doctoral Grant with reference SFRH/BD/130732/2017. Yiannis Demiris is supported by a Royal Academy of Engineering Chair in Emerging Technologies. This research was supported in part by EPSRC Grant EP/S032398/1. The authors thank the reviewers for their insightful feedback, and the members of the Personal Robotics Laboratory at Imperial College London for their support.

Supplementary material

Supplementary material 1: 504441_1_En_10_MOESM1_ESM.pdf (4.5 MB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Personal Robotics Laboratory, Department of Electrical and Electronic Engineering, Imperial College London, London, UK
