V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12347)

Abstract

In this paper, we explore the use of vehicle-to-vehicle (V2V) communication to improve the perception and motion forecasting performance of self-driving vehicles. By intelligently aggregating the information received from multiple nearby vehicles, we can observe the same scene from different viewpoints. This allows us to see through occlusions and detect actors at long range, where the observations are very sparse or non-existent. We also show that our approach of sending compressed deep feature map activations achieves high accuracy while satisfying communication bandwidth requirements.
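The pipeline sketched in the abstract — each vehicle computes a bird's-eye-view feature map, compresses it, broadcasts it, and the receiving vehicle decompresses and aggregates the maps before detection and motion forecasting — can be illustrated with a minimal sketch. The code below is not the paper's implementation: int8 quantization plus zlib stands in for the learned variational codec, a mean over spatially aligned maps stands in for the graph-neural-network aggregation, and all function names and the feature-map shape are placeholders chosen for illustration.

```python
import zlib
import numpy as np

def compress_features(feats: np.ndarray):
    """Quantize float features to uint8 and deflate them for transmission.
    Stand-in for the learned codec used in the paper."""
    lo, hi = float(feats.min()), float(feats.max())
    scale = (hi - lo) / 255.0 or 1.0
    q = np.round((feats - lo) / scale).astype(np.uint8)
    return zlib.compress(q.tobytes()), lo, scale

def decompress_features(payload: bytes, lo: float, scale: float, shape):
    """Invert the quantization on the receiving vehicle."""
    q = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(shape)
    return q.astype(np.float32) * scale + lo

def fuse(received):
    """Placeholder aggregation: average feature maps that are assumed to be
    already warped into the receiver's coordinate frame. The paper learns
    this cross-vehicle fusion instead of averaging."""
    return np.mean(np.stack(received, axis=0), axis=0)

# Toy usage: three vehicles share C x H x W bird's-eye-view features.
shape = (128, 200, 200)
maps = [np.random.randn(*shape).astype(np.float32) for _ in range(3)]
packets = [compress_features(m) for m in maps]
recovered = [decompress_features(p, lo, s, shape) for p, lo, s in packets]
fused = fuse(recovered)
print(fused.shape, f"payload per vehicle ~ {len(packets[0][0]) / 1e6:.2f} MB")
```

In the paper the fusion step is learned jointly with detection and forecasting, so the mean above only marks where cross-vehicle aggregation would take place, and the payload size printed here is not representative of the learned codec's bandwidth.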

Keywords

Autonomous driving · Object detection · Motion forecasting

Notes

Acknowledgments

We gratefully acknowledge James Tu for his valuable contributions to the final paper.

Supplementary material

504434_1_En_36_MOESM1_ESM.zip (44.2 MB)
Supplementary material 1 (zip 45253 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Uber ATG, Pittsburgh, USA
  2. University of Toronto, Toronto, Canada