Diverse and Admissible Trajectory Forecasting Through Multimodal Context Understanding

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12356)


Multi-agent trajectory forecasting in autonomous driving requires an agent to accurately anticipate the behaviors of surrounding vehicles and pedestrians for safe and reliable decision-making. Due to partial observability in these dynamical scenes, directly obtaining the posterior distribution over future agent trajectories remains a challenging problem. In realistic embodied environments, each agent's future trajectories should be both diverse, since multiple plausible sequences of actions can be used to reach its intended goals, and admissible, since they must obey physical constraints and stay in drivable areas. In this paper, we propose a model that synthesizes multiple input signals from the multimodal world (the environment's scene context and the interactions between surrounding agents) to best model all diverse and admissible trajectories. We compare our model with strong baselines and ablations across two public datasets and show a significant performance improvement over previous state-of-the-art methods. Lastly, we offer new metrics incorporating admissibility criteria to further study and evaluate the diversity of predictions. Code is available at:
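The evaluation idea in the last sentence, measuring diversity only over admissible predictions, can be sketched as follows. This is a minimal illustration, not the paper's actual metrics: the grid-mask representation of the drivable area, the function names, and the mean pairwise endpoint distance as the diversity measure are all assumptions made for the example.

```python
import numpy as np

def admissible(traj, drivable):
    """Check that a trajectory stays on the drivable area.

    traj: (T, 2) integer grid coordinates (row, col); hypothetical format.
    drivable: (H, W) boolean mask, True where the map is drivable.
    """
    rows, cols = traj[:, 0], traj[:, 1]
    in_bounds = ((rows >= 0) & (rows < drivable.shape[0]) &
                 (cols >= 0) & (cols < drivable.shape[1]))
    return bool(in_bounds.all()) and bool(drivable[rows, cols].all())

def admissible_diversity(samples, drivable):
    """Diversity restricted to admissible samples.

    Filters out trajectories that leave the drivable area, then scores
    the survivors by mean pairwise distance between their endpoints.
    Returns 0.0 when fewer than two samples are admissible.
    """
    kept = [s for s in samples if admissible(s, drivable)]
    if len(kept) < 2:
        return 0.0
    ends = np.stack([s[-1] for s in kept]).astype(float)
    dists = np.linalg.norm(ends[:, None, :] - ends[None, :, :], axis=-1)
    n = len(kept)
    return float(dists.sum() / (n * (n - 1)))  # mean over ordered pairs
```

The key design point this illustrates: a model that scatters samples off the road can score highly on a naive diversity metric, so admissibility filtering must happen before spread is measured.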


Trajectory forecasting · Diversity · Admissibility · Generative modeling · Autonomous driving



This work was supported in part by the Technology Innovation Program under Grant 10083646 (Development of Deep Learning-Based Future Prediction and Risk Assessment Technology considering Inter-vehicular Interaction in Cut-in Scenario), funded by the Ministry of Trade, Industry, and Energy, South Korea. We also acknowledge the anonymous reviewers for their constructive comments.

Supplementary material

Supplementary material 1: 504452_1_En_17_MOESM1_ESM.pdf (PDF, 2.4 MB)



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Hanyang University, Seoul, Korea
  2. Yonsei University, Seoul, Korea
  3. Korea University, Seoul, Korea
  4. Carnegie Mellon University, Pittsburgh, USA
  5. Sogang University, Seoul, Korea
  6. Bosch Research Pittsburgh, Pittsburgh, USA
