Effects of a Social Force Model Reward in Robot Navigation Based on Deep Reinforcement Learning

  • Óscar Gil
  • Alberto Sanfeliu
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1093)

Abstract

This paper proposes the inclusion of the Social Force Model (SFM) in a concrete Deep Reinforcement Learning (RL) framework for robot navigation. Techniques of this kind have proven useful for reaching a goal across different types of environments. In Deep RL, a description of the world that defines the states and a reward adapted to the environment are crucial to obtaining the desired behaviour and achieving high performance. For this reason, this work adds a dense reward function based on the SFM and uses the forces as an additional description within the states. Furthermore, obstacles are added to improve on works that only consider moving agents, since the SFM inclusion can offer a better description of the obstacles for navigation. Several simulations have been carried out to check the effects of these modifications on the average performance.
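As a rough illustration of the idea described above, the sketch below computes Helbing-Molnár style social forces and combines them into a dense reward term. This is a minimal sketch under stated assumptions: all function names, parameter values (A, B, tau, the weights) and the exact way forces are turned into a reward are assumptions made for this example, not the paper's actual formulation.

```python
import numpy as np

# Illustrative sketch only: Helbing-Molnar style social forces and a dense
# reward term derived from them. Parameter values and weights are assumed
# for illustration, not taken from the paper.

def goal_force(pos, vel, goal, v_desired=1.0, tau=0.5):
    """Attractive (driving) force steering the agent toward its goal."""
    direction = goal - pos
    norm = np.linalg.norm(direction)
    e_goal = direction / norm if norm > 1e-6 else np.zeros_like(direction)
    return (v_desired * e_goal - vel) / tau

def repulsive_force(pos, others, radius=0.6, A=2.0, B=0.3):
    """Exponential repulsion from other agents/obstacles (isotropic simplification)."""
    force = np.zeros(2)
    for other in others:
        diff = pos - other
        d = np.linalg.norm(diff)
        if d < 1e-6:
            continue
        n = diff / d  # unit vector pointing away from the other agent
        force += A * np.exp((radius - d) / B) * n
    return force

def sfm_dense_reward(pos, vel, goal, others, w_goal=0.1, w_social=-0.1):
    """Dense reward: reward motion aligned with the goal force, penalize strong repulsion."""
    f_goal = goal_force(pos, vel, goal)
    f_rep = repulsive_force(pos, others)
    return w_goal * float(np.dot(vel, f_goal)) + w_social * float(np.linalg.norm(f_rep))

# Example: robot at the origin moving right, goal ahead, one pedestrian nearby.
r = sfm_dense_reward(np.array([0.0, 0.0]), np.array([0.8, 0.0]),
                     np.array([5.0, 0.0]), [np.array([1.0, 0.5])])
print(r)
```

In the same spirit, the computed force vectors could be appended to the state representation given to the RL agent, which is the second use of the SFM mentioned in the abstract.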

Keywords

Robot navigation · Deep Reinforcement Learning · Social Force Model · Dense reward function

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Institut de Robòtica i Informàtica Industrial, CSIC-UPC, Barcelona, Spain