Multimodal Path Planning Using Potential Field for Human–Robot Interaction

  • Yosuke Kawasaki
  • Ayanori Yorozu (corresponding author)
  • Masaki Takahashi
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 867)


In human–robot interaction, a robot must move to a position from which it can obtain accurate information about people, such as their positions, postures, and voices, because the accuracy of human recognition depends on the positional relation between the person and the robot. In addition, the robot should choose which sensor data to focus on during a task that involves interaction. The approach path toward people should therefore be chosen to improve human recognition accuracy and thereby ease task execution. Accordingly, a path-planning method is needed that simultaneously considers sensor characteristics, human recognition accuracy, and the task contents. Although some previous studies proposed path-planning methods that consider sensor characteristics, they did not consider the task or the human recognition accuracy, both of which are important for practical application. Consequently, we present a path-planning method based on multimodal information that fuses the task contents and the human recognition accuracy.
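The method builds on potential-field path planning. The paper's full formulation additionally weights the field with task and human-recognition terms, which are not reproduced here; the sketch below shows only the classical attractive/repulsive artificial potential field with gradient descent, using hypothetical gains `k_att`, `k_rep` and influence radius `rho0` chosen for illustration.

```python
import numpy as np

def attractive_grad(pos, goal, k_att=1.0):
    # Gradient of the quadratic attractive potential
    # U_att = 0.5 * k_att * ||pos - goal||^2
    return k_att * (pos - goal)

def repulsive_grad(pos, obstacle, k_rep=1.0, rho0=1.0):
    # Gradient of the repulsive potential; active only within
    # the influence radius rho0 of the obstacle
    diff = pos - obstacle
    rho = np.linalg.norm(diff)
    if rho >= rho0 or rho == 0.0:
        return np.zeros_like(pos)
    return -k_rep * (1.0 / rho - 1.0 / rho0) * (1.0 / rho**2) * (diff / rho)

def plan_path(start, goal, obstacles, step=0.05, tol=0.1, max_iter=2000):
    # Follow the negative gradient of the combined field until
    # the goal is reached (or the iteration budget is spent)
    pos = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    path = [pos.copy()]
    for _ in range(max_iter):
        grad = attractive_grad(pos, goal)
        for obs in obstacles:
            grad += repulsive_grad(pos, np.asarray(obs, dtype=float))
        pos = pos - step * grad / max(np.linalg.norm(grad), 1e-9)
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < tol:
            break
    return np.array(path)
```

In the multimodal setting described above, the goal position and the field weights would be modulated by the sensor characteristics and the task, rather than fixed as in this sketch.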


Keywords: Human–robot interaction · Multimodal path planning · Potential field



This study was supported by “A Framework PRINTEPS to Develop Practical Artificial Intelligence” of the Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Agency (JST) under Grant Number JPMJCR14E3.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yosuke Kawasaki¹
  • Ayanori Yorozu² (corresponding author)
  • Masaki Takahashi¹
  1. Department of System Design Engineering, Keio University, Yokohama, Japan
  2. Graduate School of Science and Technology, Keio University, Yokohama, Japan
