Estimation of Subjective Evaluation of HRI Performance Based on Objective Behaviors of Human and Robots

  • Yoshiaki Mizuchi
  • Tetsunari Inamura
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11531)


The conventional approach to evaluating human-robot interaction (HRI) performance is subjective evaluation, such as questionnaires. Because such subjective evaluation is time-consuming, an alternative automatic evaluation method based only on objectively observable factors (i.e., human reaction behavior) is required for autonomous learning by robots and for scoring in robot competitions. To this end, we investigate the extent to which subjective evaluation results can be approximated using objective factors. As a case study, we designed and carried out a VR-based robot-competition task in which the robot was required to generate comprehensible and unambiguous natural-language expressions and gestures to guide inexpert users in everyday environments. During the competition, both event data and human behavioral data (i.e., interaction histories) were observed and stored. Additionally, to acquire subjective evaluation results, we asked third parties to evaluate the HRI performance by reviewing the stored interaction histories. From an analysis of the relationship between the objective factors and the subjective evaluation results, we demonstrate that the subjective evaluation of HRI can indeed be reasonably approximated on the basis of objective factors.


Keywords: Human-robot interaction · Natural language generation · RoboCup@Home · Virtual reality



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. National Institute of Informatics, Chiyoda-ku, Japan
  2. The Graduate University for Advanced Studies, SOKENDAI, Chiyoda-ku, Japan
