
A Sociable Human-robot Interaction Scheme Based on Body Emotion Analysis

  • Tehao Zhu
  • Zeyang Xia (corresponding author)
  • Jiaqi Dong
  • Qunfei Zhao
Regular Papers: Robot and Applications

Abstract

Many kinds of interaction schemes for human-robot interaction (HRI) have been reported in recent years. However, most of these schemes rely on recognizing human actions: once the recognition algorithm fails, the robot cannot proceed with its reactions. This issue has largely been overlooked in traditional HRI, yet it is the key to further improving the fluency and friendliness of interaction. In this work, a sociable HRI (SoHRI) scheme based on body emotion analysis was developed to achieve reasonable and natural interaction even when human actions are not recognized. First, the emotions conveyed by dynamic movements and static poses were quantified using Laban movement analysis. Second, an interaction strategy built on a finite state machine model was designed to describe the transition rules of the human emotion state. Finally, an appropriate interactive behavior of the robot was selected according to the inferred human emotion state. The quantification performance of SoHRI was verified on the UTD-MHAD dataset, and the whole scheme was evaluated using questionnaires completed by participants and spectators. The experimental results show that the SoHRI scheme analyzes body emotion precisely and helps the robot choose reasonable interactive behaviors.
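
For illustration only, the following is a minimal Python sketch of how a finite-state-machine interaction strategy of this kind might map a quantified body-emotion score to a robot behavior. The state names, thresholds, and behavior labels are assumptions made for the example and do not reproduce the authors' SoHRI implementation.

# Hypothetical sketch: a finite state machine over human emotion states,
# loosely following the SoHRI idea described in the abstract.
# States, thresholds, and behaviors are illustrative assumptions only.
from enum import Enum, auto


class EmotionState(Enum):
    CALM = auto()
    INTERESTED = auto()
    EXCITED = auto()


# Robot behavior chosen for each inferred emotion state (illustrative).
BEHAVIORS = {
    EmotionState.CALM: "greet",
    EmotionState.INTERESTED: "approach_and_talk",
    EmotionState.EXCITED: "mirror_gesture",
}


class SoHRIStateMachine:
    """Tracks the inferred human emotion state from a body-emotion score."""

    def __init__(self, low=0.3, high=0.7):
        self.state = EmotionState.CALM
        self.low, self.high = low, high  # assumed thresholds on a [0, 1] score

    def update(self, emotion_score: float) -> str:
        """Transition on a quantified body-emotion score and pick a behavior."""
        if emotion_score >= self.high:
            self.state = EmotionState.EXCITED
        elif emotion_score >= self.low:
            self.state = EmotionState.INTERESTED
        else:
            self.state = EmotionState.CALM
        return BEHAVIORS[self.state]


if __name__ == "__main__":
    fsm = SoHRIStateMachine()
    for score in (0.1, 0.5, 0.9):  # e.g., scores from an LMA-based quantifier
        print(score, fsm.update(score))

In the paper's terms, the score fed to update() would come from the Laban-movement-analysis quantification step, and the returned label would index the robot's interactive behavior.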

Keywords

Body emotion analysis, finite state machine, fuzzy inference, human-robot interaction, Laban movement analysis



Copyright information

© Institute of Control, Robotics and Systems and The Korean Institute of Electrical Engineers and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  • Tehao Zhu (1)
  • Zeyang Xia (2, corresponding author)
  • Jiaqi Dong (1)
  • Qunfei Zhao (1)

  1. Department of Automation, Shanghai Jiao Tong University, Shanghai, China
  2. Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
