From Humans and Back: a Survey on Using Machine Learning to both Socially Perceive Humans and Explain to Them Robot Behaviours


Abstract

Purpose of Review

As intelligent robots enter our daily routines, it is important that they be equipped with adaptable social perception and explainable behaviours, and machine learning (ML) is often employed to achieve both. This paper identifies trends in how ML methods are used to model human social perception and to produce explainable robot behaviours.

Recent Findings

The literature shows substantial advances in ML methods applied to social perception and explainable behaviours. Some papers report models for robots to imitate humans, and others for humans to imitate robots. Further works use classical methods or propose new or improved ones, leading to better human-robot interaction performance.

Summary

This paper reviews ML-based work on social perception and explainable robot behaviours. We first present the literature background on these research areas and conclude with a discussion of limitations and future research avenues.



Author information


Corresponding author

Correspondence to François Ferland.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Service and Interactive Robotics


Cite this article

Panchea, A.M., Ferland, F. From Humans and Back: a Survey on Using Machine Learning to both Socially Perceive Humans and Explain to Them Robot Behaviours. Curr Robot Rep 1, 49–58 (2020). https://doi.org/10.1007/s43154-020-00013-6
