The Relevance of Social Cues in Assistive Training with a Social Robot

  • Neziha Akalin
  • Andrey Kiselev
  • Annica Kristoffersson
  • Amy Loutfi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11357)

Abstract

This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor robot-assisted training in order to maximize performance and comfort. Specifically, the paper serves as a basis for determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises guided by a social robot. We extracted facial features from video recordings of the participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the user's performance and the difficulty level of the exercises. The long-term aim of this work, building on the results presented here, is to develop an algorithm that can be used in robot-assisted training to allow a robot to tailor a training program to both the physical capabilities and the social cues of its users.
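
As a concrete illustration of the feature-selection step described above, the sketch below runs recursive feature elimination (RFE) over a matrix of per-sample facial features. It assumes scikit-learn and NumPy are available; the linear-SVM estimator, the synthetic data, and the number of retained features are illustrative assumptions, not the paper's exact setup.

    # Minimal RFE sketch: rank facial features by how well they
    # discriminate a performance label. All data here is synthetic.
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 20))    # 60 samples x 20 facial features
    y = rng.integers(0, 2, size=60)  # e.g. high vs. low performance

    # A linear SVM exposes per-feature weights; RFE repeatedly drops
    # the lowest-weight feature until 5 remain.
    selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
    selector.fit(X, y)

    print("selected feature indices:", np.flatnonzero(selector.support_))
    print("elimination ranking:", selector.ranking_)

In practice, X would hold the facial-action and emotion features extracted from the video recordings and y a performance or difficulty label, but those details are not specified in the abstract.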

Keywords

Social cues · Facial signals · Robot-assisted training

Acknowledgement

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 721619 for the SOCRATES project.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Neziha Akalin (1)
  • Andrey Kiselev (1)
  • Annica Kristoffersson (1)
  • Amy Loutfi (1)

  1. Örebro University, Örebro, Sweden