The Relevance of Social Cues in Assistive Training with a Social Robot
This paper examines whether social cues, such as facial expressions, can be used to adapt and tailor robot-assisted training in order to maximize performance and comfort. Specifically, it serves as a basis for determining whether key facial signals, including emotions and facial actions, are common among participants during a physical and cognitive training scenario. In the experiment, participants performed basic arm exercises with a social robot as a guide. We extracted facial features from video recordings of the participants and applied a recursive feature elimination algorithm to select a subset of discriminating facial features. These features are correlated with the user's performance and the difficulty level of the exercises. The long-term aim of this work is to develop an algorithm that can be used in robot-assisted training to allow a robot to tailor a training program based on the physical capabilities as well as the social cues of the user.
Keywords: Social cues · Facial signals · Robot-assisted training
This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 721619 for the SOCRATES project.
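To make the feature-selection step described in the abstract concrete, the following minimal sketch (not the authors' implementation) shows how recursive feature elimination with a linear classifier can prune a facial-feature matrix down to a small discriminating subset. The feature matrix, difficulty labels, estimator choice, and target feature count here are all placeholder assumptions.

```python
# Minimal sketch of recursive feature elimination (RFE) for selecting
# discriminating facial features. All data below is synthetic: in the
# study, rows would correspond to video frames or participants and
# columns to extracted facial signals (e.g. emotions, facial actions).
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))    # placeholder facial-feature matrix
y = rng.integers(0, 2, size=200)  # placeholder difficulty labels

# A linear SVM provides the per-feature weights that RFE uses to
# discard the weakest feature(s) on each elimination pass.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=5)
selector.fit(X, y)

selected = np.flatnonzero(selector.support_)
print("Selected feature indices:", selected)
```

The selected indices could then be checked against performance and exercise-difficulty measures, in the spirit of the correlation analysis the abstract describes.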