Abstract
Effective and successful interactions between robots and people are possible only when each party can infer the other's intentions, beliefs, and goals. In particular, a robot's mental model needs to be transparent if the robot is to be accepted by people and to facilitate collaboration between the parties involved. In this study, we investigate how to create legible emotional robot behaviours that make the robot's decision-making process more transparent to people. Specifically, we used emotions to express the robot's internal state and to provide feedback during an interactive learning process. We involved 28 participants in an online study in which they rated the robot's behaviours, designed in terms of colours, icons, movements, and gestures, according to the perceived intentions and emotions.
This work was partially supported by the Italian PON I&C 2014-2020 programme within the BRILLO research project "Bartending Robot for Interactive Long-Lasting Operations", no. F/190066/01-02/X44, and by the CHIST-ERA IV COHERENT project "COllaborative HiErarchical Robotic ExplaNaTions".
© 2021 Springer Nature Switzerland AG
Cite this paper
Rossi, A., Scheunemann, M.M., L’Arco, G., Rossi, S. (2021). Evaluation of a Humanoid Robot’s Emotional Gestures for Transparent Interaction. In: Li, H., et al. Social Robotics. ICSR 2021. Lecture Notes in Computer Science(), vol 13086. Springer, Cham. https://doi.org/10.1007/978-3-030-90525-5_34
Print ISBN: 978-3-030-90524-8
Online ISBN: 978-3-030-90525-5