Abstract
Socially assistive robots are designed to cooperate with humans in areas such as healthcare, education, and assistance. In situations where the workforce is scarce, and when these machines work with special populations such as older adults or children, their behavior must be appropriate and seem natural. In this contribution, we present a Deep Reinforcement Learning model for the autonomous adaptive behavior of social robots. The model emulates some aspects of human biology by generating artificial, biologically inspired functions, such as sleep or entertainment, to endow robots with long-term autonomous behavior. The Deep Reinforcement Learning system overcomes classical Reinforcement Learning limitations, such as handling high-dimensional state-action spaces, by learning which actions best suit each situation the robot experiences. In addition, the system aims to maintain the robot’s internal state in the best possible condition while sustaining human-robot interaction. The results show that our robot Mini correctly learns how to regulate the deficits in its biological processes by selecting from six actions across a high diversity of situations that merge the state of the biological processes with the external stimuli the robot perceives from the environment.
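The decision mechanism described above can be illustrated with a minimal sketch: a small Q-network maps a merged state vector (biological-process deficits plus perceived stimuli) to one value per candidate action, and the robot picks among its six actions with an epsilon-greedy rule. All dimensions, the network shape, and the deficit/stimuli encoding below are illustrative assumptions, not the paper's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: deficits of biologically inspired processes
# (e.g. sleep, entertainment) merged with perceived external stimuli
# into one state vector; the robot chooses among six actions.
N_DEFICITS, N_STIMULI, N_ACTIONS = 4, 3, 6
STATE_DIM = N_DEFICITS + N_STIMULI

# Minimal two-layer Q-network (forward pass only, random weights).
W1 = rng.normal(0.0, 0.1, (STATE_DIM, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Map a merged deficit+stimuli state to one Q-value per action."""
    hidden = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return hidden @ W2 + b2

def select_action(state, epsilon=0.1):
    """Epsilon-greedy selection over the six candidate actions."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))    # explore
    return int(np.argmax(q_values(state)))     # exploit

# Example state: deficit levels and stimuli intensities in [0, 1].
state = np.concatenate([rng.uniform(0.0, 1.0, N_DEFICITS),
                        rng.uniform(0.0, 1.0, N_STIMULI)])
action = select_action(state)
```

In a full Deep Q-Learning setup (as in Mnih et al., 2015), the weights would be trained from a reward signal that penalizes growing deficits, so that over time the greedy action is the one that best regulates the robot's internal state.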
Miguel Ángel Salichs: The research leading to these results has received funding from the projects: Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Ministerio de Ciencia, Innovación y Universidades; and Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by the Agencia Estatal de Investigación (AEI), Spanish Ministerio de Ciencia e Innovación. This publication is part of the R&D&I project PLEC2021-007819, funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Maroto-Gómez, M., Malfaz, M., Castro-González, Á., Salichs, M.Á. (2022). Deep Reinforcement Learning for the Autonomous Adaptive Behavior of Social Robots. In: Cavallo, F., et al. Social Robotics. ICSR 2022. Lecture Notes in Computer Science(), vol 13817. Springer, Cham. https://doi.org/10.1007/978-3-031-24667-8_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-24666-1
Online ISBN: 978-3-031-24667-8
eBook Packages: Computer Science, Computer Science (R0)