Abstract
In this study, a novel intelligent-agent model is proposed by introducing a dynamic emotion model into the conventional action-selection policy of reinforcement learning. Compared with conventional Q-learning, the proposed method adds two emotional factors to the state-action value function: an "arousal value," which affects the motivation for action, and a "pleasure value," which influences the probability of action selection. These emotional factors are also affected by other agents present in the perception area. Computer simulations of pursuit problems with static and dynamic prey were performed, and all results showed the effectiveness of the proposed method, i.e., faster learning convergence was confirmed compared with conventional Q-learning.
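The abstract does not give the paper's exact equations, so the following is only an illustrative sketch of the general idea: tabular Q-learning in which a per-state-action "pleasure" term biases softmax action probabilities and a scalar "arousal" term sharpens the softmax (modeling motivation). The class name, the softmax form, and the emotion-update rules (including the 0.9/0.1 decay constants) are all assumptions, not the authors' model.

```python
import math
import random
from collections import defaultdict

class EmotionalQLearningAgent:
    """Sketch of Q-learning with hypothetical 'arousal' and 'pleasure' factors."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, tau=1.0):
        self.q = defaultdict(float)          # Q(s, a) table
        self.pleasure = defaultdict(float)   # per (s, a) bias on selection probability
        self.arousal = 0.5                   # scalar in (0, 1]; higher -> greedier choice
        self.actions = actions
        self.alpha, self.gamma, self.tau = alpha, gamma, tau

    def select_action(self, state):
        # Pleasure biases each action's preference; arousal lowers the softmax
        # temperature, making the agent more strongly driven by its preferences.
        temp = self.tau / max(self.arousal, 1e-6)
        prefs = [(self.q[(state, a)] + self.pleasure[(state, a)]) / temp
                 for a in self.actions]
        m = max(prefs)                       # subtract max for numerical stability
        exps = [math.exp(p - m) for p in prefs]
        r = random.random() * sum(exps)
        acc = 0.0
        for a, e in zip(self.actions, exps):
            acc += e
            if acc >= r:
                return a
        return self.actions[-1]

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td
        # Hypothetical emotion dynamics: reward raises pleasure for the chosen
        # action; TD surprise raises arousal (both as leaky accumulators).
        self.pleasure[(state, action)] = (0.9 * self.pleasure[(state, action)]
                                          + 0.1 * reward)
        self.arousal = min(1.0, 0.9 * self.arousal + 0.1 * abs(td))
```

In a multi-agent setting such as the pursuit problem described here, the emotion terms would additionally be modulated by other agents within the perception area; that coupling is omitted from this single-agent sketch.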
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this paper
Kuremoto, T., Tsurusaki, T., Kobayashi, K., Mabu, S., Obayashi, M. (2013). A Model of Emotional Intelligent Agent for Cooperative Goal Exploration. In: Huang, DS., Bevilacqua, V., Figueroa, J.C., Premaratne, P. (eds) Intelligent Computing Theories. ICIC 2013. Lecture Notes in Computer Science, vol 7995. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-39479-9_3
Print ISBN: 978-3-642-39478-2
Online ISBN: 978-3-642-39479-9