Abstract
A “concept” is a kind of discrete and abstract state representation, and is considered useful for efficient action planning. However, a concept is thought to emerge in the brain, a parallel processing and learning system, through learning from a variety of experiences, and so it is difficult to develop by hand-coding. In this paper, as a preliminary step toward “concept formation”, we investigate whether a discrete and abstract state representation is formed through learning in a task with multi-step state transitions, using the Actor-Q learning method and a recurrent neural network. After learning, the agent repeated a sequence twice, pushing a button to open a door and moving to the next room, and finally arrived at the third room to receive a reward. In two hidden neurons, a discrete and abstract state representation that did not depend on the door-opening pattern was observed. A further experiment with two recurrent neural networks, one for the Q-values and one for the Actors, suggested that the state representation emerged in order to generate appropriate Q-values.
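To make the architecture concrete, the following is a minimal sketch of an Actor-Q-style agent built on a recurrent network: a single Elman-type network receives continuous inputs and outputs both Q-values for discrete actions and continuous actor outputs, and the Q output of the chosen action is trained by a one-step TD update. All sizes, hyperparameters, and names here are hypothetical, and the update touches only the output weights for brevity; the paper presumably trains all weights (e.g., by backpropagation through time), so this is an illustration of the idea rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ActorQRNN:
    """Elman-style recurrent net whose outputs are Q-values for discrete
    actions plus continuous actor outputs (hypothetical sketch)."""

    def __init__(self, n_in, n_hidden, n_q, n_actor, lr=0.1):
        # hidden activations from the previous step feed back as context
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in + n_hidden + 1))
        self.W_out = rng.normal(0.0, 0.5, (n_q + n_actor, n_hidden + 1))
        self.n_q, self.lr = n_q, lr
        self.h = np.zeros(n_hidden)

    def forward(self, x):
        z = np.concatenate([x, self.h, [1.0]])   # input + context + bias
        self.h = sigmoid(self.W_in @ z)          # new hidden/context state
        self.hb = np.concatenate([self.h, [1.0]])
        y = sigmoid(self.W_out @ self.hb)
        return y[:self.n_q], y[self.n_q:]        # Q-values, actor outputs

    def td_update(self, hb, q_a, a, target):
        """One-step TD update of the output weights for the chosen action
        `a` only; a simplification of full recurrent training."""
        err = target - q_a
        self.W_out[a] += self.lr * err * q_a * (1.0 - q_a) * hb

# usage sketch with placeholder observations and reward
net = ActorQRNN(n_in=3, n_hidden=10, n_q=2, n_actor=1)
x = np.array([0.2, 0.8, 0.0])                    # continuous sensor inputs
q, actor = net.forward(x)
a = int(np.argmax(q))                            # greedy discrete action
hb_t, q_t = net.hb.copy(), q[a]                  # remember this step's state
r, gamma = 0.0, 0.9
x_next = np.array([0.1, 0.7, 1.0])               # next observation
q_next, _ = net.forward(x_next)
net.td_update(hb_t, q_t, a, r + gamma * q_next.max())
```

Because the hidden layer carries context across steps, the same sensory input can map to different hidden activations at different stages of the button-door-room sequence, which is what allows a discrete, stage-like internal representation to emerge.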
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Sawatsubashi, Y., Samsudin, M.F.b., Shibata, K. (2013). Emergence of Discrete and Abstract State Representation through Reinforcement Learning in a Continuous Input Task. In: Kim, JH., Matson, E., Myung, H., Xu, P. (eds) Robot Intelligence Technology and Applications 2012. Advances in Intelligent Systems and Computing, vol 208. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37374-9_2
DOI: https://doi.org/10.1007/978-3-642-37374-9_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-37373-2
Online ISBN: 978-3-642-37374-9