Effectiveness of Considering State Similarity for Reinforcement Learning
This paper presents a novel approach that locates states with similar sub-policies and incorporates this information into the reinforcement learning framework to improve learning performance. Similar states are identified through their common action sequences, which are derived from possible optimal policies and stored in a tree structure. Based on the number of such shared sequences, we define a similarity function between two states, which allows updates to the action-value function of one state to be reflected onto all similar states. In this way, experience acquired during learning is applied in a broader context. The effectiveness of the method is demonstrated empirically.
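To make the mechanism concrete, here is a minimal sketch in a tabular Q-learning setting. It is an illustration of the idea, not the paper's exact algorithm: the flat per-state sets of action sequences (the paper stores these in a path tree), the Jaccard-style normalization, and the similarity threshold are all assumptions made for brevity.

```python
from collections import defaultdict

# Sketch: each state accumulates the action sequences observed from it
# under (near-)optimal behavior; two states are deemed similar in
# proportion to the sequences they share, and every temporal-difference
# update on a state is replayed, scaled by similarity, on similar states.
# Names, normalization, and threshold are illustrative assumptions.

ALPHA, GAMMA = 0.1, 0.95

Q = defaultdict(float)         # (state, action) -> estimated value
sequences = defaultdict(set)   # state -> set of observed action sequences
                               # (flat sets stand in for the paper's path tree)

def record_sequence(state, actions_taken):
    """Remember an action sequence observed from `state`."""
    sequences[state].add(tuple(actions_taken))

def similarity(s1, s2):
    """Jaccard-style ratio of shared action sequences (an assumed
    normalization; the paper defines its own similarity function)."""
    a, b = sequences[s1], sequences[s2]
    return len(a & b) / len(a | b) if a and b else 0.0

def update(s, a, r, s_next, actions, states, threshold=0.5):
    """One Q-learning step, with the TD error also applied, scaled by
    similarity, to every sufficiently similar state."""
    target = r + GAMMA * max(Q[(s_next, b)] for b in actions)
    delta = target - Q[(s, a)]
    Q[(s, a)] += ALPHA * delta
    for s2 in states:
        sim = similarity(s, s2)
        if s2 != s and sim >= threshold:
            Q[(s2, a)] += ALPHA * sim * delta
```

The design point the abstract emphasizes is the last loop: a single experienced transition improves the value estimates of every state whose observed behavior is sufficiently similar, so experience is reused across the state space rather than updating one entry at a time.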
Keywords: Optimal Policy, Reinforcement Learning, Action Sequence, Path Tree, Reinforcement Learning Algorithm