Reward Function and Initial Values: Better Choices for Accelerated Goal-Directed Reinforcement Learning

  • Laëtitia Matignon
  • Guillaume J. Laurent
  • Nadine Le Fort-Piat
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4131)

Abstract

An important issue in Reinforcement Learning (RL) is how to accelerate or otherwise improve the learning process. In this paper, we study the influence of several RL parameters on learning speed. Although the convergence properties of RL have been widely studied, no precise rules exist for choosing the reward function and the initial Q-values. Our method guides the choice of these parameters in the context of reaching a goal in minimal time. We develop a theoretical analysis and provide experimental evidence for choosing, on the one hand, the reward function and, on the other hand, particular initial Q-values based on a goal bias function.
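
The two design choices discussed in the abstract, a goal-directed reward function and goal-biased initial Q-values, can be illustrated with a short tabular Q-learning sketch. The 1-D corridor task, the reward of 1 given only at the goal, and the bias of the form gamma^(distance to goal) below are illustrative assumptions, not the exact setup or bias function studied in the paper.

```python
import random

# Minimal tabular Q-learning sketch of the two choices discussed above:
# a goal-only reward function and goal-biased (optimistic) initial Q-values.
# The 1-D corridor task, the bias gamma ** distance_to_goal and all
# hyperparameters are illustrative assumptions, not the paper's exact setup.

N_STATES = 10            # states 0..9, goal at the right end
ACTIONS = (-1, +1)       # move left / move right
GOAL = N_STATES - 1
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.1


def reward(next_state):
    """Goal-directed reward: non-zero only when the goal is reached."""
    return 1.0 if next_state == GOAL else 0.0


def goal_biased_q():
    """Initial Q-values biased by an estimate of the distance to the goal."""
    return {(s, a): GAMMA ** abs(GOAL - min(max(s + a, 0), GOAL))
            for s in range(N_STATES) for a in ACTIONS}


def step(state, action):
    next_state = min(max(state + action, 0), GOAL)
    return next_state, reward(next_state)


def q_learning(q, episodes=200):
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection
            if random.random() < EPSILON:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            # terminal states bootstrap with value 0
            boot = 0.0 if s2 == GOAL else max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * boot - q[(s, a)])
            s = s2
    return q


if __name__ == "__main__":
    q = q_learning(goal_biased_q())
    print("Q at start state:", max(q[(0, a)] for a in ACTIONS))
```

With a goal-only reward, an agent initialized at zero must stumble onto the goal before any value information propagates back; a goal-biased initialization gives earlier states a head start that shrinks with distance, which is the kind of effect on learning speed the paper quantifies.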

Keywords

Optimal Policy · Reinforcement Learning · Goal State · Reward Function · Systematic Exploration

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Laëtitia Matignon (1)
  • Guillaume J. Laurent (1)
  • Nadine Le Fort-Piat (1)

  1. Laboratoire d’Automatique de Besançon, UMR CNRS 6596, Besançon, France
