Machine Learning, Volume 8, Issue 3, pp 293–321

Self-improving reactive agents based on reinforcement learning, planning and teaching

  • Long-Ji Lin
Article

DOI: 10.1007/BF00992699

Cite this article as:
Lin, LJ. Mach Learn (1992) 8: 293. doi:10.1007/BF00992699

Abstract

To date, reinforcement learning has mostly been studied in solving simple learning tasks. Moreover, the reinforcement learning methods studied so far typically converge slowly. The purpose of this work is thus two-fold: 1) to investigate the utility of reinforcement learning in solving much more complicated learning tasks than those previously studied, and 2) to investigate methods that speed up reinforcement learning.

This paper compares eight reinforcement learning frameworks: adaptive heuristic critic (AHC) learning due to Sutton, Q-learning due to Watkins, and three extensions to both basic methods for speeding up learning. The three extensions are experience replay, learning action models for planning, and teaching. The frameworks were investigated using connectionism as an approach to generalization. To evaluate the performance of the different frameworks, a dynamic environment was used as a testbed. The environment is moderately complex and nondeterministic. This paper describes these frameworks and algorithms in detail and presents an empirical evaluation of the frameworks.
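The experience-replay extension named in the abstract can be illustrated with a minimal sketch. The snippet below is not the paper's method: it uses a Q-table on a toy one-dimensional corridor, whereas the paper uses connectionist networks for generalization, and the function name, environment, and all parameter values are illustrative assumptions. It shows the core idea only: stored past experiences are repeatedly re-presented to the one-step Q-learning update rule.

```python
import random
from collections import defaultdict, deque

def q_learning_with_replay(n_states=6, episodes=200, alpha=0.5, gamma=0.9,
                           epsilon=0.1, replay_size=32, seed=0):
    """Tabular Q-learning with experience replay on a toy corridor.

    States 0..n_states-1; the agent starts at 0, the rightmost state is
    the goal (reward 1, episode ends). Actions move left (-1) or right (+1).
    """
    rng = random.Random(seed)
    Q = defaultdict(float)        # Q[(state, action)], zero-initialized
    buffer = deque(maxlen=1000)   # replay memory of (s, a, r, s') tuples
    actions = (-1, +1)

    def update(s, a, r, s2):
        # One-step Q-learning backup: Q(s,a) += alpha * TD error.
        best_next = 0.0 if s2 is None else max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    for _ in range(episodes):
        s = 0
        for _step in range(100):  # step cap keeps episodes bounded
            if rng.random() < epsilon:
                a = rng.choice(actions)
            else:
                # Greedy action, ties broken at random.
                a = max(actions, key=lambda b: (Q[(s, b)], rng.random()))
            s2 = max(0, s + a)
            if s2 == n_states - 1:     # reached the goal
                r, s2 = 1.0, None
            else:
                r = 0.0
            buffer.append((s, a, r, s2))
            update(s, a, r, s2)
            # Experience replay: re-learn from randomly drawn past experiences,
            # squeezing more updates out of each interaction with the world.
            for e in rng.sample(list(buffer), min(replay_size, len(buffer))):
                update(*e)
            s = s2
            if s is None:
                break
    return Q
```

In this sketch each real step triggers up to `replay_size` additional backups from memory, which is the mechanism by which replay reduces the number of environment interactions needed, at the cost of extra computation per step.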

Keywords

Reinforcement learning, planning, teaching, connectionist networks

Copyright information

© Kluwer Academic Publishers 1992

Authors and Affiliations

  • Long-Ji Lin
    1. School of Computer Science, Carnegie Mellon University, Pittsburgh
