Reinforcement Learning and Apprenticeship Learning for Robotic Control

  • Andrew Y. Ng
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4264)


Many robotic control problems, such as autonomous helicopter flight, legged robot locomotion, and autonomous driving, remain challenging even for modern reinforcement learning algorithms. These problems are hard for several reasons: (i) it can be difficult to write down, in closed form, a formal specification of the control task (for example, what is the cost function for “driving well”?); (ii) it is often difficult to learn a good model of the robot’s dynamics; (iii) even given a complete specification of the problem, it is often computationally hard to find a good closed-loop controller for a high-dimensional, stochastic control task. However, when we are allowed to learn from a human demonstration of a task (in other words, when we are in the apprenticeship learning setting), a number of efficient algorithms can be used to address each of these problems.
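A core quantity in apprenticeship learning via inverse reinforcement learning is the discounted feature expectation of a policy: if the unknown cost function is linear in state features, a learner whose feature expectations match the expert's is guaranteed to perform comparably under any such cost. The sketch below illustrates this idea; the function name, the toy trajectories, and the 2-D feature vectors are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def feature_expectations(trajectories, gamma=0.95):
    """Estimate mu = E[ sum_t gamma^t * phi(s_t) ] by averaging over
    trajectories, where each trajectory is a sequence of feature
    vectors phi(s_t). (Illustrative helper, not from the paper.)"""
    mus = []
    for traj in trajectories:
        mu = sum((gamma ** t) * np.asarray(phi) for t, phi in enumerate(traj))
        mus.append(mu)
    return np.mean(mus, axis=0)

# Toy example with 2-D features: one expert demonstration and one
# trajectory from the learner's current policy.
expert_trajs = [[(1.0, 0.0), (1.0, 0.0)]]
learner_trajs = [[(0.0, 1.0), (0.0, 1.0)]]

mu_expert = feature_expectations(expert_trajs)
mu_learner = feature_expectations(learner_trajs)

# For any cost linear in these features with bounded weights, the gap
# between expert and learner performance is bounded by this margin.
margin = np.linalg.norm(mu_expert - mu_learner)
```

In the full algorithm, the learner repeatedly finds a reward (cost) direction under which the expert looks maximally better than all policies found so far, solves the resulting RL problem, and stops once this margin falls below a tolerance.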


Keywords: Cost Function · Reinforcement Learning · Control Task · Autonomous Driving · Policy Search
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Andrew Y. Ng
  1. Computer Science Department, Stanford University, Stanford, USA
