Machine Learning and Knowledge Discovery in Databases

Lecture Notes in Computer Science, vol. 7524, pp. 211-226

Policy Iteration Based on a Learned Transition Model

  • Vivek Ramavajjala, Department of Computer Science & Engineering, University of California
  • Charles Elkan, Department of Computer Science & Engineering, University of California


This paper investigates a reinforcement learning method that combines learning a model of the environment with least-squares policy iteration (LSPI). The LSPI algorithm learns a linear approximation of the optimal state-action value function; the idea studied here is to let this value function depend on a learned estimate of the expected next state instead of directly on the current state and action. This approach makes it easier to define useful basis functions, and hence to learn a useful linear approximation of the value function. Experiments show that the new algorithm, called NSPI for next-state policy iteration, performs well on two standard benchmarks: the well-known mountain car and inverted pendulum swing-up tasks. More importantly, the NSPI algorithm performs well, and better than a specialized recent method, on a resource management task known as the day-ahead wind commitment problem. This latter task has high-dimensional, continuous state and action spaces.
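The core idea described above can be sketched in code. The toy example below is an illustrative assumption, not the authors' implementation: it fits a linear transition model to sampled transitions, evaluates radial basis functions on the *predicted* next state rather than on the raw state-action pair, and runs LSTDQ-style policy evaluation inside a small policy-iteration loop. All names, the 1-D chain domain, and the RBF choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D chain: states in [0, 1], actions move left (-1) or right (+1).
def true_step(s, a, noise=0.0):
    return np.clip(s + 0.1 * a + noise, 0.0, 1.0)

# 1. Learn a linear transition model s' ~ theta . [s, a, 1] by least squares.
S = rng.uniform(0, 1, 200)
A = rng.choice([-1.0, 1.0], 200)
S_next = true_step(S, A, rng.normal(0, 0.01, 200))
X = np.column_stack([S, A, np.ones_like(S)])
theta, *_ = np.linalg.lstsq(X, S_next, rcond=None)

def predict_next(s, a):
    return np.array([s, a, 1.0]) @ theta

# 2. Basis functions evaluated on the PREDICTED next state (the NSPI idea):
#    Gaussian RBFs over the predicted state, instead of features of (s, a).
centers = np.linspace(0, 1, 5)
def phi(s, a):
    s_hat = predict_next(s, a)
    return np.exp(-((s_hat - centers) ** 2) / 0.05)

# 3. LSTDQ-style policy evaluation: solve A w = b from sampled transitions,
#    acting greedily at the next state under the current weight vector.
gamma = 0.9
def reward(s):
    return 1.0 if s > 0.9 else 0.0  # reward only near the right end

def lstdq(weights, n=500):
    A_mat = np.zeros((5, 5))
    b = np.zeros(5)
    for _ in range(n):
        s = rng.uniform(0, 1)
        a = rng.choice([-1.0, 1.0])
        s2 = true_step(s, a)
        a2 = max((-1.0, 1.0), key=lambda u: phi(s2, u) @ weights)
        f, f2 = phi(s, a), phi(s2, a2)
        A_mat += np.outer(f, f - gamma * f2)
        b += f * reward(s2)
    return np.linalg.solve(A_mat + 1e-6 * np.eye(5), b)

# Policy iteration: alternate evaluation and greedy improvement via w.
w = np.zeros(5)
for _ in range(5):
    w = lstdq(w)

# Query the greedy action at s = 0.8 under the learned value function.
best = max((-1.0, 1.0), key=lambda u: phi(0.8, u) @ w)
print(best)
```

Because the basis functions take only the predicted next state as input, they remain low-dimensional even when the action space is large or continuous, which is the practical advantage the abstract highlights for the wind commitment task.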