Tile Coding Based on Hyperplane Tiles

  • Daniele Loiacono
  • Pier Luca Lanzi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5323)


In large and continuous state-action spaces, reinforcement learning relies heavily on function approximation techniques. Tile coding is a well-known function approximator that has been successfully applied to many reinforcement learning tasks. In this paper we introduce hyperplane tile coding, in which the usual tiles are replaced by parameterized hyperplanes that approximate the action-value function. We compared the performance of hyperplane tile coding with that of the usual tile coding on three well-known benchmark problems. Our results suggest that hyperplane tiles improve the generalization capabilities of the tile coding approximator: with hyperplane tile coding, broad generalizations over the problem space cause only a mild degradation of performance, whereas with the usual tile coding they can degrade performance dramatically.
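To make the idea concrete, the sketch below contrasts the two schemes described in the abstract: in standard tile coding each active tile contributes a single learned scalar, while in the hyperplane variant each tile stores a small linear model (a bias plus one slope per input dimension) whose prediction varies across the tile. This is an illustrative reconstruction, not the authors' exact formulation; all class and parameter names (e.g. `HyperplaneTileCoder`, the NLMS-style normalized update) are assumptions made for the example.

```python
import numpy as np

class HyperplaneTileCoder:
    """Illustrative sketch: tile coding where every tile holds a
    hyperplane (bias + slope per dimension) rather than one scalar.
    Names and update rule are assumptions, not the paper's exact method."""

    def __init__(self, n_tilings=8, tiles_per_dim=8, dims=1,
                 lows=None, highs=None, alpha=0.1, seed=0):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.lows = np.zeros(dims) if lows is None else np.asarray(lows, float)
        self.highs = np.ones(dims) if highs is None else np.asarray(highs, float)
        rng = np.random.default_rng(seed)
        # each tiling is shifted by a random fraction of a tile width
        self.offsets = rng.random((n_tilings, dims))
        # per-tile parameters: one bias + one slope per input dimension
        shape = (n_tilings,) + (tiles_per_dim + 1,) * dims
        self.w = np.zeros(shape + (dims + 1,))
        self.alpha = alpha / n_tilings  # split the step size across tilings

    def _tile_index(self, t, s):
        scaled = (s - self.lows) / (self.highs - self.lows) * self.tiles_per_dim
        idx = np.floor(scaled + self.offsets[t]).astype(int)
        return tuple(np.clip(idx, 0, self.tiles_per_dim))

    def predict(self, s):
        s = np.asarray(s, float)
        x = np.concatenate(([1.0], s))  # hyperplane features: [1, s_1, ...]
        return sum(self.w[(t,) + self._tile_index(t, s)] @ x
                   for t in range(self.n_tilings))

    def update(self, s, target):
        s = np.asarray(s, float)
        x = np.concatenate(([1.0], s))
        error = target - self.predict(s)
        # Widrow-Hoff style correction of each active tile's hyperplane,
        # normalized by the feature norm (an NLMS-like choice for stability)
        step = self.alpha * error / (x @ x)
        for t in range(self.n_tilings):
            self.w[(t,) + self._tile_index(t, s)] += step * x
```

Because each tile now fits a local linear model, widening the tiles (coarser generalization) degrades the fit gradually instead of flattening the value function to a piecewise constant, which is one intuition behind the softer degradation the abstract reports.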





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Daniele Loiacono (1)
  • Pier Luca Lanzi (1, 2)

  1. Artificial Intelligence and Robotics Laboratory (AIRLab), Milano, Italy
  2. Illinois Genetic Algorithm Laboratory (IlliGAL), University of Illinois at Urbana-Champaign, Urbana, USA