Trading Value and Information in MDPs

  • Jonathan Rubin
  • Ohad Shamir
  • Naftali Tishby
Part of the Intelligent Systems Reference Library book series (ISRL, volume 28)

Abstract

Interactions between an organism and its environment are commonly treated in the framework of Markov Decision Processes (MDPs). While the standard MDP formulation is aimed solely at maximizing expected future rewards (value), the circular flow of information between the agent and its environment is generally ignored. In particular, the information gained from the environment by means of perception and the information involved in the process of action selection (i.e., control) are not treated in the standard MDP setting. In this paper, we focus on the control information and show how it can be combined with the reward measure in a unified way. Both of these measures satisfy the familiar Bellman recursive equations, and their linear combination (the free energy) provides an interesting new optimization criterion. The tradeoff between value and information, explored using our info-rl algorithm, provides a principled justification for stochastic (soft) policies. We use computational learning theory to show that these optimal policies are also robust to uncertainties in settings with only partial knowledge of the MDP parameters.
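The abstract states that both the value and the control information satisfy Bellman-type recursions, and that their linear combination (the free energy) yields an optimization criterion whose solution is a stochastic (soft) policy. The Python sketch below illustrates one way such a free-energy backup can look. It is a hedged illustration only, not the authors' published info-rl code: the function name soft_value_iteration, the reward-maximizing discounted convention, and the fixed prior policy rho (relative to which control information would be measured as a KL divergence) are all assumptions introduced here.

    # Hedged sketch only: soft value iteration for a free-energy-style criterion.
    # Not the authors' info-rl code; the reward-maximizing, discounted convention
    # and all names (soft_value_iteration, rho, beta) are illustrative assumptions.
    import numpy as np

    def soft_value_iteration(P, R, rho, beta=1.0, gamma=0.95, iters=1000, tol=1e-8):
        """Free-energy backup on a finite MDP.

        P    : transition probabilities, shape (S, A, S)
        R    : expected rewards, shape (S, A)
        rho  : prior (default) policy, shape (S, A); the control information of a
               policy pi is taken to be its KL divergence from rho
        beta : tradeoff parameter; large beta approaches the greedy Bellman
               optimum, small beta keeps the policy close to the prior rho
        """
        S, A = R.shape
        F = np.zeros(S)                        # free-energy value of each state
        for _ in range(iters):
            Q = R + gamma * (P @ F)            # (S, A) backed-up state-action values
            # weighted log-sum-exp over actions (numerically stabilized)
            M = np.max(beta * Q, axis=1, keepdims=True)
            F_new = (M[:, 0] + np.log(np.sum(rho * np.exp(beta * Q - M), axis=1))) / beta
            if np.max(np.abs(F_new - F)) < tol:
                F = F_new
                break
            F = F_new
        # soft policy: pi(a|s) proportional to rho(a|s) * exp(beta * Q(s, a))
        pi = rho * np.exp(beta * (Q - F[:, None]))
        pi /= pi.sum(axis=1, keepdims=True)
        return F, pi

    # tiny usage example on a random 3-state, 2-action MDP
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(3), size=(3, 2))     # (S, A, S)
    R = rng.standard_normal((3, 2))
    rho = np.full((3, 2), 0.5)                     # uniform prior policy
    F, pi = soft_value_iteration(P, R, rho, beta=2.0)

Sweeping beta in this sketch traces a value-information tradeoff curve in the spirit of the abstract: the expected reward of pi rises with beta while its KL divergence from rho grows, and in the limit of large beta the policy becomes deterministic and greedy.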

Keywords

Optimal Policy · Markov Decision Process · Action Selection · Control Information · Tradeoff Curve

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Jonathan Rubin (1)
  • Ohad Shamir (2)
  • Naftali Tishby (1)
  1. Hebrew University, Jerusalem, Israel
  2. Microsoft Research, Cambridge
