Trading Value and Information in MDPs
Interactions between an organism and its environment are commonly treated in the framework of Markov Decision Processes (MDPs). While the standard MDP framework aims solely at maximizing expected future rewards (value), the circular flow of information between the agent and its environment is generally ignored. In particular, the information gained from the environment through perception and the information involved in the process of action selection (i.e., control) are not treated in the standard MDP setting. In this paper, we focus on the control information and show how it can be combined with the reward measure in a unified way. Both of these measures satisfy the familiar Bellman recursive equations, and their linear combination (the free energy) provides a new optimization criterion. The tradeoff between value and information, explored using our INFO-RL algorithm, provides a principled justification for stochastic (soft) policies. Using computational learning theory, we show that these optimal policies are also robust to uncertainties in settings with only partial knowledge of the MDP parameters.
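For concreteness, the free-energy recursion described above can be sketched as a soft-max variant of value iteration. The sketch below is one illustrative reading of the abstract, not the paper's verbatim INFO-RL algorithm: it assumes a fixed prior policy rho(a|s) against which control information is measured as a Kullback-Leibler cost, a tradeoff parameter beta weighting value against information, and a discount factor gamma added here purely so the backup is a contraction; all names and signatures are hypothetical.

```python
import numpy as np

def info_rl_value_iteration(P, R, rho, beta, gamma=0.95, n_iter=500):
    """Illustrative free-energy value iteration (assumed formulation).

    P:     transition tensor, P[s, a, s2] = p(s2 | s, a)
    R:     reward matrix, R[s, a]
    rho:   prior (default) policy, rho[s, a], rows sum to 1
    beta:  value/information tradeoff parameter (larger = more value-driven)
    gamma: discount factor, assumed here for convergence of the recursion
    """
    n_states, n_actions = R.shape
    F = np.zeros(n_states)  # free-energy value per state
    for _ in range(n_iter):
        # Q_F(s, a) = r(s, a) + gamma * E_{s'}[F(s')]
        Q = R + gamma * (P @ F)  # shape (n_states, n_actions)
        # soft-max Bellman backup, computed with a stabilized log-sum-exp:
        # F(s) = (1/beta) * log( sum_a rho(a|s) * exp(beta * Q(s, a)) )
        m = (beta * Q).max(axis=1, keepdims=True)
        F = (m[:, 0] + np.log((rho * np.exp(beta * Q - m)).sum(axis=1))) / beta
    # stochastic (soft) optimal policy: pi(a|s) proportional to
    # rho(a|s) * exp(beta * Q_F(s, a))
    pi = rho * np.exp(beta * Q - beta * F[:, None])
    pi /= pi.sum(axis=1, keepdims=True)
    return F, pi
```

In this reading, beta -> infinity recovers the deterministic reward-maximizing policy, while beta -> 0 drives the policy toward the prior rho; intermediate values trace out the value-information tradeoff curve with stochastic (soft) policies, as described above.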
Keywords: Optimal Policy, Markov Decision Process, Action Selection, Control Information, Tradeoff Curve