Optimality Issues of Universal Greedy Agents with Static Priors
Designing a universal artificially intelligent agent is an old dream of AI researchers. Solomonoff Induction was a major step in this direction: it gives a universal solution to the general problem of sequence prediction by defining a universal prior distribution. Hutter's AIXI extends Solomonoff Induction to the reinforcement learning framework, in which most, if not all, AI problems can be formulated. New difficulties arise, however, because the agent is now active, whereas it is merely passive in the sequence prediction setting; this makes proving AIXI's optimality difficult. In fact, we prove that the current definition of AIXI can sometimes be suboptimal in a certain sense, and we generalize this result to infinite-horizon agents and to any static prior distribution.
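The two constructions the abstract refers to can be sketched as follows; the notation is a plausible rendering of Hutter's standard presentation, with $U$ a universal (monotone) Turing machine, $\ell(p)$ the length of program $p$, and a finite horizon $m$ assumed for the sketch. Solomonoff's universal prior assigns to a string $x$ the total weight of all programs whose output starts with $x$, and AIXI picks actions by expectimax over this mixture of environments:

```latex
% Solomonoff's universal prior: sum over all programs p whose
% output on U begins with the string x, weighted by 2^{-length}.
M(x) \;:=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% AIXI's action selection at step k (finite horizon m, assumed here):
% expectimax over future percepts o_t r_t, with environments q
% weighted by 2^{-\ell(q)} as in the prior above.
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
  \bigl[ r_k + \cdots + r_m \bigr]
  \sum_{q \,:\, U(q, a_{1:m}) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The "static prior" of the title is the fixed weighting $2^{-\ell(q)}$: it does not depend on the agent's own actions, which is what the paper's suboptimality argument exploits.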
Keywords: AIXI, Universal Artificial Intelligence, Solomonoff Induction, Reinforcement Learning
- 2. Hutter, M.: A Theory of Universal Artificial Intelligence Based on Algorithmic Complexity. arXiv (April 2000), http://arxiv.org/abs/cs/0004001
- 6. Kraft, L.G.: A Device for Quantizing, Grouping, and Coding Amplitude Modulated Pulses. Ph.D. thesis, MIT, Electrical Engineering Department, Cambridge, MA (1949)
- 8. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach, 2nd edn. Prentice-Hall, Englewood Cliffs (2003)
- 12. Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
- 13. Veness, J., Ng, K.S., Hutter, M., Silver, D.: A Monte Carlo AIXI Approximation. arXiv (September 2009), http://arxiv.org/abs/0909.0801