Defensive Universal Learning with Experts
This paper shows how universal learning can be achieved with expert advice. To this end, we specify an experts algorithm with the following characteristics: (a) it uses only feedback from the actions actually chosen (bandit setup), (b) it can be applied with countably infinite expert classes, and (c) it copes with losses that may grow over time, provided the growth is sufficiently slow. We prove loss bounds against an adaptive adversary. From this, we obtain a master algorithm for “reactive” experts problems, in which the master’s actions may influence the behavior of the adversary. Our algorithm can significantly outperform standard experts algorithms on such problems. Finally, we combine it with a universal expert class. The resulting universal learner performs, in a certain sense, almost as well as any computable strategy, for any online decision problem. We also specify the (worst-case) convergence speed, which is very slow.
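To make the bandit experts setup of point (a) concrete, the following is a minimal sketch of EXP3-style exponential weighting with uniform exploration, in which only the loss of the chosen expert is observed and importance-weighted estimates stand in for the unobserved losses. This illustrates the general bandit experts setting only; the paper's own algorithm is based on a different (follow-the-perturbed-leader) scheme, and the function name and parameters here are illustrative.

```python
import math
import random

def exp3(loss_fn, n_arms, n_rounds, gamma=0.1, seed=0):
    """Sketch of an EXP3-style bandit experts algorithm.

    loss_fn(t, i) returns the loss in [0, 1] of expert i at round t;
    only the chosen expert's loss is ever queried (bandit feedback).
    gamma is the uniform-exploration rate.
    """
    rng = random.Random(seed)
    weights = [1.0] * n_arms
    total = 0.0
    for t in range(n_rounds):
        s = sum(weights)
        # mix exponential weights with uniform exploration
        probs = [(1 - gamma) * w / s + gamma / n_arms for w in weights]
        # sample an arm from probs
        r, acc = rng.random(), 0.0
        i = n_arms - 1  # fallback against floating-point round-off
        for j, p in enumerate(probs):
            acc += p
            if r <= acc:
                i = j
                break
        loss = loss_fn(t, i)  # bandit feedback: only the chosen arm
        total += loss
        # importance-weighted estimate keeps the loss estimator unbiased
        est = loss / probs[i]
        weights[i] *= math.exp(-gamma * est / n_arms)
    return total
```

With a stationary loss function whose best expert suffers zero loss, the cumulative loss stays far below that of a uniformly random player, since the weights of the lossy experts decay geometrically.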
Keywords: Neural Information Processing System · Expert Advice · Repeated Game · Bandit Problem · Universal Turing Machine