Defensive Universal Learning with Experts

  • Jan Poland
  • Marcus Hutter
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3734)

Abstract

This paper shows how universal learning can be achieved with expert advice. To this end, we specify an experts algorithm with the following characteristics: (a) it uses only feedback from the actions actually chosen (bandit setup), (b) it can be applied to countably infinite expert classes, and (c) it copes with losses that may grow over time at an appropriately slow rate. We prove loss bounds against an adaptive adversary. From this, we obtain a master algorithm for “reactive” experts problems, i.e. problems in which the master’s actions may influence the behavior of the adversary. Our algorithm can significantly outperform standard experts algorithms on such problems. Finally, we combine it with a universal expert class. The resulting universal learner performs, in a certain sense, almost as well as any computable strategy, for any online decision problem. We also specify the (worst-case) convergence speed, which is very slow.
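The sketch below is a minimal illustration of the three ingredients named in the abstract, not the authors' exact algorithm: a prior over a (here finite slice of a) countable expert class used as a complexity penalty, a follow-the-perturbed-leader choice, bandit feedback restricted to the chosen action, and unbiased loss estimates obtained on uniform exploration rounds. All function and parameter names (`fpl_bandit_sketch`, `gamma`, `eta`, `losses`) are hypothetical.

```python
import numpy as np

def fpl_bandit_sketch(prior_weights, losses, T, gamma=0.1, eta=1.0, seed=0):
    """Illustrative FPL-style bandit experts sketch (NOT the paper's algorithm).

    prior_weights : positive prior weights w_i over a finite slice of the
                    (in the paper: countable) expert class; -log(w_i) acts as
                    a complexity penalty.
    losses        : callable losses(t, i) -> loss of expert i at round t.
    gamma, eta    : hypothetical fixed exploration and learning rates (the
                    paper uses time-dependent rates).
    """
    rng = np.random.default_rng(seed)
    prior = np.asarray(prior_weights, dtype=float)
    n = len(prior)
    est_loss = np.zeros(n)            # cumulative estimated losses
    total_loss = 0.0
    for t in range(1, T + 1):
        # Follow the perturbed leader: minimize estimated loss plus
        # complexity penalty minus an exponential perturbation.
        perturb = rng.exponential(scale=1.0, size=n)
        scores = eta * est_loss - np.log(prior) - perturb
        leader = int(np.argmin(scores))
        # With probability gamma explore uniformly, otherwise follow the leader.
        explore = rng.random() < gamma
        choice = int(rng.integers(n)) if explore else leader
        loss = losses(t, choice)       # bandit feedback: only this loss is seen
        total_loss += loss
        # Unbiased estimate: update only on exploration rounds, reweighting by
        # the probability gamma/n of having sampled this expert there.
        if explore:
            est_loss[choice] += loss * n / gamma
    return total_loss
```

As a design note, estimating losses only on exploration rounds keeps the estimates unbiased without having to compute the (intractable) probability that the perturbed leader equals the chosen expert; this is one standard way to handle bandit feedback in FPL-type schemes.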



Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Jan Poland, Graduate School of Information Science and Technology, Hokkaido University, Japan
  • Marcus Hutter, IDSIA, Manno, Switzerland
