Abstract
When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of \(\sqrt{\text{complexity}/\text{current loss}}\) renders the analysis of Weighted Majority derivatives quite complicated. In particular, no results have been proven so far for arbitrary weights. The analysis of the alternative “Follow the Perturbed Leader” (FPL) algorithm from [KV03] (based on Hannan’s algorithm) is easier. We derive loss bounds for an adaptive learning rate, both for finite expert classes with uniform weights and for countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best results known so far, while for the latter our results are new.
This work was supported by SNF grant 2100-67712.02.
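To make the setting concrete, below is a minimal Python sketch of the FPL idea with an adaptive learning rate. It is only illustrative: the exponential perturbations follow the FPL scheme of [KV03], but the exact learning-rate schedule, the floors used to keep it finite, and the function name fpl_adaptive are assumptions of this sketch, not the precise algorithm or tuning analyzed in the paper.

import math
import random

def fpl_adaptive(loss_matrix, weights=None, seed=0):
    """Sketch of Follow the Perturbed Leader with an adaptive learning rate.

    loss_matrix[t][i]: loss of expert i at step t, assumed to lie in [0, 1].
    weights[i]: prior weight of expert i; its complexity is k_i = -ln(weights[i]).
    Returns the learner's cumulative loss. The tuning is purely illustrative.
    """
    rng = random.Random(seed)
    n = len(loss_matrix[0])
    if weights is None:
        weights = [1.0 / n] * n              # uniform prior: complexity ln(n)
    k = [-math.log(w) for w in weights]      # expert complexities
    cum_loss = [0.0] * n                     # past cumulative losses of experts
    learner_loss = 0.0

    for losses in loss_matrix:
        # Adaptive learning rate ~ sqrt(complexity / current loss); the floors
        # of 1.0 merely keep eta finite and positive in this toy version.
        eta = math.sqrt(max(max(k), 1.0) / max(min(cum_loss), 1.0))
        # Perturb each expert's score with an independent Exp(1) variable.
        q = [rng.expovariate(1.0) for _ in range(n)]
        score = [cum_loss[i] + (k[i] - q[i]) / eta for i in range(n)]
        leader = min(range(n), key=lambda i: score[i])
        learner_loss += losses[leader]       # follow the perturbed leader
        for i in range(n):                   # losses are revealed; update
            cum_loss[i] += losses[i]
    return learner_loss

For example, fpl_adaptive([[0.0, 1.0], [1.0, 0.0]]) runs two rounds with two uniformly weighted experts; with arbitrary weights, experts with smaller prior weight carry a larger complexity penalty k_i in the perturbed score.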
References
Auer, P., Cesa-Bianchi, N., Gentile, C.: Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences 64(1), 48–75 (2002)
Auer, P., Gentile, C.: Adaptive and self-confident on-line learning algorithms. In: Proceedings of the 13th Conference on Computational Learning Theory, pp. 107–117. Morgan Kaufmann, San Francisco (2000)
Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D.P., Schapire, R.E., Warmuth, M.K.: How to use expert advice. Journal of the ACM 44(3), 427–485 (1997)
Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)
Gentile, C.: The robustness of the p-norm algorithm. Machine Learning 53(3), 265–299 (2003)
Hannan, J.: Approximation to Bayes risk in repeated plays. In: Dresher, M., Tucker, A.W., Wolfe, P. (eds.) Contributions to the Theory of Games, vol. 3, pp. 97–139. Princeton University Press, Princeton (1957)
Hutter, M.: Convergence and loss bounds for Bayesian sequence prediction. IEEE Transactions on Information Theory 49(8), 2061–2067 (2003)
Hutter, M.: Optimality of universal Bayesian prediction for general loss and alphabet. Journal of Machine Learning Research 4, 971–1000 (2003)
Kalai, A., Vempala, S.: Efficient algorithms for online decision problems. In: Proceedings of the 16th Annual Conference on Learning Theory (COLT 2003), Berlin. LNCS (LNAI), pp. 506–521. Springer, Heidelberg (2003)
Littlestone, N., Warmuth, M.K.: The weighted majority algorithm. In: 30th Annual Symposium on Foundations of Computer Science, Research Triangle Park, North Carolina, pp. 256–261. IEEE, Los Alamitos (1989)
Littlestone, N., Warmuth, M.K.: The weighted majority algorithm. Information and Computation 108(2), 212–261 (1994)
McDiarmid, C.: On the method of bounded differences. In: Surveys in Combinatorics 1989. London Mathematical Society Lecture Note Series, vol. 141, pp. 148–188. Cambridge University Press, Cambridge (1989)
Vovk, V.G.: Aggregating strategies. In: Proceedings of the Third Annual Workshop on Computational Learning Theory, pp. 371–383. ACM Press, New York (1990)
Vovk, V.G.: A game of prediction with expert advice. In: Proceedings of the 8th Annual Conference on Computational Learning Theory, pp. 51–60. ACM Press, New York (1995)
Yaroshinsky, R., El-Yaniv, R., Seiden, S.: How to better use expert advice. Machine Learning (2004)
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Hutter, M., Poland, J. (2004). Prediction with Expert Advice by Following the Perturbed Leader for General Weights. In: Ben-David, S., Case, J., Maruoka, A. (eds) Algorithmic Learning Theory. ALT 2004. Lecture Notes in Computer Science, vol. 3244. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30215-5_22
DOI: https://doi.org/10.1007/978-3-540-30215-5_22
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-23356-5
Online ISBN: 978-3-540-30215-5