Abstract
Most traditional online learning algorithms are based on variants of mirror descent or follow-the-leader. In this chapter, we present an online algorithm based on a completely different approach, tailored for transductive settings, which combines “random playout” and randomized rounding of loss subgradients. As an application of our approach, we present the first computationally efficient online algorithm for collaborative filtering with trace-norm constrained matrices. As a second application, we solve an open question linking batch learning and transductive online learning.
Notes
- 1.
Specifically, we divide the rounds into r consecutive epochs, such that epoch i consists of \(2^{i}\) rounds, and use Theorem 16.3 with confidence \(\delta' = \delta/2^{i+1}\) and a union bound to get a regret bound of \(\mathcal{O}(\mathcal{R}_{2^{i}}(\mathcal{F}) + \sqrt{\left(i + \log(1/\delta)\right) 2^{i}})\) over any epoch i. In the typical case where \(\mathcal{R}_{T}(\mathcal{F}) = \mathcal{O}(\sqrt{T})\), summing over i = 1, …, r, where \(r = \log_{2}(T + 1) - 1\), yields a total regret bound of order \(\mathcal{O}(\sqrt{\log(T/\delta)\,T})\). Up to log factors, this is the same bound as if T were known in advance.
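The epoch construction above can be sketched in code. This is an illustrative sketch only (the function name `epoch_schedule` is ours, not from the chapter): it computes, for a horizon T and confidence δ, the per-epoch round counts \(2^{i}\) and confidences \(\delta/2^{i+1}\), whose sum stays below δ so the union bound goes through.

```python
import math

def epoch_schedule(T, delta):
    """Sketch of the doubling-trick schedule from Note 1:
    epoch i consists of 2**i rounds and is run with confidence
    delta / 2**(i+1); the confidences sum to less than delta,
    so a union bound over epochs yields overall confidence delta."""
    r = int(math.log2(T + 1)) - 1  # number of epochs
    schedule = []
    for i in range(1, r + 1):
        rounds_i = 2 ** i          # length of epoch i
        delta_i = delta / 2 ** (i + 1)  # confidence parameter for epoch i
        schedule.append((rounds_i, delta_i))
    return schedule

# For T = 15 there are r = 3 epochs, of lengths 2, 4, and 8.
sched = epoch_schedule(15, 0.1)
```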
- 2.
Formally, at each step t: (1) the adversary chooses and reveals the next element \(\pi_{t}\) of the permutation; (2) the forecaster chooses \(p_{t} \in \mathcal{P}\) and, simultaneously, the adversary chooses \(y_{t} \in \mathcal{Y}\).
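One round of this protocol can be sketched as follows. The function and callable names here are our own illustrative choices, not from the chapter; the point is the order of moves: the permutation element is revealed first, and then the forecaster's prediction and the adversary's label are chosen simultaneously, neither seeing the other.

```python
def transductive_round(reveal_next, forecaster, adversary):
    """Sketch of one round of the protocol in Note 2:
    (1) the adversary reveals the next permutation element pi_t;
    (2) the forecaster picks p_t and, simultaneously, the adversary
        picks y_t (neither choice is shown to the other player first)."""
    pi_t = reveal_next()    # step (1): next element of the permutation
    p_t = forecaster(pi_t)  # step (2a): forecaster's prediction for pi_t
    y_t = adversary(pi_t)   # step (2b): adversary's label, chosen simultaneously
    return pi_t, p_t, y_t

# Toy instantiation: four instances revealed in a fixed order, a
# forecaster that always predicts 0.5, and an adversary labeling by parity.
order = iter([2, 0, 3, 1])
pi, p, y = transductive_round(lambda: next(order),
                              lambda x: 0.5,
                              lambda x: x % 2)
```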
Acknowledgements
The first author acknowledges partial support by the PASCAL2 NoE under EC grant FP7-216886.
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Cesa-Bianchi, N., Shamir, O. (2013). Efficient Transductive Online Learning via Randomized Rounding. In: Schölkopf, B., Luo, Z., Vovk, V. (eds) Empirical Inference. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41136-6_16
DOI: https://doi.org/10.1007/978-3-642-41136-6_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-41135-9
Online ISBN: 978-3-642-41136-6