Online Multitask Learning
We study the problem of online learning of multiple tasks in parallel. On each online round, the algorithm receives an instance and makes a prediction for each of the parallel tasks. We consider the case where these tasks all contribute toward a common goal, and we capture the relationship between them with a single global loss function that evaluates the quality of the multiple predictions made on each round. Specifically, each individual prediction incurs its own individual loss, and these loss values are then combined using the global loss function. We present several families of online algorithms which can use any absolute norm as a global loss function, and we prove worst-case relative loss bounds for all of them.
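The construction described above can be illustrated with a short sketch: each of $k$ parallel tasks incurs an individual (hinge) loss on the current round, and the vector of individual losses is then combined with an absolute norm, here an $L_p$ norm as one concrete example of an absolute norm. The linear predictors, the shared instance, and all numeric values below are hypothetical and chosen only for illustration, not taken from the paper.

```python
import numpy as np

def hinge_losses(W, x, ys):
    """Per-task hinge losses for k linear predictors (rows of W)
    evaluated on a shared instance x with labels ys in {-1, +1}."""
    margins = ys * (W @ x)
    return np.maximum(0.0, 1.0 - margins)

def global_loss(losses, p=2):
    """Combine the individual losses with an absolute norm.
    An L_p norm is one example; any absolute norm could play this role."""
    return np.linalg.norm(losses, ord=p)

# Toy round: 3 tasks, a shared 4-dimensional instance (hypothetical data).
rng = np.random.default_rng(0)
W = np.zeros((3, 4))            # all predictors start at zero
x = rng.normal(size=4)
ys = np.array([1.0, -1.0, 1.0])

losses = hinge_losses(W, x, ys)  # each task suffers hinge loss 1.0 here
print(global_loss(losses, p=2))  # L2 combination of the three losses
```

With the zero predictors every margin is zero, so each task suffers hinge loss 1 and the global $L_2$ loss is $\sqrt{3}$; choosing $p=1$ would sum the losses, while $p=\infty$ would penalize only the worst task, which is one way the choice of norm encodes how the tasks relate.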
Keywords: Online Learning · Online Algorithm · Multiple Task · Dual Norm · Machine Learning Research