Multi-Task Learning Using Shared and Task Specific Information
Multi-task learning solves multiple related learning problems simultaneously by sharing common structure, improving the generalization performance of each task. We propose a novel approach to multi-task learning that captures task similarity through a shared basis vector set, while the variability across tasks is captured through task-specific basis vector sets. We use a sparse support vector machine (SVM) algorithm to select the basis vector sets for the tasks. The approach results in a sparse model in which prediction is done using very few examples. The effectiveness of our approach is demonstrated through experiments on synthetic and real multi-task datasets.
Keywords: Multi-task learning · Support vector machines · Kernel methods · Sparse models
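The abstract's model structure can be illustrated with a minimal sketch: each task's decision function is a kernel expansion over a shared basis vector set plus an expansion over a task-specific basis set, with both sets assumed to be kept small by a sparse-SVM-style selection. All names, the RBF kernel choice, and the additive combination are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian RBF kernel between two vectors (an assumed kernel choice)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def predict(x, shared_basis, shared_coefs, task_basis, task_coefs, bias=0.0):
    """Decision value for one task: shared part plus task-specific part.

    shared_basis / task_basis: small sets of basis vectors (hypothetically
    selected by a sparse SVM); shared_coefs / task_coefs: learned weights.
    """
    score = bias
    # Contribution of the basis vectors shared across all tasks.
    for z, a in zip(shared_basis, shared_coefs):
        score += a * rbf_kernel(x, z)
    # Contribution of this task's own basis vectors.
    for z, a in zip(task_basis, task_coefs):
        score += a * rbf_kernel(x, z)
    return np.sign(score)
```

Because only a few basis vectors appear in each sum, prediction touches very few examples, which is the sparsity property the abstract emphasizes.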