Machine Learning, Volume 28, Issue 1, pp 41–75
Multitask Learning
Rich Caruana, School of Computer Science, Carnegie Mellon University
Abstract
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. This paper reviews prior work on MTL, presents new evidence that MTL in backprop nets discovers task relatedness without the need for supervisory signals, and presents new results for MTL with k-nearest neighbor and kernel regression. In this paper we demonstrate multitask learning in three domains. We explain how multitask learning works, and show that there are many opportunities for multitask learning in real domains. We present an algorithm and results for multitask learning with case-based methods like k-nearest neighbor and kernel regression, and sketch an algorithm for multitask learning in decision trees. Because multitask learning works, can be applied to many different kinds of domains, and can be used with different learning algorithms, we conjecture there will be many opportunities for its use on real-world problems.
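The core mechanism the abstract describes, learning tasks in parallel through a shared representation so that each task's training signal biases the others, can be sketched as a tiny backprop net. This is a hedged toy illustration, not the paper's exact architecture or data: two related regression tasks (invented here) share one hidden layer, and each task has its own output head, so gradients from both task losses update the shared weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two related tasks, both functions of the same input x.
X = rng.uniform(-1, 1, size=(200, 1))
y1 = np.sin(3 * X)               # task 1
y2 = np.sin(3 * X) + 0.5 * X     # related task 2

# Shared hidden layer (W_h, b_h) plus one output head per task (w1, w2).
H = 16
W_h = rng.normal(0, 0.5, size=(1, H))
b_h = np.zeros(H)
w1 = rng.normal(0, 0.5, size=(H, 1))
w2 = rng.normal(0, 0.5, size=(H, 1))

def forward(X):
    Z = np.tanh(X @ W_h + b_h)   # shared representation
    return Z, Z @ w1, Z @ w2     # task-specific outputs

def joint_loss():
    _, p1, p2 = forward(X)
    return np.mean((p1 - y1) ** 2) + np.mean((p2 - y2) ** 2)

lr = 0.05
loss_before = joint_loss()

for _ in range(500):
    Z, p1, p2 = forward(X)
    n = len(X)
    d1 = 2 * (p1 - y1) / n       # dLoss/dp1
    d2 = 2 * (p2 - y2) / n       # dLoss/dp2
    # Each task head receives only its own gradient...
    g_w1 = Z.T @ d1
    g_w2 = Z.T @ d2
    # ...but the shared layer accumulates gradients from BOTH tasks,
    # which is how one task's training signal biases the other.
    dZ = d1 @ w1.T + d2 @ w2.T
    dA = dZ * (1 - Z ** 2)       # derivative of tanh
    W_h -= lr * (X.T @ dA)
    b_h -= lr * dA.sum(axis=0)
    w1 -= lr * g_w1
    w2 -= lr * g_w2

loss_after = joint_loss()
print(loss_after < loss_before)  # joint loss fell during parallel training
```

Training either head alone would update the shared layer with only one gradient; here the shared weights are shaped by both tasks at once, which is the parallel-transfer effect the abstract contrasts with single-task learning.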
 Title
 Multitask Learning
 Journal

Machine Learning
Volume 28, Issue 1, pp 41–75
 Cover Date
 1997-07
 DOI
 10.1023/A:1007379606734
 Print ISSN
 0885-6125
 Online ISSN
 1573-0565
 Publisher
 Kluwer Academic Publishers
 Keywords

 inductive transfer
 parallel transfer
 multitask learning
 backpropagation
 k-nearest neighbor
 kernel regression
 supervised learning
 generalization
 Authors

 Rich Caruana (1)
 Author Affiliations

 1. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, 15213