Exploiting Task Relatedness for Multiple Task Learning

  • Shai Ben-David
  • Reba Schuller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2777)


The approach of learning multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example-generating distributions that underlie these tasks.

We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide scope of issues in which one may apply a multiple task learning approach. Our notion of task similarity is relevant to a variety of real life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios. We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach.
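To make the distribution-based notion of relatedness concrete, the following is a minimal illustrative sketch (not the paper's construction). It assumes tasks whose example-generating distributions are translated copies of a common base distribution, so the family of relating transformations is the set of shifts x → x + c; a joint learner that knows the transformations can undo them and pool all tasks' samples to estimate one shared threshold classifier, which is how sharing data across related tasks can reduce the per-task sample size.

```python
import random

random.seed(0)

# Illustrative assumption: each task's distribution is a shifted copy of a
# common base distribution U(0, 1), and the shared target concept is the
# threshold classifier "label 1 iff x > 0.5" (in base coordinates).

def make_task(shift, n):
    """Sample n labelled points from the task obtained by shifting the
    base distribution by `shift`."""
    sample = []
    for _ in range(n):
        x = random.random() + shift
        sample.append((x, 1 if x - shift > 0.5 else 0))
    return sample

def fit_threshold(sample):
    """Plain single-task ERM over threshold classifiers: choose the
    candidate threshold (a sample point) with fewest training errors."""
    best_t, best_err = 0.0, float("inf")
    for t, _ in sample:
        err = sum((x > t) != (y == 1) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def fit_multitask(samples, shifts):
    """Joint learner exploiting known relatedness: invert each task's
    transformation and pool the data, so every task's examples help
    estimate the single shared threshold."""
    pooled = [(x - s, y)
              for sample, s in zip(samples, shifts)
              for x, y in sample]
    return fit_threshold(pooled)

# Three related tasks, each with only a few examples of its own.
shifts = [0.0, 2.0, -1.0]
samples = [make_task(s, n=20) for s in shifts]
shared_t = fit_multitask(samples, shifts)
print(round(shared_t, 2))  # pooled estimate of the common threshold
```

The pooled estimate is typically much closer to the true threshold 0.5 than any single 20-example task could achieve on its own, which is the intuition behind the smaller per-task sample sizes in the paper's bounds.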



Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Shai Ben-David (1, 3)
  • Reba Schuller (2)
  1. School of Electrical and Computer Engineering, Cornell University, Ithaca, USA
  2. Department of Mathematics, Cornell University, Ithaca, USA
  3. Department of Computer Science, Technion, Haifa, Israel
