Exploiting Task Relatedness for Multiple Task Learning

  • Shai Ben-David
  • Reba Schuller
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2777)

Abstract

The approach of learning multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point for previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example-generating distributions that underlie these tasks.

We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide scope of settings in which a multiple task learning approach may be applied. Our notion of task similarity is relevant to a variety of real-life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios. We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach.
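
The abstract does not spell out the formal definition, so the following is only a hedged sketch of one natural way relatedness via example-generating distributions can be formalized; the transformation family \(\mathcal{F}\), the instance space \(X\), and all notation below are our assumptions, not taken from the text:

    % A hypothetical formalization (not quoted from the paper): fix a family
    % \mathcal{F} of transformations of the instance space X, and call two
    % tasks related if their example-generating distributions over labeled
    % examples agree up to some transformation in \mathcal{F}.
    \[
      P_1 \sim_{\mathcal{F}} P_2
      \iff
      \exists f \in \mathcal{F}\;\;
      \forall\, T \subseteq X \times \{0,1\}:\quad
      P_1(T) = P_2\bigl(\{(f(x),\,y) : (x,y) \in T\}\bigr)
    \]

On such a reading, jointly learning several related tasks amounts to learning one underlying task up to the uncertainty introduced by \(\mathcal{F}\), which is the kind of shared structure that can make per-task sample sizes smaller than in the single-task setting.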


Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Shai Ben-David (1, 3)
  • Reba Schuller (2)
  1. School of Electrical and Computer Engineering, Cornell University, Ithaca, USA
  2. Department of Mathematics, Cornell University, Ithaca, USA
  3. Department of Computer Science, Technion, Haifa, Israel
