Machine Learning

Volume 90, Issue 2, pp. 161–189

A theory of transfer learning with applications to active learning

Article

DOI: 10.1007/s10994-012-5310-y

Cite this article as:
Yang, L., Hanneke, S. & Carbonell, J. Mach Learn (2013) 90: 161. doi:10.1007/s10994-012-5310-y

Abstract

We explore a transfer learning setting in which a finite sequence of target concepts is sampled independently from an unknown distribution belonging to a known family. We study the total number of labeled examples required to learn all targets to an arbitrary specified expected accuracy, focusing on the asymptotics in the number of tasks and in the desired accuracy. Our primary interest is in formally understanding the fundamental benefits of transfer learning, compared to learning each target independently of the others. Our approach to the transfer problem is general, in the sense that it can be used with a variety of learning protocols. As a particularly interesting application, we study in detail the benefits of transfer for self-verifying active learning; in this setting, we find that the number of labeled examples required for learning with transfer is often significantly smaller than that required for learning each target independently.
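As a minimal formal sketch of the setting the abstract describes (the symbols below are ours, introduced for illustration, not notation taken from the paper): let $\Pi$ be a known family of priors over a concept space $\mathbb{C}$, and let $\pi^\star \in \Pi$ be unknown. Target concepts $\theta_1, \theta_2, \ldots, \theta_T$ are sampled i.i.d. from $\pi^\star$, and $S_T(\varepsilon)$ denotes the total number of labeled examples the learner uses so that every task $t \le T$ is learned to expected error at most $\varepsilon$. The benefit of transfer can then be expressed by comparing the asymptotic per-task cost

\[
  \limsup_{T \to \infty} \frac{\mathbb{E}\!\left[S_T(\varepsilon)\right]}{T}
\]

against the sample complexity of learning a single target to expected error $\varepsilon$ with no knowledge of $\pi^\star$; transfer is beneficial whenever the former is strictly smaller.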

Keywords

Transfer learning · Multi-task learning · Active learning · Statistical learning theory · Bayesian learning · Sample complexity

Copyright information

© The Author(s) 2012

Authors and Affiliations

  1. Machine Learning Department, Carnegie Mellon University, Pittsburgh, USA
  2. Department of Statistics, Carnegie Mellon University, Pittsburgh, USA
  3. Language Technologies Institute, Carnegie Mellon University, Pittsburgh, USA
