Computer Vision

Living Edition

Transfer Learning

  • Ting-Wu Chin
  • Cha Zhang
Living reference work entry
DOI: https://doi.org/10.1007/978-3-030-03243-2_837-1

Definition

Transfer learning is a methodology in machine learning that exploits the representation and/or features previously learned on some other task to better learn a target task. Generally, transfer learning makes learning the target task faster, and when the target task lacks training data, it improves performance. Formally, following Pan and Yang [1], a domain \(\mathcal {D}=(\mathcal {X},P(X))\) consists of a feature space \(\mathcal {X}\) and a marginal probability distribution \(P(X)\), where \(X=\{x_1,\ldots ,x_n\}\in \mathcal {X}\); transfer learning aims to improve the learning of the target predictive function using knowledge from a different source domain and task.
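The feature-reuse flavor of transfer learning described above can be sketched in a few lines: freeze a representation learned elsewhere and train only a new task head on the (small) target dataset. The sketch below is illustrative only, not from this entry: a fixed random linear-plus-ReLU map stands in for a real pretrained backbone, the target data are toy samples, and names such as `extract_features` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a feature extractor pretrained on a source task.
# In practice this would be, e.g., a CNN backbone; here a fixed
# (frozen) random linear map followed by ReLU plays that role.
W_pre = rng.normal(size=(8, 16))

def extract_features(x):
    """Frozen 'pretrained' representation: x -> ReLU(x @ W_pre)."""
    return np.maximum(x @ W_pre, 0.0)

# Small labeled target-task dataset (toy data for illustration).
X = rng.normal(size=(64, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the new task head (logistic regression) is trained; the
# representation stays frozen -- the essence of feature transfer.
feats = extract_features(X)
w = np.zeros(feats.shape[1])
b = 0.0
lr = 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    w -= lr * (feats.T @ (p - y)) / len(y)      # gradient step on w
    b -= lr * (p - y).mean()                    # gradient step on b

# Training accuracy of the transferred model on the target task.
acc = ((feats @ w + b > 0) == (y > 0.5)).mean()
```

Fine-tuning, by contrast, would also update `W_pre` with a small learning rate instead of keeping it frozen.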


References

  1. Pan SJ, Yang Q (2010) A survey on transfer learning. IEEE Trans Knowl Data Eng 22(10):1345–1359
  2. Dai W, Yang Q, Xue G, Yu Y (2007) Boosting for transfer learning. In: ICML
  3. Yosinski J, Clune J, Bengio Y, Lipson H (2014) How transferable are features in deep neural networks? In: NIPS
  4. Ge W, Yu Y (2017) Borrowing treasures from the wealthy: deep transfer learning through selective joint fine-tuning. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), pp 10–19
  5. Zadrozny B (2004) Learning and evaluating classifiers under sample selection bias. In: ICML
  6. Huang J, Smola AJ, Gretton A, Borgwardt KM, Schölkopf B (2006) Correcting sample selection bias by unlabeled data. In: NIPS
  7. Pan SJ, Kwok JT, Yang Q (2008) Transfer learning via dimensionality reduction. In: AAAI
  8. Pan SJ, Tsang IW, Kwok JT, Yang Q (2011) Domain adaptation via transfer component analysis. IEEE Trans Neural Netw 22(2):199–210
  9. Long M, Wang J (2015) Learning transferable features with deep adaptation networks. In: ICML
  10. Sun B, Feng J, Saenko K (2016) Return of frustratingly easy domain adaptation. In: AAAI
  11. Sankaranarayanan S, Balaji Y, Castillo CD, Chellappa R (2018) Generate to adapt: aligning domains using generative adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 8503–8512
  12. Blitzer J, Dredze M, Pereira F (2007) Biographies, Bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. In: Proceedings of the 45th annual meeting of the Association of Computational Linguistics, pp 440–447
  13. Zheng VW, Xiang EW, Yang Q, Shen D (2008) Transferring localization models over time. In: AAAI, pp 1421–1426
  14. Mayer N, Ilg E, Hausser P, Fischer P, Cremers D, Dosovitskiy A, Brox T (2016) A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4040–4048
  15. Kornblith S, Shlens J, Le QV (2019) Do better ImageNet models transfer better? In: 2019 IEEE conference on computer vision and pattern recognition (CVPR)
  16. Peng X, Usman B, Kaushik N, Wang D, Hoffman J, Saenko K (2018) VisDA: a synthetic-to-real benchmark for visual domain adaptation. In: 2018 IEEE/CVF conference on computer vision and pattern recognition workshops (CVPRW)
  17. Panareda Busto P, Gall J (2017) Open set domain adaptation. In: Proceedings of the IEEE international conference on computer vision, pp 754–763

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, USA
  2. Microsoft Cloud & AI, Redmond, USA

Section editors and affiliations

  • Cha Zhang, Redmond, USA
  • Lei Zhang, Microsoft, WA, USA