
Visual Domain Adaptation in the Deep Learning Era

  • Book
  • © 2022

Overview

Part of the book series: Synthesis Lectures on Computer Vision (SLCV)


Table of contents (8 chapters)

About this book

Solving problems with deep neural networks typically relies on massive amounts of labeled training data to achieve high performance. While huge volumes of unlabeled data are often generated and readily available, the cost of acquiring data labels remains high. Transfer learning (TL), and in particular domain adaptation (DA), has emerged as an effective solution to overcome the burden of annotation by exploiting the unlabeled data available from the target domain together with labeled data or pre-trained models from similar, yet different, source domains. The aim of this book is to provide an overview of such DA/TL methods applied to computer vision, a field whose popularity has increased significantly in the last few years. We set the stage by revisiting the theoretical background and some of the historical shallow methods before discussing and comparing different domain adaptation strategies that exploit deep architectures for visual recognition. We introduce the space of self-training-based methods that draw inspiration from the related fields of deep semi-supervised and self-supervised learning to solve deep domain adaptation. Going beyond the classic domain adaptation problem, we then explore the rich space of problem settings that arise when applying domain adaptation in practice, such as partial or open-set DA, where source and target data categories do not fully overlap, continuous DA, where the target data comes as a stream, and so on. We next consider the least restrictive setting of domain generalization (DG), an extreme case where neither labeled nor unlabeled target data are available during training. Finally, we close by considering the emerging area of learning-to-learn and how it can be applied to further improve existing approaches to cross-domain learning problems such as DA and DG.
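
To make the classic unsupervised DA setting concrete (labeled source data plus unlabeled target data from a shifted domain), the sketch below shows a simple self-training baseline with pseudo-labeling, in the spirit of the self-training-based methods the book surveys. It is a minimal, illustrative example on toy tensors; the model, confidence threshold, and data are assumptions made for the sake of the sketch, not an implementation of any specific method from the book.

```python
# Minimal, illustrative sketch of unsupervised domain adaptation via
# self-training (pseudo-labeling). All data, model, and hyperparameters
# are toy placeholders chosen for this example.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for labeled source data and unlabeled target data.
x_src = torch.randn(256, 32)            # source features
y_src = torch.randint(0, 4, (256,))     # source labels (4 classes)
x_tgt = torch.randn(256, 32) + 0.5      # target features, shifted domain

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Step 1: supervised training on the labeled source domain.
for _ in range(200):
    opt.zero_grad()
    ce(model(x_src), y_src).backward()
    opt.step()

# Step 2: self-training on the target domain. Confident predictions
# become pseudo-labels used to fine-tune the model on target data.
for _ in range(50):
    with torch.no_grad():
        probs = model(x_tgt).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > 0.9               # keep only confident predictions
    if keep.sum() == 0:
        break
    opt.zero_grad()
    # Joint loss: stay faithful to source labels, adapt to target pseudo-labels.
    loss = ce(model(x_src), y_src) + ce(model(x_tgt[keep]), pseudo[keep])
    loss.backward()
    opt.step()
```

In practice, deep DA methods replace this toy classifier with a deep backbone and typically combine such pseudo-label objectives with feature-alignment or self-supervised losses, as discussed in the corresponding chapters.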

Authors and Affiliations

  • NAVER LABS Europe, France

    Gabriela Csurka

  • University of Edinburgh & Samsung AI Research Centre, United Kingdom

    Timothy M. Hospedales

  • EPFL, Switzerland

    Mathieu Salzmann

  • Politecnico di Torino & Italian Institute of Technology, Italy

    Tatiana Tommasi

About the authors

Gabriela Csurka is a Principal Scientist at NAVER LABS Europe, France. Her main research interests are in computer vision for image understanding, 3D reconstruction, and visual localization, as well as domain adaptation and transfer learning. She has contributed to around 100 scientific publications, several on the topic of DA. She has given several invited talks, organized a tutorial on domain adaptation at ECCV'20, and in 2017 edited the Springer book Domain Adaptation for Computer Vision Applications.

Timothy M. Hospedales is a Professor at the University of Edinburgh, Principal Researcher at Samsung AI Research Centre, Cambridge, and Alan Turing Institute Fellow. His research focuses on lifelong machine learning, broadly defined to include multi-domain/multi-task learning, domain adaptation, transfer learning, and meta-learning, with applications including computer vision, language, reinforcement learning for control, and finance. He has co-authored numerous papers on domain adaptation, domain generalization, and transfer learning in major venues including CVPR, ICCV, ECCV, ICML, ICLR, NeurIPS, and AAAI. He teaches computer vision at the University of Edinburgh and has given invited talks and tutorials on these topics at international venues, renowned universities, and research institutes.

Mathieu Salzmann is a Senior Researcher at EPFL and, since May 2020, a part-time Artificial Intelligence Engineer at ClearSpace. His research focuses on developing machine learning algorithms for visual scene understanding, including object recognition, detection, semantic segmentation, 6D pose estimation, and 3D reconstruction. He has published articles on the topic of domain adaptation at major venues, including CVPR, ICCV, ICLR, AAAI, TPAMI, and JMLR, and has been invited to present his domain adaptation work at various venues and internationally renowned universities.

Tatiana Tommasi is an Associate Professor at Politecnico di Torino, Italy, and an affiliated researcher at the Italian Institute of Technology. She pioneered the area of transfer learning for computer vision and has extensive experience in domain adaptation, generalization, and multimodal learning, with applications in robotics and medical imaging. Tatiana received the best paper award at the first edition of the Task-CV workshop at ECCV'14 and has since led the organization of the subsequent workshop editions. She also organized workshops on similar topics at NIPS'13 and '14 and taught tutorials at ECCV'14 and '20.
