Domain consensual contrastive learning for few-shot universal domain adaptation

Abstract

Traditional unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully labeled source domain to an unlabeled target domain with the same label set. The strong assumptions of full annotations on the source domain and a closed label set shared by the two domains might not hold in real-world applications. In this paper, we investigate a practical but challenging domain adaptation scenario, termed few-shot universal domain adaptation (FUniDA), where only a few labeled samples are available in the source domain and the label sets of the source and target domains differ. Existing few-shot UDA (FUDA) methods and universal domain adaptation (UniDA) methods cannot address this novel setting well: FUDA methods tend to misalign the unknown samples of the target domain with the private samples of the source domain, and UniDA methods perform poorly with only a small number of labeled source samples. To address these challenges, we propose a novel domain consensual contrastive learning (DCCL) framework for FUniDA. Specifically, DCCL comprises two major components: 1) in-domain consensual contrastive learning, which learns discriminative features from the few labeled source data, and 2) cluster matching with cross-domain consensual contrastive learning, which aligns the features of common samples across the source and target domains while leaving the private samples unaligned. We conduct extensive experiments on five standard benchmark datasets: Office-31, Office-Home, VisDA-17, DomainNet, and ImageCLEF-DA. The results demonstrate that the proposed DCCL achieves state-of-the-art performance with remarkable gains.
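
To make the contrastive machinery concrete, below is a minimal PyTorch sketch of an InfoNCE-style contrastive loss, the general mechanism (cf. SimCLR [3] and MoCo [12]) that both consensual contrastive components build on. This is an illustration under assumptions, not the paper's exact DCCL objective; the function name, the temperature value, and the use of in-batch negatives are choices made for the example.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(anchors: torch.Tensor,
                      positives: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
        # Illustrative InfoNCE loss, not the paper's exact objective.
        # anchors, positives: (N, D) feature batches; row i of `positives`
        # is the positive pair of row i of `anchors`, and every other row
        # in the batch serves as a negative.
        a = F.normalize(anchors, dim=1)
        p = F.normalize(positives, dim=1)
        logits = a @ p.t() / temperature        # (N, N) scaled cosine similarities
        targets = torch.arange(a.size(0), device=a.device)
        return F.cross_entropy(logits, targets) # positive pairs lie on the diagonal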

Notes

  1. Following [30], we use common samples, private samples, known samples, and unknown samples to refer to the samples that belong to the common classes, private classes, known classes, and unknown classes of the source and target domains.
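
In set terms, this partition can be written as a tiny Python illustration (the class names below are hypothetical, chosen only for the example):

    # Toy illustration of the label-set partition in universal DA.
    source_classes = {"backpack", "bike", "monitor", "mouse"}    # source label set
    target_classes = {"monitor", "mouse", "phone", "projector"}  # target label set

    common_classes = source_classes & target_classes  # shared classes; their samples are "known"
    source_private = source_classes - target_classes  # classes private to the source
    target_private = target_classes - source_classes  # target samples here are "unknown"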

References

  1. Alipour N, Tahmoresnezhad J (2022) Heterogeneous domain adaptation with statistical distribution alignment and progressive pseudo label selection. Appl Intell 52:1–18

  2. Chen J, Wu X, Duan L, Gao S (2020) Domain adversarial reinforcement learning for partial domain adaptation. IEEE Trans Neural Netw Learn Syst 33(2):539–553

  3. Chen T, Kornblith S, Norouzi M, Hinton G (2020) A simple framework for contrastive learning of visual representations. In: International Conference on Machine Learning. pp 1597–1607

  4. Chen Y, Song S, Li S, Wu C (2019) A graph embedding framework for maximum mean discrepancy-based domain adaptation algorithms. IEEE Trans Image Process 29:199–213

  5. Cheng Z, Chen C, Chen Z, Fang K, Jin X (2021) Robust and high-order correlation alignment for unsupervised domain adaptation. Neural Comput Appl 33:6891–6903

  6. Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L (2009) Imagenet: A large-scale hierarchical image database. In: IEEE Conference on Computer Vision and Pattern Recognition. pp 248–255

  7. Ebrahimi M, Chai Y, Zhang HH, Chen H (2022) Heterogeneous domain adaptation with adversarial neural representation learning: Experiments on e-commerce and cybersecurity. IEEE Trans Pattern Anal Mach Intell 45:1862–1875

  8. Fang Z, Lu J, Liu F, Xuan J, Zhang G (2021) Open set domain adaptation: Theoretical bound and algorithm. IEEE Trans Neural Netw Learn Syst 32(10):4309–4322

  9. Feng H, Chen M, Hu J, Shen D, Liu H, Cai D (2021) Complementary pseudo labels for unsupervised domain adaptation on person re-identification. IEEE Trans Image Process 30:2898–2907

  10. Fu B, Cao Z, Long M, Wang J (2020) Learning to detect open classes for universal domain adaptation. In: European Conference on Computer Vision. pp 567–583

  11. He K, Chen X, Xie S, Li Y, Dollár P, Girshick R (2022) Masked autoencoders are scalable vision learners. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 16000–16009

  12. He K, Fan H, Wu Y, Xie S, Girshick R (2020) Momentum contrast for unsupervised visual representation learning. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 9729–9738

  13. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition. pp 770–778

  14. He Q-Q, Siu SWI, Si Y-W (2022) Attentive recurrent adversarial domain adaptation with top-k pseudo-labeling for time series classification. Appl Intell 53:1–20

  15. Huang J, Zhang P, Zhou Z, Fan K (2021) Domain compensatory adversarial networks for partial domain adaptation. Multimed Tools Appl 80:11255–11272

  16. Kouw WM, Loog M (2021) A review of domain adaptation without target labels. IEEE Trans Pattern Anal Mach Intell 43(3):766–785

  17. Kutbi M, Peng K-C, Wu Z (2021) Zero-shot deep domain adaptation with common representation learning. IEEE Trans Pattern Anal Mach Intell 44(7):3909–3924

  18. Li G, Kang G, Zhu Y, Wei Y, Yang Y (2021) Domain consensus clustering for universal domain adaptation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 9757–9766

  19. Li H, Wan R, Wang S, Kot AC (2021) Unsupervised domain adaptation in the wild via disentangling representation learning. Int J Comput Vis 129:267–283

  20. Li S, Liu CH, Lin Q, Wen Q, Su L, Huang G, Ding Z (2020) Deep residual correction network for partial domain adaptation. IEEE Trans Pattern Anal Mach Intell 43(7):2329–2344

  21. Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, et al (2019) Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems. pp 8024–8035

  22. Peng X, Bai Q, Xia X, Huang Z, Saenko K, Wang B (2019) Moment matching for multi-source domain adaptation. In: IEEE International Conference on Computer Vision. pp 1406–1415

  23. Peng X, Usman B, Kaushik N, Wang D, Hoffman J, Saenko K (2018) Visda: A synthetic-to-real benchmark for visual domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp 2021–2026

  24. Qin Z, Yang L, Gao F, Hu Q, Shen C (2022) Uncertainty-aware aggregation for federated open set domain adaptation. IEEE Trans Neural Netw Learn Syst

  25. Rahman MM, Fookes C, Baktashmotlagh M, Sridharan S (2020) Correlation-aware adversarial domain adaptation and generalization. Pattern Recognit 100:107124

  26. Ren C-X, Ge P, Yang P, Yan S (2020) Learning target-domain-specific classifier for partial domain adaptation. IEEE Trans Neural Netw Learn Syst 32(5):1989–2001

  27. Ren Y, Cong Y, Dong J, Sun G (2022) Uni3da: Universal 3d domain adaptation for object recognition. IEEE Trans Circ Syst Video Technol 33:379–392

  28. Saenko K, Kulis B, Fritz M, Darrell T (2010) Adapting visual category models to new domains. In: European Conference on Computer Vision. pp 213–226

  29. Saito K, Kim D, Sclaroff S, Saenko K (2020) Universal domain adaptation through self supervision. In: Advances in Neural Information Processing Systems. pp 16282–16292

  30. Saito K, Saenko K (2021) Ovanet: One-vs-all network for universal domain adaptation. In: IEEE International Conference on Computer Vision. pp 9000–9009

  31. Shermin T, Lu G, Teng SW, Murshed M, Sohel F (2020) Adversarial network with multiple classifiers for open set domain adaptation. IEEE Trans Multimedia 23:2732–2744

  32. Tian Y, Zhu S (2021) Partial domain adaptation on semantic segmentation. IEEE Trans Circ Syst Video Technol 32(6):3798–3809

  33. Van der Maaten L, Hinton G (2008) Visualizing data using t-SNE. J Mach Learn Res 9(11):2579–2605

  34. Venkateswara H, Eusebio J, Chakraborty S, Panchanathan S (2017) Deep hashing network for unsupervised domain adaptation. In: IEEE Conference on Computer Vision and Pattern Recognition. pp 5018–5027

  35. Wang W, Li H, Ding Z, Nie F, Chen J, Dong X, Wang Z (2021) Rethinking maximum mean discrepancy for visual domain adaptation. IEEE Trans Neural Netw Learn Syst 34:264–277

  36. Wang W, Shen Z, Li D, Zhong P, Chen Y (2022) Probability-based graph embedding cross-domain and class discriminative feature learning for domain adaptation. IEEE Trans Image Process 32:72–87

  37. Wynne G, Duncan AB (2022) A kernel two-sample test for functional data. J Mach Learn Res 23(73):1–51

  38. Xu Q, Shi Y, Yuan X, Zhu XX (2023) Universal domain adaptation for remote sensing image scene classification. IEEE Trans Geosci Remote Sens 61:1–15

  39. Xu Y, Cao H, Mao K, Chen Z, Xie L, Yang J (2022) Aligning correlation information for domain adaptation in action recognition. IEEE Trans Neural Netw Learn Syst

  40. Yan H, Li Z, Wang Q, Li P, Xu Y, Zuo W (2019) Weighted and class-specific maximum mean discrepancy for unsupervised domain adaptation. IEEE Trans Multimedia 22(9):2420–2433

  41. Ye Y, Fu S, Chen J (2023) Learning cross-domain representations by vision transformer for unsupervised domain adaptation. Neural Comput Appl 35:1–14

  42. Yin Y, Yang Z, Hu H, Wu X (2022) Universal multi-source domain adaptation for image classification. Pattern Recognit 121:108238

  43. You K, Long M, Cao Z, Wang J, Jordan MI (2019) Universal domain adaptation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 2720–2729

  44. Yue X, Zheng Z, Zhang S, Gao Y, Darrell T, Keutzer K, Vincentelli AS (2021) Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 13834–13844

  45. Zhang S, Chen Z, Wang D, Wang ZJ (2022) Cross-domain few-shot contrastive learning for hyperspectral images classification. IEEE Geosci Remote Sens Lett 19:1–5

  46. Zhang W, Li X, Ma H, Luo Z, Li X (2021) Open-set domain adaptation in machinery fault diagnostics using instance-level weighted adversarial learning. IEEE Trans Ind Inform 17(11):7445–7455

  47. Zhao S, Li B, Xu P, Yue X, Ding G, Keutzer K (2021) Madan: multi-source adversarial domain aggregation network for domain adaptation. Int J Comput Vis 129(8):2399–2424

  48. Zhao S, Yue X, Zhang S, Li B, Zhao H, Wu B, Krishna R, Gonzalez JE, Sangiovanni-Vincentelli AL, Seshia SA et al (2022) A review of single-source deep unsupervised visual domain adaptation. IEEE Trans Neural Netw Learn Syst 33(2):473–493

  49. Zhao X, Wang S, Sun Q (2023) Open-set domain adaptation by deconfounding domain gaps. Appl Intell 53(7):7862–7875

  50. Zhou J, Jing B, Wang Z, Xin H, Tong H (2021) Soda: Detecting covid-19 in chest x-rays with semi-supervised open set domain adaptation. IEEE/ACM Trans Comput Biol Bioinform 19(5):2605–2612

  51. Zhu Y, Sun X, Diao W, Li H, Fu K (2022) Rfa-net: Reconstructed feature alignment network for domain adaptation object detection in remote sensing imagery. IEEE J Sel Top Appl Earth Obs Remote Sens 15:5689–5703

  52. Zhu Y, Wu X, Qiang J, Yuan Y, Li Y (2023) Representation learning via an integrated autoencoder for unsupervised domain adaptation. Front Comput Sci 17(5):175334

  53. Caputo B, Müller H, Martinez-Gomez J, Villegas M, Acar B, Patricia N, Marvasti N, Üsküdarlı S, Paredes R, Cazorla M, et al (2014) Imageclef 2014: Overview and analysis of the results. In: Information Access Evaluation. Multilinguality, Multimodality, and Interaction. pp 192–211

  54. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N (2021) An image is worth 16x16 words: Transformers for image recognition at scale. In: International Conference on Learning Representations

  55. Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861

  56. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: Hierarchical vision transformer using shifted windows. In: IEEE International Conference on Computer Vision. pp 10012–10022

  57. Liu Z, Mao H, Wu C-Y, Feichtenhofer C, Darrell T, Xie S (2022) A convnet for the 2020s. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp 11976–11986

  58. Tan M, Le Q (2019) Efficientnet: Rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning. pp 6105–6114

  59. Xie S, Girshick R, Dollár P, Tu Z, He K (2017) Aggregated residual transformations for deep neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition. pp 1492–1500

Funding

This work is supported in part by the National Natural Science Foundation of China (62071066) and the Fundamental Research Funds for the Central Universities (2242022k60006).

Author information

Corresponding author

Correspondence to Qiang Wang.

Ethics declarations

Conflicts of interest

The authors declare that they have no competing interests that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Liao, H., Wang, Q., Zhao, S. et al. Domain consensual contrastive learning for few-shot universal domain adaptation. Appl Intell 53, 27191–27206 (2023). https://doi.org/10.1007/s10489-023-04890-0
