
Open-set domain adaptation by deconfounding domain gaps

Published in Applied Intelligence.
Abstract

Open-Set Domain Adaptation (OSDA) aims to adapt a model trained on a source domain to recognition tasks in a target domain while shielding it from distractions caused by open-set classes, i.e., classes "unknown" to the source model. Compared to standard DA, the key to OSDA lies in the separation between known and unknown classes. Existing OSDA methods often fail at this separation because they overlook the confounders (i.e., the domain gaps): their recognition of "unknown" classes is driven not by class semantics but by domain differences (e.g., styles and contexts). We address this issue by explicitly deconfounding domain gaps (DDP) during class separation and domain adaptation in OSDA. The mechanism of DDP is to transfer domain-related styles and contexts from the target domain to the source domain. This enables the model to recognize a class as known (or unknown) based on class semantics rather than on confusion caused by spurious styles or contexts. In addition, we propose a module of ensembling multiple transformations (EMT) to produce calibrated recognition scores, i.e., reliable normality scores, for samples in the target domain. Extensive experiments on two standard benchmarks verify that our proposed method outperforms a wide range of OSDA methods, owing to its advanced ability to correctly recognize unknown classes.
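To make the two mechanisms in the abstract concrete, the following is a minimal NumPy sketch, not the paper's implementation: `adain` illustrates one common way of transferring target-domain style statistics onto source features (adaptive instance normalization), and `normality_score` illustrates the idea of ensembling recognition scores over multiple transformed views of a target sample. All function names and the choice of channel-wise mean/std as the "style" are illustrative assumptions.

```python
import numpy as np

def adain(source_feat, target_feat, eps=1e-5):
    """Re-stylize source features with target-domain statistics.

    Both inputs are (C, H, W) feature maps. The source content is kept,
    but its per-channel mean/std are replaced by the target's -- one
    simple way to transfer domain style from target to source so that
    class decisions rely on semantics rather than style.
    """
    src_mu = source_feat.mean(axis=(1, 2), keepdims=True)
    src_sigma = source_feat.std(axis=(1, 2), keepdims=True) + eps
    tgt_mu = target_feat.mean(axis=(1, 2), keepdims=True)
    tgt_sigma = target_feat.std(axis=(1, 2), keepdims=True) + eps
    return (source_feat - src_mu) / src_sigma * tgt_sigma + tgt_mu

def normality_score(probs_per_view):
    """Ensemble recognition scores over multiple transformations.

    probs_per_view: (T, K) softmax outputs for T transformed views of
    one target sample over K known classes. The maximum of the averaged
    probabilities serves as a normality score: a low value suggests the
    sample belongs to an unknown class.
    """
    mean_probs = np.asarray(probs_per_view).mean(axis=0)
    return float(mean_probs.max())
```

A sample whose averaged known-class probabilities are all low would receive a low normality score and be flagged as unknown; thresholding this score is a standard way to perform the known/unknown separation.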



Acknowledgements

This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME YIRG Grant (Project No. A20E6c0101), the Graduate Innovation Fund of Jilin University (101832020CX179), the Innovation Capacity Construction Project of the Jilin Province Development and Reform Commission (2021FGWCXNLJSSZ10), the National Key Research and Development Program of China (No. 2020YFA0714103), the Science & Technology Development Project of Jilin Province, China (20190302117GX), and the Fundamental Research Funds for the Central Universities, JLU.


Corresponding author

Correspondence to Qianru Sun.

Additional information

Data availability

The datasets used in this study are publicly available online.



About this article


Cite this article

Zhao, X., Wang, S. & Sun, Q. Open-set domain adaptation by deconfounding domain gaps. Appl Intell 53, 7862–7875 (2023). https://doi.org/10.1007/s10489-022-03805-9

