Universal Model Adaptation by Style Augmented Open-set Consistency

Published in: Applied Intelligence

Abstract

Learning to recognize unknown target samples is of great importance for unsupervised domain adaptation (UDA). Open-set domain adaptation (OSDA) and open-partial domain adaptation (OPDA) are two typical UDA scenarios; the latter additionally assumes that some source-private categories exist. However, most existing approaches are devised for one UDA scenario and often perform poorly on the other. Furthermore, they also require access to source data during adaptation, making them highly impractical due to data privacy concerns. To address these issues, we propose a novel universal model framework that can handle both UDA scenarios without prior knowledge of the source-target label-set relationship or access to source data. For source training, we learn a source model with both closed-set and open-set classifiers and provide it to the target domain. For target adaptation, we propose a novel Style Augmented Open-set Consistency (SAOC) objective to minimize the impact of target domain style on model behavior. Specifically, we exploit the proposed Intra-Domain Style Augmentation (IDSA) strategy to generate style-augmented target images. Then we enforce consistency between the open-set classifier's predictions on each image and on its style-augmented version. Extensive experiments on OSDA and OPDA scenarios demonstrate that our proposed framework exhibits comparable or superior performance to some recent source-dependent approaches.
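The exact IDSA and SAOC formulations are not given in this preview. As a rough illustration of the general idea, the following sketch uses Fourier amplitude swapping between two target images (an FDA-style augmentation that replaces low-frequency amplitude, i.e. style, while keeping phase, i.e. content) and a symmetric-KL consistency term between classifier outputs; both the function names and the specific choices (window size `beta`, symmetric KL) are hypothetical stand-ins, not the paper's actual method.

```python
import numpy as np

def style_augment(img, style_img, beta=0.1):
    # Hypothetical intra-domain style augmentation: replace the
    # low-frequency amplitude spectrum of `img` with that of another
    # target image `style_img`, keeping the phase (content) of `img`.
    fft_a = np.fft.fft2(img, axes=(0, 1))
    fft_b = np.fft.fft2(style_img, axes=(0, 1))
    amp_a, pha_a = np.abs(fft_a), np.angle(fft_a)
    amp_b = np.abs(fft_b)
    # Centre the low frequencies, then swap a small central window.
    amp_a = np.fft.fftshift(amp_a, axes=(0, 1))
    amp_b = np.fft.fftshift(amp_b, axes=(0, 1))
    h, w = img.shape[:2]
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    amp_a[ch - bh:ch + bh, cw - bw:cw + bw] = \
        amp_b[ch - bh:ch + bh, cw - bw:cw + bw]
    amp_a = np.fft.ifftshift(amp_a, axes=(0, 1))
    aug = np.fft.ifft2(amp_a * np.exp(1j * pha_a), axes=(0, 1))
    return np.real(aug)

def open_set_consistency(p, p_aug, eps=1e-8):
    # A plausible consistency loss: symmetric KL divergence between the
    # open-set classifier's predictions on an image and on its
    # style-augmented version (zero when the predictions agree).
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)), axis=-1)
    return float(np.mean(0.5 * (kl(p, p_aug) + kl(p_aug, p))))
```

Minimizing such a consistency term pushes the open-set classifier to ignore low-frequency style variation within the target domain, which matches the abstract's stated goal of reducing the influence of target style on model behavior.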


Data Availability

The datasets used in this study are publicly available online.


Acknowledgements

This work is supported by the National Key Research and Development Program of China (No. 2020YFA0714103), the Innovation Capacity Construction Project of Jilin Province Development and Reform Commission (2021FGWCXNLJS SZ10, 2019C053-3) and the Fundamental Research Funds for the Central Universities, JLU.

Author information

Corresponding author

Correspondence to Shengsheng Wang.

Ethics declarations

Conflicts of interest

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhao, X., Wang, S. Universal Model Adaptation by Style Augmented Open-set Consistency. Appl Intell 53, 22667–22681 (2023). https://doi.org/10.1007/s10489-023-04731-0

