Unsupervised Domain Adaptation with Unified Joint Distribution Alignment

  • Conference paper
Database Systems for Advanced Applications (DASFAA 2021)

Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 12682))

Abstract

Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain. Recently, domain-adversarial learning has become an increasingly popular approach to this task, bridging the source and target domains by adversarially learning domain-invariant representations. Despite its success, domain-adversarial learning fails to make representations invariant at the class level, which may lead to incorrect distribution alignment. To address this problem, we propose Unified Joint Distribution Alignment (UJDA), a method that performs domain-level and class-level alignment simultaneously in a unified learning process. Instead of the classical domain discriminator, UJDA adopts two novel components, called joint classifiers, which are supplied with both domain and label information from the two domains. Each joint classifier plays a min-max game with the feature extractor via a joint adversarial loss to achieve class-level alignment; in addition, the two joint classifiers together play a min-max game with the feature extractor via a disagreement loss to achieve domain-level alignment. Comprehensive experiments on two real-world datasets verify that our method outperforms several state-of-the-art domain adaptation methods.
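A joint classifier of this kind predicts over a label space that encodes both the class and the domain of a sample. The following is a minimal sketch of one plausible construction, assuming a joint label space of size 2K (K classes per domain, source classes in [0, K), target classes in [K, 2K)); the paper's exact loss formulation and label encoding may differ:

```python
import math

def joint_label(class_label, is_target, num_classes):
    # Map a (class, domain) pair into a joint label space of size 2*K:
    # source classes occupy [0, K), target classes occupy [K, 2K).
    # (Illustrative construction, not necessarily the paper's exact one.)
    return class_label + (num_classes if is_target else 0)

def cross_entropy(logits, label):
    # Softmax cross-entropy for a single example, computed stably
    # via the log-sum-exp trick.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[label]

K = 10
logits = [0.0] * (2 * K)          # joint classifier output over 2K categories
src = joint_label(3, is_target=False, num_classes=K)   # source sample, class 3
tgt = joint_label(3, is_target=True, num_classes=K)    # target sample, class 3
loss = cross_entropy(logits, src)
```

In the adversarial game sketched by the abstract, each joint classifier would minimize such a joint cross-entropy while the feature extractor is trained against it, so that source and target features of the same class become indistinguishable in the joint label space.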


Notes

  1. We use the notation x[k] for indexing the value at the kth index of the vector x.


Acknowledgements

This paper is supported by the National Key Research and Development Program of China (Grant No. 2018YFB1403400), the National Natural Science Foundation of China (Grant No. 61876080), and the Collaborative Innovation Center of Novel Software Technology and Industrialization at Nanjing University.

Author information

Corresponding author

Correspondence to Chongjun Wang.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Du, Y., Tan, Z., Zhang, X., Yao, Y., Yu, H., Wang, C. (2021). Unsupervised Domain Adaptation with Unified Joint Distribution Alignment. In: Jensen, C.S., et al. (eds.) Database Systems for Advanced Applications. DASFAA 2021. Lecture Notes in Computer Science, vol 12682. Springer, Cham. https://doi.org/10.1007/978-3-030-73197-7_30

  • DOI: https://doi.org/10.1007/978-3-030-73197-7_30

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-73196-0

  • Online ISBN: 978-3-030-73197-7

  • eBook Packages: Computer Science (R0)
