Transfer alignment network for blind unsupervised domain adaptation

  • Regular Paper
  • Published in: Knowledge and Information Systems

Abstract

How can we transfer knowledge from a source domain to a target domain when neither side can observe the other's data? Recent transfer learning methods achieve strong classification performance by leveraging source and target data simultaneously at training time. However, jointly accessing both datasets is often impossible for privacy reasons. In this paper, we define the problem of unsupervised domain adaptation under a blind constraint, where neither the source nor the target domain can observe the other's data, yet data from both domains are used for training. We propose TAN (Transfer Alignment Network for blind domain adaptation), an effective method that aligns source and target domain features in the blind setting. TAN maps target features into the source feature space so that a classifier learned from the labeled source data can be readily applied in the target domain. Extensive experiments show that TAN (1) provides state-of-the-art accuracy for blind domain adaptation, outperforming standard supervised learning by up to 9.0%, and (2) performs well regardless of the proportion of target domain data in the training data.
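The core idea in the abstract — exchange no raw data across domains, and map target features into the source feature space so a source-trained classifier transfers — can be illustrated with a minimal toy sketch. The data, the nearest-class-mean classifier, and the linear moment-matching alignment below are all assumptions for illustration only; the actual TAN method is a neural network described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Source side (labeled): features and a classifier live here. ---
# Hypothetical toy data: two Gaussian classes in a 2-D source feature space.
Xs = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
ys = np.array([0] * 100 + [1] * 100)

# A simple nearest-class-mean classifier learned from labeled source features.
means = np.array([Xs[ys == c].mean(axis=0) for c in (0, 1)])

def classify(F):
    # Assign each feature row to the nearest source class mean.
    d = np.linalg.norm(F[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

# --- Target side (unlabeled): same classes, but domain-shifted features. ---
Xt = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
yt = np.array([0] * 100 + [1] * 100)
Xt_obs = Xt * 2.0 + 5.0  # the shifted features the target actually observes

# Blind constraint: only aggregate feature statistics cross the boundary,
# never the raw source or target data.
mu_s, sd_s = Xs.mean(axis=0), Xs.std(axis=0)
mu_t, sd_t = Xt_obs.mean(axis=0), Xt_obs.std(axis=0)

# Moment-matching alignment: map target features into source feature space.
Xt_aligned = (Xt_obs - mu_t) / sd_t * sd_s + mu_s

acc_raw = (classify(Xt_obs) == yt).mean()          # classifier on shifted features
acc_aligned = (classify(Xt_aligned) == yt).mean()  # classifier after alignment
```

In this sketch the source classifier is near chance on the shifted target features but recovers high accuracy once the target features are aligned into the source feature space, which is the effect the blind-adaptation setting relies on.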

Figs. 1–5 (figures omitted in this preview)

Notes

  1. http://archive.ics.uci.edu/ml/datasets/HIGGS.

  2. http://archive.ics.uci.edu/ml/datasets/HEPMASS.

  3. http://archive.ics.uci.edu/ml/datasets/SUSY.

  4. http://archive.ics.uci.edu/ml/datasets/Dataset+for+Sensorless+Drive+Diagnosis.

  5. http://archive.ics.uci.edu/ml/datasets/Gas+Sensor+Array+Drift+Dataset.

  6. https://people.eecs.berkeley.edu/jhoffman/domainadapt/.


Acknowledgements

This work was supported in part by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-00894, Flexible and Efficient Model Compression Method for Various Applications and Environments) and in part by the IITP grant funded by the Korea government (MSIT) (No. 2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)). The Institute of Engineering Research at Seoul National University provided research facilities for this work. The ICT at Seoul National University also provided research facilities for this study. U Kang is the corresponding author.

Author information

Correspondence to U Kang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article

Xu, H., Kang, U. Transfer alignment network for blind unsupervised domain adaptation. Knowl Inf Syst 63, 2861–2881 (2021). https://doi.org/10.1007/s10115-021-01608-x

