
Unsupervised Domain Adaptation with Robust Deep Logistic Regression

  • Guangbin Wu
  • Weishan Chen
  • Wangmeng Zuo
  • David Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10749)

Abstract

The goal of unsupervised domain adaptation (UDA) is to eliminate the cross-domain discrepancy in probability distributions without labeled target samples being available during training. Although recent studies have revealed the benefit of deep convolutional features trained on a large dataset (e.g., ImageNet) in alleviating domain discrepancy, the transferability of features decreases as (i) the difference between the source and target domains increases, or (ii) the layers move toward the top of the network. Therefore, even with deep features, domain adaptation remains necessary. In this paper, we treat UDA as a special case of semi-supervised learning, in which the source samples are labeled while the target samples are unlabeled. Conventional semi-supervised learning methods, however, usually attain poor performance on UDA. Due to domain discrepancy, label noise is generally inevitable when classifiers trained on the source domain are used to predict target samples. We therefore deploy a robust deep logistic regression loss on the target samples, resulting in our RDLR model. In this way, pseudo-labels are gradually assigned to unlabeled target samples according to their maximum classification scores during training. Extensive experiments show that our method yields state-of-the-art results, demonstrating the effectiveness of robust logistic regression classifiers in UDA.
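The abstract's core mechanism, assigning pseudo-labels to unlabeled target samples by their maximum classification score and training them under a noise-robust logistic loss, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual RDLR objective: the confidence threshold and the truncated cross-entropy used here as the "robust" loss are assumptions chosen for illustration.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label(scores, threshold=0.9):
    """Assign a pseudo-label to each unlabeled target sample whose maximum
    class probability exceeds `threshold`; low-confidence samples get -1
    and are left out of the loss. The threshold value is an assumption."""
    probs = softmax(scores)
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    labels[conf < threshold] = -1
    return labels

def robust_log_loss(probs, labels, alpha=0.5):
    """A truncated cross-entropy over pseudo-labeled samples: each
    per-sample loss is capped at -log(alpha), so samples whose pseudo-label
    is wrong (label noise from domain discrepancy) cannot dominate the
    objective. Illustrative choice; the paper's exact robust loss differs."""
    mask = labels >= 0
    if not mask.any():
        return 0.0
    p = probs[mask, labels[mask]]
    per_sample = -np.log(np.clip(p, 1e-12, 1.0))
    return np.minimum(per_sample, -np.log(alpha)).mean()
```

During training, only confidently scored target samples receive pseudo-labels, and the capped loss limits the damage of the inevitable wrong labels, which is the intuition the abstract gives for why a robust loss helps where plain semi-supervised learning fails.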

Keywords

Domain adaptation · Deep convolutional networks · Robust logistic regression · Semi-supervised learning

Notes

Acknowledgements

This work is partially supported by the GRF fund from the HKSAR Government, the central fund from The Hong Kong Polytechnic University, the NSFC fund (61332011, 61671182, 50905040), and the Shenzhen Fundamental Research fund (JCYJ20150403161923528, JCYJ20140508160910917). We also gratefully acknowledge NVIDIA Corporation for providing the Tesla K40c GPU used in this research.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Guangbin Wu (1, 3)
  • Weishan Chen (1)
  • Wangmeng Zuo (2)
  • David Zhang (3, 4)
  1. State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
  2. School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
  3. Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China
  4. Harbin Institute of Technology Shenzhen Graduate School, Shenzhen, China
