
Learning to Detect Open Classes for Universal Domain Adaptation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12360)

Abstract

Universal domain adaptation (UniDA) transfers knowledge between domains without any constraint on the label sets, extending the applicability of domain adaptation in the wild. In UniDA, both the source and target label sets may hold labels not shared by the other domain. A fundamental challenge of UniDA is to classify target examples in the shared classes despite the domain shift. A more prominent challenge is to mark target examples in the target-individual label set (open classes) as “unknown”. These two entangled challenges make UniDA a highly under-explored problem. Previous work on UniDA focuses on classifying data in the shared classes and uses per-class accuracy as the evaluation metric, which is strongly biased toward the accuracy on the shared classes. However, accurately detecting open classes is the mission-critical task for real universal domain adaptation: once the open classes are detected, UniDA reduces to the well-established closed-set domain adaptation problem. Towards accurate open-class detection, we propose Calibrated Multiple Uncertainties (CMU) with a novel transferability measure estimated by a mixture of complementary uncertainty quantities: entropy, confidence, and consistency, defined on conditional probabilities calibrated by a multi-classifier ensemble model. The new transferability measure accurately quantifies the inclination of a target example towards the open classes. We also propose a novel evaluation metric, the H-score, which emphasizes both the accuracy on the shared classes and the accuracy on the “unknown” class. Empirical results under the UniDA setting show that CMU outperforms state-of-the-art domain adaptation methods on all evaluation metrics, especially by a large margin on the H-score.
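To make the two key quantities concrete: the transferability measure mixes three complementary uncertainty cues (entropy, confidence, consistency) computed on calibrated class probabilities from a multi-classifier ensemble, and the H-score is the harmonic mean of the shared-class accuracy and the “unknown”-class accuracy. The sketch below is a minimal NumPy illustration, not the authors' released implementation; in particular, the consistency statistic (one minus the mean per-class standard deviation across classifiers) and the equal-weight mixing are illustrative assumptions.

    # Hypothetical sketch of the two quantities described in the abstract.
    import numpy as np

    def hscore(acc_shared: float, acc_unknown: float) -> float:
        """Harmonic mean of shared-class accuracy and 'unknown'-class accuracy."""
        if acc_shared + acc_unknown == 0:
            return 0.0
        return 2 * acc_shared * acc_unknown / (acc_shared + acc_unknown)

    def transferability(ensemble_probs: np.ndarray) -> float:
        """Combine entropy, confidence, and consistency into a score in [0, 1].

        ensemble_probs: shape (n_classifiers, n_shared_classes); each row is a
        calibrated class-probability vector for one target example. A higher
        score suggests a shared-class example; a lower score suggests an
        open-class ('unknown') example.
        """
        k = ensemble_probs.shape[1]           # number of shared classes
        mean_p = ensemble_probs.mean(axis=0)  # ensemble-averaged posterior

        # (1) entropy of the averaged posterior, normalized to [0, 1]
        ent = -np.sum(mean_p * np.log(mean_p + 1e-12)) / np.log(k)
        # (2) confidence: maximum predicted probability
        conf = mean_p.max()
        # (3) consistency: agreement across classifiers (illustrative choice)
        cons = 1.0 - ensemble_probs.std(axis=0).mean()

        # low entropy, high confidence, and high consistency all vote "shared"
        return ((1.0 - ent) + conf + cons) / 3.0

    # toy usage: a confident, consistent ensemble vs. a confused one
    sure = np.array([[0.90, 0.05, 0.05], [0.88, 0.07, 0.05]])
    unsure = np.array([[0.40, 0.35, 0.25], [0.20, 0.50, 0.30]])
    print(transferability(sure), transferability(unsure))  # ~0.83 vs ~0.46
    print(hscore(0.8, 0.6))                                 # ~0.686

Because the H-score collapses to zero when either accuracy is zero, a method cannot score well by ignoring the “unknown” class, which is exactly the bias the per-class accuracy metric suffers from.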

Keywords

Universal domain adaptation · Open class detection

Notes

Acknowledgement

This work was supported by the Natural Science Foundation of China (61772299, 71690231) and the China University S&T Innovation Plan guided by the Ministry of Education.

Supplementary material

504470_1_En_34_MOESM1_ESM.pdf (2.2 MB)
Supplementary material 1 (PDF 2214 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. School of Software, BNRist, Tsinghua University, Beijing, China
  2. Research Center for Big Data, Tsinghua University, Beijing, China
