
Uncertainty-Guided Source-Free Domain Adaptation

  • Conference paper
  • Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13685)

Abstract

Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by using only a pre-trained source model. However, the absence of the source data and the domain shift make the predictions on the target data unreliable. We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation. For this, we construct a probabilistic source model by incorporating priors on the network parameters, inducing a distribution over the model predictions. Uncertainties are estimated by employing a Laplace approximation and incorporated to identify target data points that do not lie in the source manifold and to down-weight them when maximizing the mutual information on the target data. Unlike recent works, our probabilistic treatment is computationally lightweight, decouples source training and target adaptation, and requires no specialized source training or changes to the model architecture. We show the advantages of uncertainty-guided SFDA over traditional SFDA in the closed-set and open-set settings and provide empirical evidence that our approach is more robust to strong domain shifts even without tuning.
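The adaptation recipe in the abstract can be sketched as follows. This is a simplified illustration only: it assumes a diagonal Laplace posterior over the last-layer weights, Monte-Carlo sampling for the predictive distribution, and a linear confidence weight 1 − H/H_max for down-weighting uncertain target points in the information-maximization objective. The paper's exact covariance structure and weighting scheme may differ; all function and variable names here are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def predictive_uncertainty(features, w_mean, w_var, n_samples=50, rng=None):
    """Monte-Carlo predictive mean and entropy under a diagonal Laplace
    posterior N(w_mean, diag(w_var)) over the last-layer weights.

    features: (N, D) target features; w_mean, w_var: (D, C) weight stats.
    Returns mean class probabilities (N, C) and predictive entropy (N,).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    D, C = w_mean.shape
    probs = np.zeros((n_samples, features.shape[0], C))
    for s in range(n_samples):
        # Sample one weight realization from the approximate posterior.
        w = w_mean + rng.standard_normal((D, C)) * np.sqrt(w_var)
        probs[s] = softmax(features @ w)
    p_mean = probs.mean(axis=0)
    entropy = -(p_mean * np.log(p_mean + 1e-12)).sum(axis=-1)
    return p_mean, entropy

def weighted_info_max(p, entropy, max_entropy):
    """Uncertainty-weighted mutual-information objective (to be maximized):
    marginal entropy minus a per-sample conditional entropy term, where
    points with high predictive uncertainty (likely outside the source
    manifold) receive weights near 0 and confident points near 1."""
    w = 1.0 - entropy / max_entropy            # weights in [0, 1]
    cond_ent = (w * (-(p * np.log(p + 1e-12)).sum(axis=-1))).mean()
    p_marg = p.mean(axis=0)
    marg_ent = -(p_marg * np.log(p_marg + 1e-12)).sum()
    return marg_ent - cond_ent
```

In a full pipeline the objective would be differentiated with respect to the feature extractor's parameters during target adaptation; here the functions only evaluate it, to keep the sketch self-contained.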



Acknowledgements

We acknowledge funding from EU H2020 projects SPRING (No. 871245) and AI4Media (No. 951911); the EUREGIO project OLIVER; Academy of Finland (No. 339730, 308640), and the Finnish Center for Artificial Intelligence (FCAI). We acknowledge the computational resources by the Aalto Science-IT project and CSC – IT Center for Science, Finland.

Author information

Corresponding author

Correspondence to Subhankar Roy.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 461 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Roy, S. et al. (2022). Uncertainty-Guided Source-Free Domain Adaptation. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13685. Springer, Cham. https://doi.org/10.1007/978-3-031-19806-9_31

  • DOI: https://doi.org/10.1007/978-3-031-19806-9_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19805-2

  • Online ISBN: 978-3-031-19806-9

  • eBook Packages: Computer Science (R0)
