
Domain-Specific Bias Filtering for Single Labeled Domain Generalization

Published in: International Journal of Computer Vision

Abstract

Conventional Domain Generalization (CDG) utilizes multiple labeled source datasets to train a generalizable model for unseen target domains. However, due to expensive annotation costs, the requirement of labeling all source data is hard to meet in real-world applications. In this paper, we investigate the Single Labeled Domain Generalization (SLDG) task, in which only one source domain is labeled; this is more practical and challenging than the CDG task. A major obstacle in the SLDG task is the discriminability-generalization bias: the discriminative information in the labeled source dataset may contain domain-specific bias, constraining the generalization of the trained model. To tackle this challenging task, we propose a novel framework called Domain-Specific Bias Filtering (DSBF), which initializes a discriminative model with the labeled source data and then filters out its domain-specific bias with the unlabeled source data to improve generalization. We divide the filtering process into (1) feature-extractor debiasing via k-means clustering-based semantic feature re-extraction, and (2) classifier rectification through attention-guided semantic feature projection. DSBF unifies the exploration of the labeled and the unlabeled source data to enhance both the discriminability and the generalization of the trained model. We further provide theoretical analysis to verify the proposed domain-specific bias filtering process. Extensive experiments on multiple datasets show the superior performance of DSBF on both the challenging SLDG task and the CDG task.
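The two filtering steps named in the abstract can be sketched in miniature as follows. This is a hedged NumPy illustration only, not the authors' implementation: the function names, the toy similarity choice (softmax attention over negative squared distances to centroids), and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means: cluster unlabeled source features to recover
    pseudo-semantic structure (illustrating step 1, feature-extractor
    debiasing via semantic feature re-extraction)."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)].astype(float)
    for _ in range(iters):
        # squared distances to each centroid, shape (n, k)
        d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():  # leave empty clusters where they are
                centroids[j] = features[assign == j].mean(axis=0)
    return centroids, assign

def attention_projection(features, centroids, tau=1.0):
    """Attention-guided projection (illustrating step 2, classifier
    rectification): softmax weights from negative squared distances to
    the centroids, then project each feature onto the attention-weighted
    centroid mixture."""
    d2 = ((features[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    logits = -d2 / tau
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)  # row-wise softmax
    return w @ centroids
```

On two well-separated toy blobs, the clustering recovers the blob structure and the projection pulls each feature toward its dominant centroid; the real method operates on deep features and rectifies an actual classifier head, which this sketch does not model.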



Acknowledgements

This work was supported in part by the National Key Research and Development Program of China (2021YFC3340300), the National Natural Science Foundation of China (U20A20387, Nos. 62006207, 62037001), the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001), a project by Shanghai AI Laboratory (P22KS00111), the Starry Night Science Fund of Zhejiang University Shanghai Institute for Advanced Study (SN-ZJU-SIAS-0010), the Natural Science Foundation of Zhejiang Province (LZ22F020012, LQ21F020020), the Fundamental Research Funds for the Central Universities (226-2022-00142, 226-2022-00051), and the National Key Research and Development Project (2022YFC2504605).

Author information


Corresponding author

Correspondence to Kun Kuang.

Additional information

Communicated by Wanli Ouyang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yuan, J., Ma, X., Chen, D. et al. Domain-Specific Bias Filtering for Single Labeled Domain Generalization. Int J Comput Vis 131, 552–571 (2023). https://doi.org/10.1007/s11263-022-01712-7

