
RDA: Reciprocal Distribution Alignment for Robust Semi-supervised Learning

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13690)

Abstract

In this work, we propose Reciprocal Distribution Alignment (RDA), a hyperparameter-free framework for semi-supervised learning (SSL) that requires no confidence threshold and works under both the conventional matched and the mismatched class distributions. Distribution mismatch is an often-overlooked but more general SSL scenario in which the labeled and the unlabeled data do not follow the same class distribution. Mismatch may prevent the model from exploiting the labeled data reliably and drastically degrade the performance of SSL methods, a failure that traditional distribution alignment cannot rescue. In RDA, we enforce a reciprocal alignment between the distributions of the predictions from two classifiers: one predicting pseudo-labels and one predicting complementary labels on the unlabeled data. These two distributions carry complementary information and can thus regularize each other without any prior on the class distribution. Moreover, we show theoretically that RDA maximizes the input-output mutual information. Our approach achieves promising performance in SSL under a variety of mismatched-distribution scenarios, as well as in the conventional matched SSL setting. Our code is available at: https://github.com/NJUyued/RDA4RobustSSL.
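As a rough illustration of the reciprocal-alignment idea described above (this is a sketch, not the authors' implementation), one can renormalize the complement of the complementary-label classifier's output and align its batch-level mean with the pseudo-label classifier's batch-level mean. The symmetric-KL penalty and all names below are illustrative assumptions:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def reciprocal_alignment_loss(p, q, eps=1e-8):
    """Sketch of a reciprocal alignment term.

    p: (N, C) softmax outputs of the pseudo-label classifier.
    q: (N, C) softmax outputs of the complementary-label classifier.
    The complement of q, renormalized over classes, should agree with p
    on average over the batch; here we penalize the symmetric KL
    divergence between the two batch-mean distributions.
    """
    p_bar = p.mean(axis=0)                        # mean pseudo-label distribution
    q_comp = 1.0 - q                              # complement: "not this class"
    q_comp = q_comp / q_comp.sum(axis=1, keepdims=True)
    q_bar = q_comp.mean(axis=0)                   # mean complement distribution
    kl = lambda a, b: np.sum(a * (np.log(a + eps) - np.log(b + eps)))
    return kl(p_bar, q_bar) + kl(q_bar, p_bar)

rng = np.random.default_rng(0)
p = softmax(rng.normal(size=(8, 4)))
q = softmax(rng.normal(size=(8, 4)))
loss = reciprocal_alignment_loss(p, q)
```

Because both mean distributions are derived from model predictions rather than a fixed target, each side regularizes the other, which is the "reciprocal" aspect; no class prior or confidence threshold enters the term.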

Y. Duan and Y. Shi are with the National Key Laboratory for Novel Software Technology and the National Institute of Healthcare Data Science, Nanjing University.



Acknowledgements

This work is supported by projects from the NSFC Major Program (62192783), CAAI-Huawei MindSpore (CAAIXSJLJJ-2021-042A), the China Postdoctoral Science Foundation (2021M690609), Jiangsu NSF (BK20210224), and CCF-Lenovo Blue Ocean. We thank Prof. Penghui Yao for helpful discussions.

Author information

Corresponding author

Correspondence to Yinghuan Shi.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 326 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Duan, Y., Qi, L., Wang, L., Zhou, L., Shi, Y. (2022). RDA: Reciprocal Distribution Alignment for Robust Semi-supervised Learning. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13690. Springer, Cham. https://doi.org/10.1007/978-3-031-20056-4_31


  • DOI: https://doi.org/10.1007/978-3-031-20056-4_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20055-7

  • Online ISBN: 978-3-031-20056-4

  • eBook Packages: Computer Science; Computer Science (R0)
