NCMatch: Semi-supervised Learning with Noisy Labels via Noisy Sample Filter and Contrastive Learning

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14432)

Abstract

Semi-supervised learning (SSL), which aims to improve the performance of supervised learning by exploiting unlabeled data, has been widely studied in recent years. However, noisy labels in the labeled data are an inevitable consequence of limited annotator expertise or inadvertent labeling errors, and they degrade model accuracy and reliability. In this paper, we introduce the paradigm of Semi-Supervised Learning with Noisy Labels (SSLNL), which addresses semi-supervised classification when labeled data are both scarce and corrupted by label noise. To meet these challenges, we propose the Noisy Samples Filter (NSF) module, which filters out noisy samples by leveraging class agreement, and the Semi-Supervised Contrastive Learning (SSCL) module, which harnesses high-confidence pseudo-labels generated by a semi-supervised model to learn robust features through contrastive learning. Extensive experiments on CIFAR-10, CIFAR-100, and SVHN demonstrate that our proposed method outperforms state-of-the-art methods on both SSL and SSLNL, validating the effectiveness of our solution.
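The two modules described in the abstract can be sketched as follows. This is a minimal NumPy illustration based only on the abstract's high-level description, not the authors' implementation: the function names, the specific two-model agreement rule, and the confidence threshold `tau` are all assumptions.

```python
import numpy as np

def filter_by_class_agreement(probs_a, probs_b, given_labels):
    """NSF sketch: keep a labeled sample only when two models (or views)
    both predict the class that matches its possibly noisy label."""
    pred_a = probs_a.argmax(axis=1)
    pred_b = probs_b.argmax(axis=1)
    return (pred_a == given_labels) & (pred_b == given_labels)

def select_pseudo_labels(probs, tau=0.95):
    """SSCL sketch, step 1: keep only high-confidence pseudo-labels
    produced by the semi-supervised model."""
    pseudo = probs.argmax(axis=1)
    return pseudo, probs.max(axis=1) >= tau

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """SSCL sketch, step 2: SupCon-style loss over (pseudo-)labeled
    embeddings -- samples sharing a label are pulled together."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    logits = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    denom = (np.exp(logits) * not_self).sum(axis=1, keepdims=True)
    log_prob = logits - np.log(denom)
    positives = (labels[:, None] == labels[None, :]) & not_self
    counts = positives.sum(axis=1)
    valid = counts > 0                                   # anchors with >=1 positive
    return -((log_prob * positives).sum(axis=1)[valid] / counts[valid]).mean()
```

In this reading, labeled samples passing the agreement filter and unlabeled samples passing the confidence threshold would jointly feed the contrastive loss, so that label-noise survivors and pseudo-labeled data share one robust feature space.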



Author information

Corresponding author

Correspondence to Can Gao.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Sun, Y., Gao, C. (2024). NCMatch: Semi-supervised Learning with Noisy Labels via Noisy Sample Filter and Contrastive Learning. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14432. Springer, Singapore. https://doi.org/10.1007/978-981-99-8543-2_2

  • DOI: https://doi.org/10.1007/978-981-99-8543-2_2

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8542-5

  • Online ISBN: 978-981-99-8543-2

  • eBook Packages: Computer Science (R0)
