Abstract
Semi-supervised learning (SSL), which aims to improve the performance of supervised learning by exploiting unlabeled data, has been widely studied in recent years. However, noisy labels in the labeled data are an inevitable consequence of limited annotator expertise or inadvertent labeling errors, and they degrade model performance, reducing both accuracy and reliability. In this paper, we introduce the paradigm of Semi-Supervised Learning with Noisy Labels (SSLNL), which addresses semi-supervised classification under both limited labeled data and label noise. To meet these challenges, we propose the Noisy Samples Filter (NSF) module, which filters out noisy samples by leveraging class agreement, and the Semi-Supervised Contrastive Learning (SSCL) module, which harnesses high-confidence pseudo-labels generated by a semi-supervised model to learn robust features through contrastive learning. Extensive experiments on the CIFAR-10, CIFAR-100, and SVHN datasets demonstrate that our proposed method outperforms state-of-the-art methods on both SSL and SSLNL, validating the effectiveness of our solution.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Sun, Y., Gao, C. (2024). NCMatch: Semi-supervised Learning with Noisy Labels via Noisy Sample Filter and Contrastive Learning. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14432. Springer, Singapore. https://doi.org/10.1007/978-981-99-8543-2_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8542-5
Online ISBN: 978-981-99-8543-2
eBook Packages: Computer Science (R0)