A survey of class-imbalanced semi-supervised learning


Abstract

Semi-supervised learning (SSL) can substantially improve the performance of deep neural networks by exploiting unlabeled data when labeled data is scarce. State-of-the-art (SOTA) semi-supervised algorithms implicitly assume that the class distributions of the labeled and unlabeled datasets are balanced, i.e., that every class has the same number of training samples. When the class distribution of the training data is imbalanced, however, these algorithms perform poorly on minority classes. Recent work has proposed several ways to mitigate this degradation of semi-supervised models under class imbalance. In this article, we comprehensively review class-imbalanced semi-supervised learning (CISSL): we introduce the field, realistically evaluate existing CISSL algorithms, and briefly summarize them.
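To make the imbalanced setting concrete, the sketch below builds the kind of long-tailed labeled/unlabeled split on which CISSL algorithms are typically evaluated (e.g., CIFAR-10-LT-style benchmarks). It is a minimal illustration under our own assumptions: the names long_tailed_counts and make_imbalanced_split, and the exponential-decay protocol parameterized by a head-to-tail imbalance ratio gamma, are illustrative choices rather than code from the paper.

```python
import numpy as np

def long_tailed_counts(n_max, num_classes, gamma):
    """Per-class sample counts decaying exponentially from head to tail.

    Class 0 receives n_max samples and class (num_classes - 1) receives
    n_max / gamma, where gamma is the head-to-tail imbalance ratio.
    """
    return [int(n_max * gamma ** (-k / (num_classes - 1)))
            for k in range(num_classes)]

def make_imbalanced_split(labels, n_max, gamma, seed=0):
    """Subsample per-class indices to follow a long-tailed distribution."""
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    picked = []
    for k, n_k in enumerate(long_tailed_counts(n_max, num_classes, gamma)):
        idx = np.flatnonzero(labels == k)   # all candidate indices of class k
        picked.append(rng.choice(idx, size=n_k, replace=False))
    return np.concatenate(picked)

# Toy demonstration: 10 classes with 1,000 candidate samples each.
# (A real benchmark would draw the unlabeled pool disjointly from the labeled set.)
labels = np.repeat(np.arange(10), 1000)
labeled_idx = make_imbalanced_split(labels, n_max=150, gamma=100, seed=0)
unlabeled_idx = make_imbalanced_split(labels, n_max=300, gamma=100, seed=1)
print(long_tailed_counts(150, 10, 100))  # [150, 89, 53, ..., 2, 1]
```

With gamma = 100 the head class has one hundred times as many samples as the tail class, a typical severe setting; on such splits, standard SSL algorithms tend to produce pseudo-labels biased toward head classes, which is exactly the degradation the CISSL methods surveyed here aim to correct.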


Data availability

The datasets used are publicly available benchmark datasets.

Code availability

The code is available from the corresponding author upon request.


Funding

This work was supported by the National Natural Science Foundation of China (Grant No. 62072326) and the Key Research and Development Plan of Shanxi Province (Nos. 201903D421007 and 202102010101004).

Author information


Contributions

QG: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing-original draft, Writing-review and editing, Visualization. HZ: Validation, Formal analysis. NG: Validation, Formal analysis. BN: Resources, Writing-review and editing, Supervision, Funding acquisition.

Corresponding author

Correspondence to Baoning Niu.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Ethics approval

Not Applicable.

Consent to participate

Not Applicable.

Consent for publication

Not Applicable.

Additional information

Editors: Nuno Moniz, Paula Branco, Luís Torgo, Nathalie Japkowicz, Michal Wozniak, Shuo Wang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Gui, Q., Zhou, H., Guo, N. et al. A survey of class-imbalanced semi-supervised learning. Mach Learn (2023). https://doi.org/10.1007/s10994-023-06344-7

