Learning with Noisy Labels by Efficient Transition Matrix Estimation to Combat Label Miscorrection

Part of the Lecture Notes in Computer Science book series (LNCS, volume 13685)

Abstract

Recent studies on learning with noisy labels have shown remarkable performance by exploiting a small clean dataset. In particular, model-agnostic meta-learning-based label correction methods further improve performance by correcting noisy labels on the fly. However, there is no safeguard against label miscorrection, resulting in unavoidable performance degradation. Moreover, every training step requires at least three back-propagations, which significantly slows down training. To mitigate these issues, we propose a robust and efficient method, FasTEN, which learns a label transition matrix on the fly. Employing the transition matrix makes the classifier skeptical about all the corrected samples, which alleviates the miscorrection issue. We also introduce a two-head architecture that efficiently estimates the label transition matrix at every iteration within a single back-propagation, so that the estimated matrix closely follows the shifting noise distribution induced by label correction. Extensive experiments demonstrate that FasTEN offers the best training efficiency while achieving accuracy comparable to or better than existing methods, and it attains state-of-the-art performance on the real-world noisy dataset Clothing1M.
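
The abstract describes FasTEN only at a high level. As a rough illustration of how employing a label transition matrix keeps a classifier skeptical of possibly miscorrected labels, the sketch below shows the generic forward loss-correction idea: the model's clean-class probabilities are mapped through a row-stochastic matrix T before computing cross-entropy against the (possibly corrected) labels. This is a minimal, hypothetical sketch in PyTorch; the function name and toy matrix are illustrative, and it omits FasTEN's on-the-fly, two-head estimation of T (see the official code linked in the Notes below for the actual method).

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, observed_labels, transition_matrix):
    """Cross-entropy against observed labels after mapping the model's
    clean-class probabilities through a row-stochastic matrix T, where
    T[i, j] = P(observed label = j | true label = i)."""
    clean_probs = F.softmax(logits, dim=1)             # estimate of P(true y | x)
    observed_probs = clean_probs @ transition_matrix   # implied P(observed y | x)
    return F.nll_loss(torch.log(observed_probs + 1e-12), observed_labels)

# Toy usage: 4 samples, 3 classes, symmetric noise with 0.8 on the diagonal.
logits = torch.randn(4, 3, requires_grad=True)
observed_labels = torch.tensor([0, 2, 1, 1])
T = torch.full((3, 3), 0.1) + 0.7 * torch.eye(3)       # each row sums to 1
loss = forward_corrected_loss(logits, observed_labels, T)
loss.backward()
```

Because the loss is computed on the T-adjusted distribution rather than the raw softmax, gradients from labels that the matrix marks as likely confusions are softened, which is the sense in which the transition matrix guards against miscorrected samples.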

Keywords

  • Learning with noisy labels
  • Label correction
  • Transition matrix estimation

S. M. Kye, K. Choi, and J. Yi contributed equally.

Notes

  1. https://github.com/hyperconnect/FasTEN.

Author information

Corresponding author

Correspondence to Buru Chang.

Electronic supplementary material

Supplementary material 1 (PDF 1073 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kye, S.M., Choi, K., Yi, J., Chang, B. (2022). Learning with Noisy Labels by Efficient Transition Matrix Estimation to Combat Label Miscorrection. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13685. Springer, Cham. https://doi.org/10.1007/978-3-031-19806-9_41

  • DOI: https://doi.org/10.1007/978-3-031-19806-9_41

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19805-2

  • Online ISBN: 978-3-031-19806-9
