Suppressing Mislabeled Data via Grouping and Self-attention

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12361)


Deep networks achieve excellent results on large-scale clean data but degrade significantly when learning from noisy labels. To suppress the impact of mislabeled data, this paper proposes a conceptually simple yet efficient training block, termed Attentive Feature Mixup (AFM), which pays more attention to clean samples and less to mislabeled ones via sample interactions in small groups. Specifically, this plug-and-play AFM first leverages a group-to-attend module to construct groups and assign attention weights to group-wise samples, and then uses a mixup module with the attention weights to interpolate massive noise-suppressed samples. AFM has several appealing benefits for noise-robust deep learning. (i) It does not rely on any assumptions or an extra clean subset. (ii) With massive interpolations, the ratio of useless samples is reduced dramatically compared to the original noise ratio. (iii) It jointly optimizes the interpolation weights with the classifier, suppressing the influence of mislabeled data via low attention weights. (iv) It partially inherits the vicinal risk minimization of mixup to alleviate over-fitting, while improving it by sampling fewer feature-target vectors around mislabeled data from the mixup vicinal distribution. Extensive experiments demonstrate that AFM yields state-of-the-art results on two challenging real-world noisy datasets: Food101N and Clothing1M.
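The group-then-interpolate mechanism described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it pairs samples into groups of two, scores each group member with a hypothetical linear scoring function (standing in for the learned group-to-attend module), normalizes the scores into attention weights with a softmax, and mixes both features and labels with those weights, so a sample receiving low attention contributes little to the interpolated training example.

```python
import numpy as np

def attentive_feature_mixup(features, labels, score_w, rng):
    """Sketch of AFM with groups of size two.

    features: (n, d) feature vectors; labels: (n, c) one-hot targets;
    score_w: (d,) hypothetical linear scorer standing in for the learned
    group-to-attend module; rng: numpy random Generator.
    """
    n = features.shape[0]
    pairs = rng.permutation(n).reshape(-1, 2)      # random groups of two
    f1, f2 = features[pairs[:, 0]], features[pairs[:, 1]]
    # Score each group member, then softmax within the group so the two
    # attention weights sum to one (a presumably-clean sample gets the
    # larger weight once the scorer is trained jointly with the classifier).
    s1, s2 = f1 @ score_w, f2 @ score_w
    m = np.maximum(s1, s2)                         # stabilize the softmax
    e1, e2 = np.exp(s1 - m), np.exp(s2 - m)
    a = e1 / (e1 + e2)                             # attention weight of member 1
    # Interpolate features and targets with the attention weights.
    mixed_x = a[:, None] * f1 + (1 - a)[:, None] * f2
    mixed_y = a[:, None] * labels[pairs[:, 0]] + (1 - a)[:, None] * labels[pairs[:, 1]]
    return mixed_x, mixed_y, a
```

Unlike vanilla mixup, where the interpolation coefficient is drawn from a fixed Beta distribution, here the coefficient is produced from the samples themselves, which is what lets training push mislabeled samples toward low weights.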


Keywords: Noisy-labeled data · Mixup · Noise-robust learning



This work is partially supported by the National Key Research and Development Program of China (No. 2020YFC2004800), the National Natural Science Foundation of China (U1813218, U1713208), the Science and Technology Service Network Initiative of the Chinese Academy of Sciences (KFJ-STS-QYZX-092), the Guangdong Special Support Program (2016TX03X276), the Shenzhen Basic Research Program (JSGG20180507182100698, CXB201104220032A), and the Shenzhen Institute of Artificial Intelligence and Robotics for Society.



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  2. SIAT Branch, Shenzhen Institute of Artificial Intelligence and Robotics for Society, Shenzhen, China
  3. Sun Yat-sen University, Guangzhou, China
  4. Southwest Jiaotong University, Chengdu, China
  5. Nanyang Technological University, Singapore, Singapore
