
Causality Encourages the Identifiability of Instance-Dependent Label Noise

A chapter in Machine Learning for Causal Inference.



Author information


Corresponding author

Correspondence to Tongliang Liu.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Yao, Y., Liu, T., Gong, M., Han, B., Niu, G., Zhang, K. (2023). Causality Encourages the Identifiability of Instance-Dependent Label Noise. In: Li, S., Chu, Z. (eds) Machine Learning for Causal Inference. Springer, Cham. https://doi.org/10.1007/978-3-031-35051-1_11
