
Why black box machine learning should be avoided for high-stakes decisions, in brief

Comment

From Nature Reviews Methods Primers


Black box machine learning models can be dangerous for high-stakes decisions. They often rely on untrustworthy databases, and their real-time predictions are difficult to troubleshoot, explain and error-check. Their use raises serious ethics and accountability issues.
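
The contrast the comment draws is easy to see in code. The following is a minimal sketch, not anything from the article itself: it assumes scikit-learn, uses its bundled breast-cancer dataset purely as a stand-in task, and treats a random forest as the "black box" and a depth-limited decision tree as the interpretable alternative whose full decision logic can be printed and audited.

```python
# Minimal illustrative sketch (not from the comment): a black box vs. an
# interpretable model on a stand-in task. Assumes scikit-learn and its
# bundled breast-cancer dataset; model choices are for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# "Black box": hundreds of trees; no single prediction is easy to trace.
forest = RandomForestClassifier(n_estimators=300, random_state=0)
forest.fit(X_train, y_train)

# Interpretable alternative: a tree small enough to print in full.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print(f"black box accuracy:     {forest.score(X_test, y_test):.3f}")
print(f"interpretable accuracy: {tree.score(X_test, y_test):.3f}")

# Every prediction of the small tree can be traced through these explicit
# rules, which is what makes troubleshooting and error-checking feasible.
print(export_text(tree, feature_names=list(data.feature_names)))
```

On well-structured tabular tasks like this one, the small model's accuracy is often competitive with the black box, while its rules can be checked line by line.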



Author information


Corresponding author

Correspondence to Cynthia Rudin.

Ethics declarations

Competing interests

The authors declare no competing interests.


About this article


Cite this article

Rudin, C. Why black box machine learning should be avoided for high-stakes decisions, in brief. Nat Rev Methods Primers 2, 81 (2022). https://doi.org/10.1038/s43586-022-00172-0
