
Algorithmic Fairness and AI Justice in Addressing Health Equity

Chapter in: Healthcare Information Management Systems

Abstract

This chapter will focus on the technical aspects of addressing health equity associated with the use of artificial intelligence (AI) and machine learning (ML) in healthcare systems and applications. We will examine this issue from both technical and sociological perspectives, illustrating the impact of algorithmic bias through specific examples of AI algorithms that have been shown to produce biased outcomes in healthcare, as well as in other arenas with important implications for healthcare. We will review a variety of analytic methodologies that can be used to address sources of algorithmic bias, including the considerations for selecting the most appropriate method in different analytic contexts. We will conclude with an examination of the broader societal impact of algorithmic bias and report on an environmental scan we conducted of organizations actively involved in addressing algorithmic bias and AI justice, describing projects and efforts that are particularly relevant to addressing health disparities.
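To make the notion of "biased outcomes" concrete, the sketch below computes two group-fairness metrics commonly used to audit a binary classifier's predictions across a protected attribute: the demographic parity difference (gap in positive-prediction rates between groups) and the equal-opportunity gap (difference in true-positive rates between groups). The data, group coding, and function names are hypothetical illustrations, not taken from the chapter.

```python
# Minimal fairness-audit sketch on toy data. Group labels: 0 and 1.

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between group 0 and group 1."""
    rate_0 = sum(p for p, g in zip(y_pred, group) if g == 0) / group.count(0)
    rate_1 = sum(p for p, g in zip(y_pred, group) if g == 1) / group.count(1)
    return rate_0 - rate_1

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates between groups (equal opportunity)."""
    def tpr(g):
        # Predictions for individuals in group g whose true label is positive.
        preds = [p for p, t, gg in zip(y_pred, y_true, group) if gg == g and t == 1]
        return sum(preds) / len(preds)
    return tpr(0) - tpr(1)

# Toy audit: labels and predictions for 8 individuals in two groups.
y_true = [1, 0, 1, 1, 0, 1, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))        # 0.25
print(equal_opportunity_gap(y_true, y_pred, group))        # ~0.167
```

A nonzero gap on either metric flags a disparity worth investigating; as the chapter notes, which metric is appropriate depends on the analytic context, and the two cannot in general be satisfied simultaneously.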



Acknowledgements

The authors would like to thank their colleagues Irene Dankwa-Mullan and Fernando Suarez Saiz at IBM Watson Health for their contributions to the planning and preparation of this chapter.


Corresponding author

Correspondence to Eileen Koski.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Park, Y., Singh, M., Koski, E., Sow, D.M., Scheufele, E.L., Bright, T.J. (2022). Algorithmic Fairness and AI Justice in Addressing Health Equity. In: Kiel, J.M., Kim, G.R., Ball, M.J. (eds) Healthcare Information Management Systems. Health Informatics. Springer, Cham. https://doi.org/10.1007/978-3-031-07912-2_15


  • Print ISBN: 978-3-031-07911-5

  • Online ISBN: 978-3-031-07912-2

  • eBook Packages: Medicine (R0)
