Abstract
This chapter will focus on technical aspects of addressing health equity in healthcare systems and applications that use Artificial Intelligence (AI) and Machine Learning (ML). We will examine this issue from both technical and sociological perspectives, describing the impact of algorithmic bias through specific examples of AI algorithms that have been shown to produce biased outcomes in healthcare, and in other arenas with important implications for healthcare. We will review a variety of analytic methodologies that can be used to address sources of algorithmic bias, including considerations for selecting the most appropriate method in different analytic contexts. We will conclude with an examination of the broader societal impact of algorithmic bias and a summary of the environmental scan we conducted of organizations actively working on algorithmic bias and AI justice, describing projects and efforts that are particularly relevant to addressing health disparities.
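The analytic methodologies the chapter reviews span both detecting biased outcomes (fairness metrics) and mitigating them (for example, pre-processing the training data). As a minimal, self-contained sketch of these two ideas (our illustration, not code from the chapter; the function names and toy data are hypothetical), the snippet below computes two common group-fairness metrics and reweighing weights in the style of Kamiran and Calders on a small binary dataset:

```python
from collections import Counter

def selection_rate(outcomes, groups, group):
    """P(outcome = 1 | group) for one group."""
    vals = [o for o, g in zip(outcomes, groups) if g == group]
    return sum(vals) / len(vals)

def demographic_parity_diff(outcomes, groups, privileged, unprivileged):
    """Difference in selection rates; 0 indicates demographic parity."""
    return (selection_rate(outcomes, groups, privileged)
            - selection_rate(outcomes, groups, unprivileged))

def disparate_impact_ratio(outcomes, groups, privileged, unprivileged):
    """Ratio of unprivileged to privileged selection rate; the common
    'four-fifths rule' flags values below 0.8."""
    return (selection_rate(outcomes, groups, unprivileged)
            / selection_rate(outcomes, groups, privileged))

def reweighing_weights(outcomes, groups):
    """Pre-processing reweighing: weight each (group, outcome) cell by
    P(group) * P(outcome) / P(group, outcome), so that group membership
    and outcome become statistically independent in the weighted data."""
    n = len(outcomes)
    group_counts = Counter(groups)
    outcome_counts = Counter(outcomes)
    joint_counts = Counter(zip(groups, outcomes))
    return {(g, o): (group_counts[g] * outcome_counts[o]) / (n * joint_counts[(g, o)])
            for (g, o) in joint_counts}

# Toy data: group 'a' receives the favorable outcome 4/5 of the time, group 'b' only 1/5.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups   = ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b']

print(demographic_parity_diff(outcomes, groups, 'a', 'b'))  # ~0.6
print(disparate_impact_ratio(outcomes, groups, 'a', 'b'))   # 0.25, well below 0.8
print(reweighing_weights(outcomes, groups))
```

On this toy data, under-represented cells such as (group 'b', outcome 1) receive large weights (2.5) and over-represented cells small ones (0.625); applying the weights makes both groups' weighted selection rates equal (0.5 each), illustrating how a pre-processing method can remove the dependence between group membership and outcome before model training.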
Acknowledgements
The authors thank their colleagues Irene Dankwa-Mullan and Fernando Suarez Saiz at IBM Watson Health for their contributions to the planning and preparation of this chapter.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Park, Y., Singh, M., Koski, E., Sow, D.M., Scheufele, E.L., Bright, T.J. (2022). Algorithmic Fairness and AI Justice in Addressing Health Equity. In: Kiel, J.M., Kim, G.R., Ball, M.J. (eds) Healthcare Information Management Systems. Health Informatics. Springer, Cham. https://doi.org/10.1007/978-3-031-07912-2_15
DOI: https://doi.org/10.1007/978-3-031-07912-2_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-07911-5
Online ISBN: 978-3-031-07912-2
eBook Packages: Medicine, Medicine (R0)