We read with great interest the article by van Genderen et al. [1], which provides a contemporary and comprehensive overview of the potential of federated data access and data sharing in intensive care. Importantly, the authors list the perpetuation of biases encoded in clinical care practice as a major potential shortcoming. They further state that this could be mitigated because "ensuring an adequate representation of hospitals from various regions worldwide could lead to more diverse and inclusive health datasets."
We agree that the use of diverse and inclusive health datasets should be promoted as a necessary first step toward building fair machine learning algorithms. However, we do not believe this will be sufficient to overcome the deeply embedded biases of a medical knowledge system designed around a majoritized few. Even with high-quality data from intensive care units across the world, the social patterning of the data generation process can still produce artificial intelligence (AI) that preserves, and even scales, existing disparities in care, with resulting inequities in patient outcomes. There are numerous examples of data issues that stem from the social patterning of the data capture and data generation process (Fig. 1). These include, but are certainly not limited to, (1) the differential performance of medical devices used to measure physiologic signals across patient populations, of which the pulse oximeter is just the tip of the iceberg [2]; (2) variation in the frequency of testing across patient populations that is not explained by clinical factors [3]; and (3) disparities in the delivery of routine care that is typically assumed to be administered uniformly across patient populations [4]. These data issues are unlikely to be discovered even by teams in the dozens, or even the hundreds, because they require a level of cognitive diversity that federated learning does not leverage. It is unlikely that individual hospitals, especially outside of large academic centers, will have the large interdisciplinary teams necessary to understand the biases embedded in electronic health records.
Federated learning promises model development without data sharing in order to preserve patient privacy. This comes at a steep cost: undiscovered data issues lead to spurious associations that are learned by a model and incorporated into an algorithm. We believe that no one group is smart enough to discover all the data issues required to build fair models. If a group were to claim such a skill, it would be AI in action: arrogance and ignorance. The promise, and the hype, of AI will translate into huge dividends only if the intensive care community works with computer scientists, social scientists, patients, and their caregivers to understand the backstories of the data and to design an equity-focused curation and analytics pipeline.
References
1. van Genderen ME, Cecconi M, Jung C (2024) Federated data access and federated learning: improved data sharing, AI model development, and learning in intensive care. Intensive Care Med. https://doi.org/10.1007/s00134-024-07408-5
2. Wong A-KI, Charpignon M, Kim H et al (2021) Analysis of discrepancies between pulse oximetry and arterial oxygen saturation measurements by race and ethnicity and association with organ dysfunction and mortality. JAMA Netw Open 4:e2131674. https://doi.org/10.1001/jamanetworkopen.2021.31674
3. Teotia K, Jia Y, Link Woite N et al (2024) Variation in monitoring: glucose measurement in the ICU as a case study to preempt spurious correlations. J Biomed Inform 153:104643. https://doi.org/10.1016/j.jbi.2024.104643
4. Abdelmalek FM, Angriman F, Moore J et al (2024) Association between patient race and ethnicity and use of invasive ventilation in the United States. Ann Am Thorac Soc 21:287–295. https://doi.org/10.1513/AnnalsATS.202305-485OC
Funding
Open Access funding enabled and organized by Projekt DEAL. CMS is supported by the German Research Foundation (DFG)-funded UMEA Clinician Scientist Program, grant FU356/12–2. GP receives funding from the program "Netzwerke 2021", an initiative of the Ministry of Culture and Science of the State of North Rhine-Westphalia. The sole responsibility for the content of this publication lies with the authors. LAC is funded by the National Institutes of Health through R01 EB017205, DS I Africa TW012043 01 and Bridge2AI OT2OD032701, and the National Science Foundation through ITEST #2148451.
Ethics declarations
Conflicts of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
This comment refers to the article available online at https://doi.org/10.1007/s00134-024-07408-5.
Cite this article
Sauer, C.M., Pucher, G. & Celi, L.A. Why federated learning will do little to overcome the deeply embedded biases in clinical medicine. Intensive Care Med (2024). https://doi.org/10.1007/s00134-024-07491-8