We would like to thank Sauer and colleagues for their insightful comments [1] on our recent article in Intensive Care Medicine, in which we proposed a federated infrastructure for intensive care unit (ICU) data across Europe [2]. We agree that artificial intelligence (AI) could perpetuate biases in clinical medicine, but we believe federated learning (FL) plays an important role in fostering responsible and equitable AI.

FL allows real-time analysis of diverse, sensitive clinical data from multiple ICUs, which is crucial for critical decision-making and for broader healthcare scenarios such as pandemics. The decentralized nature of FL preserves privacy and enables the up-to-date use of dynamic AI models built on varied data sources. Institutions can contribute their unique expertise and data while retaining control over them. Establishing diverse health datasets from multiple ICUs is essential for responsible AI. Currently, 31% (152 out of 494) of ICU AI models are trained on large publicly available datasets, such as the MIMIC dataset, which may not adequately represent different subpopulations [3]. For example, fewer than 10% (18,719 out of 189,415) of the patients registered in the two largest ICU databases globally are African-American, and the majority are white male patients [4]. FL can enhance patient representation across Europe and beyond, fostering more diverse and inclusive health datasets. This cross-border data- and model-sharing framework standardizes data sharing and access, and serves as a fundamental data infrastructure for practical AI implementation in ICUs by enabling the comparability of clinical ICU data.
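
To make the decentralized principle concrete, the sketch below illustrates the federated averaging idea in a minimal form: each site runs a few training steps on data that never leave the institution, and only the resulting model parameters are pooled. The linear model, learning rate, and cohort sizes are illustrative assumptions, not the configuration of our proposed infrastructure.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One site's local training: a few gradient steps on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, sites):
    """Aggregate local updates, weighting each site by its sample count."""
    updates = [local_update(global_w, X, y) for X, y in sites]  # raw data stays local
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Three simulated ICUs with different cohort sizes sharing one true signal
rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0])
sites = []
for n in (120, 300, 80):
    X = rng.normal(size=(n, 3))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(3)
for _ in range(30):           # communication rounds
    w = federated_round(w, sites)
print(np.round(w, 2))         # approaches [0.5, -1.0, 2.0]
```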

While FL promotes inclusive health datasets, it cannot by itself fully address the complex issue of social patterning in data generation. Effective FL requires robust data governance, comprehensive data management policies, and consensus on adopting a standard data model. However, the crux of enhancing the fairness and safety of AI systems in healthcare lies in first establishing standards that promote informed decision-making. For example, the lack of ethnicity and socioeconomic data collection in most ICUs outside the United States of America makes it difficult to understand dataset composition and to detect potential biases in AI models.
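
As a hedged illustration of what adopting a standard data model involves in practice, the sketch below maps heterogeneous local field names and value codings onto a shared schema before federation. The field and value mappings are hypothetical stand-ins for a real common data model such as the OMOP CDM.

```python
# Hypothetical local-to-common field mappings for two sites
FIELD_MAPS = {
    "site_a": {"geschlecht": "sex", "alter_jahre": "age_years"},
    "site_b": {"gender": "sex", "age": "age_years"},
}
# Hypothetical value harmonization for coded fields
VALUE_MAPS = {"sex": {"m": "male", "w": "female", "M": "male", "F": "female"}}

def to_common_model(record, site):
    """Rename fields and normalize coded values for one patient record."""
    out = {}
    for local_field, value in record.items():
        common = FIELD_MAPS[site].get(local_field)
        if common is None:
            continue                      # field not part of the shared schema
        out[common] = VALUE_MAPS.get(common, {}).get(value, value)
    return out

print(to_common_model({"geschlecht": "w", "alter_jahre": 67}, "site_a"))
# {'sex': 'female', 'age_years': 67}
print(to_common_model({"gender": "M", "age": 71}, "site_b"))
# {'sex': 'male', 'age_years': 71}
```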

Initiatives like STANDING Together are therefore crucial in advocating for the collection and representation of diverse demographic data, fostering inclusivity and diversity in health datasets [5]. Uniform and consistent collection of protected personal characteristics in patient health records worldwide is essential. FL offers a unique opportunity to address this issue at a global scale by integrating ethical and legal data standards into the federated data infrastructure and by bringing together diverse datasets while preserving privacy.
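
If protected characteristics were coded uniformly across records, even a simple federated audit could quantify representation without any patient-level data leaving a site. The sketch below assumes a hypothetical shared "ethnicity" field and simulated counts; only aggregates are exchanged.

```python
from collections import Counter

def local_demographic_counts(records, field="ethnicity"):
    """Runs inside each institution; only aggregate counts leave the site."""
    return Counter(r[field] for r in records)

def federation_audit(site_counts):
    """Central step: pool per-site counts into overall population shares."""
    total = Counter()
    for counts in site_counts:
        total.update(counts)
    n = sum(total.values())
    return {group: round(count / n, 3) for group, count in total.items()}

# Simulated patient records at three ICUs (values are illustrative)
site_records = [
    [{"ethnicity": "white"}] * 54 + [{"ethnicity": "black"}] * 4 + [{"ethnicity": "asian"}] * 2,
    [{"ethnicity": "white"}] * 31 + [{"ethnicity": "black"}] * 6 + [{"ethnicity": "asian"}] * 3,
    [{"ethnicity": "white"}] * 15 + [{"ethnicity": "black"}] * 2 + [{"ethnicity": "asian"}] * 4,
]
site_counts = [local_demographic_counts(r) for r in site_records]
print(federation_audit(site_counts))
# {'white': 0.826, 'black': 0.099, 'asian': 0.074} -> flags under-representation
```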

Maintaining transparency at all stages of model development and deployment is essential to build an ecosystem that uses FL to foster responsible AI. This includes setting data standards for representing diverse demographic data, creating diverse datasets from multiple institutions, and transparently documenting the specifics of the AI model before clinical deployment [6].
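
One way to operationalize such transparent documentation is a machine-readable record, in the spirit of "model cards", stored and versioned alongside the trained model. The fields and values below are a hypothetical subset for illustration, not an established schema.

```python
import json

# Minimal, illustrative "model card" recorded alongside a trained model;
# every name and number below is a hypothetical placeholder.
model_card = {
    "model": "icu-mortality-risk-v0.3",           # hypothetical model name
    "intended_use": "decision support, not autonomous triage",
    "training_data": {
        "sites": 14,
        "date_range": "2018-2023",
        "demographics_reported": ["age", "sex", "ethnicity"],
    },
    "evaluation": {
        "overall_auroc": 0.84,
        "subgroup_auroc": {"female": 0.83, "male": 0.85},  # per-subgroup checks
    },
    "known_limitations": [
        "ethnicity not collected at sites outside the USA",
        "not validated on pediatric patients",
    ],
}
print(json.dumps(model_card, indent=2))
```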

In conclusion, while FL can facilitate inclusive global data collection and AI integration, addressing deep-rooted biases in medical AI requires an in-depth, multifaceted strategy.