
Federated Learning to Improve Counterfactual Explanations for Sepsis Treatment Prediction

Part of the Lecture Notes in Computer Science book series (LNAI, volume 13897)


In recent years, artificial intelligence has obtained remarkable results in clinical decision support systems (CDSSs), while explainable artificial intelligence (XAI) has improved the interpretability of these models. This, in turn, fosters adoption by medical personnel and improves the trustworthiness of CDSSs. Counterfactual explanations are one XAI technique particularly suited to the healthcare domain, owing to their ease of interpretation, even for less technically proficient staff. However, generating high-quality counterfactuals relies on generative models for guidance, and training such models requires an amount of data beyond the means of ordinary hospitals. In this paper, we therefore propose to use federated learning to allow multiple hospitals to jointly train such generative models while maintaining full data privacy. Using various metrics, we demonstrate the superiority of our approach over locally generated counterfactuals on a CDSS for sepsis treatment prescription. Moreover, we show that generative models for counterfactual generation trained via federated learning in a suitable environment perform only marginally worse than centrally trained ones while offering the benefit of data privacy preservation.
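The two ingredients the abstract combines, federated aggregation of locally trained models and generative-model-guided counterfactual search, can be sketched in a few lines. The sketch below is purely illustrative and is not the authors' implementation: `fed_avg` is plain federated averaging (FedAvg), and `counterfactual` is a Wachter-style gradient search against a simple logistic surrogate model standing in for the paper's generative guidance; all names and the model choice are assumptions.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """FedAvg: size-weighted average of per-client parameters.
    Only parameters leave the clients, never raw patient data."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.5, steps=200):
    """Wachter-style counterfactual for a logistic model f(x) = sigmoid(w.x + b):
    minimise (f(x') - target)^2 + lam * ||x' - x||^2 by gradient descent.
    The proximity term keeps the counterfactual close to the original input."""
    xp = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(xp @ w + b)))           # current prediction
        # gradient of the prediction loss plus the proximity penalty
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (xp - x)
        xp -= lr * grad
    return xp
```

For example, averaging two clients' weights with sizes 1 and 3 weights the larger client three times as heavily, and running `counterfactual` on an input classified below 0.5 returns a nearby point classified above 0.5. In a federated setting as described in the abstract, the model guiding the search (here the logistic surrogate) would itself be the jointly trained, privacy-preserving component.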


  • Counterfactual explanations
  • Federated learning
  • Generative models
  • Sepsis treatment





This research was partially funded by the German Federal Ministry of Health as part of the KINBIOTICS project.

Author information

Corresponding author

Correspondence to Christoph Düsing.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Düsing, C., Cimiano, P. (2023). Federated Learning to Improve Counterfactual Explanations for Sepsis Treatment Prediction. In: Juarez, J.M., Marcos, M., Stiglic, G., Tucker, A. (eds.) Artificial Intelligence in Medicine. AIME 2023. Lecture Notes in Computer Science, vol. 13897. Springer, Cham.


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-34343-8

  • Online ISBN: 978-3-031-34344-5

  • eBook Packages: Computer Science (R0)