Abstract
In recent years, we have witnessed artificial intelligence obtaining remarkable results in clinical decision support systems (CDSSs), and explainable artificial intelligence (XAI) improving the interpretability of these models. In turn, this fosters adoption by medical personnel and improves the trustworthiness of CDSSs. Among XAI techniques, counterfactual explanations have proven particularly well suited to the healthcare domain due to their ease of interpretation, even for less technically proficient staff. However, the generation of high-quality counterfactuals relies on generative models for guidance. Unfortunately, training such models requires amounts of data that are beyond the means of individual hospitals. In this paper, we therefore propose to use federated learning to allow multiple hospitals to jointly train such generative models while maintaining full data privacy. We demonstrate the superiority of our approach over locally generated counterfactuals on a CDSS for sepsis treatment prescription using various metrics. Moreover, we show that generative models for counterfactual generation trained via federated learning in a suitable environment perform only marginally worse than centrally trained ones, while offering the benefit of data privacy preservation.
Keywords
- Counterfactual explanations
- Federated learning
- Generative models
- Sepsis treatment
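The joint training scheme described in the abstract, in which hospitals train a shared model without exchanging patient data, typically follows federated averaging (FedAvg): each client trains locally on its private data, and a server averages the resulting weights. The following is a minimal sketch of that aggregation loop; the function names and the toy local-training objective are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local training step on its private data.
    A toy objective (move weights toward the client's data mean)
    stands in for real generative-model training."""
    w = weights.copy()
    for _ in range(epochs):
        grad = w - data.mean(axis=0)  # gradient of ||w - mean(data)||^2 / 2
        w -= lr * grad
    return w

def fed_avg(global_weights, client_datasets, rounds=10):
    """Federated averaging: each round, every client trains locally,
    then the server averages client weights (weighted by dataset size).
    Raw data never leaves the clients."""
    w = global_weights
    sizes = np.array([len(d) for d in client_datasets], dtype=float)
    for _ in range(rounds):
        client_weights = [local_update(w, d) for d in client_datasets]
        w = np.average(client_weights, axis=0, weights=sizes)
    return w

# Three "hospitals", each with a private toy dataset of 50 patients x 4 features
rng = np.random.default_rng(0)
clients = [rng.normal(loc=m, size=(50, 4)) for m in (0.0, 1.0, 2.0)]
w = fed_avg(np.zeros(4), clients, rounds=50)
print(w)  # converges toward the weighted mean of the client means (~1.0)
```

With equal client sizes this reduces to a plain average, so the global weights converge to the mean of the clients' local optima, illustrating how heterogeneous (non-IID) hospital data pulls the shared model toward a compromise solution.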
Acknowledgement
This research was partially funded by the German Federal Ministry of Health as part of the KINBIOTICS project.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Düsing, C., Cimiano, P. (2023). Federated Learning to Improve Counterfactual Explanations for Sepsis Treatment Prediction. In: Juarez, J.M., Marcos, M., Stiglic, G., Tucker, A. (eds) Artificial Intelligence in Medicine. AIME 2023. Lecture Notes in Computer Science, vol. 13897. Springer, Cham. https://doi.org/10.1007/978-3-031-34344-5_11
Print ISBN: 978-3-031-34343-8
Online ISBN: 978-3-031-34344-5