External Validation of a “Black-Box” Clinical Predictive Model in Nephrology: Can Interpretability Methods Help Illuminate Performance Differences?
The number of machine learning clinical prediction models being published is rising, especially as new fields of application are being explored in medicine. Notwithstanding these advances, only a few such models are actually deployed in clinical contexts, owing to a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for the prediction of acute kidney injury in cardiac surgery patients when applied to an external cohort of a German research hospital. To help account for the performance differences observed, we employed interpretability methods that allowed experts to scrutinize model behavior at both the global and the local level, making it possible to gain further insight into why the model did not behave as expected on the validation cohort. We argue that practitioners should consider such methods as an additional tool to help explain performance differences and to inform model updates in validation studies.
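The global and local interpretability analyses described above can be sketched in code. The following is a minimal, illustrative example only: the random forest, synthetic data, and perturbation-based local probe are assumptions standing in for the study's actual model, cohort variables, and LIME-style explainers, none of which are specified in this abstract.

```python
# Hedged sketch: global and local explanations for a "black-box"
# clinical classifier, assuming scikit-learn is available.
# Data and features are synthetic placeholders, NOT the study's cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global view: permutation importance ranks features by how much
# shuffling each one degrades the model's predictive performance.
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(global_imp.importances_mean)[::-1]

# Local view: for one "patient" (row), probe how the predicted risk
# shifts under small perturbations of that row -- a simplified
# stand-in for LIME-style local surrogate explanations.
x0 = X[0]
perturbed = x0 + rng.normal(scale=0.1, size=(100, X.shape[1]))
local_preds = model.predict_proba(perturbed)[:, 1]

print("most important feature index (global):", ranking[0])
print("local prediction spread (std):", local_preds.std())
```

Comparing such global rankings and local explanations between the development and validation cohorts is one way domain experts can spot where a model relies on features that behave differently in the external population.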
Keywords: Clinical predictive modeling · Nephrology · Validation · Interpretability methods
Parts of this work were generously supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 780495.