
Deep Learning and Explainable AI in Healthcare Using EHR

  • Sujata Khedkar (email author)
  • Priyanka Gandhi
  • Gayatri Shinde
  • Vignesh Subramanian
Chapter
Part of the Studies in Big Data book series (SBD, volume 68)

Abstract

With time, Artificial Intelligence (AI) has proved to be of great assistance in the medical field. Rapid advances have made technology available that can predict the risk of many different diseases. A patient's Electronic Health Record (EHR) contains all of the medical data recorded for that patient at each visit. Predictive models such as random forests and boosted trees provide high accuracy but not end-to-end interpretability, while models such as Naive Bayes, logistic regression, and single decision trees are intelligible but less accurate. These interpretable models also fail to capture the temporal relationships among the attributes present in EHR data, so model accuracy is compromised. Interpretability is essential in critical healthcare applications: it provides medical personnel with explanations that build trust in machine learning systems. This chapter presents the design and implementation of an explainable deep learning system for healthcare using EHR data. It discusses the use of an attention mechanism and a Recurrent Neural Network (RNN) on EHR data to predict a patient's risk of heart failure and to provide insight into the key diagnoses that led to the prediction. The patient's medical history is given as a sequential input to the RNN, which predicts the heart failure risk and provides explainability along with it; this represents an ante-hoc explainability model. A neural network with two levels of attention is trained to detect the visits in a patient's history that are most influential and significant for understanding the reasons behind a prediction made on the patient's medical data. Processing the visits in reverse time order, so that the most recent visit is considered first, proves beneficial. When a prediction is made, the visit-level contribution is prioritized, i.e., which visit, each consisting of multiple diagnosis codes, contributes most to the final prediction. This model can help medical personnel predict a patient's heart failure risk based on the diseases diagnosed in the EHR. The model is then analyzed with local interpretable model-agnostic explanations (LIME), which identify the features that contribute positively and negatively to the heart failure risk.
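The two-level attention idea described above (visit-level scalar weights plus code-level gates, combined so that each visit's share of the final risk score can be read off directly) can be sketched as follows. This is a minimal numpy illustration, not the chapter's implementation: the dimensions, the random weights, and the simple linear scorer standing in for the reverse-time attention RNN are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): 3 visits, 5 diagnosis codes, 4-dim embeddings.
n_visits, n_codes, emb_dim = 3, 5, 4

# Multi-hot visit vectors: which diagnosis codes were recorded at each visit.
visits = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0],
], dtype=float)

W_emb = rng.normal(size=(n_codes, emb_dim))   # code embedding matrix
v = visits @ W_emb                            # visit embeddings v_i

# Visit-level attention alpha: one scalar weight per visit. (In the
# attention RNN this scorer runs over visits in reverse time order;
# here a plain linear scorer stands in for it.)
w_alpha = rng.normal(size=emb_dim)
scores = v @ w_alpha
alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over visits

# Code-level attention beta: elementwise gates on each visit embedding.
W_beta = rng.normal(size=(emb_dim, emb_dim))
beta = np.tanh(v @ W_beta)

# Context vector and logistic heart-failure risk prediction.
w_out = rng.normal(size=emb_dim)
weighted = alpha[:, None] * beta * v          # each visit's weighted embedding
context = weighted.sum(axis=0)
risk = 1.0 / (1.0 + np.exp(-(context @ w_out)))

# Because the model is linear in the weighted visit embeddings, the
# pre-sigmoid logit decomposes exactly into per-visit contributions.
contrib = weighted @ w_out
print("visit attention:", np.round(alpha, 3))
print("predicted risk:", round(float(risk), 3))
print("most influential visit:", int(np.argmax(np.abs(contrib))))
```

The key property is the last step: the per-visit contributions sum exactly to the logit, so "which visit drove this prediction" is answered by inspection rather than by a post-hoc approximation.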
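The LIME step described above attributes positive and negative feature contributions to a single prediction. A minimal sketch of that idea, without the `lime` library, is shown below: perturb the input around the patient being explained, weight samples by proximity, and fit a weighted linear surrogate whose coefficients are the local contributions. The black-box model, feature count, and proximity kernel here are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box risk model over 4 binary diagnosis features.
w_true = np.array([1.5, -2.0, 0.5, 0.0])

def black_box(x):
    return 1.0 / (1.0 + np.exp(-(x @ w_true)))

x0 = np.array([1.0, 1.0, 0.0, 1.0])   # the patient to explain

# LIME-style procedure: sample binary perturbations, query the black box,
# and weight each sample by its proximity to x0.
Z = rng.integers(0, 2, size=(500, 4)).astype(float)
y = black_box(Z)
prox = np.exp(-np.abs(Z - x0).sum(axis=1))   # simple proximity kernel

# Weighted least-squares fit of a linear surrogate (with intercept):
# solve (A^T W A) coef = A^T W y.
A = np.hstack([Z, np.ones((len(Z), 1))])
Wm = np.diag(prox)
coef, *_ = np.linalg.lstsq(A.T @ Wm @ A, A.T @ Wm @ y, rcond=None)

# Surrogate coefficients = local feature contributions: positive values
# push the predicted risk up, negative values push it down.
contrib = coef[:4]
print("local contributions:", np.round(contrib, 3))
```

In this toy setup the recovered signs match the black box's true weights: the first feature pushes risk up and the second pushes it down, which is exactly the kind of per-feature explanation LIME surfaces for the heart failure model.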

Keywords

Heart failure · Predictive modeling · Deep learning · RNN · LIME · Explainability · Interpretability · Attention


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Sujata Khedkar¹ (email author)
  • Priyanka Gandhi¹
  • Gayatri Shinde¹
  • Vignesh Subramanian¹
  1. Department of Computer Engineering, VESIT, Mumbai, India
