Explainability and Interpretability: Keys to Deep Medicine

Chapter in: Explainable AI in Healthcare and Medicine

Part of the book series: Studies in Computational Intelligence (SCI, volume 914)

Abstract

Deep medicine, which aims to push the boundaries of artificial intelligence to reshape health and medical intelligence and decision making, is a promising concept that is gaining attention over traditional EMR-based medical information management systems. The success of intelligent solutions in health and medicine depends on the degree to which they support interoperability, to allow consistent integration of different systems and data sources, and explainability, to make their decisions understandable, interpretable, and justifiable to humans.
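As a concrete, hedged illustration of the explainability requirement (not taken from the chapter itself), the sketch below fits a simple logistic regression risk model on synthetic, hypothetical clinical features and decomposes one prediction into per-feature contributions; for a linear model, coefficient times feature value is an exact local explanation of the log-odds.

# Minimal illustrative sketch (assumption: not from the chapter). Feature
# names and data are hypothetical; the point is that a linear model's
# prediction can be decomposed into per-feature log-odds contributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["age", "hba1c", "systolic_bp", "prior_admissions"]

# Synthetic cohort: outcome risk driven mainly by HbA1c and prior admissions.
X = rng.normal(size=(500, len(features)))
y = (0.8 * X[:, 1] + 1.2 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Local explanation for one patient: the contribution of each feature to the
# log-odds equals coefficient * feature value (exact for linear models).
patient = X[0]
contributions = model.coef_[0] * patient
for name, value, contrib in zip(features, patient, contributions):
    print(f"{name:>18}: value={value:+.2f}  log-odds contribution={contrib:+.2f}")
print(f"predicted risk: {model.predict_proba(patient.reshape(1, -1))[0, 1]:.2f}")

More sophisticated post-hoc explanation methods (for example, Shapley-value attributions) generalize this idea to non-linear models, but the goal is the same: making the model's decision understandable and justifiable to a clinician.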



Author information

Corresponding author

Correspondence to Arash Shaban-Nejad.



Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Shaban-Nejad, A., Michalowski, M., Buckeridge, D.L. (2021). Explainability and Interpretability: Keys to Deep Medicine. In: Shaban-Nejad, A., Michalowski, M., Buckeridge, D.L. (eds) Explainable AI in Healthcare and Medicine. Studies in Computational Intelligence, vol 914. Springer, Cham. https://doi.org/10.1007/978-3-030-53352-6_1
