An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients


Abstract

In recent years, artificial intelligence-based computer-aided diagnosis (CAD) systems for hepatitis have made great progress. In particular, complex models such as deep learning achieve better performance than simple ones because they capture the nonlinear structure of real-world clinical data. However, a complex model acts as a black box: it does not reveal why it makes a certain decision, which breeds distrust among clinicians. To address this issue, an explainable artificial intelligence (XAI) framework is proposed in this paper to provide global and local interpretation of the auxiliary diagnosis of hepatitis while retaining good prediction performance. First, a public hepatitis classification benchmark from UCI is used to test the feasibility of the framework. Then, both transparent and black-box machine learning models are employed to forecast hepatitis deterioration. Logistic regression (LR), decision tree (DT) and k-nearest neighbor (KNN) are chosen as the transparent models, while eXtreme Gradient Boosting (XGBoost), support vector machine (SVM) and random forest (RF) are selected as the black-box models. Finally, SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME) and Partial Dependence Plots (PDP) are utilized to improve the interpretability of the liver disease models. The experimental results show that the complex models outperform the simple ones, with the RF achieving the highest accuracy (91.9%) among all the models. The proposed framework, combining global and local interpretation methods, improves the transparency of complex models and gives insight into their judgments, thereby guiding the treatment strategy and improving the prognosis of hepatitis patients. In addition, the framework could assist clinical data scientists in designing a more appropriate CAD structure.
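
For readers who want to reproduce the spirit of this pipeline, the sketch below shows one way to combine the pieces the abstract names: a random forest trained on the UCI hepatitis benchmark, then probed globally and locally with SHAP, LIME and a partial dependence plot. This is a minimal illustration under stated assumptions, not the authors' implementation; the file name, column names ("Class", "BILIRUBIN"), class labels and hyperparameters are placeholders we introduce for the example.

    # A minimal, illustrative sketch of the pipeline described above (not the
    # authors' code): fit a random forest on the UCI Hepatitis data, then probe
    # it with SHAP, LIME and a partial dependence plot. The file name, column
    # names ("Class", "BILIRUBIN") and hyperparameters are assumptions.
    import pandas as pd
    import shap
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import PartialDependenceDisplay
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hypothetical local copy of the UCI benchmark; '?' marks missing values.
    df = pd.read_csv("hepatitis.csv", na_values="?").dropna()
    X, y = df.drop(columns="Class"), df["Class"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)

    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, rf.predict(X_test)))

    # Global and local interpretation with SHAP. For tree ensembles,
    # TreeExplainer computes exact Shapley values; with older shap releases
    # shap_values() returns one array per class, hence the [1] indexing.
    explainer = shap.TreeExplainer(rf)
    shap_values = explainer.shap_values(X_test)
    shap.summary_plot(shap_values[1], X_test)  # global feature importance

    # Local surrogate explanation with LIME for a single patient.
    lime_explainer = LimeTabularExplainer(
        X_train.values, feature_names=list(X.columns),
        class_names=["die", "live"], mode="classification")
    print(lime_explainer.explain_instance(
        X_test.values[0], rf.predict_proba).as_list())

    # Global marginal effect of one clinical feature via a PDP.
    PartialDependenceDisplay.from_estimator(rf, X_test, features=["BILIRUBIN"])

TreeExplainer also covers XGBoost directly; for a non-tree black box such as the SVM, shap.KernelExplainer is the model-agnostic fallback.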



Funding

This work was supported by the Science and Technology Planning Project of Guangzhou (No. 201804010280) and the Foundation for Young Innovative Talents in Higher Education of Guangdong, China (No. 2017KQNCX140).

Author information


Contributions

JFP conceived and designed the study; JFP, KQZ, and MZ drafted the manuscript. JFP, KQZ, MZ, YT, XYZ, FFZ and JX critically revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Junfeng Peng.

Ethics declarations

Ethics approval and consent to participate

This article does not contain any studies with human participants or animals performed by any of the authors.

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Junfeng Peng, Kaiqiang Zou and Mi Zhou contributed equally to this work and share first authorship.

This article is part of the Topical Collection on Systems-Level Quality Improvement


About this article


Cite this article

Peng, J., Zou, K., Zhou, M. et al. An Explainable Artificial Intelligence Framework for the Deterioration Risk Prediction of Hepatitis Patients. J Med Syst 45, 61 (2021). https://doi.org/10.1007/s10916-021-01736-5

