Explainable Artificial Intelligence for Deep Learning Models in Diagnosing Brain Tumor Disorder

  • Conference paper
  • First Online:
Micro-Electronics and Telecommunication Engineering (ICMETE 2023)

Part of the book series: Lecture Notes in Networks and Systems ((LNNS,volume 894))

Abstract

Deep neural networks (DNNs) have shown great potential in diagnosing brain tumor disorders, but their decision-making processes can be difficult to interpret, raising concerns about their reliability and safety. This paper presents an overview of explainable artificial intelligence (XAI) techniques that have been developed to improve the interpretability and transparency of DNNs and that have been applied to diagnostic systems for such disorders. Using an XAI framework in combination with deep learning models, the authors diagnosed brain tumors with a convolutional neural network and interpreted its outcomes with numerical gradient-weighted class activation mapping (numGrad-CAM-CNN), achieving a highest accuracy of 97.11%. XAI can thus help healthcare professionals understand how a DNN arrived at a diagnosis, providing insight into the model's reasoning and decision-making processes. XAI techniques can also help identify biases in the data used to train the model and address potential ethical concerns. However, challenges remain in implementing XAI techniques in diagnostic systems, including the need for large, diverse datasets and the development of user-friendly interfaces. Despite these challenges, the potential benefits for improving patient outcomes and increasing trust in AI-based medical systems make this a promising area of research.
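The numGrad-CAM-CNN approach described above builds on the Grad-CAM family of methods, which weight a convolutional layer's feature maps by the gradients of the class score, pooled over the spatial dimensions, and pass the weighted sum through a ReLU to obtain a class-discriminative heatmap. The following is a minimal NumPy sketch of standard Grad-CAM for illustration only; it is not the authors' numGrad-CAM implementation, and the array shapes and function name are assumptions:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap from a convolutional layer.

    feature_maps: array of shape (H, W, K), the layer activations A^k
    gradients:    array of shape (H, W, K), dY_c / dA^k for class c
    Returns an (H, W) heatmap normalised to [0, 1].
    """
    # alpha_k: global-average-pool the gradients over the spatial dims
    weights = gradients.mean(axis=(0, 1))                 # shape (K,)
    # weighted combination of feature maps, followed by ReLU
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)
    # normalise to [0, 1] for overlaying on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the feature maps and gradients would come from the final convolutional layer of the diagnostic CNN via the deep learning framework's automatic differentiation, and the resulting heatmap would be upsampled to the MRI slice's resolution for visual inspection.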



Author information

Corresponding author: Shalli Rani


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Lamba, K., Rani, S. (2024). Explainable Artificial Intelligence for Deep Learning Models in Diagnosing Brain Tumor Disorder. In: Sharma, D.K., Peng, SL., Sharma, R., Jeon, G. (eds) Micro-Electronics and Telecommunication Engineering. ICMETE 2023. Lecture Notes in Networks and Systems, vol 894. Springer, Singapore. https://doi.org/10.1007/978-981-99-9562-2_13

  • DOI: https://doi.org/10.1007/978-981-99-9562-2_13

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9561-5

  • Online ISBN: 978-981-99-9562-2

  • eBook Packages: Engineering (R0)
